CN113706394A - Image processing method, apparatus and storage medium - Google Patents

Image processing method, apparatus and storage medium

Info

Publication number
CN113706394A
CN113706394A (application CN202010432614.2A)
Authority
CN
China
Prior art keywords
image
sub
frequency component
target image
processing
Prior art date
Legal status
Pending
Application number
CN202010432614.2A
Other languages
Chinese (zh)
Inventor
周津同
王群
Current Assignee
Beijing Fad Technology Co ltd
Original Assignee
Beijing Fad Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Fad Technology Co ltd
Priority to CN202010432614.2A
Publication of CN113706394A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction


Abstract

An embodiment of the present application provides an image processing method, an image processing apparatus, and a storage medium. The image processing method includes: performing image filtering on a target image according to a preset image filtering algorithm to obtain a high-frequency component, a medium-frequency component and a low-frequency component of the target image; performing first processing on the target image to obtain a first processed sub-image, and multiplying the first processed sub-image by the high-frequency component of the target image to obtain a first sub-image; performing dynamic range mapping on the medium-frequency component of the target image to obtain a second sub-image; performing second processing on the low-frequency component of the target image to obtain a second processed sub-image, performing third processing on the low-frequency component to obtain a third processed sub-image, and synthesizing the second and third processed sub-images in a first preset proportion to obtain a third sub-image; and synthesizing the first, second and third sub-images in a second preset proportion. The image processing method provided by the embodiment of the application makes the details of the target image more prominent and its signal-to-noise ratio higher.

Description

Image processing method, apparatus and storage medium
Technical Field
The present disclosure relates to the field of digital image processing, and in particular, to an image processing method, an image processing apparatus, and a storage medium.
Background
In nature, every object with a temperature above absolute zero constantly emits infrared radiation. By collecting and measuring this radiant energy with an infrared thermal imaging system, an infrared image corresponding to the scene temperature can be formed. However, infrared images generated by such systems suffer from high noise, low contrast, strong non-uniformity and poor spatial resolution, and need further processing to meet users' actual requirements.
Disclosure of Invention
In view of the above, embodiments of the present application provide an image processing method to overcome the above problems of the prior art.
In a first aspect, an embodiment of the present application provides an image processing method, including:
performing image filtering on the target image according to a preset image filtering algorithm to obtain a high-frequency component, a medium-frequency component and a low-frequency component of the target image;
carrying out first processing on the target image to obtain a first processed sub-image, and multiplying the first processed sub-image by the high-frequency component of the target image to obtain a first sub-image;
carrying out dynamic range mapping on the intermediate frequency component of the target image to obtain a second sub-image;
carrying out second processing on the low-frequency component of the target image to obtain a second processed sub-image, carrying out third processing on the low-frequency component of the target image to obtain a third processed sub-image, and synthesizing the second processed sub-image and the third processed sub-image according to a first preset proportion to obtain a third sub-image;
and synthesizing the first sub-image, the second sub-image and the third sub-image in a second preset proportion.
Optionally, in an embodiment of the present application, performing a first processing on the target image to obtain a first processed sub-image, and multiplying the first processed sub-image by a high-frequency component of the target image to obtain a first sub-image includes:
and performing mean square error calculation on the target image to obtain a first processed sub-image, and multiplying the first processed sub-image by the high-frequency component of the target image to obtain a first sub-image.
Optionally, in an embodiment of the present application, performing dynamic range mapping on the intermediate frequency component of the target image to obtain a second sub-image includes:
acquiring the dynamic range of the intermediate frequency component of the target image;
and establishing a mapping relation between the dynamic range of the intermediate frequency component and a first preset interval, and mapping the dynamic range of the intermediate frequency component to the first preset interval.
Optionally, in an embodiment of the present application, performing a second processing on the low-frequency component of the target image to obtain a second processed sub-image includes:
and carrying out double-platform histogram equalization on the low-frequency component of the target image to obtain a second processed sub-image.
Optionally, in an embodiment of the present application, the third processing on the low-frequency component of the target image to obtain a third processed sub-image includes:
and carrying out histogram bidirectional equalization on the low-frequency component of the target image to obtain a third processed sub-image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
an image decomposition module, a first calculation module, a second calculation module, a third calculation module and an image synthesis module;
the image decomposition module is used for carrying out image filtering on the target image according to a preset image filtering algorithm to obtain a high-frequency component, a medium-frequency component and a low-frequency component of the target image;
the first calculation module is used for carrying out first processing on the target image to obtain a first processed sub-image, and multiplying the first processed sub-image by the high-frequency component of the target image to obtain a first sub-image;
the second calculation module is used for carrying out dynamic range mapping on the intermediate frequency component of the target image to obtain a second sub-image;
the third calculation module is used for carrying out second processing on the low-frequency component of the target image to obtain a second processed sub-image, carrying out third processing on the low-frequency component of the target image to obtain a third processed sub-image, and synthesizing the second processed sub-image and the third processed sub-image according to the first preset proportion to obtain a third sub-image;
the image synthesis module is used for synthesizing the first sub-image, the second sub-image and the third sub-image in a second preset proportion.
Optionally, in an embodiment of the present application, the first calculating module being configured to perform first processing on the target image to obtain a first processed sub-image and multiply the first processed sub-image by the high-frequency component of the target image to obtain a first sub-image includes:
and performing mean square error calculation on the target image to obtain a first processed sub-image, and multiplying the first processed sub-image by the high-frequency component of the target image to obtain a first sub-image.
Optionally, in an embodiment of the present application, the second calculating module is configured to perform dynamic range mapping on the intermediate frequency component of the target image to obtain a second sub-image, and includes:
acquiring the dynamic range of the intermediate frequency component of the target image;
and establishing a mapping relation between the dynamic range of the intermediate frequency component and a first preset interval, and mapping the dynamic range of the intermediate frequency component to the first preset interval.
Optionally, in an embodiment of the present application, the third calculating module is configured to perform second processing on the low-frequency component of the target image to obtain a second processed sub-image, and includes:
and carrying out double-platform histogram equalization on the low-frequency component of the target image to obtain a second processed sub-image.
Optionally, in an embodiment of the present application, the third calculating module is configured to perform third processing on the low-frequency component of the target image to obtain a third processed sub-image, and includes:
and carrying out histogram bidirectional equalization on the low-frequency component of the target image to obtain a third processed sub-image.
In a third aspect, an embodiment of the present application provides a storage medium on which a computer program is stored; when the computer program is executed by a processor, the method according to any one of the first aspect is implemented.
According to the image processing method provided by the embodiment of the application, the target image is decomposed into the high-frequency component, the medium-frequency component and the low-frequency component, the three components are respectively processed, and finally the processed results of the three components are superposed to obtain the processed target image, so that the details of the processed target image are more prominent, and the signal-to-noise ratio is higher.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
fig. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure;
fig. 2 is a flowchart of another image processing method provided in an embodiment of the present application;
fig. 3 is a schematic diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
The following further describes specific implementation of the embodiments of the present invention with reference to the drawings.
Example one
Referring to fig. 1, in a first aspect, an embodiment of the present application provides an image processing method, including:
s101: performing image filtering on the target image according to a preset image filtering algorithm to obtain a high-frequency component, a medium-frequency component and a low-frequency component of the target image;
Specifically, the function of the target image in the spatial domain represents the gray-level distribution of the target image. Preferably, this spatial-domain function can be transformed into a frequency-domain function through a Fourier transform, and the frequency-domain function is then filtered according to a preset image filtering algorithm to obtain the high-frequency, medium-frequency and low-frequency components of the target image. The high-frequency component is the higher-frequency part of the frequency-domain function and corresponds to the regions where the gray value of the target image changes sharply, i.e., the edges and details of the image; the medium-frequency component is the middle part and corresponds to the regions where the gray value changes moderately, i.e., the basic structure of the image; the low-frequency component is the lower-frequency part and corresponds to the regions where the gray value changes slowly, i.e., the background of the image. It should be noted that the boundaries between the higher, medium and lower frequency bands may be preset manually according to actual needs, and the preset image filtering algorithm may include such manually preset boundaries.
For example, in the frequency-domain function, the part of the target image above a frequency a may be taken as the high-frequency component, the part below a but above b as the medium-frequency component, and the part below b as the low-frequency component, where a and b are values preset manually according to actual needs. Of course, this is merely an example and does not limit the present application.
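As a sketch only, the band split described above can be written with ideal circular masks in the Fourier domain; the mask shape and the cut-off radii a and b are illustrative assumptions, not the patent's prescribed filter:

```python
import numpy as np

def decompose_bands(image, a, b):
    """Split an image into high-, medium- and low-frequency components
    using ideal band masks in the Fourier domain (a > b are hypothetical
    cut-off radii; the patent leaves the filter and thresholds open)."""
    f = np.fft.fftshift(np.fft.fft2(image.astype(np.float64)))
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    radius = np.hypot(y - rows // 2, x - cols // 2)  # distance from the DC term

    low = np.real(np.fft.ifft2(np.fft.ifftshift(f * (radius <= b))))
    mid = np.real(np.fft.ifft2(np.fft.ifftshift(f * ((radius > b) & (radius <= a)))))
    high = np.real(np.fft.ifft2(np.fft.ifftshift(f * (radius > a))))
    return high, mid, low
```

Because the three masks partition the spectrum exactly, the components sum back to the original image.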
S102: carrying out first processing on the target image to obtain a first processed sub-image, and multiplying the first processed sub-image by the high-frequency component of the target image to obtain a first sub-image;
Here, in detail, after the first processing is performed on the target image, a first processed sub-image representing the gray-level distribution of each part of the target image is obtained. For example, an M × N mean-square-error calculation is performed on the spatial-domain function of the target image: the target image is divided into M × N parts, where M and N can be set manually according to actual needs; the mean square error of the gray values of each part is calculated, and the gray values of all pixels of that part are replaced by the calculated mean square error. The mean square errors of the M × N parts serve as adaptive gain coefficients, i.e., the correspondence between each pixel of the first-processed target image and its mean square error is taken as the spatial-domain function of the first processed sub-image. Multiplying the spatial-domain function of the high-frequency component of the target image by that of the first processed sub-image gives the spatial-domain function of the first sub-image. In this way the high-frequency component of the target image receives a further gain that differs from region to region, i.e., an adaptive gain, so that the edges and details of the processed target image are more prominent.
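The blockwise mean-square-error gain step might be sketched as follows; the block counts M and N and the use of the plain per-block mean square deviation are illustrative assumptions:

```python
import numpy as np

def adaptive_gain_highpass(image, high, M=4, N=4):
    """Blockwise mean-square-deviation gain map (a sketch of the M x N
    'mean square error' step; M and N are placeholders). Every pixel of
    the gain map holds the variance-like statistic of its block; the map
    then multiplies the high-frequency component element-wise."""
    gain = np.empty_like(image, dtype=np.float64)
    rows, cols = image.shape
    ys = np.linspace(0, rows, M + 1, dtype=int)
    xs = np.linspace(0, cols, N + 1, dtype=int)
    for i in range(M):
        for j in range(N):
            block = image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            # mean square deviation of the block's gray values
            gain[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] = np.mean((block - block.mean()) ** 2)
    return gain * high
```

Flat blocks produce zero gain, so uniform regions contribute no amplified high-frequency noise.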
S103, carrying out dynamic range mapping on the intermediate frequency component of the target image to obtain a second sub-image;
Specifically, the dynamic range of an image is the range from the pixel value of its darkest pixel to that of its brightest pixel: the darkest pixel value is the minimum of the dynamic range and the brightest pixel value its maximum. Preferably, the pixels of the intermediate frequency component of the target image can be counted in order of increasing pixel value and the value of the c-th pixel taken as the minimum of the dynamic range; likewise, counting in order of decreasing pixel value, the value of the d-th pixel is taken as the maximum. Here c and d can be set manually according to actual needs and are generally both about 100. This reduces the error of the dynamic range of the intermediate frequency component and facilitates subsequent processing.
Dynamic range mapping means establishing a mapping between the pixel-value interval [c, d] and [0,255] and mapping every pixel whose value lies in [c, d] to the interval [0,255], i.e., converting each such pixel value into a unique value in [0,255]. The mapping can be performed with a linear shift algorithm, a logarithmic mapping algorithm, a piecewise-function mapping algorithm, an adaptive logarithmic mapping algorithm, a high-dynamic-range image visualization algorithm, a logarithmic piecewise mapping algorithm and the like, which is not limited in the present application. The resulting correspondence between each pixel and its value in the interval [0,255] is the function of the second sub-image.
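One possible linear realization of the endpoint selection and the mapping to [0,255] (the linear form is only one of the algorithms listed above, and c = d = 100 follows the suggestion in the text):

```python
import numpy as np

def map_dynamic_range(mid, c=100, d=100, out_max=255):
    """Linear dynamic-range mapping sketch: the minimum is the value of
    the c-th pixel counted from the dark end and the maximum the value of
    the d-th pixel from the bright end, then [lo, hi] is mapped linearly
    onto [0, out_max]."""
    flat = np.sort(mid.ravel())
    lo = flat[min(c - 1, flat.size - 1)]   # c-th smallest pixel value
    hi = flat[max(flat.size - d, 0)]       # d-th largest pixel value
    if hi <= lo:                           # degenerate range: flat image
        return np.zeros_like(mid, dtype=np.float64)
    return (np.clip(mid, lo, hi) - lo) * (out_max / (hi - lo))
```

Clipping to the c-th and d-th pixels discards a few outliers at either end, which is what reduces the error of the estimated dynamic range.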
S104, carrying out second processing on the low-frequency component of the target image to obtain a second processed sub-image, carrying out third processing on the low-frequency component of the target image to obtain a third processed sub-image, and synthesizing the second processed sub-image and the third processed sub-image according to a first preset proportion to obtain a third sub-image;
Here, in detail, the second processing may be double-platform histogram equalization, the third processing may be histogram bidirectional equalization, and the first preset proportion may be a proportion set manually according to actual needs, generally one to one, i.e., the second processed sub-image and the third processed sub-image are synthesized in a ratio of 1:1. In this way, the background of the processed target image is purer and its signal-to-noise ratio higher.
Double-platform histogram equalization means selecting two appropriate platform thresholds T1 and T2 with T1 greater than T2. The distribution of the gray values of the low-frequency component in the spatial domain can be represented by a histogram; histogram entries larger than T1 are replaced by T1 and histogram entries smaller than T2 are replaced by T2, and the modified histogram then drives the equalization. In this way, noise in the low-frequency component of the target image can be effectively suppressed while the expression of its details is enhanced.
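A minimal sketch of double-platform equalization, following the standard formulation in which the histogram counts are clipped to the band [T2, T1] before ordinary equalization; the threshold values here are placeholders:

```python
import numpy as np

def double_plateau_equalize(low, t1=200, t2=20, levels=256):
    """Double-platform (plateau) histogram equalization sketch: counts
    above T1 are clipped to T1 and nonzero counts below T2 are raised to
    T2 (T1 > T2, chosen per image), then the clipped histogram drives a
    standard CDF-based equalization."""
    img = low.astype(int)
    hist = np.bincount(img.ravel(), minlength=levels).astype(np.float64)
    hist[hist > t1] = t1                       # upper platform: limit dominant background
    hist[(hist > 0) & (hist < t2)] = t2        # lower platform: protect weak details
    cdf = np.cumsum(hist)
    cdf = cdf / cdf[-1]                        # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(int)
    return lut[img]
```

The upper platform keeps a large uniform background from dominating the mapping, while the lower platform keeps sparsely populated gray levels (the details) from being merged away.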
Histogram bidirectional equalization equalizes both the gray-level density of the histogram and the gray-level spacing of the histogram. Equalizing the gray-level density means approximately transforming the input histogram into one with a uniform density distribution, which increases the dynamic range and contrast of the image. Equalizing the gray-level spacing means arranging the occupied gray levels at equal intervals over the display range. In this way, the details and definition of the image can be enhanced.
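Histogram bidirectional equalization could be sketched as below; averaging the density mapping and the spacing mapping is an assumption about how the two equalizations are combined, which the text does not specify:

```python
import numpy as np

def bidirectional_equalize(low, levels=256):
    """Sketch of histogram bidirectional equalization: the density term
    is plain histogram equalization; the spacing term redistributes the
    occupied gray levels at equal intervals over the display range. The
    two look-up tables are averaged here (an illustrative choice)."""
    img = low.astype(int)
    hist = np.bincount(img.ravel(), minlength=levels).astype(np.float64)

    # density equalization: classic CDF mapping
    density_lut = np.cumsum(hist) / hist.sum() * (levels - 1)

    # spacing equalization: occupied levels placed at equal intervals
    occupied = np.flatnonzero(hist)
    spacing_lut = np.zeros(levels)
    spacing_lut[occupied] = np.linspace(0, levels - 1, occupied.size)

    lut = np.round((density_lut + spacing_lut) / 2).astype(int)
    return lut[img]
```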
Of course, the second processing and the third processing may both be double-platform histogram equalization or both be histogram bidirectional equalization; alternatively, the second processing may be histogram bidirectional equalization and the third processing double-platform histogram equalization, which is not limited in this application.
S105: synthesizing the first sub-image, the second sub-image and the third sub-image in a second preset proportion.
Specifically, the second preset proportion may be set manually according to actual needs. For example, to highlight the edges and details of the processed target image, the proportion of the first sub-image may be set larger; to highlight its basic structure, the proportion of the second sub-image may be set larger; and to highlight its background, the proportion of the third sub-image may be set larger; the present application does not limit this. In this way, the edges, details, basic structure and background of the target image can all be effectively enhanced, and the style of the processed target image can be adjusted more flexibly.
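The final synthesis step reduces to a weighted sum; the weights stand in for the "second preset proportion" and are purely illustrative:

```python
import numpy as np

def synthesize(sub1, sub2, sub3, weights=(0.4, 0.3, 0.3)):
    """Weighted synthesis of the three sub-images; e.g. a larger first
    weight emphasizes edges and details, a larger second weight the
    basic structure, a larger third weight the background."""
    w1, w2, w3 = weights
    return w1 * sub1 + w2 * sub2 + w3 * sub3
```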
Optionally, in an embodiment of the present application, performing a first processing on the target image to obtain a first processed sub-image, and multiplying the first processed sub-image by a high-frequency component of the target image to obtain a first sub-image includes:
and performing mean square error calculation on the target image to obtain a first processed sub-image, and multiplying the first processed sub-image by the high-frequency component of the target image to obtain a first sub-image.
For example, an M × N mean-square-error calculation is performed on the spatial-domain function of the target image: the target image is divided into M × N parts, where M and N can be set manually according to actual needs; the mean square error of the gray values of each part is calculated, and the gray values of all pixels of that part are replaced by the calculated mean square error. The mean square errors of the M × N parts serve as adaptive gain coefficients, i.e., the correspondence between each pixel of the first-processed target image and its mean square error is taken as the spatial-domain function of the first processed sub-image. Multiplying the spatial-domain function of the high-frequency component of the target image by that of the first processed sub-image gives the spatial-domain function of the first sub-image. In this way the high-frequency component of the target image receives a further gain that differs from region to region, i.e., an adaptive gain, so that the edges and details of the processed target image are more prominent.
Optionally, in an embodiment of the present application, performing dynamic range mapping on the intermediate frequency component of the target image to obtain a second sub-image includes:
acquiring the dynamic range of the intermediate frequency component of the target image;
and establishing a mapping relation between the dynamic range of the intermediate frequency component and a first preset interval, and mapping the dynamic range of the intermediate frequency component to the first preset interval.
Specifically, the dynamic range of an image is the range from the pixel value of its darkest pixel to that of its brightest pixel: the darkest pixel value is the minimum of the dynamic range and the brightest pixel value its maximum. Preferably, the pixels of the intermediate frequency component of the target image can be counted in order of increasing pixel value and the value of the c-th pixel taken as the minimum of the dynamic range; likewise, counting in order of decreasing pixel value, the value of the d-th pixel is taken as the maximum. Here c and d can be set manually according to actual needs and are generally both about 100. This reduces the error of the dynamic range of the intermediate frequency component and facilitates subsequent processing.
Dynamic range mapping means establishing a mapping between the pixel-value interval [c, d] and [0,255] and mapping every pixel whose value lies in [c, d] to the interval [0,255], i.e., converting each such pixel value into a unique value in [0,255]. The mapping can be performed with a linear shift algorithm, a logarithmic mapping algorithm, a piecewise-function mapping algorithm, an adaptive logarithmic mapping algorithm, a high-dynamic-range image visualization algorithm, a logarithmic piecewise mapping algorithm and the like, which is not limited in the present application. The resulting correspondence between each pixel and its value in the interval [0,255] is the function of the second sub-image.
Optionally, in an embodiment of the present application, performing a second processing on the low-frequency component of the target image to obtain a second processed sub-image includes:
and carrying out double-platform histogram equalization on the low-frequency component of the target image to obtain a second processing sub-image.
Double-platform histogram equalization means selecting two appropriate platform thresholds T1 and T2 with T1 greater than T2. The distribution of the gray values of the low-frequency component in the spatial domain can be represented by a histogram; histogram entries larger than T1 are replaced by T1 and histogram entries smaller than T2 are replaced by T2, and the modified histogram then drives the equalization. In this way, noise in the low-frequency component of the target image can be effectively suppressed while its details are enhanced.
Optionally, in an embodiment of the present application, the third processing on the low-frequency component of the target image to obtain a third processed sub-image includes:
and carrying out histogram bidirectional equalization on the low-frequency component of the target image to obtain a third processing sub-image.
Histogram bidirectional equalization equalizes both the gray-level density of the histogram and the gray-level spacing of the histogram. Equalizing the gray-level density means approximately transforming the input histogram into one with a uniform density distribution, which increases the dynamic range and contrast of the image. Equalizing the gray-level spacing means arranging the occupied gray levels at equal intervals over the display range. In this way, the details and definition of the image can be enhanced.
Referring to fig. 2, fig. 2 is a flowchart illustrating another image processing method according to an embodiment of the present application, including: performing image filtering on the target image according to a preset image filtering algorithm to obtain a high-frequency component, a medium-frequency component and a low-frequency component of the target image; carrying out mean square error calculation on the target image to obtain a first processed sub-image, and multiplying the first processed sub-image by the high-frequency component of the target image to obtain a first sub-image; acquiring the dynamic range of the intermediate frequency component of the target image; establishing a mapping relation between the dynamic range of the intermediate frequency component and a first preset interval, and mapping the dynamic range of the intermediate frequency component to the first preset interval; performing double-platform histogram equalization on the low-frequency component of the target image to obtain a second processed sub-image, performing histogram bidirectional equalization on the low-frequency component of the target image to obtain a third processed sub-image, and synthesizing the second processed sub-image and the third processed sub-image according to a first preset proportion to obtain a third sub-image; and synthesizing the first sub-image, the second sub-image and the third sub-image in a second preset proportion. Therefore, the details of the processed target image can be more prominent, and the signal-to-noise ratio is higher.
Example two
In a second aspect, an embodiment of the present application provides an image processing apparatus 20, including:
an image decomposition module 201, a first calculation module 202, a second calculation module 203, a third calculation module 204 and an image synthesis module 205;
the image decomposition module 201, the first calculation module 202, the second calculation module 203, the third calculation module 204 and the image synthesis module 205 may be integrated into a data processing module, which is divided into five virtual modules according to different functions, and does not represent the actual hardware structure.
The image decomposition module 201 is configured to perform image filtering on the target image according to a preset image filtering algorithm to obtain a high-frequency component, a medium-frequency component and a low-frequency component of the target image;
Specifically, the function of the target image in the spatial domain represents the gray-level distribution of the target image. Preferably, this spatial-domain function can be transformed into a frequency-domain function through a Fourier transform, and the frequency-domain function is then filtered according to a preset image filtering algorithm to obtain the high-frequency, medium-frequency and low-frequency components of the target image. The high-frequency component is the higher-frequency part of the frequency-domain function and corresponds to the regions where the gray value of the target image changes sharply, i.e., the edges and details of the image; the medium-frequency component is the middle part and corresponds to the regions where the gray value changes moderately, i.e., the basic structure of the image; the low-frequency component is the lower-frequency part and corresponds to the regions where the gray value changes slowly, i.e., the background of the image. It should be noted that the boundaries between the higher, medium and lower frequency bands may be preset manually according to actual needs, and the preset image filtering algorithm may include such manually preset boundaries.
For example, the part of the target image's frequency-domain function above a threshold a may be taken as the high-frequency component, the part below a but above a threshold b as the medium-frequency component, and the part below b as the low-frequency component, where a and b are preset manually according to actual needs. Of course, this is merely an example and does not limit the present application.
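As a rough sketch of the decomposition step above (not taken from the patent: the radial thresholds `a` and `b` and the ideal band-pass masks are illustrative assumptions), the three components can be obtained by masking the shifted Fourier spectrum:

```python
import numpy as np

def split_frequency_bands(img, a, b):
    """Split a grayscale image into high/mid/low frequency components.

    a and b (a > b) are illustrative radial-frequency thresholds,
    measured as distance from the spectrum centre after fftshift.
    """
    f = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    rows, cols = img.shape
    cy, cx = rows // 2, cols // 2
    y, x = np.ogrid[:rows, :cols]
    r = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)  # distance from the DC bin

    def band(mask):
        # zero out everything outside the mask and transform back
        return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

    high = band(r >= a)              # edges and fine detail
    mid = band((r >= b) & (r < a))   # basic structure
    low = band(r < b)                # slowly varying background
    return high, mid, low
```

Because the three masks partition the spectrum, the components sum back to the original image, which is a convenient sanity check for any choice of a and b.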
The first calculating module 202 is configured to perform first processing on the target image to obtain a first processed sub-image, and multiply the first processed sub-image with a high-frequency component of the target image to obtain a first sub-image;
Here, in detail, performing the first processing on the target image yields a first processed sub-image that characterizes the gray-level distribution of each part of the target image. For example, an M × N mean square error calculation is performed on the spatial-domain function of the target image: the target image is divided into M × N parts (M and N may be set manually according to actual needs), the mean square error of the gray values in each part is calculated, and the gray values of all pixels in that part are replaced by the calculated mean square error. The M × N mean square errors serve as adaptive gain coefficients; that is, the correspondence between each pixel of the target image and the mean square error of its part is the spatial-domain function of the first processed sub-image. Multiplying the spatial-domain function of the high-frequency component of the target image by the spatial-domain function of the first processed sub-image yields the spatial-domain function of the first sub-image. In this way, the high-frequency component of the target image is further gained, and different regions are gained to different degrees; that is, the high-frequency component receives an adaptive gain, so that the edges and details of the processed target image are more prominent.
The second calculating module 203 is configured to perform dynamic range mapping on the intermediate frequency component of the target image to obtain a second sub-image;
Specifically, the dynamic range of an image is the range from the pixel value of its darkest pixel to the pixel value of its brightest pixel; the former is the minimum of the dynamic range and the latter its maximum. Preferably, the pixels corresponding to the intermediate frequency component of the target image can be counted in ascending order of pixel value, taking the value of the c-th pixel as the minimum of the dynamic range, and counted in descending order of pixel value, taking the value of the d-th pixel as the maximum, where c and d may be set manually according to actual needs and are generally both around 100. This reduces the error in the dynamic range of the intermediate frequency component and facilitates subsequent processing.
Dynamic range mapping means establishing a mapping between the pixel value interval [c, d] and [0, 255] and mapping the pixel values lying in [c, d] onto [0, 255]; that is, each pixel value in [c, d] is converted into a unique value in [0, 255]. The mapping may be performed by a linear shift algorithm, a logarithmic mapping algorithm, a piecewise function mapping algorithm, an adaptive logarithmic mapping algorithm, a high dynamic range image visualization algorithm, a logarithmic piecewise mapping algorithm, or the like, which is not limited in the present application. The resulting correspondence between each pixel and its value in [0, 255] is the function of the second sub-image.
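A minimal sketch of the mapping using the simplest of the listed options, a linear mapping (the choice of endpoints via the c-th smallest and d-th largest pixel values follows the description above; clipping out-of-range values to the endpoints is an assumption):

```python
import numpy as np

def dynamic_range_map(band, c=100, d=100):
    """Map the mid-frequency component's dynamic range linearly onto
    [0, 255]. The range endpoints are the c-th smallest and d-th
    largest pixel values, which discards extreme outliers at both
    ends (the text suggests c and d around 100)."""
    flat = np.sort(band.ravel())
    lo = flat[min(c - 1, flat.size - 1)]          # c-th smallest value
    hi = flat[max(flat.size - d, 0)]              # d-th largest value
    if hi <= lo:
        return np.zeros_like(band, dtype=np.float64)
    # clip to [lo, hi] and rescale linearly to [0, 255]
    return (np.clip(band, lo, hi) - lo) / (hi - lo) * 255.0
```

Any of the non-linear mappings mentioned above (logarithmic, piecewise, ...) would replace only the final rescaling line.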
The third calculating module 204 is configured to perform second processing on the low-frequency component of the target image to obtain a second processed sub-image, perform third processing on the low-frequency component of the target image to obtain a third processed sub-image, and synthesize the second processed sub-image and the third processed sub-image according to the first preset ratio to obtain a third sub-image;
Here, in detail, the second processing may be double-platform histogram equalization, the third processing may be histogram bidirectional equalization, and the first preset ratio may be set manually according to actual needs; it is generally one to one, that is, the second processed sub-image and the third processed sub-image are synthesized at a ratio of 1:1. This makes the background of the processed target image purer and its signal-to-noise ratio higher.
Double-platform histogram equalization means selecting two appropriate platform thresholds T1 and T2, where T1 is greater than T2. The distribution of the gray values of the low-frequency component of the target image in the spatial domain can be represented by a histogram; histogram values greater than T1 are replaced by T1, non-zero histogram values less than T2 are replaced by T2, and equalization is then performed on the clipped histogram. In this way, the noise in the low-frequency component of the target image can be effectively suppressed and the details of the low-frequency component enhanced.
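A sketch of double-platform histogram equalization with the T1/T2 clipping described above; the CDF-based remapping step is the standard equalization procedure, but the exact variant used here is not specified, so treat this as an illustration:

```python
import numpy as np

def dual_plateau_equalize(img, t_up, t_low, levels=256):
    """Double-platform histogram equalization (sketch).

    Bin counts above t_up are clipped to t_up (suppresses dominant
    background peaks and noise); non-zero counts below t_low are
    raised to t_low (protects weak detail). The clipped histogram's
    CDF then remaps the gray levels.
    """
    img = img.astype(np.int64)
    hist = np.bincount(img.ravel(), minlength=levels).astype(np.float64)
    hist = np.where(hist > t_up, t_up, hist)                 # upper plateau T1
    hist = np.where((hist > 0) & (hist < t_low), t_low, hist)  # lower plateau T2
    cdf = np.cumsum(hist)
    lut = np.round(cdf / cdf[-1] * (levels - 1)).astype(np.int64)
    return lut[img]
```

With t_up = t_low the method degenerates toward histogram specification on a flat histogram; with t_up = ∞ and t_low = 0 it reduces to classic histogram equalization.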
Histogram bidirectional equalization equalizes both the gray density of the histogram and the gray interval of the histogram. Equalizing the gray density means transforming the input histogram into an approximately uniform density distribution, which increases the dynamic range and contrast of the image. Equalizing the gray interval means arranging the gray levels of the pixels at equal intervals across the display range. In this way, the detail and definition of the image can be enhanced.
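The two equalizations above can be sketched as two lookup tables that are then blended; the equal-weight combination via `alpha` is an assumption, since the text does not specify how the density and interval equalizations are combined:

```python
import numpy as np

def bidirectional_equalize(img, levels=256, alpha=0.5):
    """Sketch of histogram bidirectional equalization: blend
    gray-density equalization (classic CDF remapping) with
    gray-interval equalization (occupied gray levels spread at equal
    spacing over the display range). alpha is an assumed blend weight."""
    img = img.astype(np.int64)
    hist = np.bincount(img.ravel(), minlength=levels).astype(np.float64)
    # density equalization: CDF-based lookup table
    cdf = np.cumsum(hist)
    lut_density = cdf / cdf[-1] * (levels - 1)
    # interval equalization: k-th occupied level -> k-th equal step
    occupied = np.flatnonzero(hist)
    lut_interval = np.zeros(levels)
    lut_interval[occupied] = np.linspace(0, levels - 1, occupied.size)
    lut = np.round(alpha * lut_density + (1 - alpha) * lut_interval).astype(np.int64)
    return lut[img]
```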
Of course, the second processing and the third processing may both be double-platform histogram equalization or both be histogram bidirectional equalization; alternatively, the second processing may be histogram bidirectional equalization and the third processing double-platform histogram equalization, which is not limited in this application.
The image composition module 205 is configured to compose the first sub-image, the second sub-image and the third sub-image at a second preset ratio.
Specifically, the second preset ratio may be set manually according to actual needs. For example, to highlight the edges and details of the processed target image, the proportion of the first sub-image may be increased; to highlight its basic structure, the proportion of the second sub-image may be increased; to highlight its background, the proportion of the third sub-image may be increased. The present application does not limit this. In this way, the edges, details, basic structure and background of the target image can all be effectively enhanced, and the style of the processed target image can be adjusted more flexibly.
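The synthesis at the second preset ratio amounts to a weighted sum of the three sub-images; the weight triple `w` and the final clipping to [0, 255] are assumptions:

```python
import numpy as np

def synthesize(sub1, sub2, sub3, w=(1.0, 1.0, 1.0)):
    """Weighted synthesis of the three sub-images. Raising w[0]
    emphasizes edges/details, w[1] the basic structure, w[2] the
    background, matching the tuning guidance above."""
    out = w[0] * sub1 + w[1] * sub2 + w[2] * sub3
    return np.clip(out, 0, 255)  # keep the result a displayable image
```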
Optionally, in an embodiment of the present application, the first calculating module 202 is configured to perform a first processing on the target image to obtain a first processed sub-image, and multiply the first processed sub-image with the high-frequency component of the target image to obtain a first sub-image, and includes:
and performing mean square error calculation on the target image to obtain a first processed sub-image, and multiplying the first processed sub-image by the high-frequency component of the target image to obtain a first sub-image.
For example: the method comprises the steps of carrying out M × N mean square error calculation on a function of a target image in a space domain, namely dividing the target image into M × N parts, wherein M and N can be set manually according to actual needs, calculating the mean square error of gray values of each part of the target image, replacing the gray values of all pixel points of the corresponding part with the calculated mean square error, and taking the mean square errors of the M × N parts as adaptive gain coefficients, namely taking the corresponding relation between all pixel points of the target image in the space domain after first processing and the corresponding mean square error as the function of a first processed sub-image in the space domain. And multiplying the function of the high-frequency component of the target image in the spatial domain by the function of the first processing sub-image in the spatial domain to obtain the function of the first sub-image in the spatial domain. In this way, the high-frequency components of the target image can be further gained, and the degrees of different target image gains are different, that is, the high-frequency components of the target image can be subjected to adaptive gain, so that the edges and details of the processed target image are more prominent.
Optionally, in an embodiment of the present application, the second calculating module 203 is configured to perform dynamic range mapping on the intermediate frequency component of the target image to obtain a second sub-image, and includes:
acquiring the dynamic range of the intermediate frequency component of the target image;
and establishing a mapping relation between the dynamic range of the intermediate frequency component and a first preset interval, and mapping the dynamic range of the intermediate frequency component to the first preset interval.
Specifically, the dynamic range of an image is the range from the pixel value of its darkest pixel to the pixel value of its brightest pixel; the former is the minimum of the dynamic range and the latter its maximum. Preferably, the pixels corresponding to the intermediate frequency component of the target image can be counted in ascending order of pixel value, taking the value of the c-th pixel as the minimum of the dynamic range, and counted in descending order of pixel value, taking the value of the d-th pixel as the maximum, where c and d may be set manually according to actual needs and are generally both around 100. This reduces the error in the dynamic range of the intermediate frequency component and facilitates subsequent processing.
Dynamic range mapping means establishing a mapping between the pixel value interval [c, d] and [0, 255] and mapping the pixel values lying in [c, d] onto [0, 255]; that is, each pixel value in [c, d] is converted into a unique value in [0, 255]. The mapping may be performed by a linear shift algorithm, a logarithmic mapping algorithm, a piecewise function mapping algorithm, an adaptive logarithmic mapping algorithm, a high dynamic range image visualization algorithm, a logarithmic piecewise mapping algorithm, or the like, which is not limited in the present application. The resulting correspondence between each pixel and its value in [0, 255] is the function of the second sub-image.
Optionally, in an embodiment of the present application, the third calculating module 204 is configured to perform a second processing on the low-frequency component of the target image to obtain a second processed sub-image, and includes:
and carrying out double-platform histogram equalization on the low-frequency component of the target image to obtain a second processing sub-image.
Double-platform histogram equalization means selecting two appropriate platform thresholds T1 and T2, where T1 is greater than T2. The distribution of the gray values of the low-frequency component of the target image in the spatial domain can be represented by a histogram; histogram values greater than T1 are replaced by T1, non-zero histogram values less than T2 are replaced by T2, and equalization is then performed on the clipped histogram. In this way, the noise in the low-frequency component of the target image can be effectively suppressed and the details of the low-frequency component enhanced.
Optionally, in an embodiment of the present application, the third calculating module 204 is configured to perform third processing on the low-frequency component of the target image to obtain a third processed sub-image, and includes:
and carrying out histogram bidirectional equalization on the low-frequency component of the target image to obtain a third processing sub-image.
Histogram bidirectional equalization equalizes both the gray density of the histogram and the gray interval of the histogram. Equalizing the gray density means transforming the input histogram into an approximately uniform density distribution, which increases the dynamic range and contrast of the image. Equalizing the gray interval means arranging the gray levels of the pixels at equal intervals across the display range. In this way, the detail and definition of the image can be enhanced.
EXAMPLE III
In a third aspect, an embodiment of the present application provides a storage medium, where a computer program is stored on the storage medium, and when the computer program is executed by a processor, the method according to the first embodiment is implemented.
The storage medium of the embodiments of the present application exists in various forms, including but not limited to:
(1) A mobile communication device: such devices are characterized by mobile communication capabilities and are primarily aimed at providing voice and data communication. Such terminals include: smart phones (e.g., the iPhone), multimedia phones, feature phones, and low-end phones, among others.
(2) An ultra-mobile personal computer device: such equipment belongs to the category of personal computers, has computing and processing functions, and generally also has mobile internet access. Such terminals include: PDA, MID, and UMPC devices, etc., such as the iPad.
(3) A portable entertainment device: such devices can display and play multimedia content. This type of device comprises: audio and video players (e.g., the iPod), handheld game consoles, electronic books, smart toys, and portable car navigation devices.
(4) And other electronic equipment with data interaction function.
Thus, particular embodiments of the present subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may be advantageous.
In the 1990s, it was easy to tell whether an improvement to a technology was an improvement in hardware (for example, an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit, so it cannot be said that an improvement to a method flow cannot be realized by hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. Designers "integrate" a digital system onto a single PLD by programming it themselves, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of making integrated circuit chips by hand, this programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development; the source code to be compiled must likewise be written in a particular programming language, called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that hardware circuitry that implements the logical method flows can be readily obtained by merely slightly programming the method flows into an integrated circuit using the hardware description languages described above.
The controller may be implemented in any suitable manner; for example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer-readable program code, the same functionality can be achieved by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for performing the various functions may also be regarded as structures within the hardware component; indeed, the means for performing the functions may be regarded as both software modules for performing the method and structures within a hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent memory in computer-readable media, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular transactions or implement particular abstract data types. The application may also be practiced in distributed computing environments where transactions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (11)

1. An image processing method, comprising:
performing image filtering on a target image according to a preset image filtering algorithm to obtain a high-frequency component, a medium-frequency component and a low-frequency component of the target image;
performing first processing on the target image to obtain a first processed sub-image, and multiplying the first processed sub-image and the high-frequency component of the target image to obtain a first sub-image;
carrying out dynamic range mapping on the intermediate frequency component of the target image to obtain a second sub-image;
carrying out second processing on the low-frequency component of the target image to obtain a second processed sub-image, carrying out third processing on the low-frequency component of the target image to obtain a third processed sub-image, and synthesizing the second processed sub-image and the third processed sub-image according to a first preset proportion to obtain a third sub-image;
and synthesizing the first sub-image, the second sub-image and the third sub-image in a second preset proportion.
2. The method of claim 1, wherein first processing the target image to obtain a first processed sub-image and multiplying the first processed sub-image with the high frequency component of the target image to obtain a first sub-image, comprises:
and performing mean square error calculation on the target image to obtain a first processed sub-image, and multiplying the first processed sub-image and the high-frequency component of the target image to obtain a first sub-image.
3. The method of claim 1, wherein dynamic range mapping the mid-frequency component of the target image to obtain a second sub-image comprises:
acquiring the dynamic range of the intermediate frequency component of the target image;
and establishing a mapping relation between the dynamic range of the intermediate frequency component and a first preset interval, and mapping the dynamic range of the intermediate frequency component to the first preset interval.
4. The method of claim 1, wherein second processing the low frequency component of the target image to obtain a second processed sub-image comprises:
and carrying out double-platform histogram equalization on the low-frequency component of the target image to obtain a second processing sub-image.
5. The method of claim 1, wherein the third processing of the low frequency component of the target image to obtain a third processed sub-image comprises:
and carrying out histogram bidirectional equalization on the low-frequency component of the target image to obtain a third processing sub-image.
6. An image processing apparatus, characterized by comprising:
an image decomposition module, a first calculation module, a second calculation module, a third calculation module, and an image synthesis module; wherein
the image decomposition module is configured to filter a target image according to a preset image filtering algorithm to obtain a high-frequency component, an intermediate-frequency component, and a low-frequency component of the target image;
the first calculation module is configured to perform first processing on the target image to obtain a first processed sub-image, and to multiply the first processed sub-image by the high-frequency component of the target image to obtain a first sub-image;
the second calculation module is configured to perform dynamic range mapping on the intermediate-frequency component of the target image to obtain a second sub-image;
the third calculation module is configured to perform second processing on the low-frequency component of the target image to obtain a second processed sub-image, to perform third processing on the low-frequency component of the target image to obtain a third processed sub-image, and to synthesize the second processed sub-image and the third processed sub-image according to a first preset proportion to obtain a third sub-image;
the image synthesis module is configured to synthesize the first sub-image, the second sub-image, and the third sub-image according to a second preset proportion.
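Taken together, the modules of claim 6 describe a decompose-process-recombine pipeline. The sketch below wires the stages end to end with stand-in per-band steps (a fixed high-frequency gain, a linear mid-band rescale, an untouched low band) and a simple mean filter as the "preset image filtering algorithm"; all of these concrete choices are assumptions for illustration, not the patent's method:

```python
import numpy as np

def box_blur(img, k):
    """Mean filter with edge padding (stand-in for the claimed
    'preset image filtering algorithm')."""
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode='edge')
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def decompose(img):
    """Split into high / intermediate / low frequency bands using
    two blur radii (illustrative kernel sizes). The bands sum back
    to the original image."""
    low = box_blur(img, 15)
    smooth = box_blur(img, 3)
    mid = smooth - low
    high = img.astype(np.float64) - smooth
    return high, mid, low

def enhance(img, w=(1.0, 1.0, 1.0)):
    """Pipeline mirroring claim 6: per-band processing, then a
    weighted ('second preset proportion') synthesis."""
    high, mid, low = decompose(img)
    first = 1.5 * high                     # stand-in for the MSE-based gain
    rng = mid.max() - mid.min()
    second = (mid - mid.min()) / rng * 255.0 if rng else mid
    third = low                            # stand-in for the two equalizations
    return w[0] * first + w[1] * second + w[2] * third

img = np.arange(64, dtype=np.float64).reshape(8, 8) * 4
out = enhance(img)
```

Note the decomposition is exact (high + mid + low reconstructs the input), so any change in the output comes only from the per-band processing and the synthesis weights.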
7. The apparatus of claim 6, wherein performing the first processing on the target image to obtain the first processed sub-image, and multiplying the first processed sub-image by the high-frequency component of the target image to obtain the first sub-image, comprises:
performing a mean square error calculation on the target image to obtain the first processed sub-image, and multiplying the first processed sub-image by the high-frequency component of the target image to obtain the first sub-image.
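A plausible reading of the "mean square error calculation" in claim 7 is a per-pixel local variance map of the target image, which can then act as an adaptive gain on the high-frequency component (detail-rich regions are boosted more). A sketch under that assumption, with window size and normalization chosen for illustration:

```python
import numpy as np

def local_mse_map(img, k=5):
    """Per-pixel mean square error (local variance) over a k x k
    window, via sliding sums of x and x^2: E[x^2] - (E[x])^2."""
    img = img.astype(np.float64)
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    mean = np.zeros(img.shape, dtype=np.float64)
    mean_sq = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            w = p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            mean += w
            mean_sq += w * w
    mean /= k * k
    mean_sq /= k * k
    return mean_sq - mean * mean

img = np.zeros((6, 6)); img[:, 3:] = 100.0    # step edge
mse = local_mse_map(img)
gain = 1.0 + mse / max(mse.max(), 1e-12)      # illustrative normalization
```

Multiplying `gain` (the "first processed sub-image" under this reading) with the high-frequency band leaves flat regions near unity gain while amplifying detail around edges.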
8. The apparatus of claim 6, wherein performing the dynamic range mapping on the intermediate-frequency component of the target image to obtain the second sub-image comprises:
acquiring the dynamic range of the intermediate-frequency component of the target image; and
establishing a mapping relationship between the dynamic range of the intermediate-frequency component and a first preset interval, and mapping the dynamic range of the intermediate-frequency component onto the first preset interval.
9. The apparatus of claim 6, wherein performing the second processing on the low-frequency component of the target image to obtain the second processed sub-image comprises:
performing double-platform histogram equalization on the low-frequency component of the target image to obtain the second processed sub-image.
10. The apparatus of claim 6, wherein performing the third processing on the low-frequency component of the target image to obtain the third processed sub-image comprises:
performing bidirectional histogram equalization on the low-frequency component of the target image to obtain the third processed sub-image.
11. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 5.
CN202010432614.2A 2020-05-20 2020-05-20 Image processing method, apparatus and storage medium Pending CN113706394A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010432614.2A CN113706394A (en) 2020-05-20 2020-05-20 Image processing method, apparatus and storage medium


Publications (1)

Publication Number Publication Date
CN113706394A true CN113706394A (en) 2021-11-26

Family

ID=78645883


Country Status (1)

Country Link
CN (1) CN113706394A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102693531A * 2012-01-11 2012-09-26 Henan University of Science and Technology Adaptive double-platform based infrared image enhancement method
CN104067311A * 2011-12-04 2014-09-24 Digital Makeup Ltd. Digital makeup
CN104685536A * 2012-10-05 2015-06-03 Koninklijke Philips N.V. Real-time image processing for optimizing sub-images views
CN109146814A * 2018-08-20 2019-01-04 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, device, storage medium and electronic equipment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WU, HAORAN: "Spotted Cow Surveyor's Notes, Part 1: Comprehensive Surveying Capability", 31 March 2017, Tianjin University Press, pages: 259 *
XIONG, WAN: "Research on Image Enhancement Methods Based on Industrial Field Environments", Modern Electronics, pages 163 - 167 *

Similar Documents

Publication Publication Date Title
CN110189246B (en) Image stylization generation method and device and electronic equipment
CN109214193B (en) Data encryption and machine learning model training method and device and electronic equipment
CN109521994A (en) Multiplication hardware circuit, system on chip and electronic equipment
CN113222813B (en) Image super-resolution reconstruction method and device, electronic equipment and storage medium
CN104094312A (en) Control of video processing algorithms based on measured perceptual quality characteristics
CN111798545A (en) Method and device for playing skeleton animation, electronic equipment and readable storage medium
CN116128894A (en) Image segmentation method and device and electronic equipment
CN109213468B (en) Voice playing method and device
CN112492382B (en) Video frame extraction method and device, electronic equipment and storage medium
CN116912923B (en) Image recognition model training method and device
KR101597623B1 (en) Fast approach to finding minimum and maximum values in a large data set using simd instruction set architecture
CN115511754B (en) Low-illumination image enhancement method based on improved Zero-DCE network
CN111507726B (en) Message generation method, device and equipment
CN114234984B (en) Indoor positioning track smoothing method, system and equipment based on difference matrix
CN113706394A (en) Image processing method, apparatus and storage medium
CN113780534B (en) Compression method, image generation method, device, equipment and medium of network model
CN113112084B (en) Training plane rear body research and development flow optimization method and device
CN113875228B (en) Video frame inserting method and device and computer readable storage medium
CN112752101B (en) Video quality optimization method and device and storage medium
CN110209851B (en) Model training method and device, electronic equipment and storage medium
CN110222777B (en) Image feature processing method and device, electronic equipment and storage medium
CN111984247A (en) Service processing method and device and electronic equipment
CN112967208A (en) Image processing method and device, electronic equipment and storage medium
CN113256765A (en) AI anchor video generation method and device, electronic equipment and storage medium
CN117369765A (en) Equalizer self-adaptive adjusting method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination