CN116342434B - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium

Info

Publication number
CN116342434B
Authority
CN
China
Prior art keywords
image
channel
multichannel
virtual channel
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310617735.8A
Other languages
Chinese (zh)
Other versions
CN116342434A (en)
Inventor
郭江涛
胡昌欣
李丹
孙二东
张武杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Casi Vision Technology Luoyang Co Ltd
Casi Vision Technology Beijing Co Ltd
Original Assignee
Casi Vision Technology Luoyang Co Ltd
Casi Vision Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Casi Vision Technology Luoyang Co Ltd, Casi Vision Technology Beijing Co Ltd filed Critical Casi Vision Technology Luoyang Co Ltd
Priority to CN202310617735.8A priority Critical patent/CN116342434B/en
Publication of CN116342434A publication Critical patent/CN116342434A/en
Application granted granted Critical
Publication of CN116342434B publication Critical patent/CN116342434B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides an image processing method, an image processing device, image processing equipment and a storage medium, and relates to the technical field of computers. The method mainly comprises the following steps: acquiring a multichannel image; adjusting the resolution of the multichannel image to obtain an adjusted multichannel image; performing first denoising processing on the adjusted multichannel image to obtain a denoised multichannel image; performing pixel fusion on the denoised multichannel image to obtain a virtual channel image; restoring the resolution of the virtual channel image to obtain a restored virtual channel image; and performing second denoising processing on the restored virtual channel image to obtain an image processing result. The method and the device can realize the processing of multichannel images and improve the accuracy of multichannel image processing.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method, an apparatus, a device, and a storage medium.
Background
Image processing is an indispensable technology in computer vision. Most existing image processing algorithms can only handle single-channel images; if a computer vision task involves multichannel images, each channel image can only be processed independently, which is inefficient, and when individual channel images have defects, the multiple image processing results finally obtained are not accurate enough, so the accuracy of the computer vision task result cannot be guaranteed. In addition, most existing image processing algorithms can only perform a single kind of processing on an image; for example, the histogram equalization algorithm can only perform contrast enhancement on an image but cannot denoise it.
Disclosure of Invention
The present disclosure provides an image processing method, apparatus, device, and storage medium, to at least solve the above technical problems in the prior art.
According to a first aspect of the present disclosure, there is provided an image processing method including: acquiring a multichannel image; adjusting the resolution of the multichannel image to obtain an adjusted multichannel image; performing first denoising processing on the adjusted multichannel image to obtain a denoised multichannel image; performing pixel fusion on the denoised multichannel image to obtain a virtual channel image; restoring the resolution of the virtual channel image to obtain a restored virtual channel image; and performing second denoising processing on the restored virtual channel image to obtain an image processing result.
According to a second aspect of the present disclosure, there is provided an image processing apparatus including: the acquisition module is used for acquiring the multichannel image; the resolution adjustment module is used for adjusting the resolution of the multichannel image to obtain an adjusted multichannel image; the first denoising module is used for performing first denoising processing on the adjusted multichannel image to obtain a denoised multichannel image; the pixel fusion module is used for carrying out pixel fusion on the denoised multichannel image to obtain a virtual channel image; the resolution restoration module is used for restoring the resolution of the virtual channel image to obtain a restored virtual channel image; and the second denoising module is used for performing second denoising processing on the restored virtual channel image to obtain an image processing result.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods described in the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the present disclosure.
According to the image processing method, the device, the equipment and the storage medium, after resolution adjustment and first denoising processing are carried out on the multichannel image, pixel fusion is carried out on the denoised multichannel image, a plurality of channel images are fused into a virtual channel image, resolution reduction and second denoising processing are carried out on the virtual channel image, and therefore an image processing result is obtained. Therefore, the processing of the multi-channel image can be realized, and the resolution adjustment of the multi-channel image can improve the image processing efficiency or the image processing accuracy; the denoising processing of the multichannel image can efficiently remove the noise of the multichannel image, and the accuracy of image processing is improved; the pixel fusion of the multichannel image can obviously enhance the multichannel image, and the obtained virtual channel image fuses the characteristics of a plurality of channel images, so that the accuracy of image processing can be improved, and the accuracy of the result of a computer vision task performed by using the image processing result can be improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1 shows a flowchart of an image processing method according to a first embodiment of the present disclosure;
fig. 2 shows a first scene diagram of an image processing method according to a fifth embodiment of the present disclosure;
fig. 3 shows a second scene diagram of an image processing method according to a fifth embodiment of the present disclosure;
fig. 4 shows a schematic configuration diagram of an image processing apparatus according to an eighth embodiment of the present disclosure;
fig. 5 shows a schematic diagram of a composition structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, features and advantages of the present disclosure more comprehensible, the technical solutions in the embodiments of the present disclosure will be clearly described in conjunction with the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. Based on the embodiments in this disclosure, all other embodiments that a person skilled in the art would obtain without making any inventive effort are within the scope of protection of this disclosure.
Fig. 1 is a schematic flow chart of an image processing method according to a first embodiment of the present disclosure, and as shown in fig. 1, the image processing method mainly includes:
step S101, acquiring a multi-channel image.
In this embodiment, the multichannel image is a set of images of the same scene captured by a line-scan camera or an area-scan camera under different illumination conditions. The multichannel image can be read using the vector data type in the Standard Template Library (STL) of the C++ programming language, where vector is a dynamic array defined in the STL that can store multiple objects of the same type. Specifically, the multichannel image may be stored in a vector of OpenCV matrix image data, i.e., std::vector<cv::Mat> srcImages, where OpenCV (Open Source Computer Vision Library) is an open-source computer vision library.
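As a minimal sketch (not part of the original disclosure), reading one grayscale image per illumination channel into such a container could look as follows; the function name and the assumption that each channel comes from an image file are illustrative:

    #include <opencv2/opencv.hpp>
    #include <string>
    #include <vector>

    // Read one grayscale image per illumination channel into a vector<Mat>.
    // The file paths are hypothetical; in practice they would come from the camera driver.
    std::vector<cv::Mat> readMultiChannelImage(const std::vector<std::string>& paths)
    {
        std::vector<cv::Mat> srcImages;
        for (const std::string& path : paths) {
            cv::Mat channel = cv::imread(path, cv::IMREAD_GRAYSCALE);
            if (!channel.empty()) {
                srcImages.push_back(channel);
            }
        }
        return srcImages;
    }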
Step S102, the resolution of the multichannel image is adjusted to obtain an adjusted multichannel image.
In this embodiment, the resolution of the multi-channel image may be adjusted according to the actual image processing scenario, for example, if a more accurate image processing result is pursued, the resolution of the multi-channel image is improved; if faster image processing speeds are pursued, the resolution of the multi-channel image is reduced. Specifically, a super-resolution reconstruction algorithm can be adopted to improve the resolution of the multichannel image; the resolution of the multi-channel image is reduced by downsampling.
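A minimal sketch of this resolution adjustment, using OpenCV's cv::resize for both directions (a learned super-resolution model could replace the upscaling branch); the scale factor, interpolation choices and function name are illustrative assumptions:

    #include <opencv2/opencv.hpp>

    // Adjust resolution by a scale factor: factor > 1 raises resolution for
    // accuracy, factor < 1 lowers it for speed. Plain interpolation stands in
    // for the super-resolution reconstruction mentioned above.
    cv::Mat adjustResolution(const cv::Mat& img, double factor)
    {
        cv::Mat adjusted;
        int interp = (factor > 1.0) ? cv::INTER_CUBIC : cv::INTER_AREA;
        cv::resize(img, adjusted, cv::Size(), factor, factor, interp);
        return adjusted;
    }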
Step S103, performing first denoising processing on the adjusted multichannel image to obtain a denoised multichannel image.
In this embodiment, a filter-based method, a model-based method, a deep learning-based method, and the like may be adopted to perform the first denoising process on the adjusted multichannel image, where the filter-based methods include median filtering, Wiener filtering, and the like; the model-based methods include sparse models, gradient models, Markov random field (MRF) models, and the like; and the deep learning-based methods include convolutional neural networks (CNNs) and the like.
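For the filter-based route, a minimal sketch applying median filtering channel by channel might look as follows; the 3x3 kernel size is an illustrative assumption:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // First denoising pass: median filtering applied to each channel image.
    void denoiseChannels(std::vector<cv::Mat>& channels, int kernelSize = 3)
    {
        for (cv::Mat& ch : channels) {
            cv::Mat filtered;
            cv::medianBlur(ch, filtered, kernelSize);
            ch = filtered;
        }
    }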
And step S104, carrying out pixel fusion on the denoised multichannel image to obtain a virtual channel image.
In this embodiment, the denoised multichannel image comprises a plurality of channel images, and pixel fusion means fusing the plurality of channel images at the pixel level, merging the pixel features of the plurality of channel images into one image to obtain a virtual channel image. Specifically, pixel fusion may be performed on the denoised multichannel image by weighted averaging, maximum-value selection, difference fusion, or the like. In weighted averaging, the gray values of the pixel points at corresponding positions of the plurality of channel images are averaged with weights, and the weighted average is used as the gray value of the pixel point at the corresponding position of the virtual channel image. In maximum-value selection, the largest of the gray values of the pixel points at corresponding positions of the plurality of channel images is selected and used as the gray value of the pixel point at the corresponding position of the virtual channel image. In difference fusion, a difference operation is performed on the gray values of the pixel points at corresponding positions of the plurality of channel images, and the result of the difference operation is used as the gray value of the pixel point at the corresponding position of the virtual channel image.
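Minimal sketches of the three fusion strategies for two equally sized single-channel images, assuming OpenCV; the equal 0.5 weights in the weighted-average variant are an illustrative assumption:

    #include <opencv2/opencv.hpp>

    // Weighted average of two equally sized channel images (equal weights assumed).
    cv::Mat fuseWeightedAverage(const cv::Mat& a, const cv::Mat& b)
    {
        cv::Mat fused;
        cv::addWeighted(a, 0.5, b, 0.5, 0.0, fused);
        return fused;
    }

    // Per-pixel maximum of the two channel images (maximum-value selection).
    cv::Mat fuseMaximum(const cv::Mat& a, const cv::Mat& b)
    {
        cv::Mat fused;
        cv::max(a, b, fused);
        return fused;
    }

    // Per-pixel gray difference of the two channel images (difference fusion).
    cv::Mat fuseDifference(const cv::Mat& a, const cv::Mat& b)
    {
        cv::Mat fused;
        cv::absdiff(a, b, fused);
        return fused;
    }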
Step S105, the resolution of the virtual channel image is restored, and the restored virtual channel image is obtained.
In this embodiment, since the resolution of the multi-channel image is adjusted in step S102, it is necessary to restore the resolution of the virtual channel image to the resolution before adjustment after the virtual channel image is obtained. Specifically, if the resolution of the multi-channel image is increased in step S102, the resolution of the virtual channel image is decreased in step S105; if the resolution of the multi-channel image is reduced in step S102, the resolution of the virtual channel image is increased in step S105.
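A minimal sketch of this restoration step, assuming the original input size is known and that cv::resize is acceptable for both directions; names and interpolation choices are illustrative:

    #include <opencv2/opencv.hpp>

    // Restore the virtual channel image to the resolution of the original input.
    cv::Mat restoreResolution(const cv::Mat& virtualImg, const cv::Size& originalSize)
    {
        cv::Mat restored;
        int interp = (virtualImg.cols > originalSize.width) ? cv::INTER_AREA
                                                            : cv::INTER_CUBIC;
        cv::resize(virtualImg, restored, originalSize, 0, 0, interp);
        return restored;
    }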
And S106, performing second denoising processing on the restored virtual channel image to obtain an image processing result.
In this embodiment, the second denoising process is also required to be performed on the restored virtual channel image, and specifically, the method of the second denoising process is similar to the method of the first denoising process in step S103, and will not be described herein.
In the first embodiment of the present disclosure, multiple processes are performed on a multi-channel image, and resolution adjustment is performed on the multi-channel image, so that image processing efficiency or image processing accuracy can be improved; the denoising processing of the multichannel image can efficiently remove the noise of the multichannel image, and the accuracy of image processing is improved; the pixel fusion of the multichannel image can obviously enhance the multichannel image, and the obtained virtual channel image fuses the characteristics of a plurality of channel images, so that the accuracy of image processing can be improved, and the accuracy of the result of a computer vision task performed by using the image processing result can be improved.
In a second embodiment of the present disclosure, step S101 acquires a multi-channel image, including:
acquiring an initial image; judging whether the initial image meets preset conditions or not to obtain a first judging result; and if the first judgment result is yes, graying the initial image to obtain a multichannel image.
In this embodiment, determining whether the initial image satisfies the preset condition includes: judging whether the number of channels of the initial image is larger than 1 or not to obtain a second judging result; judging whether the image corresponding to each channel of the initial image can be read or not, and obtaining a third judging result; judging whether the image size corresponding to each channel of the initial image is the same or not, and obtaining a fourth judging result; and if the second judgment result, the third judgment result and the fourth judgment result are all yes, determining that the initial image meets the preset condition. Specifically, if the second judgment result is yes, proving that the initial image is a multi-channel image; if the third judgment result is yes, proving that no empty image data exists in each channel of the initial image; if the fourth judgment result is yes, the image size corresponding to each channel of the initial image is proved to be the same, pixel fusion can be carried out on the initial image subsequently, and under the condition that the second judgment result, the third judgment result and the fourth judgment result are all yes, the initial image is determined to meet the preset condition, namely the first judgment result is yes, and the initial image is subjected to graying, so that a multi-channel image is obtained, wherein the graying method can be a component method, a maximum value method, an average value method and the like, and the graying method is not limited.
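A minimal sketch of this availability check on a vector of channel images, assuming OpenCV Mats; the function name is illustrative:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Availability check: more than one channel, every channel readable
    // (non-empty), and all channel images of identical size.
    bool meetsPresetCondition(const std::vector<cv::Mat>& channels)
    {
        if (channels.size() <= 1) {
            return false;                              // second judgment fails
        }
        for (const cv::Mat& ch : channels) {
            if (ch.empty()) {
                return false;                          // third judgment fails
            }
            if (ch.size() != channels.front().size()) {
                return false;                          // fourth judgment fails
            }
        }
        return true;                                   // preset condition met
    }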
In an embodiment, in the case that the fourth judgment result is no, the image sizes corresponding to the channels of the initial image may be adjusted to be the same according to a nearest-neighbor algorithm, a bilinear interpolation algorithm, a pyramid algorithm, or the like.
In the second embodiment of the present disclosure, the availability determination is first performed on the initial image, that is, if the number of channels of the initial image is greater than 1, the image corresponding to each channel is readable and the image corresponding to each channel is the same in size, the initial image is available, and the initial image may be grayed, so as to obtain an accurate and available multi-channel image.
In a third embodiment of the present disclosure, the adjusting the resolution of the multi-channel image in step S102 includes:
determining a resolution adjustment direction of the multichannel image; if the resolution adjustment direction is a positive direction, up-sampling the multichannel image to obtain an adjusted multichannel image; and if the resolution adjustment direction is a negative direction, downsampling the multichannel image to obtain an adjusted multichannel image.
In this embodiment, the resolution adjustment direction of the multi-channel image may be determined according to the actual image processing scene, where the resolution adjustment direction includes a positive direction and a negative direction, the positive direction is a direction from low resolution to high resolution, the negative direction is a direction from high resolution to low resolution, and if the resolution adjustment direction is the positive direction, the multi-channel image is up-sampled, so as to obtain an adjusted multi-channel image; and if the resolution adjustment direction is a negative direction, downsampling the multichannel image to obtain an adjusted multichannel image.
In an embodiment, up-sampling of the multichannel image may employ a nearest-neighbor algorithm, a bilinear interpolation algorithm, a transposed convolution, or the like; down-sampling of the multichannel image may employ a pooling layer with a stride greater than 1, a convolution layer with a stride greater than 1, or the like.
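A minimal sketch of the direction-based adjustment, using OpenCV's Gaussian image pyramid functions as one concrete choice; the fixed factor of 2 is an illustrative assumption:

    #include <opencv2/opencv.hpp>

    // Direction-based adjustment: pyrUp doubles each dimension (positive
    // direction), pyrDown halves it (negative direction).
    cv::Mat adjustByDirection(const cv::Mat& img, bool positiveDirection)
    {
        cv::Mat adjusted;
        if (positiveDirection) {
            cv::pyrUp(img, adjusted);    // low resolution -> high resolution
        } else {
            cv::pyrDown(img, adjusted);  // high resolution -> low resolution
        }
        return adjusted;
    }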
In a third embodiment of the present disclosure, upsampling the multi-channel image may increase the accuracy of image processing, and downsampling the multi-channel image may increase the image processing rate.
In a fourth embodiment of the present disclosure, performing a first denoising process on the adjusted multichannel image in step S103 includes: and performing first filtering processing on the adjusted multichannel image to obtain a denoised multichannel image.
In the present embodiment, the first filtering processing may be performed as follows: configuring the parameters of a filter as first parameters to obtain a first configured filter; and performing the first filtering processing on the adjusted multichannel image according to the first configured filter to obtain a denoised multichannel image. Specifically, the filter is a filter with selectable parameters, including but not limited to the image filtering method, the filtering kernel size, and other filter parameters; the parameters of the filter may be configured as the first parameters according to the actual image processing scene, and the first filtering processing is then performed on the adjusted multichannel image according to the first configured filter.
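A minimal sketch of such a configurable filter, assuming OpenCV; the parameter struct, its field names, and the particular set of filtering methods are illustrative assumptions, not part of the disclosure:

    #include <opencv2/opencv.hpp>

    // Hypothetical parameter bundle for a configurable filter.
    struct FilterParams {
        enum Method { MEDIAN, GAUSSIAN, BILATERAL } method = MEDIAN;
        int kernelSize = 3;
        double sigma = 1.5;   // used by the Gaussian and bilateral variants
    };

    cv::Mat applyConfiguredFilter(const cv::Mat& img, const FilterParams& p)
    {
        cv::Mat filtered;
        switch (p.method) {
            case FilterParams::MEDIAN:
                cv::medianBlur(img, filtered, p.kernelSize);
                break;
            case FilterParams::GAUSSIAN:
                cv::GaussianBlur(img, filtered, cv::Size(p.kernelSize, p.kernelSize), p.sigma);
                break;
            case FilterParams::BILATERAL:
                cv::bilateralFilter(img, filtered, p.kernelSize, p.sigma * 25.0, p.sigma * 25.0);
                break;
        }
        return filtered;
    }

Under this sketch, the first denoising step would call applyConfiguredFilter with the first parameters and the second denoising step with the second parameters.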
In the fourth embodiment of the present disclosure, the first filtering process is performed on the adjusted multi-channel image, so that noise mixed therein can be eliminated, thereby obtaining an accurate multi-channel image.
In a fifth embodiment of the present disclosure, performing pixel fusion on the denoised multichannel image in step S104 includes:
if the number of channels of the denoised multi-channel image is 2, calculating the gray difference value between pixel points at the corresponding positions of the two channel images of the denoised multi-channel image, and determining the gray difference value as a virtual gray value at the corresponding position of the virtual channel image; and if the channel number of the denoised multichannel image is greater than 2, carrying out pixel fusion on the denoised multichannel image according to the gray values of pixel points in a plurality of channel images of the denoised multichannel image to obtain a virtual channel image.
In this embodiment, if the number of channels of the denoised multichannel image is 2, the virtual gray value of the virtual channel image may be calculated according to the following Formula (1):

P_v(x, y) = | P_1(x, y) - P_2(x, y) |        Formula (1)

wherein P_1(x, y) is the gray value of the first pixel point in the first channel image, P_2(x, y) is the gray value of the second pixel point corresponding to the first pixel point in the second channel image, and P_v(x, y) is the virtual gray value of the pixel point corresponding to the positions of the first pixel point and the second pixel point in the virtual channel image. As shown in fig. 2, the number of channels of the multichannel image is 2, and each channel corresponds to one image, namely a first channel image and a second channel image; after pixel fusion is performed on the first channel image and the second channel image, a virtual channel image as shown in fig. 3 can be obtained.
In this embodiment, if the number of channels of the denoised multichannel image is greater than 2, two specified channel images are selected from the plurality of channel images of the denoised multichannel image, the gray difference value between pixel points at corresponding positions of the two specified channel images is calculated according to Formula (1), and the gray difference value is determined as the virtual gray value at the corresponding position of the virtual channel image. Specifically, when the two specified channel images are selected from the plurality of channel images of the denoised multichannel image, the first two channel images may be determined as the two specified channel images according to the input order of the plurality of channel images; or any two of the plurality of channel images may be determined as the two specified channel images, wherein the input order of the plurality of channel images can be changed so as to obtain different virtual channel images; or the specified channel images may be selected multiple times, the specified channel images being different each time, so as to obtain different virtual channel images.
In this embodiment, if the number of channels of the denoised multichannel image is greater than 2, after a virtual channel image is obtained from the two selected specified channel images, pixel fusion may be performed on the virtual channel image and another channel image to obtain a new virtual channel image, where the other channel images are the channel images among the plurality of channel images that have not yet undergone pixel fusion; the pixel fusion of the new virtual channel image with another channel image is repeated until all of the plurality of channel images have been fused. For example, if the plurality of channel images are channel image 1, channel image 2 and channel image 3, and the two selected specified channel images are channel image 1 and channel image 2, then channel image 1 and channel image 2 are fused at the pixel level according to Formula (1) to obtain a virtual channel image; the virtual channel image is then fused with channel image 3, which has not yet undergone pixel fusion, to obtain a new virtual channel image. At this point all of the plurality of channel images have been fused, so the new virtual channel image can be taken as the finally obtained virtual channel image.
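A minimal sketch of this iterative fusion over more than two channels, assuming the absolute gray difference of Formula (1) and OpenCV; the function name is illustrative:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Fuse the first two channel images, then keep fusing the running virtual
    // image with each remaining channel image.
    cv::Mat fuseChannelsIteratively(const std::vector<cv::Mat>& channels)
    {
        CV_Assert(channels.size() >= 2);
        cv::Mat virtualImg;
        cv::absdiff(channels[0], channels[1], virtualImg);
        for (size_t i = 2; i < channels.size(); ++i) {
            cv::Mat next;
            cv::absdiff(virtualImg, channels[i], next);
            virtualImg = next;
        }
        return virtualImg;
    }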
In this embodiment, if the number of channels of the denoised multichannel image is greater than 2, the gray difference values between the pixel points at corresponding positions of every pair of the plurality of channel images may alternatively be calculated, and the average of these gray difference values is determined as the virtual gray value at the corresponding position of the virtual channel image. For example, if the plurality of channel images are channel image 1, channel image 2 and channel image 3, then, since the plurality of channel images have the same size, for any coordinate, gray difference value 1 between the pixel at that coordinate in channel image 1 and the pixel at that coordinate in channel image 2 is calculated, gray difference value 2 between the pixel at that coordinate in channel image 2 and the pixel at that coordinate in channel image 3 is calculated, and gray difference value 3 between the pixel at that coordinate in channel image 1 and the pixel at that coordinate in channel image 3 is calculated; the average of gray difference value 1, gray difference value 2 and gray difference value 3 is taken as the virtual gray value of the pixel at that coordinate in the virtual channel image.
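A minimal sketch of this pairwise-average variant, assuming OpenCV and absolute gray differences; accumulating in 32-bit floats before averaging is an implementation assumption to avoid overflow:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Average of all pairwise gray differences at each pixel.
    cv::Mat fuseByPairwiseAverage(const std::vector<cv::Mat>& channels)
    {
        CV_Assert(channels.size() >= 2);
        cv::Mat sum = cv::Mat::zeros(channels.front().size(), CV_32F);
        int pairs = 0;
        for (size_t i = 0; i < channels.size(); ++i) {
            for (size_t j = i + 1; j < channels.size(); ++j) {
                cv::Mat diff, diff32;
                cv::absdiff(channels[i], channels[j], diff);
                diff.convertTo(diff32, CV_32F);
                sum += diff32;
                ++pairs;
            }
        }
        cv::Mat virtualImg;
        sum.convertTo(virtualImg, channels.front().type(), 1.0 / pairs);  // average, cast back
        return virtualImg;
    }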
In the fifth embodiment of the present disclosure, pixel fusion of a multi-channel image may significantly enhance the multi-channel image, and the obtained virtual channel image fuses features of a plurality of channel images, which not only can improve accuracy of image processing, but also can improve accuracy of results of computer vision tasks performed by using image processing results.
In a sixth embodiment of the present disclosure, restoring the resolution of the virtual channel image in step S105 includes:
if the resolution adjustment direction is a positive direction, downsampling the virtual channel image to obtain a restored virtual channel image; and if the resolution adjustment direction is a negative direction, up-sampling the virtual channel image to obtain a restored virtual channel image.
In this embodiment, the resolution of the virtual channel image needs to be restored to the resolution of the multichannel image before adjustment. Therefore, if the resolution adjustment direction was a positive direction, that is, an up-sampling mode was adopted when the resolution of the multichannel image was adjusted, a down-sampling mode is adopted when the resolution of the virtual channel image is restored; if the resolution adjustment direction was a negative direction, that is, a down-sampling mode was adopted when the resolution of the multichannel image was adjusted, an up-sampling mode is adopted when the resolution of the virtual channel image is restored. Specifically, restoring the resolution of the virtual channel image makes its resolution identical to that of the originally input multichannel image, so that the restored virtual channel image can reflect the characteristics of the originally input multichannel image more accurately, which further ensures the accuracy of the image processing result.
In a seventh embodiment of the present disclosure, step S106 performs a second denoising process on the restored virtual channel image to obtain an image processing result, including: and performing second filtering processing on the restored virtual channel image to obtain an image processing result.
In the present embodiment, the second filtering processing may be performed as follows: configuring the parameters of the filter as second parameters to obtain a second configured filter; and performing the second filtering processing on the restored virtual channel image according to the second configured filter to obtain an image processing result. Specifically, the filter is a filter with selectable parameters, including but not limited to the image filtering method, the filtering kernel size, and other filter parameters; the parameters of the filter may be configured as the second parameters according to the actual image processing scene, and the second filtering processing is then performed on the restored virtual channel image according to the second configured filter, where the second parameters may be the same as or different from the first parameters.
In an embodiment, after the image processing result is obtained, histogram equalization may be further performed on the image processing result, so as to enhance the contrast of the image processing result.
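A minimal sketch of this post-processing (second filtering followed by optional histogram equalization), assuming a single-channel 8-bit virtual channel image and OpenCV; the median filter and kernel size are illustrative choices:

    #include <opencv2/opencv.hpp>

    // Second filtering pass followed by histogram equalization.
    // equalizeHist requires a CV_8UC1 image, which is assumed here.
    cv::Mat postProcess(const cv::Mat& restoredVirtualImg)
    {
        cv::Mat filtered;
        cv::medianBlur(restoredVirtualImg, filtered, 3);   // second filtering
        cv::Mat equalized;
        cv::equalizeHist(filtered, equalized);             // contrast enhancement
        return equalized;
    }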
In the seventh embodiment of the present disclosure, the restored virtual channel image is subjected to the second filtering process, so that noise mixed therein can be further eliminated, thereby obtaining an accurate image processing result.
Fig. 4 shows a schematic structural view of an image processing apparatus according to an eighth embodiment of the present disclosure, and as shown in fig. 4, the image processing apparatus mainly includes:
an acquisition module 10 for acquiring a multichannel image; the resolution adjustment module 11 is configured to adjust the resolution of the multichannel image, so as to obtain an adjusted multichannel image; a first denoising module 12, configured to perform a first denoising process on the adjusted multichannel image, so as to obtain a denoised multichannel image; the pixel fusion module 13 is used for carrying out pixel fusion on the denoised multichannel image to obtain a virtual channel image; the resolution restoration module 14 is configured to restore the resolution of the virtual channel image, so as to obtain a restored virtual channel image; and the second denoising module 15 is used for performing second denoising processing on the restored virtual channel image to obtain an image processing result.
In one embodiment, the acquisition module 10 includes: the acquisition sub-module is used for acquiring an initial image; the judging sub-module is used for judging whether the initial image meets the preset condition or not to obtain a first judging result; and the graying sub-module is used for graying the initial image to obtain a multichannel image if the first judgment result is yes.
In an embodiment, the judging sub-module is further configured to: judging whether the number of channels of the initial image is larger than 1 or not to obtain a second judging result; judging whether the image corresponding to each channel of the initial image can be read or not, and obtaining a third judging result; judging whether the image size corresponding to each channel of the initial image is the same or not, and obtaining a fourth judging result; and if the second judgment result, the third judgment result and the fourth judgment result are all yes, determining that the initial image meets the preset condition.
In an embodiment, the resolution adjustment module 11 is further configured to: determine a resolution adjustment direction of the multichannel image; if the resolution adjustment direction is a positive direction, up-sample the multichannel image to obtain an adjusted multichannel image; and if the resolution adjustment direction is a negative direction, downsample the multichannel image to obtain an adjusted multichannel image.
In one embodiment, the first denoising module 12 is further configured to: and performing first filtering processing on the adjusted multichannel image to obtain a denoised multichannel image.
In one embodiment, the first denoising module 12 is further configured to: configuring parameters of the filter as first parameters to obtain a first configured filter; and performing first filtering processing on the adjusted multichannel image according to the first configured filter to obtain a denoised multichannel image.
In an embodiment, the pixel fusion module 13 is further configured to: if the number of channels of the denoised multi-channel image is 2, calculating the gray difference value between pixel points at the corresponding positions of the two channel images of the denoised multi-channel image, and determining the gray difference value as a virtual gray value at the corresponding position of the virtual channel image; and if the channel number of the denoised multichannel image is greater than 2, carrying out pixel fusion on the denoised multichannel image according to the gray values of pixel points in a plurality of channel images of the denoised multichannel image to obtain a virtual channel image.
In an embodiment, the pixel fusion module 13 is further configured to: selecting two specified channel images from the plurality of channel images; and calculating the gray level difference value between the pixel points at the corresponding positions of the two appointed channel images, and determining the gray level difference value as a virtual gray level value at the corresponding position of the virtual channel image.
In an embodiment, the pixel fusion module 13 is further configured to: determining the first two channel images as two specified channel images according to the input sequence of the plurality of channel images; or, any two channel images among the plurality of channel images are determined as two specified channel images.
In an embodiment, the pixel fusion module 13 is further configured to: performing pixel fusion on the virtual channel image and other channel images to obtain a new virtual channel image, wherein the other channel images are channel images which are not subjected to pixel fusion in the multiple channel images; and repeating the pixel fusion of the new virtual channel image and other channel images until the pixel fusion of a plurality of channel images is carried out.
In an embodiment, the pixel fusion module 13 is further configured to: calculating gray level difference values between pixel points at corresponding positions between every two of the plurality of channel images; and determining the average value of the gray level difference values as a virtual gray level value of the corresponding position of the virtual channel image.
In one embodiment, the resolution restoration module 14 is further configured to: if the resolution adjustment direction is a positive direction, downsample the virtual channel image to obtain a restored virtual channel image; and if the resolution adjustment direction is a negative direction, up-sample the virtual channel image to obtain a restored virtual channel image.
In an embodiment, the second denoising module 15 is further configured to: and performing second filtering processing on the restored virtual channel image to obtain an image processing result.
In an embodiment, the second denoising module 15 is further configured to: configuring parameters of the filter as second parameters to obtain a second configured filter; and carrying out second filtering processing on the restored virtual channel image according to the second configured filter to obtain an image processing result.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device and a readable storage medium.
Fig. 5 shows a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the respective methods and processes described above, for example, an image processing method. For example, in some embodiments, an image processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 800 via ROM 802 and/or communication unit 809. When a computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of one image processing method described above can be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform an image processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The foregoing is merely a specific embodiment of the disclosure, but the protection scope of the disclosure is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the disclosure, and it should be covered in the protection scope of the disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (15)

1. An image processing method, the method comprising:
acquiring a multichannel image;
adjusting the resolution of the multichannel image to obtain an adjusted multichannel image;
performing first denoising processing on the adjusted multichannel image to obtain a denoised multichannel image;
performing pixel fusion on the denoised multichannel image to obtain a virtual channel image;
restoring the resolution of the virtual channel image to obtain a restored virtual channel image;
performing second denoising processing on the restored virtual channel image to obtain an image processing result;
the pixel fusion of the denoised multichannel image comprises the following steps:
if the channel number of the denoised multi-channel image is 2, calculating the gray difference value between pixel points at corresponding positions of two channel images of the denoised multi-channel image, and determining the gray difference value as a virtual gray value at the corresponding position of the virtual channel image;
if the channel number of the denoised multi-channel image is greater than 2, carrying out pixel fusion on the denoised multi-channel image according to gray values of pixel points in a plurality of channel images of the denoised multi-channel image to obtain a virtual channel image;
The pixel fusion of the denoised multi-channel image according to the gray values of the pixel points in the plurality of channel images of the denoised multi-channel image comprises the following steps:
selecting two specified channel images from the plurality of channel images;
calculating gray level difference values between pixel points at corresponding positions of the two specified channel images, and determining the gray level difference values as virtual gray level values at corresponding positions of the virtual channel images;
wherein the selecting two specified channel images from the plurality of channel images includes:
determining the first two channel images as the two specified channel images according to the input sequence of the plurality of channel images; or,
determining any two channel images of the plurality of channel images as the two specified channel images;
after calculating the gray difference value between the pixel points at the corresponding positions of the two specified channel images and determining the gray difference value as the virtual gray value at the corresponding position of the virtual channel image, the method further comprises the steps of:
performing pixel fusion on the virtual channel image and other channel images to obtain a new virtual channel image, wherein the other channel images are channel images which are not subjected to pixel fusion in the channel images;
Repeating pixel fusion of the new virtual channel image and other channel images until the multiple channel images are subjected to pixel fusion;
the pixel fusion of the denoised multi-channel image according to the gray values of the pixel points in the plurality of channel images of the denoised multi-channel image comprises the following steps:
calculating gray level difference values between pixel points at corresponding positions between every two of the plurality of channel images;
and determining the average value of the gray level difference values as a virtual gray level value of the corresponding position of the virtual channel image.
2. The method of claim 1, wherein the acquiring the multi-channel image comprises:
acquiring an initial image;
judging whether the initial image meets a preset condition or not to obtain a first judging result;
and if the first judgment result is yes, graying the initial image to obtain the multichannel image.
3. The method of claim 2, wherein the determining whether the initial image satisfies a preset condition comprises:
judging whether the channel number of the initial image is greater than 1 or not to obtain a second judging result;
judging whether the image corresponding to each channel of the initial image can be read or not, and obtaining a third judging result;
Judging whether the image size corresponding to each channel of the initial image is the same or not, and obtaining a fourth judging result;
and if the second judgment result, the third judgment result and the fourth judgment result are all yes, determining that the initial image meets the preset condition.
4. The method of claim 1, wherein said adjusting the resolution of the multi-channel image comprises:
determining a resolution adjustment direction of the multi-channel image;
if the resolution adjustment direction is a positive direction, up-sampling is carried out on the multichannel image to obtain an adjusted multichannel image;
and if the resolution adjustment direction is a negative direction, downsampling the multichannel image to obtain an adjusted multichannel image.
5. The method of claim 1, wherein said first denoising of said adjusted multi-channel image comprises:
and performing first filtering processing on the adjusted multichannel image to obtain a denoised multichannel image.
6. The method of claim 5, wherein said performing a first filtering process on said adjusted multi-channel image comprises:
configuring parameters of the filter as first parameters to obtain a first configured filter;
And performing first filtering processing on the adjusted multichannel image according to the first configured filter to obtain a denoised multichannel image.
7. The method of claim 4, wherein the restoring the resolution of the virtual channel image comprises:
if the resolution adjustment direction is a positive direction, the virtual channel image is downsampled to obtain a restored virtual channel image;
and if the resolution adjustment direction is a negative direction, upsampling the virtual channel image to obtain a restored virtual channel image.
8. The method of claim 1, wherein performing a second denoising process on the restored virtual channel image to obtain an image processing result comprises:
and performing second filtering processing on the restored virtual channel image to obtain an image processing result.
9. The method of claim 8, wherein performing a second filtering process on the restored virtual channel image comprises:
configuring parameters of the filter as second parameters to obtain a second configured filter;
and carrying out second filtering processing on the restored virtual channel image according to the second configured filter to obtain an image processing result.
10. An image processing apparatus for performing the method according to any one of claims 1-9, the apparatus comprising:
the acquisition module is used for acquiring the multichannel image;
the resolution adjustment module is used for adjusting the resolution of the multichannel image to obtain an adjusted multichannel image;
the first denoising module is used for performing first denoising processing on the adjusted multichannel image to obtain a denoised multichannel image;
the pixel fusion module is used for carrying out pixel fusion on the denoised multichannel image to obtain a virtual channel image;
the resolution reduction module is used for reducing the resolution of the virtual channel image to obtain a reduced virtual channel image;
and the second denoising module is used for performing second denoising processing on the restored virtual channel image to obtain an image processing result.
11. The apparatus of claim 10, wherein the acquisition module comprises:
the acquisition sub-module is used for acquiring an initial image;
the judging sub-module is used for judging whether the initial image meets a preset condition or not to obtain a first judging result;
and the graying sub-module is used for graying the initial image to obtain the multichannel image if the first judgment result is yes.
12. The apparatus of claim 11, wherein the determination submodule is further configured to:
judging whether the channel number of the initial image is greater than 1 or not to obtain a second judging result;
judging whether the image corresponding to each channel of the initial image can be read or not, and obtaining a third judging result;
judging whether the image size corresponding to each channel of the initial image is the same or not, and obtaining a fourth judging result;
and if the second judgment result, the third judgment result and the fourth judgment result are all yes, determining that the initial image meets the preset condition.
13. The apparatus of claim 10, wherein the resolution adjustment module is further configured to:
determining a resolution adjustment direction of the multi-channel image;
if the resolution adjustment direction is a positive direction, up-sampling is carried out on the multichannel image to obtain an adjusted multichannel image;
and if the resolution adjustment direction is a negative direction, downsampling the multichannel image to obtain an adjusted multichannel image.
14. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
15. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-9.
CN202310617735.8A 2023-05-29 2023-05-29 Image processing method, device, equipment and storage medium Active CN116342434B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310617735.8A CN116342434B (en) 2023-05-29 2023-05-29 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310617735.8A CN116342434B (en) 2023-05-29 2023-05-29 Image processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116342434A CN116342434A (en) 2023-06-27
CN116342434B true CN116342434B (en) 2023-08-18

Family

ID=86876285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310617735.8A Active CN116342434B (en) 2023-05-29 2023-05-29 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116342434B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9497429B2 (en) * 2013-03-15 2016-11-15 Pelican Imaging Corporation Extended color processing on pelican array cameras

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108694705A (en) * 2018-07-05 2018-10-23 浙江大学 A kind of method multiple image registration and merge denoising
CN110009590A (en) * 2019-04-12 2019-07-12 北京理工大学 A kind of high-quality colour image demosaicing methods based on convolutional neural networks
CN110213458A (en) * 2019-05-31 2019-09-06 腾讯科技(深圳)有限公司 A kind of image processing method, device and storage medium
CN110381331A (en) * 2019-07-23 2019-10-25 深圳市道通智能航空技术有限公司 A kind of image processing method, device, equipment of taking photo by plane and storage medium
CN114360449A (en) * 2022-01-14 2022-04-15 苇创微电子(上海)有限公司 Multi-pixel fusion compression and decompression method for Mura calibration of display

Also Published As

Publication number Publication date
CN116342434A (en) 2023-06-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant