CN117196973A - Image processing method and device - Google Patents


Publication number
CN117196973A
Authority
CN
China
Prior art keywords
target point, value, gray value, image, target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311078141.0A
Other languages
Chinese (zh)
Inventor
陈仕长
袁艳阳
Current Assignee
Surgnova Healthcare Technologies (zhejiang) Co ltd
Sinosurgical Healthcare Technologies Beijing Co ltd
Original Assignee
Surgnova Healthcare Technologies (zhejiang) Co ltd
Sinosurgical Healthcare Technologies Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Surgnova Healthcare Technologies (zhejiang) Co ltd, Sinosurgical Healthcare Technologies Beijing Co ltd filed Critical Surgnova Healthcare Technologies (zhejiang) Co ltd
Priority to CN202311078141.0A priority Critical patent/CN117196973A/en
Publication of CN117196973A publication Critical patent/CN117196973A/en
Pending legal-status Critical Current

Abstract

The invention provides an image processing method and device. The image processing method comprises the following steps: in response to acquiring an image to be processed, obtaining a plurality of single-channel images corresponding to the image to be processed; for the non-edge pixel points in each single-channel image, determining first target points corresponding to each single-channel image according to the gray value of each non-edge pixel point; performing differential gradient calculation in a plurality of directions, including the horizontal and vertical directions, on the first target points in each single-channel image to obtain a calculation result corresponding to each first target point; acquiring a threshold value, and determining second target points among the first target points according to each calculation result and the threshold value; and correcting the gray value of each second target point respectively to obtain a plurality of corrected single-channel images.

Description

Image processing method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
An image sensor inevitably develops dead pixels during manufacturing and use. Dead pixels generally take two forms: bright spots and dark spots. A dead pixel is fixed in place in the photosensitive element and cannot sense changes in external brightness or color, so the image cannot truly reflect the actual scene. Therefore, in image processing, it is necessary to correct the dead pixels that are present.
There are generally two methods for correcting dead pixels. The first records the positions of the dead pixels in advance and, at run time, looks up those positions to correct them directly; this method requires a large amount of memory to store the position information and, moreover, cannot effectively correct dead pixels that arise during use. The second uses filtering, which requires no stored position information, but the detection method is complex and slow to compute, and different filtering modes cause some loss of image detail.
Disclosure of Invention
In view of the above, the present invention provides an image processing method and apparatus.
According to a first aspect of the present invention, there is provided an image processing method comprising: in response to acquiring an image to be processed, obtaining a plurality of single-channel images corresponding to the image to be processed; for the non-edge pixel points in each single-channel image, determining first target points corresponding to each single-channel image according to the gray value of each non-edge pixel point; performing differential gradient calculation in a plurality of directions, including the horizontal and vertical directions, on the first target points in each single-channel image to obtain a calculation result corresponding to each first target point; acquiring a threshold value, and determining second target points among the first target points according to each calculation result and the threshold value; and correcting the gray value of each second target point respectively to obtain a plurality of corrected single-channel images.
According to an embodiment of the present invention, further comprising: and generating a target image according to the plurality of corrected single-channel images.
According to an embodiment of the present invention, for the non-edge pixel points in each single-channel image, determining the first target points corresponding to each single-channel image according to the gray value of each non-edge pixel point includes: for any non-edge pixel point, acquiring the gray values of the adjacent pixel points around the non-edge pixel point to obtain the gray values of a plurality of adjacent pixel points; determining the maximum value and the minimum value among the gray values of the plurality of adjacent pixel points; acquiring the gray value of the non-edge pixel point; determining whether to take the non-edge pixel point as a first target point according to a comparison result of the maximum value with the gray value of the non-edge pixel point; and determining whether to take the non-edge pixel point as a first target point according to a comparison result of the minimum value with the gray value of the non-edge pixel point.
According to an embodiment of the present invention, performing differential gradient calculation in a plurality of directions, including the horizontal and vertical directions, on the first target points in each single-channel image to obtain a calculation result corresponding to each first target point includes: for any first target point, acquiring the gray values of the adjacent pixel points around the first target point, wherein these comprise a first gray value of the adjacent pixel point to the west of the first target point, a second gray value of the adjacent pixel point to the east, a third gray value of the adjacent pixel point to the north, a fourth gray value of the adjacent pixel point to the south, a fifth gray value of the adjacent pixel point to the northwest, a sixth gray value of the adjacent pixel point to the southeast, a seventh gray value of the adjacent pixel point to the northeast, and an eighth gray value of the adjacent pixel point to the southwest; acquiring the gray value of the first target point; obtaining a first calculated value of the differential gradient calculation from the gray value of the first target point, the first gray value and the second gray value; obtaining a second calculated value from the gray value of the first target point, the third gray value and the fourth gray value; obtaining a third calculated value from the gray value of the first target point, the fifth gray value and the sixth gray value; obtaining a fourth calculated value from the gray value of the first target point, the seventh gray value and the eighth gray value; and taking the first calculated value, the second calculated value, the third calculated value and the fourth calculated value as the calculation result corresponding to the first target point.
According to an embodiment of the present invention, acquiring the threshold value and determining the second target points among the first target points according to each calculation result and the threshold value includes: determining the maximum of the first, second, third and fourth calculated values; and determining the second target points among the first target points according to a comparison result of that maximum value with the threshold value.
According to an embodiment of the present invention, correcting the gray value of each second target point respectively to obtain a plurality of corrected single-channel images includes: for any second target point, determining a target value for correction corresponding to the second target point; and converting the gray value of the second target point into the target value to obtain a corrected single-channel image.
A second aspect of the present invention provides an image processing apparatus including: the first obtaining module is used for responding to the obtained image to be processed, and obtaining a plurality of single-channel images corresponding to the image to be processed according to the image to be processed; the first determining module is used for determining a first target point corresponding to each single-channel image according to the gray value of each non-edge pixel point aiming at the non-edge pixel point in each single-channel image; the second obtaining module is used for carrying out differential gradient calculation in a plurality of directions in the horizontal direction and the vertical direction on the first target point in each single-channel image to obtain a calculation result corresponding to each first target point; the second determining module is used for acquiring a threshold value and determining a second target point in the first target points according to each calculation result and the threshold value; and the third obtaining module is used for correcting the gray value of each second target point respectively to obtain a plurality of corrected single-channel images.
A third aspect of the present invention provides an electronic device comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the methods disclosed above.
A fourth aspect of the invention also provides a computer readable storage medium having stored thereon executable instructions which when executed by a processor cause the processor to perform the method disclosed above.
Drawings
The foregoing and other objects, features and advantages of the invention will be apparent from the following description of embodiments of the invention with reference to the accompanying drawings, in which:
fig. 1 schematically shows a flowchart of an image processing method according to an embodiment of the invention;
fig. 2 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present invention; and
fig. 3 schematically shows a block diagram of an electronic device adapted to implement an image processing method according to an embodiment of the invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. It should be understood that the description is only illustrative and is not intended to limit the scope of the invention. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where expressions like "at least one of A, B and C" are used, they should generally be interpreted according to the meaning commonly understood by those skilled in the art (e.g., "a system having at least one of A, B and C" shall include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
The image processing method of the disclosed embodiment is described in detail with reference to fig. 1.
Fig. 1 schematically shows a flowchart of an image processing method according to an embodiment of the invention. As shown in fig. 1, this embodiment includes operations S101 to S105.
In operation S101, in response to acquiring the image to be processed, a plurality of single-channel images corresponding to the image to be processed are obtained according to the image to be processed.
For example, the image to be processed is an image acquired by a medical device.
For example, the image to be processed is a color filter matrix image.
It will be appreciated that this step may extract single-channel images, such as four single-color channel images, from the image to be processed.
It will be appreciated that, for an image to be processed IM ∈ R^(M×N), each 2×2 grid consists of two G pixels (G1 and G2), one B pixel and one R pixel; if the arrangement mode is "BGGR", four single-color channel images can be obtained by subsampling the corresponding positions of each 2×2 grid.
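The subsampling described above can be sketched in code. The following is a minimal illustration, not the patent's exact formula; the use of NumPy and the function name `split_bggr` are assumptions:

```python
import numpy as np

def split_bggr(im):
    """Split a "BGGR" color-filter-array image IM in R^(M x N) into four
    single-color channel images by 2x2 subsampling.

    In a "BGGR" arrangement each 2x2 grid is:
        B  G1
        G2 R
    """
    b = im[0::2, 0::2]   # top-left of each 2x2 grid
    g1 = im[0::2, 1::2]  # top-right
    g2 = im[1::2, 0::2]  # bottom-left
    r = im[1::2, 1::2]   # bottom-right
    return b, g1, g2, r
```

Each returned channel has half the height and width of the mosaic, and the four channels together cover every pixel exactly once.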
In operation S102, for non-edge pixels in each single-channel image, a first target point corresponding to each single-channel image is determined according to a gray value of each non-edge pixel.
The target point, such as the first target point determined in this step, may be a dead point.
For example, for the non-edge pixel points in each single-channel image, determining the first target points corresponding to each single-channel image according to the gray value of each non-edge pixel point includes: for any non-edge pixel point, acquiring the gray values of the adjacent pixel points around the non-edge pixel point to obtain the gray values of a plurality of adjacent pixel points; determining the maximum value and the minimum value among the gray values of the plurality of adjacent pixel points; acquiring the gray value of the non-edge pixel point; determining whether to take the non-edge pixel point as a first target point according to a comparison result of the maximum value with the gray value of the non-edge pixel point; and determining whether to take the non-edge pixel point as a first target point according to a comparison result of the minimum value with the gray value of the non-edge pixel point.
For example, the four single-color channel images are each subjected to dead-pixel detection processing; the value used in this step may be the gray value. For an image I, take the 8 neighboring pixel points around the point I(i, j), namely I(i-1, j-1), I(i-1, j), I(i-1, j+1), I(i, j-1), I(i, j+1), I(i+1, j-1), I(i+1, j) and I(i+1, j+1), and obtain the corresponding gray values. It will be appreciated that this step does not process the image edge points. Comparing these 8 values yields a maximum and a minimum, which are then compared with the current point I(i, j) (i.e., the gray value of the non-edge pixel point): if I(i, j) is greater than the maximum or I(i, j) is less than the minimum, the current point I(i, j) can be taken as a first target point.
It will be appreciated that if I(i, j) satisfies neither condition, i.e. it is neither greater than the maximum nor less than the minimum, the current point is not considered a dead point and no correction is performed.
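The local-extremum screening of operation S102 can be sketched as follows (a minimal illustration; the function name `is_candidate` is an assumption):

```python
import numpy as np

def is_candidate(im, i, j):
    """Return True if the non-edge pixel I(i, j) is a first target point,
    i.e. its gray value lies strictly above the maximum or strictly below
    the minimum of its 8 neighbors."""
    neigh = [im[i-1, j-1], im[i-1, j], im[i-1, j+1],
             im[i, j-1],               im[i, j+1],
             im[i+1, j-1], im[i+1, j], im[i+1, j+1]]
    return im[i, j] > max(neigh) or im[i, j] < min(neigh)
```

Edge pixels are simply never passed to this check, matching the statement that edge points are not processed.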
In operation S103, differential gradient calculation in a plurality of directions, including the horizontal and vertical directions, is performed for the first target points in each single-channel image to obtain a calculation result corresponding to each first target point.
It can be understood that, in order to determine the dead pixel more accurately, the determined first target point needs to be further screened, and the second target point determined after screening is used as the dead pixel determined by the invention.
For example, the first target point is screened by using a calculation result obtained by differential gradient calculation.
For example, performing differential gradient calculation in a plurality of directions, including the horizontal and vertical directions, on the first target points in each single-channel image to obtain a calculation result corresponding to each first target point includes: for any first target point, acquiring the gray values of the adjacent pixel points around the first target point, wherein these comprise a first gray value of the adjacent pixel point to the west of the first target point, a second gray value of the adjacent pixel point to the east, a third gray value of the adjacent pixel point to the north, a fourth gray value of the adjacent pixel point to the south, a fifth gray value of the adjacent pixel point to the northwest, a sixth gray value of the adjacent pixel point to the southeast, a seventh gray value of the adjacent pixel point to the northeast, and an eighth gray value of the adjacent pixel point to the southwest; acquiring the gray value of the first target point; obtaining a first calculated value of the differential gradient calculation from the gray value of the first target point, the first gray value and the second gray value; obtaining a second calculated value from the gray value of the first target point, the third gray value and the fourth gray value; obtaining a third calculated value from the gray value of the first target point, the fifth gray value and the sixth gray value; obtaining a fourth calculated value from the gray value of the first target point, the seventh gray value and the eighth gray value; and taking the first calculated value, the second calculated value, the third calculated value and the fourth calculated value as the calculation result corresponding to the first target point.
For example, differential gradient calculation in a plurality of directions is performed for a first target point that satisfies the condition. Specifically, when I(i, j) is greater than the maximum value:
diffVer(i,j)=2*I(i,j)-I(i-1,j)-I(i+1,j)
diffHor(i,j)=2*I(i,j)-I(i,j-1)-I(i,j+1)
diffDiag1(i,j)=2*I(i,j)-I(i-1,j-1)-I(i+1,j+1)
diffDiag2(i,j)=2*I(i,j)-I(i-1,j+1)-I(i+1,j-1)
When I (I, j) is less than the minimum value,
diffVer(i,j)=I(i-1,j)+I(i+1,j)-2*I(i,j)
diffHor(i,j)=I(i,j-1)+I(i,j+1)-2*I(i,j)
diffDiag1(i,j)=I(i-1,j-1)+I(i+1,j+1)-2*I(i,j)
diffDiag2(i,j)=I(i-1,j+1)+I(i+1,j-1)-2*I(i,j)
For example, I(i, j) is the gray value of the first target point; I(i, j-1) is the first gray value, belonging to the adjacent pixel point to the west of the first target point; I(i, j+1) is the second gray value (east); I(i-1, j) is the third gray value (north); I(i+1, j) is the fourth gray value (south); I(i-1, j-1) is the fifth gray value (northwest); I(i+1, j+1) is the sixth gray value (southeast); I(i-1, j+1) is the seventh gray value (northeast); and I(i+1, j-1) is the eighth gray value (southwest). Accordingly, diffHor(i, j), diffVer(i, j), diffDiag1(i, j) and diffDiag2(i, j) are the first, second, third and fourth calculated values of the differential gradient calculation, respectively.
Further, the first calculated value, the second calculated value, the third calculated value and the fourth calculated value are taken as the calculation result corresponding to the first target point.
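The four second differences above can be sketched as follows for the bright-spot case (a minimal illustration; the function name is an assumption, and diffDiag2 is taken over the anti-diagonal neighbor pair):

```python
import numpy as np

def diff_gradients(im, i, j):
    """Second-difference gradients of a bright candidate I(i, j) in the
    vertical, horizontal and two diagonal directions (the form used when
    I(i, j) exceeds the neighborhood maximum; for a dark spot the signs
    are flipped)."""
    c = 2 * int(im[i, j])
    diff_ver = c - int(im[i-1, j]) - int(im[i+1, j])      # north/south pair
    diff_hor = c - int(im[i, j-1]) - int(im[i, j+1])      # west/east pair
    diff_diag1 = c - int(im[i-1, j-1]) - int(im[i+1, j+1])  # NW/SE pair
    diff_diag2 = c - int(im[i-1, j+1]) - int(im[i+1, j-1])  # NE/SW pair
    return diff_ver, diff_hor, diff_diag1, diff_diag2
```

The `int(...)` casts guard against wrap-around when the image uses an unsigned 8-bit dtype.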
In operation S104, a threshold is acquired, and a second target point of the first target points is determined according to each calculation result and the threshold.
It can be appreciated that the dead-pixel detection continues in this step: the second target points among the first target points are determined by means of a threshold.
For example, acquiring the threshold value and determining the second target points among the first target points according to each calculation result and the threshold value includes: determining the maximum of the first, second, third and fourth calculated values; and determining the second target points among the first target points according to a comparison result of that maximum value with the threshold value.
For example, the maximum value of the calculation results corresponding to the first target point is taken and compared with the threshold T.
Maximum value: diffMax = max(diffVer(i, j), diffHor(i, j), diffDiag1(i, j), diffDiag2(i, j))
The second target points among the first target points may be determined based on a comparison of this maximum value with the threshold T. For example, T = 20.
It can be understood that, since noise in an image has a local-extremum characteristic, reducing the threshold T not only corrects dead pixels but also provides a certain noise-reduction effect.
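The screening in operation S104 can be sketched as follows (a minimal illustration; the comparison direction, i.e. that the maximum must strictly exceed T, and the function name are assumptions):

```python
def is_dead_pixel(gradients, t=20):
    """A first target point is confirmed as a second target point (dead
    pixel) when the maximum of its four differential gradient values
    exceeds the threshold T. The strict ">" comparison is an assumption;
    the patent only states that a comparison with T is performed."""
    return max(gradients) > t
```

Lowering `t` makes the screen more aggressive, which is what gives the method its incidental noise-reduction effect.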
In operation S105, gray values of each second target point are corrected, respectively, to obtain a plurality of corrected single-channel images.
For example, the gray value of each second target point is corrected respectively to obtain a plurality of corrected single-channel images, including: for any second target point, determining a target value for correction corresponding to the second target point; and converting the gray value of the second target point into a target value to obtain a corrected single-channel image.
Specifically, for a pixel point determined to be a dead point (i.e., a second target point), the original gray value of the second target point is updated with a calculated target value, where the target value is the value used for correction.
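The correction step can be sketched as follows. The patent computes the target value with its own formula, which is not reproduced in this text; the median of the 8 neighbors used below is purely a stand-in assumption for illustration, as is the function name:

```python
import numpy as np

def correct_dead_pixel(im, i, j):
    """Replace the gray value of a confirmed dead pixel I(i, j) with a
    target value computed from its 8 neighbors. NOTE: the patent defines
    its own target-value formula (not reproduced here); the neighborhood
    median is a stand-in assumption for illustration."""
    neigh = np.array([im[i-1, j-1], im[i-1, j], im[i-1, j+1],
                      im[i, j-1],               im[i, j+1],
                      im[i+1, j-1], im[i+1, j], im[i+1, j+1]])
    im[i, j] = int(np.median(neigh))
    return im
```

A median is a common choice here because it discards the extreme value that made the pixel a candidate in the first place.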
Further, a target image may be generated from the plurality of corrected single channel images.
It will be appreciated that, in this step, the corrected single-channel images are recombined into a color filter matrix image IM ∈ R^(M×N).
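The recombination can be sketched as the inverse of the earlier "BGGR" subsampling (a minimal illustration; the function name is an assumption):

```python
import numpy as np

def merge_bggr(b, g1, g2, r):
    """Recombine four corrected single-color channel images into a
    "BGGR" color filter matrix image IM in R^(M x N)."""
    m, n = b.shape
    im = np.empty((2 * m, 2 * n), dtype=b.dtype)
    im[0::2, 0::2] = b   # top-left of each 2x2 grid
    im[0::2, 1::2] = g1  # top-right
    im[1::2, 0::2] = g2  # bottom-left
    im[1::2, 1::2] = r   # bottom-right
    return im
```

Splitting and merging are exact inverses, so a mosaic with no corrected pixels round-trips unchanged.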
The image processing method provided by the invention can correct target points and also reduces image noise. Specifically, the image is split by color channel to obtain a plurality of single-channel images; the target points of each single-channel image are then detected and corrected to obtain corrected single-channel images, which effectively suppresses target-point pixels and corrects their gray values. Meanwhile, determining the target points with a threshold value also reduces image noise. Because the method performs correction processing only on the determined target points, the speed of target-point correction is improved.
Fig. 2 schematically shows a block diagram of the structure of an image processing apparatus according to an embodiment of the present invention.
As shown in fig. 2, the image processing apparatus 200 of this embodiment includes a first obtaining module 210, a first determining module 220, a second obtaining module 230, a second determining module 240, and a third obtaining module 250.
A first obtaining module 210, configured to obtain, in response to obtaining an image to be processed, a plurality of single-channel images corresponding to the image to be processed according to the image to be processed; a first determining module 220, configured to determine, for non-edge pixels in each single-channel image, a first target point corresponding to each single-channel image according to a gray value of each non-edge pixel; the second obtaining module 230 is configured to perform differential gradient computation in multiple directions horizontally and vertically for the first target point in each single-channel image, so as to obtain a computation result corresponding to each first target point; a second determining module 240, configured to obtain a threshold value, and determine a second target point in the first target points according to each calculation result and the threshold value; and a third obtaining module 250, configured to correct the gray value of each second target point, to obtain a plurality of corrected single-channel images.
Any of the first obtaining module 210, the first determining module 220, the second obtaining module 230, the second determining module 240, and the third obtaining module 250 may be combined in one module to be implemented, or any of the modules may be split into a plurality of modules according to an embodiment of the present invention. Alternatively, at least some of the functionality of one or more of the modules may be combined with at least some of the functionality of other modules and implemented in one module. According to an embodiment of the present invention, at least one of the first obtaining module 210, the first determining module 220, the second obtaining module 230, the second determining module 240, and the third obtaining module 250 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or as hardware or firmware in any other reasonable way of integrating or packaging the circuits, or as any one of or a suitable combination of any of the three. Alternatively, at least one of the first obtaining module 210, the first determining module 220, the second obtaining module 230, the second determining module 240, and the third obtaining module 250 may be at least partially implemented as computer program modules, which when executed, may perform the respective functions.
Fig. 3 schematically shows a block diagram of an electronic device adapted to implement an image processing method according to an embodiment of the invention.
As shown in fig. 3, an electronic device 300 according to an embodiment of the present invention includes a processor 301 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage section 308 into a Random Access Memory (RAM) 303. Processor 301 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. Processor 301 may also include on-board memory for caching purposes. Processor 301 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the invention.
In the RAM 303, various programs and data required for the operation of the electronic apparatus 300 are stored. The processor 301, the ROM302, and the RAM 303 are connected to each other via a bus 304. The processor 301 performs various operations of the method flow according to the embodiment of the present invention by executing programs in the ROM302 and/or the RAM 303. Note that the program may be stored in one or more memories other than the ROM302 and the RAM 303. The processor 301 may also perform various operations of the method flow according to embodiments of the present invention by executing programs stored in the one or more memories.
According to an embodiment of the invention, the electronic device 300 may further comprise an input/output (I/O) interface 305, the input/output (I/O) interface 305 also being connected to the bus 304. The electronic device 300 may also include one or more of the following components connected to the I/O interface 305: an input section 306 including a keyboard, a mouse, and the like; an output portion 307 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, a speaker, and the like; a storage section 308 including a hard disk or the like; and a communication section 309 including a network interface card such as a LAN card, a modem, or the like. The communication section 309 performs communication processing via a network such as the internet. The drive 310 is also connected to the I/O interface 305 as needed. A removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed on the drive 310 as needed, so that a computer program read therefrom is installed into the storage section 308 as needed.
The present invention also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present invention.
According to embodiments of the present invention, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to an embodiment of the invention, the computer-readable storage medium may include ROM302 and/or RAM 303 and/or one or more memories other than ROM302 and RAM 303 described above.
Embodiments of the present invention also include a computer program product comprising a computer program that contains program code for performing the method shown in the flowcharts. When the computer program product runs on a computer system, the program code causes the computer system to carry out the image processing method provided by the embodiments of the present invention.
When the computer program is executed by the processor 301, the above-described functions defined in the system/apparatus of the embodiments of the present invention are performed. According to embodiments of the invention, the systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may be transmitted and distributed in the form of a signal over a network medium, downloaded and installed via the communication section 309, and/or installed from the removable medium 311. The computer program may include program code that may be transmitted using any appropriate network medium, including but not limited to wireless or wired media, or any suitable combination of the foregoing.
According to embodiments of the present invention, the program code of the computer programs provided by embodiments of the present invention may be written in any combination of one or more programming languages; in particular, such computer programs may be implemented in high-level procedural and/or object-oriented programming languages, and/or in assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, C, or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments and/or claims of the invention can be combined in a variety of ways, even if such combinations are not explicitly recited in the present invention. In particular, the features recited in the various embodiments and/or claims of the present invention can be combined and/or integrated in various ways without departing from the spirit and teachings of the invention. All such combinations fall within the scope of the invention.
The embodiments of the present invention are described above. However, these embodiments are for illustrative purposes only and are not intended to limit the scope of the present invention. Although the embodiments are described separately, this does not mean that the measures in the respective embodiments cannot be used advantageously in combination. The scope of the invention is defined by the appended claims and their equivalents. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the invention, and such alternatives and modifications are intended to fall within the scope of the invention.

Claims (10)

1. An image processing method, comprising:
in response to acquiring an image to be processed, obtaining a plurality of single-channel images corresponding to the image to be processed according to the image to be processed;
for non-edge pixel points in each single-channel image, determining a first target point corresponding to each single-channel image according to the gray value of each non-edge pixel point;
carrying out differential gradient calculation on the first target point in each single-channel image in a plurality of directions, including the horizontal and vertical directions, to obtain a calculation result corresponding to each first target point;
acquiring a threshold value, and determining a second target point in the first target points according to each calculation result and the threshold value; and
correcting the gray value of each second target point respectively to obtain a plurality of corrected single-channel images.
2. The method of claim 1, further comprising:
generating a target image according to the plurality of corrected single-channel images.
3. The method of claim 1, wherein the determining, for the non-edge pixels in each single-channel image, the first target point corresponding to each single-channel image from the gray value of each non-edge pixel comprises:
for any non-edge pixel point, acquiring gray values of adjacent pixel points around the non-edge pixel point to obtain gray values of a plurality of adjacent pixel points;
determining the maximum value and the minimum value in the gray values of the plurality of adjacent pixel points;
acquiring the gray value of the non-edge pixel point;
determining whether the non-edge pixel point is used as the first target point according to a comparison result of the maximum value and the gray value of the non-edge pixel point; and
determining whether the non-edge pixel point is used as the first target point according to a comparison result of the minimum value and the gray value of the non-edge pixel point.
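The first-target-point test of claim 3 can be sketched as follows. The function name and the use of the eight neighbours of a 3x3 window are illustrative assumptions, not part of the claim:

```python
import numpy as np

def is_first_target(channel, y, x):
    # Gray values of the eight adjacent pixel points around (y, x);
    # the centre value is removed from the flattened 3x3 window.
    window = channel[y - 1:y + 2, x - 1:x + 2].astype(int).flatten()
    neighbours = np.delete(window, 4)
    gray = int(channel[y, x])
    # The non-edge pixel point is taken as a first target point when its
    # gray value exceeds the neighbourhood maximum or falls below the minimum.
    return gray > neighbours.max() or gray < neighbours.min()
```

On this reading, an isolated bright or dark pixel is flagged, while pixels in smooth regions, or on edges passing through the window, are not.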
4. The method according to claim 1, wherein the carrying out differential gradient calculation on the first target point in each single-channel image in a plurality of directions to obtain a calculation result corresponding to each first target point comprises:
for any first target point, acquiring gray values of adjacent pixel points around the first target point, wherein the gray values of the adjacent pixel points around the first target point comprise a first gray value of the adjacent pixel point located to the west of the first target point, a second gray value of the adjacent pixel point located to the east of the first target point, a third gray value of the adjacent pixel point located to the north of the first target point, a fourth gray value of the adjacent pixel point located to the south of the first target point, a fifth gray value of the adjacent pixel point located to the northwest of the first target point, a sixth gray value of the adjacent pixel point located to the southeast of the first target point, a seventh gray value of the adjacent pixel point located to the northeast of the first target point, and an eighth gray value of the adjacent pixel point located to the southwest of the first target point;
acquiring a gray value of the first target point;
obtaining a first calculated value of differential gradient calculation according to the gray value of the first target point, the first gray value and the second gray value;
obtaining a second calculation value of differential gradient calculation according to the gray value of the first target point, the third gray value and the fourth gray value;
obtaining a third calculation value of differential gradient calculation according to the gray value of the first target point, the fifth gray value and the sixth gray value;
obtaining a fourth calculated value of differential gradient calculation according to the gray value of the first target point, the seventh gray value and the eighth gray value; and
taking the first calculated value, the second calculated value, the third calculated value and the fourth calculated value as the calculation result corresponding to the first target point.
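One plausible reading of the four calculated values of claim 4 is as second-order central differences; the claim only names which neighbours enter each value, so the exact arithmetic below (absolute difference between twice the centre gray value and each pair of opposite neighbours) is an interpretation:

```python
import numpy as np

def differential_gradients(channel, y, x):
    c = channel.astype(int)
    g = c[y, x]
    # First calculated value: west and east neighbours (horizontal direction).
    d1 = abs(2 * g - c[y, x - 1] - c[y, x + 1])
    # Second calculated value: north and south neighbours (vertical direction).
    d2 = abs(2 * g - c[y - 1, x] - c[y + 1, x])
    # Third calculated value: northwest and southeast neighbours (one diagonal).
    d3 = abs(2 * g - c[y - 1, x - 1] - c[y + 1, x + 1])
    # Fourth calculated value: northeast and southwest neighbours (other diagonal).
    d4 = abs(2 * g - c[y - 1, x + 1] - c[y + 1, x - 1])
    return d1, d2, d3, d4
```

Here "north" is taken as the row above the target point (row index decreasing), following the usual image coordinate convention.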
5. The method of claim 4, wherein the acquiring a threshold and determining a second one of the first target points based on each calculation and the threshold comprises:
determining a maximum value of the first, second, third, and fourth calculated values; and
determining a second target point among the first target points according to a comparison result of the maximum value and the threshold value.
6. The method according to claim 1, wherein the correcting the gray value of each second target point respectively to obtain a plurality of corrected single-channel images comprises:
for any second target point, determining a target value corresponding to the second target point for correction; and
converting the gray value of the second target point into the target value to obtain a corrected single-channel image.
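Claims 5 and 6 can be sketched together as a single pass over one channel. Two points are assumptions, since the claims leave them open: the comparison direction (here, maximum gradient strictly greater than the threshold) and the choice of target value (here, the neighbourhood median):

```python
import numpy as np

def correct_channel(channel, threshold):
    c = channel.astype(int)
    out = c.copy()
    rows, cols = c.shape
    for y in range(1, rows - 1):          # non-edge pixel points only
        for x in range(1, cols - 1):
            neighbours = np.delete(c[y - 1:y + 2, x - 1:x + 2].flatten(), 4)
            g = c[y, x]
            # First target point: gray value outside the neighbourhood range.
            if not (g > neighbours.max() or g < neighbours.min()):
                continue
            # Differential gradients in the four directions of claim 4.
            grads = (abs(2 * g - c[y, x - 1] - c[y, x + 1]),
                     abs(2 * g - c[y - 1, x] - c[y + 1, x]),
                     abs(2 * g - c[y - 1, x - 1] - c[y + 1, x + 1]),
                     abs(2 * g - c[y - 1, x + 1] - c[y + 1, x - 1]))
            # Second target point: maximum calculated value exceeds the threshold.
            if max(grads) > threshold:
                # Convert the gray value into the target value (median here).
                out[y, x] = int(np.median(neighbours))
    return out.astype(channel.dtype)
```

Running this on each single-channel image and recombining the channels would yield the target image of claim 2.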
7. An image processing apparatus comprising:
the first obtaining module is used for responding to the obtained image to be processed, and obtaining a plurality of single-channel images corresponding to the image to be processed according to the image to be processed;
the first determining module is used for determining a first target point corresponding to each single-channel image according to the gray value of each non-edge pixel point aiming at the non-edge pixel point in each single-channel image;
the second obtaining module is used for carrying out differential gradient calculation in a plurality of directions in the horizontal direction and the vertical direction on the first target point in each single-channel image to obtain a calculation result corresponding to each first target point;
the second determining module is used for acquiring a threshold value and determining a second target point in the first target points according to each calculation result and the threshold value; and
the third obtaining module is used for correcting the gray value of each second target point respectively to obtain a plurality of corrected single-channel images.
8. The apparatus of claim 7, further comprising:
and the generating module is used for generating a target image according to the plurality of corrected single-channel images.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-6.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method according to any of claims 1-6.
CN202311078141.0A 2023-08-24 2023-08-24 Image processing method and device Pending CN117196973A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311078141.0A CN117196973A (en) 2023-08-24 2023-08-24 Image processing method and device


Publications (1)

Publication Number Publication Date
CN117196973A true CN117196973A (en) 2023-12-08

Family

ID=89000823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311078141.0A Pending CN117196973A (en) 2023-08-24 2023-08-24 Image processing method and device

Country Status (1)

Country Link
CN (1) CN117196973A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination