US20210295480A1 - Systems and methods for image processing - Google Patents

Systems and methods for image processing

Info

Publication number
US20210295480A1
Authority
US
United States
Prior art keywords
image
pixel
decomposed
luminance
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/342,695
Inventor
Changjiu YANG
Xiaotao JIANG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201610456890.6A (CN106097286B)
Priority claimed from CN201710021180.5A (CN106780400B)
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to US17/342,695
Assigned to ZHEJIANG DAHUA TECHNOLOGY CO., LTD. Assignment of assignors interest (see document for details). Assignors: JIANG, Xiaotao; YANG, Changjiu
Publication of US20210295480A1
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/10: Image enhancement or restoration by non-spatial domain filtering
    • G06T5/007: Dynamic range modification (now G06T5/90)
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G06T2207/10024: Color image
    • G06T2207/20: Special algorithmic details
    • G06T2207/20024: Filtering details
    • G06T2207/20048: Transform domain processing
    • G06T2207/20064: Wavelet transform [DWT]
    • G06T2207/20076: Probabilistic image processing

Definitions

  • the present disclosure relates to the technical field of image processing, and more particularly, to systems and methods for image processing.
  • a display device usually may reproduce only a portion of the range of luminance found in nature.
  • An image displayed on a display device may appear overexposed in a bright area and underexposed in a dark area, rendering it difficult to distinguish some details in the image.
  • Systems and methods for image processing that can generate images with more detail and stronger contrast are therefore in high demand.
  • the system may be further directed to: for a specific pixel in a first decomposed image, identify a frequency of the specific pixel; determine a gain of the specific pixel based on the frequency of the specific pixel and a frequency adjustment threshold associated with the first decomposed image; and adjust the frequency of the specific pixel based on the gain of the specific pixel.
  • the system may be further directed to: identify, from the plurality of the first decomposed images, a certain number of pixels each of which is located at a position corresponding to the specific pixel of the first decomposed images; determine the frequencies of the identified certain number of pixels, a pixel of the certain number of pixels having a frequency; filter the certain number of frequencies to obtain a plurality of filtered frequencies; determine an average filtered frequency associated with the position based on the plurality of filtered frequencies; determine the gain associated with the position based on the average filtered frequency; and assign the gain associated with the position to the specific pixel.
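  • The claims above do not fix a particular filter or gain formula. The following Python/NumPy sketch is only a minimal illustration under stated assumptions: the frequencies collected at one position are "filtered" by taking their magnitudes, averaged, and converted to a gain by a simple threshold ratio; the function names and the gain rule are hypothetical, not taken from the patent.

```python
import numpy as np

def gain_at_position(first_decomposed_images, row, col, threshold, eps=1e-6):
    """Illustrative gain for one pixel position (assumed rule, not the patent's).

    first_decomposed_images: list of 2-D arrays of identical shape.
    threshold: frequency adjustment threshold associated with these images.
    """
    # Frequencies of the pixels located at the same position in each decomposed image.
    freqs = np.array([img[row, col] for img in first_decomposed_images], dtype=float)

    # "Filter" the frequencies (here, simply their magnitudes) and average them.
    avg_filtered = np.abs(freqs).mean()

    # Assumed gain rule: amplify positions below the threshold, attenuate those above it.
    return threshold / (avg_filtered + eps)

def adjust_pixel_frequency(decomposed_image, row, col, gain):
    """Adjust the frequency of the specific pixel using the gain assigned to its position."""
    decomposed_image[row, col] *= gain
```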
  • the system may be further directed to: for an original pixel of a plurality of original pixels in the original image, identify the position of the original pixel in the original image; determine a first luminance of a pixel in the first luminance image, the pixel in the first luminance image being at the same position as the original pixel in the original image; determine a second luminance of a pixel in the second luminance image, the pixel in the second luminance image being at the same position as the original pixel in the original image; and determine a final pixel associated with the original pixel based on the first luminance and the second luminance; and generate the final image of the original image based on the determined final pixels associated with the plurality of original pixels.
  • the system may be further directed to: perform a reverse operation of the decomposition that provides the plurality of first decomposed images.
  • a system may include at least one computer-readable storage medium including a set of instructions for processing an image, and at least one processor in communication with the computer-readable storage medium.
  • the system may be directed to: identify a target region in the image, the target region having a plurality of gray levels; determine a plurality of statistical probabilities relating to the plurality of gray levels, a statistical probability relating to a gray level of the plurality of gray levels; determine a mapping curve of the target region based on the plurality of statistical probabilities and a plurality of predetermined curves associated with the plurality of gray levels; identify at least one pixel that needs to be processed in the target region; for a pixel of the at least one pixel that needs to be processed, determine the value of the pixel based on the mapping curve of the target region; and generate a processed image based on the determined at least one value of the at least one pixel.
  • the system may be further directed to: determine a plurality of optimal coefficients relating to the plurality of statistical probabilities, an optimal coefficient being associated with a statistical probability of the plurality of statistical probabilities relating to the target region; determine a plurality of optimal curves, an optimal curve being associated with an optimal coefficient of the plurality of optimal coefficients; and determine the mapping curve of the target region based on the plurality of optimal curves and the plurality of predetermined curves.
  • the system may be further directed to: for a statistical probability of the plurality of statistical probabilities, determine an initial coefficient associated with the statistical probability based on the gray level associated with the statistical probability; identify a central pixel of the target region; and determine an optimal coefficient corresponding to the initial coefficient based on the central pixel of the target region.
  • the system may be further directed to: for a gray level of the plurality of gray levels in the target region, determine a sub mapping curve associated with the gray level based on an optimal curve associated with the gray level and a predetermined curve associated with the gray level; and determine the mapping curve of the target region based on the plurality of the sub mapping curves associated with the plurality of gray levels.
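  • The summary above leaves the predetermined curves, optimal curves, and weighting unspecified. As a rough sketch only, the Python/NumPy code below assumes the predetermined curves are a small bank of gamma curves, takes the statistical probabilities from the target region's gray-level histogram, and blends per-gray-level sub mapping curves into a single mapping curve weighted by those probabilities; none of these specific choices is asserted by the patent.

```python
import numpy as np

def region_mapping_curve(target_region, gammas=(0.6, 1.0, 1.6), levels=256):
    """Build an illustrative mapping curve for one target region (assumed design)."""
    # Statistical probability of each gray level in the target region.
    hist, _ = np.histogram(target_region, bins=levels, range=(0, levels))
    probs = hist / max(hist.sum(), 1)

    # Hypothetical predetermined curves: a small bank of gamma curves.
    x = np.arange(levels) / (levels - 1)
    curve_bank = np.stack([(x ** g) * (levels - 1) for g in gammas])

    # Sub mapping curve for each gray level: pick a bank curve according to how
    # dark or bright that gray level is (an assumed heuristic).
    bank_index = np.minimum(np.arange(levels) * len(gammas) // levels, len(gammas) - 1)
    sub_curves = curve_bank[bank_index]                  # shape: (levels, levels)

    # Mapping curve of the region: probability-weighted blend of the sub curves.
    return (probs[:, None] * sub_curves).sum(axis=0)

def apply_mapping(target_region, mapping):
    """Determine the value of each pixel to be processed from the mapping curve."""
    return np.clip(mapping[target_region], 0, 255).astype(np.uint8)
```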
  • the image may include at least one target region.
  • a method for processing an original image may comprise: obtaining a first luminance image of the original image; decomposing the first luminance image of the original image to provide a plurality of first decomposed images; adjusting pixel frequencies in at least some of the plurality of first decomposed images to generate a plurality of second decomposed images; generating a second luminance image of the original image based on the plurality of second decomposed images; and determining a final image of the original image based on the first luminance image, the second luminance image, and the original image.
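  • Taken together, these steps form a decompose, adjust, reconstruct, and recombine pipeline. The sketch below strings them together in Python with PyWavelets; the BT.601 luma formula, the single-level DWT, the uniform gain on the high-frequency sub-bands, and the luminance-ratio recombination are simplifying assumptions chosen for illustration rather than the patent's prescribed implementation.

```python
import numpy as np
import pywt

def enhance(original_rgb, gain=1.5, wavelet="haar", eps=1e-6):
    """Illustrative end-to-end pipeline over an H x W x 3 RGB image."""
    # First luminance image of the original image (assumed BT.601 weights).
    rgb = original_rgb.astype(float)
    first_lum = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

    # Decompose the first luminance image into first decomposed images.
    low, (horiz, vert, diag) = pywt.dwt2(first_lum, wavelet)

    # Adjust pixel frequencies in the high-frequency decomposed images.
    second = (low, (horiz * gain, vert * gain, diag * gain))

    # Reconstruct the second luminance image from the second decomposed images.
    second_lum = pywt.idwt2(second, wavelet)
    second_lum = second_lum[: first_lum.shape[0], : first_lum.shape[1]]

    # Determine the final image from the two luminance images and the original.
    ratio = second_lum / (first_lum + eps)
    final = np.clip(rgb * ratio[..., None], 0, 255)
    return final.astype(np.uint8)
```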
  • the adjusting pixel frequencies in at least some of the plurality of first decomposed images may comprise: for a specific pixel in a first decomposed image, identifying a frequency of the specific pixel; determining a gain of the specific pixel based on the frequency of the specific pixel and a frequency adjustment threshold associated with the first decomposed image; and adjusting the frequency of the specific pixel based on the gain of the specific pixel.
  • the determining a gain of the specific pixel may comprise: identifying from the plurality of the first decomposed images, a certain number of pixels that are located at a position corresponding to the specific pixel of the first decomposed images; determining the frequencies of the identified certain number of pixels, a pixel of the certain number of pixels having a frequency; filtering the certain number of frequencies to obtain a plurality of filtered frequencies; determining an average filtered frequency associated with the position based on the plurality of filtered frequencies; determining the gain associated with the position based on the average filtered frequency; and assigning the gain associated with the position to the specific pixel.
  • the determining the final image of the original image based on the first luminance image, the second luminance image, and the original image may comprise: for an original pixel of a plurality of original pixels in the original image, identifying the position of the original pixel in the original image; determining a first luminance of a pixel in the first luminance image, the pixel in the first luminance image being at the same position as the original pixel in the original image; determining a second luminance of a pixel in the second luminance image, the pixel in the second luminance image being at the same position as the original pixel in the original image; and determining a final pixel associated with the original pixel based on the first luminance and the second luminance; and generating the final image of the original image based on the determined final pixels associated with the plurality of original pixels.
  • the obtaining the plurality of first decomposed images may comprise performing one or more orders of decomposition on the first luminance image.
  • the one or more orders of decomposition may be performed based on a wavelet transformation.
  • the reconstructing a second luminance image of the original image based on the plurality of second decomposed images may comprise performing a reverse operation of the decomposition that provides the plurality of first decomposed images.
  • a method for processing an image may comprise: identifying a target region in the image, the target region having a plurality of gray levels; determining a plurality of statistical probabilities relating to the plurality of gray levels, a statistical probability relating to a gray level of the plurality of gray levels; determining a mapping curve of the target region based on the plurality of statistical probabilities and a plurality of predetermined curves associated with the plurality of gray levels; identifying at least one pixel that needs to be processed in the target region; for a pixel of the at least one pixel that needs to be processed, determining the value of the pixel based on the mapping curve of the target region; and generating a processed image based on the determined at least one value of the at least one pixel.
  • the determining the mapping curve of the target region may comprise: determining a plurality of optimal coefficients relating to the plurality of statistical probabilities, an optimal coefficient being associated with a statistical probability of the plurality of statistical probabilities relating to the target region; determining a plurality of optimal curves, an optimal curve being associated with an optimal coefficient of the plurality of optimal coefficients; and determining the mapping curve of the target region based on the plurality of optimal curves and the plurality of predetermined curves.
  • the determining the plurality of optimal coefficients may comprise: for a statistical probability of the plurality of statistical probabilities, determining an initial coefficient associated with the statistical probability based on the gray level associated with the statistical probability; identifying a central pixel of the target region; and determining an optimal coefficient corresponding to the initial coefficient based on the central pixel of the target region.
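  • The exact relationship between the gray level, the central pixel, and the coefficient is not spelled out here; FIGS. 17-18, 20, and 21 later refer to optimal gamma curves and Gaussian weight curves. Purely as an assumed illustration in that spirit, the sketch below derives an initial gamma coefficient from a gray level's statistical probability and relaxes it toward a neutral value depending on the gray level's distance from the central pixel via a Gaussian weight; every constant is hypothetical.

```python
import numpy as np

def optimal_coefficient(gray_level, probability, central_pixel, sigma=40.0):
    """Illustrative (not the patent's) refinement of an initial coefficient."""
    # Assumed initial coefficient: darker, more probable gray levels get a stronger
    # brightening gamma (values below 1.0 brighten when used as an exponent).
    initial = 1.0 - 0.5 * probability * (1.0 - gray_level / 255.0)

    # Assumed Gaussian weight: gray levels close to the central pixel keep the
    # initial coefficient; distant levels relax toward a neutral gamma of 1.0.
    weight = np.exp(-((gray_level - central_pixel) ** 2) / (2.0 * sigma ** 2))
    return weight * initial + (1.0 - weight) * 1.0
```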
  • the determining the mapping curve of the target region based on the plurality of optimal curves and the plurality of predetermined curves may comprise: for a gray level of the plurality of gray levels in the target region, determining a sub mapping curve associated with the gray level based on an optimal curve associated with the gray level and a predetermined curve associated with the gray level; and determining the mapping curve of the target region based on the plurality of the sub mapping curves associated with the plurality of gray levels.
  • the image may include at least one target region.
  • a non-transitory computer readable medium may comprise at least one set of instructions for processing an original image, wherein when executed by at least one processor, the at least one set of instructions may direct the at least one processor to perform acts of: obtaining a first luminance image of the original image; decomposing the first luminance image of the original image to provide a plurality of first decomposed images; adjusting pixel frequencies in at least some of the plurality of first decomposed images to generate a plurality of second decomposed images; generating a second luminance image of the original image based on the plurality of second decomposed images; and determining a final image of the original image based on the first luminance image, the second luminance image, and the original image.
  • a non-transitory computer readable medium may comprise at least one set of instructions for processing an original image, wherein when executed by at least one processor, the at least one set of instructions may direct the at least one processor to perform acts of: identifying a target region in the image, the target region having a plurality of gray levels; determining a plurality of statistical probabilities relating to the plurality of gray levels, a statistical probability relating to a gray level of the plurality of gray levels; determining a mapping curve of the target region based on the plurality of statistical probabilities and a plurality of predetermined curves associated with the plurality of gray levels; identifying at least one pixel that needs to be processed in the target region; for a pixel of the at least one pixel that needs to be processed, determining the value of the pixel based on the mapping curve of the target region; and generating a processed image based on the determined at least one value of the at least one pixel.
  • a system may include: at least one acquisition module configured to obtain a first luminance image of the original image; at least one decomposition module configured to decompose the first luminance image of the original image to provide a plurality of first decomposed images; at least one frequency adjustment module configured to adjust pixel frequencies in at least some of the plurality of first decomposed images to generate a plurality of second decomposed images; at least one reconstruction module configured to generate a second luminance image of the original image based on the plurality of second decomposed images; and at least one determination module configured to determine a final image of the original image based on the first luminance image, the second luminance image, and the original image.
  • a system may include at least one acquisition module and at least one determination module.
  • the at least one acquisition module may be configured to: identify a target region in the image, the target region having a plurality of gray levels; and determine a plurality of statistical probabilities relating to the plurality of gray levels, a statistical probability relating to a gray level of the plurality of gray levels.
  • the at least one determination module may be configured to: determine a mapping curve of the target region based on the plurality of statistical probabilities and a plurality of predetermined curves associated with the plurality of gray levels; identify at least one pixel that needs to be processed in the target region; for a pixel of the at least one pixel that needs to be processed, determine the value of the pixel based on the mapping curve of the target region; and generate a processed image based on the determined at least one value of the at least one pixel.
  • FIG. 1 is a block diagram illustrating an exemplary system for image processing according to some embodiments of the present disclosure
  • FIG. 2A is a schematic diagram illustrating an exemplary computing device according to some embodiments of the present disclosure
  • FIG. 2B is a schematic diagram illustrating an exemplary mobile device according to some embodiments of the present disclosure.
  • FIG. 3 is a block diagram illustrating an exemplary image processing device according to some embodiments of the present disclosure
  • FIG. 4 is a block diagram illustrating an exemplary frequency adjustment module according to some embodiments of the present disclosure
  • FIG. 5 is a block diagram illustrating an exemplary determination module according to some embodiments of the present disclosure.
  • FIG. 6 is a flowchart illustrating an exemplary process for processing an original image according to some embodiments of the present disclosure
  • FIG. 7 is a flowchart illustrating an exemplary process for determining a decomposed image according to some embodiments of the present disclosure
  • FIG. 8 is a flowchart illustrating an exemplary process for determining a gain of a pixel in a decomposed image according to some embodiments of the present disclosure
  • FIG. 9 is a flowchart illustrating an exemplary process for obtaining a filtered frequency associated with a position according to some embodiments of the present disclosure
  • FIGS. 10-I through 10-III are schematic diagrams illustrating Nth-order decomposed images of high frequency according to some embodiments of the present disclosure
  • FIG. 11 is a flowchart illustrating an exemplary process for determining a final image of the original image according to some embodiments of the present disclosure
  • FIG. 12 is a flowchart illustrating an exemplary process for processing an image according to some embodiments of the present disclosure
  • FIG. 13 is a flowchart illustrating an exemplary process for processing at least one target region having a part outside an image according to some embodiments of the present disclosure
  • FIGS. 14-I through 14-V are schematic diagrams illustrating patching at least one edge of an image according to some embodiments of the present disclosure
  • FIGS. 15-I through 15-V are schematic diagrams illustrating patching at least one edge of an image according to some embodiments of the present disclosure
  • FIG. 16 is a flowchart illustrating an exemplary process for determining a mapping curve of a target region according to some embodiments of the present disclosure
  • FIG. 17 is a flowchart illustrating an exemplary process for determining an optimal coefficient of a target region according to some embodiments of the present disclosure
  • FIG. 18 is a flowchart illustrating an exemplary process for determining an initial optimal coefficient of a target region according to some embodiments of the present disclosure
  • FIG. 19 is a flowchart illustrating an exemplary process for determining a mapping curve of a target region according to some embodiments of the present disclosure
  • FIG. 20 is a schematic diagram illustrating exemplary optimal gamma curves according to some embodiments of the present disclosure.
  • FIG. 21 is a schematic diagram illustrating exemplary Gaussian weight curves according to some embodiments of the present disclosure.
  • FIG. 22 is a schematic diagram illustrating an exemplary mapping curve according to some embodiments of the present disclosure.
  • FIG. 23 is a flowchart illustrating an exemplary process for processing an image according to some embodiments of the present disclosure.
  • module refers to logic embodied in hardware or firmware, or to a collection of software instructions.
  • a module, a unit, or a block described herein may be implemented as software and/or hardware and may be stored in any type of non-transitory computer-readable medium or other storage device.
  • a software module/unit/block may be compiled and linked into an executable program. It will be appreciated that software modules can be callable from other modules/units/blocks or from themselves, and/or may be invoked in response to detected events or interrupts.
  • Software modules/units/blocks configured for execution on computing devices (e.g., processor 220 as illustrated in FIG. 2A) may be provided on a computer readable medium, such as a compact disc, a digital video disc, a flash drive, a magnetic disc, or any other tangible medium, or as a digital download (and can be originally stored in a compressed or installable format that needs installation, decompression, or decryption prior to execution).
  • Such software code may be stored, partially or fully, on a storage device of the executing computing device, for execution by the computing device.
  • Software instructions may be embedded in firmware, such as an EPROM.
  • hardware modules/units/blocks may be composed of connected logic components, such as gates and flip-flops, and/or of programmable units, such as programmable gate arrays or processors.
  • modules/units/blocks or computing device functionality described herein may be implemented as software modules/units/blocks, but may be represented in hardware or firmware.
  • the modules/units/blocks described herein refer to logical modules/units/blocks that may be combined with other modules/units/blocks or divided into sub-modules/sub-units/sub-blocks despite their physical organization or storage.
  • An aspect of the present disclosure relates to systems and methods for image processing.
  • a luminance image of an original image may be first decomposed and then reconstructed after adjusting the frequencies of pixels.
  • the luminance of pixels in a processed image may be determined based on the luminance of corresponding pixels in the decomposed images and the reconstructed images.
  • a target region of an image may be processed by determining the value of pixels in the target region based on statistical probabilities associated with gray levels in the target region and a plurality of predetermined coefficients.
  • the processed image may display more details and exhibit stronger contrast than the original image.
  • FIG. 1 is a block diagram illustrating an exemplary image processing system 100 according to some embodiments of the present disclosure.
  • the image processing system 100 may include an imaging device 110 , an image processing device 120 , a terminal 130 , a storage 140 , a network 150 , and a base station 160 .
  • the imaging device 110 may be configured to capture one or more images.
  • the one or more images may be images about a static or moving object.
  • the image may include a still picture, a motion picture, a video (offline or live streaming), a frame of a video, or a combination thereof.
  • the imaging device 110 may be any suitable device that is capable of capturing an image.
  • the imaging device 110 may be and/or include a camera, a sensor, a video recorder, or the like, or any combination thereof.
  • the imaging device 110 may be and/or include any suitable type of camera, such as a fixed camera, a fixed dome camera, a covert camera, a Pan-Tilt-Zoom (PTZ) camera, a thermal camera, etc.
  • the imaging device 110 may be and/or include any suitable type of sensor, such as an audio sensor, a light sensor, a wind speed sensor, or the like, or a combination thereof.
  • the light sensor (e.g., an infrared detector) may be configured to obtain a light signal.
  • the audio sensor may be configured to obtain an audio signal.
  • the audio signal and the light signal may be configured to provide reference information for processing images captured by the imaging device 110 .
  • Data obtained by the imaging device 110 may be stored in the storage 140, and/or sent to the image processing device 120 or the terminal(s) 130 via the network 150.
  • the image processing device 120 may be configured to process an image.
  • the image processing device 120 may be configured to, based on the image, identify luminance of the image, decompose a first luminance image of the image, reconstruct a second luminance image of the image, determine a final image associated with the image, or the like, or a combination thereof.
  • the image processing device 120 may be configured to identify a target region of the image, determine a statistical probability associated with the gray level of the target region, determine a mapping curve of the target region, determine a value of a pixel in the image, or the like, or any combination thereof.
  • the image that the imaging processing device 120 processes may be captured by the imaging device 110 or retrieved from another source (e.g., the storage 140 , the terminal(s) 130 , etc.).
  • the image processing device 120 may further be configured to generate a control signal.
  • the control signal may be generated based on a feature of an object being imaged, luminance of a scene when an image of the scene is being acquired, displayed luminance of an image, or the like, or any combination thereof.
  • the control signal may be used to control the imaging device 110 .
  • the image processing device 120 may generate a control signal to instruct the imaging device 110 (e.g., a camera) to track an object and obtain an image of the object.
  • the image processing device 120 may be any suitable device that is capable of processing an image.
  • the image processing device 120 may include a high-performance computer specializing in image processing or transaction processing, a personal computer, a portable device, a server, a microprocessor, an integrated chip, a digital signal processor (DSP), a tablet computer, a personal digital assistant (PDA), a mobile phone, or the like, or a combination thereof.
  • the image processing device 120 may be implemented on a computing device 200 A shown in FIG. 2A and/or a mobile device 200 B shown in FIG. 2B .
  • the image processing device 120 may include one or more processing engines (e.g., single-core processing engine(s) or multi-core processor(s)).
  • the image processing device 120 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction-set computer (RISC), a microprocessor, or the like, or any combination thereof.
  • the terminal 130 may be connected to or communicate with the image processing device 120 .
  • the terminal 130 may allow one or more operators (e.g., a law enforcement officer, etc.) to control the production and/or display of the data (e.g., the image captured by the imaging device 110 ) on a display.
  • the terminal 130 may include an input device, an output device, a control panel, a display (not shown in FIG. 1 ), or the like, or a combination thereof.
  • Exemplary input devices may include a keyboard, a touch screen, a mouse, a remote controller, a wearable device, or the like, or a combination thereof.
  • the input device may include alphanumeric and other keys that may be inputted via a keyboard, a touch screen (e.g., with haptics or tactile feedback, etc.), a speech input, an eye tracking input, a brain monitoring system, or any other comparable input mechanism.
  • the input information received through the input device may be communicated to the image processing device 120 via the network 150 for further processing.
  • Exemplary input devices may further include a cursor control device, such as a mouse, a trackball, or cursor direction keys to communicate direction information and command selections to, for example, the image processing device 120 and to control cursor movement on the display or another display device.
  • a display may be configured to display the data received (e.g., the image captured by the imaging device 110 ).
  • the information may include data before and/or after data processing, a request for input or parameter relating to image acquisition and/or processing, or the like, or any combination thereof.
  • Exemplary displays may include a liquid crystal display (LCD), a light emitting diode (LED)-based display, a flat panel display or curved screen (or television), a cathode ray tube (CRT), or the like, or a combination thereof.
  • the storage 140 may store data and/or instructions.
  • the data may include an image (e.g., an image obtained by the imaging device 110), relevant information of the image, etc.
  • the storage 140 may store data and/or instructions that the image processing device 120 may execute or use to perform the exemplary methods described in the present disclosure.
  • the storage 140 may include a mass storage, a removable storage, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof.
  • Exemplary mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc.
  • Exemplary removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc.
  • Exemplary volatile read-and-write memory may include a random access memory (RAM).
  • RAM may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), a zero-capacitor RAM (Z-RAM), etc.
  • Exemplary ROM may include a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), a digital versatile disk ROM, etc.
  • the storage 140 may be implemented on a cloud platform.
  • the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
  • the network 150 may facilitate communications between various components of the image processing system 100 .
  • the network 150 may be a single network, or a combination of various networks.
  • the network 150 may include a cable network, a wireline network, an optical fiber network, a telecommunications network, an intranet, an Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public telephone switched network (PSTN), a Bluetooth™ network, a ZigBee™ network, a near field communication (NFC) network, a global system for mobile communications (GSM) network, a code-division multiple access (CDMA) network, a time-division multiple access (TDMA) network, a general packet radio service (GPRS) network, an enhanced data rate for GSM evolution (EDGE) network, a wideband code division multiple access (WCDMA) network, a high speed downlink packet access (HSDPA) network, or the like, or any combination thereof.
  • the descriptions above in relation to the image processing system 100 are provided for the purposes of illustration, and are not intended to limit the scope of the present disclosure.
  • various variations and modifications may be conducted under the guidance of the present disclosure.
  • those variations and modifications do not depart from the scope of the present disclosure.
  • part or all of the image data generated by the imaging device 110 may be processed by the terminal 130 .
  • the imaging device 110 and the image processing device 120 may be implemented in one single device configured to perform the functions of the imaging device 110 and the image processing device 120 described in this disclosure.
  • the terminal 130 and the storage 140 may be combined with, or be part of, the image processing device 120 as a single device. Similar modifications should fall within the scope of the present disclosure.
  • FIG. 2A is an architecture illustrating an exemplary computing device 200 A on which a specialized system incorporating the present teaching may be implemented.
  • a specialized system incorporating the present teaching has a functional block diagram illustration of a hardware platform that may include user interface elements.
  • a computing device 200 A may be a general-purpose computer or a special purpose computer.
  • the computing device 200 A may be used to implement any component of image processing as described herein.
  • the image processing device 120 may be implemented on a computer such as the computing device 200A, via its hardware, software program, firmware, or a combination thereof. Although only one such computer is shown, for convenience, the computer functions relating to image processing as described herein may be implemented in a distributed fashion on a number of similar platforms, to distribute the processing load.
  • the computing device 200A may include communication (COM) ports 250 connected to and from a network connected thereto to facilitate data communications.
  • the computing device 200A may also include a processor 220, in the form of one or more processors, for executing program instructions stored in a storage device (e.g., a disk 270, a read only memory (ROM) 230, or a random-access memory (RAM) 240), and when executing the program instructions, the processor 220 may be configured to cause the computing device 200A to perform the functions thereof described herein.
  • the exemplary computer platform may include an internal communication bus 210 , program storage, and data storage of different forms, e.g., a disk 270 , a ROM 230 , or a RAM 240 , for various data files to be processed and/or communicated by the computer, as well as possibly program instructions to be executed by the processor 220 .
  • the computing device 200 A may also include an I/O component 260 , supporting input/output flows between the computer and other components therein such as user interface elements (not shown in FIG. 2A ).
  • the computing device 200 A may also receive programming and data via network communications.
  • aspects of the methods of the image processing and/or other processes, as described herein, may be embodied in programming.
  • Program aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of executable code and/or associated data that is carried on or embodied in a type of machine readable medium.
  • Tangible non-transitory “storage” type media include any or all of the memory or other storage for the computers, processors, or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide storage at any time for the software programming.
  • All or portions of the software may at times be communicated through a network such as the Internet or various other telecommunication networks.
  • Such communications may enable loading of the software from one computer or processor into another, for example, from a management server or host computer of a scheduling system into the hardware platform(s) of a computing environment or other system implementing a computing environment or similar functionalities in connection with image processing.
  • another type of media that may bear the software elements includes optical, electrical and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links.
  • the physical elements that carry such waves, such as wired or wireless links, optical links or the like, also may be considered as media bearing the software.
  • terms such as computer or machine “readable medium” refer to any medium that participates in providing instructions to a processor for execution.
  • a non-transitory machine-readable medium may take many forms, including but not limited to, a tangible storage medium, a carrier wave medium or physical transmission medium.
  • Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer(s), or the like, which may be used to implement the system or any of its components shown in the drawings.
  • Volatile storage media may include dynamic memory, such as a main memory of such a computer platform.
  • Tangible transmission media may include coaxial cables; copper wire and fiber optics, including the wires that form a bus within a computer system.
  • Carrier-wave transmission media may take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications.
  • Computer-readable media may include, for example: a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a PROM and EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer may read programming code and/or data. Many of these forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to a physical processor for execution.
  • FIG. 2B is a schematic diagram illustrating an exemplary mobile device 200 B according to some embodiments of the present disclosure.
  • the mobile device 200 B may illustrate hardware and/or software components of the terminal 130 .
  • the mobile device 200 B may include a communication platform 295 , a display 255 , a graphic processing unit (GPU) 266 , a central processing unit (CPU) 265 , an I/O 260 , a memory 275 , and a storage 290 .
  • any other suitable component including but not limited to a system bus or a controller (not shown), may also be included in the mobile device 200 B.
  • a mobile operating system 280 (e.g., iOS™, Android™, Windows Phone™, etc.) and one or more applications 285 may be loaded into the memory 275 from the storage 290 in order to be executed by the CPU 265.
  • the applications 285 may include a browser or any other suitable mobile apps for receiving and rendering information relating to image processing or other information from, for example, the image processing device 120 .
  • User interactions with the information stream may be achieved via the I/O 260 and provided to the image processing device 120 and/or other components of the image processing system 100 via the network 150.
  • computer hardware platforms may be used as the hardware platform(s) for one or more of the elements described herein.
  • a computer with user interface elements may be used to implement a personal computer (PC) or any other type of work station or terminal device.
  • a computer may also act as a server if appropriately programmed.
  • FIG. 3 is a block diagram illustrating an exemplary image processing device 120 according to some embodiments of the present disclosure.
  • the image processing device 120 may include an acquisition module 310 , a decomposition module 320 , a frequency adjustment module 330 , a reconstruction module 340 , and a determination module 350 .
  • the image processing device 120 may include more or fewer components without loss of generality. For example, two of the modules may be combined into a single module, or one of the modules may be divided into two or more modules. As another example, one or more of the modules may reside on different computing devices (e.g., a desktop, a laptop, a mobile device, a tablet computer, a wearable computing device, or the like, or a combination thereof). As still another example, the image processing device 120 may be implemented on the computing device 200A shown in FIG. 2A or the mobile device 200B shown in FIG. 2B.
  • a module may be implemented in many different ways and as hardware, software or in different combinations of hardware and software.
  • all or part of a module may be implemented as processing circuitry that may include part or all of an instruction processor, such as a central processing unit (CPU), a microcontroller, a microprocessor; or an application specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a controller, other electronic components; or as circuitry that includes discrete logic or other circuit components, including an analog circuit component, a digital circuit component or both; or any combination thereof.
  • the circuitry may include discrete interconnected hardware components or may be combined on a single integrated circuit die, distributed among multiple integrated circuit dies, or implemented in a Multiple Chip Module (MCM) of multiple integrated circuit dies in a common package, as examples.
  • the acquisition module 310 may be configured to acquire data of an original image.
  • the original image may include a picture in a video sequence including signals sampled in both the horizontal and vertical directions.
  • Exemplary data of the original image may include red green blue (RGB) data, Bayer data, Luma and Chroma (YUV) data, raw image format (RAW) data, joint photographic experts group (JPEG) data, or the like, or any combination thereof.
  • the acquisition module 310 may be connected to an I/O module (not shown in FIG. 3 ) to acquire data.
  • the acquisition module 310 may also acquire data relating to the original image.
  • the data relating to the original image may include a plurality of gray levels in the original image, a plurality of statistical probabilities associated with each gray level, etc.
  • the decomposition module 320 may be configured to decompose a luminance image of the original image.
  • Exemplary techniques for decomposing an image may include a wavelet transform technique, a Gauss-pyramid technique, a Laplacian-pyramid technique, a contrast-pyramid technique, a wavelet-pyramid technique, or the like, or any combination thereof.
  • the decomposition module 320 may decompose a luminance image by a wavelet transform decomposition to generate at least one decomposed image of high frequency and a decomposed image of low frequency.
  • “high frequency” and “low frequency” are relative terms that refer to how rapidly the brightness varies across the image.
  • the image of high frequency may display details, and the image of low frequency may display outlines.
  • Exemplary wavelet transform decomposition algorithms may include stationary wavelet transform (SWT), orthogonal wavelet transform (OWT), fast wavelet transform (FWT), discrete wavelet transform (DWT), or the like, or any combination thereof.
  • the decomposition module 320 may decompose an image into a decomposed image of low frequency and three decomposed images of high frequency in a horizontal direction, in a vertical direction, and in a diagonal direction by the SWT algorithm.
  • the decomposed image of low frequency may include components showing outlines of an object in the decomposed image.
  • Three decomposed images of high frequency may include components showing details of the object in the decomposed image in different directions, respectively.
  • the decomposition module 320 may decompose the first luminance image by multiple orders of decompositions.
  • in an N-th order decomposition, an N-th order decomposed image of low frequency and one or more N-th order decomposed images of high frequency may be obtained.
  • the N-th order decomposed image of low frequency may be further decomposed to generate an (N+1)-th order decomposed image of low frequency and one or more (N+1)-th order decomposed images of high frequency.
  • the decomposition module 320 may perform a three-order wavelet transform decomposition (e.g., SWT) to generate a first-order decomposition set, a second-order decomposition set, and a third-order decomposition set.
  • the decomposed image of low frequency from the first-order decomposition may be decomposed further to generate a second-order decomposed image of low frequency and one or more second-order decomposed images of high frequency.
  • the second-order decomposed image of low frequency may be decomposed further to generate a third-order decomposed image of low frequency and one or more third-order decomposed images of high frequency.
  • the first-order decomposition set and the second-order decomposition set may each include three decomposed images of high frequency.
  • the third-order decomposition set may include a decomposed image of low frequency and three decomposed images of high frequency.
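  • As a rough sketch of this multi-order decomposition, the Python code below repeatedly applies a one-level stationary wavelet transform (PyWavelets' swt2), each time decomposing only the low-frequency image produced by the previous order; a full multi-level SWT would additionally upsample the filters between orders, which is omitted here for brevity. The three-order depth and the Haar wavelet mirror the example above but are otherwise arbitrary.

```python
import numpy as np
import pywt

def multi_order_swt(luminance, orders=3, wavelet="haar"):
    """Decompose a luminance image into `orders` decomposition sets.

    Each set holds three high-frequency images (horizontal, vertical, diagonal);
    only the last set keeps the remaining low-frequency image, as in the example
    above. swt2 needs even image sides, so the input is cropped for simplicity.
    """
    h, w = luminance.shape
    low = np.asarray(luminance[: h - h % 2, : w - w % 2], dtype=float)

    sets = []
    for order in range(1, orders + 1):
        [(low_next, (horiz, vert, diag))] = pywt.swt2(low, wavelet, level=1)
        decomposition_set = {"order": order, "high": (horiz, vert, diag)}
        if order == orders:
            decomposition_set["low"] = low_next
        sets.append(decomposition_set)
        low = low_next  # only the low-frequency image is decomposed further
    return sets
```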
  • the frequency adjustment module 330 may be configured to adjust frequencies of pixels in a first image to generate a second image (or referred to as a frequency-adjusted image).
  • the first decomposed image may be received from the decomposition module 320 .
  • the frequency adjustment module 330 may adjust frequencies of pixels in the first image based on the frequency adjustment threshold corresponding to the first image. For example, the frequency adjustment module 330 may reduce the frequency of a pixel that is greater than a frequency adjustment threshold in the first image. As another example, the frequency adjustment module 330 may increase the frequency of a pixel that is less than the frequency adjustment threshold in the first image.
  • the frequency adjustment threshold corresponding to the first image may be provided by a user via, for example, the I/O component 260.
  • the frequency adjustment threshold corresponding to the first decomposed image may be determined by the system 100 based on a default setting of the system 100 , an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc.
  • An empirical setting from prior imaging processing may be derived by, for example, machine learning.
  • An image subject to frequency adjustment may be a decomposed image obtained by, for example, decomposition as described elsewhere in the present disclosure.
  • the frequency adjustment module 330 may adjust frequencies of pixels in a first decomposed image to generate a second decomposed image (or referred to as a frequency-adjusted decomposed image).
  • the frequency adjustment module 330 may adjust frequencies of pixels in an image based on one or more adjustment factors corresponding to the image. For example, the frequency adjustment module 330 may adjust a frequency of a pixel in a first decomposed image of high frequency based on an adjustment factor corresponding to the first decomposed image of high frequency. In some embodiments, the adjustment factors corresponding to decomposed images of different orders may be the same or different.
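  • A minimal sketch of such an adjustment, under one assumed rule: coefficients whose magnitude exceeds the frequency adjustment threshold are attenuated and those below it are amplified, with an adjustment factor per decomposition order controlling the strength. The formula, threshold, and factor are illustrative only; the patent does not commit to them.

```python
import numpy as np

def adjust_high_frequency(decomposed, threshold, factor=0.5, eps=1e-6):
    """Return a frequency-adjusted copy of one high-frequency decomposed image.

    decomposed: 2-D array of wavelet coefficients (pixel "frequencies").
    threshold:  frequency adjustment threshold for this decomposed image.
    factor:     adjustment factor for this decomposition order (0 means no change).
    """
    magnitude = np.abs(decomposed) + eps
    # Gain > 1 below the threshold (boost weak detail), < 1 above it (tame strong detail).
    gain = (threshold / magnitude) ** factor
    return decomposed * gain
```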
  • the reconstruction module 340 may be configured to reconstruct a frequency-adjusted luminance image based on a plurality of frequency-adjusted decomposed images.
  • the frequency-adjusted decomposed images may be generated from the frequency adjustment module 330 .
  • the frequency of a pixel in a frequency-adjusted decomposed image may be determined based on the frequency of the pixel located at the same position in the corresponding decomposed image before the frequency adjustment.
  • the reconstruction may be a reverse process of the decomposition.
  • the reconstruction may include multiple orders of reconstructions.
  • the number of orders in the reconstruction may be the same as the number of orders in the decomposition.
  • the reconstruction module 340 may reconstruct a frequency-adjusted luminance image by a three-order wavelet transform when the frequency-adjusted decomposed images are generated based on a three-order wavelet transform decomposition.
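  • The sketch below reverses the illustrative multi-order decomposition shown earlier, under the same assumptions (one-level swt2 per order, inverted with PyWavelets' iswt2): it starts from the deepest order's low-frequency image and folds the frequency-adjusted high-frequency images back in, order by order.

```python
import pywt

def multi_order_reconstruct(sets, wavelet="haar"):
    """Reverse the illustrative multi-order decomposition sketched earlier.

    `sets` is the list produced by multi_order_swt() (possibly after frequency
    adjustment of the high-frequency images); the deepest set carries the
    low-frequency image, and each order is inverted in the opposite direction.
    """
    low = sets[-1]["low"]
    for decomposition_set in reversed(sets):
        horiz, vert, diag = decomposition_set["high"]
        low = pywt.iswt2([(low, (horiz, vert, diag))], wavelet)
    return low  # the reconstructed (frequency-adjusted) luminance image
```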
  • the determination module 350 may be configured to determine a final image of the original image based on the luminance image without the frequency adjustment, the frequency-adjusted luminance image, and the original image.
  • the determination module 350 may determine a final image of the original image by, for each original pixel, identifying its position in the original image, determining the luminance of the pixels at the same position in the luminance image before the frequency adjustment and in the frequency-adjusted luminance image, and adjusting the value of the original pixel accordingly; the final image may then be obtained from the adjusted pixels and their respective positions.
  • the determination module 350 may be configured to determine data relating to processing an image. For example, the determination module 350 may determine a mapping curve of a target region in the image, an optimal coefficient of the target region, an optimal curve of the target region, the value of a pixel in the target region, or the like, or any combination thereof.
  • the term “optimal” e.g., optimal coefficient, optimal curve
  • the image processing device 120 may also include a storage module (not shown in FIG. 3 ) for storing data relating to the image processing.
  • FIG. 4 is a block diagram illustrating an exemplary frequency adjustment module 330 according to some embodiments of the present disclosure.
  • the frequency adjustment module 330 may include a gain determination unit 410 and a frequency adjustment unit 420 .
  • the frequency adjustment module 330 may include more or fewer components without loss of generality. For example, two of the units may be combined into a single unit, or one of the units may be divided into two or more units. As another example, one or more of the units may reside on different computing devices (e.g., a desktop, a laptop, a mobile device, a tablet computer, a wearable computing device, or the like, or a combination thereof). However, those variations and modifications do not depart from the protecting scope of the present disclosure.
  • the gain determination unit 410 may be configured to determine a gain of a pixel in an image that is to be adjusted (or referred to as an unadjusted image). The gain determination unit 410 may determine the gain of the pixel based on the frequency of the pixel in the unadjusted image and a frequency adjustment threshold associated with the unadjusted image.
  • the frequency adjustment unit 420 may be configured to adjust frequencies of pixels in an unadjusted image.
  • the frequency adjustment unit 420 may adjust the frequency of a pixel in the unadjusted image based on a gain corresponding to the pixel in the unadjusted image.
  • the frequency of the corresponding pixel in the frequency-adjusted image may be determined based on the original frequency of the pixel and the gain of the original pixel in the unadjusted image.
  • FIG. 5 is a block diagram illustrating an example of a determination module 350 according to some embodiments of the present disclosure.
  • the determination module 350 may include a position determination unit 510 , a pixel value adjustment unit 520 , and a construction unit 530 .
  • the determination module 350 may include more or fewer components without loss of generality. For example, two of the units may be combined into a single unit, or one of the units may be divided into two or more units. In one implementation, one or more of the units may reside on different computing devices (e.g., a desktop, a laptop, a mobile device, a tablet computer, a wearable computing device, or the like, or a combination thereof). However, those variations and modifications do not depart from the protecting scope of the present disclosure.
  • the position determination unit 510 may be configured to determine the position of a pixel in an image.
  • the position determination unit 510 may determine the position of an original pixel in an original image, the position of a pixel in a luminance image before frequency adjustment, the position of a pixel in a frequency-adjusted luminance image, etc.
  • the position of the pixel in the image may be represented by coordinates (e.g., orthogonal coordinates, spherical coordinates, polar coordinates, etc.) in, for example, a two-dimensional coordinate system, a three-dimensional coordinate system, etc.
  • the position of an original pixel may be represented as (x, y) as illustrated in FIG. 10 .
  • an original image may refer to an image acquired by an imaging device (e.g., the imaging device 110 illustrated in FIG. 1 ).
  • An original image may be stored or retrieved from a storage device (e.g., the storage 140 illustrated in FIG. 1 , the disk 270 illustrated in FIG. 2A , or an external source, such as a hard disk, a wireless terminal, or the like, or any combination thereof, that is connected to or otherwise communicates with the system 100 ), or from an imaging device by which the original image is acquired.
  • an unadjusted image may refer to an image that is obtained by performing, on an original image, one or more operations (e.g., decomposition, transform, etc.) except for frequency adjustment.
  • a frequency-adjusted image may refer to an image that is obtained by performing, on an original image or an unadjusted image, frequency adjustment.
  • the pixel value adjustment unit 520 may be configured to adjust a pixel value of an original pixel in an original image. For example, the pixel value adjustment unit 520 may adjust the pixel value of the original pixel in the original image to determine a pixel value of the final pixel (e.g., the pixel in the final image that is the same as the original pixel except for the pixel value adjustment) in a final image of the original image.
  • the construction unit 530 may be configured to construct a final image based on a plurality of final pixels.
  • FIG. 6 is a flowchart illustrating an exemplary process 600 for processing an original image according to some embodiments of the present disclosure.
  • one or more operations of process 600 may be implemented in the image processing system 100 illustrated in FIG. 1 .
  • the process 600 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120 ).
  • a first luminance image (or referred to as an unadjusted luminance image) of an original image may be obtained.
  • the first luminance image of the original image may be obtained by determining luminance of each pixel in the original image.
  • the original image may be captured by one or more sensors and/or imaging devices.
  • the original image may be retrieved from a storage device (e.g., the storage 140 illustrated in FIG. 1 , the disk 270 illustrated in FIG. 2A ) or received from an imaging device (e.g., the imaging device 110 illustrated in FIG. 1 ).
  • the original image may be retrieved from an external source, such as a hard disk, a wireless terminal, or the like, or any combination thereof, that is connected to or otherwise communicates with the system 100 .
  • the original image may include red green blue (RGB) data, Bayer data, Luma and Chroma (YUV) data, raw image format (RAW) data, joint photographic experts group (JPEG) data, or the like, or any combination thereof.
  • the first luminance image of the original image may be decomposed to obtain a plurality of first decomposed images (or referred to as unadjusted decomposed images).
  • the first decomposed images may include frequency information of the original image.
  • the first decomposed images may include information relating to the frequencies of one or more pixels in the original image.
  • the first decomposed image of high frequency may include information relating to details of an object in the first decomposed image.
  • the first decomposed image of low frequency may include information relating to outlines of the object in the first decomposed image.
  • the decomposition of the first luminance image of the original image may be performed by the decomposition module 320 as illustrated in FIG. 3 .
  • the decomposition of the first luminance image of the original image may be performed based on different decomposition techniques.
  • Exemplary decomposition techniques may include pyramid decomposition, wavelet decomposition, Laplace transform, filtering, or the like, or any combination thereof.
  • Exemplary pyramid decomposition techniques may include a Gauss-pyramid, a Laplacian-pyramid, a contrast-pyramid, a wavelet-pyramid, or the like, or any combination thereof.
  • Exemplary wavelet decomposition techniques may include a stationary wavelet transform (SWT), a fast wavelet transform (FWT), a discrete wavelet transform (DWT), an orthogonal wavelet transform (OWT), or the like, or any combination thereof.
  • Exemplary filtering techniques may include low-pass filtering, feather-edge filtering, etc.
  • the wavelet decomposition may be described as an example.
  • the wavelet decomposition of the first luminance image may generate at least one decomposed image of high frequency and a decomposed image of low frequency.
  • One or more orders of decomposition may be performed based on the wavelet decomposition.
  • For example, a three-order wavelet decomposition (e.g., an SWT) may generate a first-order, a second-order, and a third-order wavelet decomposition set.
  • the first-order wavelet decomposition set may include three first (or unadjusted) decomposed images of high frequency.
  • the second-order wavelet decomposition set may include three first (or unadjusted) decomposed images of high frequency.
  • the third-order wavelet decomposition set may include a first (or unadjusted) decomposed image of low frequency and three first (or unadjusted) decomposed images of high frequency.
  • the first (or unadjusted) decomposed images of high frequency in the same order may include frequency information in a horizontal direction, in a vertical direction, and in a diagonal direction, respectively.
  • the direction may be determined based on a two-dimensional coordinate system.
  • the two-dimensional coordinate system may include an x axis and a y axis.
  • the horizontal direction may be parallel to the x axis.
  • the vertical direction may be parallel to the y axis.
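  • Merely by way of illustration, the sketch below performs operation 620 with a three-order discrete wavelet transform (DWT), one of the exemplary techniques listed above; the SWT described in the example above has the same 3N+1 sub-image structure. The PyWavelets calls and the choice of the "haar" wavelet are assumptions made for this sketch, not a required implementation.

```python
# A minimal sketch of operation 620, assuming a 3-order discrete wavelet
# decomposition with PyWavelets; the "haar" wavelet choice is illustrative.
import numpy as np
import pywt

def decompose_luminance(luminance: np.ndarray, orders: int = 3):
    """Decompose a first (unadjusted) luminance image into 3*N + 1 sub-images.

    Returns one low-frequency image and, for each order, a tuple of three
    high-frequency images holding frequency information in the horizontal,
    vertical, and diagonal directions, respectively.
    """
    coeffs = pywt.wavedec2(luminance, wavelet="haar", level=orders)
    low_freq = coeffs[0]      # low-frequency decomposed image
    high_freq = coeffs[1:]    # per order: (horizontal, vertical, diagonal)
    return low_freq, high_freq

if __name__ == "__main__":
    luminance = np.random.rand(256, 256).astype(np.float32)
    low, high = decompose_luminance(luminance)
    # 1 low-frequency image + 3 high-frequency images per order = 3N + 1
    print(len(high) * 3 + 1)  # 10 for a three-order decomposition
```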
  • a plurality of second (or frequency-adjusted) decomposed images may be determined based on the plurality of first (or unadjusted) decomposed images.
  • one or more frequency adjustment operations for determining the second (or frequency-adjusted) decomposed images may be performed by the frequency adjustment module 330 as illustrated in FIGS. 3 and 4 .
  • the second (or frequency-adjusted) decomposed image may be determined by adjusting the frequencies of the pixels in the first decomposed image.
  • the frequencies of the corresponding pixels in the second (or frequency-adjusted) decomposed image may be determined based on the frequencies of the pixels in the first (or unadjusted) decomposed image and a frequency adjustment threshold associated with the first (or unadjusted) decomposed image. For example, the frequency of a pixel that is greater than a frequency adjustment threshold in the first decomposed image may be reduced to provide the frequency of the corresponding pixel in the second decomposed image.
  • a frequency of a pixel that is lower than the frequency adjustment threshold in the first decomposed image may be increased to provide the frequency of the corresponding pixel in the second decomposed image.
  • the frequency adjustment threshold may be varied according to different application scenarios of the image processing system 100 .
  • the frequency adjustment threshold corresponding to the first decomposed image may be provided by a user via, for example, the I/O 250 .
  • the frequency adjustment threshold corresponding to the first decomposed image may be determined by the system 100 based on a default setting of the system 100 , an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc.
  • An empirical setting from prior imaging processing may be derived by, for example, machine learning.
  • the frequencies of the corresponding pixels in the second (or frequency-adjusted) decomposed image may be determined based on the frequencies of the pixels in the first decomposed image and gains of the pixels in the first (or unadjusted) decomposed image.
  • the gain of the pixel may be determined based on the frequency of the pixel and a frequency adjustment threshold associated with the first decomposed image. Exemplary processes for determining a second decomposed image may be found elsewhere in the present disclosure. See, for example, FIG. 7 and the description thereof.
  • a second (or frequency-adjusted) luminance image of the original image may be reconstructed based on the plurality of second (or frequency-adjusted) decomposed images.
  • one or more operations of reconstructing the second luminance image may be performed by the reconstruction module 340 as illustrated in FIG. 3 .
  • the reconstruction of the second luminance image may include or be a reverse process of the decomposition.
  • Exemplary reconstruction techniques may include pyramid reconstruction, wavelet reconstruction, Laplace transform reconstruction, inverse filtering, or the like, or any combination thereof. Multiple orders of reconstruction may be performed. In some embodiments, the number of orders in the reconstruction may be the same as the number of orders in the decomposition.
  • the reconstruction module 340 may reconstruct a second (or frequency-adjusted) luminance image by a three-order wavelet transform when the decomposed images are generated based on a three-order wavelet transform decomposition.
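  • Continuing the PyWavelets sketch above, operation 640 may be illustrated by reversing the decomposition on the frequency-adjusted coefficients; the waverec2 call and wavelet choice are illustrative assumptions.

```python
# A minimal sketch of operation 640: reconstructing the second (frequency-
# adjusted) luminance image by reversing the wavelet decomposition with the
# same number of orders as the decomposition.
import numpy as np
import pywt

def reconstruct_luminance(low_freq: np.ndarray,
                          adjusted_high_freq,
                          wavelet: str = "haar") -> np.ndarray:
    """Rebuild a luminance image from the frequency-adjusted decomposed images.

    `adjusted_high_freq` holds, for each order and in the same order as
    returned by pywt.wavedec2, the three adjusted high-frequency images
    (horizontal, vertical, diagonal).
    """
    coeffs = [low_freq] + [tuple(order) for order in adjusted_high_freq]
    return pywt.waverec2(coeffs, wavelet=wavelet)
```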
  • a final image of the original image may be determined based on the first (or unadjusted) luminance image, the second (or frequency-adjusted) luminance image, and the original image.
  • one or more operations of determining the final image of the original image may be performed by the determination module 350 as illustrated in FIG. 3 .
  • the final image may include a plurality of final pixels corresponding to original pixels in the original image.
  • the values of the final pixels may be determined based on the first (or unadjusted) luminance image, the second (or frequency-adjusted) luminance image, and the original image. For example, the values of the final pixels may be determined by multiplying the values of corresponding original pixels in the original image by a ratio of the second luminance image to the first luminance image.
  • exemplary processes for determining the final image of the original image may be found elsewhere in the present disclosure. See, for example, FIG. 11 and the description thereof.
  • process 600 is merely provided for the purposes of illustration, and not intended to be understood as the only embodiment.
  • various variations and modifications may be conducted under the teachings of some embodiments of the present disclosure.
  • some steps may be omitted or added.
  • For example, operation 610 may be omitted.
  • the luminance of an image may be predetermined and stored in a storage medium of the image processing system 100 .
  • those variations and modifications do not depart from the protecting scope of some embodiments of the present disclosure.
  • FIG. 7 is a flowchart illustrating an exemplary process 700 for determining a second (or frequency-adjusted) decomposed image according to some embodiments of the present disclosure.
  • one or more operations of process 700 for determining a second decomposed image may be implemented in the image processing system 100 illustrated in FIG. 1 .
  • the process 700 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120 , the frequency adjustment module 330 as illustrated in FIGS. 3-4 ).
  • frequencies of pixels in a first (or unadjusted) decomposed image Ii may be identified. For example, the frequencies of all or a portion of the pixels included in the first decomposed image Ii may be identified. In some embodiments, the frequencies of the pixels may be determined by Fourier transform, Z transform, Laplace transform, or the like, or any combination thereof. In some embodiments, the frequencies of the pixels may be predetermined and stored in a storage medium of the image processing device 120 .
  • gains of the pixels in the first (or unadjusted) decomposed image Ii may be determined based on the frequencies of the pixels and a frequency adjustment threshold associated with the first decomposed image Ii. In some embodiments, the gains of different pixels in the same first decomposed image Ii may be the same or different. In some embodiments, the gain of a pixel may be determined based on the position of the pixel in the first decomposed images. Exemplary processes for determining a gain of a pixel in a first decomposed image may be found elsewhere in the present disclosure. See, for example, FIG. 8 and the description thereof.
  • the frequency adjustment threshold may vary in different application scenarios of the image processing system 100 .
  • the frequency adjustment threshold associated with the first (or unadjusted) decomposed image may be provided by a user via, for example, the I/O 250 .
  • the frequency adjustment threshold corresponding to the first decomposed image may be determined by the system 100 based on a default setting of the system 100 , an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc.
  • An empirical setting from prior imaging processing may be derived by, for example, machine learning.
  • frequencies of corresponding pixels in the second (or frequency-adjusted) decomposed image may be determined based on the original frequencies and the gains of the original pixels in the first (or unadjusted) decomposed image Ii.
  • the frequency of a corresponding pixel in the second (or frequency-adjusted) decomposed image may be determined by multiplying the original frequency by the gain corresponding to the original pixel in the first (or unadjusted) decomposed image Ii.
  • a pixel in the second decomposed image is considered to correspond to an original pixel in the first decomposed image Ii or in an original image if the two pixels correspond to a same physical point in the space or in an object to which the original image, or the first decomposed image, or the second decomposed image relates.
  • a pixel in the second decomposed image and a corresponding original pixel in the first decomposed image Ii or in an original image may be considered to be located at a same position.
  • an original pixel in the first decomposed image Ii or in an original image and the corresponding pixel in the second decomposed image may be the same except for the frequency adjustment.
  • the second (or frequency-adjusted) decomposed image associated with the first (or unadjusted) decomposed image Ii may be determined based on the frequencies of the corresponding pixels in the second decomposed image.
  • the second decomposed image may be associated with an original image that is associated with the first (or unadjusted) decomposed image.
  • a pixel in the second (or frequency-adjusted) decomposed image may be located at the same position as the corresponding original pixel in the original image.
  • the second decomposed image may be generated by arranging the corresponding pixels in the same way as the original pixels in the original image.
  • the plurality of second (or frequency-adjusted) decomposed images at operation 630 may be determined by implementing operations 710 - 740 on each of the plurality of the first (or unadjusted) decomposed images.
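  • As a minimal illustration of the adjustment step of process 700 (cf. operations 710-740), the sketch below multiplies each pixel's frequency in a first (or unadjusted) decomposed image by the gain at the same position; the gain maps are assumed to have been determined separately (see FIG. 8).

```python
# A minimal sketch of process 700: each frequency in the second
# (frequency-adjusted) decomposed image is the corresponding frequency in the
# first (unadjusted) decomposed image multiplied by the gain at that position.
import numpy as np

def adjust_decomposed_image(first_decomposed: np.ndarray,
                            gains: np.ndarray) -> np.ndarray:
    """Return the second (frequency-adjusted) decomposed image."""
    return first_decomposed * gains

def adjust_all(first_decomposed_images, gain_maps):
    """Apply the adjustment to each first decomposed image (cf. operation 630)."""
    return [adjust_decomposed_image(img, gain)
            for img, gain in zip(first_decomposed_images, gain_maps)]
```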
  • FIG. 8 is a flowchart illustrating an exemplary process 800 for determining a gain of a pixel in a first decomposed image according to some embodiments of the present disclosure.
  • one or more operations of process 800 for determining a gain of a pixel may be implemented in the image processing system 100 illustrated in FIG. 1 .
  • the process 800 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120 , the determination module 350 as illustrated in FIG. 3 , the gain determination unit 410 as illustrated in FIG. 4 ).
  • a certain number of pixels that are located at a same position may be identified from a plurality of first (or unadjusted) decomposed images.
  • the plurality of first decomposed images may be obtained by decomposing a first (or unadjusted) luminance image of an original image at 620 illustrated in FIG. 6 .
  • the plurality of first decomposed images may be stored in a storage medium of the image processing system 100 .
  • the certain number of pixels may include pixels from all or a portion of the plurality of first decomposed images. For example, the certain number of pixels from three first decomposed images obtained by decomposition of a same order may be identified.
  • the particular number of pixels may be all or a portion of the pixels in the one or more first decomposed images. For example, all the pixels that are located at a same position from three first decomposed images obtained by the decomposition of a same order may be identified.
  • the same position may be determined based on a position of a pixel to be adjusted in the first decomposed image Ii.
  • the pixel to be adjusted may be determined based on the frequency and the gain of the original pixel in the first decomposed image Ii.
  • FIG. 10 -I, FIG. 10 -II, and FIG. 10 -III illustrate examples of pixels that are located at a same position.
  • the frequencies of the identified pixels may be determined.
  • the frequency of an identified pixel may be determined by Fourier transform, Z transform, Laplace transform, or the like, or any combination thereof.
  • the frequencies of the identified pixels may be predetermined and stored in a storage medium of the image processing device 120 .
  • the frequencies of the identified pixels may be filtered to obtain the particular number of filtered frequencies.
  • the filtered frequencies may be obtained using a Butterworth filter, a Chebyshev filter, a Bessel filter, an elliptic filter, a Gaussian filter, an Hourglass filter, a raised-cosine filter, or the like, or any combination thereof.
  • the filtered frequencies may be associated with positions of the identified pixels.
  • Exemplary processes for obtaining a filtered frequency associated with a position may be found elsewhere in the present disclosure. See, for example, FIG. 9 and the description thereof.
  • the average filtered frequency associated with the position may be determined based on the filtered frequencies.
  • the average filtered frequency may be an average value of the filtered frequencies of pixels at a same position in the one or more first (or unadjusted) decomposed images.
  • the average value may include an arithmetic mean value, a geometric mean value, a square mean value, a harmonic mean value, a weighted average value, or the like, or any combination thereof.
  • the average filtered frequency of a pixel in a first decomposed image may be the same as frequencies of pixels at the same position in one or more first decomposed images of a same order.
  • a gain associated with the position may be determined based on the average filtered frequency.
  • the gains of pixels at a same position of the first decomposed images of high frequency of a same order may be the same or different.
  • process 800 is merely provided for the purposes of illustration, and not intended to be understood as the only embodiment.
  • the gain associated with the position may be determined based on filtered frequencies of the identified pixels.
  • Operation 840 may be omitted.
  • the gains of the pixels in the first decomposed image Ii at operation 720 may be determined by implementing operations 810 - 850 on each of the pixels in the first decomposed image Ii.
  • those variations and modifications do not depart from the protecting scope of some embodiments of the present disclosure.
  • a gain of a pixel (or associated with a position) in a first (or unadjusted) decomposed image may be determined based on an average filtered frequency of the pixels at a same position in the first (or unadjusted) decomposed images obtained by decomposition of a same order.
  • the gain of a pixel in the first decomposed image may be determined based on equations (1) and (2):
  • Gij(x, y) = mi·(Aag(x, y) + δ·A′)^(γ−1),  (1)
  • i denotes the order of decomposition;
  • j denotes any one of the first (or unadjusted) decomposed images obtained by the ith-order decomposition;
  • Gij(x, y) denotes the gain of the pixel at the position (x, y) in the first (or unadjusted) decomposed image j obtained in the ith-order decomposition;
  • Aag(x, y) denotes the average filtered frequency of the pixels at the position (x, y) in the first (or unadjusted) decomposed image j;
  • mi denotes a gain adjustment factor corresponding to the ith-order decomposition, and 0<mi<1;
  • γ denotes a constant, and 0<γ<1;
  • A′ denotes an average frequency corresponding to the position (x, y) of the pixel in the first (or unadjusted) decomposed image j;
  • A′ may be an arithmetic mean of the frequencies of pixels at a same position in one or more first (or unadjusted) decomposed images obtained by decomposition of the same order;
  • δ denotes a noise level correlation parameter which may be used to suppress noise amplification, and 0<δ<1;
  • N denotes the total number of orders of decomposition;
  • Ak(x, y) denotes a filtered frequency of the pixel at the position (x, y) in the first (or unadjusted) decomposed image k of the plurality of the first decomposed images obtained after the Nth-order decomposition.
  • the number of the first (or unadjusted) decomposed images may be 3N+1.
  • the gain adjustment factor mi of a first (or unadjusted) decomposed image of an order may vary in different application scenarios of the image processing system 100 .
  • the mi of first (or unadjusted) decomposed images obtained by decomposition of different orders may be the same or different.
  • mi of first (or unadjusted) decomposed images obtained by decomposition of a same order may be the same or different.
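  • Merely by way of illustration, and under one reading of equation (1) above, a per-pixel gain map could be computed as sketched below; the values of mi, γ, and δ are assumptions chosen for the sketch only.

```python
# A sketch of the gain determination of operation 720 under one reading of
# equation (1); the parameter values m_i, gamma, and delta are illustrative.
import numpy as np

def gain_map(avg_filtered_freq: np.ndarray,
             avg_freq,
             m_i: float = 0.8,
             gamma: float = 0.5,
             delta: float = 0.1) -> np.ndarray:
    """G_ij(x, y) = m_i * (A_ag(x, y) + delta * A')**(gamma - 1).

    avg_filtered_freq is A_ag(x, y); avg_freq is A' (a scalar or an array of
    the same shape). Because 0 < gamma < 1, large frequencies (details)
    receive a gain below 1 and small frequencies receive a gain above 1,
    while the delta * A' term keeps near-zero (noise-level) frequencies from
    being amplified without bound.
    """
    return m_i * (avg_filtered_freq + delta * avg_freq) ** (gamma - 1.0)
```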
  • a gain of a pixel (or associated with a position) in a first (or unadjusted) decomposed image may be determined based on filtered frequencies of pixels at a same position in the one or more first (unadjusted) decomposed images of a same order.
  • the gain of a pixel (or associated with a position) in the first (or unadjusted) decomposed image may be determined based on equation (3):
  • Gij(x, y) = mi·(Aij(x, y) + δ·A′)^(γ−1),  (3)
  • i denotes the order of decomposition
  • j denotes any one of the first decomposed images obtained by the ith-order decomposition
  • Gij(x, y) denotes the gain of the pixel at the position (x, y) in the first (or unadjusted) decomposed image j obtained in the ith-order decomposition
  • Aij(x, y) denotes the filtered frequency of the pixel at the position (x, y) in the first decomposed image j
  • mi denotes a gain adjustment factor corresponding to the ith-order decomposition, and 0<mi<1;
  • γ denotes a constant, and 0<γ<1;
  • A′ denotes an average frequency corresponding to the position (x, y) of the pixel in the first (or unadjusted) decomposed image j;
  • A′ may be an arithmetic mean of frequencies of pixels at the same position in the one or more first (or unadjusted) decomposed images of the same order;
  • δ denotes a noise level correlation parameter which may be used to suppress noise amplification, and 0<δ<1.
  • the mi of the first (or unadjusted) decomposed images of each order may vary according to different application scenarios of the image processing system 100.
  • the mi of the first (or unadjusted) decomposed images of each order may be the same or different.
  • the mi of first (or unadjusted) decomposed images of different orders may be different.
  • the mi of first (or unadjusted) decomposed images of a same order may be the same.
  • FIG. 9 is a flowchart illustrating an exemplary process 900 for obtaining a gain of a pixel in a first (or unadjusted) decomposed image according to some embodiments of the present disclosure.
  • one or more operations of process 900 for obtaining a gain of a pixel may be implemented in the image processing system 100 illustrated in FIG. 1 .
  • the process 900 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120 , the determination module 350 as illustrated in FIG. 3 , the gain determination unit 410 as illustrated in FIG. 4 ).
  • the Nth-order decomposed images of high frequency may be identified.
  • if the first decomposed image Ii is a first (or unadjusted) decomposed image of high frequency, one or more first (or unadjusted) decomposed images which are of the same order as the first decomposed image Ii may be identified.
  • For example, if the first decomposed image Ii is a second-order decomposed image of high frequency, three second-order first decomposed images of high frequency (the first decomposed image Ii and the other two second-order first decomposed images) may be identified.
  • the frequencies of the pixels at a same position of the identified Nth-order first (or unadjusted) decomposed images of high frequency may be determined. For example, the frequencies of the three pixels at a same position of the three identified Nth-order first decomposed images of high frequency may be determined.
  • FIG. 10 -I through FIG. 10 -III are schematic diagrams illustrating Nth-order decomposed images of high frequency according to some embodiments of the present disclosure.
  • FIG. 10 -I shows a decomposed image of high frequency including four pixels a 1 , a 2 , a 3 , and a 4 .
  • FIG. 10 -II and FIG. 10 -III are the other two decomposed images of high frequency obtained in the decomposition of the same order as the image in FIG. 10 -I.
  • each may include four pixels, b 1 , b 2 , b 3 , and b 4 in FIG. 10 -II, and c 1 , c 2 , c 3 , and c 4 in FIG. 10 -III, respectively.
  • the images in FIG. 10 -I through FIG. 10 -III may overlap with each other.
  • the pixels a 1 , b 1 , and c 1 are located at a same position.
  • the pixels a 2 , b 2 , and c 2 are located at a same position.
  • the pixels a 3 , b 3 , and c 3 are located at a same position.
  • the pixels a 4 , b 4 , and c 4 are located at a same position.
  • the average frequency value associated with the position may be determined based on the frequencies of the pixels.
  • the average frequency value may be an average value of the absolute values of the frequencies of the pixels in the identified Nth-order decomposed images of high frequency.
  • the average value may be an arithmetic mean value, a geometric mean value, a square mean value, a harmonic mean value, a weighted average value, or the like, or any combination thereof.
  • the average frequency value associated with the position may be determined based on the absolute value of the frequency of the pixel in the first (or unadjusted) decomposed image Ii of high frequency and the absolute values of the frequencies of the pixels at the same position in the first (or unadjusted) decomposed images of high frequency of the same order as the first (or unadjusted) decomposed image Ii.
  • the average frequency value of pixel a 1 may be based on the frequencies of pixels a 1 , b 1 , and c 1 .
  • an absolute value F 1 of the frequency of a pixel at a position in the first (or unadjusted) decomposed image Ii of high frequency may be determined, and the other two absolute values, F 2 and F 3 , of the two pixels (in the first (or unadjusted) decomposed images of high frequency by the decomposition of the same order as the image Ii) at the same position may be determined.
  • the average frequency value F associated with the position may be determined based on equation (4): F = (F1 + F2 + F3)/3,  (4)
  • the average frequency value F may be filtered to obtain a filtered frequency associated with the position.
  • the filtered frequency may be obtained using a Butterworth filter, a Chebyshev filter, a Bessel filter, an elliptic filter, a Gaussian filter, an Hourglass filter, a raised-cosine filter, or the like, or any combination thereof.
  • a plurality of filtered frequencies may be obtained by implementing operations 910 - 940 on each of the pixels in the first (or unadjusted) decomposed image Ii.
  • the frequencies of one or more pixels in the first decomposed image Ii may be identified. Then the frequencies may be filtered to obtain filtered frequencies. The gains corresponding to the one or more pixels may be determined based on the corresponding filtered frequencies of the one or more pixels.
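  • As a minimal illustration of process 900 (cf. operations 910-940), the sketch below averages the absolute frequencies of the three same-order high-frequency images at each position, as in equation (4), and then low-pass filters the result. The Gaussian filter is one of the filters listed above, and the sigma value is an assumption made for the sketch.

```python
# A sketch of process 900: average the absolute frequencies of the three
# same-order high-frequency decomposed images per position (equation (4)),
# then low-pass filter the result (here with a Gaussian filter; sigma assumed).
import numpy as np
from scipy.ndimage import gaussian_filter

def filtered_frequency(horizontal: np.ndarray,
                       vertical: np.ndarray,
                       diagonal: np.ndarray,
                       sigma: float = 1.5) -> np.ndarray:
    """Return the filtered frequency associated with each position."""
    # Equation (4): F = (F1 + F2 + F3) / 3, with F1..F3 the absolute frequencies
    avg_abs = (np.abs(horizontal) + np.abs(vertical) + np.abs(diagonal)) / 3.0
    return gaussian_filter(avg_abs, sigma=sigma)
```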
  • FIG. 11 is a flowchart illustrating an exemplary process 1100 for determining a final image of the original image according to some embodiments of the present disclosure.
  • one or more operations of process 1100 for determining a final image may be implemented in the image processing system 100 illustrated in FIG. 1 .
  • the process 1100 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120 , the determination module 350 as illustrated in FIG. 3 and FIG. 5 ).
  • a position of an original pixel in the original image may be identified.
  • the original image may be in a two-dimensional coordinate system, and the position of an original pixel may be represented by (x, y).
  • the value of coordinates x and y may be used to identify the position of the original pixel.
  • a first (or unadjusted) luminance of a pixel in the first (or unadjusted) luminance image may be determined, in which the pixel in the first (or unadjusted) luminance image is at the same position as the original pixel in the original image.
  • the first (or unadjusted) luminance image may be determined as described in 610 in the present disclosure.
  • the first (or unadjusted) luminance image of the original image may be stored in a storage medium of the image processing system 100 or an external storage device.
  • the same position may be identified based on the first luminance image and the original image. For example, pixels in the first (or unadjusted) luminance image and the original image may be arranged in the same way in identical two-dimensional coordinate systems. For the same position (x, y), there are two pixels, one in the first (or unadjusted) luminance image, and the other in the original image.
  • a second (or frequency-adjusted) luminance of a pixel in the second (or frequency-adjusted) luminance image may be determined, in which the pixel in the second (or frequency-adjusted) luminance image is at the same position as the original pixel in the original image.
  • the second (or frequency-adjusted) luminance image may be reconstructed from a plurality of the second (or frequency-adjusted) decomposed images as described in 640 in the present disclosure.
  • the second luminance image of the original image may be stored in a storage medium of the image processing system 100 .
  • the same position may be identified based on the second (or frequency-adjusted) luminance image and the original image.
  • pixels in the second (or frequency-adjusted) luminance image and the original image may be arranged in the same way in identical two-dimensional coordinate systems.
  • a final pixel associated with the original pixel may be determined based on the first (or unadjusted) luminance and the second (or frequency-adjusted) luminance.
  • the final pixel may be a pixel in a final image.
  • the final pixel may include data of the final pixel such as luminance information of the final pixel, frequency information of the final pixel, hue information of the final pixel, saturation information of the final pixel, or the like, or any combination thereof.
  • the data of the final pixel may be determined based on equation (5): Cout(x, y) = Cin(x, y) × Iout(x, y)/Iin(x, y),  (5)
  • C in (x, y) denotes the data of the original image associated with the original pixel at the position (x, y) in the original image
  • C out (x, y) denotes the data of the final pixel at the position (x, y) in the final image
  • I in (x, y) denotes the first (or unadjusted) luminance of the pixel at the position (x, y) in the first (or unadjusted) luminance image
  • I out (x,y) denotes the second (or frequency-adjusted) luminance of the pixel at the position (x, y) in the second (or frequency-adjusted) luminance image.
  • Operations 1110 through 1140 may be performed for multiple positions and multiple original pixels at these positions to provide corresponding final pixels.
  • a final image of the original image may be determined based on the final pixels.
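  • As a minimal illustration of process 1100 and equation (5), the sketch below scales every original pixel by the ratio of the second (frequency-adjusted) luminance to the first (unadjusted) luminance at its position; the small epsilon guard against division by zero is an added assumption.

```python
# A sketch of process 1100 / equation (5): C_out = C_in * I_out / I_in,
# applied position by position; the epsilon guard is an added assumption.
import numpy as np

def final_image(original: np.ndarray,
                first_luminance: np.ndarray,
                second_luminance: np.ndarray,
                eps: float = 1e-6) -> np.ndarray:
    """Scale every original pixel by the luminance ratio at its position."""
    ratio = second_luminance / (first_luminance + eps)
    if original.ndim == 3:              # e.g., RGB data: apply the ratio per channel
        ratio = ratio[..., np.newaxis]
    return original * ratio
```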
  • FIG. 12 is a flowchart illustrating an exemplary process 1200 for processing an image according to some embodiments of the present disclosure.
  • one or more operations of process 1200 may be implemented in the image processing system 100 illustrated in FIG. 1 .
  • the process 1200 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120 , the determination module 350 of the image processing device 120 , etc.).
  • a target region in an image and a plurality of gray levels in the target region may be identified.
  • the image may be captured by one or more sensors and/or imaging devices.
  • the image may be retrieved from a storage device (e.g., the storage 140 illustrated in FIG. 1 , the disk 270 illustrated in FIG. 2A ) or received from an imaging device (e.g., the imaging device 110 illustrated in FIG. 1 ).
  • the image may be retrieved from an external source, such as a hard disk, a wireless terminal, or the like, or any combination thereof, that is connected to or otherwise communicates with the system 100 .
  • the image may include gray scale data, red green blue (RGB) data, Bayer data, Luma and Chroma (YUV) data, raw image format (RAW) data, joint photographic experts group (JPEG) data, or the like, or any combination thereof.
  • the image may be a gray scale image transformed from an RGB image.
  • the image may be obtained from a final image obtained at 650 in FIG. 6 .
  • the image may include a gray scale image transformed from the final image obtained at 650 .
  • the image may include at least one target region.
  • the number of the target regions in the image may vary in different application scenarios. For example, the number of the target regions in the image may be determined based on the number of pixels that need to be processed in the image. A pixel that needs to be processed may be designated as the central pixel of a target region. As another example, the entire image may constitute the only one target region of the image.
  • the size of the target region in the image may vary in different application scenarios. For example, the size of two target regions in an image may be the same or different.
  • the target region may include a plurality of gray levels.
  • the gray levels may include original gray levels in the target region, normalized gray levels of the original levels, or other processed gray levels.
  • the plurality of gray levels in the target region may be within a range from 0 to 1.
  • the gray level of 0 represents white in the target region
  • the gray level of 1 represents black in the target region.
  • the number of the gray levels in the target region may vary in different application scenarios. For example, the number of the gray levels in the target region may be four. In some embodiments, the number of gray levels in the target region may be determined by a user of the image processing system 100 via, for example an I/O.
  • the gray levels in a target region may be determined by the system 100 based on a default setting of the system 100 , an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc.
  • An empirical setting from prior imaging processing may be derived by, for example, machine learning.
  • the gray levels in a target region may be determined based on the range of gray values of the pixels in the target region.
  • the four gray levels may include 0.25, 0.5, 0.75, and 1.
  • the interval between two adjacent gray levels may be the same or different.
  • the interval between two adjacent gray levels in the target region may vary according to different application scenarios.
  • the interval between two adjacent gray levels may be 0.25.
  • the gray levels of the target region may include four levels: 0.25, 0.5, 0.75, and 1.
  • As another example, the interval between two adjacent gray levels may be 0.2.
  • the gray levels of the target region may then include five levels: 0.2, 0.4, 0.6, 0.8, and 1.
  • As a further example, the gray levels of the target region may include four levels with unequal intervals: 0.1, 0.4, 0.6, and 0.9.
  • the gray levels of any two target regions in the image may be the same or different.
  • the at least one target region extending outside of the image may be processed according to, for example, the process 1300 illustrated in FIG. 13 after 1210 .
  • a plurality of statistical probabilities may be determined, in which a statistical probability is associated with a gray level of the plurality of gray levels.
  • the plurality of statistical probabilities may be associated with all the gray levels except for the greatest gray level in the target region.
  • a statistical probability associated with a gray level may include a proportion or number of the pixels associated with the gray level.
  • the statistical probability associated with the gray level 0.25 may include the proportion of the pixels whose gray levels are not greater than 0.25 in the target region.
  • the proportion may be a ratio of the number of the pixels whose gray values are not greater than the gray level 0.25 to the total number of pixels in the target region.
  • the statistical probabilities may be expressed in one or more of various ways.
  • the statistical probabilities may be represented as numerical values, a diagram, a table, or the like, or any combination thereof.
  • Exemplary diagrams may include a cumulative histogram, a line chart, a pie chart, a scatter diagram, a bar chart, or the like, or any combination thereof.
  • the statistical probabilities may be represented as a cumulative histogram of the gray levels.
  • the cumulative histogram may include a horizontal axis and a vertical axis.
  • the horizontal axis may indicate gray levels of the plurality of gray levels.
  • the horizontal axis may include all the plurality of gray levels in the target region.
  • the horizontal axis may include all the plurality of gray levels except for the greatest gray level in the target region.
  • the vertical axis may indicate the proportions of pixels whose gray values are not greater than the corresponding gray levels in the horizontal axis.
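  • As a minimal illustration of the statistical-probability step of process 1200, the sketch below computes, for each gray level except the greatest one, the proportion of pixels in the target region whose normalized gray values are not greater than that level; the example gray levels 0.25, 0.5, and 0.75 follow the example given above.

```python
# A sketch of the statistical probabilities: for each gray level (the greatest
# level, 1, is excluded) compute the proportion of pixels in the target region
# whose normalized gray values are not greater than that level.
import numpy as np

def statistical_probabilities(target_region: np.ndarray,
                              gray_levels=(0.25, 0.5, 0.75)) -> dict:
    total = target_region.size
    return {level: float(np.count_nonzero(target_region <= level)) / total
            for level in gray_levels}

if __name__ == "__main__":
    region = np.random.rand(64, 64)            # normalized gray values in [0, 1]
    print(statistical_probabilities(region))   # cumulative-histogram values
```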
  • a mapping curve of the target region may be determined based on the plurality of statistical probabilities and a plurality of predetermined curves associated with the plurality of gray levels.
  • the mapping curve of the target region may be determined based on the plurality of statistical probabilities of the plurality of gray levels except for the greatest gray level and the plurality of predetermined curves associated with the plurality of gray levels except for the greatest gray level. Exemplary processes for determining a mapping curve of the target region may be found elsewhere in the present disclosure. See, for example, FIG. 16 and the description thereof.
  • the predetermined curves associated with the plurality of the gray levels may vary in different application scenarios of the image processing system 100 .
  • the predetermined curve associated with various gray levels may be the same or different.
  • the predetermined curves may include a Gaussian distribution curve, a Weibull distribution curve, an exponential distribution curve, a Poisson distribution curve, a binomial distribution curve, or the like, or any combination thereof.
  • At 1240, at least one pixel that needs to be processed in the target region may be identified.
  • the pixels that need to be processed may be determined manually, automatically, and/or semi-automatically. For example, the pixel in the center of the target region may need to be processed. As another example, all the pixels in the target region may need to be processed. As a further example, in the target region, the pixels whose gray values are lower or higher than a predetermined value may be determined to be processed.
  • the value of the pixel may be determined based on the mapping curve of the target region.
  • a processed image may be generated based on the determined at least one value of the at least one pixel.
  • the mapping curve may be expressed in the form of a formula.
  • the value of a pixel may be determined based on the formula.
  • process 1200 is merely provided for the purposes of illustration, and not intended to be understood as the only embodiment.
  • at least one pixel that needs to be processed may be first identified (e.g., at 1240 ) before the target region (e.g., at 1210 ) is identified.
  • the target region may be identified based on the at least one pixel that needs to be processed in the image.
  • those variations and modifications do not depart from the protecting scope of some embodiments of the present disclosure.
  • FIG. 13 is a flowchart illustrating an exemplary process 1300 for processing at least one target region having a part outside of an image according to some embodiments of the present disclosure.
  • one or more operations of process 1300 may be implemented in the image processing system 100 illustrated in FIG. 1 .
  • the process 1300 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120 , the determination module 350 of the image processing device 120 , etc.).
  • the image may be captured by one or more sensors and/or imaging devices.
  • the image may be retrieved from a storage device (e.g., the storage 140 illustrated in FIG. 1 , the disk 270 illustrated in FIG. 2A ) or received from an imaging device (e.g., the imaging device 110 illustrated in FIG. 1 ).
  • the image may be retrieved from an external source, such as a hard disk, a wireless terminal, or the like, or any combination thereof, that is connected to or otherwise communicating with the system 100 .
  • the image may include gray scale data, red green blue (RGB) data, Bayer data, Luma and Chroma (YUV) data, raw image format (RAW) data, joint photographic experts group (JPEG) data, or the like, or any combination thereof.
  • the image may include a gray scale image transformed from an RGB image.
  • the image may be obtained from a final image obtained at 650 in FIG. 6 .
  • the image may include a gray scale image transformed from the final image obtained at 650 .
  • the number of the target regions in the image may vary in different application scenarios. For example, the number of the target regions in the image may be determined based on the number of pixels that need to be processed in the image. Each pixel that needs to be processed may be the pixel located at the center of a target region.
  • the size of the target region in the image may vary in different application scenarios. For example, the sizes of two target regions in the image may be the same or different.
  • a determination may be made as to whether at least one target region has a part outside of the image.
  • each pixel that needs to be processed may be the pixel located at the center of the corresponding target region.
  • the corresponding target region may have a part outside the image.
  • At 1330 at least one edge of the image may be processed in response to the determination that at least one target region has a part outside the image.
  • Exemplary processes for patching at least one edge of an image may be found elsewhere in the present disclosure. See, for example, FIG. 14 -I through FIG. 14 -V and FIG. 15 -I through FIG. 15 -V, or any combination thereof.
  • the process 1200 described in FIG. 12 may be implemented on each of the identified target regions in the image at 1310 to process the image. In some embodiments, in response to a determination that at least one target region has a part outside the image, the process 1200 described in FIG. 12 may be implemented on the at least one processed target region obtained after 1330 .
  • FIGS. 14 -I through 14 -V are schematic diagrams illustrating the patching of at least one edge of an image according to some embodiments of the present disclosure.
  • FIG. 14 -I shows an image A that needs to be processed.
  • the image A may include a plurality of pixels arranged in rows and columns. For example, the number of pixels in each row is W, and the number of pixels in each column is H, where W and H are positive integers.
  • First Section and Second Section may be identified in the image A.
  • the First Section may be located on the leftmost side of the image A.
  • the number of pixels in each row of the First Section may be W/2, and the number of pixels in each column of the First Section may be equal to the number of pixels in each column of the image A.
  • the Second Section may be located on the rightmost side of the image A.
  • the number of pixels in each row of the Second Section may be W/2, and the number of pixels in each column of the Second Section may be equal to the number of pixels in each column of the image A.
  • FIG. 14 -II illustrates the locations of the First Section and the Second Section.
  • the First Section may be mirrored with the left edge of the image A as a symmetry axis, and the Second Section may be mirrored with the right edge of the image A as a symmetry axis.
  • FIG. 14 -III illustrates an image B which is generated by the mirroring of the First Section and the Second Section.
  • the Third Section may be located at the top of the image B.
  • the number of pixels in each column of the Third Section may be H/2, and the number of pixels in each row of the Third Section may be equal to the number of pixels in each row of the image B.
  • the Fourth Section may be located at the bottom of the image B.
  • the number of pixels in each column of the Fourth Section may be H/2, and the number of pixels in each row of the Fourth Section may be equal to the number of pixels in each row of the image B.
  • FIG. 14 -IV illustrates the locations of the Third Section and the Fourth Section. Fourthly, the Third Section may be mirrored with the top edge of the image B as a symmetry axis, and the Fourth Section may be mirrored with the bottom edge of the image B as a symmetry axis.
  • FIG. 14 -V illustrates an image C which is generated by the mirroring of the Third Section and the Fourth Section. In some embodiments, at least one target region may be identified in the processed image C as illustrated in FIG. 14 -V. It should be noted that the steps for patching the image described above may be implemented in a different order. For example, the top and bottom sections of the image may be patched before the left and right sections are patched.
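  • As a minimal illustration of the patching described in FIGS. 14 -I through 14 -V, the sketch below mirrors half the image width across the left and right edges and half the image height across the top and bottom edges in one call, assuming the mirror axes lie on the image borders; it is an equivalent shortcut rather than the literal step-by-step procedure.

```python
# A sketch of the edge patching of FIG. 14: mirror W/2 columns across the left
# and right edges and H/2 rows across the top and bottom edges, assuming the
# mirror axes lie on the image borders (numpy's "symmetric" padding mode).
import numpy as np

def patch_edges(image: np.ndarray) -> np.ndarray:
    h, w = image.shape[:2]
    pad = ((h // 2, h // 2), (w // 2, w // 2)) + ((0, 0),) * (image.ndim - 2)
    # mode="symmetric" mirrors the pixels adjacent to each edge, as in FIG. 14
    return np.pad(image, pad, mode="symmetric")

if __name__ == "__main__":
    a = np.arange(24).reshape(4, 6)
    c = patch_edges(a)
    print(c.shape)   # (8, 12): the patched image C is twice the size of image A
```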
  • the target region may include a predetermined area having the pixel in the center (the pixel being as the central pixel).
  • the target region may include a W*H region centered at the pixel 1 .
  • the pixel 1 represents the central pixel
  • W represents the number of pixels in a row of the target region
  • H represents the number of pixels in a column of the target region.
  • FIGS. 15 -I through 15 -V illustrate schematic diagrams for patching at least one edge of an image according to some embodiments of the present disclosure.
  • FIGS. 15 -I through 15 -V illustrate the cases in which a part of a target region is inside the image and a part is outside the image.
  • an X axis and a Y axis may divide the target region having the shape of a rectangle enclosed by the solid lines into four sections according to the center of the target region (e.g., the center of the target region is the origin of the coordinate system including the X axis and the Y axis). It is understood that a target region may have a shape other than a rectangle.
  • the four sections may be respectively named as Section A, Section B, Section C, and Section D.
  • the Section A is inside the image, and the Section B, the Section C, and the Section D are outside the image.
  • the pixels in the Section A may be mirrored to generate the pixels in the Section B with the X axis as a symmetry axis.
  • the pixels in the Section A and the Section B may be mirrored to generate pixels in the Section C and the Section D with the Y axis as a symmetry axis.
  • an X axis and a Y axis may divide the target region into four sections (Section A, Section B, Section C, and Section D) according to the center of the target region (e.g., the center of the target region is the origin of the coordinate system including the X axis and the Y axis).
  • the Section B may include a part (B 1 ) outside the image and a part (B 2 ) inside the image.
  • the Section A 1 is symmetrical to the Section B 1 .
  • the pixels in the Section A 1 may be mirrored to generate the pixels in the Section B 1 with the X axis as a symmetry axis.
  • the pixels in the Section A and the Section B may be mirrored to generate the pixels in the Section D and the Section C with the Y axis as a symmetry axis.
  • an X axis may divide the target region into two sections (Section A and Section B) based on the center of the target region.
  • the pixels in the Section A may be mirrored to generate the pixels in the Section B with the X axis as a symmetry axis.
  • an X axis may divide the target region into two sections (Section A and Section B) based on the center of the target region.
  • the Section B may include a part (B 1 ) outside the image and a part (B 2 ) inside the image.
  • the Section A 1 is symmetrical to the Section B 1 .
  • the pixels in the Section A 1 may be mirrored to generate the pixels in the Section B 1 with the X axis as a symmetry axis.
  • other cases (e.g., the case illustrated in FIG. 15 -V) may be processed in a similar manner.
  • FIG. 16 is a flowchart illustrating an exemplary process 1600 for determining a mapping curve of a target region according to some embodiments of the present disclosure.
  • one or more operations of process 1600 may be implemented in the image processing system 100 illustrated in FIG. 1 .
  • the process 1600 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120 , the determination module 350 of the image processing device 120 , etc.).
  • a plurality of optimal coefficients may be determined.
  • An optimal coefficient may be associated with a statistical probability of the plurality of statistical probabilities relating to the target region.
  • an optimal coefficient associated with a statistical probability may be determined based on the statistical probability of a gray level, a predetermined range of the statistical probability corresponding to the gray level (e.g., the gray level is not the greatest gray level in the target region), the pixel value of the central pixel of the target region, the pixel values of neighbor pixels around the central pixel, or the like, or any combination thereof.
  • the predetermined range of the statistical probability may vary in different application scenarios of the image processing system 100 .
  • a neighbor pixel of a certain pixel may be within a range (e.g., a square region with an area of 3 pixels*3 pixels centered at the central pixel) of the pixel. Exemplary processes for determining an optimal coefficient may be found elsewhere in the present disclosure. See, for example, FIG. 17 and the description thereof.
  • a plurality of optimal curves may be determined.
  • An optimal curve may be associated with an optimal coefficient of the plurality of optimal coefficients.
  • an optimal curve may be determined based on the corresponding optimal coefficient.
  • the optimal curves of different gray levels in the target region may be the same or different. For instance, an optimal curve may be determined based on the equation (6):
  • Y out denotes an output pixel value
  • Yin denotes an input pixel value
  • n denotes the bit width of the image
  • W′ denotes an optimal coefficient.
  • the optimal curve may be a Gamma distribution curve.
  • a plurality of predetermined curves associated with the plurality of optimal coefficients may be identified.
  • a predetermined curve may correspond to a gray level except for the greatest gray level.
  • the predetermined curves associated with different gray level may be the same or different.
  • the predetermined curves may include a Gaussian distribution curve, a Weibull distribution curve, an exponential distribution curve, a Poisson distribution curve, a binomial distribution curve, or the like, or any combination thereof.
  • a mapping curve of the target region may be determined based on the plurality of optimal curves and the plurality of predetermined curves.
  • a plurality of sub mapping curves may be determined before the mapping curve is determined.
  • a sub mapping curve may correspond to a gray level except for the greatest gray level.
  • the sub mapping curves may be determined based on the optimal curves and the predetermined curves.
  • a sub mapping curve, which corresponds to a gray level may be determined based on the optimal curve corresponding to the gray level and the predetermined curve corresponding to the gray level.
  • the mapping curve of the target region may be determined based on all or a portion of the plurality of sub mapping curves of the target region. For example, the mapping curve may be determined as a sum of all the plurality of sub mapping curves.
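  • The exact forms of equation (6) and of the sub mapping curves are not reproduced above, so the sketch below only illustrates one plausible reading under stated assumptions: the optimal curve of each gray level is taken as a gamma-style curve parameterized by its optimal coefficient, each predetermined curve is taken as a Gaussian centered on its gray level, and the mapping curve is the sum of the resulting sub mapping curves.

```python
# A sketch of process 1600 under stated assumptions: gamma-style optimal
# curves (an assumed form of equation (6)), Gaussian predetermined curves
# (one of the listed options), and the mapping curve as the sum of the
# resulting sub mapping curves.
import numpy as np

def mapping_curve(y_in: np.ndarray,
                  optimal_coeffs: dict,
                  bit_width: int = 8,
                  gauss_sigma: float = 0.2) -> np.ndarray:
    full_scale = float(2 ** bit_width)
    normalized = y_in / full_scale
    y_out = np.zeros_like(y_in, dtype=np.float64)
    for gray_level, w_opt in optimal_coeffs.items():      # greatest level excluded
        optimal_curve = full_scale * normalized ** w_opt  # assumed gamma-style form
        weight = np.exp(-((normalized - gray_level) ** 2) / (2 * gauss_sigma ** 2))
        y_out += weight * optimal_curve                   # one sub mapping curve
    return y_out

if __name__ == "__main__":
    y = np.linspace(0, 255, 256)
    print(mapping_curve(y, {0.25: 0.6, 0.5: 0.9, 0.75: 1.2}).shape)
```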
  • FIG. 17 is a flowchart illustrating an exemplary process 1700 for determining an optimal coefficient of a target region according to some embodiments of the present disclosure.
  • one or more operations of process 1700 may be implemented in the image processing system 100 illustrated in FIG. 1 .
  • the process 1700 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120 , the determination module 350 of the image processing device 120 , etc.).
  • an initial coefficient associated with a statistical probability may be determined based on the gray level associated with the statistical probability.
  • the initial coefficient may be used to describe a factor in the process for determining an optimal coefficient of a target region.
  • the initial coefficient corresponding to a gray level may be determined based on the gray level, the statistical probability associated with the gray level, and a predetermined range of the statistical probability associated with the gray level.
  • the predetermined range of a statistical probability associated with the gray level may vary in different application scenarios of the image processing system 100 .
  • the predetermined range of the statistical probability associated with a gray level may be predetermined by a user of the image processing system 100 via, for example, the I/O 250 .
  • the predetermined range of the statistical probability associated with a gray level may be determined by the system 100 based on a default setting of the system 100 , an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc.
  • An empirical setting from prior imaging processing may be derived by, for example, machine learning. Exemplary processes for determining an initial coefficient may be found elsewhere in the present disclosure. See, for example, FIG. 18 and the description thereof.
  • the central pixel of the target region may be identified.
  • an optimal coefficient corresponding to the initial coefficient may be determined based on the central pixel of the target region.
  • the optimal coefficient may be determined in different ways when the gray level corresponding to the initial coefficient is within different ranges.
  • the optimal coefficient corresponding to the initial coefficient may be determined based on the pixel value of the central pixel of the target region and the pixel values of neighbor pixels around the central pixel.
  • the first threshold associated with a gray level may be determined by the system 100 based on a default setting of the system 100 , an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc.
  • An empirical setting from prior imaging processing may be derived by, for example, machine learning.
  • the first threshold may be predetermined within (0, 1). For example, the first threshold may be within (0.25, 0.75).
  • the first threshold may be 0.5.
  • a neighbor pixel of a certain pixel may be within a certain range (e.g., a region with an area of L pixels in a row, M pixels in a column, and centered with the central pixel) from the pixel.
  • the neighbor pixels may be the pixels in the neighbor region of the central pixel of the target region.
  • the optimal coefficient may be determined based on the exemplary equation (7):
  • W′ = W × m,  (7)
  • W denotes the initial coefficient
  • W′ denotes the optimal coefficient corresponding to the initial coefficient
  • m denotes the ratio of the average pixel value of the neighbor pixels to the pixel value of the central pixel.
  • the m may be determined based on the equation (8):
  • m = b_1/a_1,  (8)
  • b_1 denotes the average pixel value of the neighbor pixels
  • a_1 denotes the pixel value of the central pixel
  • in response to the determination that m < 1, the pixel value of the central pixel may be greater than the average pixel value of the neighbor pixels.
  • the optimal coefficient may be adjusted to be less than the initial coefficient.
  • in response to the determination that m > 1, the pixel value of the central pixel may be less than the average pixel value of the neighbor pixels.
  • the optimal coefficient may be adjusted to be greater than the initial coefficient.
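  • As a minimal sketch of the neighbor-based adjustment described above (the function name, the neighbor radius, and the use of equation (7) in the form W′ = W × m are illustrative assumptions rather than text taken from the original disclosure), the optimal coefficient could be computed as follows:

        import numpy as np

        def optimal_coefficient(image, center, initial_w, neighbor_radius=1):
            # Assumed form of equations (7)-(8): scale the initial coefficient W by
            # m, the ratio of the neighbor mean b1 to the central pixel value a1.
            r, c = center
            region = image[max(r - neighbor_radius, 0): r + neighbor_radius + 1,
                           max(c - neighbor_radius, 0): c + neighbor_radius + 1]
            a1 = float(image[r, c])
            b1 = (region.sum() - a1) / max(region.size - 1, 1)
            m = b1 / a1 if a1 != 0 else 1.0
            return initial_w * m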
  • the optimal coefficient corresponding to the initial coefficient may be determined based on the pixel value of the central pixel of the target region. For instance, the optimal coefficient may be determined based on the equation (9):
  • W′ = W × (1 + pix_value),  (9)
  • W denotes the initial coefficient
  • W′ denotes the optimal coefficient corresponding to the initial coefficient
  • pix_value denotes the normalized pixel value of the central pixel of the target region.
  • the pix_value is within the range from 0 to 1.
  • Exemplary algorithms for normalizing the pixel value of the central pixel may include a Min-Max normalization algorithm, a z-score normalization algorithm, a decimal scaling normalization algorithm, or the like, or any combination thereof, as illustrated in the sketch below.
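  • The following sketch illustrates the three normalization techniques named above (the function names and the assumed 8-bit value range are illustrative choices, not part of the disclosure):

        import numpy as np

        def min_max_normalize(values, lo=0.0, hi=255.0):
            # Min-Max normalization: map [lo, hi] onto [0, 1].
            return (np.asarray(values, dtype=float) - lo) / (hi - lo)

        def z_score_normalize(values):
            # z-score normalization: zero mean, unit variance.
            v = np.asarray(values, dtype=float)
            return (v - v.mean()) / (v.std() + 1e-12)

        def decimal_scaling_normalize(values):
            # Decimal scaling: divide by the smallest power of ten that brings
            # the largest absolute value below 1.
            v = np.asarray(values, dtype=float)
            m = np.abs(v).max()
            if m == 0:
                return v
            j = int(np.floor(np.log10(m))) + 1
            return v / (10 ** j)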
  • FIG. 18 is a flowchart illustrating an exemplary process 1800 for determining an initial optimal coefficient of a target region according to some embodiments of the present disclosure.
  • one or more operations of process 1800 may be implemented in the image processing system 100 illustrated in FIG. 1 .
  • the process 1800 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120 , the determination module 350 of the image processing device 120 , etc.).
  • a range of the statistical probability associated with a gray level may be determined.
  • the range of the statistical probability associated with a gray level may be determined manually by a user of the image processing system 100 via, for example, the I/O 250 .
  • the range of the statistical probability associated with a gray level may be determined by the system 100 based on a default setting of the system 100 , an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc.
  • An empirical setting from prior imaging processing may be derived by, for example, machine learning.
  • the statistical probabilities associated with the first three gray levels may be P_ys, P_ym, and P_yh, respectively.
  • the ranges of the statistical probabilities associated with the first three gray levels may be predetermined as [0.01, 0.35], [0.35, 0.65], and [0.65, 0.95], respectively.
  • the statistical probability P_ys associated with gray level 0.25 may be adjusted based on the range [0.01, 0.35]
  • the statistical probability P_ym associated with gray level 0.5 may be adjusted based on the range [0.35, 0.65]
  • the statistical probability P_yh associated with gray level 0.75 may be adjusted based on the range [0.65, 0.95].
  • the range of the statistical probability may be determined automatically.
  • the range of the statistical probability associated with a gray level may be determined based on the equation (10) and the equation (11):
  • k_1 and k_2 denote empirical coefficients, and k_1 > k_2.
  • the empirical coefficients k_1 and k_2 may vary in different application scenarios. In some embodiments, k_1 and k_2 may be determined by the system 100 based on a default setting of the system 100, an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc. An empirical setting from prior imaging processing may be derived by, for example, machine learning. In some embodiments, k_1 and k_2 corresponding to the different gray levels may be the same or different.
  • an adjusted statistical probability may be determined based on the range of the statistical probability.
  • the adjusted statistical probability may be determined based on the exemplary equation (12):
  • P_y′ denotes the adjusted statistical probability
  • P_y denotes the statistical probability
  • max denotes the maximum of the range
  • min denotes the minimum of the range.
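  • One plausible reading of the adjustment in equation (12), offered only as an assumption because the equation itself is not reproduced in this text, is that the statistical probability is confined to the predetermined range:

        def adjust_probability(p_y, range_min, range_max):
            # Assumed form of equation (12): clamp the statistical probability P_y
            # to [range_min, range_max] to obtain P_y'.
            return min(max(p_y, range_min), range_max)

        # e.g., adjust_probability(0.40, 0.01, 0.35) -> 0.35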
  • an initial coefficient may be determined based on the adjusted statistical probability.
  • the initial coefficient may be determined based on the equation (13):
  • W denotes the initial coefficient
  • P_y′ denotes the adjusted statistical probability
  • bin denotes the normalized gray level, and 0 ≤ bin ≤ 1.
  • FIG. 19 is a flowchart illustrating an exemplary process 1900 for determining a mapping curve of a target region according to some embodiments of the present disclosure.
  • one or more operations of process 1900 may be implemented in the image processing system 100 illustrated in FIG. 1 .
  • the process 1900 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120 , the determination module 350 of the image processing device 120 , etc.).
  • a sub mapping curve associated with the gray level may be determined based on an optimal curve associated with the gray level and a predetermined curve associated with the gray level.
  • a plurality of sub mapping curves associated with the plurality of gray levels except for the greatest gray level may be determined.
  • a predetermined curve may be associated with a gray level.
  • the predetermined curves associated with different gray levels may be the same or different.
  • the predetermined curve may be predetermined manually by a user of the image processing system 100 via, for example, the I/O 250 .
  • the predetermined curve may be determined by the system 100 based on a default setting of the system 100 , an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc.
  • An empirical setting from prior imaging processing may be derived by, for example, machine learning.
  • Exemplary predetermined curves may include a Gaussian distribution curve, a Weibull distribution curve, an exponential distribution curve, a Poisson distribution curve, a binomial distribution curve, or the like, or any combination thereof.
  • a mapping curve may be determined based on the plurality of the sub mapping curves associated with the plurality of gray levels.
  • the mapping curve may be the sum of all the sub mapping curves.
  • the mapping curve may be determined based on the equation (14):
  • Y_in denotes an input pixel value of a pixel that needs to be processed
  • Y_out denotes an output pixel value of the pixel that needs to be processed
  • G_1, G_2, . . . , G_(n-1) denote respective optimal curves corresponding to each of the plurality of gray levels except for the greatest gray level
  • B_1, B_2, . . . , B_(n-1) denote respective predetermined curves corresponding to each of the plurality of gray levels except for the greatest gray level.
  • the sum of B_1(Y_in), B_2(Y_in), . . . , B_(n-1)(Y_in) may be 1.
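  • Read together, the statements above suggest (as an inference from the surrounding definitions, since equation (14) itself is not reproduced in this text) that the mapping curve is a weighted sum of the optimal curves, with the predetermined curves acting as weights:

        Y_{\mathrm{out}} = \sum_{i=1}^{n-1} G_i\left(Y_{\mathrm{in}}\right)\, B_i\left(Y_{\mathrm{in}}\right),
        \qquad \sum_{i=1}^{n-1} B_i\left(Y_{\mathrm{in}}\right) = 1.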
  • the initial coefficient associated with the statistical probability may be an initial gamma value.
  • the initial gamma value may be determined based on the equation (15):
  • initial gamma_value denotes the initial gamma coefficient
  • P_y′ denotes the adjusted statistical probability
  • bin denotes the normalized gray level, and 0 ≤ bin ≤ 1.
  • An optimal gamma value may be determined based on the initial gamma coefficient associated with the gray level, the central pixel of the target region, the neighbor pixels around the central pixel, or any combination thereof.
  • the term “optimal” is used herein for describing a gamma value only.
  • the optimal gamma value corresponding to the initial coefficient may be determined based on the initial gamma coefficient associated with the gray level, the pixel value of the central pixel of the target region, and the pixel values of neighbor pixels around the central pixel. For instance, the optimal coefficient may be determined based on the equation (16):
  • gamma_value′ = initial gamma_value × m,  (16)
  • gamma_value′ denotes the optimal gamma value
  • initial gamma_value denotes the initial gamma coefficient
  • m denotes the ratio of the average pixel value of the neighbor pixels to the pixel value of the central pixel.
  • in response to the determination that m < 1, the pixel value of the central pixel may be greater than the average pixel value of the neighbor pixels.
  • in that case, the optimal gamma value may be adjusted to be less than the initial gamma coefficient.
  • in response to the determination that m > 1, the pixel value of the central pixel may be less than the average pixel value of the neighbor pixels.
  • in that case, the optimal gamma value may be adjusted to be greater than the initial gamma coefficient.
  • the optimal gamma value corresponding to the initial coefficient may be determined based on the initial gamma coefficient associated with the gray level and the pixel value of the central pixel of the target region.
  • the optimal gamma value may be determined based on the equation (17):
  • gamma_value′ = initial gamma_value × (1 + pix_value),  (17)
  • gamma_value′ denotes the optimal gamma value associated with a gray level
  • initial gamma_value denotes the initial gamma coefficient associated with the gray level
  • pix_value denotes the normalized pixel value of the central pixel of the target region.
  • the pix_value is within the range from 0 to 1.
  • Exemplary techniques for normalizing the pixel value of the central pixel may include a Min-Max normalization technique, a z-score normalization technique, a decimal scaling normalization technique, or the like, or any combination thereof.
  • the gamma curve associated with a gray level may be determined based on the corresponding optimal gamma value, and determined based on the equation (18):
  • Y_out denotes an output pixel value
  • Y_in denotes an input pixel value
  • n denotes the bit width of the image
  • gamma_value′ denotes an optimal gamma value
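  • A standard gamma mapping consistent with the variables listed for equation (18) may be sketched as follows; the exact form, including the normalization by 2^n − 1, is an assumption, since the equation itself is not reproduced in this text:

        import numpy as np

        def gamma_curve(y_in, gamma_value, n_bits=8):
            # Assumed form of equation (18): normalize the input by the maximum
            # code value for the bit width n, apply the gamma exponent, and rescale.
            max_code = (1 << n_bits) - 1
            return max_code * (np.asarray(y_in, dtype=float) / max_code) ** gamma_value

        # e.g., gamma_curve(64, 0.5) brightens shadow values; gamma_curve(64, 2.0) darkens them.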
  • FIG. 20 is a schematic diagram illustrating exemplary optimal gamma curves according to some embodiments of the present disclosure.
  • G_s denotes a gamma curve associated with a shadow gray level of the target region
  • G_m denotes a gamma curve associated with a middle gray level of the target region
  • G_h denotes a gamma curve associated with a high gray level of the target region
  • the gamma curve G s may be associated with a gray level of 0.25 in the target region; the gamma curve G m may be associated with a gray level of 0.5 in the target region; and the gamma curve G h may be associated with a gray level of 0.75 in the target region.
  • the statistical probability P_ys′ associated with the gray level of 0.25 may be adjusted based on the gamma curve G_s.
  • the statistical probability P_ym′ associated with the gray level of 0.5 may be adjusted based on the gamma curve G_m.
  • the statistical probability P_yh′ associated with the gray level of 0.75 may be adjusted based on the gamma curve G_h.
  • the mapping curve of the target region may be determined based on the three gamma curves G_s, G_m, and G_h, and three corresponding Gaussian weight curves B_s, B_m, and B_h.
  • FIG. 21 is a schematic diagram illustrating exemplary Gaussian weight curves according to some embodiments of the present disclosure.
  • the Gaussian weight curve B s may be associated with the shadow gray level of the target region; the Gaussian weight curve B m may be associated with the middle gray level of the target region; and the Gaussian weight curve B h may be associated with the high gray level of the target region.
  • the mapping curve of the target region may be determined based on the equation (19):
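  • Consistent with equation (14), equation (19) presumably blends the three gamma curves with the three Gaussian weight curves. The following Python sketch (the Gaussian centers at 0.25/0.5/0.75, the sigma, the example gamma values, and all function names are illustrative assumptions) shows such a blend on pixel values normalized to [0, 1]:

        import numpy as np

        def gaussian_weights(y, centers=(0.25, 0.5, 0.75), sigma=0.15):
            # Three Gaussian weight curves B_s, B_m, B_h, renormalized so that
            # they sum to 1 at every input value.
            w = np.exp(-((y - np.asarray(centers)[:, None]) ** 2) / (2 * sigma ** 2))
            return w / w.sum(axis=0)

        def mapping_curve(y, gammas=(0.6, 1.0, 1.6)):
            # Assumed form of equation (19): Y_out = sum_i G_i(Y_in) * B_i(Y_in),
            # where G_i(Y_in) = Y_in ** gamma_i.
            y = np.asarray(y, dtype=float)
            g = np.stack([y ** gamma for gamma in gammas])   # G_s, G_m, G_h
            b = gaussian_weights(y)                          # B_s, B_m, B_h
            return (g * b).sum(axis=0)

        # e.g., mapping_curve(np.linspace(0.0, 1.0, 5))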
  • FIG. 22 is a schematic diagram illustrating an exemplary mapping curve according to some embodiments of the present disclosure. The pixel value of a pixel that needs to be processed may be adjusted based on the mapping curve as illustrated in FIG. 22 .
  • FIG. 23 is a flowchart illustrating an exemplary process of processing an image according to some embodiments of the present disclosure.
  • one or more operations of process 2300 may be implemented in the image processing system 100 illustrated in FIG. 1 .
  • the process 2300 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120 , the determination module 350 of the image processing device 120 , etc.).
  • a first luminance image of an original image may be obtained.
  • the first luminance image of the original image may be obtained by determining luminance of each pixel in the original image.
  • the original image may be captured by one or more sensors and/or imaging devices.
  • the original image may be retrieved from a storage device (e.g., the storage 140 illustrated in FIG. 1 , the disk 270 illustrated in FIG. 2A ) or received from an imaging device (e.g., the imaging device 110 illustrated in FIG. 1 ).
  • the original image may be retrieved from an external source, such as a hard disk, a wireless terminal, or the like, or any combination thereof, that is connected to or otherwise communicates with the system 100 .
  • the original image may include red green blue (RGB) data, Bayer data, Luma and Chroma (YUV) data, raw image format (RAW) data, joint photographic experts group (JPEG) data, or the like, or any combination thereof.
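  • For an original image carrying RGB data, one common way to obtain the first luminance image is a weighted sum of the color channels. The BT.601 weights used in the sketch below are an illustrative assumption; the disclosure does not fix a particular conversion:

        import numpy as np

        def rgb_to_luminance(rgb):
            # rgb: H x W x 3 array; returns an H x W luminance image using
            # BT.601 weights (an assumed choice among several possible weightings).
            rgb = np.asarray(rgb, dtype=float)
            return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]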
  • the first luminance image of the original image may be decomposed to obtain a plurality of first decomposed images (or referred to as unadjusted decomposed images).
  • the decomposition of the first luminance image of the original image may be performed by the decomposition module 320 as illustrated in FIG. 3 .
  • the method of decomposing the first luminance image of the original image may be described as 620 of process 600 in FIG. 6 in the present disclosure.
  • a plurality of second (or frequency-adjusted) decomposed images may be determined based on the plurality of first (or unadjusted) decomposed images.
  • one or more frequency adjustment operations for determining the second (or frequency-adjusted) decomposed images may be performed by the frequency adjustment module 330 as illustrated in FIGS. 3 and 4 .
  • the method of determining the plurality of second decomposed images may be described as 630 of process 600 in FIG. 6 in the present disclosure.
  • a second (or frequency-adjusted) luminance image of the original image may be reconstructed based on the plurality of second (or frequency-adjusted) decomposed images.
  • one or more operations of reconstructing the second luminance image may be performed by the reconstruction module 340 as illustrated in FIG. 3 .
  • the method of reconstructing the second luminance image may be described as 640 of process 600 in FIG. 6 in the present disclosure.
  • a final image of the original image may be determined based on the first (or unadjusted) luminance image, the second (or frequency-adjusted) luminance image, and the original image.
  • one or more operations of determining the final image of the original image may be performed by the determination module 350 as illustrated in FIG. 3 .
  • the method of determining the final image of the original image may be described as 650 of process 600 in FIG. 6 in the present disclosure.
  • a target region in the final image and a plurality of gray levels in the target region may be identified.
  • the method of identifying the target region and the plurality of gray levels may be described as 1210 of process 1200 in FIG. 12 in the present disclosure.
  • a plurality of statistical probabilities may be determined, in which a statistical probability is associated with a gray level of the plurality of gray levels.
  • the method of determining the plurality of statistical probabilities may be described as 1220 of process 1200 in FIG. 12 in the present disclosure.
  • a mapping curve of the target region may be determined based on the plurality of statistical probabilities and a plurality of predetermined curves associated with the plurality of gray levels.
  • the method of determining the mapping curve of the target region may be described as 1230 of process 1200 in FIG. 12 in the present disclosure.
  • At 2390, at least one pixel that needs to be processed in the target region may be identified.
  • the method of identifying the at least one pixel that needs to be processed in the target region may be described as 1240 of process 1200 in FIG. 12 in the present disclosure.
  • the value of a pixel that needs to be processed may be determined based on the mapping curve of the target region.
  • the method of determining the value of the at least one pixel that needs to be processed may be described as 1250 of process 1200 in FIG. 12 in the present disclosure.
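  • As an overall illustration of process 2300, the following Python sketch strings the luminance-processing stages together. The use of the pywt package, the Haar wavelet, the fixed detail gain, and the ratio-based final-image step are all assumptions made for illustration; the disclosure does not prescribe these particular choices:

        import numpy as np
        import pywt

        def enhance_luminance(y, levels=3, detail_gain=1.5):
            # Decompose the first luminance image, amplify the high-frequency
            # sub-bands, and reconstruct the second luminance image.
            coeffs = pywt.wavedec2(y, 'haar', level=levels)
            adjusted = [coeffs[0]] + [tuple(detail_gain * d for d in details)
                                      for details in coeffs[1:]]
            return pywt.waverec2(adjusted, 'haar')

        def process_image(rgb):
            # Obtain the first luminance image, build the second (frequency-adjusted)
            # luminance image, and scale the original image by the luminance ratio.
            rgb = np.asarray(rgb, dtype=float)
            y1 = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
            y2 = enhance_luminance(y1)[:y1.shape[0], :y1.shape[1]]
            ratio = y2 / np.maximum(y1, 1e-6)
            return np.clip(rgb * ratio[..., None], 0, 255)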
  • any suitable computer readable media may be used for storing instructions for performing the processes described herein.
  • computer readable media can be transitory or non-transitory.
  • non-transitory computer readable media can include media such as magnetic media (such as hard disks, floppy disks, etc.), optical media (such as compact discs, digital video discs, Blu-ray discs, etc.), semiconductor media (such as flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), etc.), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media.
  • transitory computer readable media can include signals on networks, in connectors, conductors, optical fibers, circuits, any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.
  • the numbers expressing quantities, properties, and so forth, used to describe and claim certain embodiments of the application are to be understood as being modified in some instances by the term “about,” “approximate,” or “substantially.” For example, “about,” “approximate,” or “substantially” may indicate ±20% variation of the value it describes, unless otherwise stated. Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable.

Abstract

According to aspects of the present disclosure, methods, systems, and media for processing an image are provided. A system may include at least one computer-readable storage medium including a set of instructions for processing an original image, and at least one processor in communication with the at least one computer-readable storage medium. When executing the set of instructions, the system is directed to obtain a first luminance image of the original image; decompose the first luminance image to provide a plurality of first decomposed images; adjust pixel frequencies in at least some of the plurality of first decomposed images to generate a plurality of second decomposed images; generate a second luminance image of the original image based on the plurality of second decomposed images; and determine a final image of the original image based on the first luminance image, the second luminance image, and the original image.

Description

    CROSS-REFERENCE TO THE RELATED APPLICATIONS
  • This application is a continuation of U.S. application Ser. No. 16/219,907, filed on Dec. 13, 2018, which is a continuation of International Application No. PCT/CN2017/089192, filed on Jun. 20, 2017, which claims priority to Chinese Patent Application No. 201610456890.6, filed on Jun. 21, 2016 and Chinese Patent Application No. 201710021180.5, filed on Jan. 11, 2017, the contents of which are hereby incorporated by reference.
  • TECHNICAL FIELD
  • The present disclosure relates to the technical field of image processing, and more particularly, to systems and methods for image processing.
  • BACKGROUND
  • In general, a display device may display only part of the range of luminance found in nature. An image displayed on a display device may therefore appear overexposed in a bright area and underexposed in a dark area, making it difficult to distinguish some details in the image. Systems and methods for image processing that can generate images with more details and stronger contrast are thus widely welcome and in high demand.
  • SUMMARY
  • According to an aspect of the present disclosure, a system may include at least one computer-readable storage medium including a set of instructions for processing an original image, and at least one processor in communication with the at least one computer-readable storage medium. When executing the set of instructions, the system may be directed to: obtain a first luminance image of the original image; decompose the first luminance image of the original image to provide a plurality of first decomposed images; adjust pixel frequencies in at least some of the plurality of first decomposed images to generate a plurality of second decomposed images; generate a second luminance image of the original image based on the plurality of second decomposed images; and determine a final image of the original image based on the first luminance image, the second luminance image, and the original image.
  • In some embodiments, to adjust pixel frequencies in at least some of the plurality of first decomposed images, the system may be further directed to: for a specific pixel in a first decomposed image, identify a frequency of the specific pixel; determine a gain of the specific pixel based on the frequency of the specific pixel and a frequency adjustment threshold associated with the first decomposed image; and adjust the frequency of the specific pixel based on the gain of the specific pixel.
  • In some embodiments, to determine a gain of the specific pixel, the system may be further directed to: identify, from the plurality of the first decomposed images, a certain number of pixels each of which is located at a position corresponding to the specific pixel of the first decomposed images; determine the frequencies of the identified certain number of pixels, a pixel of the certain number of pixels having a frequency; filter the certain number of frequencies to obtain a plurality of filtered frequencies; determine an average filtered frequency associated with the position based on the plurality of filtered frequencies; determine the gain associated with the position based on the average filtered frequency; and assign the gain associated with the position to the specific pixel.
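  • A rough sketch of the gain determination described above might look as follows in Python. The specific gain rule (boosting coefficients whose average filtered magnitude falls below the threshold), the 3×3 mean filter, and all names are assumptions made for illustration; the disclosure does not specify these formulas:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def adjust_decomposed_images(decomposed, threshold, max_gain=2.0):
            # decomposed: list of same-sized high-frequency sub-images; a pixel's
            # "frequency" is taken here to be its coefficient magnitude.
            stack = np.stack([np.abs(d) for d in decomposed])                 # frequencies per position
            filtered = np.stack([uniform_filter(f, size=3) for f in stack])   # filtered frequencies
            avg = filtered.mean(axis=0)                                       # average filtered frequency
            gain = np.where(avg < threshold, max_gain, 1.0)                   # gain per position
            return [d * gain for d in decomposed]                             # adjusted sub-images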
  • In some embodiments, to determine the final image of the original image based on the first luminance image, the second luminance image, and the original image, the system may be further directed to: for an original pixel of a plurality of original pixels in the original image, identify the position of the original pixel in the original image; determine a first luminance of a pixel in the first luminance image, the pixel in the first luminance image being at the same position as the original pixel in the original image; determine a second luminance of a pixel in the second luminance image, the pixel in the second luminance image being at the same position as the original pixel in the original image; and determine a final pixel associated with the original pixel based on the first luminance and the second luminance; and generate the final image of the original image based on the determined final pixels associated with the plurality of original pixels.
  • In some embodiments, to obtain the plurality of first decomposed images, the system may be further directed to perform one or more orders of decomposition on the first luminance image.
  • In some embodiments, the one or more orders of decomposition may be performed based on a wavelet transformation.
  • In some embodiments, to reconstruct a second luminance image of the original image based on the plurality of second decomposed images, the system may be further directed to: perform a reverse operation of the decomposition that provides the plurality of first decomposed images.
  • According to an aspect of the present disclosure, a system may include at least one computer-readable storage medium including a set of instructions for processing an image, and at least one processor in communication with the computer-readable storage medium. When executing the set of instructions, the system may be directed to: identify a target region in the image, the target region having a plurality of gray levels; determine a plurality of statistical probabilities relating to the plurality of gray levels, a statistical probability relating to a gray level of the plurality of gray levels; determine a mapping curve of the target region based on the plurality of statistical probabilities and a plurality of predetermined curves associated with the plurality of gray levels; identify at least one pixel that needs to be processed in the target region; for a pixel of the at least one pixel that needs to be processed, determine the value of the pixel based on the mapping curve of the target region; and generate a processed image based on the determined at least one value of the at least one pixel.
  • In some embodiments, to determine the mapping curve of the target region, the system may be further directed to: determine a plurality of optimal coefficients relating to the plurality of statistical probabilities, an optimal coefficient being associated with a statistical probability of the plurality of statistical probabilities relating to the target region; determine a plurality of optimal curves, an optimal curve being associated with an optimal coefficient of the plurality of optimal coefficients; and determine the mapping curve of the target region based on the plurality of optimal curves and the plurality of predetermined curves.
  • In some embodiments, to determine the plurality of optimal coefficients, the system may be further directed to: for a statistical probability of the plurality of statistical probabilities, determine an initial coefficient associated with the statistical probability based on the gray level associated with the statistical probability; identify a central pixel of the target region; and determine an optimal coefficient corresponding to the initial coefficient based on the central pixel of the target region.
  • In some embodiments, to determine the mapping curve of the target region based on the plurality of optimal curves and the plurality of predetermined curves, the system may be further directed to: for a gray level of the plurality of gray levels in the target region, determine a sub mapping curve associated with the gray level based on an optimal curve associated with the gray level and a predetermined curve associated with the gray level; and determine the mapping curve of the target region based on the plurality of the sub mapping curves associated with the plurality of gray levels.
  • In some embodiments, the image may include at least one target region.
  • According to an aspect of the present disclosure, a method for processing an original image may comprise: obtaining a first luminance image of the original image; decomposing the first luminance image of the original image to provide a plurality of first decomposed images; adjusting pixel frequencies in at least some of the plurality of first decomposed images to generate a plurality of second decomposed images; generating a second luminance image of the original image based on the plurality of second decomposed images; and determining a final image of the original image based on the first luminance image, the second luminance image, and the original image.
  • In some embodiments, the adjusting pixel frequencies in at least some of the plurality of first decomposed images may comprise: for a specific pixel in a first decomposed image, identifying a frequency of the specific pixel; determining a gain of the specific pixel based on the frequency of the specific pixel and a frequency adjustment threshold associated with the first decomposed image; and adjusting the frequency of the specific pixel based on the gain of the specific pixel.
  • In some embodiments, the determining a gain of the specific pixel may comprise: identifying from the plurality of the first decomposed images, a certain number of pixels that are located at a position corresponding to the specific pixel of the first decomposed images; determining the frequencies of the identified certain number of pixels, a pixel of the certain number of pixels having a frequency; filtering the certain number of frequencies to obtain a plurality of filtered frequencies; determining an average filtered frequency associated with the position based on the plurality of filtered frequencies; determining the gain associated with the position based on the average filtered frequency; and assigning the gain associated with the position to the specific pixel.
  • In some embodiments, the determining the final image of the original image based on the first luminance image, the second luminance image, and the original image may comprise: for an original pixel of a plurality of original pixels in the original image, identifying the position of the original pixel in the original image; determining a first luminance of a pixel in the first luminance image, the pixel in the first luminance image being at the same position as the original pixel in the original image; determining a second luminance of a pixel in the second luminance image, the pixel in the second luminance image being at the same position as the original pixel in the original image; and determining a final pixel associated with the original pixel based on the first luminance and the second luminance; and generating the final image of the original image based on the determined final pixels associated with the plurality of original pixels.
  • In some embodiments, the obtaining the plurality of first decomposed images may comprise performing one or more orders of decomposition on the first luminance image.
  • In some embodiments, the one or more orders of decomposition may be performed based on a wavelet transformation.
  • In some embodiments, the reconstructing a second luminance image of the original image based on the plurality of second decomposed images may comprise performing a reverse operation of the decomposition that provides the plurality of first decomposed images.
  • According to an aspect of the present disclosure, a method for processing an image may comprise: identifying a target region in the image, the target region having a plurality of gray levels; determining a plurality of statistical probabilities relating to the plurality of gray levels, a statistical probability relating to a gray level of the plurality of gray levels; determining a mapping curve of the target region based on the plurality of statistical probabilities and a plurality of predetermined curves associated with the plurality of gray levels; identifying at least one pixel that needs to be processed in the target region; for a pixel of the at least one pixel that needs to be processed, determining the value of the pixel based on the mapping curve of the target region; and generating a processed image based on the determined at least one value of the at least one pixel.
  • In some embodiments, the determining the mapping curve of the target region may comprise: determining a plurality of optimal coefficients relating to the plurality of statistical probabilities, an optimal coefficient being associated with a statistical probability of the plurality of statistical probabilities relating to the target region; determining a plurality of optimal curves, an optimal curve being associated with an optimal coefficient of the plurality of optimal coefficients; and determining the mapping curve of the target region based on the plurality of optimal curves and the plurality of predetermined curves.
  • In some embodiments, the determining the plurality of optimal coefficients may comprise: for a statistical probability of the plurality of statistical probabilities, determining an initial coefficient associated with the statistical probability based on the gray level associated with the statistical probability; identifying a central pixel of the target region; and determining an optimal coefficient corresponding to the initial coefficient based on the central pixel of the target region.
  • In some embodiments, the determining the mapping curve of the target region based on the plurality of optimal curves and the plurality of predetermined curves may comprise: for a gray level of the plurality of gray levels in the target region, determining a sub mapping curve associated with the gray level based on an optimal curve associated with the gray level and a predetermined curve associated with the gray level; and determining the mapping curve of the target region based on the plurality of the sub mapping curves associated with the plurality of gray levels.
  • In some embodiments, the image may include at least one target region.
  • According to an aspect of the present disclosure, a non-transitory computer readable medium may comprise at least one set of instructions for processing an original image, wherein when executed by at least one processor, the at least one set of instructions may direct the at least one processor to perform acts of: obtaining a first luminance image of the original image; decomposing the first luminance image of the original image to provide a plurality of first decomposed images; adjusting pixel frequencies in at least some of the plurality of first decomposed images to generate a plurality of second decomposed images; generating a second luminance image of the original image based on the plurality of second decomposed images; and determining a final image of the original image based on the first luminance image, the second luminance image, and the original image.
  • According to an aspect of the present disclosure, a non-transitory computer readable medium may comprise at least one set of instructions for processing an image, wherein when executed by at least one processor, the at least one set of instructions may direct the at least one processor to perform acts of: identifying a target region in the image, the target region having a plurality of gray levels; determining a plurality of statistical probabilities relating to the plurality of gray levels, a statistical probability relating to a gray level of the plurality of gray levels; determining a mapping curve of the target region based on the plurality of statistical probabilities and a plurality of predetermined curves associated with the plurality of gray levels; identifying at least one pixel that needs to be processed in the target region; for a pixel of the at least one pixel that needs to be processed, determining the value of the pixel based on the mapping curve of the target region; and generating a processed image based on the determined at least one value of the at least one pixel.
  • According to an aspect of the present disclosure, a system may include: at least one acquisition module configured to obtain a first luminance image of the original image; at least one decomposition module configured to decompose the first luminance image of the original image to provide a plurality of first decomposed images; at least one frequency adjustment module configured to adjust pixel frequencies in at least some of the plurality of first decomposed images to generate a plurality of second decomposed images; at least one reconstruction module configured to generate a second luminance image of the original image based on the plurality of second decomposed images; and at least one determination module configured to determine a final image of the original image based on the first luminance image, the second luminance image, and the original image.
  • According to still an aspect of the present disclosure, a system may include at least one acquisition module and at least one determination module. The at least one acquisition module may be configured to: identify a target region in the image, the target region having a plurality of gray levels; and determine a plurality of statistical probabilities relating to the plurality of gray levels, a statistical probability relating to a gray level of the plurality of gray levels. The at least one determination module may be configured to: determine a mapping curve of the target region based on the plurality of statistical probabilities and a plurality of predetermined curves associated with the plurality of gray levels; identify at least one pixel that needs to be processed in the target region; for a pixel of the at least one pixel that needs to be processed, determine the value of the pixel based on the mapping curve of the target region; and generate a processed image based on the determined at least one value of the at least one pixel.
  • Additional features will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by production or operation of the examples. The features of the present disclosure may be realized and attained by practice or use of various aspects of the methodologies, instrumentalities and combinations set forth in the detailed examples discussed below.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present disclosure is further described in terms of exemplary embodiments. These exemplary embodiments are described in detail with reference to the drawings. These embodiments are non-limiting exemplary embodiments, in which like reference numerals represent similar structures throughout the several views of the drawings, and wherein:
  • FIG. 1 is a block diagram illustrating an exemplary system for image processing according to some embodiments of the present disclosure;
  • FIG. 2A is a schematic diagram illustrating an exemplary computing device according to some embodiments of the present disclosure;
  • FIG. 2B is a schematic diagram illustrating an exemplary mobile device according to some embodiments of the present disclosure;
  • FIG. 3 is a block diagram illustrating an exemplary image processing device according to some embodiments of the present disclosure;
  • FIG. 4 is a block diagram illustrating an exemplary frequency adjustment module according to some embodiments of the present disclosure;
  • FIG. 5 is a block diagram illustrating an exemplary determination module according to some embodiments of the present disclosure;
  • FIG. 6 is a flowchart illustrating an exemplary process for processing an original image according to some embodiments of the present disclosure;
  • FIG. 7 is a flowchart illustrating an exemplary process for determining a decomposed image according to some embodiments of the present disclosure;
  • FIG. 8 is a flowchart illustrating an exemplary process for determining a gain of a pixel in a decomposed image according to some embodiments of the present disclosure;
  • FIG. 9 is a flowchart illustrating an exemplary process for obtaining a filtered frequency associated with a position according to some embodiments of the present disclosure;
  • FIGS. 10-I through 10-III are schematic diagrams illustrating Nth order decomposed images of high frequency according to some embodiments of the present disclosure;
  • FIG. 11 is a flowchart illustrating an exemplary process for determining a final image of the original image according to some embodiments of the present disclosure;
  • FIG. 12 is a flowchart illustrating an exemplary process for processing an image according to some embodiments of the present disclosure;
  • FIG. 13 is a flowchart illustrating an exemplary process for processing at least one target region having a part outside an image according to some embodiments of the present disclosure;
  • FIGS. 14-I through 14-V are schematic diagrams illustrating patching at least one edge of an image according to some embodiments of the present disclosure;
  • FIGS. 15-I through 15-V are schematic diagrams illustrating patching at least one edge of an image according to some embodiments of the present disclosure;
  • FIG. 16 is a flowchart illustrating an exemplary process for determining a mapping curve of a target region according to some embodiments of the present disclosure;
  • FIG. 17 is a flowchart illustrating an exemplary process for determining an optimal coefficient of a target region according to some embodiments of the present disclosure;
  • FIG. 18 is a flowchart illustrating an exemplary process for determining an initial optimal coefficient of a target region according to some embodiments of the present disclosure;
  • FIG. 19 is a flowchart illustrating an exemplary process for determining a mapping curve of a target region according to some embodiments of the present disclosure;
  • FIG. 20 is a schematic diagram illustrating exemplary optimal gamma curves according to some embodiments of the present disclosure;
  • FIG. 21 is a schematic diagram illustrating exemplary Gaussian weight curves according to some embodiments of the present disclosure;
  • FIG. 22 is a schematic diagram illustrating an exemplary mapping curve according to some embodiments of the present disclosure; and
  • FIG. 23 is a flowchart illustrating an exemplary process for processing an image according to some embodiments of the present disclosure.
  • DETAILED DESCRIPTION
  • Subject matter will now be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific example embodiments. Subject matter may, however, be embodied in a variety of different forms and, therefore, covered or claimed subject matter is intended to be construed as not being limited to any example embodiments set forth herein; example embodiments are provided merely to be illustrative. Likewise, a reasonably broad scope for claimed or covered subject matter is intended. Among other things, for example, subject matter may be embodied as methods, devices, components, or systems. The following detailed description is, therefore, not intended to be limiting on the scope of what is claimed.
  • Throughout the specification and claims, terms may have nuanced meanings suggested or implied in context beyond an explicitly stated meaning. Likewise, the phrase “in one embodiment” as used herein does not necessarily refer to the same embodiment and the phrase “in another embodiment” as used herein does not necessarily refer to a different embodiment. It is intended, for example, that claimed subject matter includes combinations of example embodiments in whole or in part.
  • In general, terminology may be understood at least in part from usage in context. For example, terms, such as “and”, “or”, or “and/or,” as used herein may include a variety of meanings that may depend at least in part upon the context in which such terms are used. Typically, “or” if used to associate a list, such as A, B or C, is intended to mean A, B, and C, here used in the inclusive sense, as well as A, B or C, here used in the exclusive sense. In addition, the term “one or more” as used herein, depending at least in part upon context, may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures or characteristics in a plural sense. Similarly, terms, such as “a,” “an,” or “the,” again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context. In addition, the term “based on” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
  • It will be understood that the terms “system,” “engine,” “unit,” “module,” and/or “block” used herein are one method to distinguish different components, elements, parts, sections, or assemblies of different levels in ascending order. However, the terms may be replaced by other expressions if they may achieve the same purpose.
  • Generally, the word “module,” “unit,” or “block,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions. A module, a unit, or a block described herein may be implemented as software and/or hardware and may be stored in any type of non-transitory computer-readable medium or other storage device. In some embodiments, a software module/unit/block may be compiled and linked into an executable program. It will be appreciated that software modules can be callable from other modules/units/blocks or from themselves, and/or may be invoked in response to detected events or interrupts. Software modules/units/blocks configured for execution on computing devices (e.g., processor 220 as illustrated in FIG. 2A) may be provided on a computer readable medium, such as a compact disc, a digital video disc, a flash drive, a magnetic disc, or any other tangible medium, or as a digital download (and can be originally stored in a compressed or installable format that needs installation, decompression, or decryption prior to execution). Such software code may be stored, partially or fully, on a storage device of the executing computing device, for execution by the computing device. Software instructions may be embedded in a firmware, such as an EPROM. It will be further appreciated that hardware modules/units/blocks may include connected logic components, such as gates and flip-flops, and/or programmable units, such as programmable gate arrays or processors. The modules/units/blocks or computing device functionality described herein may be implemented as software modules/units/blocks, but may also be represented in hardware or firmware. In general, the modules/units/blocks described herein refer to logical modules/units/blocks that may be combined with other modules/units/blocks or divided into sub-modules/sub-units/sub-blocks despite their physical organization or storage.
  • It will be understood that when a unit, engine, module or block is referred to as being “on,” “connected to,” or “coupled to” another unit, engine, module, or block, it may be directly on, connected or coupled to, or communicate with the other unit, engine, module, or block, or an intervening unit, engine, module, or block may be present, unless the context clearly indicates otherwise. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
  • These and other features, and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, may become more apparent upon consideration of the following description with reference to the accompanying drawing(s), all of which form a part of this specification. It is to be expressly understood, however, that the drawing(s) are for the purpose of illustration and description only and are not intended to limit the scope of the present disclosure. It is understood that the drawings are not to scale.
  • An aspect of the present disclosure relates to systems and methods for image processing. According to the present disclosure, a luminance image of an original image may be first decomposed and then reconstructed after adjusting the frequencies of pixels. The luminance of pixels in a processed image may be determined based on the luminance of corresponding pixels in the decomposed images and the reconstructed images. According to the present disclosure, a target region of an image may be processed by determining the value of pixels in the target region based on statistical probabilities associated with gray levels in the target region and a plurality of predetermined coefficients. According to the systems and methods in the present disclosure, an image may display more details and stronger contrast than the original.
  • FIG. 1 is a block diagram illustrating an exemplary image processing system 100 according to some embodiments of the present disclosure. The image processing system 100 may include an imaging device 110, an image processing device 120, a terminal 130, a storage 140, a network 150, and a base station 160.
  • The imaging device 110 may be configured to capture one or more images. The one or more images may be images about a static or moving object. The image may include a still picture, a motion picture, a video (offline or live streaming), a frame of a video, or a combination thereof.
  • The imaging device 110 may be any suitable device that is capable of capturing an image. The imaging device 110 may be and/or include a camera, a sensor, a video recorder, or the like, or any combination thereof. The imaging device 110 may be and/or include any suitable type of camera, such as a fixed camera, a fixed dome camera, a covert camera, a Pan-Tilt-Zoom (PTZ) camera, a thermal camera, etc. The imaging device 110 may be and/or include any suitable type of sensor, such as an audio sensor, a light sensor, a wind speed sensor, or the like, or a combination thereof.
  • The light sensor (e.g., an infrared detector) may be configured to obtain a light signal, such as a near infrared signal. The audio sensor may be configured to obtain an audio signal. The audio signal and the light signal may be configured to provide reference information for processing images captured by the imaging device 110.
  • Data obtained by the imaging device 110 (e.g., images, audio signals, light signals, etc.) may be stored in the storage 140, or sent to the image processing device 120 or the terminal(s) 130 via the network 150.
  • The image processing device 120 may be configured to process an image. For example, the image processing device 120 may be configured to, based on the image, identify luminance of the image, decompose a first luminance image of the image, reconstruct a second luminance image of the image, determine a final image associated with the image, or the like, or a combination thereof. As another example, the image processing device 120 may be configured to identify a target region of the image, determine a statistical probability associated with the gray level of the target region, determine a mapping curve of the target region, determine a value of a pixel in the image, or the like, or any combination thereof. The image that the image processing device 120 processes may be captured by the imaging device 110 or retrieved from another source (e.g., the storage 140, the terminal(s) 130, etc.).
  • The image processing device 120 may further be configured to generate a control signal. The control signal may be generated based on a feature of an object being imaged, luminance of a scene when an image of the scene is being acquired, displayed luminance of an image, or the like, or any combination thereof. The control signal may be used to control the imaging device 110. For example, the image processing device 120 may generate a control signal to instruct the imaging device 110 (e.g., a camera) to track an object and obtain an image of the object.
  • The image processing device 120 may be any suitable device that is capable of processing an image. For example, the image processing device 120 may include a high-performance computer specializing in image processing or transaction processing, a personal computer, a portable device, a server, a microprocessor, an integrated chip, a digital signal processor (DSP), a tablet computer, a personal digital assistant (PDA), a mobile phone, or the like, or a combination thereof. In some embodiments, the image processing device 120 may be implemented on a computing device 200A shown in FIG. 2A and/or a mobile device 200B shown in FIG. 2B.
  • In some embodiments, the image processing device 120 may include one or more processing engines (e.g., single-core processing engine(s) or multi-core processor(s)). Merely by way of example, the image processing device 120 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction-set computer (RISC), a microprocessor, or the like, or any combination thereof.
  • The terminal 130 may be connected to or communicate with the image processing device 120. The terminal 130 may allow one or more operators (e.g., a law enforcement officer, etc.) to control the production and/or display of the data (e.g., the image captured by the imaging device 110) on a display. The terminal 130 may include an input device, an output device, a control panel, a display (not shown in FIG. 1), or the like, or a combination thereof.
  • Exemplary input device may include a keyboard, a touch screen, a mouse, a remote controller, a wearable device, or the like, or a combination thereof. For example, the input device may include alphanumeric and other keys that may be inputted via a keyboard, a touch screen (e.g., with haptics or tactile feedback, etc.), a speech input, an eye tracking input, a brain monitoring system, or any other comparable input mechanism. The input information received through the input device may be communicated to the image processing device 120 via the network 150 for further processing. Exemplary input device may further include a cursor control device, such as a mouse, a trackball, or cursor direction keys to communicate direction information and command selections to, for example, the image processing device 120 and to control cursor movement on display or another display device.
  • A display may be configured to display the data received (e.g., the image captured by the imaging device 110). The information may include data before and/or after data processing, a request for input or parameter relating to image acquisition and/or processing, or the like, or any combination thereof. Exemplary display may include a liquid crystal display (LCD), a light emitting diode (LED)-based display, a flat panel display or curved screen (or television), a cathode ray tube (CRT), or the like, or a combination thereof.
  • The storage 140 may store data and/or instructions. The data may include an image (e.g., an image obtained by the imaging device 110), relevant information of the image, etc. In some embodiments, the storage 140 may store data and/or instructions that the image processing device 120 may execute or use to perform exemplary methods described in the present disclosure. In some embodiments, the storage 140 may include a mass storage, a removable storage, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof. Exemplary mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc. Exemplary removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc. Exemplary volatile read-and-write memory may include a random access memory (RAM). Exemplary RAM may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), and a zero-capacitor RAM (Z-RAM), etc. Exemplary ROM may include a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), and a digital versatile disk ROM, etc. In some embodiments, the storage 140 may be implemented on a cloud platform. Merely by way of example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
  • The network 150 may facilitate communications between various components of the image processing system 100. The network 150 may be a single network, or a combination of various networks. Merely by way of example, the network 150 may include a cable network, a wireline network, an optical fiber network, a telecommunications network, an intranet, an Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth™ network, a ZigBee™ network, a near field communication (NFC) network, a global system for mobile communications (GSM) network, a code-division multiple access (CDMA) network, a time-division multiple access (TDMA) network, a general packet radio service (GPRS) network, an enhanced data rate for GSM evolution (EDGE) network, a wideband code division multiple access (WCDMA) network, a high speed downlink packet access (HSDPA) network, a long term evolution (LTE) network, a user datagram protocol (UDP) network, a transmission control protocol/Internet protocol (TCP/IP) network, a short message service (SMS) network, a wireless application protocol (WAP) network, an ultra-wideband (UWB) network, an infrared communication link, or the like, or any combination thereof. The network 150 may also include various network access points, e.g., wired or wireless access points such as one or more base stations 160 or Internet exchange points through which a data source may connect to the network 150 in order to transmit information via the network 150.
  • It should be noted that the descriptions above in relation to the image processing system 100 are provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various variations and modifications may be conducted under the guidance of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, part or all of the image data generated by the imaging device 110 may be processed by the terminal 130. As another example, the imaging device 110 and the image processing device 120 may be implemented in one single device configured to perform the functions of the imaging device 110 and the image processing device 120 described in this disclosure. As still another example, the terminal 130 and the storage 140 may be combined with or be part of the image processing device 120 as a single device. Similar modifications should fall within the scope of the present disclosure.
  • FIG. 2A is an architecture illustrating an exemplary computing device 200A on which a specialized system incorporating the present teaching may be implemented. Such a specialized system includes a functional block diagram illustration of a hardware platform that may include user interface elements. The computing device 200A may be a general-purpose computer or a special purpose computer. The computing device 200A may be used to implement any component of image processing as described herein. For example, the image processing device 120 may be implemented on a computer such as the computing device 200A, via its hardware, software program, firmware, or a combination thereof. Although only one such computer is shown, for convenience, the computer functions relating to image processing as described herein may be implemented in a distributed fashion on a number of similar platforms to distribute the processing load.
  • The computing device 200A, for example, may include one or more communication (COM) ports 250 connected to and from a network connected thereto to facilitate data communications. The computing device 200A may also include a processor 220, in the form of one or more processors, for executing program instructions stored in a storage device (e.g., a disk 270, a read only memory (ROM) 230, or a random-access memory (RAM) 240), and when executing the program instructions, the processor 220 may be configured to cause the computing device 200A to perform the functions thereof described herein.
  • The exemplary computer platform may include an internal communication bus 210, program storage, and data storage of different forms, e.g., a disk 270, a ROM 230, or a RAM 240, for various data files to be processed and/or communicated by the computer, as well as possibly program instructions to be executed by the processor 220. The computing device 200A may also include an I/O component 260, supporting input/output flows between the computer and other components therein such as user interface elements (not shown in FIG. 2A). The computing device 200A may also receive programming and data via network communications.
  • Aspects of the methods of the image processing and/or other processes, as described herein, may be embodied in programming. Program aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of executable code and/or associated data that is carried on or embodied in a type of machine readable medium. Tangible non-transitory “storage” type media include any or all of the memory or other storage for the computers, processors, or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide storage at any time for the software programming.
  • All or portions of the software may at times be communicated through a network such as the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another, for example, from a management server or host computer of a scheduling system into the hardware platform(s) of a computing environment or other system implementing a computing environment or similar functionalities in connection with image processing. Thus, another type of media that may bear the software elements includes optical, electrical and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links. The physical elements that carry such waves, such as wired or wireless links, optical links or the like, also may be considered as media bearing the software. As used herein, unless restricted to tangible “storage” media, terms such as computer or machine “readable medium” refer to any medium that participates in providing instructions to a processor for execution.
  • A non-transitory machine-readable medium may take many forms, including but not limited to, a tangible storage medium, a carrier wave medium or physical transmission medium. Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer(s), or the like, which may be used to implement the system or any of its components shown in the drawings. Volatile storage media may include dynamic memory, such as a main memory of such a computer platform. Tangible transmission media may include coaxial cables, copper wire, and fiber optics, including the wires that form a bus within a computer system. Carrier-wave transmission media may take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media may include, for example: a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a PROM and EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer may read programming code and/or data. Many of these forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to a physical processor for execution.
  • Those skilled in the art will recognize that the present teachings are amenable to a variety of modifications and/or enhancements. For example, although the implementation of various components described herein may be embodied in a hardware device, it may also be implemented as a software only solution, e.g., an installation on an existing server. In addition, image processing as disclosed herein may be implemented as a firmware, firmware/software combination, firmware/hardware combination, or a hardware/firmware/software combination.
  • FIG. 2B is a schematic diagram illustrating an exemplary mobile device 200B according to some embodiments of the present disclosure. In some embodiments, the mobile device 200B may illustrate hardware and/or software components of the terminal 130. As illustrated in FIG. 2B, the mobile device 200B may include a communication platform 295, a display 255, a graphic processing unit (GPU) 266, a central processing unit (CPU) 265, an I/O 260, a memory 275, and a storage 290. In some embodiments, any other suitable component, including but not limited to a system bus or a controller (not shown), may also be included in the mobile device 200B. In some embodiments, a mobile operating system 280 (e.g., iOS™, Android™, Windows Phone™, etc.) and one or more applications 285 may be loaded into the memory 275 from the storage 290 in order to be executed by the CPU 265. The applications 285 may include a browser or any other suitable mobile apps for receiving and rendering information relating to image processing or other information from, for example, the image processing device 120. User interactions with the information stream may be achieved via the I/O 260 and provided to the image processing device 120 and/or other components of the image processing system 100 via the network 150.
  • To implement various modules, units, and their functionalities described in the present disclosure, computer hardware platforms may be used as the hardware platform(s) for one or more of the elements described herein. A computer with user interface elements may be used to implement a personal computer (PC) or any other type of work station or terminal device. A computer may also act as a server if appropriately programmed.
  • FIG. 3 is a block diagram illustrating an exemplary image processing device 120 according to some embodiments of the present disclosure. The image processing device 120 may include an acquisition module 310, a decomposition module 320, a frequency adjustment module 330, a reconstruction module 340, and a determination module 350. The image processing device 120 may include more or fewer components without loss of generality. For example, two of the modules may be combined into a single module, or one of the modules may be divided into two or more modules. As another example, one or more of the modules may reside on different computing devices (e.g., a desktop, a laptop, a mobile device, a tablet computer, a wearable computing device, or the like, or a combination thereof). As still another example, the image processing device 120 may be implemented on the computing device 200A shown in FIG. 2A or the mobile device 200B shown in FIG. 2B.
  • Here and also throughout the present disclosure, a module may be implemented in many different ways and as hardware, software or in different combinations of hardware and software. For example, all or parts of a module implementations may be a processing circuitry that may include part or all of an instruction processor, such as a central processing unit (CPU), a microcontroller, a microprocessor; or an application specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a controller, other electronic components; or as circuitry that includes discrete logic or other circuit components, including an analog circuit component, a digital circuit component or both; or any combination thereof. The circuitry may include discrete interconnected hardware components or may be combined on a single integrated circuit die, distributed among multiple integrated circuit dies, or implemented in a Multiple Chip Module (MCM) of multiple integrated circuit dies in a common package, as examples.
  • The acquisition module 310 may be configured to acquire data of an original image. The original image may include a picture in a video sequence including signals sampled in both the horizontal and vertical directions. Exemplary data of the original image may include red green blue (RGB) data, Bayer data, Luma and Chroma (YUV) data, raw image format (RAW) data, joint photographic experts group (JPEG) data, or the like, or any combination thereof. In some embodiments, the acquisition module 310 may be connected to an I/O module (not shown in FIG. 3) to acquire data.
  • In some embodiments, the acquisition module 310 may also acquire data relating to the original image. The data relating to the original image may include a plurality of gray levels in the original image, a plurality of statistical probabilities associated with each gray level, etc.
  • The decomposition module 320 may be configured to decompose a luminance image of the original image. Exemplary techniques for decomposing an image may include a wavelet transform technique, a Gauss-pyramid technique, a Laplacian-pyramid technique, a contrast-pyramid technique, a wavelet-pyramid technique, or the like, or any combination thereof.
  • In some embodiments, the decomposition module 320 may decompose a luminance image by a wavelet transform decomposition to generate at least one decomposed image of high frequency and a decomposed image of low frequency. As used herein, “high frequency” is a relative term compared to “low frequency” relating to the frequency variation of brightness in the image. For example, the image of high frequency may display details, and the image of low frequency may display outlines. Exemplary wavelet transform decomposition algorithms may include stationary wavelet transform (SWT), orthogonal wavelet transform (OWT), fast wavelet transform (FWT), discrete wavelet transform (DWT), or the like, or any combination thereof. Merely by way of example, the decomposition module 320 may decompose an image into a decomposed image of low frequency and three decomposed images of high frequency in a horizontal direction, in a vertical direction, and in a diagonal direction by the SWT algorithm. The decomposed image of low frequency may include components showing outlines of an object in the decomposed image. Three decomposed images of high frequency may include components showing details of the object in the decomposed image in different directions, respectively.
  • In some embodiments, the decomposition module 320 may decompose the first luminance image by multiple orders of decompositions. In an N-th order decomposition, an N-th order decomposed image of low frequency and one or more N-th order decomposed images of high frequency may be obtained. In the subsequent (N+1)-th order decomposition, the N-th order decomposed image of low frequency may be further decomposed to generate an (N+1)-th order decomposed image of low frequency and one or more (N+1)-th order decomposed images of high frequency.
  • For example, the decomposition module 320 may perform a three-order wavelet transform decomposition (e.g., SWT) to generate a first-order decomposition set, a second-order decomposition set, and a third-order decomposition set. The decomposed image of low frequency from the first-order decomposition may be decomposed further to generate a second-order decomposed image of low frequency and one or more second-order decomposed images of high frequency. The second-order decomposed image of low frequency may be decomposed further to generate a third-order decomposed image of low frequency and one or more third-order decomposed images of high frequency. The first-order decomposition set and the second-order decomposition set may each include three decomposed images of high frequency. The third-order decomposition set may include a decomposed image of low frequency and three decomposed images of high frequency.
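  • For illustration only, the following is a minimal sketch of the three-order stationary wavelet decomposition described above, written with the PyWavelets (pywt) library. The choice of wavelet ("db2"), the input size, and all variable names are assumptions made for the example and are not specified by the present disclosure.

```python
# Sketch of a three-order SWT decomposition of a luminance image (assumptions:
# PyWavelets is available, the wavelet "db2" is an arbitrary illustrative choice,
# and the image side lengths are multiples of 2**level, as swt2 requires).
import numpy as np
import pywt


def decompose_luminance(luminance, level=3, wavelet="db2"):
    """Return a list of per-order coefficient sets.

    Each entry holds one order's low-frequency image and its three
    high-frequency images (horizontal, vertical, diagonal); see the pywt
    documentation for the ordering of the orders in the returned list.
    """
    return pywt.swt2(luminance, wavelet=wavelet, level=level)


if __name__ == "__main__":
    luminance = np.random.rand(256, 256)  # stand-in for a first luminance image
    for low, (horizontal, vertical, diagonal) in decompose_luminance(luminance):
        print(low.shape, horizontal.shape, vertical.shape, diagonal.shape)
```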
  • The frequency adjustment module 330 may be configured to adjust frequencies of pixels in a first image to generate a second image (or referred to as a frequency-adjusted image). The first image may be received from the decomposition module 320. In some embodiments, the frequency adjustment module 330 may adjust frequencies of pixels in the first image based on the frequency adjustment threshold corresponding to the first image. For example, the frequency adjustment module 330 may reduce the frequency of a pixel that is greater than a frequency adjustment threshold in the first image. As another example, the frequency adjustment module 330 may increase the frequency of a pixel that is less than the frequency adjustment threshold in the first image. In some embodiments, the frequency adjustment threshold corresponding to the first image may be provided by a user via, for example, the I/O 250. In some embodiments, the frequency adjustment threshold corresponding to the first image may be determined by the system 100 based on a default setting of the system 100, an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc. An empirical setting from prior imaging processing may be derived by, for example, machine learning.
  • An image subject to frequency adjustment may be a decomposed image obtained by, for example, decomposition as described elsewhere in the present disclosure. For instance, the frequency adjustment module 330 may adjust frequencies of pixels in a first decomposed image to generate a second decomposed image (or referred to as a frequency-adjusted decomposed image).
  • In some embodiments, the frequency adjustment module 330 may adjust frequencies of pixels in an image based on one or more adjustment factors corresponding to the image. For example, the frequency adjustment module 330 may adjust a frequency of a pixel in a first decomposed image of high frequency based on an adjustment factor corresponding to the first decomposed image of high frequency. In some embodiments, the adjustment factors corresponding to decomposed images of different orders may be the same or different.
  • The reconstruction module 340 may be configured to reconstruct a frequency-adjusted luminance image based on a plurality of frequency-adjusted decomposed images. The frequency-adjusted decomposed images may be generated from the frequency adjustment module 330. The frequency of a pixel in a frequency-adjusted decomposed image may be determined based on the frequency of the pixel located at the same position in the corresponding decomposed image before the frequency adjustment.
  • In some embodiments, the reconstruction may be a reverse process of the decomposition. The reconstruction may include multiple orders of reconstructions. In some embodiments, the number of orders in the reconstruction may be the same as the number of orders in the decomposition. For example, the reconstruction module 340 may reconstruct a frequency-adjusted luminance image by a three-order wavelet transform when the frequency-adjusted decomposed images are generated based on a three-order wavelet transform decomposition.
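  • Continuing the sketch above, the reverse process may be expressed with the inverse stationary wavelet transform, again using PyWavelets under the same assumptions (same wavelet and coefficient layout as the decomposition); this is one illustrative reading of the reconstruction, not the only possible implementation.

```python
# Sketch of reconstructing a frequency-adjusted luminance image from adjusted
# SWT coefficients (assumption: the coefficient list has the same layout and
# wavelet as the one produced by the decomposition sketch above).
import pywt


def reconstruct_luminance(adjusted_coeffs, wavelet="db2"):
    """Invert the stationary wavelet transform to obtain the adjusted luminance."""
    return pywt.iswt2(adjusted_coeffs, wavelet)
```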
  • The determination module 350 may be configured to determine a final image of the original image based on the luminance image without the frequency adjustment, the frequency-adjusted luminance image, and the original image. The determination module 350 may determine the final image by first determining the position of an original pixel in the original image, and then adjusting the value of the original pixel based on the luminance of the pixels located at the same position in the luminance image before the frequency adjustment and in the frequency-adjusted luminance image; based on the adjusted pixel values and their respective positions, the final image may be constructed.
  • In some embodiments, the determination module 350 may be configured to determine data relating to processing an image. For example, the determination module 350 may determine a mapping curve of a target region in the image, an optimal coefficient of the target region, an optimal curve of the target region, the value of a pixel in the target region, or the like, or any combination thereof. The term “optimal” (e.g., optimal coefficient, optimal curve) may be used to describe a factor only, and not for optimizing the value of a pixel in the target region.
  • It should be understood that the preceding description of the image processing device 120 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various variations and modifications may be made in the light of the present disclosure. For example, the image processing device 120 may also include a storage module (not shown in FIG. 3) for storing data relating to the image processing. However, those variations and modifications do not depart from the protecting scope of the present disclosure.
  • FIG. 4 is a block diagram illustrating an exemplary frequency adjustment module 330 according to some embodiments of the present disclosure.
  • The frequency adjustment module 330 may include a gain determination unit 410 and a frequency adjustment unit 420. The frequency adjustment module 330 may include more or fewer components without loss of generality. For example, two of the units may be combined into a single unit, or one of the units may be divided into two or more units. As another example, one or more of the units may reside on different computing devices (e.g., a desktop, a laptop, a mobile device, a tablet computer, a wearable computing device, or the like, or a combination thereof). However, those variations and modifications do not depart from the protecting scope of the present disclosure.
  • The gain determination unit 410 may be configured to determine a gain of a pixel in an image that is to be adjusted (or referred to as an unadjusted image). The gain determination unit 410 may determine the gain of the pixel based on the frequency of the pixel in the unadjusted image and a frequency adjustment threshold associated with the unadjusted image.
  • The frequency adjustment unit 420 may be configured to adjust frequencies of pixels in an unadjusted image. The frequency adjustment unit 420 may adjust the frequency of a pixel in the unadjusted image based on a gain corresponding to the pixel in the unadjusted image. In some embodiments, the frequency of the corresponding pixel in the frequency-adjusted image may be determined based on the original frequency of the pixel and the gain of the original pixel in the unadjusted image.
  • FIG. 5 is a block diagram illustrating an example of a determination module 350 according to some embodiments of the present disclosure. The determination module 350 may include a position determination unit 510, a pixel value adjustment unit 520, and a construction unit 530. The determination module 350 may include more or fewer components without loss of generality. For example, two of the units may be combined into a single unit, or one of the units may be divided into two or more units. In one implementation, one or more of the units may reside on different computing devices (e.g., a desktop, a laptop, a mobile device, a tablet computer, a wearable computing device, or the like, or a combination thereof). However, those variations and modifications do not depart from the protecting scope of the present disclosure.
  • The position determination unit 510 may be configured to determine the position of a pixel in an image. For example, the position determination unit 510 may determine the position of an original pixel in an original image, the position of a pixel in a luminance image before frequency adjustment, the position of a pixel in a frequency-adjusted luminance image, etc. The position of the pixel in the image may be represented by coordinates (e.g., orthogonal coordinates, spherical coordinates, polar coordinates, etc.) in, for example, a two-dimensional coordinate system, a three-dimensional coordinate system, etc. Merely by way of example, the position of an original pixel may be represented as (x, y) as illustrated in FIG. 10.
  • As used herein, an original image may refer to an image acquired by an imaging device (e.g., the imaging device 110 illustrated in FIG. 1). An original image may be stored or retrieved from a storage device (e.g., the storage 140 illustrated in FIG. 1, the disk 270 illustrated in FIG. 2A, or an external source, such as a hard disk, a wireless terminal, or the like, or any combination thereof, that is connected to or otherwise communicates with the system 100), or from an imaging device by which the original image is acquired. As used herein, an unadjusted image may refer to an image that is obtained by performing, on an original image, one or more operations (e.g., decomposition, transform, etc.) except for frequency adjustment. As used herein, a frequency-adjusted image may refer to an image that is obtained by performing, on an original image or an unadjusted image, frequency adjustment.
  • The pixel value adjustment unit 520 may be configured to adjust a pixel value of an original pixel in an original image. For example, the pixel value adjustment unit 520 may adjust the pixel value of the original pixel in the original image to determine a pixel value of the final pixel (e.g., the pixel in the final image that is the same as the original pixel except for the pixel value adjustment) in a final image of the original image.
  • The construction unit 530 may be configured to construct a final image based on a plurality of final pixels.
  • FIG. 6 is a flowchart illustrating an exemplary process 600 for processing an original image according to some embodiments of the present disclosure. In some embodiments, one or more operations of process 600 may be implemented in the image processing system 100 illustrated in FIG. 1. For example, the process 600 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120).
  • At 610, a first luminance image (or referred to as an unadjusted luminance image) of an original image may be obtained. The first luminance image of the original image may be obtained by determining luminance of each pixel in the original image. In some embodiments, the original image may be captured by one or more sensors and/or imaging devices. In some embodiments, the original image may be retrieved from a storage device (e.g., the storage 140 illustrated in FIG. 1, the disk 270 illustrated in FIG. 2A) or received from an imaging device (e.g., the imaging device 110 illustrated in FIG. 1). In some embodiments, the original image may be retrieved from an external source, such as a hard disk, a wireless terminal, or the like, or any combination thereof, that is connected to or otherwise communicates with the system 100. In some embodiments, the original image may include red green blue (RGB) data, Bayer data, Luma and Chroma (YUV) data, raw image format (RAW) data, joint photographic experts group (JPEG) data, or the like, or any combination thereof.
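  • The disclosure does not fix a particular conversion from image data to luminance; the sketch below uses the common BT.601 luma weights purely as an illustrative assumption for RGB input.

```python
# Sketch of obtaining a first (unadjusted) luminance image from an RGB original
# (assumption: BT.601 luma weights; other weightings or color spaces, such as
# the Y channel of YUV data, would serve equally well).
import numpy as np


def rgb_to_luminance(rgb):
    """Compute a per-pixel luminance image from an H x W x 3 RGB array."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b
```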
  • At 620, the first luminance image of the original image may be decomposed to obtain a plurality of first decomposed images (or referred to as unadjusted decomposed images). The first decomposed images may include frequency information of the original image. For example, the first decomposed images may include information relating to the frequencies of one or more pixels in the original image. The first decomposed image of high frequency may include information relating to details of an object in the first decomposed image. The first decomposed image of low frequency may include information relating to outlines of the object in the first decomposed image. In some embodiments, the decomposition of the first luminance image of the original image may be performed by the decomposition module 320 as illustrated in FIG. 3.
  • In some embodiments, the decomposition of the first luminance image of the original image may be performed based on different decomposition techniques. Exemplary decomposition techniques may include pyramid decomposition, wavelet decomposition, Laplace transform, filtering, or the like, or any combination thereof. Exemplary pyramid decomposition techniques may include a Gauss-pyramid, a Laplacian-pyramid, a contrast-pyramid, a wavelet-pyramid, or the like, or any combination thereof. Exemplary wavelet decomposition techniques may include a stationary wavelet transform (SWT), a fast wavelet transform (FWT), a discrete wavelet transform (DWT), an orthogonal wavelet transform (OWT), or the like, or any combination thereof. Exemplary filtering techniques may include low-pass filtering, feather-edge filtering, etc.
  • For illustration purposes, the wavelet decomposition may be described as an example. The wavelet decomposition of the first luminance image may generate at least one decomposed image of high frequency and a decomposed image of low frequency. One or more orders of decomposition may be performed based on the wavelet decomposition. For example, a three-order wavelet decomposition (e.g., SWT) of the first luminance image of the original image may generate a first-order decomposition set, a second-order decomposition set, and a third-order decomposition set. The first-order wavelet decomposition set may include three first (or unadjusted) decomposed images of high frequency. The second-order wavelet decomposition set may include three first (or unadjusted) decomposed images of high frequency. The third-order wavelet decomposition set may include a first (or unadjusted) decomposed image of low frequency and three first (or unadjusted) decomposed images of high frequency. The first (or unadjusted) decomposed images of high frequency in the same order may include frequency information in a horizontal direction, in a vertical direction, and in a diagonal direction, respectively. The direction may be determined based on a two-dimensional coordinate system. The two-dimensional coordinate system may include an x axis and a y axis. For example, the horizontal direction may be parallel to the x axis. The vertical direction may be parallel to the y axis.
  • At 630, a plurality of second (or frequency-adjusted) decomposed images may be determined based on the plurality of first (or unadjusted) decomposed images. In some embodiments, one or more frequency adjustment operations for determining the second (or frequency-adjusted) decomposed images may be performed by the frequency adjustment module 330 as illustrated in FIGS. 3 and 4.
  • The second (or frequency-adjusted) decomposed image may be determined by adjusting the frequencies of the pixels in the first decomposed image. In some embodiments, the frequencies of the corresponding pixels in the second (or frequency-adjusted) decomposed image may be determined based on the frequencies of the pixels in the first (or unadjusted) decomposed image and a frequency adjustment threshold associated with the first (or unadjusted) decomposed image. For example, the frequency of a pixel that is greater than a frequency adjustment threshold in the first decomposed image may be reduced to provide the frequency of the corresponding pixel in the second decomposed image. A frequency of a pixel that is lower than the frequency adjustment threshold in the first decomposed image may be increased to provide the frequency of the corresponding pixel in the second decomposed image. The frequency adjustment threshold may vary according to different application scenarios of the image processing system 100. For example, the frequency adjustment threshold corresponding to the first decomposed image may be provided by a user via, for example, the I/O 250. As another example, the frequency adjustment threshold corresponding to the first decomposed image may be determined by the system 100 based on a default setting of the system 100, an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc. An empirical setting from prior imaging processing may be derived by, for example, machine learning.
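  • As a hedged illustration of the threshold-based adjustment just described, the sketch below attenuates coefficients whose magnitude exceeds the frequency adjustment threshold and boosts the others. Treating a coefficient's magnitude as the pixel's "frequency", as well as the attenuation and boost factors, are assumptions of the example rather than values given by the disclosure.

```python
# Sketch of threshold-based frequency adjustment on one unadjusted decomposed
# image (assumptions: the coefficient magnitude stands in for the per-pixel
# frequency, and the attenuate/boost factors are arbitrary illustrative values).
import numpy as np


def adjust_by_threshold(subband, threshold, attenuate=0.8, boost=1.2):
    """Reduce pixels above the threshold and raise pixels below it."""
    gain = np.where(np.abs(subband) > threshold, attenuate, boost)
    return subband * gain
```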
  • In some embodiments, the frequencies of the corresponding pixels in the second (or frequency-adjusted) decomposed image may be determined based on the frequencies of the pixels in the first decomposed image and gains of the pixels in the first (or unadjusted) decomposed image. The gain of the pixel may be determined based on the frequency of the pixel and a frequency adjustment threshold associated with the first decomposed image. Exemplary processes for determining a second decomposed image may be found elsewhere in the present disclosure. See, for example, FIG. 7 and the description thereof.
  • At 640, a second (or frequency-adjusted) luminance image of the original image may be reconstructed based on the plurality of second (or frequency-adjusted) decomposed images. In some embodiments, one or more operations of reconstructing the second luminance image may be performed by the reconstruction module 340 as illustrated in FIG. 3.
  • The reconstruction of the second luminance image may include or be a reverse process of the decomposition. Exemplary reconstruction techniques may include pyramid reconstruction, wavelet reconstruction, Laplace transform reconstruction, inverse filtering, or the like, or any combination thereof. Multiple orders of reconstruction may be performed. In some embodiments, the number of orders in the reconstruction may be the same as the number of orders in the decomposition. For example, the reconstruction module 340 may reconstruct a second (or frequency-adjusted) luminance image by a three-order wavelet transform when the decomposed images are generated based on a three-order wavelet transform decomposition.
  • At 650, a final image of the original image may be determined based on the first (or unadjusted) luminance image, the second (or frequency-adjusted) luminance image, and the original image. In some embodiments, one or more operations of determining the final image of the original image may be performed by the determination module 350 as illustrated in FIG. 3.
  • The final image may include a plurality of final pixels corresponding to original pixels in the original image. The values of the final pixels may be determined based on the first (or unadjusted) luminance image, the second (or frequency-adjusted) luminance image, and the original image. For example, the values of the final pixels may be determined by multiplying the values of corresponding original pixels in the original image by a ratio of the second (or frequency-adjusted) luminance to the first (or unadjusted) luminance at the same position. In some embodiments, exemplary processes for determining the final image of the original image may be found elsewhere in the present disclosure. See, for example, FIG. 11 and the description thereof.
  • It should be noted that the above description of process 600 is merely provided for the purposes of illustration, and not intended to be understood as the only embodiment. For persons having ordinary skills in the art, various variations and modifications may be conducted under the teaching of some embodiments of the present disclosure. In some embodiments, some steps may be reduced or added. For example, 610 may be omitted. The luminance of an image may be predetermined and stored in a storage medium of the image processing system 100. However, those variations and modifications do not depart from the protecting scope of some embodiments of the present disclosure.
  • FIG. 7 is a flowchart illustrating an exemplary process 700 for determining a second (or frequency-adjusted) decomposed image according to some embodiments of the present disclosure. In some embodiments, one or more operations of process 700 for determining a second decomposed image may be implemented in the image processing system 100 illustrated in FIG. 1. For example, the process 700 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120, the frequency adjustment module 330 as illustrated in FIGS. 3-4).
  • At 710, frequencies of pixels in a first (or unadjusted) decomposed image Ii may be identified. For example, the frequencies of all or a portion of the pixels included in the first decomposed image Ii may be identified. In some embodiments, the frequencies of the pixels may be determined by Fourier transform, Z transform, Laplace transform, or the like, or any combination thereof. In some embodiments, the frequencies of the pixels may be predetermined and stored in a storage medium of the image processing device 120.
  • At 720, gains of the pixels in the first (or unadjusted) decomposed image Ii may be determined based on the frequencies of the pixels and a frequency adjustment threshold associated with the first decomposed image Ii. In some embodiments, the gains of different pixels in the same first decomposed image Ii may be the same or different. In some embodiments, the gain of a pixel may be determined based on the position of the pixel in the first decomposed images. Exemplary processes for determining a gain of a pixel in a first decomposed image may be found elsewhere in the present disclosure. See, for example, FIG. 8 and the description thereof.
  • The frequency adjustment threshold may vary in different application scenarios of the image processing system 100. For example, the frequency adjustment threshold associated with the first (or unadjusted) decomposed image may be provided by a user via, for example, the I/O 250. In some embodiments, the frequency adjustment threshold corresponding to the first decomposed image may be determined by the system 100 based on a default setting of the system 100, an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc. An empirical setting from prior imaging processing may be derived by, for example, machine learning.
  • At 730, frequencies of corresponding pixels in the second (or frequency-adjusted) decomposed image may be determined based on the original frequencies and the gains of the original pixels in the first (or unadjusted) decomposed image Ii.
  • In some embodiments, the frequency of a corresponding pixel in the second (or frequency-adjusted) decomposed image may be determined by multiplying the original frequency by the gain corresponding to the original pixel in the first (or unadjusted) decomposed image Ii. A pixel in the second decomposed image is considered to correspond to an original pixel in the first decomposed image Ii or in an original image if the two pixels correspond to a same physical point in the space or in an object to which the original image, or the first decomposed image, or the second decomposed image relates. A pixel in the second decomposed image and a corresponding original pixel in the first decomposed image Ii or in an original image may be considered to be located at a same position. In some embodiments, an original pixel in the first decomposed image Ii or in an original image and the corresponding pixel in the second decomposed image may be the same except for the frequency adjustment.
  • At 740, the second (or frequency-adjusted) decomposed image associated with the first (or unadjusted) decomposed image Ii may be determined based on the frequencies of the corresponding pixels in the second decomposed image. The second decomposed image may be associated with an original image that is associated with the first (or unadjusted) decomposed image.
  • A pixel in the second (or frequency-adjusted) decomposed image may be located at the same position as the corresponding original pixel in the original image. The second decomposed image may be generated by arranging the corresponding pixels in the same way as the original pixels in the original image.
  • In some embodiments, the plurality of second (or frequency-adjusted) decomposed images at operation 630 may be determined by implementing operations 710-740 on each of the plurality of the first (or unadjusted) decomposed images.
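  • A minimal sketch of operations 730 and 740 is given below, assuming a per-position gain map has already been determined (for example, as described in connection with FIG. 8 below): each pixel of the second decomposed image is the product of the corresponding pixel's frequency and its gain, and pixel positions are unchanged.

```python
# Sketch of producing a second (frequency-adjusted) decomposed image from an
# unadjusted decomposed image and a gain map of the same shape (assumption:
# the gain map was computed per position as described in connection with FIG. 8).
import numpy as np


def apply_gain_map(unadjusted_subband, gain_map):
    """Element-wise product; pixel positions are preserved, only values change."""
    return np.asarray(unadjusted_subband) * np.asarray(gain_map)
```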
  • FIG. 8 is a flowchart illustrating an exemplary process 800 for determining a gain of a pixel in a first decomposed image according to some embodiments of the present disclosure. In some embodiments, one or more operations of process 800 for determining a gain of a pixel may be implemented in the image processing system 100 illustrated in FIG. 1. For example, the process 800 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120, the determination module 350 as illustrated in FIG. 3, the gain determination unit 410 as illustrated in FIG. 4).
  • At 810, a certain number of pixels that are located at a same position from a plurality of first (or unadjusted) decomposed images may be identified. The plurality of first decomposed images may be obtained by decomposing a first (or unadjusted) luminance image of an original image at 620 illustrated in FIG. 6. The plurality of first decomposed images may be stored in a storage medium of the image processing system 100. In some embodiments, the certain number of pixels may include pixels from all or a portion of the plurality of first decomposed images. For example, the certain number of pixels from three first decomposed images obtained by decomposition of a same order may be identified. In some embodiments, the particular number of pixels may be all or a portion of the pixels in the one or more first decomposed images. For example, all the pixels that are located at a same position from three first decomposed images obtained by the decomposition of a same order may be identified.
  • The same position may be determined based on a position of a pixel to be adjusted in the first decomposed image Ii. The pixel to be adjusted may be determined based on the frequency and the gain of the original pixel in the first decomposed image Ii. FIG. 10-I, FIG. 10-II, and FIG. 10-III illustrate examples of pixels that are located at a same position.
  • At 820, the frequencies of the identified pixels may be determined. In some embodiments, the frequency of an identified pixel may be determined by Fourier transform, Z transform, Laplace transform, or the like, or any combination thereof. In some embodiments, the frequencies of the identified pixels may be predetermined and stored in a storage medium of the image processing device 120.
  • At 830, the frequencies of the identified pixels may be filtered to obtain the particular number of filtered frequencies. In some embodiments, the filtered frequencies may be obtained using a Butterworth filter, a Chebyshev filter, a Bessel filter, an elliptic filter, a Gaussian filter, an hourglass filter, a raised-cosine filter, or the like, or any combination thereof. The filtered frequencies may be associated with positions of the identified pixels.
  • Exemplary processes for obtaining a filtered frequency associated with a position may be found elsewhere in the present disclosure. See, for example, FIG. 9 and the description thereof.
  • At 840, the average filtered frequency associated with the position may be determined based on the filtered frequencies. The average filtered frequency may be an average value of the filtered frequencies of pixels at a same position in the one or more first (or unadjusted) decomposed images. The average value may include an arithmetic mean value, a geometric mean value, a square mean value, a harmonic mean value, a weighted average value, or the like, or any combination thereof. In some embodiments, the average filtered frequency of a pixel in a first decomposed image may be the same as frequencies of pixels at the same position in one or more first decomposed images of a same order.
  • At 850, a gain associated with the position may be determined based on the average filtered frequency. In some embodiments, the gains of pixels at a same position of the first decomposed images of high frequency of a same order may be the same or different.
  • It should be noted that the above description of process 800 is merely provided for the purposes of illustration, and not intended to be understood as the only embodiment. For persons having ordinary skills in the art, various variations and modifications may be conducted under the teaching of some embodiments of the present disclosure. For example, the gain associated with the position may be determined based on filtered frequencies of the identified pixels. Operation 840 may be omitted. As another example, the gains of the pixels in the first decomposed image Ii at operation 720 may be determined by implementing operations 810-850 on each of the pixels in the first decomposed image Ii. However, those variations and modifications do not depart from the protecting scope of some embodiments of the present disclosure.
  • For illustration purposes, a gain of a pixel (or associated with a position) in a first (or unadjusted) decomposed image may be determined based on an average filtered frequency of the pixels at a same position in the first (or unadjusted) decomposed images obtained by decomposition of a same order. The gain of a pixel in the first decomposed image may be determined based on equations (1) and (2):
  • \( G_i^j(x, y) = m_i \left( \dfrac{A_{ag}(x, y) + \varepsilon}{\delta A'} \right)^{\gamma - 1} \),  (1)
  • \( A_{ag}(x, y) = \dfrac{1}{3N + 1} \sum_{k=1}^{3N+1} A_k(x, y) \),  (2)
  • where i denotes the order of decomposition; j denotes any one of the first (or unadjusted) decomposed images obtained by the ith-order decomposition; Gij(x, y) denotes the gain of the pixel at the position (x, y) in the first (or unadjusted) decomposed image j obtained in the ith-order decomposition; Aag(x, y) denotes the average filtered frequency of the pixels at the position (x, y) in the first (or unadjusted) decomposed image j; mi denotes a gain adjustment factor corresponding to the ith-order decomposition, and 0≤mi≤1; γ denotes a constant, and 0≤γ≤1; δ denotes a constant, and 0≤δ≤1; A′ denotes an average frequency corresponding to the position (x, y) of the pixel in the first (or unadjusted) decomposed image j. For example, A′ may be an arithmetic mean of the frequencies of pixels at a same position in one or more first (or unadjusted) decomposed images obtained by decomposition of the same order. ε denotes a noise level correlation parameter that may be used to suppress noise amplification, and 0≤ε≤1. N denotes the total number of orders of decomposition. Ak(x, y) denotes a filtered frequency of the pixel at the position (x, y) in the first (or unadjusted) decomposed image k of the plurality of the first decomposed images obtained after the Nth-order decomposition. The number of the first (or unadjusted) decomposed images may be 3N+1. The gain adjustment factor mi of a first (or unadjusted) decomposed image of an order may vary in different application scenarios of the image processing system 100. In some embodiments, the mi of first (or unadjusted) decomposed images obtained by decomposition of different orders may be the same or different; likewise, the mi of first (or unadjusted) decomposed images obtained by decomposition of a same order may be the same or different.
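  • The sketch below is one hedged reading of equations (1) and (2): the gain at a position is driven by the average of the filtered frequencies Ak(x, y) over the 3N+1 first decomposed images. The parameter values, the small guard against division by zero, and the array layout are assumptions of the example.

```python
# Sketch of equations (1) and (2) (assumptions: filtered_stack has shape
# (3N+1, H, W) holding A_k(x, y); a_prime holds A' per position; m_i, gamma,
# delta, and epsilon are illustrative values within the ranges stated above).
import numpy as np


def gain_from_average(filtered_stack, a_prime, m_i=0.8, gamma=0.5,
                      delta=0.5, epsilon=0.01):
    """G_i^j(x, y) = m_i * ((A_ag(x, y) + epsilon) / (delta * A'))**(gamma - 1)."""
    a_ag = filtered_stack.mean(axis=0)                      # equation (2)
    denominator = np.maximum(delta * a_prime, 1e-6)         # guard against division by zero (added assumption)
    return m_i * ((a_ag + epsilon) / denominator) ** (gamma - 1.0)  # equation (1)
```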
  • Still for illustration purposes, a gain of a pixel (or associated with a position) in a first (or unadjusted) decomposed image may be determined based on filtered frequencies of pixels at a same position in the one or more first (unadjusted) decomposed images of a same order. The gain of a pixel (or associated with a position) in the first (or unadjusted) decomposed image may be determined based on equation (3):
  • \( G_i^j(x, y) = m_i \left( \dfrac{A_i^j(x, y) + \varepsilon}{\delta A'} \right)^{\gamma - 1} \),  (3)
  • where i denotes the order of decomposition; j denotes any one of the first decomposed images obtained by the ith-order decomposition; Gij(x, y) denotes the gain of the pixel at the position (x, y) in the first (or unadjusted) decomposed image j obtained in the ith-order decomposition; Aij(x, y) denotes the filtered frequency of the pixel at the position (x, y) in the first decomposed image j; mi denotes a gain adjustment factor corresponding to the ith-order decomposition, and 0≤mi≤1; γ denotes a constant, and 0≤γ≤1; δ denotes a constant, and 0≤δ≤1; A′ denotes an average frequency corresponding to the position (x, y) of the pixel in the first (or unadjusted) decomposed image j. For example, A′ may be an arithmetic mean of the frequencies of pixels at the same position in the one or more first (or unadjusted) decomposed images of the same order. ε denotes a noise level correlation parameter that may be used to suppress noise amplification, and 0≤ε≤1. The mi corresponding to the first (or unadjusted) decomposed images of each order may vary according to different application scenarios of the image processing system 100. In some embodiments, the mi of the first (or unadjusted) decomposed images of each order may be the same or different. For example, the mi of first (or unadjusted) decomposed images of different orders may be different. As another example, the mi of first (or unadjusted) decomposed images of a same order may be the same.
  • FIG. 9 is a flowchart illustrating an exemplary process 900 for obtaining a gain of a pixel in a first (or unadjusted) decomposed image according to some embodiments of the present disclosure. In some embodiments, one or more operations of process 900 for obtaining a gain of a pixel may be implemented in the image processing system 100 illustrated in FIG. 1. For example, the process 900 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120, the determination module 350 as illustrated in FIG. 3, the gain determination unit 410 as illustrated in FIG. 4).
  • At 910, the Nth-order decomposed images of high frequency may be identified. In response to the determination that the first decomposed image Ii is a first (or unadjusted) decomposed image with high frequency, one or more first (or unadjusted) decomposed images which are of the same order as the first decomposed image Ii are identified. For example, in response to the determination that the first decomposed image Ii is a second-order first decomposed image of high frequency, three second-order first decomposed images (the first decomposed image Ii and other two second-order first decomposed images) of high frequency may be identified.
  • At 920, the frequencies of the pixels at a same position of the identified Nth-order first (or unadjusted) decomposed images of high frequency may be determined. For example, the frequencies of the three pixels at a same position of the three identified Nth-order first decomposed images of high frequency may be determined.
  • In some embodiments, the number of pixels included in each first decomposed image may be the same, and the pixels may be arranged in the same way in each first decomposed image. FIG. 10-I through FIG. 10-III are schematic diagrams illustrating Nth-order decomposed images of high frequency according to some embodiments of the present disclosure. FIG. 10-I shows a decomposed image of high frequency including four pixels a1, a2, a3, and a4. FIG. 10-II and FIG. 10-III are the other two decomposed images of high frequency obtained in the decomposition of the same order as the image in FIG. 10-I. FIG. 10-II and FIG. 10-III each may include four pixels, b1, b2, b3, and b4 in FIG. 10-II, and c1, c2, c3, and c4 in FIG. 10-III, respectively. If the images in FIG. 10-I through FIG. 10-III were overlapped with each other, there would be three pixels at each position (x, y) in a two-dimensional coordinate system. The pixels a1, b1, and c1 are located at a same position. The pixels a2, b2, and c2 are located at a same position. The pixels a3, b3, and c3 are located at a same position. The pixels a4, b4, and c4 are located at a same position.
  • At 930, the average frequency value associated with the position may be determined based on the frequencies of the pixels. The average frequency value may be an average value of the absolute values of the frequencies of the pixels in the identified Nth-order decomposed images of high frequency. The average value may be an arithmetic mean value, a geometric mean value, a square mean value, a harmonic mean value, a weighted average value, or the like, or any combination thereof.
  • In some embodiments, the average frequency value associated with the position may be determined based on the absolute value of the frequency of the pixel in the first (or unadjusted) decomposed image Ii of high frequency and the absolute values of the frequencies of the pixels at the same position in the first (or unadjusted) decomposed images of high frequency of the same order as the first (or unadjusted) decomposed image Ii. For example, the average frequency value of pixel a1 may be based on the frequencies of pixels a1, b1, and c1.
  • Merely by way of example, an absolute value F1 of the frequency of a pixel at a position in the first (or unadjusted) decomposed image Ii of high frequency may be determined, and the other two absolute values, F2 and F3, of the two pixels (in the first (or unadjusted) decomposed images of high frequency by the decomposition of the same order as the image Ii) at the same position may be determined. The average frequency value F associated with the position may be determined based on equation (4):
  • \( F = \dfrac{F_1 + F_2 + F_3}{3} \).  (4)
  • At 940, the average frequency value F may be filtered to obtain a filtered frequency associated with the position. In some embodiments, the filtered frequency may be obtained using a Butterworth filter, a Chebyshev filter, a Bessel filter, an elliptic filter, a Gaussian filter, an Hourglass filter, a raised-cosine filter, or the like, or any combination thereof.
  • In some embodiments, a plurality of filtered frequencies may be obtained by implementing operations 910-940 on each of the pixels in the first (or unadjusted) decomposed image Ii.
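  • For illustration, the sketch below follows operations 920 through 940 for the three high-frequency decomposed images of one order: it averages the absolute frequencies position-wise as in equation (4) and then smooths the result. The Gaussian filter and its sigma are one of the filter choices listed above, selected here only as an assumption.

```python
# Sketch of operations 920-940 for one order's three high-frequency decomposed
# images (assumptions: a Gaussian filter from SciPy is used among the listed
# filter choices, and sigma is an arbitrary illustrative value).
import numpy as np
from scipy.ndimage import gaussian_filter


def filtered_frequency(horizontal, vertical, diagonal, sigma=1.5):
    """Return the filtered average frequency associated with each position."""
    average = (np.abs(horizontal) + np.abs(vertical) + np.abs(diagonal)) / 3.0  # equation (4)
    return gaussian_filter(average, sigma=sigma)
```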
  • In some embodiments, in response to the determination that the first decomposed image Ii is a first (or unadjusted) decomposed image with low frequency, the frequencies of one or more pixels in the first decomposed image Ii may be identified. Then the frequencies may be filtered to obtain filtered frequencies. The gains corresponding to the one or more pixels may be determined based on the corresponding filtered frequencies of the one or more pixels.
  • FIG. 11 is a flowchart illustrating an exemplary process 1100 for determining a final image of the original image according to some embodiments of the present disclosure. In some embodiments, one or more operations of process 1100 for determining a final image may be implemented in the image processing system 100 illustrated in FIG. 1. For example, the process 1100 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120, the determination module 350 as illustrated in FIG. 3 and FIG. 5).
  • At 1110, a position of an original pixel in the original image may be identified. For example, the original image may be in a two-dimensional coordinate system, and the position of an original pixel may be represented by (x, y). The value of coordinates x and y may be used to identify the position of the original pixel.
  • At 1120, a first (or unadjusted) luminance of a pixel in the first (or unadjusted) luminance image may be determined, in which the pixel in the first (or unadjusted) luminance image is at the same position as the original pixel in the original image. In some embodiments, the first (or unadjusted) luminance image may be determined as described in 610 in the present disclosure. In some embodiments, the first (or unadjusted) luminance image of the original image may be stored in a storage medium of the image processing system 100 or an external storage device.
  • In some embodiments, the same position may be identified based on the first luminance image and the original image. For example, pixels in the first (or unadjusted) luminance image and the original image may be arranged in the same way in identical two-dimensional coordinate systems. For the same position (x, y), there are two pixels, one in the first (or unadjusted) luminance image, and the other in the original image.
  • At 1130, a second (or frequency-adjusted) luminance of a pixel in the second (or frequency-adjusted) luminance image may be determined, in which the pixel in the second (or frequency-adjusted) luminance image is at the same position as the original pixel in the original image. In some embodiments, the second (or frequency-adjusted) luminance image may be reconstructed from a plurality of the second (or frequency-adjusted) decomposed images as described in 640 in the present disclosure. In some embodiments, the second luminance image of the original image may be stored in a storage medium of the image processing system 100.
  • In some embodiments, the same position may be identified based on the second (or frequency-adjusted) luminance image and the original image. For example, pixels in the second (or frequency-adjusted) luminance image and the original image may be arranged in the same way in identical two-dimensional coordinate systems. For the same position (x, y), there are two pixels, one in the second (or frequency-adjusted) luminance image, and the other in the original image.
  • At 1140, a final pixel associated with the original pixel may be determined based on the first (or unadjusted) luminance and the second (or frequency-adjusted) luminance. The final pixel may be a pixel in a final image. The final pixel may include data of the final pixel such as luminance information of the final pixel, frequency information of the final pixel, hue information of the final pixel, saturation information of the final pixel, or the like, or any combination thereof.
  • In some embodiments, the data of the final pixel may be determined based on equation (5):
  • Cout(x, y) = (Iout(x, y)/Iin(x, y)) × Cin(x, y),  (5)
  • where Cin(x, y) denotes the data of the original image associated with the original pixel at the position (x, y) in the original image, Cout(x, y) denotes the data of the final pixel at the position (x, y) in the final image, Iin(x, y) denotes the first (or unadjusted) luminance of the pixel at the position (x, y) in the first (or unadjusted) luminance image, and Iout(x,y) denotes the second (or frequency-adjusted) luminance of the pixel at the position (x, y) in the second (or frequency-adjusted) luminance image.
  • Operations 1110 through 1140 may be performed for multiple positions and multiple original pixels at these positions to provide corresponding final pixels. At 1150, a final image of the original image may be determined based on the final pixels.
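  • As an illustration, equation (5) can be applied to every original pixel at once when the original image and the two luminance images are held as arrays. The sketch below assumes NumPy, an H×W×C original image, and a small eps guard against division by zero; the function name and these details are assumptions added here, not requirements of the embodiments above.

    import numpy as np

    def compose_final_image(original, lum_unadjusted, lum_adjusted, eps=1e-6):
        # Equation (5): Cout(x, y) = (Iout(x, y) / Iin(x, y)) * Cin(x, y),
        # evaluated for every position (x, y) in one vectorized step.
        ratio = lum_adjusted / np.maximum(lum_unadjusted, eps)
        return original * ratio[..., np.newaxis]

    # Example with random data standing in for the original image (RGB) and
    # the first (unadjusted) and second (frequency-adjusted) luminance images.
    original = np.random.rand(480, 640, 3)
    lum_in = original.mean(axis=2)
    lum_out = np.clip(lum_in * 1.2, 0.0, 1.0)
    final = compose_final_image(original, lum_in, lum_out)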
  • FIG. 12 is a flowchart illustrating an exemplary process 1200 for processing an image according to some embodiments of the present disclosure. In some embodiments, one or more operations of process 1200 may be implemented in the image processing system 100 illustrated in FIG. 1. For example, the process 1200 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120, the determination module 350 of the image processing device 120, etc.).
  • At 1210, a target region in an image and a plurality of gray levels in the target region may be identified.
  • In some embodiments, the image may be captured by one or more sensors and/or imaging devices. In some embodiments, the image may be retrieved from a storage device (e.g., the storage 140 illustrated in FIG. 1, the disk 270 illustrated in FIG. 2A) or received from an imaging device (e.g., the imaging device 110 illustrated in FIG. 1). In some embodiments, the image may be retrieved from an external source, such as a hard disk, a wireless terminal, or the like, or any combination thereof, that is connected to or otherwise communicates with the system 100. In some embodiments, the image may include gray scale data, red green blue (RGB) data, Bayer data, Luma and Chroma (YUV) data, raw image format (RAW) data, joint photographic experts group (JPEG) data, or the like, or any combination thereof. For example, the image may be a gray scale image transformed from an RGB image. In some embodiments, the image may be obtained from the final image obtained at 650 in FIG. 6. For example, the image may include a gray scale image transformed from the final image obtained at 650.
  • The image may include at least one target region. The number of the target regions in the image may vary in different application scenarios. For example, the number of the target regions in the image may be determined based on the number of pixels that need to be processed in the image. A pixel that needs to be processed may be designated as the central pixel of a target region. As another example, the entire image may constitute the only target region of the image. The size of the target region in the image may vary in different application scenarios. For example, the sizes of two target regions in an image may be the same or different.
  • The target region may include a plurality of gray levels. In some embodiments, the gray levels may include original gray levels in the target region, normalized gray levels of the original levels, or other processed gray levels. For example, the plurality of gray levels in the target region may be within a range from 0 to 1. The gray level of 0 represents white in the target region, and the gray level of 1 represents black in the target region. In some embodiments, the number of the gray levels in the target region may vary in different application scenarios. For example, the number of the gray levels in the target region may be four. In some embodiments, the number of gray levels in the target region may be determined by a user of the image processing system 100 via, for example, an I/O. In some embodiments, the gray levels in a target region may be determined by the system 100 based on a default setting of the system 100, an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc. An empirical setting from prior imaging processing may be derived by, for example, machine learning. In some embodiments, the gray levels in a target region may be determined based on the range of gray values of the pixels in the target region. For example, the four gray levels may include 0.25, 0.5, 0.75, and 1.
  • In some embodiments, the intervals between adjacent gray levels may be the same or different. The interval between two adjacent gray levels in the target region may vary according to different application scenarios. For example, the interval between two adjacent gray levels may be 0.25, and the target region may include four gray levels of 0.25, 0.5, 0.75, and 1. As another example, the interval between two adjacent gray levels may be 0.2, and the target region may include five gray levels of 0.2, 0.4, 0.6, 0.8, and 1. As still another example, the target region may include four gray levels of 0.1, 0.4, 0.6, and 0.9. In some embodiments, the gray levels of any two target regions in the image may be the same or different.
  • In some embodiments, if the image includes more than one target region, and at least one target region has a part outside the image, the at least one target region extending outside of the image may be processed according to, for example, the process 1300 illustrated in FIG. 13 after 1210.
  • At 1220, a plurality of statistical probabilities may be determined, in which a statistical probability is associated with a gray level of the plurality of gray levels. In some embodiments, the plurality of statistical probabilities may be associated with all the gray levels except for the greatest gray level in the target region.
  • In some embodiments, a statistical probability associated with a gray level may include a proportion or number of the pixels associated with the gray level. For example, the statistical probability associated with the gray level 0.25 may include the proportion of the pixels whose gray values are not greater than 0.25 in the target region. The proportion may be a ratio of the number of the pixels whose gray values are not greater than the gray level 0.25 to the total number of pixels in the target region. In some embodiments, the statistical probabilities may be expressed in one or more of various ways. For example, the statistical probabilities may be represented as numerical values, a diagram, a table, or the like, or any combination thereof. Exemplary diagrams may include a cumulative histogram, a line chart, a pie chart, a scatter diagram, a bar chart, or the like, or any combination thereof.
  • Merely by way of example, the statistical probabilities may be represented as a cumulative histogram of the gray levels. The cumulative histogram may include a horizontal axis and a vertical axis. The horizontal axis may indicate gray levels of the plurality of gray levels. For example, the horizontal axis may include all the plurality of gray levels in the target region. As another example, the horizontal axis may include all the plurality of gray levels except for the greatest gray level in the target region. The vertical axis may indicate the proportions of pixels whose gray values are not greater than the corresponding gray levels in the horizontal axis.
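  • A short sketch of operation 1220 follows; the NumPy-based implementation and the example gray levels are illustrative assumptions. It returns, for each gray level except the greatest one, the proportion of pixels in the target region whose normalized gray value does not exceed that level, i.e., the values a cumulative histogram would show.

    import numpy as np

    def statistical_probabilities(region, gray_levels=(0.25, 0.5, 0.75)):
        # region: 2-D array of gray values normalized to [0, 1].
        # Returns {gray level: proportion of pixels with value <= gray level}.
        total = region.size
        return {level: float(np.count_nonzero(region <= level)) / total
                for level in gray_levels}

    # Example on a random 32 x 32 target region.
    probs = statistical_probabilities(np.random.rand(32, 32))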
  • At 1230, a mapping curve of the target region may be determined based on the plurality of statistical probabilities and a plurality of predetermined curves associated with the plurality of gray levels. In some embodiments, the mapping curve of the target region may be determined based on the plurality of statistical probabilities of the plurality of gray levels except for the greatest gray level and the plurality of predetermined curves associated with the plurality of gray levels except for the greatest gray level. Exemplary processes for determining a mapping curve of the target region may be found elsewhere in the present disclosure. See, for example, FIG. 16 and the description thereof.
  • In some embodiments, the predetermined curves associated with the plurality of the gray levels may vary in different application scenarios of the image processing system 100. The predetermined curve associated with various gray levels may be the same or different. Merely by way of example, the predetermined curves may include a Gaussian distribution curve, a Weibull distribution curve, an exponential distribution curve, a Poisson distribution curve, a binomial distribution curve, or the like, or any combination thereof.
  • At 1240, at least one pixel that needs to be processed in the target region may be identified.
  • In some embodiments, the pixels that need to be processed may be determined manually, automatically, and/or semi-automatically. For example, the pixel in the center of the target region may need to be processed. As another example, all the pixels in the target region may need to be processed. As a further example, in the target region, the pixels whose gray values are lower or higher than a predetermined value may be determined to be processed.
  • At 1250, for a pixel of the at least one pixel that needs to be processed, the value of the pixel may be determined based on the mapping curve of the target region. In some embodiments, a processed image may be generated based on the determined at least one value of the at least one pixel.
  • For example, the mapping curve may be expressed in the form of a formula. The value of a pixel may be determined based on the formula. It should be noted that the above description of process 1200 is merely provided for the purposes of illustration, and not intended to be understood as the only embodiment. For persons having ordinary skill in the art, various variations and modifications may be conducted under the teaching of some embodiments of the present disclosure. For example, at least one pixel that needs to be processed may be first identified (e.g., at 1240) before the target region (e.g., at 1210) is identified. The target region may be identified based on the at least one pixel that needs to be processed in the image. However, those variations and modifications may not depart from the protection of some embodiments of the present disclosure.
  • FIG. 13 is a flowchart illustrating an exemplary process 1300 for processing at least one target region having a part outside of an image according to some embodiments of the present disclosure. In some embodiments, one or more operations of process 1300 may be implemented in the image processing system 100 illustrated in FIG. 1. For example, the process 1300 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120, the determination module 350 of the image processing device 120, etc.).
  • At 1310, more than one target region in an image may be identified. In some embodiments, the image may be captured by one or more sensors and/or imaging devices. In some embodiments, the image may be retrieved from a storage device (e.g., the storage 140 illustrated in FIG. 1, the disk 270 illustrated in FIG. 2A) or received from an imaging device (e.g., the imaging device 110 illustrated in FIG. 1). In some embodiments, the image may be retrieved from an external source, such as a hard disk, a wireless terminal, or the like, or any combination thereof, that is connected to or otherwise communicating with the system 100. In some embodiments, the image may include gray scale data, red green blue (RGB) data, Bayer data, Luma and Chroma (YUV) data, raw image format (RAW) data, joint photographic experts group (JPEG) data, or the like, or any combination thereof. For example, the image may include a gray scale image transformed from an RGB image. In some embodiments, the image may be obtained from the final image obtained at 650 in FIG. 6. For example, the image may include a gray scale image transformed from the final image obtained at 650.
  • The number of the target regions in the image may vary in different application scenarios. For example, the number of the target regions in the image may be determined based on the number of pixels that need to be processed in the image. Each pixel that needs to be processed may be the pixel located at the center of a target region. The size of the target region in the image may vary in different application scenarios. For example, the sizes of two target regions in the image may be the same or different.
  • At 1320, a determination may be made as to whether at least one target region has a part outside of the image. In some embodiments, the pixels that need to be processed may be the pixels located at the center of the corresponding target region. When there is at least one pixel located on an edge or at a corner of the image, the corresponding target region may have a part outside the image.
  • At 1330, at least one edge of the image may be processed in response to the determination that at least one target region has a part outside the image.
  • Exemplary processes for patching at least one edge of an image may be found elsewhere in the present disclosure. See, for example, FIG. 14-I through FIG. 14-V and FIG. 15-I through FIG. 15-V, or any combination thereof.
  • In some embodiments, in response to a determination that no target region has a part outside the image at 1320, the process 1200 described in FIG. 12 may be implemented on each of the identified target regions in the image at 1310 to process the image. In some embodiments, in response to a determination that at least one target region has a part outside the image, the process 1200 described in FIG. 12 may be implemented on the at least one processed target region obtained after 1330.
  • FIGS. 14-I through 14-V are schematic diagrams illustrating the patching of at least one edge of an image according to some embodiments of the present disclosure.
  • FIG. 14-I shows an image A that needs to be processed. The image A may include a plurality of pixels arranged in rows and columns. For example, the number of pixels in each row is W, and the number of pixels in each column is H, where W and H are positive integers. Firstly, two sections named as First Section and Second Section may be identified in the image A. The First Section may be located on the leftmost side of the image A. The number of pixels in each row of the First Section may be W/2, and the number of pixels in each column of the First Section may be equal to the number of pixels in each column of the image A. The Second Section may be located on the rightmost side of the image A. The number of pixels in each row of the Second Section may be W/2, and the number of pixels in each column of the Second Section may be equal to the number of pixels in each column of the image A. FIG. 14-II illustrates the locations of the First Section and the Second Section.
  • Secondly, the First Section may be mirrored with the left edge of the image A as a symmetry axis, and the Second Section may be mirrored with the right edge of the image A as a symmetry axis. FIG. 14-III illustrates an image B which is generated by the mirroring of the First Section and the Second Section. Thirdly, two sections named as Third Section and Fourth Section may be identified in the image B. The Third Section may be located at the top of the image B. The number of pixels in each column of the Third Section may be H/2, and the number of pixels in each row of the Third Section may be equal to the number of pixels in each row of the image B. The Fourth Section may be located at the bottom of the image B. The number of pixels in each column of the Fourth Section may be H/2, and the number of pixels in each row of the Fourth Section may be equal to the number of pixels in each row of the image B. FIG. 14-IV illustrates the locations of the Third Section and the Fourth Section. Fourthly, the Third Section may be mirrored with the top edge of the image B as a symmetry axis, and the Fourth Section may be mirrored with the bottom edge of the image B as a symmetry axis. FIG. 14-V illustrates an image C which is generated by the mirroring of the Third Section and the Fourth Section. In some embodiments, at least one target region may be identified in the processed image C as illustrated in FIG. 14-V. It should be noted that the steps for patching the image described above may be implemented in a different order. For example, the top and bottom sections of the image may be patched before the left and right sections are patched.
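  • The mirroring described above amounts to symmetric padding of the image. A minimal NumPy sketch follows; whether the edge row or column itself is duplicated depends on where the symmetry axis is taken, so the exact variant below (which duplicates the edge pixels, as np.pad with mode="symmetric" does) and the function name are assumptions.

    import numpy as np

    def patch_edges(image, pad_x, pad_y):
        # First mirror the leftmost and rightmost sections about the vertical
        # edges of the image (image A -> image B) ...
        left = image[:, :pad_x][:, ::-1]
        right = image[:, -pad_x:][:, ::-1]
        image_b = np.concatenate([left, image, right], axis=1)
        # ... then mirror the top and bottom sections about the horizontal
        # edges (image B -> image C).
        top = image_b[:pad_y, :][::-1, :]
        bottom = image_b[-pad_y:, :][::-1, :]
        return np.concatenate([top, image_b, bottom], axis=0)

    # Example: pad a 100 x 120 image by half its width and height.
    image_a = np.random.rand(100, 120)
    image_c = patch_edges(image_a, pad_x=60, pad_y=50)   # 200 x 240 result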
  • In some embodiments, for any one pixel in the image, the target region may include a predetermined area having the pixel in the center (the pixel being the central pixel). For example, the target region may include a W*H region centered at the pixel 1. The pixel 1 represents the central pixel, W represents the number of pixels in a row of the target region, and H represents the number of pixels in a column of the target region.
  • FIGS. 15-I through 15-V illustrate schematic diagrams for patching at least one edge of an image according to some embodiments of the present disclosure. FIGS. 15-I through 15-V illustrate cases in which a part of a target region is inside the image and a part is outside the image.
  • In FIG. 15-I, an X axis and a Y axis may divide the target region having the shape of a rectangle enclosed by the solid lines into four sections according to the center of the target region (e.g., the center of the target region is the origin of the coordinate system including the X axis and the Y axis). It is understood that a target region may have a shape other than a rectangle. The four sections may be respectively named as Section A, Section B, Section C, and Section D. The Section A is inside the image, and the Section B, the Section C, and the Section D are outside the image. The pixels in the Section A may be mirrored to generate the pixels in the Section B with the X axis as a symmetry axis. The pixels in the Section A and the Section B may be mirrored to generate pixels in the Section C and the Section D with the Y axis as a symmetry axis.
  • In FIG. 15-II, an X axis and a Y axis may divide the target region into four sections (Section A, Section B, Section C, and Section D) according to the center of the target region (e.g., the center of the target region is the origin of the coordinate system including the X axis and the Y axis). The Section B may include a part (B1) outside the image and a part (B2) inside the image. The Section A1 is symmetrical to the Section B1. The pixels in the Section A1 may be mirrored to generate the pixels in the Section B1 with the X axis as a symmetry axis. The pixels in the Section A and the Section B may be mirrored to generate the pixels in the Section D and the Section C with the Y axis as a symmetry axis.
  • In FIG. 15-III, an X axis may divide the target region into two sections (Section A and Section B) based on the center of the target region. The pixels in the Section A may be mirrored to generate the pixels in the Section B with the X axis as a symmetry axis.
  • In FIG. 15-IV, an X axis may divide the target region into two sections (Section A and Section B) based on the center of the target region. The Section B may include a part (B1) outside the image and a part (B2) inside the image. The Section A1 is symmetrical to the Section B1. The pixels in the Section A1 may be mirrored to generate the pixels in the Section B1 with the X axis as a symmetry axis.
  • In some embodiments, other cases (e.g., the case illustrated in FIG. 15-V) may be patched based on the same or similar techniques described in FIGS. 15-I through 15-IV.
  • FIG. 16 is a flowchart illustrating an exemplary process 1600 for determining a mapping curve of a target region according to some embodiments of the present disclosure. In some embodiments, one or more operations of process 1600 may be implemented in the image processing system 100 illustrated in FIG. 1. For example, the process 1600 may be stored in the storage 140 as a form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120, the determination module 350 of the image processing device 120, etc.).
  • At 1610, a plurality of optimal coefficients may be determined. An optimal coefficient may be associated with a statistical probability of the plurality of statistical probabilities relating to the target region.
  • In some embodiments, an optimal coefficient associated with a statistical probability may be determined based on the statistical probability of a gray level, a predetermined range of the statistical probability corresponding to the gray level (e.g., the gray level is not the greatest gray level in the target region), the pixel value of the central pixel of the target region, the pixel values of neighbor pixels around the central pixel, or the like, or any combination thereof. The predetermined range of the statistical probability may vary in different application scenarios of the image processing system 100. A neighbor pixel of a certain pixel may be within a range (e.g., a square region with an area of 3 pixels*3 pixels centered at the central pixel) of the pixel. Exemplary processes for determining an optimal coefficient may be found elsewhere in the present disclosure. See, for example, FIG. 17 and the description thereof.
  • At 1620, a plurality of optimal curves may be determined. An optimal curve may be associated with an optimal coefficient of the plurality of optimal coefficients.
  • In some embodiments, an optimal curve may be determined based on the corresponding optimal coefficient. In some embodiments, the optimal curves of different gray levels in the target region may be the same or different. For instance, an optimal curve may be determined based on the equation (6):
  • Yout = (2^n - 1) × (Yin/(2^n - 1))^W′,  (6)
  • where Yout denotes an output pixel value, Yin denotes an input pixel value, n denotes the bit width of the image, and W′ denotes an optimal coefficient. In some embodiments, when W′ is the Gamma value, the optimal curve may be a Gamma distribution curve.
  • At 1630, a plurality of predetermined curves associated with the plurality of optimal coefficients may be identified. In some embodiments, a predetermined curve may correspond to a gray level except for the greatest gray level. The predetermined curves associated with different gray level may be the same or different. Merely by way of example, the predetermined curves may include a Gaussian distribution curve, a Weibull distribution curve, an exponential distribution curve, a Poisson distribution curve, a binomial distribution curve, or the like, or any combination thereof.
  • At 1640, a mapping curve of the target region may be determined based on the plurality of optimal curves and the plurality of predetermined curves.
  • In some embodiments, a plurality of sub mapping curves may be determined before the mapping curve is determined. A sub mapping curve may correspond to a gray level except for the greatest gray level. The sub mapping curves may be determined based on the optimal curves and the predetermined curves. In some embodiments, a sub mapping curve, which corresponds to a gray level, may be determined based on the optimal curve corresponding to the gray level and the predetermined curve corresponding to the gray level.
  • In some embodiments, the mapping curve of the target region may be determined based on all or a portion of the plurality of sub mapping curves of the target region. For example, the mapping curve may be determined as a sum of all the plurality of sub mapping curves.
  • FIG. 17 is a flowchart illustrating an exemplary process 1700 for determining an optimal coefficient of a target region according to some embodiments of the present disclosure. In some embodiments, one or more operations of process 1700 may be implemented in the image processing system 100 illustrated in FIG. 1. For example, the process 1700 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120, the determination module 350 of the image processing device 120, etc.).
  • At 1710, an initial coefficient associated with a statistical probability may be determined based on the gray level associated with the statistical probability. The initial coefficient may be used to describe a factor in the process for determining an optimal coefficient of a target region.
  • In some embodiments, the initial coefficient corresponding to a gray level may be determined based on the gray level, the statistical probability associated with the gray level, and a predetermined range of the statistical probability associated with the gray level. The predetermined range of a statistical probability associated with the gray level may vary in different application scenarios of the image processing system 100. For example, the predetermined range of the statistical probability associated with a gray level may be predetermined by a user of the image processing system 100 via, for example, the I/O 250. In some embodiments, the predetermined range of the statistical probability associated with a gray level may be determined by the system 100 based on a default setting of the system 100, an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc. An empirical setting from prior imaging processing may be derived by, for example, machine learning. Exemplary processes for determining an initial coefficient may be found elsewhere in the present disclosure. See, for example, FIG. 18 and the description thereof.
  • At 1720, the central pixel of the target region may be identified.
  • At 1730, an optimal coefficient corresponding to the initial coefficient may be determined based on the central pixel of the target region. In some embodiments, the optimal coefficient may be determined in different ways when the gray level corresponding to the initial coefficient is within different ranges.
  • In some embodiments, in response to a determination that the gray level corresponding to the initial coefficient is less than a first threshold, the optimal coefficient corresponding to the initial coefficient may be determined based on the pixel value of the central pixel of the target region and the pixel values of neighbor pixels around the central pixel. In some embodiments, the first threshold associated with a gray level may be determined by the system 100 based on a default setting of the system 100, an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc. An empirical setting from prior imaging processing may be derived by, for example, machine learning. The first threshold may be predetermined within (0, 1). For example, the first threshold may be within (0.25, 0.75). As another example, the first threshold may be 0.5. A neighbor pixel of a certain pixel may be within a certain range (e.g., a region with an area of L pixels in a row, M pixels in a column, and centered with the central pixel) from the pixel. The neighbor pixels may be the pixels in the neighbor region of the central pixel of the target region. For instance, the optimal coefficient may be determined based on the exemplary equation (7):

  • W′=W×m,  (7)
  • where W denotes the initial coefficient, W′ denotes the optimal coefficient corresponding to the initial coefficient, and m denotes the ratio of the average pixel value of the neighbor pixels to the pixel value of the central pixel. In some embodiments, m may be determined based on the equation (8):

  • m=b 1 /a 1,  (8)
  • where b1 denotes the average pixel value of the neighbor pixels, and a1 denotes the pixel value of the central pixel.
  • Merely by way of example, in response to the determination that m<1, the pixel value of the central pixel may be greater than the average pixel value of the neighbor pixels. The optimal coefficient may be adjusted to be less than the initial coefficient. As another example, in response to the determination that m>1, the pixel value of the central pixel may be less than the average pixel value of the neighbor pixels. The optimal coefficient may be adjusted to be greater than the initial coefficient.
  • In some embodiments, in response to the determination that the gray level corresponding to the initial coefficient is not less than the first threshold, the optimal coefficient corresponding to the initial coefficient may be determined based on the pixel value of the central pixel of the target region. For instance, the optimal coefficient may be determined based on the equation (9):

  • W′=W×(1+pix_value),  (9)
  • where W denotes the initial coefficient, W′ denotes the optimal coefficient corresponding to the initial coefficient, and pix_value denotes the normalized pixel value of the central pixel of the target region. The pix_value is within the range from 0 to 1. Exemplary algorithms for normalizing the pixel value of the central pixel may include a Min-Max normalization algorithm, a z-score normalization algorithm, a decimal scaling normalization algorithm, or the like, or any combination thereof.
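  • For illustration, the sketch below combines equations (7) through (9) for one gray level. The 3×3 neighbor window, the min-max normalization used for pix_value, the guard against a zero-valued central pixel, the first threshold of 0.5, and the function name are assumptions consistent with the examples above, not requirements of the embodiments.

    import numpy as np

    def optimal_coefficient(initial_w, region, gray_level, first_threshold=0.5):
        # region: 2-D array of pixel values with odd height and width so that
        # a central pixel exists; gray_level: normalized gray level in [0, 1].
        cy, cx = region.shape[0] // 2, region.shape[1] // 2
        central = float(region[cy, cx])
        if gray_level < first_threshold:
            # Equations (7) and (8): scale the initial coefficient by the ratio
            # of the average value of the neighbor pixels (3 x 3 window,
            # excluding the center) to the value of the central pixel.
            window = region[cy - 1:cy + 2, cx - 1:cx + 2].astype(float)
            neighbor_mean = (window.sum() - central) / (window.size - 1)
            m = neighbor_mean / max(central, 1e-6)
            return initial_w * m
        # Equation (9): scale by (1 + pix_value), with pix_value the
        # min-max-normalized value of the central pixel.
        lo, hi = float(region.min()), float(region.max())
        pix_value = (central - lo) / (hi - lo) if hi > lo else 0.0
        return initial_w * (1.0 + pix_value)

    # Example: a 9 x 9 target region of 8-bit values, gray level 0.25.
    w_prime = optimal_coefficient(1.2, np.random.randint(0, 256, (9, 9)), 0.25)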
  • FIG. 18 is a flowchart illustrating an exemplary process 1800 for determining an initial coefficient of a target region according to some embodiments of the present disclosure. In some embodiments, one or more operations of process 1800 may be implemented in the image processing system 100 illustrated in FIG. 1. For example, the process 1800 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120, the determination module 350 of the image processing device 120, etc.).
  • At 1810, a range of the statistical probability associated with a gray level may be determined. For example, the range of the statistical probability associated with a gray level may be determined manually by a user of the image processing system 100 via, for example, the I/O 250. In some embodiments, the range of the statistical probability associated with a gray level may be determined by the system 100 based on a default setting of the system 100, an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc. An empirical setting from prior imaging processing may be derived by, for example, machine learning.
  • Merely by way of example, four gray levels may be determined as 0.25, 0.5, 0.75, and 1. The statistical probabilities associated with the first three gray levels (except for the greatest gray level) may be Pys, Pym, and Pyh, respectively. The range of the statistical probabilities associated with the first three gray levels may be predetermined as [0.01, 0.35], [0.35, 0.65], and [0.65, 0.95], respectively. The statistical probabilities Pys associated with gray level 0.25 may be adjusted based on the range [0.01, 0.35], the statistical probabilities Pym associated with gray level 0.5 may be adjusted based on the range [0.35, 0.65], and the statistical probabilities Pyh associated with gray level 0.75 may be adjusted based on the range [0.65, 0.95].
  • In some embodiments, the range of the statistical probability may be determined automatically. For example, the range of the statistical probability associated with a gray level may be determined based on the equation (10) and the equation (11):

  • max=bin+k 1,  (10)

  • min=bin+k 2,  (11)
  • where max denotes the maximum of the range, min denotes the minimum of the range, and 0<min<max<1; bin denotes the normalized gray level, and 0≤bin≤1; k1 and k2 denote empirical coefficients, and k1>k2. The empirical coefficients k1 and k2 may vary in different application scenarios. In some embodiments, k1 and k2 may be determined by the system 100 based on a default setting of the system 100, an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc. An empirical setting from prior imaging processing may be derived by, for example, machine learning. In some embodiments, k1 and k2 corresponding to the different gray levels may be the same or different.
  • At 1820, an adjusted statistical probability may be determined based on the range of the statistical probability. For example, the adjusted statistical probability may be determined based on the exemplary equation (12):

  • P y′=min+(max−min)×P y,  (12)
  • where Py′ denotes the adjusted statistical probability, Py denotes the statistical probability, max denotes the maximum of the range, and min denotes the minimum of the range.
  • At 1830, an initial coefficient may be determined based on the adjusted statistical probability. For example, the initial coefficient may be determined based on the equation (13):

  • W=log(P y′)/log(bin),  (13)
  • where W denotes the initial coefficient; Py′ denotes the adjusted statistical probability; and bin denotes the normalized gray level, and 0≤bin≤1.
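  • As a worked sketch of equations (10) through (13), the function below derives the initial coefficient W for one gray level. The values chosen for the empirical coefficients k1 and k2 reproduce the [0.01, 0.35] example range for the gray level 0.25; the clipping that keeps the range inside (0, 1) and the function name are assumptions added here.

    import math

    def initial_coefficient(p_y, bin_level, k1=0.1, k2=-0.24):
        # bin_level: normalized gray level with 0 < bin_level < 1.
        max_p = bin_level + k1                        # equation (10)
        min_p = bin_level + k2                        # equation (11)
        max_p = min(max_p, 0.999)                     # keep 0 < min < max < 1
        min_p = max(min_p, 0.001)
        p_adj = min_p + (max_p - min_p) * p_y         # equation (12)
        return math.log(p_adj) / math.log(bin_level)  # equation (13)

    # Example: statistical probability 0.4 at the gray level 0.25.
    w = initial_coefficient(0.4, 0.25)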
  • FIG. 19 is a flowchart illustrating an exemplary process 1900 for determining a mapping curve of a target region according to some embodiments of the present disclosure. In some embodiments, one or more operations of process 1900 may be implemented in the image processing system 100 illustrated in FIG. 1. For example, the process 1900 may be stored in the storage 140 in the form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120, the determination module 350 of the image processing device 120, etc.).
  • At 1910, for each gray level of the plurality of gray levels in a target region, a sub mapping curve associated with the gray level may be determined based on an optimal curve associated with the gray level and a predetermined curve associated with the gray level. In some embodiments, a plurality of sub mapping curves associated with the plurality of gray levels except for the greatest gray level may be determined. In some embodiments, a predetermined curve may be associated with a gray level. The predetermined curves associated with different gray levels may be the same or different. For example, the predetermined curve may be predetermined manually by a user of the image processing system 100 via, for example, the I/O 250. In some embodiments, the predetermined curve may be determined by the system 100 based on a default setting of the system 100, an empirical setting from prior imaging processing by the system 100 or a different system, a combination of a default setting or an empirical setting and a user input, etc. An empirical setting from prior imaging processing may be derived by, for example, machine learning. Exemplary predetermined curves may include a Gaussian distribution curve, a Weibull distribution curve, an exponential distribution curve, a Poisson distribution curve, a binomial distribution curve, or the like, or any combination thereof.
  • At 1920, a mapping curve may be determined based on the plurality of the sub mapping curves associated with the plurality of gray levels. In some embodiments, the mapping curve may be the sum of all the sub mapping curves. For example, the mapping curve may be determined based on the equation (14):

  • Y out =G 1(Y inB 1(Y in)+G 2(Y inB 2(Y in)+ . . . +G n-1(Y inB n-1(Y in),  (14)
  • where Yin denotes an input pixel value of a pixel that needs to be processed; Yout denotes an output pixel value of the pixel that needs to be processed; G1, G2, . . . , Gn−1 denote respective optimal curves corresponding to each of the plurality of gray levels except for the greatest gray level; and B1, B2, . . . , Bn−1 denote respective predetermined curves corresponding to each of the plurality of gray levels except for the greatest gray level. For the same pixel value Yin, the sum of B1(Yin), B2(Yin), . . . , Bn-1(Yin) may be 1.
  • For illustration purposes, schematic diagrams illustrating exemplary curves generated during processing the image may be found in FIGS. 20-22. Merely by way of example, the initial coefficient associated with the statistical probability may be an initial gamma value. The initial gamma value may be determined based on the equation (15):

  • initial gamma_value=log(P y′)/log(bin),  (15)
  • where initial gamma_value denotes the initial gamma coefficient; Py′ denotes the adjusted statistical probability; bin denotes the normalized gray level, and 0≤bin≤1.
  • An optimal gamma value may be determined based on the initial gamma coefficient associated with the gray level, the central pixel of the target region, the neighbor pixels around the central pixel, or any combination thereof. The term “optimal” is used herein for describing a gamma value only.
  • In some embodiments, in response to the determination that the gray level corresponding to the initial gamma coefficient is less than the first threshold, the optimal gamma value corresponding to the initial coefficient may be determined based on the initial gamma coefficient associated with the gray level, the pixel value of the central pixel of the target region, and the pixel values of neighbor pixels around the central pixel. For instance, the optimal coefficient may be determined based on the equation (16):

  • gamma_value′=initial gamma_value×m,  (16)
  • where gamma_value′ denotes the optimal gamma value, initial gamma_value denotes the initial gamma coefficient, and m denotes the ratio of the average pixel value of the neighbor pixels to the pixel value of the central pixel. In response to the determination that m<1, the pixel value of the central pixel may be greater than the average pixel value of the neighbor pixels. The optimal gamma coefficient may be adjusted to be less than the initial gamma coefficient. As another example, in response to the determination that m>1, the pixel value of the central pixel may be less than the average pixel value of the neighbor pixels. The optimal gamma coefficient may be adjusted to be greater than the initial gamma coefficient.
  • In some embodiments, in response to determining that the gray level corresponding to the initial gamma coefficient is not less than the first threshold, the optimal gamma value corresponding to the initial coefficient may be determined based on the initial gamma coefficient associated with the gray level and the pixel value of the central pixel of the target region. The optimal coefficient may be determined based on the equation (17):

  • gamma_value′=initial gamma_value×(1+pix_value),  (17)
  • where gamma_value′ denotes the optimal gamma value associated with a gray level, initial gamma_value denotes the initial gamma coefficient associated with the gray level, and pix_value denotes the normalized pixel value of the central pixel of the target region. The pix_value is within the range from 0 to 1. Exemplary techniques for normalizing the pixel value of the central pixel may include a Min-Max normalization technique, a z-score normalization technique, a decimal scaling normalization technique, or the like, or any combination thereof.
  • The gamma curve associated with a gray level may be determined based on the corresponding optimal gamma value according to the equation (18):
  • Yout = (2^n - 1) × (Yin/(2^n - 1))^gamma_value′,  (18)
  • where Yout denotes an output pixel value, Yin denotes an input pixel value, n denotes the bit width of the image, gamma_value′ denotes an optimal gamma value.
  • FIG. 20 is a schematic diagram illustrating exemplary optimal gamma curves according to some embodiments of the present disclosure. For illustration purposes, three gamma curves Gs (a gamma curve associated with a shadow gray level of the target region), Gm (a gamma curve associated with a middle gray level of the target region), and Gh (a gamma curve associated with a high gray level of the target region) may be determined based on the equation (18). The terms “shadow,” “middle,” and “high” may refer to different gray levels of the target region. For example, the gamma curve Gs may be associated with a gray level of 0.25 in the target region; the gamma curve Gm may be associated with a gray level of 0.5 in the target region; and the gamma curve Gh may be associated with a gray level of 0.75 in the target region. The statistical probability Pys′ associated with the gray level of 0.25 may be adjusted based on the gamma curve Gs. The statistical probability Pym′ associated with the gray level of 0.5 may be adjusted based on the gamma curve Gm. The statistical probability Pyh′ associated with the gray level of 0.75 may be adjusted based on the gamma curve Gh.
  • The mapping curve of the target region may be determined based on the three gamma curves Gs, Gm, and Gh, and three corresponding Gaussian weight curves Bs, Bm, and Bh. FIG. 21 is a schematic diagram illustrating exemplary Gaussian weight curves according to some embodiments of the present disclosure. The Gaussian weight curve Bs may be associated with the shadow gray level of the target region; the Gaussian weight curve Bm may be associated with the middle gray level of the target region; and the Gaussian weight curve Bh may be associated with the high gray level of the target region. For example, the Gaussian weight curve Bs may be associated with the gray level of 0.25 in the target region; the Gaussian weight curve Bm may be associated with the gray level of 0.5 in the target region; and the Gaussian weight curve Bh may be associated with the gray level of 0.75 in the target region.
  • The mapping curve of the target region may be determined based on the equation (19):

  • Y out =G s(Y inB s(Y in)+G m(Y inB m(Y in)+G h(Y inB h(Y in),  (19)
  • where Yin denotes an input pixel value of a pixel that needs to be processed; Yout denotes an output pixel value of the pixel that needs to be processed; Gs, Gm, and Gh denote the three gamma curves; and Bs, Bm, and Bh denote the three Gaussian weight curves. FIG. 22 is a schematic diagram illustrating an exemplary mapping curve according to some embodiments of the present disclosure. The pixel value of a pixel that needs to be processed may be adjusted based on the mapping curve as illustrated in FIG. 22.
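  • The following sketch puts equations (18) and (19) together. The specific gamma values, the centers and width assumed for the Gaussian weight curves, the renormalization that forces Bs(Yin)+Bm(Yin)+Bh(Yin)=1 at every input value, and the function names are illustrative assumptions; the embodiments above only require that the weight curves be Gaussian and sum to 1.

    import numpy as np

    def gamma_curve(y_in, gamma_value, bit_width=8):
        # Equation (18): Yout = (2^n - 1) * (Yin / (2^n - 1)) ** gamma_value'.
        full_scale = (1 << bit_width) - 1
        return full_scale * (y_in / full_scale) ** gamma_value

    def gaussian_weight(y_in, center, sigma, bit_width=8):
        # One assumed form for the Gaussian weight curves Bs, Bm, and Bh,
        # centered at a gray level expressed on the pixel-value scale.
        full_scale = (1 << bit_width) - 1
        return np.exp(-((y_in / full_scale - center) ** 2) / (2.0 * sigma ** 2))

    def mapping_curve(y_in, gammas=(0.6, 0.9, 1.3),
                      centers=(0.25, 0.5, 0.75), sigma=0.15, bit_width=8):
        # Equation (19): blend the gamma curves Gs, Gm, Gh with the weight
        # curves Bs, Bm, Bh, normalized so the weights sum to 1.
        y_in = np.asarray(y_in, dtype=float)
        weights = np.stack([gaussian_weight(y_in, c, sigma, bit_width)
                            for c in centers])
        weights /= weights.sum(axis=0)
        curves = np.stack([gamma_curve(y_in, g, bit_width) for g in gammas])
        return (weights * curves).sum(axis=0)

    # Example: map every possible 8-bit input value.
    y_out = mapping_curve(np.arange(256))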
  • FIG. 23 is a flowchart illustrating an exemplary process of processing an image according to some embodiments of the present disclosure. In some embodiments, one or more operations of process 2300 may be implemented in the image processing system 100 illustrated in FIG. 1. For example, the process 2300 may be stored in the storage 140 as a form of instructions, and invoked and/or executed by the image processing device 120 (e.g., the processor 220 of the image processing device 120, the determination module 350 of the image processing device 120, etc.).
  • At 2310, a first luminance image of an original image may be obtained. The first luminance image of the original image may be obtained by determining luminance of each pixel in the original image. In some embodiments, the original image may be captured by one or more sensors and/or imaging devices. In some embodiments, the original image may be retrieved from a storage device (e.g., the storage 140 illustrated in FIG. 1, the disk 270 illustrated in FIG. 2A) or received from an imaging device (e.g., the imaging device 110 illustrated in FIG. 1). In some embodiments, the original image may be retrieved from an external source, such as a hard disk, a wireless terminal, or the like, or any combination thereof, that is connected to or otherwise communicates with the system 100. In some embodiments, the original image may include red green blue (RGB) data, Bayer data, Luma and Chroma (YUV) data, raw image format (RAW) data, joint photographic experts group (JPEG) data, or the like, or any combination thereof.
  • At 2320, the first luminance image of the original image may be decomposed to obtain a plurality of first decomposed images (or referred to as unadjusted decomposed images). In some embodiments, the decomposition of the first luminance image of the original image may be performed by the decomposition module 320 as illustrated in FIG. 3. In some embodiments, the method of decomposing the first luminance image of the original image may be described as 620 of process 600 in FIG. 6 in the present disclosure.
  • At 2330, a plurality of second (or frequency-adjusted) decomposed images may be determined based on the plurality of first (or unadjusted) decomposed images. In some embodiments, one or more frequency adjustment operations for determining the second (or frequency-adjusted) decomposed images may be performed by the frequency adjustment module 330 as illustrated in FIGS. 3 and 4. In some embodiments, the method of determining the plurality of second decomposed images may be described as 630 of process 600 in FIG. 6 in the present disclosure.
  • At 2340, a second (or frequency-adjusted) luminance image of the original image may be reconstructed based on the plurality of second (or frequency-adjusted) decomposed images. In some embodiments, one or more operations of reconstructing the second luminance image may be performed by the reconstruction module 340 as illustrated in FIG. 3. In some embodiments, the method of reconstructing the second luminance image may be described as 640 of process 600 in FIG. 6 in the present disclosure.
  • At 2350, a final image of the original image may be determined based on the first (or unadjusted) luminance image, the second (or frequency-adjusted) luminance image, and the original image. In some embodiments, one or more operations of determining the final image of the original image may be performed by the determination module 350 as illustrated in FIG. 3. In some embodiments, the method of determining the final image of the original image may be described as 650 of process 600 in FIG. 6 in the present disclosure.
  • At 2360, a target region in the final image and a plurality of gray levels in the target region may be identified. In some embodiments, the method of identifying the target region and the plurality of gray levels may be described as 1210 of process 1200 in FIG. 12 in the present disclosure.
  • At 2370, a plurality of statistical probabilities may be determined, in which a statistical probability is associated with a gray level of the plurality of gray levels. In some embodiments, the method of determining the plurality of statistical probabilities may be described as 1220 of process 1200 in FIG. 12 in the present disclosure.
  • At 2380, a mapping curve of the target region may be determined based on the plurality of statistical probabilities and a plurality of predetermined curves associated with the plurality of gray levels. In some embodiments, the method of determining the mapping curve of the target region may be described as 1230 of process 1200 in FIG. 12 in the present disclosure.
  • At 2390, at least one pixel that needs to be processed in the target region may be identified. In some embodiments, the method of identifying the at least one pixel that needs to be processed in the target region may be described as 1240 of process 1200 in FIG. 12 in the present disclosure.
  • At 2395, the value of a pixel that needs to be processed may be determined based on the mapping curve of the target region. In some embodiments, the method of determining the value of the at least one pixel that needs to be processed may be described as 1250 of process 1200 in FIG. 12 in the present disclosure.
  • It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “identifying,” “decomposing,” “obtaining,” “storing,” “determining,” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
  • The terms “first,” “second,” “third,” “fourth,” etc. as used herein are meant as labels to distinguish among different elements and may not necessarily have an ordinal meaning according to their numerical designation.
  • In some implementations, any suitable computer readable media may be used for storing instructions for performing the processes described herein. For example, in some implementations, computer readable media can be transitory or non-transitory. For example, non-transitory computer readable media can include media such as magnetic media (such as hard disks, floppy disks, etc.), optical media (such as compact discs, digital video discs, Blu-ray discs, etc.), semiconductor media (such as flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), etc.), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in connectors, conductors, optical fibers, circuits, any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.
  • Although the above disclosure discusses through various examples what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose, and that the present disclosure is not limited to the disclosed embodiments, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software only solution—e.g., an installation on an existing server or mobile device.
  • Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, inventive embodiments lie in less than all features of a single foregoing disclosed embodiment.
  • In some embodiments, the numbers expressing quantities, properties, and so forth, used to describe and claim certain embodiments of the application are to be understood as being modified in some instances by the term “about,” “approximate,” or “substantially.” For example, “about,” “approximate,” or “substantially” may indicate ±20% variation of the value it describes, unless otherwise stated. Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable.
  • Each of the patents, patent applications, publications of patent applications, and other material, such as articles, books, specifications, publications, documents, things, and/or the like, referenced herein is hereby incorporated herein by this reference in its entirety for all purposes, excepting any prosecution file history associated with same, any of same that is inconsistent with or in conflict with the present document, or any of same that may have a limiting effect as to the broadest scope of the claims now or later associated with the present document. By way of example, should there be any inconsistency or conflict between the descriptions, definition, and/or the use of a term associated with any of the incorporated material and that associated with the present document, the description, definition, and/or the use of the term in the present document shall prevail.
  • In closing, it is to be understood that the embodiments of the application disclosed herein are illustrative of the principles of the embodiments of the application. Other modifications that may be employed may be within the scope of the application. Thus, by way of example, but not of limitation, alternative configurations of the embodiments of the application may be utilized in accordance with the teachings herein. Accordingly, embodiments of the present application are not limited to that precisely as shown and described.

Claims (23)

1. A system for image processing, comprising:
at least one computer-readable storage medium including a set of instructions for processing an original image; and
at least one processor in communication with the at least one computer-readable storage medium, wherein when executing the set of instructions, the at least one processor is directed to:
obtain a first luminance image of the original image;
decompose the first luminance image of the original image to provide a plurality of first decomposed images;
adjust pixel frequencies in at least some of the plurality of first decomposed images to generate a plurality of second decomposed images;
generate a second luminance image of the original image based on the plurality of second decomposed images; and
determine a final image of the original image based on the first luminance image, the second luminance image, and the original image.
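For illustration only, the following is a minimal Python sketch of one way the steps recited in claim 1 could be realized. The colour weighting, the choice of a Haar wavelet via PyWavelets, the two-level decomposition, and the ratio-based final-image step are assumptions of this sketch and are not recited in the claim.

```python
# Illustrative sketch of the claim 1 pipeline; not the patented implementation.
# Assumptions: a BGR input image, BT.601-style luminance weights, a two-level
# Haar wavelet as the decomposition, and a caller-supplied coefficient adjuster.
import numpy as np
import pywt

def process_image(original_bgr, adjust):
    # Obtain a first luminance image of the original image.
    first_luma = (0.114 * original_bgr[..., 0]
                  + 0.587 * original_bgr[..., 1]
                  + 0.299 * original_bgr[..., 2])

    # Decompose the first luminance image into a plurality of first decomposed images.
    coeffs = pywt.wavedec2(first_luma, wavelet="haar", level=2)

    # Adjust pixel frequencies (coefficients) to generate second decomposed images.
    adjusted = [coeffs[0]] + [tuple(adjust(band) for band in detail)
                              for detail in coeffs[1:]]

    # Generate a second luminance image from the second decomposed images.
    second_luma = pywt.waverec2(adjusted, wavelet="haar")
    second_luma = second_luma[:first_luma.shape[0], :first_luma.shape[1]]

    # Determine the final image from the two luminance images and the original image.
    ratio = second_luma / np.maximum(first_luma, 1e-6)
    return np.clip(original_bgr * ratio[..., None], 0, 255).astype(np.uint8)

# Example call with a hypothetical uniform boost of the detail coefficients:
# enhanced = process_image(image, adjust=lambda band: band * 1.2)
```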
2. The system of claim 1, wherein to adjust pixel frequencies in at least some of the plurality of first decomposed images, the at least one processor is further directed to:
for a specific pixel in a first decomposed image,
identify a frequency of the specific pixel;
determine a gain of the specific pixel based on the frequency of the specific pixel and a frequency adjustment threshold associated with the first decomposed image; and
adjust the frequency of the specific pixel based on the gain of the specific pixel.
3. The system of claim 2, wherein to determine a gain of the specific pixel, the at least one processor is further directed to:
identify, from the plurality of the first decomposed images, a certain number of pixels each of which is located at a position corresponding to the specific pixel of the first decomposed images;
determine the frequencies of the identified certain number of pixels, a pixel of the certain number of pixels having a frequency;
filter the certain number of frequencies to obtain a plurality of filtered frequencies;
determine an average filtered frequency associated with the position based on the plurality of filtered frequencies;
determine the gain associated with the position based on the average filtered frequency; and
assign the gain associated with the position to the specific pixel.
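As an illustrative counterpart to claims 2 and 3, the sketch below derives a per-position gain from the filtered, averaged frequencies of same-order decomposed images and applies it to a coefficient band. The 3×3 mean filter, the piecewise-linear gain curve, and the max_gain parameter are assumptions; the claims only require a gain determined from an average filtered frequency and a frequency adjustment threshold.

```python
# Illustrative gain computation and adjustment; the filter and gain curve are
# assumptions, not the patented formulas.
import numpy as np
from scipy.ndimage import uniform_filter

def gain_map(same_order_bands, threshold, max_gain=3.0):
    # Frequencies of the pixels at each position across same-shaped decomposed images.
    stacked = np.stack([np.abs(band) for band in same_order_bands], axis=0)
    # Filter each band's frequencies, then average the filtered frequencies per position.
    filtered = uniform_filter(stacked, size=(1, 3, 3))
    average = filtered.mean(axis=0)
    # One possible monotone mapping: boost positions whose averaged frequency
    # falls below the adjustment threshold, leave the rest unchanged.
    thr = max(float(threshold), 1e-6)
    return np.where(average < thr,
                    1.0 + (max_gain - 1.0) * (1.0 - average / thr),
                    1.0)

def adjust_band(band, gain):
    # Each coefficient is scaled by the gain assigned to its position.
    return band * gain
```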
4. The system of claim 1, wherein to determine the final image of the original image based on the first luminance image, the second luminance image, and the original image, the at least one processor is further directed to:
for an original pixel of a plurality of original pixels in the original image,
identify the position of the original pixel in the original image;
determine a first luminance of a pixel in the first luminance image, the pixel in the first luminance image being at the same position as the original pixel in the original image;
determine a second luminance of a pixel in the second luminance image, the pixel in the second luminance image being at the same position as the original pixel in the original image; and
determine a final pixel associated with the original pixel based on the first luminance and the second luminance; and
generate the final image of the original image based on the determined final pixels associated with the plurality of original pixels.
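The per-pixel wording of claim 4 can be read as the scalar counterpart of the vectorized final-image step in the sketch after claim 1; the ratio-based update below, which preserves the chromatic proportions of the original pixel, is an assumption made for illustration.

```python
# Illustrative per-pixel final-image step; the ratio rule is an assumption.
def final_pixel(original_pixel, first_luminance, second_luminance, eps=1e-6):
    # Scale the original pixel by the ratio of the two luminances at its position.
    ratio = second_luminance / max(first_luminance, eps)
    return tuple(int(min(255, max(0, round(channel * ratio))))
                 for channel in original_pixel)
```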
5. The system of claim 1, wherein to obtain the plurality of first decomposed images, the at least one processor is further directed to:
perform one or more orders of decomposition on the first luminance image.
6. The system of claim 5, wherein the one or more orders of decomposition are performed based on a wavelet transformation.
7. The system of claim 1, wherein to generate the second luminance image of the original image based on the plurality of second decomposed images, the at least one processor is further directed to:
perform a reverse operation of the decomposition that provides the plurality of first decomposed images.
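Claims 5 through 7 recite one or more orders of decomposition (e.g., a wavelet transformation) and, for reconstruction, the reverse operation of that decomposition. The sketch below performs N explicit orders of a 2-D discrete wavelet transform and undoes them in reverse order; the Haar wavelet and the assumption that the image dimensions are divisible by 2**orders are illustrative choices.

```python
# Illustrative N-order decomposition and its reverse; the wavelet family is an
# assumption, and image dimensions are assumed divisible by 2**orders.
import pywt

def decompose(luminance, orders, wavelet="haar"):
    pyramid, current = [], luminance
    for _ in range(orders):
        # Each order yields one low-frequency image and three high-frequency images.
        current, details = pywt.dwt2(current, wavelet)
        pyramid.append(details)  # (horizontal, vertical, diagonal) detail images
    return current, pyramid

def reconstruct(approximation, pyramid, wavelet="haar"):
    current = approximation
    # Reverse operation of the decomposition, applied order by order.
    for details in reversed(pyramid):
        current = pywt.idwt2((current, details), wavelet)
    return current
```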
8-12. (canceled)
13. A method for processing an original image, comprising:
obtaining a first luminance image of the original image;
decomposing the first luminance image of the original image to provide a plurality of first decomposed images;
adjusting pixel frequencies in at least some of the plurality of first decomposed images to generate a plurality of second decomposed images;
generating a second luminance image of the original image based on the plurality of second decomposed images; and
determining a final image of the original image based on the first luminance image, the second luminance image, and the original image.
14. The method of claim 13, wherein adjusting pixel frequencies in at least some of the plurality of first decomposed images comprises:
for a specific pixel in a first decomposed image,
identifying a frequency of the specific pixel;
determining a gain of the specific pixel based on the frequency of the specific pixel and a frequency adjustment threshold associated with the first decomposed image; and
adjusting the frequency of the specific pixel based on the gain of the specific pixel.
15. The method of claim 14, wherein determining a gain of the specific pixel comprises:
identifying, from the plurality of the first decomposed images, a certain number of pixels each of which is located at a position corresponding to the specific pixel of the first decomposed images;
determining the frequencies of the identified certain number of pixels, a pixel of the certain number of pixels having a frequency;
filtering the certain number of frequencies to obtain a plurality of filtered frequencies;
determining an average filtered frequency associated with the position based on the plurality of filtered frequencies;
determining the gain associated with the position based on the average filtered frequency; and
assigning the gain associated with the position to the specific pixel.
16. The method of claim 13, wherein determining the final image of the original image based on the first luminance image, the second luminance image, and the original image comprises:
for an original pixel of a plurality of original pixels in the original image,
identifying the position of the original pixel in the original image;
determining a first luminance of a pixel in the first luminance image, the pixel in the first luminance image being at the same position as the original pixel in the original image;
determining a second luminance of a pixel in the second luminance image, the pixel in the second luminance image being at the same position as the original pixel in the original image; and
determining a final pixel associated with the original pixel based on the first luminance and the second luminance; and
generating the final image of the original image based on the determined final pixels associated with the plurality of original pixels.
17. The method of claim 13, wherein obtaining the plurality of first decomposed images comprises:
performing one or more orders of decomposition on the first luminance image.
18. The method of claim 17, wherein the one or more orders of decomposition are performed based on a wavelet transformation.
19. The method of claim 13, wherein generating the second luminance image of the original image based on the plurality of second decomposed images comprises:
performing a reverse operation of the decomposition that provides the plurality of first decomposed images.
20-24. (canceled)
25. A non-transitory computer readable medium, comprising at least one set of instructions for processing an original image, wherein when executed by at least one processor, the at least one set of instructions directs the at least one processor to perform acts of:
obtaining a first luminance image of the original image;
decomposing the first luminance image of the original image to provide a plurality of first decomposed images;
adjusting pixel frequencies in at least some of the plurality of first decomposed images to generate a plurality of second decomposed images;
generating a second luminance image of the original image based on the plurality of second decomposed images; and
determining a final image of the original image based on the first luminance image, the second luminance image, and the original image.
26-38. (canceled)
39. The system of claim 3, wherein to filter the certain number of frequencies to obtain the plurality of filtered frequencies, the at least one processor is further directed to:
obtain the plurality of filtered frequencies based on first decomposed images of high frequency.
40. The system of claim 39, wherein to obtain the plurality of filtered frequencies based on first decomposed images of high frequency, the at least one processor is further directed to:
identify Nth-order first decomposed images of high frequency;
determine frequencies of the pixels at a same position of the identified Nth-order first decomposed images of high frequency;
determine an average frequency value associated with the position based on the frequencies of the pixels; and
obtain a filtered frequency associated with the position by filtering the average frequency value.
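Claims 39 and 40 obtain a filtered frequency per position from the Nth-order high-frequency decomposed images by averaging the frequencies at that position and then filtering the average. A minimal sketch follows; the 3×3 mean filter is an assumption, since the claims do not name a particular filter.

```python
# Illustrative filtering over Nth-order high-frequency bands; the mean filter
# is an assumption.
import numpy as np
from scipy.ndimage import uniform_filter

def filtered_frequency(nth_order_high_bands):
    # Frequencies of the pixels at the same position of the Nth-order
    # high-frequency decomposed images (e.g., horizontal, vertical, diagonal).
    stacked = np.stack([np.abs(band) for band in nth_order_high_bands], axis=0)
    average = stacked.mean(axis=0)            # average frequency value per position
    return uniform_filter(average, size=3)    # filtered frequency per position
```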
41. The method of claim 15, wherein the filtering the certain number of frequencies to obtain a plurality of filtered frequencies comprises:
obtaining the plurality of filtered frequencies based on first decomposed images of high frequency.
42. The method of claim 41, wherein the obtaining the plurality of filtered frequencies based on first decomposed images of high frequency comprises:
identifying Nth-order first decomposed images of high frequency;
determining frequencies of the pixels at a same position of the identified Nth-order first decomposed images of high frequency;
determining an average frequency value associated with the position based on the frequencies of the pixels; and
obtaining a filtered frequency associated with the position by filtering the average frequency value.
43. The non-transitory computer readable medium of claim 25, wherein the adjusting pixel frequencies in at least some of the plurality of first decomposed images comprises:
for a specific pixel in a first decomposed image,
identifying a frequency of the specific pixel;
determining a gain of the specific pixel based on the frequency of the specific pixel and a frequency adjustment threshold associated with the first decomposed image; and
adjusting the frequency of the specific pixel based on the gain of the specific pixel.
US17/342,695 2016-06-21 2021-06-09 Systems and methods for image processing Abandoned US20210295480A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/342,695 US20210295480A1 (en) 2016-06-21 2021-06-09 Systems and methods for image processing

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
CN201610456890.6 2016-06-21
CN201610456890.6A CN106097286B (en) 2016-06-21 2016-06-21 A kind of method and device of image procossing
CN201710021180.5A CN106780400B (en) 2017-01-11 2017-01-11 Image processing method and device
CN201710021180.5 2017-01-11
PCT/CN2017/089192 WO2017219962A1 (en) 2016-06-21 2017-06-20 Systems and methods for image processing
US16/219,907 US11094045B2 (en) 2016-06-21 2018-12-13 Systems and methods for image processing
US17/342,695 US20210295480A1 (en) 2016-06-21 2021-06-09 Systems and methods for image processing

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/219,907 Continuation US11094045B2 (en) 2016-06-21 2018-12-13 Systems and methods for image processing

Publications (1)

Publication Number Publication Date
US20210295480A1 (en) 2021-09-23

Family

ID=60783817

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/219,907 Active 2037-07-20 US11094045B2 (en) 2016-06-21 2018-12-13 Systems and methods for image processing
US17/342,695 Abandoned US20210295480A1 (en) 2016-06-21 2021-06-09 Systems and methods for image processing

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/219,907 Active 2037-07-20 US11094045B2 (en) 2016-06-21 2018-12-13 Systems and methods for image processing

Country Status (3)

Country Link
US (2) US11094045B2 (en)
EP (1) EP3459043A4 (en)
WO (1) WO2017219962A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210272252A1 (en) * 2018-11-21 2021-09-02 Zhejiang Dahua Technology Co., Ltd. Systems and methods for image processing

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110751024A (en) * 2019-09-06 2020-02-04 平安科技(深圳)有限公司 User identity identification method and device based on handwritten signature and terminal equipment
CN111062860B (en) * 2019-11-12 2022-12-02 北京旷视科技有限公司 Image color adjusting method and device based on scene and computer equipment
CN112819736B (en) * 2021-01-13 2023-08-29 浙江理工大学 Workpiece character image local detail enhancement fusion method based on multiple exposure

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020159623A1 (en) * 2000-11-30 2002-10-31 Hiroyuki Shinbata Image processing apparatus, image processing method, storage medium, and program
US20020196907A1 (en) * 2001-06-19 2002-12-26 Hiroyuki Shinbata Image processing apparatus, image processing system, image processing method, program, and storage medium
US20040096106A1 (en) * 2002-09-18 2004-05-20 Marcello Demi Method and apparatus for contour tracking of an image through a class of non linear filters
US6771793B1 (en) * 1999-02-17 2004-08-03 Fuji Photo Film Co., Ltd. Image processing method and apparatus
US20040213457A1 (en) * 2003-04-10 2004-10-28 Seiko Epson Corporation Image processor, image processing method, and recording medium on which image processing program is recorded
US20060285767A1 (en) * 2005-06-20 2006-12-21 Ali Walid S Enhancing video sharpness and contrast by luminance and chrominance transient improvement
US20070092137A1 (en) * 2005-10-20 2007-04-26 Sharp Laboratories Of America, Inc. Methods and systems for automatic digital image enhancement with local adjustment
US20080310714A1 (en) * 2007-06-13 2008-12-18 Sensors Unlimited, Inc. Method and Apparatus for Enhancing Images
US20130079626A1 (en) * 2011-09-26 2013-03-28 Andriy Shmatukha Systems and methods for automated dynamic contrast enhancement imaging
US20140093139A1 (en) * 2012-09-28 2014-04-03 Fujifilm Corporation Image evaluation device, image evaluation method and program storage medium
US20150348238A1 (en) * 2014-05-28 2015-12-03 Fuji Xerox Co., Ltd. Image processing apparatus, and non-transitory computer readable medium
US20150348247A1 (en) * 2014-05-30 2015-12-03 Zonare Medical Systems, Inc. Systems and methods for selective enhancement of a region of interest in an image
US20170308995A1 (en) * 2014-11-13 2017-10-26 Nec Corporation Image signal processing apparatus, image signal processing method and image signal processing program

Family Cites Families (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3003561B2 (en) * 1995-09-25 2000-01-31 松下電器産業株式会社 Gradation conversion method and circuit, image display method and apparatus, and image signal conversion apparatus
US6829778B1 (en) 2000-11-09 2004-12-07 Koninklijke Philips Electronics N.V. Method and system for limiting repetitive presentations based on content filtering
SG118191A1 (en) * 2003-06-27 2006-01-27 St Microelectronics Asia Method and system for contrast enhancement of digital video
FI116327B (en) * 2003-09-24 2005-10-31 Nokia Corp Method and system for automatically adjusting color balance in a digital image processing chain, corresponding hardware and software means for implementing the method
CN1300744C (en) * 2003-12-09 2007-02-14 香港中文大学 Automatic method for modifying digital image and system of adopting the method
JP4143549B2 (en) * 2004-01-28 2008-09-03 キヤノン株式会社 Image processing apparatus and method, computer program, and computer-readable storage medium
JP3930493B2 (en) 2004-05-17 2007-06-13 ジーイー・メディカル・システムズ・グローバル・テクノロジー・カンパニー・エルエルシー Image processing method, image processing apparatus, and X-ray CT apparatus
KR20060081536A (en) * 2005-01-10 2006-07-13 삼성전자주식회사 The black/white stretching system using rgb information in the image and the method thereof
EP1880363A4 (en) * 2005-03-31 2010-02-10 Agency Science Tech & Res Method and apparatus for image segmentation
US7660461B2 (en) * 2006-04-21 2010-02-09 Sectra Ab Automated histogram characterization of data sets for image visualization using alpha-histograms
US7796830B2 (en) * 2006-08-15 2010-09-14 Nokia Corporation Adaptive contrast optimization of digital color images
JP2009290660A (en) * 2008-05-30 2009-12-10 Seiko Epson Corp Image processing apparatus, image processing method, image processing program and printer
CN101727658B (en) * 2008-10-14 2012-12-26 深圳迈瑞生物医疗电子股份有限公司 Image processing method and device
US20100278423A1 (en) * 2009-04-30 2010-11-04 Yuji Itoh Methods and systems for contrast enhancement
JP5493717B2 (en) * 2009-10-30 2014-05-14 大日本印刷株式会社 Image processing apparatus, image processing method, and image processing program
JP5031877B2 (en) * 2010-01-06 2012-09-26 キヤノン株式会社 Image processing apparatus and image processing method
JP2012058850A (en) * 2010-09-06 2012-03-22 Sony Corp Information processing device and method, and program
CN101980521B (en) * 2010-11-23 2013-02-13 华亚微电子(上海)有限公司 Image sharpening method and related device
JP5889013B2 (en) 2012-02-01 2016-03-22 キヤノン株式会社 Image processing apparatus and image processing method
EP2733933A1 (en) * 2012-09-19 2014-05-21 Thomson Licensing Method and apparatus of compensating illumination variations in a sequence of images
US9324137B2 (en) 2012-10-24 2016-04-26 Marvell World Trade Ltd. Low-frequency compression of high dynamic range images
CN104123697B (en) * 2013-04-23 2017-11-17 华为技术有限公司 A kind of image enchancing method and equipment
US8885105B1 (en) * 2013-06-17 2014-11-11 Cyberlink Corp. Systems and methods for performing region-based local contrast enhancement
WO2015013719A1 (en) * 2013-07-26 2015-01-29 Li-Cor, Inc. Adaptive noise filter
US9218652B2 (en) * 2013-07-26 2015-12-22 Li-Cor, Inc. Systems and methods for setting initial display settings
GB2519336B (en) * 2013-10-17 2015-11-04 Imagination Tech Ltd Tone Mapping
CN104021531A (en) 2014-06-18 2014-09-03 厦门美图之家科技有限公司 Improved method for enhancing dark environment images on basis of single-scale Retinex
US9691211B2 (en) * 2014-07-03 2017-06-27 Seiko Epson Corporation Image processing apparatus, image processing method, and program
US9710715B2 (en) * 2014-12-26 2017-07-18 Ricoh Company, Ltd. Image processing system, image processing device, and image processing method
CN104504721A (en) 2015-01-08 2015-04-08 中国科学院合肥物质科学研究院 Unstructured road detecting method based on Gabor wavelet transformation texture description
CN105376498A (en) * 2015-10-16 2016-03-02 凌云光技术集团有限责任公司 Image processing method and system for expanding dynamic range of camera
MX2018006330A (en) * 2015-11-24 2018-08-29 Koninklijke Philips Nv Handling multiple hdr image sources.
JP6675194B2 (en) * 2015-12-15 2020-04-01 キヤノン株式会社 Imaging device, control method therefor, program, and storage medium
WO2017132858A1 (en) * 2016-02-03 2017-08-10 Chongqing University Of Posts And Telecommunications Methods, systems, and media for image processing
CN106097286B (en) * 2016-06-21 2019-02-12 浙江大华技术股份有限公司 A kind of method and device of image procossing
CN106780400B (en) * 2017-01-11 2020-08-04 浙江大华技术股份有限公司 Image processing method and device

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6771793B1 (en) * 1999-02-17 2004-08-03 Fuji Photo Film Co., Ltd. Image processing method and apparatus
US20020159623A1 (en) * 2000-11-30 2002-10-31 Hiroyuki Shinbata Image processing apparatus, image processing method, storage medium, and program
US20020196907A1 (en) * 2001-06-19 2002-12-26 Hiroyuki Shinbata Image processing apparatus, image processing system, image processing method, program, and storage medium
US20040096106A1 (en) * 2002-09-18 2004-05-20 Marcello Demi Method and apparatus for contour tracking of an image through a class of non linear filters
US7272241B2 (en) * 2002-09-18 2007-09-18 Consiglio Nazionale Delle Ricerche Method and apparatus for contour tracking of an image through a class of non linear filters
US7508543B2 (en) * 2003-04-10 2009-03-24 Seiko Epson Corporation Image processor, image processing method, and recording medium on which image processing program is recorded
US20040213457A1 (en) * 2003-04-10 2004-10-28 Seiko Epson Corporation Image processor, image processing method, and recording medium on which image processing program is recorded
US20060285767A1 (en) * 2005-06-20 2006-12-21 Ali Walid S Enhancing video sharpness and contrast by luminance and chrominance transient improvement
US20070092137A1 (en) * 2005-10-20 2007-04-26 Sharp Laboratories Of America, Inc. Methods and systems for automatic digital image enhancement with local adjustment
US20080310714A1 (en) * 2007-06-13 2008-12-18 Sensors Unlimited, Inc. Method and Apparatus for Enhancing Images
US8218868B2 (en) * 2007-06-13 2012-07-10 Sensors Unlimited, Inc. Method and apparatus for enhancing images
US20130079626A1 (en) * 2011-09-26 2013-03-28 Andriy Shmatukha Systems and methods for automated dynamic contrast enhancement imaging
US20140093139A1 (en) * 2012-09-28 2014-04-03 Fujifilm Corporation Image evaluation device, image evaluation method and program storage medium
US9213894B2 (en) * 2012-09-28 2015-12-15 Fujifilm Corporation Image evaluation device, image evaluation method and program storage medium
US20150348238A1 (en) * 2014-05-28 2015-12-03 Fuji Xerox Co., Ltd. Image processing apparatus, and non-transitory computer readable medium
US9330473B2 (en) * 2014-05-28 2016-05-03 Fuji Xerox Co., Ltd. Image processing apparatus, and non-transitory computer readable medium
US20150348247A1 (en) * 2014-05-30 2015-12-03 Zonare Medical Systems, Inc. Systems and methods for selective enhancement of a region of interest in an image
US20170308995A1 (en) * 2014-11-13 2017-10-26 Nec Corporation Image signal processing apparatus, image signal processing method and image signal processing program

Also Published As

Publication number Publication date
US11094045B2 (en) 2021-08-17
US20190164263A1 (en) 2019-05-30
EP3459043A1 (en) 2019-03-27
EP3459043A4 (en) 2019-04-24
WO2017219962A1 (en) 2017-12-28

Similar Documents

Publication Publication Date Title
US20210295480A1 (en) Systems and methods for image processing
US11030731B2 (en) Systems and methods for fusing infrared image and visible light image
US10643306B2 (en) Image signal processor for processing images
US10319115B2 (en) Image compression device
US10708525B2 (en) Systems and methods for processing low light images
US11341618B2 (en) Systems and methods for noise reduction
US10311547B2 (en) Image upscaling system, training method thereof, and image upscaling method
US20210272252A1 (en) Systems and methods for image processing
US20220156949A1 (en) Information processing method and system
CN113034358A (en) Super-resolution image processing method and related device
CN115496668A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114627034A (en) Image enhancement method, training method of image enhancement model and related equipment
US20210279530A1 (en) Systems and methods for image fusion
CN113628115B (en) Image reconstruction processing method, device, electronic equipment and storage medium
US10567777B2 (en) Contrast optimization and local adaptation approach for high dynamic range compression
WO2020133462A1 (en) Methods and systems for image processing
CN115063301A (en) Video denoising method, video processing method and device
CN111401477B (en) Image processing method, apparatus, electronic device, and computer-readable storage medium
US20230140865A1 (en) Image processing method and image processing apparatus
US20220375048A1 (en) Electronic apparatus and image processing method thereof
US11961214B2 (en) Electronic apparatus and image processing method thereof
CN116563190B (en) Image processing method, device, computer equipment and computer readable storage medium
US20200244939A1 (en) Methods and devices for processing images
US20230245279A1 (en) Methods and systems for denoising media frames captured in low-light environment
KR20220158525A (en) Electronic apparatus and image processing method thereof

Legal Events

Date Code Title Description
AS Assignment

Owner name: ZHEJIANG DAHUA TECHNOLOGY CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YANG, CHANGJIU;JIANG, XIAOTAO;REEL/FRAME:056539/0358

Effective date: 20170627

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION