WO2022259323A1 - Image processing device, image processing method, and image processing program - Google Patents

Image processing device, image processing method, and image processing program

Info

Publication number
WO2022259323A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image processing
correction parameter
types
histogram
Prior art date
Application number
PCT/JP2021/021592
Other languages
English (en)
Japanese (ja)
Inventor
和樹 出口
俊明 久保
Original Assignee
三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority date
Filing date
Publication date
Application filed by 三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority to PCT/JP2021/021592 (WO2022259323A1)
Priority to JP2023527167A (JP7496935B2)
Publication of WO2022259323A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing

Definitions

  • the present disclosure relates to an image processing device, an image processing method, and an image processing program.
  • Images with various resolutions and various compression methods are input to the device.
  • the various resolutions are 8K, 4K, full HD (High Definition), SD (Standard Definition), and the like.
  • the various compression schemes are H.265/HEVC, H.264/AVC, MPEG-2, and the like.
  • the device may be equipped with an image quality enhancement function, and it is desirable that the device performs this enhancement optimally. For example, when a blurry image is input, processing is executed to sharpen the image. Also, for example, when a video with a lot of noise is input, processing is executed so that the noise is not emphasized.
  • an image quality control device for controlling image quality has been proposed (see Patent Document 1).
  • the image quality control device of Patent Document 1 generates histograms of all luminance information, chromaticity information, color information, and frequency information obtained from an image included in a video signal.
  • the image quality control device extracts histogram patterns corresponding to all histograms from a reference table having a plurality of preset histogram patterns.
  • the image quality control device controls the image quality of the video signal based on control parameters corresponding to the extracted histogram pattern.
  • the purpose of this disclosure is to optimize image quality for various images.
  • the image processing apparatus includes an acquisition unit that acquires an image and a trained model; a plurality of feature amount extraction units that extract a plurality of types of feature amounts based on the image; a correction parameter detection unit that detects a correction parameter, which is a parameter for correcting the image quality of the image, using the plurality of types of feature amounts and the trained model; and an image processing unit that executes processing for correcting the image quality of the image using the correction parameter.
  • image quality optimization can be performed for various images.
  • FIG. 1 illustrates hardware included in the image processing apparatus according to Embodiment 1.
  • FIG. 2 is a block diagram showing functions of the image processing apparatus according to Embodiment 1.
  • FIG. 3 is a diagram showing an example (part 1) of horizontal and vertical edge histograms according to Embodiment 1.
  • FIG. 4 is a diagram showing an example (part 2) of horizontal and vertical edge histograms according to Embodiment 1.
  • FIG. 5 is a diagram showing a specific example of a method for extracting a frame difference histogram according to Embodiment 1.
  • FIG. 6 is a diagram showing a specific example of a method for extracting the number of luminance changes according to Embodiment 1.
  • FIG. 7 is a diagram showing a specific example of a saturation histogram extraction method according to Embodiment 1.
  • FIG. 8 is a diagram showing an example of a neural network according to Embodiment 1.
  • FIG. 9 is a diagram showing an example of a trained model that constitutes a random forest according to Embodiment 1.
  • FIG. 10 is a diagram showing an example of a support vector machine according to Embodiment 1.
  • FIG. 11 is a flowchart showing an example of processing executed by the image processing apparatus according to Embodiment 1.
  • FIG. 12 is a block diagram showing functions of an image processing apparatus according to Embodiment 2.
  • FIG. 13 is a flowchart showing an example of processing executed by the image processing apparatus according to Embodiment 2.
  • FIG. 1 illustrates hardware included in an image processing apparatus according to a first embodiment.
  • the image processing device 100 is a device that executes an image processing method.
  • the image processing apparatus 100 has a processor 101, a volatile storage device 102, and a nonvolatile storage device 103.
  • the processor 101 controls the image processing apparatus 100 as a whole.
  • the processor 101 is a CPU (Central Processing Unit), FPGA (Field Programmable Gate Array), or the like.
  • Processor 101 may be a multiprocessor.
  • the image processing apparatus 100 may have a processing circuit.
  • the processing circuit may be a single circuit or multiple circuits.
  • the volatile storage device 102 is the main storage device of the image processing device 100.
  • the volatile storage device 102 is a RAM (Random Access Memory).
  • the nonvolatile storage device 103 is the auxiliary storage device of the image processing apparatus 100.
  • the nonvolatile storage device 103 is an HDD (Hard Disk Drive) or an SSD (Solid State Drive).
  • FIG. 2 is a block diagram showing functions of the image processing apparatus according to the first embodiment.
  • the image processing apparatus 100 includes a storage unit 110, an acquisition unit 120, feature quantity extraction units 130_1, 130_2, ..., 130_n, a correction parameter detection unit 140, an image processing unit 150, and an output unit 160. In the following description, the feature quantity extraction units 130_1 to 130_n may be referred to collectively as the feature quantity extraction unit 130.
  • n is a positive integer.
  • the storage unit 110 may be implemented as a storage area secured in the volatile storage device 102 or the nonvolatile storage device 103.
  • a part or all of the acquisition unit 120, the feature amount extraction unit 130, the correction parameter detection unit 140, the image processing unit 150, and the output unit 160 may be realized by a processing circuit.
  • Part or all of the acquisition unit 120, the feature amount extraction unit 130, the correction parameter detection unit 140, the image processing unit 150, and the output unit 160 may be implemented as modules of a program executed by the processor 101.
  • the program executed by the processor 101 is also called an image processing program.
  • an image processing program is recorded on a recording medium.
  • the image processing device 100 may acquire analog images.
  • in this case, the image processing device 100 has an A/D (Analog/Digital) converter.
  • the storage unit 110 may store images and trained models. Moreover, the storage unit 110 may store a video.
  • the acquisition unit 120 acquires an image. For example, the acquisition unit 120 acquires the image from the storage unit 110. Also, for example, the acquisition unit 120 acquires the image from a camera (not illustrated). The acquisition unit 120 also acquires video; in other words, the acquisition unit 120 acquires multiple images.
  • the acquisition unit 120 acquires a learned model.
  • the acquisition unit 120 acquires the trained model from the storage unit 110.
  • the trained model may be stored in an external device (for example, cloud server). If the trained model is stored in the external device, the obtaining unit 120 obtains the trained model from the external device.
  • the feature amount extraction unit 130 extracts a plurality of types of feature amounts based on the image. In other words, the feature amount extraction unit 130 extracts a plurality of feature amounts of different types based on the image. A specific description will be given of the feature amount extraction processing.
  • the feature quantity extraction unit 130 extracts a horizontal edge histogram, which is one feature quantity, based on the image. Specifically, the feature amount extraction unit 130 calculates the luminance difference value between horizontally adjacent pixels as the edge strength, based on the luminance values of the image. For example, the feature amount extraction unit 130 extracts a horizontal edge histogram expressed by dividing an edge strength of 1024 gradations into 16 bins. FIG. 3 illustrates a horizontal edge histogram.
  • FIG. 3 is a diagram showing an example (part 1) of a horizontal edge histogram according to the first embodiment.
  • the vertical axis indicates the appearance frequency.
  • the horizontal axis indicates edge strength.
  • the feature amount extraction unit 130 extracts a horizontal edge amount, which is one feature amount. Specifically, a method for extracting the horizontal edge amount will be described.
  • FIG. 4 is a diagram showing an example (part 2) of the horizontal edge histogram according to the first embodiment. Assume that the horizontal edge histogram shown in FIG. 4 has been extracted.
  • the feature quantity extraction unit 130 extracts, as the horizontal edge amount, the total appearance frequency between the minimum value (edge_left in FIG. 4) and the maximum value (edge_right in FIG. 4) of the edge strength in a specified range of the horizontal edge histogram. Note that the range may be a preset range.
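  • as an illustration of the horizontal edge histogram and the horizontal edge amount, the following is a minimal sketch assuming NumPy; the function names and the example bin window are illustrative, not from the patent.

```python
import numpy as np

def horizontal_edge_histogram(luma, bins=16, max_strength=1024):
    # Edge strength: absolute luminance difference between horizontally
    # adjacent pixels (use axis=0 instead for the vertical edge histogram).
    strength = np.abs(np.diff(luma.astype(np.int32), axis=1))
    hist, _ = np.histogram(strength, bins=bins, range=(0, max_strength))
    return hist

def horizontal_edge_amount(hist, edge_left, edge_right):
    # Total appearance frequency between the bins edge_left..edge_right.
    return int(hist[edge_left:edge_right + 1].sum())

rng = np.random.default_rng(0)
luma = rng.integers(0, 1024, size=(480, 640))  # toy 10-bit luminance image
print(horizontal_edge_amount(horizontal_edge_histogram(luma), 2, 5))
```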
  • the feature quantity extraction unit 130 extracts a vertical edge histogram, which is one feature quantity, based on the image. Specifically, the feature quantity extraction unit 130 calculates the luminance difference value between vertically adjacent pixels as the edge strength, based on the luminance values of the image. For example, the feature amount extraction unit 130 extracts a vertical edge histogram expressed by dividing an edge strength of 1024 gradations into 16 bins. For example, a vertical edge histogram can be expressed as shown in FIG. 3. Therefore, FIG. 3 may also be considered a diagram showing an example of a vertical edge histogram.
  • the feature quantity extraction unit 130 extracts a vertical edge quantity, which is one feature quantity.
  • the method for extracting the vertical edge amount is the same as the method for extracting the horizontal edge amount. For example, considering FIG. 4 as an example of a vertical edge histogram, the feature quantity extraction unit 130 extracts, as the vertical edge amount, the total appearance frequency between the minimum value and the maximum value of the edge strength in a specified range of the vertical edge histogram.
  • the feature quantity extraction unit 130 uses an image to extract a frame difference histogram, which is one feature quantity. A method for extracting a frame difference histogram will be specifically described.
  • FIG. 5 is a diagram showing a specific example of a frame difference histogram extraction method according to the first embodiment.
  • FIG. 5 shows a luminance histogram based on the previously acquired image (for example, the (n-1)th frame) and a luminance histogram based on the currently acquired image (for example, the nth frame).
  • the vertical axis of these luminance histograms indicates the number of pixels.
  • the horizontal axis of these luminance histograms indicates luminance.
  • a luminance histogram is obtained by dividing the luminance, taken as the maximum of the RGB (Red, Green, Blue) values of each pixel, into 16 bins. That is, the luminance histogram summarizes the luminance in 16 gradations. Note that the luminance histogram may instead be binned at 1 gradation or 8 gradations.
  • the feature amount extraction unit 130 extracts a frame difference histogram by calculating the difference between the luminance histogram based on the previously acquired image and the luminance histogram based on the currently acquired image.
  • FIG. 5 shows a frame difference histogram.
  • the vertical axis of the frame difference histogram indicates the number of pixels.
  • the horizontal axis of the frame difference histogram indicates luminance. Note that the frame difference histogram may also be called a difference luminance histogram.
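  • a minimal sketch of the frame difference histogram, assuming NumPy and 8-bit RGB frames (names are illustrative):

```python
import numpy as np

def luminance_histogram(rgb, bins=16):
    # Luminance taken as the per-pixel maximum of R, G, B, summarized
    # into 16 gradations (1 or 8 gradations would also work).
    luma = rgb.max(axis=2)
    hist, _ = np.histogram(luma, bins=bins, range=(0, 256))
    return hist

def frame_difference_histogram(prev_rgb, curr_rgb):
    # Difference between the current and previous luminance histograms.
    return (luminance_histogram(curr_rgb).astype(np.int64)
            - luminance_histogram(prev_rgb).astype(np.int64))
```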
  • the feature quantity extraction unit 130 uses an image to extract a frequency histogram, which is one feature quantity. Moreover, the feature quantity extraction unit 130 may extract the number of luminance changes as a feature quantity. A method for extracting the number of luminance changes will be specifically described.
  • FIG. 6 is a diagram showing a specific example of a method for extracting the number of luminance changes according to the first embodiment.
  • the vertical axis of the graph in FIG. 6 indicates the luminance value.
  • the horizontal axis of the graph in FIG. 6 indicates pixel coordinates.
  • FIG. 6 shows nine pixels in the horizontal direction and luminance values of the nine pixels.
  • the feature amount extraction unit 130 extracts the number of pixels whose luminance difference values between adjacent pixels in the horizontal direction are equal to or greater than a threshold as the number of luminance changes.
  • for example, the feature amount extraction unit 130 extracts the (N+3)th pixel as a pixel whose luminance has changed.
  • in FIG. 6, pixels whose luminance has changed are represented by colored circles.
  • FIG. 6 shows a case where the number of luminance changes is five.
  • the feature amount extraction unit 130 may extract the total value of the number of luminance changes as the feature amount.
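  • the count can be sketched as follows (a minimal sketch assuming NumPy; the threshold value is application-dependent):

```python
import numpy as np

def luminance_change_count(luma, threshold):
    # Adjacent horizontal pixel pairs whose absolute luminance difference
    # is at least `threshold` count as luminance changes; summing over all
    # rows gives the total value mentioned above.
    diff = np.abs(np.diff(luma.astype(np.int32), axis=1))
    return int((diff >= threshold).sum())
```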
  • the feature amount extraction unit 130 uses an image to extract a saturation histogram, which is one feature amount. A method for extracting a saturation histogram will be specifically described.
  • FIG. 7 is a diagram showing a specific example of a saturation histogram extraction method according to the first embodiment.
  • the vertical axis indicates the appearance frequency.
  • the horizontal axis indicates saturation.
  • the feature amount extraction unit 130 extracts the saturation based on the maximum of the absolute values of the U value and the V value calculated in the CIE UVW color space.
  • the feature amount extraction unit 130 extracts a saturation histogram expressed by dividing a saturation of 512 gradations into 16 bins.
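  • a sketch of the saturation histogram follows. The BT.601-style RGB-to-UV conversion is an assumption (the patent only names the color space); saturation is taken as the larger of |U| and |V| per pixel and binned into 16 bins over 512 gradations.

```python
import numpy as np

def saturation_histogram(rgb, bins=16, max_sat=512):
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    u = -0.169 * r - 0.331 * g + 0.500 * b  # BT.601-style U (assumed)
    v = 0.500 * r - 0.419 * g - 0.081 * b   # BT.601-style V (assumed)
    sat = np.maximum(np.abs(u), np.abs(v))  # max of the absolute values
    hist, _ = np.histogram(sat, bins=bins, range=(0, max_sat))
    return hist
```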
  • the feature amount may be a feature amount other than the above.
  • the feature amount is the maximum luminance of the image, the minimum luminance of the image, the average luminance of the image, the value of the black area in the image, the value of the white area in the image, and the like.
  • the correction parameter detection unit 140 detects correction parameters using a plurality of types of feature quantities and learned models. That is, the correction parameter detection unit 140 detects correction parameters output by the learned model by inputting a plurality of types of feature amounts to the learned model.
  • the correction parameter is a parameter for correcting image quality of an image.
  • the correction parameter may be expressed as a parameter for improving the image quality of the image.
  • the trained model may be configured as a multi-layer neural network. FIG. 8 is a diagram showing an example of a neural network according to Embodiment 1.
  • a neural network consists of an input layer, intermediate layers, and an output layer.
  • FIG. 8 shows a case where the number of intermediate layers is three.
  • the number of intermediate layers is not limited to three.
  • the number of neurons is not limited to the number in the example of FIG.
  • Multiple types of features are assigned to multiple neurons in the input layer.
  • one neuron in the input layer is assigned a horizontal edge amount.
  • the correction parameter y is calculated by Equation (1) based on the multiple types of feature amounts input to the neurons: y = s(w1·x1 + w2·x2 + … + wn·xn + b) … (1)
  • n is the number of neurons in the input layer.
  • x1 to xn are a plurality of types of feature amounts.
  • b is the bias.
  • w is the weight. Bias b and weight w are determined by learning.
  • s indicates a function.
  • Function s(a) is the activation function.
  • the activation function s(a) may be a step function that outputs 0 if a is 0 or less and outputs 1 otherwise.
  • the activation function s(a) may also be a ReLU function that outputs 0 if a is 0 or less and otherwise outputs the input value a, an identity function that outputs the input value a as it is, or a sigmoid function.
  • the input layer neurons output the input values as they are. Therefore, it can be said that the activation function used in the neurons of the input layer is the identity function.
  • for example, a step function or a sigmoid function may be used in the intermediate layers, and a ReLU function may be used in the output layer. Also, different functions may be used among neurons in the same layer.
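  • Equation (1) for a single output neuron can be sketched as below; the weights, bias, and the choice of ReLU are placeholders, since in the patent they are determined by learning.

```python
import numpy as np

def neuron_output(x, w, b):
    a = float(np.dot(w, x) + b)  # a = w1*x1 + ... + wn*xn + b
    return max(a, 0.0)           # s(a): ReLU activation

features = np.array([1200.0, 300.0, 45.0])  # e.g. edge amounts and counts
weights = np.array([0.01, -0.02, 0.05])     # learned in practice
print(neuron_output(features, weights, b=0.1))
```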
  • the weight w is determined by learning.
  • An example of a method of calculating the weight w will be described.
  • in machine learning, the image quality correction processing is defined to include noise reduction processing, sharpening processing, contrast improvement processing, and color conversion processing.
  • the values used in the noise reduction processing, sharpening processing, contrast improvement processing, and color conversion processing are varied to improve the image quality.
  • the difference between the output value corresponding to an input pattern and a prepared correction parameter is calculated, and the weight w is determined so that the difference becomes small.
  • the trained model may be composed of a random forest.
  • FIG. 9 is a diagram showing an example of a trained model that constitutes the random forest according to Embodiment 1.
  • FIG. 9 shows a random forest 200.
  • the random forest 200 includes decision trees 210_1, 210_2, ..., 210_n. n is a positive integer.
  • Decision tree 210 comprises a plurality of nodes. In the decision tree 210, tree-structured classification rules are created by stacking nodes.
  • Each of the multiple decision trees 210 is assigned one of multiple types of feature quantities.
  • decision tree 210_1 is assigned a horizontal edge amount.
  • the feature amount is input to the first-layer node of the decision tree 210.
  • the horizontal edge amount is input to the first layer node of the decision tree 210_1.
  • a branch condition is defined for the node.
  • the branch condition stipulates that the horizontal edge amount is 1000 or more.
  • the branch condition is determined by the condition that maximizes the information gain of the input data.
  • information gain refers to the degree of classification (also referred to as impurity) of data when classifying the data at a certain node x to the next nodes. Maximizing the information gain means maximizing "(impurity before classification) − (impurity after classification)" at a certain node x, that is, minimizing the impurity after classification.
  • the impurity g(T) is calculated using Equation (2): g(T) = 1 − Σ_{i=1..c} (n(Ci, T) / |T|)² … (2)
  • g is the Gini coefficient.
  • the Gini coefficient is an index that indicates whether data classification is successful.
  • T indicates the data input to the node.
  • n(Ci, T) indicates the number of data belonging to class Ci in the input data T.
  • c indicates the number of classes.
  • the information gain IG is calculated using Equation (3): IG = g(Tb) − Σ_{i=1..z} (Ni / Np)·g(Di) … (3)
  • Tb indicates the data before classification.
  • z indicates the number of nodes after classification.
  • Ni indicates the number of data at node i after classification.
  • Np indicates the number of data before classification.
  • Di indicates the data at node i after classification.
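  • Equations (2) and (3) follow directly from the definitions above; a minimal sketch (helper names are illustrative):

```python
from collections import Counter

def gini_impurity(labels):
    # g(T) = 1 - sum over classes of (class count / |T|)^2, Equation (2)
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def information_gain(parent, children):
    # IG = g(Tb) - sum_i (Ni / Np) * g(Di), Equation (3)
    n_p = len(parent)
    return gini_impurity(parent) - sum(
        len(d) / n_p * gini_impurity(d) for d in children)

# A split that separates the two classes perfectly maximizes the gain.
print(information_gain([0, 0, 1, 1], [[0, 0], [1, 1]]))  # 0.5
```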
  • branching conditions are defined for nodes.
  • Branching conditions are defined by machine learning.
  • An example of a method of calculating a branching condition in machine learning will be described.
  • in machine learning, the image quality correction processing is defined to include noise reduction processing, sharpening processing, contrast improvement processing, and color conversion processing.
  • the values used in the noise reduction processing, sharpening processing, contrast improvement processing, and color conversion processing are varied to improve the image quality.
  • the difference between the output value corresponding to an input pattern and a prepared correction parameter is calculated, and the branching condition is determined so that the difference becomes small.
  • the correction parameter is output by majority voting over the results output from the plurality of decision trees 210.
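  • as a sketch of the random forest 200, the snippet below trains a small forest whose trees vote by majority on a toy class label standing in for a correction parameter; the use of scikit-learn and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))  # rows: vectors of feature amounts
y = (X[:, 0] > 0).astype(int)  # toy stand-in for a correction parameter
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
print(forest.predict(X[:3]))   # majority vote over the 10 trees
```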
  • the trained model may be a trained model generated using a support vector machine. In that case, the trained model reflects support vector machine technology.
  • an example of such a trained model is illustrated next.
  • FIG. 10 is a diagram showing an example of the support vector machine of Embodiment 1.
  • Support vector machines utilize linear input elements.
  • a support vector machine receives input of multiple types of features.
  • Each of the plurality of types of feature quantities is classified into two classes by linear input elements.
  • the linear input element is calculated using Equation (4), which minimizes (1/2)·||w||² + C·Σi ξi … (4)
  • Equations (5) and (6) are used as constraints when calculating Equation (4): ti·(wᵀ·φ(xi) + b) ≥ 1 − ξi … (5), ξi ≥ 0 … (6)
  • xi indicates the input data (that is, the feature amounts).
  • wᵀ·φ(x) + b = 0 denotes the linear straight line separating the data.
  • w indicates the slope of the linear input element.
  • ξi indicates a slack variable that weakens the constraint of Equation (5).
  • C denotes a regularization coefficient taking a positive value.
  • ti is a variable (that is, 1 or −1) for making ti·(wᵀ·φ(xi) + b) a positive value.
  • a kernel function may be used when the input data cannot be classified into two classes by the linear input element.
  • by using a kernel function, the input data are mapped into a higher-dimensional space, and a plane that can be linearly classified by the linear input element is calculated.
  • the kernel function may be an RBF kernel, a polynomial kernel, a linear kernel, or a sigmoid kernel.
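  • a minimal sketch of the soft-margin classifier with a kernel, using scikit-learn (an assumption; the patent describes the machine abstractly). C corresponds to the positive regularization coefficient and kernel="rbf" selects the RBF kernel named above.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))              # rows: feature-amount vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # two classes
clf = SVC(kernel="rbf", C=1.0).fit(X, y)  # kernel maps X to a space where
print(clf.predict(X[:3]))                 # a linear separator exists
```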
  • the linear input elements are calculated by machine learning.
  • An example of a method for calculating linear input elements in machine learning will be described.
  • in machine learning, the image quality correction processing is defined to include noise reduction processing, sharpening processing, contrast improvement processing, and color conversion processing.
  • the values used in the noise reduction processing, sharpening processing, contrast improvement processing, and color conversion processing are varied to improve the image quality.
  • the difference between the output value corresponding to an input pattern and a prepared correction parameter is calculated.
  • the linear input element is calculated so that the difference becomes small.
  • the image processing unit 150 uses the correction parameter to execute processing for correcting the image quality of the image. For example, the image processing unit 150 uses the correction parameter to execute processing for reducing noise in the image. For example, the processing for reducing noise is low-pass filtering. For example, the image processing unit 150 uses the correction parameter to correct an image with a lot of noise to an image with less noise. Also, for example, the image processing unit 150 uses the correction parameter to correct an image containing fine pattern noise to an image with less noise.
  • the image processing unit 150 executes processing for sharpening the image using the correction parameters.
  • processing for sharpening an image is high-pass filtering.
  • the image processing unit 150 corrects a noisy image to a sharpened image using correction parameters.
  • the image processing unit 150 corrects a blurred image to a sharpened image using correction parameters.
  • the image processing unit 150 may use the correction parameters to perform processing for improving the contrast of the image, processing for converting the color of the image, and the like.
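  • the corrections above can be sketched as parameter-driven filtering: a Gaussian low-pass for noise reduction and a boosted high-pass (unsharp mask) for sharpening. The SciPy dependency and the parameter names blur_sigma and detail_gain are assumptions; the patent only states that the correction parameter controls such processing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_image(luma, blur_sigma, detail_gain):
    # Low-pass component (noise reduction) and high-pass detail component.
    smoothed = gaussian_filter(luma.astype(np.float64), sigma=blur_sigma)
    detail = luma - smoothed
    # detail_gain < 1 suppresses noise; detail_gain > 1 sharpens.
    return np.clip(smoothed + detail_gain * detail, 0, 255)
```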
  • the output unit 160 outputs the corrected image.
  • the output unit 160 outputs the corrected image to a display. Thereby, the user can visually recognize the image with the optimum image quality. Illustration of the display is omitted.
  • the output unit 160 may output the corrected image to an external device.
  • the output unit 160 may output the corrected image to the storage unit 110.
  • FIG. 11 is a flowchart illustrating an example of processing executed by the image processing apparatus according to Embodiment 1.
  • (Step S11) The acquisition unit 120 acquires an image.
  • (Step S12) The feature amount extraction unit 130 extracts a plurality of types of feature amounts based on the image.
  • (Step S13) The correction parameter detection unit 140 detects a correction parameter using the plurality of types of feature amounts and the trained model.
  • (Step S14) The image processing unit 150 executes processing for correcting the image quality of the image using the correction parameter.
  • (Step S15) The output unit 160 outputs the corrected image.
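  • steps S11 to S15 can be tied together as below; the feature extractor and parameter detector are toy stand-ins for the extractors and the trained model described above, so the snippet runs on its own.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_features(luma):  # S12 (toy stand-in)
    return np.array([luma.mean(), luma.std()])

def detect_params(features):  # S13 (toy stand-in for the trained model)
    return (1.0, 1.5) if features[1] < 30 else (1.0, 0.7)

def process_frame(luma):  # S11 happens upstream; S15 outputs the result
    sigma, gain = detect_params(extract_features(luma))
    smoothed = gaussian_filter(luma.astype(np.float64), sigma=sigma)
    return np.clip(smoothed + gain * (luma - smoothed), 0, 255)  # S14
```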
  • the image processing apparatus 100 corrects the image using the correction parameters output from the trained model.
  • the trained model is generated by inputting various images as learning data and performing learning so that appropriate correction parameters corresponding to the various images are output.
  • therefore, the image processing apparatus 100 using the trained model can optimize the image quality of various images.
  • the multiple types of feature amounts are extracted before they are input to the trained model. That is, the trained model does not perform the processing for extracting the plurality of types of feature amounts from the image. Therefore, according to Embodiment 1, the trained model can be made lightweight, and the processing in the trained model can be sped up.
  • Embodiment 2 Next, Embodiment 2 will be described. In Embodiment 2, mainly matters different from Embodiment 1 will be described. In the second embodiment, descriptions of items common to the first embodiment are omitted.
  • FIG. 12 is a block diagram showing functions of the image processing apparatus according to the second embodiment. Components in FIG. 12 that are the same as those shown in FIG. 2 are assigned the same reference numerals as in FIG. 2.
  • the image processing device 100 further has a determination unit 170.
  • a part or all of the determination unit 170 may be implemented by a processing circuit. Also, part or all of the determination unit 170 may be implemented as a module of a program executed by the processor 101. The function of the determination unit 170 will be described in detail later.
  • FIG. 13 is a flowchart illustrating an example of processing executed by the image processing apparatus according to the second embodiment. The process of FIG. 13 differs from the process of FIG. 11 in that step S13a is executed. Therefore, only step S13a is described below; a description of the other steps is omitted.
  • (Step S13a) The determination unit 170 determines whether or not the difference between the detected correction parameter and a preset correction parameter is equal to or less than a preset threshold.
  • for example, the acquisition unit 120 acquires a plurality of images (for example, a video) over a certain period of time. Then, the feature quantity extraction unit 130 and the correction parameter detection unit 140 detect a plurality of correction parameters based on the plurality of images. As a result, a plurality of correction parameters detected over the certain period of time are obtained.
  • the preset correction parameter is an average value based on a plurality of correction parameters detected over a certain period of time.
  • if the difference is equal to or less than the threshold, the process proceeds to step S14. If the difference is greater than the threshold, the process ends.
  • the image processing apparatus 100 uses the threshold to determine whether or not to perform correction in order to suppress sudden changes in image quality. Therefore, according to the second embodiment, the image processing apparatus 100 can suppress sudden changes in image quality.
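  • step S13a can be sketched as a gate that compares each detected parameter against the average of the parameters detected over a recent window; the window length and threshold are illustrative.

```python
from collections import deque

class ParameterGate:
    def __init__(self, window=30, threshold=0.1):
        self.history = deque(maxlen=window)  # parameters over a fixed period
        self.threshold = threshold

    def should_apply(self, param):
        # Compare against the average of recently detected parameters.
        baseline = (sum(self.history) / len(self.history)
                    if self.history else param)
        self.history.append(param)
        return abs(param - baseline) <= self.threshold
```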
  • 100 image processing device, 101 processor, 102 volatile storage device, 103 nonvolatile storage device, 110 storage unit, 120 acquisition unit, 130, 130_1, 130_2, ..., 130_n feature amount extraction unit, 140 correction parameter detection unit, 150 image processing unit, 160 output unit, 170 determination unit, 200 random forest, 210, 210_1, 210_2, ..., 210_n decision tree.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

An image processing device (100) according to the present invention comprises: an acquisition unit (120) that acquires an image and a trained model; a plurality of feature amount extraction units (130_1 to 130_n) that extract a plurality of types of feature amounts based on the image; a correction parameter detection unit (140) that uses the plurality of types of feature amounts and the trained model to detect a correction parameter, which is a parameter for correcting the quality of the image; and an image processing unit (150) that uses the correction parameter to execute a process for correcting the quality of the image.
PCT/JP2021/021592 2021-06-07 2021-06-07 Image processing device, image processing method, and image processing program WO2022259323A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2021/021592 WO2022259323A1 (fr) 2021-06-07 2021-06-07 Image processing device, image processing method, and image processing program
JP2023527167A JP7496935B2 (ja) 2021-06-07 2021-06-07 Image processing device, image processing method, and image processing program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/021592 WO2022259323A1 (fr) 2021-06-07 2021-06-07 Image processing device, image processing method, and image processing program

Publications (1)

Publication Number Publication Date
WO2022259323A1 (fr)

Family

ID=84424995

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/021592 WO2022259323A1 (fr) 2021-06-07 2021-06-07 Image processing device, image processing method, and image processing program

Country Status (2)

Country Link
JP (1) JP7496935B2 (fr)
WO (1) WO2022259323A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003187215A (ja) * 2001-12-18 2003-07-04 Fuji Xerox Co Ltd 画像処理システム及び画像処理サーバ
JP2009151350A (ja) * 2007-12-18 2009-07-09 Nec Corp 画像補正方法および画像補正装置
WO2018150685A1 (fr) * 2017-02-20 2018-08-23 ソニー株式会社 Dispositif de traitement d'image, procédé de traitement d'image, et programme
JP2020188484A (ja) * 2016-06-02 2020-11-19 ソニー株式会社 画像処理装置と画像処理方法


Also Published As

Publication number Publication date
JP7496935B2 (ja) 2024-06-07
JPWO2022259323A1 (fr) 2022-12-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21945004; Country of ref document: EP; Kind code of ref document: A1)

ENP Entry into the national phase (Ref document number: 2023527167; Country of ref document: JP; Kind code of ref document: A)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 21945004; Country of ref document: EP; Kind code of ref document: A1)