CN117336423A - Image processing device, image capturing device, image processing method, and storage medium


Info

Publication number
CN117336423A
CN117336423A (Application CN202310755658.2A)
Authority
CN
China
Prior art keywords
image
processing
adjustment
processing target
contrast
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310755658.2A
Other languages
Chinese (zh)
Inventor
斋藤太郎
田中康一
岛田智大
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujifilm Corp
Original Assignee
Fujifilm Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujifilm Corp
Publication of CN117336423A


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/60Image enhancement or restoration using machine learning, e.g. neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/98Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Geometry (AREA)

Abstract

The invention provides an image processing device, an image capturing device, an image processing method, and a program capable of obtaining an image in which the influence of 1st AI processing is less noticeable than in a 1st image obtained by performing the 1st AI processing on a processing target image. The image processing device includes a processor. The processor performs the following processing: acquiring a 1st image and a 2nd image, the 1st image being obtained by performing 1st AI processing on the processing target image, and the 2nd image being obtained without performing the 1st AI processing on the processing target image; and adjusting the excess or deficiency of the 1st AI processing by synthesizing the 1st image and the 2nd image.

Description

Image processing device, image capturing device, image processing method, and storage medium
Technical Field
The present technology relates to an image processing apparatus, an imaging apparatus, an image processing method, and a storage medium.
Background
Patent document 1 discloses an image processing system including: a processing unit that processes an input image input to an input layer using a neural network having the input layer, an output layer, and an intermediate layer provided between the input layer and the output layer; and an adjustment unit that adjusts at least one internal parameter of one or more nodes included in the intermediate layer calculated by learning, based on data related to the input image when processing is performed after learning.
In the image processing system described in patent document 1, the input image is an image including noise, and the noise in the input image is removed or reduced by the processing performed by the processing unit.
Further, in the image processing system described in patent document 1, the neural network includes: a 1st neural network; a 2nd neural network; a dividing unit that divides the input image into a high-frequency component image and a low-frequency component image, inputs the high-frequency component image to the 1st neural network, and inputs the low-frequency component image to the 2nd neural network; and a synthesizing unit that synthesizes a 1st output image output from the 1st neural network and a 2nd output image output from the 2nd neural network. The adjustment unit adjusts the internal parameters of the 1st neural network according to the data related to the input image, and does not adjust the internal parameters of the 2nd neural network.
Further, patent document 1 discloses an image processing system including: a processing unit that generates a noise-reduced output image from the input image using a neural network; and an adjusting unit for adjusting the internal parameters of the neural network according to the imaging conditions of the input image.
Patent document 2 discloses a medical image processing apparatus including: an acquisition unit that acquires a 1 st image that is a medical image of a predetermined region of a subject; a high-quality image processing unit that generates a 2 nd image from the 1 st image using a high-quality image processing engine including a machine learning engine, the 2 nd image having a higher image quality than the 1 st image; and a display control unit that displays a composite image, which is obtained by compositing the 1 st image and the 2 nd image in a ratio obtained by using information related to at least a partial region of the 1 st image, on the display unit.
Patent document 3 discloses an electronic device including: a memory storing at least one command; and a processor electrically connected to the memory, obtaining a noise figure representing the quality of the input image from the input image by executing a command, and applying the input image and the noise figure to a learning network model including a plurality of layers, obtaining an output image in which the quality of the input image is improved, the processor providing the noise figure to at least one intermediate layer of the plurality of layers, the learning network model being a learned artificial intelligence model obtained by learning a relationship among the plurality of sample images, the noise figure for each sample image, and the original image for each sample image by an artificial intelligence algorithm.
Patent document 1: japanese patent application laid-open No. 2018-206382
Patent document 2: japanese patent laid-open No. 2020-166814
Patent document 3: japanese patent laid-open No. 2020-1843300
Disclosure of Invention
An embodiment of the present invention provides an image processing apparatus, an imaging apparatus, an image processing method, and a program capable of obtaining an image in which the influence of 1st AI processing is less noticeable than in a 1st image obtained by performing the 1st AI processing on a processing target image.
A 1st aspect of the present invention relates to an image processing apparatus including a processor configured to execute the following processing: acquiring a 1st image and a 2nd image, the 1st image being an image obtained by performing 1st AI processing on a processing target image, and the 2nd image being an image obtained without performing the 1st AI processing on the processing target image; and adjusting the excess or deficiency of the 1st AI processing by synthesizing the 1st image and the 2nd image.
A 2nd aspect of the present invention relates to the image processing apparatus according to the 1st aspect, wherein the 2nd image is an image obtained by performing non-AI processing, which does not use a neural network, on the processing target image.
A 3rd aspect of the present invention relates to an image processing apparatus including a processor configured to execute the following processing: acquiring a 1st image and a 2nd image, the 1st image being an image in which a non-noise element of a processing target image has been adjusted by performing 1st AI processing on the processing target image, and the 2nd image being an image obtained without performing the 1st AI processing on the processing target image; and adjusting the non-noise element by synthesizing the 1st image and the 2nd image.
A 4th aspect of the present invention relates to the image processing apparatus according to the 3rd aspect, wherein the 2nd image is an image in which the non-noise element has been adjusted by performing non-AI processing, which does not use a neural network, on the processing target image.
A 5th aspect of the present invention relates to the image processing apparatus according to the 3rd aspect, wherein the 2nd image is an image in which the non-noise element is not adjusted.
A 6th aspect of the present invention relates to the image processing apparatus according to any one of the 1st to 5th aspects, wherein the processor synthesizes the 1st image and the 2nd image at a ratio that adjusts the excess or deficiency of the 1st AI processing.
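As an illustrative sketch only (not part of this publication; the function name, the value range, and the use of NumPy are assumptions), the synthesis at a ratio described in the 1st and 6th aspects could be realized as a per-pixel weighted average of the 1st image and the 2nd image:

```python
import numpy as np

def synthesize(image_1: np.ndarray, image_2: np.ndarray, ratio: float) -> np.ndarray:
    """Blend the AI-processed 1st image and the non-AI 2nd image at `ratio`."""
    ratio = float(np.clip(ratio, 0.0, 1.0))
    img1 = image_1.astype(np.float32)
    img2 = image_2.astype(np.float32)
    blended = ratio * img1 + (1.0 - ratio) * img2
    return np.clip(blended, 0, 255).astype(np.uint8)
```

In this sketch, ratio = 1.0 keeps the full effect of the 1st AI processing, ratio = 0.0 discards it, and intermediate values tone the effect up or down.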
A 7th aspect of the present invention relates to the image processing apparatus according to the 6th aspect, wherein the 1st AI processing includes 1st correction processing of correcting, in an AI manner, a phenomenon that occurs in the processing target image due to a characteristic of the imaging apparatus, the 1st image includes a 1st correction image obtained by performing the 1st correction processing, and the processor adjusts an element derived from the 1st correction processing by synthesizing the 1st correction image and the 2nd image at the ratio.
An 8th aspect of the present invention relates to the image processing apparatus according to the 7th aspect, wherein the processor performs 2nd correction processing of correcting the phenomenon in a non-AI manner, the 2nd image includes a 2nd correction image obtained by performing the 2nd correction processing, and the processor adjusts the element derived from the 1st correction processing by synthesizing the 1st correction image and the 2nd correction image at the ratio.
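As a hedged illustration of a non-AI "2nd correction process", the sketch below corrects peripheral light falloff (vignetting) with a radial gain map. Vignetting is used here purely as an example of a phenomenon caused by a characteristic of the imaging apparatus; the publication does not limit the phenomenon to vignetting, and the gain model and parameter values are assumptions:

```python
import numpy as np

def correct_vignetting(image: np.ndarray, strength: float = 0.4) -> np.ndarray:
    """Non-AI correction of peripheral light falloff via a radial gain map."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
    r /= r.max()                              # normalized radius: 0 at center, 1 at corners
    gain = 1.0 / (1.0 - strength * r ** 2)    # inverse of an assumed falloff model
    img = image.astype(np.float32)
    if img.ndim == 3:
        gain = gain[..., None]                # broadcast the gain over color channels
    return np.clip(img * gain, 0, 255).astype(np.uint8)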
A 9th aspect of the present invention relates to the image processing apparatus according to the 7th or 8th aspect, wherein the characteristic includes an optical characteristic of the imaging apparatus.
A 10th aspect of the present invention relates to the image processing apparatus according to any one of the 6th to 9th aspects, wherein the 1st AI processing includes 1st change processing of changing, in an AI manner, a factor that governs a visual impression given by the processing target image, the 1st image includes a 1st change image obtained by performing the 1st change processing, and the processor adjusts an element derived from the 1st change processing by synthesizing the 1st change image and the 2nd image at the ratio.
An 11th aspect of the present invention relates to the image processing apparatus according to the 10th aspect, wherein the processor performs 2nd change processing of changing the factor in a non-AI manner, the 2nd image includes a 2nd change image obtained by performing the 2nd change processing, and the processor adjusts the element derived from the 1st change processing by synthesizing the 1st change image and the 2nd change image at the ratio.
A 12th aspect of the present invention relates to the image processing apparatus according to the 10th or 11th aspect, wherein the factor includes sharpness, color, gradation, resolution, blurring, the degree of emphasis of an edge region, image style, and/or image quality related to skin.
A 13th aspect of the present invention relates to the image processing apparatus according to any one of the 6th to 12th aspects, wherein the processing target image is a captured image obtained by the imaging apparatus imaging subject light formed on a light receiving surface by a lens of the imaging apparatus, the 1st image includes a 1st aberration correction image obtained by performing, as processing included in the 1st AI processing, aberration region correction processing of correcting, in an AI manner, a region of the captured image in which an aberration of the lens appears, the 2nd image includes a 2nd aberration correction image obtained by performing processing of correcting, in a non-AI manner, the region of the captured image in which the aberration of the lens appears, and the processor adjusts an element derived from the aberration region correction processing by synthesizing the 1st aberration correction image and the 2nd aberration correction image at the ratio.
A 14th aspect of the present invention relates to the image processing apparatus according to any one of the 6th to 13th aspects, wherein the 1st image includes a 1st color image obtained by performing, as processing included in the 1st AI processing, coloring processing of coloring the processing target image in an AI manner so that a 1st region and a 2nd region, which is a region different from the 1st region, can be distinguished from each other, the 2nd image includes a 2nd color image obtained by performing processing of changing a color of the processing target image in a non-AI manner, and the processor adjusts an element derived from the coloring processing by synthesizing the 1st color image and the 2nd color image at the ratio.
A 15th aspect of the present invention relates to the image processing apparatus according to the 14th aspect, wherein the 2nd color image is an image obtained by performing processing of coloring the processing target image in a non-AI manner so that the 1st region and the 2nd region can be distinguished from each other.
A 16th aspect of the present invention relates to the image processing apparatus according to the 14th or 15th aspect, wherein the processing target image is an image obtained by imaging a 1st subject, and the 1st region is a region within the processing target image in which a specific subject included in the 1st subject appears.
A 17th aspect of the present invention relates to the image processing apparatus according to any one of the 6th to 16th aspects, wherein the 1st image includes a 1st contrast adjustment image obtained by performing, as processing included in the 1st AI processing, 1st contrast adjustment processing of adjusting a contrast of the processing target image in an AI manner, the 2nd image includes a 2nd contrast adjustment image obtained by performing 2nd contrast adjustment processing of adjusting the contrast of the processing target image in a non-AI manner, and the processor adjusts an element derived from the 1st contrast adjustment processing by synthesizing the 1st contrast adjustment image and the 2nd contrast adjustment image at the ratio.
An 18th aspect of the present invention relates to the image processing apparatus according to the 17th aspect, wherein the processing target image is an image obtained by imaging a 2nd subject, the 1st contrast adjustment processing includes 3rd contrast adjustment processing of adjusting the contrast of the processing target image in an AI manner in accordance with the 2nd subject, the 2nd contrast adjustment processing includes 4th contrast adjustment processing of adjusting the contrast of the processing target image in a non-AI manner in accordance with the 2nd subject, the 1st image includes a 3rd contrast image obtained by performing the 3rd contrast adjustment processing, the 2nd image includes a 4th contrast image obtained by performing the 4th contrast adjustment processing, and the processor adjusts an element derived from the 3rd contrast adjustment processing by synthesizing the 3rd contrast image and the 4th contrast image at the ratio.
A 19th aspect of the present invention relates to the image processing apparatus according to the 17th or 18th aspect, wherein the 1st contrast adjustment processing includes 5th contrast adjustment processing of adjusting, in an AI manner, the contrast between a center pixel included in the processing target image and a plurality of adjacent pixels surrounding the center pixel, the 2nd contrast adjustment processing includes 6th contrast adjustment processing of adjusting the contrast between the center pixel and the plurality of adjacent pixels in a non-AI manner, the 1st image includes a 5th contrast image obtained by performing the 5th contrast adjustment processing, the 2nd image includes a 6th contrast image obtained by performing the 6th contrast adjustment processing, and the processor adjusts an element derived from the 5th contrast adjustment processing by synthesizing the 5th contrast image and the 6th contrast image at the ratio.
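A minimal non-AI sketch of adjusting the contrast between each pixel and its surrounding pixels, in the spirit of the 6th contrast adjustment processing (the window size, gain, and use of SciPy are assumptions, not taken from the publication):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adjust_local_contrast(image: np.ndarray, gain: float = 1.3, window: int = 7) -> np.ndarray:
    """Non-AI local contrast adjustment between a pixel and its neighborhood."""
    img = image.astype(np.float32)
    # Local mean of the window surrounding every pixel (per channel if color).
    size = (window, window, 1) if img.ndim == 3 else (window, window)
    local_mean = uniform_filter(img, size=size)
    # gain > 1 pushes each pixel away from the local mean; gain < 1 pulls it toward it.
    adjusted = local_mean + gain * (img - local_mean)
    return np.clip(adjusted, 0, 255).astype(np.uint8)
```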
A 20th aspect of the present invention relates to the image processing apparatus according to any one of the 6th to 19th aspects, wherein the 1st image includes a 1st resolution adjustment image obtained by performing, as processing included in the 1st AI processing, 1st resolution adjustment processing of adjusting a resolution of the processing target image in an AI manner, the 2nd image includes a 2nd resolution adjustment image obtained by performing 2nd resolution adjustment processing of adjusting the resolution in a non-AI manner, and the processor adjusts an element derived from the 1st resolution adjustment processing by synthesizing the 1st resolution adjustment image and the 2nd resolution adjustment image at the ratio.
A 21st aspect of the present invention relates to the image processing apparatus according to the 20th aspect, wherein the 1st resolution adjustment processing is processing of super-resolving the processing target image in an AI manner, and the 2nd resolution adjustment processing is processing of super-resolving the processing target image in a non-AI manner.
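A possible non-AI counterpart of the 2nd resolution adjustment processing is plain interpolation-based upscaling; the sketch below uses bicubic interpolation via OpenCV as an assumed stand-in (an AI super-resolution network would produce the 1st resolution adjustment image):

```python
import cv2
import numpy as np

def upscale_non_ai(image: np.ndarray, scale: int = 2) -> np.ndarray:
    """Non-AI resolution increase by bicubic interpolation."""
    h, w = image.shape[:2]
    # dsize is given as (width, height) in OpenCV.
    return cv2.resize(image, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)
```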
A 22nd aspect of the present invention relates to the image processing apparatus according to any one of the 6th to 21st aspects, wherein the 1st image includes a 1st high dynamic range image obtained by performing, as processing included in the 1st AI processing, expansion processing of expanding a dynamic range of the processing target image in an AI manner, the 2nd image includes a 2nd high dynamic range image obtained by performing processing of expanding the dynamic range of the processing target image in a non-AI manner, and the processor adjusts an element derived from the expansion processing by synthesizing the 1st high dynamic range image and the 2nd high dynamic range image at the ratio.
A 23rd aspect of the present invention relates to the image processing apparatus according to any one of the 6th to 22nd aspects, wherein the 1st image includes a 1st edge emphasized image obtained by performing, as processing included in the 1st AI processing, emphasis processing of emphasizing, in an AI manner, an edge region within the processing target image relative to a non-edge region, which is a region different from the edge region, the 2nd image includes a 2nd edge emphasized image obtained by performing processing of emphasizing the edge region relative to the non-edge region in a non-AI manner, and the processor adjusts an element derived from the emphasis processing by synthesizing the 1st edge emphasized image and the 2nd edge emphasized image at the ratio.
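The non-AI edge emphasis of the 23rd aspect could, for example, be approximated by classic unsharp masking; the following sketch is illustrative only, and the parameter values are assumptions:

```python
import cv2
import numpy as np

def emphasize_edges(image: np.ndarray, amount: float = 1.0, sigma: float = 2.0) -> np.ndarray:
    """Non-AI edge emphasis by unsharp masking."""
    img = image.astype(np.float32)
    # Edge regions are where the image differs from its Gaussian-blurred version;
    # amplifying that difference emphasizes edges while leaving flat regions nearly unchanged.
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma)
    sharpened = img + amount * (img - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```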
A 24th aspect of the present invention relates to the image processing apparatus according to any one of the 6th to 23rd aspects, wherein the 1st image includes a 1st point image adjustment image obtained by performing, as processing included in the 1st AI processing, point image adjustment processing of adjusting a blurring amount of a point image of the processing target image in an AI manner, the 2nd image includes a 2nd point image adjustment image obtained by performing processing of adjusting the blurring amount in a non-AI manner, and the processor adjusts an element derived from the point image adjustment processing by synthesizing the 1st point image adjustment image and the 2nd point image adjustment image at the ratio.
A 25th aspect of the present invention relates to the image processing apparatus according to any one of the 6th to 24th aspects, wherein the processing target image is an image obtained by imaging a 3rd subject, the 1st image includes a 1st blurred image obtained by performing, as processing included in the 1st AI processing, blur processing of imparting, in an AI manner, a blur corresponding to the 3rd subject to the processing target image, the 2nd image includes a 2nd blurred image obtained by performing processing of imparting a blur to the processing target image in a non-AI manner, and the processor adjusts an element derived from the blur processing by synthesizing the 1st blurred image and the 2nd blurred image at the ratio.
A 26th aspect of the present invention relates to the image processing apparatus according to any one of the 6th to 25th aspects, wherein the 1st image includes a 1st circular blur image obtained by performing, as processing included in the 1st AI processing, circular blur processing of imparting a 1st circular blur to the processing target image in an AI manner, the 2nd image includes a 2nd circular blur image obtained by performing processing of adjusting the 1st circular blur of the processing target image in a non-AI manner or processing of imparting a 2nd circular blur to the processing target image in a non-AI manner, and the processor adjusts an element derived from the circular blur processing by synthesizing the 1st circular blur image and the 2nd circular blur image at the ratio.
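As an illustrative non-AI way of imparting a circular blur (the 26th aspect itself does not prescribe an algorithm), convolution with a normalized disk kernel produces a bokeh-like blur; restricting it to selected regions of the image, as a real pipeline likely would, is omitted here for brevity:

```python
import cv2
import numpy as np

def add_circular_blur(image: np.ndarray, radius: int = 9) -> np.ndarray:
    """Non-AI circular ("bokeh"-like) blur using convolution with a disk kernel."""
    d = 2 * radius + 1
    ys, xs = np.mgrid[0:d, 0:d]
    disk = ((ys - radius) ** 2 + (xs - radius) ** 2 <= radius ** 2).astype(np.float32)
    disk /= disk.sum()                 # normalize so overall brightness is preserved
    return cv2.filter2D(image, -1, disk)
```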
A 27th aspect of the present invention relates to the image processing apparatus according to any one of the 6th to 26th aspects, wherein the 1st image includes a 1st gradation adjustment image obtained by performing, as processing included in the 1st AI processing, 1st gradation adjustment processing of adjusting a gradation of the processing target image in an AI manner, the 2nd image includes a 2nd gradation adjustment image obtained by performing 2nd gradation adjustment processing of adjusting the gradation of the processing target image in a non-AI manner, and the processor adjusts an element derived from the 1st gradation adjustment processing by synthesizing the 1st gradation adjustment image and the 2nd gradation adjustment image at the ratio.
A 28th aspect of the present invention relates to the image processing apparatus according to the 27th aspect, wherein the processing target image is an image obtained by imaging a 4th subject, the 1st gradation adjustment processing is processing of adjusting the gradation of the processing target image in an AI manner in accordance with the 4th subject, and the 2nd gradation adjustment processing is processing of adjusting the gradation of the processing target image in a non-AI manner in accordance with the 4th subject.
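A minimal non-AI sketch of gradation adjustment via a tone curve; a simple gamma lookup table is assumed here, and per-subject selection of the curve, as in the 28th aspect, would amount to choosing gamma depending on the recognized subject:

```python
import numpy as np

def adjust_gradation(image: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """Non-AI gradation adjustment by applying a gamma tone curve via a lookup table."""
    # Build a 256-entry tone curve; assumes an 8-bit image.
    lut = (np.linspace(0.0, 1.0, 256) ** gamma * 255.0 + 0.5).astype(np.uint8)
    return lut[image]   # fancy indexing applies the curve to every pixel
```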
A 29th aspect of the present invention relates to the image processing apparatus according to any one of the 6th to 28th aspects, wherein the 1st image includes a style-changed image obtained by performing, as processing included in the 1st AI processing, style change processing of changing a style of the processing target image in an AI manner, and the processor adjusts an element derived from the style change processing by synthesizing the style-changed image and the 2nd image at the ratio.
A 30th aspect of the present invention relates to the image processing apparatus according to any one of the 6th to 29th aspects, wherein the processing target image is an image obtained by imaging skin, the 1st image includes a skin image quality adjustment image obtained by performing, as processing included in the 1st AI processing, skin image quality adjustment processing of adjusting, in an AI manner, image quality related to the skin appearing in the processing target image, and the processor adjusts an element derived from the skin image quality adjustment processing by synthesizing the skin image quality adjustment image and the 2nd image at the ratio.
A 31st aspect of the present invention relates to the image processing apparatus according to any one of the 6th to 30th aspects, wherein the 1st AI processing includes a plurality of purpose-specific processes performed in an AI manner, the 1st image includes a multiprocessed image obtained by performing the plurality of purpose-specific processes on the processing target image, and the processor synthesizes the multiprocessed image and the 2nd image at the ratio.
A 32nd aspect of the present invention relates to the image processing apparatus according to the 31st aspect, wherein the plurality of purpose-specific processes are performed in an order based on the magnitude of the influence that each purpose-specific process has on the processing target image.
A 33rd aspect of the present invention relates to the image processing apparatus according to the 32nd aspect, wherein the plurality of purpose-specific processes are performed stepwise, from the purpose-specific process having a small degree of influence to the purpose-specific process having a large degree of influence.
A 34th aspect of the present invention relates to the image processing apparatus according to any one of the 5th to 32nd aspects, wherein the ratio is determined based on a difference between the processing target image and the 1st image and/or a difference between the 1st image and the 2nd image.
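One possible heuristic for determining the ratio from image differences, in the spirit of the 34th aspect (the normalization constant and the use of the mean absolute difference are assumptions, not the method defined by the publication; the 1st-vs-2nd image difference, or related information as in the 35th aspect, could be used analogously):

```python
import numpy as np

def derive_ratio(target: np.ndarray, image_1: np.ndarray, max_diff: float = 30.0) -> float:
    """Derive a synthesis ratio from the difference caused by the 1st AI processing."""
    diff_ai = np.mean(np.abs(image_1.astype(np.float32) - target.astype(np.float32)))
    # The larger the change the AI processing produced, the smaller the weight
    # given to the 1st image, so the AI effect is toned down in the synthesis.
    return float(1.0 - min(diff_ai / max_diff, 1.0))
```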
A 35th aspect of the present invention relates to the image processing apparatus according to any one of the 6th to 34th aspects, wherein the processor adjusts the ratio in accordance with related information on the processing target image.
A 36th aspect of the present invention relates to an imaging apparatus comprising: the image processing apparatus according to any one of the 1st to 34th aspects; and an image sensor that captures the processing target image.
A 37th aspect of the present invention relates to an image processing method comprising: acquiring a 1st image and a 2nd image, the 1st image being an image obtained by performing 1st AI processing on a processing target image, and the 2nd image being an image obtained without performing the 1st AI processing on the processing target image; and adjusting the excess or deficiency of the 1st AI processing by synthesizing the 1st image and the 2nd image.
A 38th aspect of the present invention relates to an image processing method comprising: acquiring a 1st image and a 2nd image, the 1st image being an image in which a non-noise element of a processing target image has been adjusted by performing 1st AI processing on the processing target image, and the 2nd image being an image obtained without performing the 1st AI processing on the processing target image; and adjusting the non-noise element by synthesizing the 1st image and the 2nd image.
A 39th aspect of the present invention relates to a program for causing a computer to execute processing comprising: acquiring a 1st image and a 2nd image, the 1st image being an image obtained by performing 1st AI processing on a processing target image, and the 2nd image being an image obtained without performing the 1st AI processing on the processing target image; and adjusting the excess or deficiency of the 1st AI processing by synthesizing the 1st image and the 2nd image.
A 40th aspect of the present invention relates to a program for causing a computer to execute processing comprising: acquiring a 1st image and a 2nd image, the 1st image being an image in which a non-noise element of a processing target image has been adjusted by performing 1st AI processing on the processing target image, and the 2nd image being an image obtained without performing the 1st AI processing on the processing target image; and adjusting the non-noise element by synthesizing the 1st image and the 2nd image.
Drawings
Fig. 1 is a schematic configuration diagram showing an example of the overall configuration of an image pickup apparatus.
Fig. 2 is a schematic configuration diagram showing an example of a hardware configuration of an optical system and an electrical system of the imaging apparatus.
Fig. 3 is a block diagram showing an example of functions of the image processing engine.
Fig. 4 is a conceptual diagram showing an example of processing contents of the AI-system processing unit and the non-AI-system processing unit.
Fig. 5 is a conceptual diagram showing an example of the processing contents of the image adjusting unit and the synthesizing unit.
Fig. 6 is a flowchart showing an example of the flow of the image synthesis process.
Fig. 7 is a conceptual diagram showing an example of processing contents of the AI-system processing unit and the non-AI-system processing unit according to modification 1.
Fig. 8 is a conceptual diagram showing an example of the processing contents of the image adjustment unit and the synthesis unit according to modification 1.
Fig. 9 is a flowchart showing an example of the flow of the image synthesis processing according to modification 1.
Fig. 10 is a conceptual diagram showing an example of processing contents that the non-AI-system processing unit colors a person region and a background region in a non-AI system so as to be distinguishable.
Fig. 11 is a conceptual diagram showing an example of processing contents of the AI-system processing unit and the non-AI-system processing unit according to modification 2.
Fig. 12 is a conceptual diagram showing an example of the processing contents of the image adjustment unit and the synthesis unit according to modification 2.
Fig. 13 is a flowchart showing an example of the flow of the image synthesis processing according to modification 2.
Fig. 14 is a conceptual diagram showing an example of the contents of the 1 st definition processing and the 2 nd definition processing.
Fig. 15 is a conceptual diagram showing an example of processing content of the processor for adjusting the contrast according to the subject.
Fig. 16 is a conceptual diagram showing an example of processing contents of the AI-system processing unit and the non-AI-system processing unit according to modification 3.
Fig. 17 is a conceptual diagram showing an example of the processing contents of the image adjustment unit and the synthesis unit according to modification 3.
Fig. 18 is a flowchart showing an example of the flow of the image synthesis processing according to modification 3.
Fig. 19 is a conceptual diagram showing an example of processing contents of the AI-system processing unit and the non-AI-system processing unit according to modification 4.
Fig. 20 is a conceptual diagram showing an example of the processing contents of the image adjustment unit and the synthesis unit according to modification 4.
Fig. 21 is a flowchart showing an example of the flow of the image synthesis processing according to modification 4.
Fig. 22 is a conceptual diagram showing an example of the processing contents of the AI-system processing unit and the non-AI-system processing unit according to modification 5.
Fig. 23 is a conceptual diagram showing an example of the processing contents of the image adjustment unit and the synthesis unit according to modification 5.
Fig. 24 is a flowchart showing an example of the flow of the image synthesis processing according to modification 5.
Fig. 25 is a conceptual diagram showing an example of processing contents of the AI-system processing unit and the non-AI-system processing unit according to modification 6.
Fig. 26 is a conceptual diagram showing an example of the processing contents of the image adjustment unit and the synthesis unit according to modification 6.
Fig. 27 is a flowchart showing an example of the flow of the image synthesis processing according to modification 6.
Fig. 28 is a conceptual diagram showing an example of the processing contents of the AI-system processing unit and the non-AI-system processing unit according to modification 7.
Fig. 29 is a conceptual diagram showing an example of the processing contents of the image adjustment unit and the synthesis unit according to modification 7.
Fig. 30 is a flowchart showing an example of the flow of the image synthesis processing according to modification 7.
Fig. 31 is a conceptual diagram showing an example of processing contents of the AI-system processing unit and the non-AI-system processing unit according to modification 8.
Fig. 32 is a conceptual diagram showing an example of the processing contents of the image adjustment unit and the synthesis unit according to modification 8.
Fig. 33 is a flowchart showing an example of the flow of the image synthesis processing according to modification 8.
Fig. 34 is a conceptual diagram showing example 1 of processing content in which the non-AI-system processing unit generates the 2nd circular blur by filtering the 1st circular blur generated in the AI manner.
Fig. 35 is a conceptual diagram showing example 2 of processing content in which the non-AI-system processing unit generates the 2nd circular blur by filtering the 1st circular blur generated in the AI manner.
Fig. 36 is a conceptual diagram showing an example of the processing contents of the AI-system processing unit and the non-AI-system processing unit according to modification 9.
Fig. 37 is a conceptual diagram showing an example of the processing contents of the image adjustment unit and the synthesis unit according to modification 9.
Fig. 38 is a flowchart showing an example of the flow of the image synthesis processing according to modification 9.
Fig. 39 is a conceptual diagram showing an example of processing contents of the AI-system processing unit and the non-AI-system processing unit according to modification 10.
Fig. 40 is a conceptual diagram showing an example of the processing contents of the image adjustment unit and the synthesis unit according to modification 10.
Fig. 41 is a flowchart showing an example of the flow of the image synthesis processing according to modification 10.
Fig. 42 is a conceptual diagram showing an example of processing contents of the AI-system processing unit and the non-AI-system processing unit according to modification 11.
Fig. 43 is a conceptual diagram showing an example of the processing contents of the image adjustment unit and the synthesis unit according to modification 11.
Fig. 44 is a flowchart showing an example of the flow of the image synthesis processing according to modification 11.
Fig. 45 is a conceptual diagram showing an example of a mode in which the AI-mode processing unit performs a plurality of processes according to the purpose in the AI mode.
Fig. 46 is a conceptual diagram showing an example of processing contents in which the processor derives a scale from the difference between the processing target image and the 1 st image.
Fig. 47 is a conceptual diagram showing an example of processing contents in which the processor derives a scale from the difference between the 1 st image and the 2 nd image.
Fig. 48 is a conceptual diagram showing an example of processing contents in which the processor adjusts the scale according to the related information.
Fig. 49 is a conceptual diagram illustrating an example of the configuration of the imaging system.
Detailed Description
Hereinafter, an example of an embodiment of an image processing apparatus, an image capturing apparatus, an image processing method, and a program according to the technology of the present invention will be described with reference to the drawings.
First, words and phrases used in the following description will be described.
CPU is an abbreviation of "Central Processing Unit". GPU is an abbreviation of "Graphics Processing Unit". TPU is an abbreviation of "Tensor Processing Unit". NVM is an abbreviation of "Non-Volatile Memory". RAM is an abbreviation of "Random Access Memory". IC is an abbreviation of "Integrated Circuit". ASIC is an abbreviation of "Application Specific Integrated Circuit". PLD is an abbreviation of "Programmable Logic Device". FPGA is an abbreviation of "Field-Programmable Gate Array". SoC is an abbreviation of "System-on-a-Chip". SSD is an abbreviation of "Solid State Drive". USB is an abbreviation of "Universal Serial Bus". HDD is an abbreviation of "Hard Disk Drive". EEPROM is an abbreviation of "Electrically Erasable and Programmable Read Only Memory". EL is an abbreviation of "Electro-Luminescence". I/F is an abbreviation of "Interface". UI is an abbreviation of "User Interface". fps is an abbreviation of "frames per second". MF is an abbreviation of "Manual Focus". AF is an abbreviation of "Auto Focus". CMOS is an abbreviation of "Complementary Metal Oxide Semiconductor". CCD is an abbreviation of "Charge Coupled Device". LAN is an abbreviation of "Local Area Network". WAN is an abbreviation of "Wide Area Network". AI is an abbreviation of "Artificial Intelligence". A/D is an abbreviation of "Analog/Digital". FIR is an abbreviation of "Finite Impulse Response". IIR is an abbreviation of "Infinite Impulse Response". VAE is an abbreviation of "Variational Auto-Encoder". GAN is an abbreviation of "Generative Adversarial Network".
In the present embodiment, noise refers to noise generated by imaging by an imaging device (for example, electrical noise that appears in an image (i.e., an electronic image) obtained by imaging). In other words, noise refers to electrical noise that is inevitably generated (e.g., noise that is inevitably generated due to electrical factors). Specific examples of the noise include noise generated with an increase in analog gain, dark current noise, pixel defect, thermal noise, and the like. Hereinafter, elements other than noise (i.e., elements that express images other than noise) that appear in an image obtained by imaging are referred to as "non-noise elements".
As an example, as shown in fig. 1, the image pickup apparatus 10 is an apparatus for picking up an object, and includes an image processing engine 12, an image pickup apparatus main body 16, and an interchangeable lens 18. The imaging device 10 is an example of the "imaging device" according to the technology of the present invention. The interchangeable lens 18 is an example of a "lens" according to the technology of the present invention. The image processing engine 12 is an example of "image processing apparatus" and "computer" according to the technology of the present invention.
The image processing engine 12 is built in the image pickup device main body 16, and controls the entire image pickup device 10. The interchangeable lens 18 is interchangeably attached to the image pickup apparatus main body 16. The interchangeable lens 18 is provided with a focus ring 18A. The focus ring 18A is operated by a user or the like when the user of the image pickup apparatus 10 (hereinafter, simply referred to as "user") or the like manually adjusts the focus on the subject by the image pickup apparatus 10.
In the example shown in fig. 1, a lens-interchangeable digital camera is shown as an example of the imaging device 10. However, this is only an example, and the camera may be a lens-fixed digital camera, or a digital camera incorporated in various electronic devices such as a smart device, a wearable terminal, a cell observation device, an ophthalmic observation device, and a surgical microscope.
The image pickup device main body 16 is provided with an image sensor 20. The image sensor 20 is an example of the "image sensor" according to the technology of the present invention. The image sensor 20 is a CMOS image sensor. The image sensor 20 generates and outputs image data representing an image by capturing an object. When the interchangeable lens 18 is attached to the image pickup device body 16, subject light representing a subject is transmitted through the interchangeable lens 18 and imaged on the image sensor 20, and image data is generated by the image sensor 20.
In the present embodiment, a CMOS image sensor is exemplified as the image sensor 20, but the technique of the present invention is not limited thereto, and for example, even if the image sensor 20 is another type of image sensor such as a CCD image sensor, the technique of the present invention is also true.
The upper surface of the image pickup apparatus main body 16 is provided with a release button 22 and a dial 24. The dial 24 is operated when an operation mode of the imaging system, an operation mode of the playback system, and the like are set, and in the imaging apparatus 10, the imaging mode, the playback mode, and the set mode are selectively set as the operation modes by operating the dial 24. The imaging mode is an operation mode in which the imaging device 10 is caused to perform imaging. The play mode is an operation mode for playing an image (for example, a still image and/or a moving image) obtained by performing recording shooting in the shooting mode. The setting mode is an operation mode set for the image pickup apparatus 10, for example, when various setting values used for control related to shooting are set.
The release button 22 functions as a shooting preparation instruction unit and a shooting instruction unit, and is capable of detecting a pressing operation in two stages, i.e., a shooting preparation instruction state and a shooting instruction state. The shooting preparation instruction state refers to, for example, a state pressed from the standby position to the intermediate position (half-pressed position), and the shooting instruction state refers to a state pressed to the final pressed position (full-pressed position) beyond the intermediate position. Hereinafter, the "state of being pressed from the standby position to the half-pressed position" is referred to as a "half-pressed state", and the "state of being pressed from the standby position to the full-pressed position" is referred to as a "full-pressed state". According to the configuration of the image pickup apparatus 10, the shooting preparation instruction state may be a state in which the finger of the user touches the release button 22, or the shooting instruction state may be a state in which the finger of the user performing the operation is shifted from the state in which the finger touches the release button 22 to the released state.
The imaging device main body 16 is provided with an instruction key 26 and a touch panel display 32 on the back surface thereof.
The touch panel display 32 includes the display 28 and the touch panel 30 (see also fig. 2). An example of the display 28 is an EL display (for example, an organic EL display or an inorganic EL display). The display 28 may be other types of displays, such as a liquid crystal display, instead of an EL display.
The display 28 displays images and/or character information and the like. When the imaging apparatus 10 is in the shooting mode, the display 28 displays a through image obtained by performing imaging for the through image (i.e., continuous imaging). Here, the "through image" refers to a moving image for display based on image data obtained by imaging with the image sensor 20. The imaging performed to obtain the through image (hereinafter, also referred to as "imaging for the through image") is performed at a frame rate of 60 fps, for example. 60 fps is merely an example, and the frame rate may be lower or higher than 60 fps.
In the case where the imaging device 10 is instructed to take a still image via the release button 22, the display 28 is also used to display a still image obtained by taking a still image. The display 28 is also used to display a playback image or the like when the imaging device 10 is in the playback mode. Further, when the image pickup apparatus 10 is in the setting mode, the display 28 is also used to display a menu screen on which various menus can be selected and a setting screen for setting various setting values and the like used for control related to shooting.
The touch panel 30 is a transmissive touch panel, which is superimposed on the surface of the display area of the display 28. The touch panel 30 receives an instruction from a user by detecting contact of an instruction body such as a finger or a stylus. In addition, hereinafter, for convenience of explanation, the "full-press state" also includes a state in which the user presses a soft key for starting photographing via the touch panel 30.
In the present embodiment, an out-cell type touch panel display, in which the touch panel 30 is superimposed on the surface of the display area of the display 28, is given as an example of the touch panel display 32, but this is merely an example. For example, an on-cell or in-cell type touch panel display may be applied as the touch panel display 32.
The instruction key 26 receives various instructions. Here, the "various instructions" refer to, for example, a display instruction of a menu screen, a selection instruction of one or more menus, a determination instruction of a selected content, a deletion instruction of a selected content, various instructions such as enlargement, reduction, and frame advance, and the like. These instructions may also be made through the touch panel 30.
As an example, as shown in fig. 2, the image sensor 20 includes a photoelectric conversion element 72. The photoelectric conversion element 72 has a light receiving surface 72A, and subject light is imaged on the light receiving surface 72A via the interchangeable lens 18. The light receiving surface 72A is an example of the "light receiving surface" according to the technology of the present invention. The photoelectric conversion element 72 is disposed in the imaging device main body 16 so that the center of the light receiving surface 72A coincides with the optical axis OA (see also fig. 1). The photoelectric conversion element 72 has a plurality of photosensitive pixels arranged in a matrix, and the light receiving surface 72A is formed of the plurality of photosensitive pixels. Each of the photosensitive pixels has a microlens (not shown). Each of the photosensitive pixels is a physical pixel having a photodiode (not shown), and photoelectrically converts received light and outputs an electric signal corresponding to the amount of received light.
The plurality of photosensitive pixels each have a red (R), green (G), or blue (B) color filter (not shown), and are arranged in a matrix in a predetermined pattern arrangement (for example, a Bayer array, G-stripe R/G full checkered, X-Trans (registered trademark) array, honeycomb array, or the like). In the following description, for convenience of explanation, a photosensitive pixel having a microlens and an R color filter is referred to as an R pixel, a photosensitive pixel having a microlens and a G color filter is referred to as a G pixel, and a photosensitive pixel having a microlens and a B color filter is referred to as a B pixel.
The interchangeable lens 18 is provided with an imaging lens 40. The imaging lens 40 includes an objective lens 40A, a focusing lens 40B, a zoom lens 40C, and an aperture stop 40D. The objective lens 40A, the focus lens 40B, the zoom lens 40C, and the diaphragm 40D are arranged in this order along the optical axis OA from the object side (i.e., object side) to the image pickup apparatus main body 16 side (i.e., image side).
The interchangeable lens 18 includes a control device 36, a 1 st actuator 37, a 2 nd actuator 38, and a 3 rd actuator 39. The control device 36 controls the entire interchangeable lens 18 in accordance with an instruction from the image pickup device body 16. The control device 36 is, for example, a device having a computer including a CPU, NVM, RAM, and the like. The NVM of the control device 36 is, for example, EEPROM. The RAM of the control device 36 temporarily stores various information and is used as a work memory. In the control device 36, the CPU reads necessary programs from the NVM, and controls the entire imaging lens 40 by executing the read various programs on the RAM.
In this case, a device having a computer is exemplified as an example of the control device 36, but this is only an example, and a device including an ASIC, an FPGA, and/or a PLD may be applied. Further, as the control device 36, for example, a device implemented by a combination of a hardware configuration and a software configuration may be used.
The 1 st actuator 37 includes a focus slide mechanism (not shown) and a focus motor (not shown). The focusing slide mechanism is provided with a focusing lens 40B slidably along the optical axis OA. The focusing motor is connected to the focusing slide mechanism, and the focusing slide mechanism is operated by receiving the power of the focusing motor, thereby moving the focusing lens 40B along the optical axis OA.
The 2 nd actuator 38 includes a zoom slide mechanism (not shown) and a zoom motor (not shown). The zoom slide mechanism has a zoom lens 40C slidably mounted along the optical axis OA. The zoom motor is connected to the zoom slide mechanism, and the zoom slide mechanism is operated by receiving the power of the zoom motor, thereby moving the zoom lens 40C along the optical axis OA.
The 3rd actuator 39 includes a power transmission mechanism (not shown) and a motor for the diaphragm (not shown). The diaphragm 40D has an opening 40D1 and is a diaphragm whose opening 40D1 is variable in size. The opening 40D1 is formed by, for example, a plurality of diaphragm blades 40D2. The plurality of diaphragm blades 40D2 are coupled to the power transmission mechanism. The motor for the diaphragm is connected to the power transmission mechanism, and the power transmission mechanism transmits the power of the motor for the diaphragm to the plurality of diaphragm blades 40D2. The plurality of diaphragm blades 40D2 operate by receiving the power transmitted from the power transmission mechanism, thereby changing the size of the opening 40D1. The diaphragm 40D adjusts the exposure by changing the size of the opening 40D1.
The focus motor, the zoom motor, and the diaphragm motor are connected to a control device 36, and the control device 36 controls the driving of the focus motor, the zoom motor, and the diaphragm motor, respectively. In the present embodiment, a stepping motor is used as an example of the focusing motor, the zooming motor, and the diaphragm motor. Accordingly, the focus motor, the zoom motor, and the diaphragm motor operate in synchronization with the pulse signal in response to a command from the control device 36. Here, although the example in which the focus motor, the zoom motor, and the diaphragm motor are provided in the interchangeable lens 18 is shown, this is only an example, and at least one of the focus motor, the zoom motor, and the diaphragm motor may be provided in the imaging device main body 16. The structure and/or operation method of the interchangeable lens 18 may be changed as necessary.
In the image pickup apparatus 10, in the case of being in the shooting mode, the MF mode and the AF mode can be selectively set according to an instruction made to the image pickup apparatus main body 16. The MF mode is an operation mode of manual focusing. In the MF mode, for example, by a user operating the focus ring 18A or the like, the focus lens 40B is moved along the optical axis OA by a movement amount corresponding to the operation amount of the focus ring 18A or the like, thereby adjusting the focus.
In the AF mode, the imaging device main body 16 calculates a focus position corresponding to the object distance, and moves the focus lens 40B toward the calculated focus position, thereby adjusting the focus. Here, the focus position refers to a position of the focus lens 40B on the optical axis OA in the in-focus state.
The image pickup apparatus main body 16 includes an image processing engine 12, an image sensor 20, a system controller 44, an image memory 46, a UI-based device 48, an external I/F50, a communication I/F52, a photoelectric conversion element driver 54, and an input/output interface 70. The image sensor 20 includes a photoelectric conversion element 72 and an a/D converter 74.
The image processing engine 12, the image memory 46, the UI-based device 48, the external I/F50, the photoelectric conversion element driver 54, and the a/D converter 74 are connected to the input/output interface 70. The control device 36 of the interchangeable lens 18 is also connected to the input/output interface 70.
The system controller 44 includes a CPU (not shown), an NVM (not shown), and a RAM (not shown). In the system controller 44, the NVM is a non-transitory storage medium, and various parameters and various programs are stored. The NVM of the system controller 44 is, for example, EEPROM. However, this is merely an example, and an HDD, an SSD, or the like may be used as the NVM of the system controller 44 instead of or in addition to the EEPROM. The RAM of the system controller 44 temporarily stores various information and is used as a work memory. In the system controller 44, the CPU reads necessary programs from the NVM, and controls the entire image pickup apparatus 10 by executing the read various programs on the RAM. That is, in the example shown in fig. 2, the image processing engine 12, the image memory 46, the UI-based device 48, the external I/F50, the communication I/F52, the photoelectric conversion element driver 54, and the control device 36 are controlled by the system controller 44.
The image processing engine 12 acts under the control of the system controller 44. The image processing engine 12 is provided with a processor 62, an NVM64 and a RAM66. The processor 62 is an example of a "processor" according to the technology of the present invention.
The processor 62, the NVM64, and the RAM66 are connected to each other via a bus 68, and the bus 68 is connected to an input/output interface 70. In the example shown in fig. 2, one bus is shown as the bus 68 for convenience of illustration, but a plurality of buses may be used. The bus 68 may be a serial bus or a parallel bus including a data bus, an address bus, a control bus, and the like.
The processor 62 has a CPU and a GPU, and the GPU operates under the control of the CPU and is mainly responsible for executing image processing. The processor 62 may be one or more CPUs in which the GPU functions are integrated, or one or more CPUs in which the GPU functions are not integrated. The processor 62 may also include a multi-core CPU or a TPU.
The NVM64 is a non-transitory storage medium storing various parameters and various programs different from those stored in the NVM of the system controller 44. NVM64 is, for example, EEPROM. However, this is merely an example, and an HDD, an SSD, or the like may be used as the NVM64 instead of or in addition to the EEPROM. The RAM66 temporarily stores various information and is used as a work memory.
The processor 62 reads necessary programs from the NVM64 and executes the read programs on the RAM 66. The processor 62 performs various image processing according to programs executed on the RAM 66.
The photoelectric conversion element driver 54 is connected to the photoelectric conversion element 72. The photoelectric conversion element driver 54 supplies an imaging timing signal, which specifies the timing of imaging by the photoelectric conversion element 72, to the photoelectric conversion element 72 in accordance with an instruction from the processor 62. The photoelectric conversion element 72 performs reset, exposure, and output of an electric signal in accordance with the imaging timing signal supplied from the photoelectric conversion element driver 54. Examples of the imaging timing signal include a vertical synchronization signal and a horizontal synchronization signal.
When the interchangeable lens 18 is attached to the imaging apparatus main body 16, the subject light incident on the imaging lens 40 is imaged on the light receiving surface 72A by the imaging lens 40. The photoelectric conversion element 72 photoelectrically converts the subject light received by the light receiving surface 72A under the control of the photoelectric conversion element driver 54, and outputs an electric signal corresponding to the light quantity of the subject light to the A/D converter 74 as analog image data representing the subject light. Specifically, the A/D converter 74 reads the analog image data from the photoelectric conversion element 72 for each horizontal line in units of one frame in an exposure-sequential readout manner.
The A/D converter 74 generates a processing target image 75A by digitizing the analog image data. The processing target image 75A is a captured image obtained by capturing an image by the imaging device 10, and is an example of "processing target image" and "captured image" according to the technique of the present invention. The processing target image 75A is an image in which R pixels, G pixels, and B pixels are arranged in a mosaic shape.
In the present embodiment, as an example, the processor 62 of the image processing engine 12 acquires the processing target image 75A from the A/D converter 74, and performs various image processing on the acquired processing target image 75A.
The processed image 75B is stored in the image memory 46. The processed image 75B is an image obtained by performing various image processing on the processing target image 75A by the processor 62.
The UI device 48 includes the display 28, and the processor 62 causes the display 28 to display various information. The UI device 48 further includes a receiving device 76. The receiving device 76 includes the touch panel 30 and the hard key portion 78. The hard key portion 78 includes a plurality of hard keys including the instruction key 26 (refer to fig. 1). The processor 62 operates in accordance with various instructions received through the touch panel 30. In addition, although the hard key portion 78 is included in the UI device 48 here, the technique of the present invention is not limited thereto, and the hard key portion 78 may be connected to the external I/F50, for example.
The external I/F50 controls exchange of various information with a device (hereinafter also referred to as an "external device") existing outside the image pickup device 10. As an example of the external I/F50, a USB interface is given. External devices (not shown) such as a smart device, a personal computer, a server, a USB memory, a memory card, and/or a printer are directly or indirectly connected to the USB interface.
The communication I/F52 is connected to a network (not shown). The communication I/F52 controls exchange of information between a communication device (not shown) such as a server on the network and the system controller 44. For example, the communication I/F52 transmits information corresponding to a request from the system controller 44 to the communication device via the network. The communication I/F52 receives information transmitted from the communication device and outputs the received information to the system controller 44 via the input/output interface 70.
As an example, as shown in fig. 3, the NVM64 of the image pickup apparatus 10 stores an image composition processing program 80. The image synthesis processing program 80 is an example of a "program" according to the technique of the present invention.
The NVM64 of the image pickup apparatus 10 stores a generation model 82A. As an example of the generation model 82A, a learned generation network is given. Examples of the generation network include a GAN and a VAE. The processor 62 performs AI-mode processing on the processing target image 75A (see fig. 2). As an example of the AI-mode processing, a processing using the generation model 82A is given. Hereinafter, for convenience of explanation, the process using the generation model 82A will be described as a process performed actively by the generation model 82A itself. That is, for convenience of explanation, the generation model 82A will be described as having a function of processing input information and outputting a processing result.
The NVM64 of the image pickup apparatus 10 stores a digital filter 84A. An FIR filter is an example of the digital filter 84A. The FIR filter is merely an example, and another digital filter such as an IIR filter may be used. Hereinafter, for convenience of explanation, the process using the digital filter 84A will be described as a process performed actively by the digital filter 84A itself. That is, for convenience of explanation, the digital filter 84A will be described as having a function of processing input information and outputting a processing result.
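As a reference for this convention, the following is a minimal sketch, assuming Python with NumPy, in which both the generation model 82A and the digital filter 84A are handled as callables that receive an input image and return a processed image; the names ImageProcessor and apply_processor are illustrative assumptions and do not appear in the embodiment.

```python
from typing import Callable
import numpy as np

# A processor is anything that maps an input image to a processed output image,
# which is how the generation model 82A and the digital filter 84A are treated here.
ImageProcessor = Callable[[np.ndarray], np.ndarray]

def apply_processor(processor: ImageProcessor, image: np.ndarray) -> np.ndarray:
    """Let the processor itself actively process the input and return the result."""
    return processor(image)
```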
The processor 62 reads the image composition processing program 80 from the NVM64 and executes the read image composition processing program 80 on the RAM 66. The processor 62 performs image synthesis processing according to an image synthesis processing program 80 executed on the RAM66 (refer to fig. 6). The image combining process is realized by the processor 62 operating as the AI-mode processing unit 62A1, the non-AI-mode processing unit 62B1, the image adjusting unit 62C1, and the combining unit 62D1 according to the image combining process program 80. The generation model 82A is used by the AI-mode processing unit 62A1, and the digital filter 84A is used by the non-AI-mode processing unit 62B1.
As an example, as shown in fig. 4, the processing target image 75A1 is input to the AI-mode processing unit 62A1 and the non-AI-mode processing unit 62B1. The processing target image 75A1 is an example of the processing target image 75A shown in fig. 2. In the example shown in fig. 4, as an image area (i.e., an image area in which aberration is reflected) of the image to be processed 75A1 affected by the aberration (hereinafter, simply referred to as "aberration") of the imaging lens 40 (refer to fig. 2), an image area 75A1a is shown.
The processing target image 75A1 is an image having a non-noise element. As an example of the non-noise element, the image area 75A1a is given. The image area 75A1a is an example of "a non-noise element of a processing target image", "a phenomenon occurring in the processing target image due to characteristics of an imaging device", "blurring", and "an area in which aberrations of a lens are reflected in a captured image" according to the technique of the present invention.
In the example shown in fig. 4, an image area reflecting curvature of the image plane is shown as an example of the image area 75A1a. In the example shown in fig. 4, a state is shown in which the image area 75A1a gradually darkens from the center of the processing target image 75A1 toward the outside in the radial direction due to the curvature of the image plane (i.e., a state in which rear blurring is reflected).
Here, although the image surface curvature is exemplified as the aberration reflected in the processing target image 75A1, this is only an example, and the aberration reflected in the processing target image 75A1 may be other types of aberration such as spherical aberration, coma aberration, astigmatism, distortion aberration, on-axis chromatic aberration, and chromatic aberration of magnification. The aberration is an example of "characteristics of an imaging device" and "optical characteristics of an imaging device" according to the technique of the present invention.
The AI-scheme processing unit 62A1 performs AI-scheme processing on the processing target image 75A1. As an example of the AI-mode processing for the processing target image 75A1, a processing using the generation model 82A1 is given. The generation model 82A1 is an example of the generation model 82A shown in fig. 3. The generation model 82A1 is a generation network in which learning to reduce the influence of aberration (here, image surface curvature is an example) has been performed. The AI-scheme processing unit 62A1 generates a 1 st aberration-corrected image 86A1 by performing processing using the generation model 82A1 on the processing target image 75A1. In other words, the AI-mode processing unit 62A1 adjusts the non-noise element (here, the image area 75A1a is an example) in the processing target image 75A1 to generate the 1 st aberration correction image 86A1. In other words, the AI-mode processing section 62A1 generates the 1 st aberration-corrected image 86A1 by correcting, in the AI mode, the image area 75A1a (i.e., the area in which the aberration is reflected) in the processing target image 75A1. The process of using the generation model 82A1 is an example of "1 st AI process", "1 st correction process", and "1 st aberration region correction process" according to the technique of the present invention. Here, "generating the 1 st aberration correction image 86A1" is an example of "acquiring the 1 st image" according to the technique of the present invention.
The processing target image 75A1 is input to the generation model 82A1. The generation model 82A1 generates and outputs a 1 st aberration correction image 86A1 from the input processing target image 75A1. The 1 st aberration correction image 86A1 is an image obtained by adjusting a non-noise element by the generation model 82A1 (i.e., an image obtained by adjusting a non-noise element through the process using the generation model 82A1 with respect to the processing target image 75A1). In other words, the 1 st aberration correction image 86A1 is an image in which the non-noise element in the processing target image 75A1 is corrected by the generation model 82A1 (i.e., an image in which the non-noise element is corrected by the processing using the generation model 82A1 for the processing target image 75A1). In other words, the 1 st aberration correction image 86A1 is an image in which the image area 75A1a is corrected by the generation model 82A1 (i.e., an image in which the image area 75A1a is corrected by the process using the generation model 82A1 with respect to the processing target image 75A1 so that the influence of the aberration is reduced). The 1 st aberration-corrected image 86A1 is an example of the "1 st image", "1 st corrected image", and "1 st aberration-corrected image" according to the technique of the present invention.
The non-AI-scheme processing unit 62B1 performs a non-AI-scheme process on the processing target image 75 A1. The processing in the non-AI mode refers to processing that does not use a neural network. Here, as the process not using the neural network, for example, a process not using the generation model 82A1 is given.
As an example of the non-AI-mode processing for the processing target image 75A1, a processing using the digital filter 84A1 is given. The digital filter 84A1 is a digital filter configured to reduce the influence of aberration (here, image plane curvature is an example). The non-AI-scheme processing section 62B1 generates the 2 nd aberration corrected image 88A1 by performing processing (i.e., filtering) using the digital filter 84A1 on the processing target image 75 A1. In other words, the non-AI-scheme processing unit 62B1 generates the 2 nd aberration correction image 88A1 by adjusting the non-noise element (here, the image area 75A1a is an example) in the processing target image 75A1 in a non-AI scheme. In other words, the non-AI-mode processing section 62B1 generates the 2 nd aberration corrected image 88A1 by correcting the image area 75A1a (i.e., the area in which the aberration is reflected) in the processing target image 75A1 in a non-AI mode. The process using the digital filter 84A1 is an example of "a process of a non-AI method that does not use a neural network", "a 2 nd correction process", and "a process of performing correction in a non-AI method" according to the technique of the present invention. Here, "generating the 2 nd aberration correction image 88A1" is an example of "acquiring the 2 nd image" according to the technique of the present invention.
The processing target image 75A1 is input to the digital filter 84A1. The digital filter 84A1 generates a 2 nd aberration correction image 88A1 from the input processing target image 75A1. The 2 nd aberration correction image 88A1 is an image obtained by adjusting a non-noise element by the digital filter 84A1 (i.e., an image obtained by adjusting a non-noise element through the process using the digital filter 84A1 with respect to the processing target image 75A1). In other words, the 2 nd aberration correction image 88A1 is an image in which the non-noise element in the processing target image 75A1 is corrected by the digital filter 84A1 (i.e., an image in which the non-noise element is corrected by the processing using the digital filter 84A1 for the processing target image 75A1). In other words, the 2 nd aberration-corrected image 88A1 is an image in which the image area 75A1a is corrected by the digital filter 84A1 (i.e., an image in which the image area 75A1a is corrected by the processing using the digital filter 84A1 for the processing target image 75A1 so that the influence of the aberration is reduced). The 2 nd aberration-corrected image 88A1 is an example of "the 2 nd image", "the 2 nd corrected image", and "the 2 nd aberration-corrected image" according to the technique of the present invention.
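As a reference for what such non-AI filtering may look like, the following is a minimal sketch of applying a small 2D FIR kernel to a single-channel image with NumPy. The 3x3 sharpening kernel, the function name, and all other names are illustrative assumptions; the actual coefficients of the digital filter 84A1 are not specified here.

```python
import numpy as np

def fir_filter(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Apply a small 2D FIR kernel to a single-channel image."""
    kh, kw = kernel.shape
    pad_h, pad_w = kh // 2, kw // 2
    padded = np.pad(image.astype(np.float32), ((pad_h, pad_h), (pad_w, pad_w)), mode="edge")
    out = np.zeros(image.shape, dtype=np.float32)
    for dy in range(kh):                                   # accumulate the weighted, shifted copies
        for dx in range(kw):
            out += kernel[dy, dx] * padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return np.clip(out, 0.0, 255.0)

# An assumed mild sharpening kernel, used here only to illustrate counteracting blur.
sharpen = np.array([[ 0.0, -1.0,  0.0],
                    [-1.0,  5.0, -1.0],
                    [ 0.0, -1.0,  0.0]], dtype=np.float32)
```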
Some users do not want the influence of aberration to be eliminated completely, but rather want the influence of aberration to be appropriately retained in the image. In the example shown in fig. 4, the influence of aberration is more reduced in the 1 st aberration corrected image 86A1 than in the 2 nd aberration corrected image 88A1. In other words, the influence of aberration is more retained in the 2 nd aberration corrected image 88A1 than in the 1 st aberration corrected image 86A1. However, the user sometimes feels that the influence of the aberration in the 1 st aberration correction image 86A1 is too small and the influence of the aberration in the 2 nd aberration correction image 88A1 is too large. Therefore, if only one of the 1 st aberration correction image 86A1 and the 2 nd aberration correction image 88A1 is finally output, an image that does not match the user's preference is provided to the user. If the learning amount of the generation model 82A1 is increased or the number of intermediate layers of the generation model 82A1 is increased to attempt to increase the performance of the generation model 82A1, the possibility that an image close to the user's preference can be obtained increases. However, the cost required for creating the generation model 82A1 increases, and as a result, the price of the imaging device 10 may increase.
In view of this, in the image pickup apparatus 10, as an example, as shown in fig. 5, the 1 st aberration correction image 86A1 and the 2 nd aberration correction image 88A1 are synthesized by performing the processing of the image adjustment unit 62C1 and the processing of the synthesis unit 62D1 on the 1 st aberration correction image 86A1 and the 2 nd aberration correction image 88A1.
As an example, as shown in fig. 5, the NVM64 stores a ratio 90A. The ratio 90A is a ratio at which the 1 st aberration correction image 86A1 and the 2 nd aberration correction image 88A1 are synthesized, and is set to adjust the excessive or insufficient processing of the AI scheme (i.e., processing using the generation model 82 A1) by the AI scheme processing unit 62 A1.
The ratio 90A is roughly divided into a 1 st ratio 90A1 and a 2 nd ratio 90A2. The 1 st ratio 90A1 is a value of 0 to 1, and the 2 nd ratio 90A2 is a value obtained by subtracting the 1 st ratio 90A1 from "1". That is, the 1 st ratio 90A1 and the 2 nd ratio 90A2 are set so that the sum of the 1 st ratio 90A1 and the 2 nd ratio 90A2 becomes "1". The 1 st ratio 90A1 and the 2 nd ratio 90A2 are variable values that can be changed according to an instruction from the user. The instruction from the user is received by the receiving device 76 (see fig. 2).
The image adjustment unit 62C1 adjusts the 1 st aberration correction image 86A1 generated by the AI-scheme processing unit 62A1 using the 1 st scale 90 A1. For example, the image adjustment unit 62C1 adjusts the pixel value of each pixel of the 1 st aberration correction image 86A1 by multiplying the 1 st scale 90A1 by the pixel value of each pixel of the 1 st aberration correction image 86A1.
The image adjustment unit 62C1 adjusts the 2 nd aberration correction image 88A1 generated by the non-AI-scheme processing unit 62B1 using the 2 nd scale 90 A2. For example, the image adjustment unit 62C1 adjusts the pixel value of each pixel of the 2 nd aberration correction image 88A1 by multiplying the 2 nd ratio 90A2 by the pixel value of each pixel of the 2 nd aberration correction image 88A1.
The combining section 62D1 generates a combined image 92A by combining the 1 st aberration correction image 86A1 adjusted by the 1 st scale 90A1 through the image adjusting section 62C1 and the 2 nd aberration correction image 88A1 adjusted by the 2 nd scale 90A2 through the image adjusting section 62C 1. That is, the combining unit 62D1 combines the 1 st aberration correction image 86A1 adjusted in the 1 st scale 90A1 and the 2 nd aberration correction image 88A1 adjusted in the 2 nd scale 90A2 to adjust the excessive or insufficient processing of the AI scheme by the AI scheme processing unit 62 A1. In other words, the combining unit 62D1 combines the 1 st aberration correction image 86A1 adjusted in the 1 st scale 90A1 and the 2 nd aberration correction image 88A1 adjusted in the 2 nd scale 90A2 to adjust the non-noise element (here, the image area 75A1a is an example). In other words, the synthesizing unit 62D1 synthesizes the 1 st aberration correction image 86A1 adjusted in the 1 st scale 90A1 and the 2 nd aberration correction image 88A1 adjusted in the 2 nd scale 90A2 to adjust elements derived from the processing using the generation model 82A1 (for example, pixel values of pixels in which the influence of the aberration is reduced by the generation model 82 A1).
The combination performed by the combining unit 62D1 is the addition of the pixel values at the corresponding pixel positions between the 1 st aberration corrected image 86A1 and the 2 nd aberration corrected image 88A1. The addition here refers to, for example, simple addition. In the example shown in fig. 5, as an example of the synthesized image 92A, an image obtained by synthesizing the 1 st aberration correction image 86A1 and the 2 nd aberration correction image 88A1 when the value of the 1 st scale 90A1 and the value of the 2 nd scale 90A2 are both "0.5" is shown. In this case, the influence of the 1 st aberration correction image 86A1 (i.e., the influence of the process using the generation model 82A1) and the influence of the 2 nd aberration correction image 88A1 (i.e., the influence of the process using the digital filter 84A1) are each reflected in the composite image 92A by half.
If the 1 st scale 90A1 is made larger than the 2 nd scale 90A2, the influence of the 1 st aberration correction image 86A1 is reflected in the composite image 92A more than the influence of the 2 nd aberration correction image 88 A1. Conversely, if the 2 nd ratio 90A2 is made larger than the 1 st ratio 90A1, the influence of the 2 nd aberration corrected image 88A1 is reflected in the composite image 92A more than the influence of the 1 st aberration corrected image 86 A1.
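The adjustment by the image adjustment unit 62C1 and the synthesis by the synthesis unit 62D1 described above amount to a per-pixel weighted blend. The following is a minimal sketch of this operation, assuming NumPy arrays; the function and variable names are illustrative assumptions and are not taken from the embodiment.

```python
import numpy as np

def blend(image_1: np.ndarray, image_2: np.ndarray, ratio_1: float) -> np.ndarray:
    """Adjust each image by its ratio and add the results pixel by pixel."""
    ratio_2 = 1.0 - ratio_1                            # the 2nd ratio is "1" minus the 1st ratio
    adjusted_1 = ratio_1 * image_1.astype(np.float32)  # adjustment using the 1st ratio
    adjusted_2 = ratio_2 * image_2.astype(np.float32)  # adjustment using the 2nd ratio
    return adjusted_1 + adjusted_2                     # simple addition at corresponding pixel positions

# With ratio_1 = 0.5 both images contribute equally, as in the example of fig. 5.
```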
The combining unit 62D1 performs various image processes (for example, known image processes such as offset correction, white balance correction, demosaicing process, color correction, gamma correction, color space conversion, brightness process, color difference process, and resizing process) on the combined image 92A. The combining unit 62D1 outputs an image obtained by performing various image processing on the combined image 92A as a processed image 75B (see fig. 2) to a predetermined output destination (for example, the image memory 46 shown in fig. 2).
Next, the operation of the imaging device 10 will be described with reference to fig. 6. Fig. 6 shows an example of a flow of the image synthesis processing executed by the processor 62. The flow of the image synthesis processing shown in fig. 6 is an example of the "image processing method" according to the technique of the present invention.
In the image synthesizing process shown in fig. 6, first, in step ST10, the AI-scheme processing section 62A1 determines whether or not the image sensor 20 (refer to fig. 2) has generated the processing target image 75A1. In step ST10, when the image sensor 20 has not generated the processing target image 75A1, the determination is negative, and the image synthesis process proceeds to step ST32. In step ST10, when the image sensor 20 has generated the processing target image 75A1, the determination is affirmative, and the image synthesis process proceeds to step ST12.
In step ST12, the AI-mode processing unit 62A1 and the non-AI-mode processing unit 62B1 acquire the processing target image 75A1 from the image sensor 20. After the process of step ST12 is performed, the image synthesis process proceeds to step ST14.
In step ST14, the AI-scheme processing unit 62A1 inputs the processing-target image 75A1 acquired in step ST12 into the generation model 82A1. After the process of step ST14 is performed, the image synthesis process proceeds to step ST16.
In step ST16, the AI-scheme processing unit 62A1 acquires the 1 st aberration-corrected image 86A1, which is output from the generation model 82A1 as a result of inputting the processing target image 75A1 into the generation model 82A1 in step ST14. After the process of step ST16 is performed, the image synthesis process proceeds to step ST18.
In step ST18, the non-AI-scheme processing unit 62B1 corrects the influence of the aberration (i.e., the image region 75A1 a) by performing the processing using the digital filter 84A1 on the processing target image 75A1 acquired in step ST 12. After the process of step ST18 is performed, the image synthesis process proceeds to step ST20.
In step ST20, the non-AI-mode processing unit 62B1 acquires the 2 nd aberration correction image 88A1, and the 2 nd aberration correction image 88A1 is obtained by performing processing using the digital filter 84A1 on the processing target image 75A1 in step ST18. After the process of step ST20 is performed, the image synthesis process proceeds to step ST22.
In step ST22, the image adjustment unit 62C1 acquires the 1 st scale 90A1 and the 2 nd scale 90A2 from the NVM64. After the process of step ST22 is performed, the image synthesis process proceeds to step ST24.
In step ST24, the image adjustment unit 62C1 adjusts the 1 st aberration correction image 86A1 using the 1 st scale 90A1 acquired in step ST22. After the process of step ST24 is performed, the image synthesis process proceeds to step ST26.
In step ST26, the image adjustment unit 62C1 adjusts the 2 nd aberration correction image 88A1 using the 2 nd scale 90A2 acquired in step ST 22. After the process of step ST26 is performed, the image synthesis process proceeds to step ST28.
In step ST28, the combining unit 62D1 combines the 1 st aberration-corrected image 86A1 adjusted in step ST24 and the 2 nd aberration-corrected image 88A1 adjusted in step ST26 to adjust the excessive or insufficient processing of the AI scheme by the AI-scheme processing unit 62A1. The synthesized image 92A is generated by synthesizing the 1 st aberration-corrected image 86A1 adjusted in step ST24 and the 2 nd aberration-corrected image 88A1 adjusted in step ST26. After the process of step ST28 is performed, the image synthesis process proceeds to step ST30.
In step ST30, the combining unit 62D1 performs various image processing on the combined image 92A. Then, the combining unit 62D1 outputs an image obtained by performing various image processing on the combined image 92A as a processed image 75B to a predetermined output destination. After the process of step ST30 is performed, the image synthesis process proceeds to step ST32.
In step ST32, the combining unit 62D1 determines whether or not a condition for ending the image combining process (hereinafter, referred to as the "end condition") is satisfied. An example of the end condition is a condition that the receiving device 76 has received an instruction to end the image synthesis process. In step ST32, when the end condition is not satisfied, the determination is negative, and the image synthesis process proceeds to step ST10. In step ST32, when the end condition is satisfied, the determination is affirmative, and the image synthesis process is ended.
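For reference, the flow of steps ST10 to ST32 can be summarized as the following loop. This is a minimal sketch, assuming that the callables passed in wrap the image sensor 20, the generation model 82A1, the digital filter 84A1, the ratios stored in the NVM64, and the output-side processing; none of these names appear in the embodiment itself.

```python
def image_synthesis(read_frame, ai_process, non_ai_process, load_ratios, postprocess, write, is_end):
    """One possible loop corresponding to steps ST10 to ST32 of fig. 6."""
    while not is_end():                                    # ST32: check the end condition
        frame = read_frame()                               # ST10/ST12: processing target image
        if frame is None:                                  # ST10 negative: no new image yet
            continue
        image_1 = ai_process(frame)                        # ST14/ST16: 1st aberration-corrected image
        image_2 = non_ai_process(frame)                    # ST18/ST20: 2nd aberration-corrected image
        ratio_1, ratio_2 = load_ratios()                   # ST22: read the ratios from the NVM
        composite = ratio_1 * image_1 + ratio_2 * image_2  # ST24-ST28: adjust and synthesize
        write(postprocess(composite))                      # ST30: other image processing and output
```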
As described above, in the image capturing apparatus 10, the AI-mode processing section 62A1 and the non-AI-mode processing section 62B1 acquire the processing target image 75A1 as an image having the image area 75A1a on which the influence of the aberration is reflected. The AI-scheme processing unit 62A1 performs AI-scheme processing (i.e., processing using the generation model 82 A1) on the processing target image 75A1. Thereby, the 1 st aberration correction image 86A1 is generated. The non-AI-scheme processing unit 62B1 performs a non-AI-scheme process (i.e., a process using the digital filter 84 A1) on the processing target image 75A1. Thereby, the 2 nd aberration corrected image 88A1 is generated.
If the 1 st aberration correction image 86A1 is used as it is as an image that is finally provided to the user, the influence of the process using the generation model 82A1 is significant, and thus the user's preference may not be satisfied. Therefore, in the imaging apparatus 10, the 1 st aberration correction image 86A1 and the 2 nd aberration correction image 88A1 are adjusted using the ratio 90A. That is, the 1 st aberration correction image 86A1 is adjusted using the 1 st scale 90A1, and the 2 nd aberration correction image 88A1 is adjusted using the 2 nd scale 90A2. Then, the 1 st aberration correction image 86A1 adjusted using the 1 st scale 90A1 and the 2 nd aberration correction image 88A1 adjusted using the 2 nd scale 90A2 are synthesized. Thus, an image (i.e., the synthesized image 92A) in which the influence of the process using the generation model 82A1 (i.e., the influence of the adjustment of the non-noise element by the process using the generation model 82A1) is less noticeable than in the 1 st aberration corrected image 86A1 can be obtained.
In the present embodiment, the 2 nd aberration correction image 88A1 is an image obtained by performing processing using the digital filter 84A1 on the processing target image 75A1, and the 1 st aberration correction image 86A1 and the 2 nd aberration correction image 88A1 are combined to generate a combined image 92A. This can cause the composite image 92A to include the influence of the processing using the digital filter 84 A1.
In the present embodiment, the 2 nd aberration correction image 88A1 is an image in which the non-noise element of the processing target image 75A1 is adjusted by the non-AI-method processing, and the 1 st aberration correction image 86A1 and the 2 nd aberration correction image 88A1 are combined to generate the combined image 92A. This makes it possible to include the result of adjusting the non-noise element of the processing target image 75A1 by the processing of the non-AI method in the composite image 92A.
In the present embodiment, the ratio 90A is set to adjust the excess or deficiency of the process of using the generation model 82 A1. The 1 st aberration correction image 86A1 and the 2 nd aberration correction image 88A1 are synthesized in proportion 90A. This can suppress the influence of the process of using the generation model 82A1 from excessively appearing in the image and causing the image (i.e., the composite image 92A) to be out of compliance with the user's preference. Further, since the scale 90A can be changed in accordance with an instruction from the user, the degree of influence of the process of retaining the use of the generation model 82A1 in the composite image 92A and the degree of influence of retaining the aberration in the composite image 92A can be made to match the preference of the user.
In the present embodiment, the 1 st aberration correction image 86A1 is an image obtained by correcting, in the AI manner, a phenomenon (here, the influence of aberration is an example) occurring in the processing target image 75A1 due to the characteristics of the image pickup device 10 (here, the optical characteristics of the imaging lens 40 are an example). The 2 nd aberration correction image 88A1 is an image obtained by correcting, in a non-AI manner, a phenomenon occurring in the processing target image 75A1 due to the characteristics of the image pickup device 10. Then, the 1 st aberration correction image 86A1 and the 2 nd aberration correction image 88A1 are combined to generate the combined image 92A. Therefore, it is possible to suppress an excess or deficiency, for the composite image 92A, of the amount of correction performed in the AI manner on a phenomenon (here, the influence of aberration is an example) that occurs in the processing target image 75A1 due to the characteristics of the image pickup device 10. Further, since the influence of the aberration is not completely eliminated, the unnaturalness of the appearance of the composite image 92A (that is, the unnaturalness caused by the influence of the aberration being reduced by the generation model 82A1) can be alleviated. Further, it is possible to prevent the influence of the processing using the generation model 82A1 from being excessively reflected in the composite image 92A, and to appropriately retain the influence of the aberration.
In the above embodiment, the embodiment in which the 1 st aberration-corrected image 86A1 and the 2 nd aberration-corrected image 88A1 are synthesized was described as an example, but the technique of the present invention is not limited thereto. For example, the element of the processing by the AI scheme may be adjusted by combining the processing target image 75A1 (i.e., an image in which the non-noise element is not adjusted) with the 1 st aberration correction image 86A1 instead of the 2 nd aberration correction image 88 A1. That is, the image area in which the influence of the aberration is reduced by the AI method may be adjusted by combining the 1 st aberration correction image 86A1 and the processing target image 75A1 (for example, here, the pixel value of the pixel in which the influence of the aberration is reduced by the generation model 82A1 is an example). In this case, the influence of the element derived from the processing of the AI scheme on the composite image 92A is alleviated by the element derived from the processing target image 75A1 (for example, the image area 75A1 a). Therefore, it is possible to suppress an excessive or insufficient correction amount for the composite image 92A in the AI system to correct a phenomenon (here, the influence of aberration is an example) that occurs in the processing target image 75A1 due to the characteristics of the image pickup device 10. The image 75A1 to be processed combined with the 1 st aberration correction image 86A1 is an example of the "2 nd image" according to the technique of the present invention.
In the above-described embodiment, the influence of the aberration (in the example shown in fig. 4, the image area 75A1 a) is illustrated as an example of the phenomenon occurring in the processing target image 75A1 due to the characteristics of the image pickup device 10, but the technique of the present invention is not limited to this. For example, the phenomenon occurring in the processing target image 75A1 due to the characteristics of the image pickup device 10 may be flare, ghost, or the like caused by the imaging lens 40, and in this case, flare and ghost may be reduced by AI-mode processing or non-AI-mode processing. The brightness according to the aperture of the imaging lens 40 may be adjusted by AI-mode processing and non-AI-mode processing.
In the above embodiment, the 2 nd aberration correction image 88A1 is illustrated as an image obtained by performing the non-AI process on the processing target image 75A1, but the technique of the present invention is not limited to this. For example, instead of the 2 nd aberration correction image 88A1, an image obtained by not performing processing using the generation model 82A1 on an image different from the processing target image 75A1 (for example, an image other than the processing target image 75A1 out of a plurality of images including the processing target image 75A1 obtained by continuous shooting) may be applied. The same applies to the following description of modification 1 and the following.
[ modification 1 ]
As an example, as shown in fig. 7, the processor 62 according to the modification 1 differs from the processor 62 shown in fig. 4 in that the AI-based processing unit 62A1 includes an AI-based processing unit 62A2 and the non-AI-based processing unit 62B1 includes a non-AI-based processing unit 62B2. In modification 1, items identical to those described above will not be described again, and items different from those described above will be described.
The processing target image 75A2 is input to the AI-mode processing unit 62A2 and the non-AI-mode processing unit 62B2. The processing target image 75A2 is an example of the processing target image 75A shown in fig. 2. The processing target image 75A2 is a color image, and has a person region 94 and a background region 96. The person region 94 is an image region in which a person is shown. The background area 96 is an image area in which the background is reflected.
Here, the person and the background shown in the processing target image 75A2 are examples of the "1 st subject" according to the technique of the present invention. The person region 94 is an example of the "1 st region" and "region where a specific object is displayed" according to the technique of the present invention. The background area 96 is an example of "the 2 nd area which is an area different from the 1 st area" according to the present invention. The color of the person region 94 and the color of the background region 96 are examples of "a non-noise element of the image to be processed" and "a factor that controls the visual impression given by the image to be processed" and "a color" according to the technique of the present invention.
The AI-scheme processing unit 62A2 performs AI-scheme processing on the processing target image 75A2. As an example of the AI-mode processing for the processing target image 75A2, a processing using the generation model 82A2 can be given. The generation model 82A2 is an example of the generation model 82A shown in fig. 3. The generation model 82A2 is a generation network in which learning has been performed to change the color of the person region 94 and the color of the background region 96 so that the person region 94 and the background region 96 can be distinguished from each other.
The AI-mode processing unit 62A2 changes, in the AI mode, a factor that controls the visual impression given by the processing target image 75A2. That is, the AI-scheme processing section 62A2 changes, as a non-noise element of the processing target image 75A2, a factor that controls the visual impression given by the processing target image 75A2 by performing processing using the generation model 82A2 on the processing target image 75A2. The factors controlling the visual impression given by the processing target image 75A2 are the color of the person region 94 and the color of the background region 96. In the example shown in fig. 7, the AI-scheme processing section 62A2 generates the 1 st coloring image 86A2 by performing processing using the generation model 82A2 on the processing target image 75A2. The 1 st coloring image 86A2 is an image colored so that the person region 94 and the background region 96 can be distinguished from each other. For example, the person region 94 is chromatic, and the background region 96 is achromatic.
The process of using the generation model 82A2 is an example of "1 st AI process", "1 st change process", and "coloring process" according to the technique of the present invention. The 1 st color image 86A2 is an example of the "1 st modified image" and the "1 st color image" according to the technology of the present invention. "generating the 1 st color image 86A2" is an example of "acquiring the 1 st image" according to the technique of the present invention.
The processing target image 75A2 is input to the generation model 82A2. The generation model 82A2 generates and outputs a 1 st coloring image 86A2 from the input processing target image 75 A2.
The non-AI-scheme processing unit 62B2 performs a non-AI-scheme process on the processing target image 75 A2. The processing in the non-AI mode refers to processing that does not use a neural network. In modification 1, the process of not using the neural network includes, for example, a process of not using the generation model 82A2.
As an example of the non-AI-mode processing for the processing target image 75A2, a processing using the digital filter 84A2 is given. The digital filter 84A2 is configured to change the color in the processing target image 75A2 to an achromatic color. The non-AI-scheme processing unit 62B2 generates the 2 nd color image 88A2 by performing processing (i.e., filtering) using the digital filter 84A2 on the processing target image 75 A2. In other words, the non-AI-scheme processing unit 62B2 generates the 2 nd color image 88A2 by adjusting a non-noise element (here, color is an example) in the processing target image 75A2 in a non-AI scheme. In other words, the non-AI-method processing unit 62B2 generates the 2 nd color image 88A2 by changing the color in the processing target image 75A2 to an achromatic color in the non-AI method.
The processing using the digital filter 84A2 is an example of "processing of the non-AI method that does not use a neural network" and "processing of the 2 nd change that changes the factor by the non-AI method" according to the technique of the present invention. "generating the 2 nd color image 88A2" is an example of "acquiring the 2 nd image" according to the technique of the present invention.
The processing target image 75A2 is input to the digital filter 84A2. The digital filter 84A2 generates a 2 nd color image 88A2 from the input processing target image 75A2. The 2 nd color image 88A2 is an image obtained by changing the non-noise element by the digital filter 84A2 (i.e., an image obtained by changing the non-noise element by the processing using the digital filter 84A2 with respect to the processing target image 75A2). In other words, the 2 nd color image 88A2 is an image in which the color in the processing target image 75A2 is changed by the digital filter 84A2 (i.e., an image in which the color is changed to an achromatic color by processing using the digital filter 84A2 for the processing target image 75A2). The 2 nd color image 88A2 is an example of the "2 nd image", "2 nd modified image", and "2 nd color image" according to the technology of the present invention.
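A minimal sketch of such a non-AI change of color to an achromatic color is shown below, assuming an RGB image held as a NumPy array; the Rec. 601 luminance weights and the function name are assumptions and are not specified in this modification.

```python
import numpy as np

def to_achromatic(rgb: np.ndarray) -> np.ndarray:
    """Replace every RGB pixel of an H x W x 3 image with its luminance."""
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)  # assumed luminance weights
    luminance = rgb.astype(np.float32) @ weights                 # H x W plane of gray values
    return np.repeat(luminance[..., np.newaxis], 3, axis=2)      # same gray value in all channels
```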
The 1 st color image 86A2 obtained by performing the AI-based processing on the processing target image 75A2 may contain a color different from the user's preference due to the characteristics (for example, the number of intermediate layers and/or the learning amount) of the generated model 82 A2. If the influence of the AI-based processing is excessively reflected on the processing target image 75A2, a color different from the preference of the user may be apparent.
In view of this, in the imaging apparatus 10, as an example, as shown in fig. 8, the 1 st color image 86A2 and the 2 nd color image 88A2 are synthesized by performing the processing of the image adjustment unit 62C2 and the processing of the synthesis unit 62D2 on the 1 st color image 86A2 and the 2 nd color image 88A2.
As an example, as shown in fig. 8, the NVM64 stores a proportion 90B. The ratio 90B is a ratio of the 1 st color image 86A2 and the 2 nd color image 88A2 synthesized, and is set to adjust the excessive or insufficient processing of the AI scheme (i.e., the processing using the generation model 82 A2) by the AI scheme processing unit 62 A2.
The ratio 90B is roughly divided into a 1 st ratio 90B1 and a 2 nd ratio 90B2. The 1 st ratio 90B1 is a value of 0 to 1, and the 2 nd ratio 90B2 is a value obtained by subtracting the 1 st ratio 90B1 from "1". That is, the 1 st ratio 90B1 and the 2 nd ratio 90B2 are set so that the sum of the 1 st ratio 90B1 and the 2 nd ratio 90B2 becomes "1". The 1 st ratio 90B1 and the 2 nd ratio 90B2 are variable values that can be changed according to an instruction from a user.
The image adjustment unit 62C2 adjusts the 1 st color image 86A2 generated by the AI-scheme processing unit 62A2 using the 1 st scale 90B 1. For example, the image adjustment unit 62C2 multiplies the 1 st scale 90B1 by the pixel value of each pixel of the 1 st color image 86A2 to adjust the pixel value of each pixel of the 1 st color image 86A2.
The image adjustment unit 62C2 adjusts the 2 nd color image 88A2 generated by the non-AI-scheme processing unit 62B2 using the 2 nd scale 90B 2. For example, the image adjustment unit 62C2 multiplies the 2 nd ratio 90B2 by the pixel value of each pixel of the 2 nd color image 88A2 to adjust the pixel value of each pixel of the 2 nd color image 88A2.
The combining section 62D2 generates a combined image 92B by combining the 1 st color image 86A2 adjusted by the 1 st scale 90B1 through the image adjusting section 62C2 and the 2 nd color image 88A2 adjusted by the 2 nd scale 90B2 through the image adjusting section 62C 2. That is, the combining unit 62D2 combines the 1 st color image 86A2 adjusted in the 1 st scale 90B1 and the 2 nd color image 88A2 adjusted in the 2 nd scale 90B2 to adjust the excessive or insufficient processing of the AI scheme by the AI scheme processing unit 62 A2. In other words, the combining unit 62D2 combines the 1 st color image 86A2 adjusted in the 1 st scale 90B1 and the 2 nd color image 88A2 adjusted in the 2 nd scale 90B2 to adjust the non-noise element (here, the color is an example). In other words, the synthesizing unit 62D2 synthesizes the 1 st color image 86A2 adjusted in the 1 st scale 90B1 and the 2 nd color image 88A2 adjusted in the 2 nd scale 90B2 to adjust elements (for example, pixel values of pixels whose colors are changed by the generation model 82 A2) derived from the processing using the generation model 82 A2.
The synthesis performed by the synthesis unit 62D2 is the addition of the pixel values at the corresponding pixel positions between the 1 st color image 86A2 and the 2 nd color image 88 A2. The synthesis by the synthesis unit 62D2 is performed in the same manner as the synthesis by the synthesis unit 62D1 shown in fig. 5. The composite image 92B is also subjected to various image processing by the compositing unit 62D2 in the same manner as the composite image 92A shown in fig. 5. The synthesized image 92B subjected to various image processing is output to a predetermined output destination by the synthesizing unit 62D 2.
Fig. 9 shows an example of the flow of the image synthesis processing according to modification 1. The flowchart shown in fig. 9 differs from the flowchart shown in fig. 6 in that steps ST50 to ST68 are applied instead of steps ST12 to ST 30.
In the image synthesizing process shown in fig. 9, in step ST50, the AI-scheme processing unit 62A2 and the non-AI-scheme processing unit 62B2 acquire the processing target image 75A2 from the image sensor 20. After the process of step ST50 is performed, the image synthesis process proceeds to step ST52.
In step ST52, the AI-scheme processing unit 62A2 inputs the processing-target image 75A2 acquired in step ST50 into the generation model 82A2. After the process of step ST52 is performed, the image synthesis process proceeds to step ST54.
In step ST54, the AI-scheme processing unit 62A2 acquires the 1 st color image 86A2, which is output from the generation model 82A2 as a result of inputting the processing target image 75A2 into the generation model 82A2 in step ST52. After the process of step ST54 is performed, the image synthesis process proceeds to step ST56.
In step ST56, the non-AI-scheme processing unit 62B2 adjusts the color in the processing target image 75A2 by performing the processing using the digital filter 84A2 on the processing target image 75A2 acquired in step ST 50. After the process of step ST56 is performed, the image synthesis process proceeds to step ST58.
In step ST58, the non-AI-mode processing unit 62B2 acquires the 2 nd color image 88A2, and the 2 nd color image 88A2 is obtained by performing processing using the digital filter 84A2 on the processing target image 75A2 in step ST56. After the process of step ST58 is performed, the image synthesis process proceeds to step ST60.
In step ST60, the image adjustment unit 62C2 acquires the 1 st scale 90B1 and the 2 nd scale 90B2 from the NVM64. After the process of step ST60 is performed, the image synthesis process proceeds to step ST62.
In step ST62, the image adjustment unit 62C2 adjusts the 1 st color image 86A2 using the 1 st scale 90B1 acquired in step ST60. After the process of step ST62 is performed, the image synthesis process proceeds to step ST64.
In step ST64, the image adjustment unit 62C2 adjusts the 2 nd color image 88A2 using the 2 nd scale 90B2 acquired in step ST 60. After the process of step ST64 is performed, the image synthesis process proceeds to step ST66.
In step ST66, the combining unit 62D2 combines the 1 st color image 86A2 adjusted in step ST62 and the 2 nd color image 88A2 adjusted in step ST64 to adjust the excessive or insufficient processing of the AI scheme by the AI scheme processing unit 62A2. The synthesized image 92B is generated by synthesizing the 1 st color image 86A2 adjusted in step ST62 and the 2 nd color image 88A2 adjusted in step ST64. After the process of step ST66 is performed, the image synthesis process proceeds to step ST68.
In step ST68, the combining unit 62D2 performs various image processing on the combined image 92B. Then, the combining unit 62D2 outputs an image obtained by performing various image processing on the combined image 92B as a processed image 75B to a predetermined output destination. After the process of step ST68 is performed, the image synthesis process proceeds to step ST32.
As described above, in the image pickup apparatus 10 according to the present modification 1, the 1 st color image 86A2 is generated by changing, by the processing of the AI scheme, a factor (here, color is an example) that controls the visual impression given by the processing target image 75A2. The 2 nd color image 88A2 is generated by changing, by the processing of the non-AI scheme, a factor that controls the visual impression given by the processing target image 75A2. The 1 st color image 86A2 is adjusted at the 1 st scale 90B1, and the 2 nd color image 88A2 is adjusted at the 2 nd scale 90B2. Then, the synthesized image 92B is generated by synthesizing the 1 st color image 86A2 adjusted at the 1 st scale 90B1 and the 2 nd color image 88A2 adjusted at the 2 nd scale 90B2. Thus, the element (for example, the color in the 1 st color image 86A2) resulting from the processing of the AI scheme is adjusted. That is, the influence of the elements derived from the processing of the AI scheme on the composite image 92B is alleviated by the elements derived from the processing of the non-AI scheme (e.g., the color within the 2 nd color image 88A2). Therefore, it is possible to suppress an excess or deficiency, for the composite image 92B, of the amount of change in the factor that controls the visual impression given by the processing target image 75A2 and that is changed in the AI scheme. As a result, the composite image 92B is an image in which the influence of the AI-mode processing is less noticeable than in the 1 st color image 86A2, and an appropriate image can be provided to a user who does not want the influence of the AI-mode processing to be noticeable.
In the present modification 1, the 1 st color image 86A2 is generated by coloring the human region 94 and the background region 96 in the processing target image 75A2 in an AI manner so as to be distinguishable. Then, the 1 st coloring image 86A2 and the 2 nd coloring image 88A2 are synthesized. This can suppress excessive or insufficient coloring of the synthesized image 92B by the AI method. As a result, the composite image 92B is an image in which coloring by the AI method is less noticeable than the 1 st coloring image 86A2, and an appropriate image can be provided to a user who does not like coloring by the AI method.
In the present modification 1, the 1 st color image 86A2 and the 2 nd color image 88A2 are synthesized after the human region 94 and the background region 96 in the processing target image 75A2 are colored so as to be distinguishable by the AI method, and therefore, it is possible to suppress an excessive or insufficient coloring of the processing by the AI method for the human region 94. As a result, the composite image 92B is an image in which coloring of the human figure region 94 by the AI method is less likely to be noticeable than the 1 st coloring image 86A2, and an appropriate image can be provided to a user who does not like coloring of the human figure region 94 by the AI method.
In the examples shown in fig. 7 to 9, the description has been given of the embodiment in which the non-AI-scheme processing unit 62B2 changes the color in the processing target image 75A2 from chromatic to achromatic regardless of the subject displayed in the processing target image 75A2, but the technique of the present invention is not limited to this. For example, the non-AI-mode processing section 62B2 may color the person region 94 and the background region 96 in a non-AI mode so that they can be distinguished from each other.
In this case, for example, as shown in fig. 10, the non-AI-scheme processing unit 62B2 performs processing using the digital filter 84A2a on the processing target image 75A2. The digital filter 84A2a is a digital filter configured to color the person region 94 and the background region 96 in the processing target image 75A2 so that they can be distinguished from each other. The digital filter 84A2a may be configured to make one of the person region 94 and the background region 96 chromatic and the other achromatic. The digital filter 84A2a may also be configured to make both the person region 94 and the background region 96 chromatic or achromatic while changing the gradation between the person region 94 and the background region 96.
The non-AI-scheme processing section 62B2 generates, as the 2 nd color image 88A2, an image in which the person region 94 and the background region 96 are colored so as to be distinguishable, by performing processing using the digital filter 84A2a on the processing target image 75A2. By synthesizing the 2 nd color image 88A2 generated in this way with the 1 st color image 86A2, the user can easily visually recognize the difference between the person region 94 and the background region 96 within the synthesized image 92B.
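The following is a minimal sketch of such region-distinguishable coloring performed in a non-AI manner, assuming Python with NumPy and assuming that a binary mask of the person region 94 is already available; how the mask is obtained, and all names used here, are illustrative assumptions.

```python
import numpy as np

def color_by_region(rgb: np.ndarray, person_mask: np.ndarray) -> np.ndarray:
    """Keep the person region chromatic and make the background achromatic."""
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)  # assumed luminance weights
    gray = (rgb.astype(np.float32) @ weights)[..., np.newaxis]   # achromatic version of the image
    mask = person_mask.astype(np.float32)[..., np.newaxis]       # 1.0 inside the person region
    return mask * rgb.astype(np.float32) + (1.0 - mask) * gray   # chromatic person, achromatic background
```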
In the present modification 1, the person region 94 is exemplified as an example of the "1 st region" and the "region where a specific object is displayed" according to the technique of the present invention, but this is merely an example, and the technique of the present invention is also applicable even to regions other than the person region 94 (for example, a region where a specific vehicle is displayed, a region where a specific animal is displayed, a region where a specific plant is displayed, a region where a specific building is displayed, and/or a region where a specific airplane is displayed).
In the present modification 1, the embodiment in which the 1 st color image 86A2 and the 2 nd color image 88A2 are synthesized has been described as an example, but the technique of the present invention is not limited thereto. For example, the element (for example, the color in the 1 st color image 86A2) resulting from the processing of the AI scheme may be adjusted by synthesizing the processing target image 75A2 (i.e., an image in which the non-noise element is not adjusted) with the 1 st color image 86A2 instead of the 2 nd color image 88A2. In this case, the influence of the elements derived from the processing of the AI scheme on the composite image 92B is alleviated by the elements derived from the processing target image 75A2 (for example, the color within the processing target image 75A2). Therefore, it is possible to suppress an excess or deficiency, for the composite image 92B, of the amount of change in the factor that controls the visual impression given by the processing target image 75A2 and that is changed in the AI scheme. The processing target image 75A2 combined with the 1 st color image 86A2 is an example of the "2 nd image" according to the technique of the present invention.
[ modification 2 ]
As an example, as shown in fig. 11, the processor 62 according to the modification 2 differs from the processor 62 shown in fig. 4 in that the AI-based processing unit 62A1 includes an AI-based processing unit 62A3 and the non-AI-based processing unit 62B1 includes a non-AI-based processing unit 62B3. In the present modification 2, the description of items identical to those already described above is omitted, and items different from those already described above are described.
The processing target image 75A3 is input to the AI-mode processing unit 62A3 and the non-AI-mode processing unit 62B3. The processing target image 75A3 is an example of the processing target image 75A shown in fig. 2. The processing target image 75A3 is a color image, and has a person region 98 and a background region 100. The person region 98 is an image region in which a person is shown. The background area 100 is an image area in which a background is displayed. In this case, a color image is illustrated as the processing target image 75A3, but the processing target image 75A3 may be an achromatic color image.
The AI-mode processing unit 62A3 and the non-AI-mode processing unit 62B3 perform processing for adjusting the contrast of the input processing target image 75 A3. In modification 2, the process of adjusting the contrast is a process of enhancing or weakening the contrast. The contrast of the processing target image 75A3 is an example of "a non-noise element of the processing target image", "a factor that controls the visual impression given by the processing target image", and "a contrast of the processing target image" according to the technique of the present invention.
The AI-scheme processing unit 62A3 performs AI-scheme processing on the processing target image 75 A3. As an example of the AI-mode processing for the processing target image 75A3, a processing using the generation model 82A3 can be given. The generation model 82A3 is an example of the generation model 82A shown in fig. 3. The generation model 82A3 is a generation network in which learning to adjust the contrast of the processing target image 75A3 has been performed.
The AI-mode processing unit 62A3 changes, in the AI mode, a factor that controls the visual impression given by the processing target image 75A3. That is, the AI-scheme processing section 62A3 changes, as a non-noise element of the processing target image 75A3, a factor that controls the visual impression given by the processing target image 75A3 by performing processing using the generation model 82A3 on the processing target image 75A3. A factor that controls the visual impression given by the processing target image 75A3 is the contrast of the processing target image 75A3. In the example shown in fig. 11, the AI-scheme processing section 62A3 generates the 1 st contrast adjustment image 86A3 by performing processing using the generation model 82A3 on the processing target image 75A3. The 1 st contrast adjustment image 86A3 is an image in which the contrast of the processing target image 75A3 is adjusted in the AI method.
The process of using the generation model 82A3 is an example of "1 st AI process", "1 st change process", and "1 st contrast adjustment process" according to the technique of the present invention. The 1 st contrast adjustment image 86A3 is an example of the "1 st modified image" and the "1 st contrast adjustment image" according to the technique of the present invention. "generating the 1 st contrast adjustment image 86A3" is an example of "acquiring the 1 st image" according to the technique of the present invention.
The processing target image 75A3 is input to the generation model 82A3. The generation model 82A3 generates and outputs a 1 st contrast adjustment image 86A3 from the input processing target image 75 A3. In the example shown in fig. 11, an image having a higher contrast than the processing target image 75A3 is shown as an example of the 1 st contrast adjustment image 86A3.
The non-AI-scheme processing unit 62B3 performs a non-AI-scheme process on the processing target image 75 A3. The processing in the non-AI mode refers to processing that does not use a neural network. In modification 2, the process of not using the neural network includes, for example, a process of not using the generation model 82A3.
As an example of the non-AI-mode processing for the processing target image 75A3, a processing using the digital filter 84A3 is given. The digital filter 84A3 is a digital filter configured to adjust the contrast of the processing target image 75 A3. The non-AI-scheme processing unit 62B3 generates the 2 nd contrast adjustment image 88A3 by performing processing (i.e., filtering) using the digital filter 84A3 on the processing target image 75 A3. In other words, the non-AI-scheme processing unit 62B3 generates the 2 nd contrast adjustment image 88A3 by adjusting a non-noise element (here, contrast is an example) of the processing target image 75A3 in a non-AI scheme. In other words, the non-AI-mode processing unit 62B3 generates the 2 nd contrast adjustment image 88A3 by changing the contrast of the processing target image 75A3 in the non-AI mode.
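As a hedged sketch only (not the patent's actual digital filter 84A3), a simple non-AI contrast adjustment can be expressed as a gain applied around the mean pixel value; the NumPy-based function below is an assumption introduced purely for illustration.

import numpy as np

def adjust_contrast_non_ai(image, gain=1.2):
    # Strengthen (gain > 1) or weaken (gain < 1) contrast by scaling the
    # deviation of each pixel value from the global mean.
    img = image.astype(np.float32)
    mean = img.mean()
    out = (img - mean) * gain + mean
    return np.clip(out, 0.0, 255.0).astype(np.uint8)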
The processing using the digital filter 84A3 is an example of "processing of the non-AI method that does not use a neural network" and "processing of the 2 nd change of the non-AI method change factor" according to the technique of the present invention. "generating the 2 nd contrast adjustment image 88A3" is an example of "acquiring the 2 nd image" according to the technique of the present invention.
The processing target image 75A3 is input to the digital filter 84A3. The digital filter 84A3 generates a 2 nd contrast adjustment image 88A3 from the input processing target image 75 A3. The 2 nd contrast adjustment image 88A3 is an image obtained by changing the non-noise element by the digital filter 84A3 (that is, an image obtained by changing the non-noise element by the processing using the digital filter 84A3 with respect to the processing target image 75 A3). In other words, the 2 nd contrast adjustment image 88A3 is an image in which the contrast of the processing target image 75A3 is changed by the digital filter 84A3 (i.e., an image in which the contrast is changed by the processing using the digital filter 84A3 for the processing target image 75 A3). In the example shown in fig. 11, an image having a higher contrast than the processing target image 75A3 and lower contrast than the 1 st contrast adjustment image 86A3 is shown as an example of the 2 nd contrast adjustment image 88A3. The 2 nd contrast adjustment image 88A3 is an example of the "2 nd image", "2 nd change image", and "2 nd contrast adjustment image" according to the technology of the present invention.
The 1 st contrast adjustment image 86A3 obtained by performing the AI-based processing on the processing target image 75A3 may include a contrast different from the preference of the user due to the characteristics (for example, the number of intermediate layers and/or the learning amount) of the generated model 82 A3. If the influence of the AI-based processing is excessively reflected on the processing target image 75A3, a case is also conceivable in which a contrast different from the preference of the user becomes apparent.
In view of this, in the image pickup apparatus 10, as an example, as shown in fig. 12, the 1 st contrast adjustment image 86A3 and the 2 nd contrast adjustment image 88A3 are synthesized by performing the processing of the image adjustment unit 62C3 and the processing of the synthesis unit 62D3 on the 1 st contrast adjustment image 86A3 and the 2 nd contrast adjustment image 88A3.
As an example, as shown in fig. 12, the NVM64 stores a proportion 90C. The ratio 90C is a ratio at which the 1 st contrast adjustment image 86A3 and the 2 nd contrast adjustment image 88A3 are synthesized, and is set to adjust the excessive or insufficient processing of the AI scheme (i.e., processing using the generation model 82 A3) by the AI scheme processing unit 62 A3.
The proportion 90C is roughly divided into a 1 st proportion 90C1 and a 2 nd proportion 90C2. The 1 st proportion 90C1 is a value of 0 to 1, and the 2 nd proportion 90C2 is a value obtained by subtracting the 1 st proportion 90C1 from "1". That is, the 1 st proportion 90C1 and the 2 nd proportion 90C2 are set so that the sum of the 1 st proportion 90C1 and the 2 nd proportion 90C2 becomes "1". The 1 st scale 90C1 and the 2 nd scale 90C2 are variable values that can be changed according to an instruction from a user.
The image adjustment unit 62C3 adjusts the 1 st contrast adjustment image 86A3 generated by the AI-scheme processing unit 62A3 using the 1 st scale 90C 1. For example, the image adjustment unit 62C3 multiplies the 1 st scale 90C1 by the pixel value of each pixel of the 1 st contrast adjustment image 86A3 to adjust the pixel value of each pixel of the 1 st contrast adjustment image 86A3.
The image adjustment unit 62C3 adjusts the 2 nd contrast adjustment image 88A3 generated by the non-AI-scheme processing unit 62B3 using the 2 nd scale 90C 2. For example, the image adjustment unit 62C3 multiplies the 2 nd ratio 90C2 by the pixel value of each pixel of the 2 nd contrast adjustment image 88A3 to adjust the pixel value of each pixel of the 2 nd contrast adjustment image 88A3.
The combining unit 62D3 generates a combined image 92C by combining the 1 st contrast adjustment image 86A3 adjusted by the image adjusting unit 62C3 at the 1 st scale 90C1 and the 2 nd contrast adjustment image 88A3 adjusted by the image adjusting unit 62C3 at the 2 nd scale 90C 2. That is, the combining unit 62D3 combines the 1 st contrast adjustment image 86A3 adjusted in the 1 st scale 90C1 and the 2 nd contrast adjustment image 88A3 adjusted in the 2 nd scale 90C2 to adjust the excessive or insufficient processing of the AI scheme by the AI scheme processing unit 62 A3. In other words, the combining unit 62D3 combines the 1 st contrast adjustment image 86A3 adjusted in the 1 st scale 90C1 and the 2 nd contrast adjustment image 88A3 adjusted in the 2 nd scale 90C2 to adjust the non-noise element (here, the contrast is an example). In other words, the combining unit 62D3 combines the 1 st contrast adjustment image 86A3 adjusted in the 1 st scale 90C1 and the 2 nd contrast adjustment image 88A3 adjusted in the 2 nd scale 90C2 to adjust elements derived from the processing using the generation model 82A3 (for example, pixel values of pixels whose contrast is changed by the generation model 82 A3).
The combination by the combining unit 62D3 is the addition of the pixel values at the corresponding pixel positions between the 1 st contrast adjustment image 86A3 and the 2 nd contrast adjustment image 88 A3. The synthesis by the synthesis unit 62D3 is performed in the same manner as the synthesis by the synthesis unit 62D1 shown in fig. 5. The composite image 92C is also subjected to various image processing by the compositing unit 62D3 in the same manner as the composite image 92A shown in fig. 5. The synthesized image 92C subjected to various image processing is output to a predetermined output destination by the synthesizing unit 62D 3.
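The ratio-based adjustment and synthesis described above can be illustrated with the following minimal sketch, assuming 8-bit images of identical size; the function and variable names are hypothetical and the code is not taken from the patent. Each image is multiplied by its ratio (the two ratios summing to 1) and the pixel values at corresponding pixel positions are then added.

import numpy as np

def synthesize_with_ratio(first_adjustment_image, second_adjustment_image, first_ratio):
    # first_ratio corresponds to the 1st proportion (a value of 0 to 1); the 2nd
    # proportion is obtained by subtracting it from 1, so the two proportions sum to 1.
    second_ratio = 1.0 - first_ratio
    out = (first_adjustment_image.astype(np.float32) * first_ratio
           + second_adjustment_image.astype(np.float32) * second_ratio)
    # Addition of the pixel values at corresponding pixel positions, then clipping.
    return np.clip(out, 0.0, 255.0).astype(np.uint8)

For example, setting first_ratio to 0.5 weights the AI-adjusted image and the non-AI-adjusted image equally, while a smaller value weakens the influence of the AI-scheme processing on the composite image.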
Fig. 13 shows an example of the flow of the image synthesis processing according to modification 2. The flowchart shown in fig. 13 differs from the flowchart shown in fig. 6 in that steps ST100 to ST118 are applied instead of steps ST12 to ST 30.
In the image combining process shown in fig. 13, in step ST100, the AI-scheme processing unit 62A3 and the non-AI-scheme processing unit 62B3 acquire the processing target image 75A3 from the image sensor 20. After the process of step ST100 is performed, the image synthesis process proceeds to step ST102.
In step ST102, the AI-scheme processing unit 62A3 inputs the processing-target image 75A3 acquired in step ST100 into the generation model 82A3. After the process of step ST102 is performed, the image synthesis process proceeds to step ST104.
In step ST104, the AI-scheme processing unit 62A3 acquires the 1 ST contrast adjustment image 86A3, and the 1 ST contrast adjustment image 86A3 is output from the generation model 82A3 by inputting the processing target image 75A3 into the generation model 82A3 in step ST 102. After the process of step ST104 is performed, the image synthesis process proceeds to step ST106.
In step ST106, the non-AI-scheme processing unit 62B3 adjusts the contrast of the processing target image 75A3 by performing the processing using the digital filter 84A3 on the processing target image 75A3 acquired in step ST 100. After the process of step ST106 is performed, the image synthesis process proceeds to step ST108.
In step ST108, the non-AI-mode processing unit 62B3 acquires the 2 nd contrast adjustment image 88A3, and the 2 nd contrast adjustment image 88A3 is obtained by performing processing using the digital filter 84A3 on the processing target image 75A3 in step ST106. After the process of step ST108 is performed, the image synthesis process proceeds to step ST110.
In step ST110, the image adjustment unit 62C3 acquires the 1 ST scale 90C1 and the 2 nd scale 90C2 from the NVM 64. After the process of step ST110 is performed, the image synthesis process proceeds to step ST112.
In step ST112, the image adjustment unit 62C3 adjusts the 1 ST contrast adjustment image 86A3 using the 1 ST scale 90C1 acquired in step ST110. After the process of step ST112 is performed, the image synthesis process proceeds to step ST114.
In step ST114, the image adjustment unit 62C3 adjusts the 2 nd contrast adjustment image 88A3 using the 2 nd scale 90C2 acquired in step ST 110. After the process of step ST114 is performed, the image synthesis process proceeds to step ST116.
In step ST116, the combining unit 62D3 combines the 1 st contrast adjustment image 86A3 adjusted in step ST112 and the 2 nd contrast adjustment image 88A3 adjusted in step ST114 to adjust the excessive or insufficient processing of the AI scheme by the AI scheme processing unit 62A3. The synthesized image 92C is generated by synthesizing the 1 st contrast adjustment image 86A3 adjusted in step ST112 and the 2 nd contrast adjustment image 88A3 adjusted in step ST114. After the process of step ST116 is performed, the image synthesis process proceeds to step ST118.
In step ST118, the combining unit 62D3 performs various image processing on the combined image 92C. Then, the combining unit 62D3 outputs an image obtained by performing various image processing on the combined image 92C as a processed image 75B to a predetermined output destination. After the process of step ST118 is performed, the image synthesis process proceeds to step ST32.
As described above, in the image pickup apparatus 10 according to modification 2, the 1 st contrast adjustment image 86A3 is generated by adjusting the contrast of the processing target image 75A3 in the AI method. Then, the 2 nd contrast adjustment image 88A3 is generated by adjusting the contrast of the processing target image 75A3 in a non-AI manner. Then, the 1 st contrast adjustment image 86A3 and the 2 nd contrast adjustment image 88A3 are synthesized. This can suppress excessive or insufficient contrast resulting from the processing by the AI method in the composite image 92C. As a result, the composite image 92C is an image in which the contrast given by the AI method is less noticeable than in the 1 st contrast adjustment image 86A3, and an appropriate image can be provided to a user who does not like the processing by the AI method.
In the example shown in fig. 11 to 13, the example in which the processor 62 adjusts the contrast of the entire processing target image 75A3 has been described, but the technique of the present invention is not limited to this, and the processor 62 may perform processing for adjusting the sharpness of the processing target image 75A3. The sharpness refers to the contrast between a center pixel and the pixels adjacent to it around the center pixel, within a pixel block constituted by a plurality of pixels. The processing for adjusting the sharpness includes a process of adjusting the sharpness in an AI manner and a process of adjusting the sharpness in a non-AI manner.
The process of adjusting the sharpness in the AI manner is, for example, a process using the generation model 82A3a. In this case, the generation model 82A3a is a generation network in which, in addition to the learning to adjust the contrast as described above, learning to perform the 1 st sharpness process has been performed. The 1 st sharpness process refers to a process of adjusting sharpness in an AI manner (i.e., a process of locally adjusting contrast in an AI manner). As shown in fig. 14, the local adjustment of the contrast by the AI method is realized by, for example, increasing or decreasing the difference between the pixel value of the center pixel 104A among the plurality of pixels 104 constituting the edge region of the person region 98 and the pixel values of the plurality of adjacent pixels 104B adjacent to the center pixel 104A around the center pixel 104A.
The process of adjusting the sharpness in the non-AI manner is, for example, a process using the digital filter 84A3a. In this case, the digital filter 84A3a is configured to adjust the contrast as described above and also perform the 2 nd sharpness process. The 2 nd sharpness process refers to a process of adjusting sharpness in a non-AI manner (i.e., a process of locally adjusting contrast in a non-AI manner). As shown in fig. 14, the local adjustment of the contrast by the non-AI method is realized by, for example, increasing or decreasing the difference between the pixel value of the center pixel 106A among the plurality of pixels 106 constituting the edge region of the person region 98 and the pixel values of the plurality of adjacent pixels 106B adjacent to the center pixel 106A around the center pixel 106A.
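As an illustrative sketch only (assuming a grayscale image and SciPy; not the patent's filter 84A3a), local contrast can be adjusted by increasing or decreasing the difference between each pixel and the mean of its neighborhood, which corresponds to the sharpness adjustment described above.

import numpy as np
from scipy.ndimage import uniform_filter

def adjust_sharpness_non_ai(image, amount=0.5):
    # Increase (amount > 0) or decrease (amount < 0) the difference between the
    # center pixel and the mean of its 3 x 3 neighborhood, i.e. the local contrast.
    img = image.astype(np.float32)
    local_mean = uniform_filter(img, size=3)
    out = img + amount * (img - local_mean)
    return np.clip(out, 0.0, 255.0).astype(np.uint8)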
Here, if the 1 st sharpness process is performed, an unnatural border may appear in the human region 98 due to the sharpness of the 1 st contrast adjustment image 86A3 becoming too strong, or conversely, a fine portion of the human region 98 may become unclear due to the sharpness of the 1 st contrast adjustment image 86A3 becoming too weak. Thus, the 1 st contrast adjustment image 86A3 subjected to the 1 st sharpness process and the 2 nd contrast adjustment image 88A3 subjected to the 2 nd sharpness process are synthesized in proportion 90C. Thereby, an element (for example, a pixel value of a pixel whose contrast is changed by the generation model 82A3 a) derived from the 1 st sharpness process is adjusted. As a result, an image in which the influence of the 1 st sharpness process is relieved can be obtained as the composite image 92C.
Here, the embodiment of combining the 1 st contrast adjustment image 86A3 subjected to the 1 st sharpness process and the 2 nd contrast adjustment image 88A3 subjected to the 2 nd sharpness process has been described, but the 1 st contrast adjustment image 86A3 subjected to the 1 st sharpness process and the 2 nd contrast adjustment image 88A3 not subjected to the 2 nd sharpness process or the image 75A3 to be processed may be combined. In this case, the same effect can be expected.
The 1 st sharpness process is an example of the "5 th contrast adjustment process" according to the technique of the present invention. The 2 nd sharpness process is an example of the "6 th contrast adjustment process" according to the technique of the present invention. The 1 st contrast adjustment image 86A3 obtained by performing 1 st sharpness processing is an example of the "5 th contrast image" according to the technique of the present invention. The 2 nd contrast adjustment image 88A3 obtained by performing the 2 nd sharpness process is an example of the "6 th image" according to the technique of the present invention.
In the example shown in fig. 11 to 13, the case where a person is shown in the processing target image 75A3 has been described, but the technique of the present invention is not limited to this. For example, as shown in fig. 15, a person and a vehicle (here, an automobile is an example) may be displayed in the processing target image 75A3. In this case, the processing target image 75A3 has the person region 98 and the vehicle region 108. The vehicle region 108 is an image region in which a vehicle is displayed.
The AI-mode processing unit 62A adjusts the contrast according to the subject with respect to the processing target image 75A3 in the AI mode. To achieve this, in the example shown in fig. 15, the AI-scheme processing unit 62A performs processing using the generation model 82A3 b. The generation model 82A3b is a generation network in which learning to adjust contrast according to an object has been performed. The AI-scheme processing unit 62A adjusts the contrast according to the person region 98 and the vehicle region 108 in the processing target image 75A3 using the generated model 82A3 b. That is, the person region 98 is given a contrast corresponding to the person represented by the person region 98, and the vehicle region 108 is given a contrast corresponding to the vehicle represented by the vehicle region 108. In the example shown in fig. 15, the contrast of the vehicle region 108 is higher than the contrast of the person region 98.
The non-AI-scheme processing unit 62B adjusts the contrast according to the subject with respect to the processing target image 75A3 in the non-AI scheme. To achieve this, in the example shown in fig. 15, the non-AI-scheme processing unit 62B performs processing using the digital filter 84 A3B. The digital filter 84A3b is a digital filter configured to adjust contrast according to an object. The non-AI-mode processing unit 62B adjusts the contrast according to the person region 98 and the vehicle region 108 in the processing target image 75A3 using the digital filter 84 A3B. In the example shown in fig. 15, the contrast of the vehicle region 108 is higher than the contrast of the person region 98. The contrast of the vehicle region 108 in the 2 nd contrast adjustment image 88A3 is lower than the contrast of the vehicle region 108 in the 1 st contrast adjustment image 86 A3. The contrast of the human region 98 in the 2 nd contrast adjustment image 88A3 is lower than the contrast of the human region 98 in the 1 st contrast adjustment image 86 A3.
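A minimal sketch of such subject-dependent contrast adjustment, assuming precomputed boolean masks for the person region and the vehicle region (the masks, gains, and function name are assumptions for illustration and not part of the patent), could look as follows.

import numpy as np

def adjust_contrast_per_subject(image, person_mask, vehicle_mask,
                                person_gain=1.1, vehicle_gain=1.3):
    # Apply a different contrast gain to the person region and the vehicle region,
    # so that each region is given a contrast corresponding to its subject.
    img = image.astype(np.float32)
    mean = img.mean()
    out = img.copy()
    out[person_mask] = (img[person_mask] - mean) * person_gain + mean
    out[vehicle_mask] = (img[vehicle_mask] - mean) * vehicle_gain + mean
    return np.clip(out, 0.0, 255.0).astype(np.uint8)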
Here, if the 1 st contrast adjustment image 86A3 is excessively affected by the process of using the generated model 82A3b, the contrast of the person region 98 and the vehicle region 108 in the 1 st contrast adjustment image 86A3 may not match the preference of the user. For example, the user may feel that the contrast of the person region 98 and the vehicle region 108 in the 1 st contrast adjustment image 86A3 is too high. Accordingly, the 1 st contrast adjustment image 86A3 obtained by performing the process using the generation model 82A3b on the processing target image 75A3 and the 2 nd contrast adjustment image 88A3 obtained by performing the process using the digital filter 84A3b on the processing target image 75A3 are synthesized at the scale 90C. Thereby, elements derived from the process of using the generation model 82A3b (for example, pixel values of pixels whose contrast is changed by the generation model 82A3 b) are adjusted. Thereby, an image in which the influence of the process of using the generation model 82A3b is alleviated can be obtained as the composite image 92C.
Here, the embodiment of synthesizing the 1 st contrast adjustment image 86A3 obtained by processing the processing target image 75A3 using the generation model 82A3b and the 2 nd contrast adjustment image 88A3 obtained by processing the processing target image 75A3 using the digital filter 84A3b has been described, but the technique of the present invention is not limited to this. For example, the 1 st contrast adjustment image 86A3 obtained by performing the process using the generation model 82A3b on the image to be processed 75A3 and the 2 nd contrast adjustment image 88A3 or the image to be processed 75A3 without performing the process using the digital filter 84A3b may be synthesized. In this case, the same effect can be expected.
The process of generating the model 82A3b is an example of "the 3 rd contrast adjustment process" according to the technique of the present invention. The process using the digital filter 84A3b is an example of "4 th contrast adjustment process" according to the technique of the present invention. The 1 st contrast adjustment image 86A3 obtained by performing processing using the generation model 82A3b on the processing target image 75A3 is an example of the "3 rd contrast adjustment image" according to the technique of the present invention. The 2 nd contrast adjustment image 88A3 obtained by performing processing using the digital filter 84A3b on the processing target image 75A3 is an example of the "4 th contrast adjustment image" according to the technique of the present invention.
[ modification example 3 ]
As an example, as shown in fig. 16, the processor 62 according to the modification 3 differs from the processor 62 shown in fig. 4 in that the AI-based processing unit 62A1 includes an AI-based processing unit 62A4 and the non-AI-based processing unit 62B1 includes a non-AI-based processing unit 62B4. In the present modification 3, the description of items identical to those already described above is omitted, and items different from those already described above are described.
The processing target image 75A4 is input to the AI-mode processing unit 62A4 and the non-AI-mode processing unit 62B4. The processing target image 75A4 is an example of the processing target image 75A shown in fig. 2. The processing object image 75A4 is a color image, and has a person region 110. The person region 110 is an image region in which a person is shown. In this case, a color image is illustrated as the processing target image 75A4, but the processing target image 75A4 may be an achromatic color image.
The AI-mode processing unit 62A4 and the non-AI-mode processing unit 62B4 perform processing for adjusting the resolution of the input processing target image 75 A4. In modification 3, the resolution adjustment process refers to a process of increasing or decreasing the resolution. The resolution of the processing target image 75A4 is an example of "a non-noise element of the processing target image", "a factor that controls a visual impression given by the processing target image", and "a resolution of the processing target image" according to the technique of the present invention.
The AI-scheme processing unit 62A4 performs AI-scheme processing on the processing target image 75 A4. As an example of the AI-mode processing for the processing target image 75A4, a processing using the generation model 82A4 is given. The generative model 82A4 is an example of the generative model 82A shown in fig. 3. The generation model 82A4 is a generation network in which learning to adjust the resolution of the processing target image 75A4 has been performed. In modification 3, the learning to adjust the resolution of the processing target image 75A4 is learning to super-resolution the processing target image 75 A4.
The AI-mode processing unit 62A4 changes, in the AI mode, a factor that controls the visual impression given by the processing target image 75A4. That is, the AI-scheme processing section 62A4 changes, as a non-noise element of the processing target image 75A4, a factor that controls the visual impression given by the processing target image 75A4 by performing processing using the generation model 82A4 on the processing target image 75A4. A factor that controls the visual impression given by the processing target image 75A4 is the resolution of the processing target image 75A4. In the example shown in fig. 16, the AI-scheme processing unit 62A4 generates the 1 st resolution adjustment image 86A4 by performing processing using the generation model 82A4 on the processing target image 75A4. The 1 st resolution adjustment image 86A4 is an image in which the resolution of the processing target image 75A4 is adjusted in the AI method. Here, the image in which the resolution of the processing target image 75A4 is adjusted in the AI method is an image in which the processing target image 75A4 is super-resolved in the AI method.
The process of using the generation model 82A4 is an example of "1 st AI process", "1 st change process", and "1 st resolution adjustment process" according to the technique of the present invention. The 1 st resolution adjustment image 86A4 is an example of the "1 st change image" and the "1 st resolution adjustment image" according to the technology of the present invention.
"generating the 1 st resolution adjustment image 86A4" is an example of "acquiring the 1 st image" according to the technique of the present invention.
The processing target image 75A4 is input to the generation model 82A4. The generation model 82A4 generates and outputs a 1 st resolution adjustment image 86A4 from the input processing target image 75 A4. In the example shown in fig. 16, an image in which the processing target image 75A4 is super-resolution is shown as an example of the 1 st resolution adjustment image 86A4.
The non-AI-scheme processing unit 62B4 performs a non-AI-scheme process on the processing target image 75 A4. The processing in the non-AI mode refers to processing that does not use a neural network. In modification 3, the process of not using the neural network includes, for example, a process of not using the generation model 82A4.
As an example of the non-AI-mode processing for the processing target image 75A4, a processing using the digital filter 84A4 is given. The digital filter 84A4 is a digital filter configured to adjust the resolution of the processing target image 75 A4. Hereinafter, as the digital filter 84A4, a digital filter configured to super-resolution the processing target image 75A4 will be described as an example.
The non-AI-scheme processing unit 62B4 generates the 2 nd resolution adjustment image 88A4 by performing processing (i.e., filtering) using the digital filter 84A4 on the processing target image 75 A4. In other words, the non-AI-scheme processing unit 62B4 generates the 2 nd resolution-adjusted image 88A4 by adjusting a non-noise element (here, resolution is an example) of the processing target image 75A4 in a non-AI scheme. In other words, the non-AI-mode processing unit 62B4 generates the 2 nd resolution adjustment image 88A4 by adjusting the resolution of the processing target image 75A4 in the non-AI mode. The image in which the resolution of the processing target image 75A4 is adjusted in the non-AI mode is an image in which the processing target image 75A4 is super-resolved in the non-AI mode.
The process using the digital filter 84A4 is an example of "a process of a non-AI method that does not use a neural network" and "a 2 nd changing process of a non-AI method changing factor" according to the technique of the present invention. "generating the 2 nd resolution adjustment image 88A4" is an example of "acquiring the 2 nd image" according to the technique of the present invention.
The processing target image 75A4 is input to the digital filter 84A4. The digital filter 84A4 generates a 2 nd resolution adjustment image 88A4 from the inputted processing target image 75 A4. The 2 nd resolution adjustment image 88A4 is an image obtained by changing the non-noise element by the digital filter 84A4 (that is, an image obtained by changing the non-noise element by the processing using the digital filter 84A4 with respect to the processing target image 75 A4). In other words, the 2 nd resolution adjustment image 88A4 is an image in which the resolution of the processing target image 75A4 is adjusted by the digital filter 84A4 (i.e., an image in which the resolution is adjusted by the processing using the digital filter 84A4 for the processing target image 75 A4). In the example shown in fig. 16, as an example of the 2 nd resolution adjustment image 88A4, an image in which the processing target image 75A4 is super-resolution and the resolution is lower than the 1 st resolution adjustment image 86A4 is shown. The 2 nd resolution adjustment image 88A4 is an example of the "2 nd image", "2 nd change image", and "2 nd resolution adjustment image" according to the technology of the present invention.
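As a hedged, simplified stand-in for the non-AI super-resolution filtering described above (not the actual digital filter 84A4), resolution can be increased by classical interpolation; the SciPy-based sketch below assumes an H x W x 3 image and a fixed scale factor.

import numpy as np
from scipy.ndimage import zoom

def upscale_non_ai(image, factor=2):
    # Increase resolution by cubic-spline interpolation; a real non-AI
    # super-resolution filter would typically add further detail-restoring filtering.
    img = image.astype(np.float32)
    out = zoom(img, zoom=(factor, factor, 1), order=3)
    return np.clip(out, 0.0, 255.0).astype(np.uint8)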
The resolution of the 1 st resolution adjustment image 86A4 obtained by performing the AI-mode processing on the processing target image 75A4 may be different from the user's preference due to the characteristics (for example, the number of intermediate layers and/or the learning amount) of the generation model 82 A4. If the influence of the processing of the AI scheme is excessively reflected on the processing target image 75A4, it is also conceivable that the resolution is excessively higher than the preference of the user or, conversely, excessively lower than the preference of the user.
In view of this, in the imaging apparatus 10, as shown in fig. 17, for example, the 1 st resolution adjustment image 86A4 and the 2 nd resolution adjustment image 88A4 are synthesized by performing the processing of the image adjustment unit 62C4 and the processing of the synthesis unit 62D4 on the 1 st resolution adjustment image 86A4 and the 2 nd resolution adjustment image 88A4.
As an example, as shown in fig. 17, the NVM64 stores a proportion 90D. The scale 90D is a scale for synthesizing the 1 st resolution adjustment image 86A4 and the 2 nd resolution adjustment image 88A4, and is set to adjust the excessive or insufficient processing of the AI scheme (i.e., processing using the generation model 82 A4) by the AI scheme processing unit 62 A4.
The proportion 90D is roughly divided into a 1 st proportion 90D1 and a 2 nd proportion 90D2. The 1 st proportion 90D1 is a value of 0 to 1, and the 2 nd proportion 90D2 is a value obtained by subtracting the 1 st proportion 90D1 from "1". That is, the 1 st proportion 90D1 and the 2 nd proportion 90D2 are set so that the sum of the 1 st proportion 90D1 and the 2 nd proportion 90D2 becomes "1". The 1 st scale 90D1 and the 2 nd scale 90D2 are variable values that can be changed according to an instruction from a user.
The image adjustment unit 62C4 adjusts the 1 st resolution adjustment image 86A4 generated by the AI-scheme processing unit 62A4 using the 1 st scale 90D 1. For example, the image adjustment unit 62C4 multiplies the 1 st scale 90D1 by the pixel value of each pixel of the 1 st resolution adjustment image 86A4 to adjust the pixel value of each pixel of the 1 st resolution adjustment image 86A4.
The image adjustment unit 62C4 adjusts the 2 nd resolution adjustment image 88A4 generated by the non-AI-scheme processing unit 62B4 using the 2 nd scale 90D 2. For example, the image adjustment unit 62C4 multiplies the 2 nd scale 90D2 by the pixel value of each pixel of the 2 nd resolution adjustment image 88A4 to adjust the pixel value of each pixel of the 2 nd resolution adjustment image 88A4.
The combining section 62D4 generates a combined image 92D by combining the 1 st resolution adjustment image 86A4 adjusted by the image adjusting section 62C4 at the 1 st scale 90D1 and the 2 nd resolution adjustment image 88A4 adjusted by the image adjusting section 62C4 at the 2 nd scale 90D2. That is, the combining unit 62D4 combines the 1 st resolution adjustment image 86A4 adjusted in the 1 st scale 90D1 and the 2 nd resolution adjustment image 88A4 adjusted in the 2 nd scale 90D2 to adjust the excessive or insufficient processing of the AI scheme by the AI scheme processing unit 62A4. In other words, the combining unit 62D4 combines the 1 st resolution adjustment image 86A4 adjusted in the 1 st scale 90D1 and the 2 nd resolution adjustment image 88A4 adjusted in the 2 nd scale 90D2 to adjust the non-noise element (here, the resolution is an example). In other words, the combining unit 62D4 combines the 1 st resolution adjustment image 86A4 adjusted in the 1 st scale 90D1 and the 2 nd resolution adjustment image 88A4 adjusted in the 2 nd scale 90D2 to adjust elements derived from the processing using the generation model 82A4 (for example, pixel values of pixels whose resolution is adjusted by the generation model 82A4).
The synthesis performed by the synthesis unit 62D4 is the addition of the pixel values at the corresponding pixel positions between the 1 st resolution-adjusted image 86A4 and the 2 nd resolution-adjusted image 88 A4. The synthesis by the synthesis unit 62D4 is performed in the same manner as the synthesis by the synthesis unit 62D1 shown in fig. 5. The composite image 92D is also subjected to various image processing by the compositing unit 62D4 in the same manner as the composite image 92A shown in fig. 5. The synthesized image 92D subjected to various image processing is output to a predetermined output destination by the synthesizing unit 62D 4.
Fig. 18 shows an example of the flow of the image synthesis processing according to modification 3. The flowchart shown in fig. 18 differs from the flowchart shown in fig. 6 in that steps ST150 to ST168 are applied instead of steps ST12 to ST 30.
In the image combining process shown in fig. 18, in step ST150, the AI-mode processing unit 62A4 and the non-AI-mode processing unit 62B4 acquire the processing target image 75A4 from the image sensor 20. After the process of step ST150 is performed, the image synthesis process proceeds to step ST152.
In step ST152, the AI-scheme processing unit 62A4 inputs the processing-target image 75A4 acquired in step ST150 into the generation model 82A4. Thus, the processing target image 75A4 is super-resolved in the AI system. After the process of step ST152 is performed, the image synthesis process proceeds to step ST154.
In step ST154, the AI-scheme processing unit 62A4 acquires the 1 ST resolution adjustment image 86A4, and the 1 ST resolution adjustment image 86A4 is output from the generation model 82A4 by inputting the processing target image 75A4 into the generation model 82A4 in step ST 152. After the process of step ST154 is performed, the image synthesis process proceeds to step ST156.
In step ST156, the non-AI-scheme processing unit 62B4 adjusts the resolution of the processing target image 75A4 by performing the processing using the digital filter 84A4 on the processing target image 75A4 acquired in step ST 150. Thus, the processing target image 75A4 is super-resolved in a non-AI manner. After the process of step ST156 is performed, the image synthesis process proceeds to step ST158.
In step ST158, the non-AI-mode processing unit 62B4 acquires the 2 nd resolution adjustment image 88A4, and the 2 nd resolution adjustment image 88A4 is obtained by performing processing using the digital filter 84A4 on the processing target image 75A4 in step ST156. After the process of step ST158 is performed, the image synthesis process proceeds to step ST160.
In step ST160, the image adjustment unit 62C4 acquires the 1 ST scale 90D1 and the 2 nd scale 90D2 from the NVM 64. After the process of step ST160 is performed, the image synthesis process proceeds to step ST162.
In step ST162, the image adjustment unit 62C4 adjusts the 1 ST resolution adjustment image 86A4 using the 1 ST scale 90D1 acquired in step ST 160. After the process of step ST162 is performed, the image synthesis process proceeds to step ST164.
In step ST164, the image adjustment unit 62C4 adjusts the 2 nd resolution adjustment image 88A4 using the 2 nd scale 90D2 acquired in step ST 160. After the process of step ST164 is performed, the image synthesis process proceeds to step ST166.
In step ST166, the combining unit 62D4 combines the 1 ST resolution adjustment image 86A4 adjusted in step ST162 and the 2 nd resolution adjustment image 88A4 adjusted in step ST164 to adjust the excessive or insufficient processing of the AI scheme by the AI-scheme processing unit 62 A4. The synthesized image 92D is generated by synthesizing the 1 ST resolution-adjusted image 86A4 adjusted in step ST162 and the 2 nd resolution-adjusted image 88A4 adjusted in step ST164. After the process of step ST166 is performed, the image synthesis process proceeds to step ST168.
In step ST168, the combining unit 62D4 performs various image processing on the combined image 92D. Then, the combining unit 62D4 outputs an image obtained by performing various image processing on the combined image 92D as a processed image 75B to a predetermined output destination. After the process of step ST168 is performed, the image synthesis process proceeds to step ST32.
As described above, in the image pickup apparatus 10 according to modification 3, the 1 st resolution adjustment image 86A4 is generated by adjusting the resolution of the processing target image 75A4 in the AI method. The 2 nd resolution adjustment image 88A4 is generated by adjusting the resolution of the processing target image 75A4 in a non-AI manner. Then, the 1 st resolution adjustment image 86A4 and the 2 nd resolution adjustment image 88A4 are synthesized. This can suppress excessive or insufficient resolution resulting from the processing by the AI method in the composite image 92D. As a result, the composite image 92D is an image in which the resolution given by the AI method is less noticeable than in the 1 st resolution adjustment image 86A4, and an appropriate image can be provided to a user who does not like the processing by the AI method.
In the present modification 3, the 1 st resolution adjustment image 86A4 is an image in which the processing target image 75A4 is super-resolved by the AI scheme, and the 2 nd resolution adjustment image 88A4 is an image in which the processing target image 75A4 is super-resolved by the non-AI scheme. Then, the synthesized image 92D is generated by synthesizing the image in which the processing target image 75A4 is super-resolved in the AI scheme and the image in which the processing target image 75A4 is super-resolved in the non-AI scheme. Therefore, the resolution obtained by the super-resolution of the AI method can be suppressed from being excessive or insufficient for the synthesized image 92D.
Here, the embodiment in which the 1 st resolution adjustment image 86A4 obtained by performing the process using the generation model 82A4 on the processing target image 75A4 and the 2 nd resolution adjustment image 88A4 obtained by performing the process using the digital filter 84A4 on the processing target image 75A4 are synthesized has been described, but the technique of the present invention is not limited to this. For example, the 1 st resolution adjustment image 86A4 obtained by performing the process using the generation model 82A4 on the processing target image 75A4 and the processing target image 75A4 (i.e., an image in which the non-noise element is not adjusted) may be synthesized. In this case, the same effect can be expected.
[ modification 4 ]
As an example, as shown in fig. 19, the processor 62 according to the modification 4 differs from the processor 62 shown in fig. 4 in that the AI-based processing unit 62A1 includes an AI-based processing unit 62A5 and the non-AI-based processing unit 62B1 includes a non-AI-based processing unit 62B5. In the present modification 4, the description of items identical to those already described above is omitted, and items different from those already described above are described.
The processing target image 75A5 is input to the AI-mode processing unit 62A5 and the non-AI-mode processing unit 62B5. The processing target image 75A5 is an example of the processing target image 75A shown in fig. 2. The processing target image 75A5 is a color image. In this case, a color image is illustrated as the processing target image 75A5, but the processing target image 75A5 may be an achromatic color image.
The AI-mode processing unit 62A5 and the non-AI-mode processing unit 62B5 perform processing for expanding the dynamic range of the input processing target image 75 A5. The dynamic range of the processing target image 75A5 is an example of "a non-noise element of the processing target image", "a factor that controls the visual impression given by the processing target image", and "a dynamic range of the processing target image" according to the technique of the present invention.
The AI-scheme processing unit 62A5 performs AI-scheme processing on the processing target image 75 A5. As an example of the AI-mode processing for the processing target image 75A5, a processing using the generation model 82A5 is given. The generative model 82A5 is an example of the generative model 82A shown in fig. 3. The generation model 82A5 is a generation network in which learning to expand the dynamic range of the processing target image 75A5 has been performed. In modification 4, the learning to adjust the dynamic range of the processing target image 75A5 is learning to increase the dynamic range of the processing target image 75 A5. Hereinafter, the "high dynamic range" is referred to as "HDR".
The AI-mode processing unit 62A5 changes, in the AI mode, a factor that controls the visual impression given by the processing target image 75A5. That is, the AI-scheme processing section 62A5 changes, as a non-noise element of the processing target image 75A5, a factor that controls the visual impression given by the processing target image 75A5 by performing processing using the generation model 82A5 on the processing target image 75A5. A factor that controls the visual impression given by the processing target image 75A5 is the dynamic range of the processing target image 75A5. In the example shown in fig. 19, the AI-scheme processing unit 62A5 generates the 1 st HDR image 86A5 by performing processing using the generation model 82A5 on the processing target image 75A5. The 1 st HDR image 86A5 is an image in which the dynamic range of the processing target image 75A5 is enlarged in the AI method.
The process of using the generation model 82A5 is an example of "1 st AI process", "1 st change process", and "expansion process" according to the technique of the present invention. The 1 st HDR image 86A5 is an example of the "1 st modified image" and the "1 st HDR image" according to the technology of the present invention. "generating the 1 st HDR image 86A5" is an example of "acquiring the 1 st image" according to the technique of the present invention.
The processing target image 75A5 is input to the generation model 82A5. The generation model 82A5 generates and outputs a 1 st HDR image 86A5 from the input processing target image 75 A5.
The non-AI-scheme processing unit 62B5 performs a non-AI-scheme process on the processing target image 75 A5. The processing in the non-AI mode refers to processing that does not use a neural network. In modification 4, the process of not using the neural network includes, for example, a process of not using the generation model 82A5.
As an example of the non-AI-mode processing for the processing target image 75A5, a processing using the digital filter 84A5 is given. The digital filter 84A5 is a digital filter configured to expand the dynamic range of the processing target image 75 A5. Hereinafter, as the digital filter 84A5, a digital filter configured to HDR the processing target image 75A5 will be described as an example.
The non-AI-scheme processing unit 62B5 generates the 2 nd HDR image 88A5 by performing processing (i.e., filtering) using the digital filter 84A5 on the processing target image 75A5. In other words, the non-AI-scheme processing unit 62B5 generates the 2 nd HDR image 88A5 by changing the non-noise element of the processing target image 75A5 in the non-AI scheme. In other words, the non-AI-scheme processing unit 62B5 expands the dynamic range of the processing target image 75A5 to generate the 2 nd HDR image 88A5.
The processing using the digital filter 84A5 is an example of "processing of the non-AI method that does not use a neural network" and "processing of the 2 nd change of the non-AI method change factor" according to the technique of the present invention. "generating the 2 nd HDR image 88A5" is an example of "acquiring the 2 nd image" according to the technology of the present invention.
The processing target image 75A5 is input to the digital filter 84A5. The digital filter 84A5 generates a 2 nd HDR image 88A5 from the input processing target image 75 A5. The 2 nd HDR image 88A5 is an image obtained by changing the non-noise element by the digital filter 84A5 (i.e., an image obtained by changing the non-noise element by the processing using the digital filter 84A5 with respect to the processing target image 75 A5). In other words, the 2 nd HDR image 88A5 is an image in which the dynamic range of the processing target image 75A5 is changed by the digital filter 84A5 (i.e., an image in which the dynamic range is enlarged by the processing using the digital filter 84A5 for the processing target image 75 A5). The 2 nd HDR image 88A5 is an example of the "2 nd image", "2 nd modified image", and "2 nd HDR image" according to the technology of the present invention.
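As a purely illustrative sketch (not the patent's digital filter 84A5), one simple non-AI way to expand the dynamic range of a single 8-bit image is to map it onto a wider value range through a tone curve; the gamma value and 16-bit output below are assumptions for illustration.

import numpy as np

def expand_dynamic_range_non_ai(image_8bit, gamma=2.2):
    # Roughly linearize the 8-bit values with an inverse display gamma and map
    # them onto a 16-bit range, expanding the representable dynamic range.
    norm = image_8bit.astype(np.float32) / 255.0
    linear = np.power(norm, gamma)
    return np.clip(linear * 65535.0, 0.0, 65535.0).astype(np.uint16)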
The dynamic range of the 1 st HDR image 86A5 obtained by performing AI-mode processing on the processing target image 75A5 may be a dynamic range different from the user's preference due to the characteristics (for example, the number of intermediate layers and/or the learning amount) of the generation model 82 A5. If the influence of the AI-based processing is excessively reflected on the processing target image 75A5, it is also conceivable that the dynamic range is widened too much than the preference of the user or conversely narrowed too much than the preference of the user.
In view of this, in the image capturing apparatus 10, as shown in fig. 20, for example, the 1 st HDR image 86A5 and the 2 nd HDR image 88A5 are synthesized by performing the processing of the image adjustment unit 62C5 and the processing of the synthesis unit 62D5 on the 1 st HDR image 86A5 and the 2 nd HDR image 88A5.
As an example, as shown in fig. 20, the NVM64 stores a proportion 90E. The ratio 90E is a ratio of the 1 st HDR image 86A5 and the 2 nd HDR image 88A5 synthesized, and is set to adjust the excessive or insufficient processing of the AI scheme (i.e., the processing using the generation model 82 A5) performed by the AI scheme processing unit 62 A5.
The proportion 90E is roughly divided into a 1 st proportion 90E1 and a 2 nd proportion 90E2. The 1 st proportion 90E1 is a value of 0 to 1, and the 2 nd proportion 90E2 is a value obtained by subtracting the 1 st proportion 90E1 from "1". That is, the 1 st proportion 90E1 and the 2 nd proportion 90E2 are set so that the sum of the 1 st proportion 90E1 and the 2 nd proportion 90E2 becomes "1". The 1 st scale 90E1 and the 2 nd scale 90E2 are variable values that can be changed according to an instruction from the user.
The image adjustment unit 62C5 adjusts the 1 st HDR image 86A5 generated by the AI-scheme processing unit 62A5 using the 1 st scale 90E 1. For example, the image adjustment unit 62C5 multiplies the 1 st scale 90E1 by the pixel value of each pixel of the 1 st HDR image 86A5 to adjust the pixel value of each pixel of the 1 st HDR image 86A5.
The image adjustment unit 62C5 adjusts the 2 nd HDR image 88A5 generated by the non-AI-scheme processing unit 62B5 using the 2 nd scale 90E 2. For example, the image adjustment unit 62C5 multiplies the 2 nd scale 90E2 by the pixel value of each pixel of the 2 nd HDR image 88A5 to adjust the pixel value of each pixel of the 2 nd HDR image 88A5.
The combining section 62D5 generates a combined image 92E by combining the 1 st HDR image 86A5 adjusted by the image adjusting section 62C5 at the 1 st scale 90E1 and the 2 nd HDR image 88A5 adjusted by the image adjusting section 62C5 at the 2 nd scale 90E2. That is, the synthesizing unit 62D5 synthesizes the 1 st HDR image 86A5 adjusted in the 1 st scale 90E1 and the 2 nd HDR image 88A5 adjusted in the 2 nd scale 90E2 to adjust the excessive or insufficient processing of the AI scheme by the AI scheme processing unit 62A5. In other words, the synthesizing unit 62D5 synthesizes the 1 st HDR image 86A5 adjusted by the 1 st scale 90E1 and the 2 nd HDR image 88A5 adjusted by the 2 nd scale 90E2 to adjust the non-noise element (here, the dynamic range is an example). In other words, the synthesizing unit 62D5 synthesizes the 1 st HDR image 86A5 adjusted in the 1 st scale 90E1 and the 2 nd HDR image 88A5 adjusted in the 2 nd scale 90E2 to adjust elements derived from the processing using the generation model 82A5 (for example, pixel values of pixels whose dynamic range is enlarged by the generation model 82A5).
The synthesis performed by the synthesis unit 62D5 is the addition of the pixel values at the corresponding pixel positions between the 1 st HDR image 86A5 and the 2 nd HDR image 88 A5. The synthesis by the synthesis unit 62D5 is performed in the same manner as the synthesis by the synthesis unit 62D1 shown in fig. 5. The composite image 92E is also subjected to various image processing by the compositing unit 62D5 in the same manner as the composite image 92A shown in fig. 5. The synthesized image 92E subjected to various image processing is output to a predetermined output destination by the synthesizing unit 62D 5.
Fig. 21 shows an example of the flow of the image synthesis processing according to modification 4. The flowchart shown in fig. 21 differs from the flowchart shown in fig. 6 in that steps ST200 to ST218 are applied instead of steps ST12 to ST 30.
In the image combining process shown in fig. 21, in step ST200, the AI-scheme processing unit 62A5 and the non-AI-scheme processing unit 62B5 acquire the processing target image 75A5 from the image sensor 20. After the process of step ST200 is performed, the image synthesis process proceeds to step ST202.
In step ST202, the AI-scheme processing unit 62A5 inputs the processing-target image 75A5 acquired in step ST200 into the generation model 82A5. Thus, the processing target image 75A5 is HDR-converted in the AI system. After the process of step ST202 is performed, the image synthesis process proceeds to step ST204.
In step ST204, the AI-scheme processing unit 62A5 acquires the 1 ST HDR image 86A5, and the 1 ST HDR image 86A5 is output from the generation model 82A5 by inputting the processing target image 75A5 into the generation model 82A5 in step ST 202. After the process of step ST204 is performed, the image synthesis process proceeds to step ST206.
In step ST206, the non-AI-scheme processing unit 62B5 expands the dynamic range of the processing target image 75A5 by performing the processing using the digital filter 84A5 on the processing target image 75A5 acquired in step ST 200. Thus, the processing target image 75A5 is HDR-converted in a non-AI manner. After the process of step ST206 is performed, the image synthesis process proceeds to step ST208.
In step ST208, the non-AI-scheme processing unit 62B5 acquires the 2 nd HDR image 88A5, and the 2 nd HDR image 88A5 is obtained by performing processing using the digital filter 84A5 on the processing target image 75A5 in step ST206. After the process of step ST208 is performed, the image synthesis process proceeds to step ST210.
In step ST210, the image adjustment unit 62C5 acquires the 1 ST scale 90E1 and the 2 nd scale 90E2 from the NVM 64. After the process of step ST210 is performed, the image synthesis process proceeds to step ST212.
In step ST212, the image adjustment unit 62C5 adjusts the 1 ST HDR image 86A5 using the 1 ST scale 90E1 acquired in step ST210. After the process of step ST212 is executed, the image synthesis process proceeds to step ST214.
In step ST214, the image adjustment unit 62C5 adjusts the 2 nd HDR image 88A5 using the 2 nd scale 90E2 acquired in step ST 210. After the process of step ST214 is performed, the image synthesis process proceeds to step ST216.
In step ST216, the synthesizing unit 62D5 synthesizes the 1 ST HDR image 86A5 adjusted in step ST212 and the 2 nd HDR image 88A5 adjusted in step ST214 to adjust the excessive or insufficient processing of the AI scheme by the AI scheme processing unit 62 A5. The synthesized image 92E is generated by synthesizing the 1 ST HDR image 86A5 adjusted in step ST212 and the 2 nd HDR image 88A5 adjusted in step ST 214. After the process of step ST216 is performed, the image synthesis process proceeds to step ST218.
In step ST218, the combining unit 62D5 performs various image processing on the combined image 92E. Then, the combining unit 62D5 outputs an image obtained by performing various image processing on the combined image 92E as a processed image 75B to a predetermined output destination. After the process of step ST218 is performed, the image synthesis process proceeds to step ST32.
As described above, in the image pickup apparatus 10 according to modification 4, the dynamic range of the processing target image 75A5 is expanded by the AI method to generate the 1st HDR image 86A5, and is expanded by the non-AI method to generate the 2nd HDR image 88A5. Then, the 1st HDR image 86A5 and the 2nd HDR image 88A5 are synthesized. This can suppress an excessive or insufficient expansion of the dynamic range by the processing of the AI method in the composite image 92E. As a result, the composite image 92E is an image in which the expansion of the dynamic range by the AI-method processing is less noticeable than in the 1st HDR image 86A5, and an appropriate image can be provided to a user who does not like the processing of the AI method.
Here, an example of a mode in which the 1 st HDR image 86A5 obtained by processing the processing target image 75A5 using the generation model 82A5 and the 2 nd HDR image 88A5 obtained by processing the processing target image 75A5 using the digital filter 84A5 are synthesized is described, but the technique of the present invention is not limited to this. For example, the 1 st HDR image 86A5 obtained by performing processing using the generation model 82A5 on the processing target image 75A5 and the processing target image 75A5 (i.e., an image in which the non-noise element is not adjusted) may be synthesized. In this case, the same effect can be expected.
[ modification 5 ]
As an example, as shown in fig. 22, the processor 62 according to the present modification 5 differs from the processor 62 shown in fig. 4 in that it includes an AI-scheme processing unit 62A6 instead of the AI-scheme processing unit 62A1 and a non-AI-scheme processing unit 62B6 instead of the non-AI-scheme processing unit 62B1. In the present modification 5, description of items identical to those already described is omitted, and only items that differ from those already described are described.
The processing target image 75A6 is input to the AI-mode processing unit 62A6 and the non-AI-mode processing unit 62B6. The processing target image 75A6 is an example of the processing target image 75A shown in fig. 2. The processing object image 75A6 is a color image, and has an edge area 112. The edge region 112 is an image region (for example, a high-frequency component equal to or higher than a predetermined value) in which the edge of the object is displayed. In this case, a color image is illustrated as the processing target image 75A6, but the processing target image 75A6 may be an achromatic color image.
The AI-mode processing unit 62A6 and the non-AI-mode processing unit 62B6 perform processing for emphasizing the edge region 112 more than a non-edge region (hereinafter, simply referred to as "non-edge region") that is a region different from the edge region 112 in the input processing target image 75 A6. The edge region 112 is an example of "an edge region within a processing target image" according to the technique of the present invention. The degree of emphasis of the edge region 112 is an example of "a non-noise element of the processing target image", "a factor that controls the visual impression given by the processing target image", and "the degree of emphasis of the edge region" according to the technique of the present invention.
The AI-scheme processing unit 62A6 performs AI-scheme processing on the processing target image 75 A6. As an example of the AI-mode processing for the processing target image 75A6, a processing using the generation model 82A6 is given. The generative model 82A6 is an example of the generative model 82A shown in fig. 3. The generation model 82A6 is a generation network in which learning has been performed to emphasize the edge region 112 more than the non-edge region within the processing target image 75 A6.
The AI-scheme processing unit 62A6 changes, in the AI scheme, a factor that controls the visual impression given by the processing target image 75A6. That is, the AI-scheme processing unit 62A6 changes, as a non-noise element of the processing target image 75A6, a factor that controls the visual impression given by the processing target image 75A6, by performing processing using the generation model 82A6 on the processing target image 75A6. The factor that controls the visual impression given by the processing target image 75A6 is the edge region 112 within the processing target image 75A6. In the example shown in fig. 22, the AI-scheme processing unit 62A6 generates the 1st edge-emphasized image 86A6 by performing processing using the generation model 82A6 on the processing target image 75A6. The 1st edge-emphasized image 86A6 is an image in which the edge region 112 is emphasized more than the non-edge region within the processing target image 75A6 by the AI scheme.
The process of using the generation model 82A6 is an example of "1 st AI process", "1 st modification process", and "emphasis process" according to the technique of the present invention. The 1 st edge emphasized image 86A6 is an example of the "1 st modified image" and the "1 st edge emphasized image" according to the technology of the present invention. "generating the 1 st edge emphasized image 86A6" is an example of "acquiring the 1 st image" according to the technique of the present invention.
The processing target image 75A6 is input to the generation model 82A6. The generation model 82A6 generates and outputs the 1 st edge emphasized image 86A6 from the input processing target image 75 A6.
The non-AI-scheme processing unit 62B6 performs a non-AI-scheme process on the processing target image 75 A6. The processing in the non-AI mode refers to processing that does not use a neural network. In modification 5, the process of not using the neural network includes, for example, a process of not using the generation model 82A6.
As an example of the non-AI-mode processing for the processing target image 75A6, a processing using the digital filter 84A6 is given. The digital filter 84A6 is a digital filter configured to emphasize the edge region 112 more than the non-edge region in the processing target image 75 A6.
The non-AI-scheme processing unit 62B6 generates the 2nd edge-emphasized image 88A6 by performing processing (i.e., filtering) using the digital filter 84A6 on the processing target image 75A6. In other words, the non-AI-scheme processing unit 62B6 generates the 2nd edge-emphasized image 88A6 by emphasizing, in the non-AI scheme, a non-noise element of the processing target image 75A6 (here, the edge region 112 is an example) more than the non-edge region. In other words, the non-AI-scheme processing unit 62B6 generates the 2nd edge-emphasized image 88A6 by emphasizing the edge region 112 more than the non-edge region in the processing target image 75A6 in the non-AI scheme.
The process using the digital filter 84A6 is an example of "a process of a non-AI method that does not use a neural network" and "a 2 nd changing process of a non-AI method changing factor" according to the technique of the present invention. "generating the 2 nd edge-emphasized image 88A6" is an example of "acquiring the 2 nd image" according to the technique of the present invention.
The processing target image 75A6 is input to the digital filter 84A6. The digital filter 84A6 generates the 2nd edge-emphasized image 88A6 from the input processing target image 75A6. The 2nd edge-emphasized image 88A6 is an image in which the non-noise element has been changed by the digital filter 84A6 (that is, an image in which the non-noise element has been changed by the processing using the digital filter 84A6 on the processing target image 75A6). In other words, the 2nd edge-emphasized image 88A6 is an image in which the edge region 112 in the processing target image 75A6 has been adjusted by the digital filter 84A6 (i.e., an image in which the edge region 112 is emphasized more than the non-edge region by the processing using the digital filter 84A6 on the processing target image 75A6). The degree of emphasis of the edge region 112 in the 2nd edge-emphasized image 88A6 is lower than that of the edge region 112 in the 1st edge-emphasized image 86A6, at least to such an extent that the edge region 112 in the 2nd edge-emphasized image 88A6 is visually recognized as being different from the edge region 112 in the 1st edge-emphasized image 86A6. The 2nd edge-emphasized image 88A6 is an example of the "2nd image", the "2nd modified image", and the "2nd edge-emphasized image" according to the technology of the present invention.
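The emphasis of the edge region 112 over the non-edge region by a non-AI digital filter can be illustrated, under stated assumptions, by unsharp masking, in which the high-frequency component of the image is amplified and added back. This is only a sketch; the actual coefficients of the digital filter 84A6 are not disclosed, and the gain, sigma, and function name below are assumptions, with a single-channel image assumed for brevity.

```python
# Hypothetical non-AI edge emphasis in the spirit of the digital filter 84A6:
# unsharp masking, which boosts the edge (high-frequency) region relative to non-edge regions.
import numpy as np
from scipy import ndimage

def emphasize_edges(image: np.ndarray, gain: float = 1.5, sigma: float = 1.0) -> np.ndarray:
    """Add the amplified high-frequency component back onto the image."""
    img = image.astype(np.float32)
    low_freq = ndimage.gaussian_filter(img, sigma=sigma)   # smoothed, non-edge content
    high_freq = img - low_freq                              # edge (high-frequency) content
    return np.clip(img + gain * high_freq, 0, 255).astype(np.uint8)
```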
The intensity (for example, brightness) of the 1 st edge emphasized image 86A6 obtained by performing the AI-method processing on the processing target image 75A6 may be different from the user's preference due to the characteristics (for example, the number of intermediate layers and/or the learning amount) of the generated model 82 A6. If the influence of the processing of the AI scheme is excessively reflected on the processing target image 75A6, it is also conceivable that the intensity is excessively higher than the preference of the user or, conversely, excessively lower than the preference of the user.
In view of this, in the image pickup apparatus 10, as an example, as shown in fig. 23, the 1 st edge emphasized image 86A6 and the 2 nd edge emphasized image 88A6 are synthesized by performing the processing of the image adjustment unit 62C6 and the processing of the synthesis unit 62D6 on the 1 st edge emphasized image 86A6 and the 2 nd edge emphasized image 88A6.
As an example, as shown in fig. 23, the NVM64 stores a proportion 90F. The ratio 90F is a ratio at which the 1 st edge emphasized image 86A6 and the 2 nd edge emphasized image 88A6 are synthesized, and is set to adjust the excessive or insufficient processing of the AI scheme (i.e., processing using the generation model 82 A6) by the AI scheme processing unit 62 A6.
The proportion 90F is roughly divided into a 1 st proportion 90F1 and a 2 nd proportion 90F2. The 1 st proportion 90F1 is a value of 0 to 1, and the 2 nd proportion 90F2 is a value obtained by subtracting the 1 st proportion 90F1 from "1". That is, the 1 st proportion 90F1 and the 2 nd proportion 90F2 are set so that the sum of the 1 st proportion 90F1 and the 2 nd proportion 90F2 becomes "1". The 1 st scale 90F1 and the 2 nd scale 90F2 are variable values that can be changed according to an instruction from a user.
The image adjustment unit 62C6 adjusts the 1 st edge emphasized image 86A6 generated by the AI-scheme processing unit 62A6 using the 1 st scale 90F 1. For example, the image adjustment unit 62C6 multiplies the 1 st scale 90F1 by the pixel value of each pixel of the 1 st edge emphasized image 86A6 to adjust the pixel value of each pixel of the 1 st edge emphasized image 86A6.
The image adjustment unit 62C6 adjusts the 2 nd edge-emphasized image 88A6 generated by the non-AI-scheme processing unit 62B6 using the 2 nd scale 90F 2. For example, the image adjustment unit 62C6 multiplies the 2 nd ratio 90F2 by the pixel value of each pixel of the 2 nd edge-emphasized image 88A6 to adjust the pixel value of each pixel of the 2 nd edge-emphasized image 88A6.
The combining section 62D6 generates a combined image 92F by combining the 1 st edge-emphasized image 86A6 adjusted by the image adjusting section 62C6 at the 1 st scale 90F1 and the 2 nd edge-emphasized image 88A6 adjusted by the image adjusting section 62C6 at the 2 nd scale 90F 2. That is, the combining unit 62D6 combines the 1 st edge-emphasized image 86A6 adjusted in the 1 st scale 90F1 and the 2 nd edge-emphasized image 88A6 adjusted in the 2 nd scale 90F2 to adjust the excessive or insufficient processing of the AI scheme by the AI scheme processing unit 62 A6. In other words, the combining unit 62D6 combines the 1 st edge-emphasized image 86A6 adjusted in the 1 st scale 90F1 and the 2 nd edge-emphasized image 88A6 adjusted in the 2 nd scale 90F2 to adjust the non-noise element (here, the edge region 112 is an example). In other words, the synthesizing unit 62D6 synthesizes the 1 st edge-emphasized image 86A6 adjusted in the 1 st scale 90F1 and the 2 nd edge-emphasized image 88A6 adjusted in the 2 nd scale 90F2 to adjust elements derived from the process of using the generation model 82A6 (for example, pixel values of pixels in the edge region 112 are emphasized more than in the non-edge region by the generation model 82 A6).
The synthesis performed by the synthesis unit 62D6 is the addition of the pixel values at the corresponding pixel positions between the 1 st edge emphasized image 86A6 and the 2 nd edge emphasized image 88 A6. The synthesis by the synthesis unit 62D6 is performed in the same manner as the synthesis by the synthesis unit 62D1 shown in fig. 5. The composite image 92F is also subjected to various image processing by the compositing unit 62D6 in the same manner as the composite image 92A shown in fig. 5. The synthesized image 92F subjected to various image processing is output to a predetermined output destination by the synthesizing unit 62D 6.
Fig. 24 shows an example of the flow of the image synthesis processing according to the present modification 5. The flowchart shown in fig. 24 differs from the flowchart shown in fig. 6 in that steps ST250 to ST268 are applied instead of steps ST12 to ST 30.
In the image combining process shown in fig. 24, in step ST250, the AI-scheme processing unit 62A6 and the non-AI-scheme processing unit 62B6 acquire the processing target image 75A6 from the image sensor 20. After the process of step ST250 is performed, the image synthesis process proceeds to step ST252.
In step ST252, the AI-scheme processing unit 62A6 inputs the processing-target image 75A6 acquired in step ST250 into the generation model 82A6. After the process of step ST252 is performed, the image synthesis process proceeds to step ST254.
In step ST254, the AI-scheme processing unit 62A6 acquires the 1 ST edge-emphasized image 86A6, and the 1 ST edge-emphasized image 86A6 is output from the generation model 82A6 by inputting the processing-target image 75A6 into the generation model 82A6 in step ST 252. After the process of step ST254 is performed, the image synthesis process proceeds to step ST256.
In step ST256, the non-AI-scheme processing unit 62B6 emphasizes the edge region 112 more than the non-edge region within the processing target image 75A6 by performing processing using the digital filter 84A6 on the processing target image 75A6 acquired in step ST250. After the process of step ST256 is performed, the image synthesis process proceeds to step ST258.
In step ST258, the non-AI-mode processing unit 62B6 acquires the 2 nd edge-emphasized image 88A6, and the 2 nd edge-emphasized image 88A6 is obtained by performing the processing using the digital filter 84A6 on the processing target image 75A6 in step ST256. After the process of step ST258 is performed, the image synthesis process proceeds to step ST260.
In step ST260, the image adjustment unit 62C6 acquires the 1 ST scale 90F1 and the 2 nd scale 90F2 from the NVM 64. After the process of step ST260 is performed, the image synthesis process proceeds to step ST262.
In step ST262, the image adjustment unit 62C6 adjusts the 1 ST edge emphasized image 86A6 using the 1 ST scale 90F1 acquired in step ST260. After the process of step ST262 is performed, the image synthesis process proceeds to step ST264.
In step ST264, the image adjustment unit 62C6 adjusts the 2 nd edge-emphasized image 88A6 using the 2 nd scale 90F2 acquired in step ST 260. After the process of step ST264 is performed, the image synthesis process proceeds to step ST266.
In step ST266, the combining unit 62D6 combines the 1 ST edge-emphasized image 86A6 adjusted in step ST262 and the 2 nd edge-emphasized image 88A6 adjusted in step ST264 to adjust the excessive or insufficient processing of the AI scheme by the AI-scheme processing unit 62 A6. The synthesized image 92F is generated by synthesizing the 1 ST edge-emphasized image 86A6 adjusted in step ST262 and the 2 nd edge-emphasized image 88A6 adjusted in step ST 264. After the process of step ST266 is performed, the image synthesis process proceeds to step ST268.
In step ST268, the combining unit 62D6 performs various image processing on the combined image 92F. Then, the combining unit 62D6 outputs an image obtained by performing various image processing on the combined image 92F as a processed image 75B to a predetermined output destination. After the process of step ST268 is performed, the image synthesis process proceeds to step ST32.
As described above, in the image pickup apparatus 10 according to the present modification 5, the 1st edge-emphasized image 86A6 is generated by emphasizing the edge region 112 more than the non-edge region in the processing target image 75A6 by the AI method. Then, the 2nd edge-emphasized image 88A6 is generated by emphasizing the edge region 112 more than the non-edge region in the processing target image 75A6 by the non-AI method. Then, the 1st edge-emphasized image 86A6 and the 2nd edge-emphasized image 88A6 are synthesized. This can suppress excessive or insufficient emphasis of the edge region 112 by the processing of the AI method in the composite image 92F. As a result, the composite image 92F is an image in which the emphasis of the edge region 112 by the AI-method processing is less noticeable than in the 1st edge-emphasized image 86A6, and an appropriate image can be provided to a user who does not like the intensity of the edge region 112 produced by the processing of the AI method.
Here, the embodiment in which the 1 st edge-emphasized image 86A6 obtained by performing the process using the generation model 82A6 on the processing target image 75A6 and the 2 nd edge-emphasized image 88A6 obtained by performing the process using the digital filter 84A6 on the processing target image 75A6 are synthesized has been described, but the technique of the present invention is not limited to this. For example, the 1 st edge emphasized image 86A6 obtained by performing the process using the generation model 82A6 on the processing target image 75A6 and the processing target image 75A6 (i.e., an image in which the non-noise element is not adjusted) may be synthesized. In this case, the same effect can be expected.
[ modification 6 ]
As an example, as shown in fig. 25, the processor 62 according to the present modification 6 differs from the processor 62 shown in fig. 4 in that it includes an AI-scheme processing unit 62A7 instead of the AI-scheme processing unit 62A1 and a non-AI-scheme processing unit 62B7 instead of the non-AI-scheme processing unit 62B1. In modification 6, description of items identical to those already described is omitted, and only items that differ from those already described are described.
As an example, as shown in fig. 25, the processing target image 75A7 is input to the AI-mode processing unit 62A7 and the non-AI-mode processing unit 62B7. The processing target image 75A7 is an example of the processing target image 75A shown in fig. 2. In the example shown in fig. 25, the processing target image 75A7 includes a dot image 114. The point image 114 is an object image obtained by imaging object light representing a point object on the light receiving surface 72A, and appears in the processing target image 75A7 in a more blurred state than the original object image due to a point spread phenomenon resulting from the optical characteristics of the imaging lens 40. The amount of blurring of the point image 114 is expressed by a generally known point spread function.
The processing target image 75A7 is an image having the point image 114 as a non-noise element. The point image 114 is an example of "a non-noise element of a processing target image", "a phenomenon that occurs in the processing target image due to characteristics of an imaging device", and "blurring" according to the technique of the present invention. The blurring amount of the point image 114 is an example of "blurring amount of point image" according to the technique of the present invention. The dot diffusion phenomenon is an example of "characteristics of an imaging device" and "optical characteristics of an imaging device" according to the technology of the present invention.
The AI-scheme processing unit 62A7 performs AI-scheme processing on the processing target image 75A7. As an example of the AI-scheme processing for the processing target image 75A7, processing using the generation model 82A7 is given. The generation model 82A7 is an example of the generation model 82A shown in fig. 3. The generation model 82A7 is a generation network that has learned to adjust the blur amount of the point image 114. Hereinafter, adjustment of the blur amount of the point image 114 will be described taking, as an example, reduction of the blur amount of the point image 114 (i.e., reduction of the point spread).
The AI-scheme processing unit 62A7 generates the 1 st point image adjustment image 86A7 by performing processing using the generation model 82A7 on the processing target image 75 A7. In other words, the AI-scheme processing unit 62A7 generates the 1 st point image adjustment image 86A7 by AI-scheme-adjusting a non-noise element (here, the point image 114 is an example) in the processing target image 75 A7. In other words, the AI-mode processing unit 62A7 generates the 1 st point image adjustment image 86A7 by reducing the blurring amount of the point image 114 in the processing target image 75A7 by the AI mode. The process of using the generation model 82A7 is an example of "1 st AI process", "1 st correction process", and "point image adjustment process" according to the technique of the present invention. Here, "generating the 1 st point image adjustment image 86A7" is an example of "acquiring the 1 st image" according to the technique of the present invention.
The processing target image 75A7 is input to the generation model 82A7. The generation model 82A7 generates and outputs the 1st point image adjustment image 86A7 from the input processing target image 75A7. The 1st point image adjustment image 86A7 is an image in which a non-noise element has been adjusted by the generation model 82A7 (i.e., an image in which the non-noise element has been adjusted by the processing using the generation model 82A7 on the processing target image 75A7). In other words, the 1st point image adjustment image 86A7 is an image in which the non-noise element in the processing target image 75A7 has been corrected by the generation model 82A7 (i.e., an image in which the non-noise element has been corrected by the processing using the generation model 82A7 on the processing target image 75A7). In other words, the 1st point image adjustment image 86A7 is an image in which the point spread of the point image 114 has been corrected by the generation model 82A7 (i.e., an image corrected by the processing using the generation model 82A7 on the processing target image 75A7 so as to reduce the point spread of the point image 114). The 1st point image adjustment image 86A7 is an example of the "1st image", the "1st correction image", and the "1st point image adjustment image" according to the technique of the present invention.
The non-AI-scheme processing unit 62B7 performs a non-AI-scheme process on the processing target image 75 A7. The processing in the non-AI mode refers to processing that does not use a neural network. Here, as the process not using the neural network, for example, a process not using the generation model 82A7 is given.
As an example of the non-AI-scheme processing for the processing target image 75A7, processing using the digital filter 84A7 is given. The digital filter 84A7 is a digital filter configured to reduce the point spread of the point image 114. An example of such a digital filter is a resolution correction filter that cancels the point spread expressed by the point spread function of the point image 114. The resolution correction filter is applied to a visible light image that is blurred, relative to the original visible light image, by the point spread phenomenon. An example of the resolution correction filter is an FIR filter. Since the resolution correction filter is a well-known filter, further detailed description thereof is omitted.
The non-AI-scheme processing unit 62B7 generates the 2 nd point image adjustment image 88A7 by performing processing (i.e., filtering) using the digital filter 84A7 on the processing target image 75 A7. In other words, the non-AI-mode processing unit 62B7 generates the 2 nd point image adjustment image 88A7 by adjusting the non-noise element in the processing target image 75A7 in the non-AI mode. In other words, the non-AI-mode processing unit 62B7 corrects the processing target image 75A7 in a non-AI mode to reduce the dot diffusion in the processing target image 75A7, thereby generating the 2 nd dot image adjustment image 88A7. The processing using the digital filter 84A7 is an example of "processing of a non-AI system that does not use a neural network", "2 nd correction processing", and "processing of adjusting the blur amount in a non-AI system" according to the technique of the present invention. Here, "generating the 2 nd point image adjustment image 88A7" is an example of "acquiring the 2 nd image" according to the technique of the present invention.
The processing target image 75A7 is input to the digital filter 84A7. The digital filter 84A7 generates a 2 nd point image adjustment image 88A7 from the input processing target image 75 A7. The 2 nd point image adjustment image 88A7 is an image obtained by adjusting a non-noise element by the digital filter 84A7 (i.e., an image obtained by adjusting a non-noise element by a process using the digital filter 84A7 with respect to the processing target image 75 A7). In other words, the 2 nd point image adjustment image 88A7 is a corrected image of the non-noise element in the processing target image 75A7 by the digital filter 84A7 (i.e., an image in which the non-noise element is corrected by the processing using the digital filter 84A7 for the processing target image 75 A7). In other words, the 2 nd point image adjustment image 88A7 is a corrected image of the processing target image 75A7 by the digital filter 84A7 (i.e., an image corrected by the processing using the digital filter 84A7 for the processing target image 75A7 so as to reduce the point spread). The 2 nd point image adjustment image 88A7 is an example of "the 2 nd image", "the 2 nd correction image", and "the 2 nd point image adjustment image" according to the technique of the present invention.
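Under the assumptions stated below, the cancellation of point spread performed by a resolution correction filter can be sketched as inverse filtering with the point spread function. The Gaussian PSF, the Wiener-style regularization term, and the function names are assumptions made for illustration, and a single-channel image is assumed; the text above only states that an FIR-type resolution correction filter is used, without disclosing its taps.

```python
# Hypothetical point-spread reduction (not the actual digital filter 84A7):
# Wiener-style inverse filtering with an assumed Gaussian point spread function.
import numpy as np

def gaussian_psf(size: int = 9, sigma: float = 1.5) -> np.ndarray:
    """Build a normalized Gaussian PSF as a stand-in for the point spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def reduce_point_spread(image: np.ndarray, psf: np.ndarray, k: float = 0.01) -> np.ndarray:
    """Attenuate the blur in the frequency domain; k limits noise amplification."""
    img = image.astype(np.float32)
    psf_pad = np.zeros_like(img)
    psf_pad[:psf.shape[0], :psf.shape[1]] = psf
    psf_pad = np.roll(psf_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_pad)                     # frequency response of the PSF
    G = np.fft.fft2(img)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)    # Wiener deconvolution
    restored = np.real(np.fft.ifft2(F))
    return np.clip(restored, 0, 255).astype(np.uint8)
```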
Some users do not want the point spread phenomenon to be eliminated completely, but rather prefer that it be retained in the image to an appropriate degree. In the example shown in fig. 25, the point spread of the point image 114 is reduced more in the 1st point image adjustment image 86A7 than in the 2nd point image adjustment image 88A7. In other words, the 2nd point image adjustment image 88A7 retains more of the blur of the point image 114 than the 1st point image adjustment image 86A7. However, the user may feel that the blur amount of the point image 114 in the 1st point image adjustment image 86A7 is insufficient and that the blur amount of the point image 114 in the 2nd point image adjustment image 88A7 is excessive. Therefore, if only one of the 1st point image adjustment image 86A7 and the 2nd point image adjustment image 88A7 is finally output, an image that does not match the user's preference is provided to the user. If the learning amount of the generation model 82A7 or the number of intermediate layers of the generation model 82A7 is increased in an attempt to improve the performance of the generation model 82A7, the possibility of obtaining an image close to the user's preference increases; however, the cost required for creating the generation model 82A7 increases, and as a result, the price of the imaging device 10 may increase.
In view of this, in the imaging apparatus 10, as an example, as shown in fig. 26, the 1 st point image adjustment image 86A7 and the 2 nd point image adjustment image 88A7 are synthesized by performing the processing of the image adjustment unit 62C7 and the processing of the synthesizing unit 62D7 on the 1 st point image adjustment image 86A7 and the 2 nd point image adjustment image 88A7.
As an example, as shown in fig. 26, the NVM64 stores a proportion 90G. The scale 90G is a scale for combining the 1 st point image adjustment image 86A7 and the 2 nd point image adjustment image 88A7, and is set to adjust the excessive or insufficient processing of the AI scheme (i.e., processing using the generation model 82 A7) by the AI scheme processing unit 62 A7.
The ratio 90G is roughly divided into a 1 st ratio 90G1 and a 2 nd ratio 90G2. The 1 st ratio 90G1 is a value of 0 to 1, and the 2 nd ratio 90G2 is a value obtained by subtracting the 1 st ratio 90G1 from "1". That is, the 1 st ratio 90G1 and the 2 nd ratio 90G2 are set such that the sum of the 1 st ratio 90G1 and the 2 nd ratio 90G2 becomes "1". The 1 st scale 90G1 and the 2 nd scale 90G2 are variable values that can be changed according to an instruction from a user.
The image adjustment unit 62C7 adjusts the 1 st point image adjustment image 86A7 generated by the AI-scheme processing unit 62A7 using the 1 st scale 90G 1. For example, the image adjustment unit 62C7 multiplies the 1 st scale 90G1 by the pixel value of each pixel of the 1 st point image adjustment image 86A7 to adjust the pixel value of each pixel of the 1 st point image adjustment image 86A7.
The image adjustment unit 62C7 adjusts the 2 nd point image adjustment image 88A7 generated by the non-AI-scheme processing unit 62B7 using the 2 nd ratio 90G 2. For example, the image adjustment unit 62C7 multiplies the 2 nd ratio 90G2 by the pixel value of each pixel of the 2 nd point image adjustment image 88A7 to adjust the pixel value of each pixel of the 2 nd point image adjustment image 88A7.
The combining unit 62D7 generates a combined image 92G by combining the 1 st point image adjustment image 86A7 adjusted by the 1 st scale 90G1 through the image adjusting unit 62C7 and the 2 nd point image adjustment image 88A7 adjusted by the 2 nd scale 90G2 through the image adjusting unit 62C 7. That is, the combining unit 62D7 combines the 1 st point image adjustment image 86A7 adjusted in the 1 st scale 90G1 and the 2 nd point image adjustment image 88A7 adjusted in the 2 nd scale 90G2 to adjust the excessive or insufficient processing of the AI scheme by the AI scheme processing unit 62 A7. In other words, the combining unit 62D7 combines the 1 st point image adjustment image 86A7 adjusted in the 1 st scale 90G1 and the 2 nd point image adjustment image 88A7 adjusted in the 2 nd scale 90G2 to adjust the non-noise element. In other words, the combining unit 62D7 combines the 1 st point image adjustment image 86A7 adjusted in the 1 st scale 90G1 and the 2 nd point image adjustment image 88A7 adjusted in the 2 nd scale 90G2 to adjust elements derived from the processing using the generation model 82A7 (for example, the pixel values of the pixels whose dot diffusion has been reduced by the generation model 82 A7).
The combination by the combining unit 62D7 is the addition of the pixel values at the corresponding pixel positions between the 1 st point image adjustment image 86A7 and the 2 nd point image adjustment image 88 A7. The synthesis by the synthesis unit 62D7 is performed in the same manner as the synthesis by the synthesis unit 62D1 shown in fig. 5. The composite image 92G is also subjected to various image processing by the compositing unit 62D7 in the same manner as the composite image 92A shown in fig. 5. The synthesized image 92G subjected to various image processing is output to a predetermined output destination by the synthesizing unit 62D 7.
Fig. 27 shows an example of the flow of the image synthesis processing according to modification 6. The flowchart shown in fig. 27 differs from the flowchart shown in fig. 6 in that steps ST300 to ST318 are applied instead of steps ST12 to ST 30.
In the image combining process shown in fig. 27, in step ST300, the AI-scheme processing unit 62A7 and the non-AI-scheme processing unit 62B7 acquire the processing target image 75A7 from the image sensor 20. After the process of step ST300 is performed, the image synthesis process proceeds to step ST302.
In step ST302, the AI-scheme processing unit 62A7 inputs the processing-target image 75A7 acquired in step ST300 into the generation model 82A7. After the process of step ST302 is performed, the image synthesis process proceeds to step ST304.
In step ST304, the AI-scheme processing unit 62A7 acquires the 1 ST point image adjustment image 86A7, and the 1 ST point image adjustment image 86A7 is output from the generation model 82A7 by inputting the processing target image 75A7 into the generation model 82A7 in step ST 302. After the process of step ST304 is performed, the image synthesis process proceeds to step ST306.
In step ST306, the non-AI-scheme processing unit 62B7 corrects the dot spread phenomenon of the processing target image 75A7 by performing the processing using the digital filter 84A7 on the processing target image 75A7 acquired in step ST 300. After the process of step ST306 is performed, the image synthesis process proceeds to step ST308.
In step ST308, the non-AI-mode processing unit 62B7 acquires the 2 nd point image adjustment image 88A7, and the 2 nd point image adjustment image 88A7 is obtained by performing processing using the digital filter 84A7 on the processing target image 75A7 in step ST306. After the process of step ST308 is performed, the image synthesis process proceeds to step ST310.
In step ST310, the image adjustment unit 62C7 acquires the 1 ST scale 90G1 and the 2 nd scale 90G2 from the NVM 64. After the process of step ST310 is performed, the image synthesis process proceeds to step ST312.
In step ST312, the image adjustment unit 62C7 adjusts the 1 ST point image adjustment image 86A7 using the 1 ST scale 90G1 acquired in step ST310. After the process of step ST312 is performed, the image synthesis process proceeds to step ST314.
In step ST314, the image adjustment unit 62C7 adjusts the 2 nd point image adjustment image 88A7 using the 2 nd ratio 90G2 acquired in step ST 310. After the process of step ST314 is performed, the image synthesis process proceeds to step ST316.
In step ST316, the combining unit 62D7 combines the 1 ST point image adjustment image 86A7 adjusted in step ST312 and the 2 nd point image adjustment image 88A7 adjusted in step ST314 to adjust the excessive or insufficient processing of the AI scheme by the AI scheme processing unit 62 A7. The synthesized image 92G is generated by synthesizing the 1 ST point image adjustment image 86A7 adjusted in step ST312 and the 2 nd point image adjustment image 88A7 adjusted in step ST 314. After the process of step ST316 is performed, the image synthesis process proceeds to step ST318.
In step ST318, the combining unit 62D7 performs various image processing on the combined image 92G. Then, the combining unit 62D7 outputs an image obtained by performing various image processing on the combined image 92G as a processed image 75B to a predetermined output destination. After the process of step ST318 is executed, the image synthesis process proceeds to step ST32.
As described above, in the image pickup apparatus 10 according to modification 6, the 1st point image adjustment image 86A7 is generated by reducing the point spread of the point image 114 in the processing target image 75A7 by the AI method. Then, the 2nd point image adjustment image 88A7 is generated by reducing the point spread of the point image 114 in the processing target image 75A7 by the non-AI method. Then, the 1st point image adjustment image 86A7 and the 2nd point image adjustment image 88A7 are synthesized. This can suppress a case where the correction amount of the point spread phenomenon (i.e., the correction amount of the blur amount of the point image 114) by the processing of the AI method is excessive or insufficient in the composite image 92G. As a result, the composite image 92G is an image in which the correction of the point spread phenomenon by the AI-method processing is less noticeable than in the 1st point image adjustment image 86A7, and an appropriate image can be provided to a user who does not like the processing of the AI method.
Here, the embodiment in which the 1 st point image adjustment image 86A7 obtained by performing the process using the generation model 82A7 on the processing target image 75A7 and the 2 nd point image adjustment image 88A7 obtained by performing the process using the digital filter 84A7 on the processing target image 75A7 are synthesized has been described, but the technique of the present invention is not limited to this. For example, the 1 st point image adjustment image 86A7 obtained by performing the process using the generation model 82A7 on the processing target image 75A7 and the processing target image 75A7 (i.e., an image in which the non-noise element is not adjusted) may be synthesized. In this case, the same effect can be expected.
[ modification 7 ]
As an example, as shown in fig. 28, the processor 62 according to the present modification 7 differs from the processor 62 shown in fig. 4 in that it includes an AI-scheme processing unit 62A8 instead of the AI-scheme processing unit 62A1 and a non-AI-scheme processing unit 62B8 instead of the non-AI-scheme processing unit 62B1. In the present modification 7, description of items identical to those already described is omitted, and only items that differ from those already described are described.
The processing target image 75A8 is input to the AI-mode processing unit 62A8 and the non-AI-mode processing unit 62B8. The processing target image 75A8 is an example of the processing target image 75A shown in fig. 2. The processing target image 75A8 is a color image, and has a person region 116. The person region 116 is an image region in which a person is shown. In this case, a color image is illustrated as the processing target image 75A8, but the processing target image 75A8 may be an achromatic color image.
The AI-scheme processing unit 62A8 and the non-AI-scheme processing unit 62B8 perform processing for imparting, to the processing target image 75A8, a blur corresponding to a subject shown in the input processing target image 75A8. In modification 7, the subject shown in the processing target image 75A8 is a person. The person shown in the processing target image 75A8 is an example of the "3rd subject" according to the technique of the present invention. The blur corresponding to the subject is an example of the "non-noise element of the processing target image", the "factor that controls the visual impression given by the processing target image", and the "blur corresponding to the 3rd subject" according to the technique of the present invention.
The AI-scheme processing unit 62A8 performs AI-scheme processing on the processing target image 75 A8. As an example of the AI-mode processing for the processing target image 75A8, a processing using the generation model 82A8 can be given. The generation model 82A8 is an example of the generation model 82A shown in fig. 3. The generative model 82A8 is a generation network in which learning to give blur to the human figure region 116 has been performed.
The AI-scheme processing unit 62A8 changes, in the AI scheme, a factor that controls the visual impression given by the processing target image 75A8. That is, the AI-scheme processing unit 62A8 changes, as a non-noise element of the processing target image 75A8, a factor that controls the visual impression given by the processing target image 75A8, by performing processing using the generation model 82A8 on the processing target image 75A8. The factor that controls the visual impression given by the processing target image 75A8 is the blur corresponding to the human figure region 116 within the processing target image 75A8. In the example shown in fig. 28, the AI-scheme processing unit 62A8 generates the 1st blurred image 86A8 by performing processing using the generation model 82A8 on the processing target image 75A8. The 1st blurred image 86A8 is an image in which a blur is imparted to the human figure region 116 in the processing target image 75A8 by the AI scheme.
The process of using the generation model 82A8 is an example of "1 st AI process", "1 st change process", and "blurring process" according to the technique of the present invention. The 1 st blurred image 86A8 is an example of the "1 st modified image" and the "1 st blurred image" according to the technology of the present invention. "generating the 1 st blurred image 86A8" is an example of "acquiring the 1 st image" according to the technique of the present invention.
The processing target image 75A8 is input to the generation model 82A8. The generation model 82A8 generates and outputs a 1 st blurred image 86A8 from the input processing target image 75 A8.
The non-AI-scheme processing unit 62B8 performs a non-AI-scheme process on the processing target image 75 A8. The processing in the non-AI mode refers to processing that does not use a neural network. In modification 7, the process of not using the neural network includes, for example, a process of not using the generation model 82A8.
As an example of the non-AI-mode processing for the processing target image 75A8, a processing using the digital filter 84A8 is given. The digital filter 84A8 is a digital filter configured to impart blurring to the human figure region 116 in the processing target image 75 A8.
The non-AI-scheme processing unit 62B8 generates the 2 nd blurred image 88A8 by performing processing (i.e., filtering) using the digital filter 84A8 on the processing target image 75 A8. In other words, the non-AI-scheme processing unit 62B8 generates the 2 nd blurred image 88A8 by changing the non-noise element of the processing target image 75A8 in the non-AI scheme. In other words, the non-AI-mode processing unit 62B8 generates the 2 nd blurred image 88A8 by blurring the human figure region 116 in the processing target image 75A8 in the non-AI mode.
The processing using the digital filter 84A8 is an example of "processing of the non-AI method that does not use a neural network" and "processing of the 2 nd change of the non-AI method change factor" according to the technique of the present invention. "generating the 2 nd blurred image 88A8" is an example of "acquiring the 2 nd image" according to the technique of the present invention.
The processing target image 75A8 is input to the digital filter 84A8. The digital filter 84A8 generates a 2 nd blurred image 88A8 from the inputted processing target image 75 A8. The 2 nd blurred image 88A8 is an image obtained by changing the non-noise element by the digital filter 84A8 (that is, an image obtained by changing the non-noise element by the processing using the digital filter 84A8 with respect to the processing target image 75 A8). In other words, the 2 nd blurred image 88A8 is an adjusted image of the human region 116 in the processing target image 75A8 through the digital filter 84A8 (i.e., an image in which blurring is imparted to the human region 116 by the processing using the digital filter 84A8 for the processing target image 75 A8). The degree of blur imparted to the human figure region 116 in the 2 nd blurred image 88A8 is less than the degree of blur imparted to the human figure region 116 in the 1 st blurred image 86 A8. The 2 nd blurred image 88A8 is an example of the "2 nd image", "2 nd modified image", and "2 nd blurred image" according to the technology of the present invention.
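One non-AI way of imparting blur only to the human figure region 116, as the digital filter 84A8 is described as doing, is to blur the whole image and blend the blurred result back in only where a person mask is set. The sketch below assumes that such a binary person mask is available from a separate detection step; the mask source, sigma, and function name are assumptions, and a single-channel image is assumed for brevity.

```python
# Hypothetical non-AI blurring of the person region (not the actual digital filter 84A8):
# Gaussian blur blended into the image only where the person mask is set.
import numpy as np
from scipy import ndimage

def blur_person_region(image: np.ndarray, person_mask: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Blur only the masked region; person_mask is 1 inside the person region, 0 elsewhere."""
    img = image.astype(np.float32)
    blurred = ndimage.gaussian_filter(img, sigma=sigma)
    mask = person_mask.astype(np.float32)
    out = mask * blurred + (1.0 - mask) * img    # blurred inside the mask, untouched outside
    return np.clip(out, 0, 255).astype(np.uint8)
```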
The blur amount of the 1st blurred image 86A8 obtained by performing the AI-scheme processing on the processing target image 75A8 may differ from the user's preference due to the characteristics (for example, the number of intermediate layers and/or the learning amount) of the generation model 82A8. If the influence of the processing of the AI scheme is excessively reflected on the processing target image 75A8, it is also conceivable that the blur amount becomes much larger than the user prefers or, conversely, much smaller than the user prefers.
In view of this, in the image pickup apparatus 10, as shown in fig. 29, for example, the 1 st blurred image 86A8 and the 2 nd blurred image 88A8 are synthesized by performing the processing of the image adjustment unit 62C8 and the processing of the synthesis unit 62D8 on the 1 st blurred image 86A8 and the 2 nd blurred image 88A8.
As an example, as shown in fig. 29, the NVM64 stores a proportion 90H. The scale 90H is a scale for synthesizing the 1 st blurred image 86A8 and the 2 nd blurred image 88A8, and is set to adjust the excessive or insufficient processing of the AI scheme (i.e., processing using the generation model 82 A8) by the AI scheme processing section 62 A8.
The proportion 90H is roughly divided into a 1 st proportion 90H1 and a 2 nd proportion 90H2. The 1 st proportion 90H1 is a value of 0 to 1, and the 2 nd proportion 90H2 is a value obtained by subtracting the 1 st proportion 90H1 from "1". That is, the 1 st proportion 90H1 and the 2 nd proportion 90H2 are set so that the sum of the 1 st proportion 90H1 and the 2 nd proportion 90H2 becomes "1". The 1 st scale 90H1 and the 2 nd scale 90H2 are variable values that can be changed according to an instruction from a user.
The image adjustment unit 62C8 adjusts the 1 st blurred image 86A8 generated by the AI-scheme processing unit 62A8 using the 1 st scale 90H 1. For example, the image adjustment unit 62C8 multiplies the 1 st scale 90H1 by the pixel value of each pixel of the 1 st blurred image 86A8 to adjust the pixel value of each pixel of the 1 st blurred image 86A8.
The image adjustment unit 62C8 adjusts the 2 nd blurred image 88A8 generated by the non-AI-scheme processing unit 62B8 using the 2 nd scale 90H 2. For example, the image adjustment unit 62C8 multiplies the 2 nd scale 90H2 by the pixel value of each pixel of the 2 nd blurred image 88A8 to adjust the pixel value of each pixel of the 2 nd blurred image 88A8.
The combining section 62D8 generates a combined image 92H by combining the 1 st blurred image 86A8 adjusted by the image adjusting section 62C8 at the 1 st scale 90H1 and the 2 nd blurred image 88A8 adjusted by the image adjusting section 62C8 at the 2 nd scale 90H 2. That is, the combining unit 62D8 combines the 1 st blurred image 86A8 adjusted in the 1 st scale 90H1 and the 2 nd blurred image 88A8 adjusted in the 2 nd scale 90H2 to adjust the excessive or insufficient processing of the AI scheme by the AI scheme processing unit 62 A8. In other words, the combining unit 62D8 combines the 1 st blurred image 86A8 adjusted in the 1 st scale 90H1 and the 2 nd blurred image 88A8 adjusted in the 2 nd scale 90H2 to adjust the non-noise element. In other words, the combining unit 62D8 combines the 1 st blurred image 86A8 adjusted in the 1 st scale 90H1 and the 2 nd blurred image 88A8 adjusted in the 2 nd scale 90H2 to adjust elements derived from the processing using the generation model 82A8 (for example, pixel values of pixels to which blurring is given by the generation model 82 A8).
The combination by the combining unit 62D8 is the addition of the pixel values at the corresponding pixel positions between the 1 st blurred image 86A8 and the 2 nd blurred image 88 A8. The synthesis by the synthesis unit 62D8 is performed in the same manner as the synthesis by the synthesis unit 62D1 shown in fig. 5. The composite image 92H is also subjected to various image processing by the compositing unit 62D8 in the same manner as the composite image 92A shown in fig. 5. The synthesized image 92H subjected to various image processing is output to a predetermined output destination by the synthesizing unit 62D 8.
Fig. 30 shows an example of the flow of the image synthesis processing according to modification 7. The flowchart shown in fig. 30 differs from the flowchart shown in fig. 6 in that steps ST350 to ST368 are applied instead of steps ST12 to ST 30.
In the image combining process shown in fig. 30, in step ST350, the AI-scheme processing unit 62A8 and the non-AI-scheme processing unit 62B8 acquire the processing target image 75A8 from the image sensor 20. After the process of step ST350 is performed, the image synthesis process proceeds to step ST352.
In step ST352, the AI-scheme processing unit 62A8 inputs the processing-target image 75A8 acquired in step ST350 into the generation model 82A8. After the process of step ST352 is performed, the image synthesis process proceeds to step ST354.
In step ST354, the AI-scheme processing unit 62A8 acquires the 1 ST blurred image 86A8, and the 1 ST blurred image 86A8 is output from the generated model 82A8 by inputting the processing target image 75A8 into the generated model 82A8 in step ST 352. After the process of step ST354 is executed, the image synthesis process proceeds to step ST356.
In step ST356, the non-AI-scheme processing unit 62B8 performs processing using the digital filter 84A8 on the processing target image 75A8 acquired in step ST350 to impart blurring to the human figure region 116 in the processing target image 75 A8. After the process of step ST356 is performed, the image synthesis process proceeds to step ST358.
In the processing in step ST352 and the processing in step ST356, the example of imparting the blur to the human figure region 116 has been described, but this is only an example; instead of imparting the blur to the human figure region 116, a blur determined from the human figure region 116 may be imparted to an image region other than the human figure region 116. Although the human figure region 116 is illustrated here, this too is merely an example, and the image region may be an image region showing a subject other than a person (for example, a specific vehicle, a specific plant, a specific animal, a specific building, or a specific aircraft). In this case, a blur corresponding to that subject may be imparted to the image in the same manner.
In step ST358, the non-AI-scheme processing unit 62B8 acquires the 2 nd blurred image 88A8, and the 2 nd blurred image 88A8 is obtained by performing processing using the digital filter 84A8 on the processing target image 75A8 in step ST 356. After the process of step ST358 is performed, the image synthesis process proceeds to step ST360.
In step ST360, the image adjustment unit 62C8 acquires the 1 ST scale 90H1 and the 2 nd scale 90H2 from the NVM 64. After the process of step ST360 is performed, the image synthesis process proceeds to step ST362.
In step ST362, the image adjustment unit 62C8 adjusts the 1 ST blurred image 86A8 using the 1 ST scale 90H1 acquired in step ST360. After the process of step ST362 is performed, the image synthesis process proceeds to step ST364.
In step ST364, the image adjustment unit 62C8 adjusts the 2 nd blurred image 88A8 using the 2 nd scale 90H2 acquired in step ST360. After the process of step ST364 is performed, the image synthesis process proceeds to step ST366.
In step ST366, the combining unit 62D8 combines the 1 ST blurred image 86A8 adjusted in step ST362 and the 2 nd blurred image 88A8 adjusted in step ST364 to adjust the excessive or insufficient processing of the AI scheme by the AI scheme processing unit 62 A8. The synthesized image 92H is generated by synthesizing the 1 ST blurred image 86A8 adjusted in step ST362 and the 2 nd blurred image 88A8 adjusted in step ST364. After the process of step ST366 is performed, the image synthesis process proceeds to step ST368.
In step ST368, the combining unit 62D8 performs various image processing on the combined image 92H. Then, the combining unit 62D8 outputs an image obtained by performing various image processing on the combined image 92H as a processed image 75B to a predetermined output destination. After the process of step ST368 is performed, the image synthesis process proceeds to step ST32.
As described above, in the image pickup apparatus 10 according to modification 7, the 1st blurred image 86A8 is generated by imparting a blur corresponding to the human figure region 116 in the processing target image 75A8 by the AI method. Then, the 2nd blurred image 88A8 is generated by imparting a blur corresponding to the human figure region 116 in the processing target image 75A8 by the non-AI method. Then, the 1st blurred image 86A8 and the 2nd blurred image 88A8 are synthesized. This can suppress excessive or insufficient blur corresponding to the human figure region 116 imparted by the processing of the AI method in the composite image 92H. As a result, the composite image 92H is an image in which the blur corresponding to the human figure region 116 imparted by the AI-method processing is less noticeable than in the 1st blurred image 86A8, and an appropriate image can be provided to a user who does not like the blur imparted by the processing of the AI method.
Here, the embodiment of synthesizing the 1 st blurred image 86A8 obtained by processing the processing target image 75A8 using the generation model 82A8 and the 2 nd blurred image 88A8 obtained by processing the processing target image 75A8 using the digital filter 84A8 has been described, but the technique of the present invention is not limited to this. For example, the 1 st blurred image 86A8 obtained by performing the process using the generation model 82A8 on the processing target image 75A8 and the processing target image 75A8 (i.e., an image in which the non-noise element is not adjusted) may be synthesized. In this case, the same effect can be expected.
[ modification 8 ]
As an example, as shown in fig. 31, the processor 62 according to the present modification 8 differs from the processor 62 shown in fig. 4 in that it includes an AI-scheme processing unit 62A9 instead of the AI-scheme processing unit 62A1 and a non-AI-scheme processing unit 62B9 instead of the non-AI-scheme processing unit 62B1. In the present modification 8, description of items identical to those already described is omitted, and only items that differ from those already described are described.
The processing target image 75A9 is input to the AI-mode processing unit 62A9 and the non-AI-mode processing unit 62B9. The processing target image 75A9 is an example of the processing target image 75A shown in fig. 2. The processing target image 75A9 is a color image. In this case, a color image is illustrated as the processing target image 75A9, but the processing target image 75A9 may be an achromatic color image.
The AI-mode processing unit 62A9 and the non-AI-mode processing unit 62B9 perform processing for imparting a circular blur to the input processing target image 75 A9. The circular blur given to the processing target image 75A9 is an example of "a non-noise element of the processing target image", "a factor that controls a visual impression given to the processing target image", "the 1 st circular blur", and "the 2 nd circular blur" according to the technique of the present invention.
The AI-scheme processing unit 62A9 performs AI-scheme processing on the processing target image 75 A9. As an example of the AI-mode processing for the processing target image 75A9, a processing using the generation model 82A9 can be given. The generation model 82A9 is an example of the generation model 82A shown in fig. 3. The generation model 82A9 is a generation network in which learning to impart circular blur to the processing target image 75A9 has been performed.
The AI-scheme processing unit 62A9 changes, in the AI scheme, a factor that controls the visual impression given by the processing target image 75A9. That is, the AI-scheme processing unit 62A9 changes, as a non-noise element of the processing target image 75A9, a factor that controls the visual impression given by the processing target image 75A9, by performing processing using the generation model 82A9 on the processing target image 75A9. The factor that controls the visual impression given by the processing target image 75A9 is the circular blur imparted to the processing target image 75A9. In the example shown in fig. 31, the AI-scheme processing unit 62A9 generates the 1st circular blurred image 86A9 by performing processing using the generation model 82A9 on the processing target image 75A9. The 1st circular blurred image 86A9 is an image in which the 1st circular blur 118 is imparted to the processing target image 75A9 by the AI scheme.
The process using the generation model 82A9 is an example of "1 st AI process", "1 st change process", and "circle blurring process" according to the technique of the present invention. The 1 st circular blur 118 is an example of "1 st circle blur" according to the technique of the present invention. The 1 st circular blurred image 86A9 is an example of the "1 st modified image" and the "1 st circular blurred image" according to the technology of the present invention. "Generating the 1 st circular blurred image 86A9" is an example of "acquiring the 1 st image" according to the technique of the present invention.
The processing target image 75A9 is input to the generation model 82A9. The generation model 82A9 generates and outputs a 1 st circular blurred image 86A9 from the input processing target image 75 A9.
The non-AI-scheme processing unit 62B9 performs a non-AI-scheme process on the processing target image 75A9. The processing in the non-AI mode refers to processing that does not use a neural network. In modification 8, the processing that does not use a neural network includes, for example, processing that does not use the generation model 82A9.
As an example of the non-AI-mode processing for the processing target image 75A9, a processing using the digital filter 84A9 is given. The digital filter 84A9 is a digital filter configured to impart circular blur to the processing target image 75 A9.
The non-AI-scheme processing unit 62B9 generates the 2 nd circular blurred image 88A9 by performing processing (i.e., filtering) using the digital filter 84A9 on the processing target image 75 A9. In other words, the non-AI-method processing unit 62B9 generates the 2 nd circular blurred image 88A9 by changing the non-noise element of the processing target image 75A9 in the non-AI method. In other words, the non-AI-method processing unit 62B9 generates the 2 nd circular blur image 88A9 by imparting the 2 nd circular blur 120 to the processing target image 75A9 in the non-AI method.
The process using the digital filter 84A9 is an example of "a process of a non-AI method that does not use a neural network" and "a 2 nd changing process of a non-AI method changing factor" according to the technique of the present invention. "generating the 2 nd circular blurred image 88A9" is an example of "acquiring the 2 nd image" according to the technique of the present invention.
The processing target image 75A9 is input to the digital filter 84A9. The digital filter 84A9 generates a 2 nd circular blurred image 88A9 from the inputted processing target image 75 A9. The 2 nd circular blurred image 88A9 is an image obtained by changing the non-noise element by the digital filter 84A9 (that is, an image obtained by changing the non-noise element by the processing using the digital filter 84A9 with respect to the processing target image 75 A9). In other words, the 2 nd circular blur image 88A9 is an image to which the 2 nd circular blur 120 is imparted to the processing target image 75A9 (i.e., an image to which the 2 nd circular blur 120 is imparted by the processing using the digital filter 84A9 for the processing target image 75 A9). The characteristics (e.g., color, sharpness, and/or size, etc.) of the 2 nd circular blur 120 are different from the characteristics of the 1 st circular blur 118. The 2 nd circular blurred image 88A9 is an example of "2 nd image", "2 nd modified image" and "2 nd circular blurred image" according to the technology of the present invention.
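For reference, the following is a minimal sketch of one conventional way in which a non-AI circular-blur filter such as the digital filter 84A9 could be realized, namely convolution of each color channel with a disc-shaped kernel. The function names, the kernel radius, and the use of disc-kernel convolution itself are illustrative assumptions and are not taken from the present specification.

    import numpy as np
    from scipy.ndimage import convolve

    def disc_kernel(radius):
        # Disc-shaped (circular) kernel, normalized so that overall brightness is preserved.
        y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        k = (x * x + y * y <= radius * radius).astype(np.float32)
        return k / k.sum()

    def apply_circular_blur(image, radius=9):
        # image: H x W x 3 array; each channel is convolved with the disc kernel,
        # which spreads point highlights into circular (bokeh-like) blur shapes.
        kernel = disc_kernel(radius)
        out = np.empty_like(image, dtype=np.float32)
        for c in range(image.shape[-1]):
            out[..., c] = convolve(image[..., c].astype(np.float32), kernel, mode='reflect')
        return out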
The characteristics of the 1 st circular blurred image 86A9 obtained by performing the AI-mode processing on the processing target image 75A9 may differ from the user's preference depending on the characteristics (for example, the number of intermediate layers and/or the learning amount) of the generation model 82A9. If the influence of the AI-mode processing is excessively reflected in the processing target image 75A9, it is also conceivable that a circular blur matching the user's preference cannot be expressed.
In view of this, in the image pickup apparatus 10, as shown in fig. 32, for example, the 1 st circular blurred image 86A9 and the 2 nd circular blurred image 88A9 are synthesized by performing the processing of the image adjustment unit 62C9 and the processing of the synthesis unit 62D9 on the 1 st circular blurred image 86A9 and the 2 nd circular blurred image 88A9.
As an example, as shown in fig. 32, the NVM64 stores a proportion 90I. The scale 90I is a scale for synthesizing the 1 st circular blurred image 86A9 and the 2 nd circular blurred image 88A9, and is set to adjust the excessive or insufficient processing of the AI scheme (i.e., processing using the generation model 82 A9) by the AI scheme processing section 62 A9.
The proportion 90I is roughly divided into a 1 st proportion 90I1 and a 2 nd proportion 90I2. The 1 st proportion 90I1 is a value of 0 to 1, and the 2 nd proportion 90I2 is a value obtained by subtracting the 1 st proportion 90I1 from "1". That is, the 1 st proportion 90I1 and the 2 nd proportion 90I2 are set so that the sum of the 1 st proportion 90I1 and the 2 nd proportion 90I2 becomes "1". The 1 st scale 90I1 and the 2 nd scale 90I2 are variable values that can be changed according to an instruction from a user.
The image adjustment unit 62C9 adjusts the 1 st circle blurred image 86A9 generated by the AI-scheme processing unit 62A9 using the 1 st scale 90I 1. For example, the image adjustment unit 62C9 adjusts the pixel value of each pixel of the 1 st circular blurred image 86A9 by multiplying the 1 st scale 90I1 by the pixel value of each pixel of the 1 st circular blurred image 86A9.
The image adjustment unit 62C9 adjusts the 2 nd circular blurred image 88A9 generated by the non-AI-scheme processing unit 62B9 using the 2 nd scale 90I 2. For example, the image adjustment unit 62C9 adjusts the pixel value of each pixel of the 2 nd circular blur image 88A9 by multiplying the 2 nd ratio 90I2 by the pixel value of each pixel of the 2 nd circular blur image 88A9.
The combining section 62D9 generates a combined image 92I by combining the 1 st circular blurred image 86A9 adjusted by the image adjusting section 62C9 at the 1 st scale 90I1 and the 2 nd circular blurred image 88A9 adjusted by the image adjusting section 62C9 at the 2 nd scale 90I 2. That is, the combining unit 62D9 combines the 1 st circular blurred image 86A9 adjusted in the 1 st scale 90I1 and the 2 nd circular blurred image 88A9 adjusted in the 2 nd scale 90I2 to adjust the excessive or insufficient processing of the AI scheme by the AI scheme processing unit 62 A9. In other words, the combining section 62D9 adjusts the non-noise element by combining the 1 st circular blurred image 86A9 adjusted in the 1 st scale 90I1 and the 2 nd circular blurred image 88A9 adjusted in the 2 nd scale 90I 2. In other words, the synthesizing unit 62D9 synthesizes the 1 st circular blur image 86A9 adjusted in the 1 st scale 90I1 and the 2 nd circular blur image 88A9 adjusted in the 2 nd scale 90I2 to adjust elements derived from the processing using the generation model 82A9 (for example, pixel values of pixels to which the 1 st circular blur 118 is given by the generation model 82 A9).
The combination by the combining unit 62D9 is the addition of the pixel values at the corresponding pixel positions between the 1 st circular blurred image 86A9 and the 2 nd circular blurred image 88 A9. The synthesis by the synthesis unit 62D9 is performed in the same manner as the synthesis by the synthesis unit 62D1 shown in fig. 5. The composite image 92I is also subjected to various image processing by the compositing unit 62D9 in the same manner as the composite image 92A shown in fig. 5. The synthesized image 92I subjected to various image processing is output to a predetermined output destination by the synthesizing unit 62D 9.
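The ratio-based adjustment and synthesis described above amount to a per-pixel weighted sum of the two images, with the 1 st proportion 90I1 and the 2 nd proportion 90I2 as the weights. The following is a minimal sketch of that computation for 8-bit images; the function and variable names are illustrative assumptions and are not taken from the present specification.

    import numpy as np

    def synthesize(first_image, second_image, first_ratio):
        # first_ratio plays the role of the 1 st proportion; the 2 nd proportion is
        # derived as 1 - first_ratio so that the two weights always sum to 1.
        second_ratio = 1.0 - first_ratio
        blended = (first_ratio * first_image.astype(np.float32)
                   + second_ratio * second_image.astype(np.float32))
        return np.clip(blended, 0, 255).astype(np.uint8)

    # Example: weight the non-AI result more heavily when the AI-produced circular
    # blur is felt to be too strong (the ratio value 0.3 is arbitrary).
    # composite = synthesize(first_circular_blurred_image, second_circular_blurred_image, 0.3)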
Fig. 33 shows an example of the flow of the image synthesis processing according to modification 8. The flowchart shown in fig. 33 differs from the flowchart shown in fig. 6 in that steps ST400 to ST418 are applied instead of steps ST12 to ST 30.
In the image combining process shown in fig. 33, in step ST400, the AI-scheme processing unit 62A9 and the non-AI-scheme processing unit 62B9 acquire the processing target image 75A9 from the image sensor 20. After the process of step ST400 is performed, the image synthesis process proceeds to step ST402.
In step ST402, the AI-scheme processing unit 62A9 inputs the processing-target image 75A9 acquired in step ST400 into the generation model 82A9. After the process of step ST402 is performed, the image synthesis process proceeds to step ST404.
In step ST404, the AI-scheme processing unit 62A9 acquires the 1 ST circular blur image 86A9, and the 1 ST circular blur image 86A9 is output from the generation model 82A9 by inputting the processing target image 75A9 into the generation model 82A9 in step ST 402. After the process of step ST404 is executed, the image synthesis process proceeds to step ST406.
In step ST406, the non-AI-scheme processing unit 62B9 applies the 2 nd circular blur 120 to the processing target image 75A9 by performing processing using the digital filter 84A9 on the processing target image 75A9 acquired in step ST 400. After the process of step ST406 is performed, the image synthesis process proceeds to step ST 408.
In the processing of step ST402 and the processing of step ST406, an example of a method of generating the circular blur irrespective of the subject appearing in the processing target image 75A9 has been described, but this is only an example, and a circular blur determined by the subject appearing in the processing target image 75A9 (for example, a specific person, a specific vehicle, a specific plant, a specific animal, a specific building, a specific airplane, or the like) may be generated and given to the processing target image 75A9.
In step ST408, the non-AI-mode processing unit 62B9 acquires the 2 nd circular blur image 88A9, and the 2 nd circular blur image 88A9 is obtained by performing processing using the digital filter 84A9 on the processing target image 75A9 in step ST406. After the process of step ST408 is performed, the image synthesis process proceeds to step ST410.
In step ST410, the image adjustment unit 62C9 acquires the 1 ST scale 90I1 and the 2 nd scale 90I2 from the NVM 64. After the process of step ST410 is performed, the image synthesis process proceeds to step ST412.
In step ST412, the image adjustment unit 62C9 adjusts the 1 ST circular blurred image 86A9 using the 1 ST scale 90I1 acquired in step ST 410. After the process of step ST412 is performed, the image synthesis process proceeds to step ST414.
In step ST414, the image adjustment unit 62C9 adjusts the 2 nd circular blurred image 88A9 using the 2 nd scale 90I2 acquired in step ST 410. After the process of step ST414 is performed, the image synthesis process proceeds to step ST416.
In step ST416, the combining unit 62D9 combines the 1 ST circular blurred image 86A9 adjusted in step ST412 and the 2 nd circular blurred image 88A9 adjusted in step ST414 to adjust the excessive or insufficient processing of the AI scheme by the AI-scheme processing unit 62 A9. The synthesized image 92I is generated by synthesizing the 1 ST circular blurred image 86A9 adjusted in step ST412 and the 2 nd circular blurred image 88A9 adjusted in step ST414. After the process of step ST416 is performed, the image synthesis process proceeds to step ST418.
In step ST418, the combining unit 62D9 performs various image processing on the combined image 92I. Then, the combining unit 62D9 outputs an image obtained by performing various image processing on the combined image 92I as a processed image 75B to a predetermined output destination. After the process of step ST418 is performed, the image synthesis process proceeds to step ST32.
As described above, in the image pickup apparatus 10 according to the present modification 8, the 1 st circular blur 118 is given to the processing target image 75A9 by the AI method, whereby the 1 st circular blurred image 86A9 is generated. Then, the 2 nd circular blur 120 is given to the processing target image 75A9 by the non-AI method, whereby the 2 nd circular blurred image 88A9 is generated. Then, the 1 st circular blurred image 86A9 and the 2 nd circular blurred image 88A9 are synthesized. Thus, an excess or deficiency of the element of the 1 st circular blur 118 produced by the AI-method processing can be suppressed in the synthesized image 92I. As a result, the synthesized image 92I is an image in which the 1 st circular blur 118 produced by the AI method is less noticeable than in the 1 st circular blurred image 86A9, and an appropriate image can be provided to a user who does not prefer the 1 st circular blurred image 86A9 obtained by the AI-method processing.
Here, the embodiment of synthesizing the 1 st circular blurred image 86A9 obtained by processing the processing target image 75A9 using the generation model 82A9 and the 2 nd circular blurred image 88A9 obtained by processing the processing target image 75A9 using the digital filter 84A9 has been described, but the technique of the present invention is not limited to this. For example, the 1 st circular blurred image 86A9 obtained by performing the process using the generation model 82A9 on the processing target image 75A9 and the processing target image 75A9 (i.e., an image in which the non-noise element is not adjusted) may be synthesized. In this case, the same effect can be expected.
In the example shown in fig. 31 to 33, the description has been given of an embodiment in which the non-AI-scheme processing unit 62B9 gives the 2 nd circular blur 120 to the processing target image 75A9 by performing the non-AI-scheme processing on the processing target image 75A9, but the technique of the present invention is not limited to this. For example, as shown in fig. 34, the non-AI-mode processing unit 62B9 may generate the 2 nd circular blurred image 88A9 including the 2 nd circular blur 120 by performing non-AI-mode processing on the 1 st circular blurred image 86A9 generated by the AI-mode processing unit 62A9.
Further, for example, as shown in fig. 35, the non-AI-scheme processing unit 62B9 may generate the 2 nd circular blurred image 88A9 including the 2 nd circular blur 120 having a higher sharpness than the 1 st circular blur 118 by performing non-AI-scheme processing on the 1 st circular blurred image 86A9 generated by the AI-scheme processing unit 62A9.
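As a purely illustrative sketch of how such a non-AI sharpening step could be realized, the following applies unsharp masking to the 1 st circular blurred image 86A9 so that the outlines of the circular blur become crisper. The function name, the parameter values, and the choice of unsharp masking itself are assumptions and are not taken from the present specification.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(image, amount=1.0, sigma=2.0):
        # Subtract a Gaussian-blurred copy from the image and add the difference
        # back, which emphasizes edges and makes the circular blur look sharper.
        img = image.astype(np.float32)
        blurred = gaussian_filter(img, sigma=(sigma, sigma, 0))
        return np.clip(img + amount * (img - blurred), 0, 255).astype(np.uint8)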
In the examples shown in fig. 31 to 35, the description has been given of the embodiment in which the 1 st circular blur 118 is given to the processing target image 75A9 by the AI method, but when a circular blur already appears in the processing target image 75A9, the AI-scheme processing unit 62A9 may remove the circular blur from the processing target image 75A9 by the AI method.
[ modification 9 ]
As an example, as shown in fig. 36, the processor 62 according to the present modification 9 differs from the processor 62 shown in fig. 4 in that an AI-scheme processing unit 62a10 is provided in place of the AI-scheme processing unit 62A1 and a non-AI-scheme processing unit 62B10 is provided in place of the non-AI-scheme processing unit 62B1. In the present modification 9, description of items that are the same as those already described above is omitted, and only items different from those already described above are described.
The processing target image 75a10 is input to the AI-mode processing unit 62a10 and the non-AI-mode processing unit 62B10. The processing target image 75A10 is an example of the processing target image 75A shown in fig. 2. The processing target image 75a10 is a color image, and has a person region 124 and a background region 126. The person region 124 is an image region in which a person is shown. The background area 126 is an image area in which a background is displayed. Here, a color image is illustrated as the processing target image 75a10, but this is only an example, and the processing target image 75a10 may be an achromatic image.
Here, an example of the "4 th subject" according to the technology of the present invention is shown in the processing target image 75a 10. The gradation of the processing target image 75a10 is an example of "a non-noise element of the processing target image", "a factor that controls the visual impression given by the processing target image", and "a gradation of the processing target image" according to the technique of the present invention.
The AI-scheme processing unit 62a10 performs AI-scheme processing on the processing target image 75a 10. As an example of the AI-mode processing for the processing target image 75a10, a processing using the generation model 82a10 is given. The generative model 82A10 is an example of the generative model 82A shown in fig. 3. The generation model 82a10 is a generation network in which learning to adjust the gradation of the processing target image 75a10 according to the person region 124 has been performed. Examples of learning to adjust the gradation of the processing target image 75a10 according to the person region 124 include learning to change the gradation of the processing target image 75a10 according to whether or not a person is present in the processing target image 75a10, and learning to change the gradation of the processing target image 75a10 according to a feature of a person present in the processing target image 75a 10.
The AI-mode processing unit 62a10 changes, in the AI manner, a factor that controls the visual impression given by the processing target image 75a10. That is, the AI-scheme processing section 62a10 changes, as a non-noise element of the processing target image 75a10, a factor that controls the visual impression given by the processing target image 75a10 by performing the processing using the generation model 82a10 on the processing target image 75a10. The factor that controls the visual impression given by the processing target image 75a10 is the gradation of the processing target image 75a10. In the example shown in fig. 36, the AI-scheme processing section 62a10 generates the 1 st gradation adjustment image 86a10 by performing the processing using the generation model 82a10 on the processing target image 75a10. The 1 st gradation adjustment image 86a10 is an image in which the gradation of the processing target image 75a10 is changed according to the person region 124. For example, the gradation of the processing target image 75a10 is changed by increasing or decreasing the pixel value of the R pixel, the pixel value of the G pixel, and the pixel value of the B pixel to the same extent. The gradation of the processing target image 75a10 may also be changed by increasing or decreasing the pixel value of at least one designated pixel among the R pixel, the G pixel, and the B pixel. How much the pixel values of which color are changed depends on the person region 124 (for example, the presence or absence of the person region 124 or the characteristics of the person represented by the person region 124).
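As a purely illustrative sketch of the kind of gradation change just described (raising or lowering the pixel values of the R pixel, the G pixel, and the B pixel by the same amount, or only those of designated channels), the following may serve as a reference. The function name and the parameter values are assumptions, and in practice the amount of change would depend on the person region 124.

    import numpy as np

    def shift_gradation(image, delta, channels=(0, 1, 2)):
        # Increase or decrease the pixel values of the designated channels by the
        # same amount delta; passing a single channel index changes only that color.
        out = image.astype(np.int16)
        for c in channels:
            out[..., c] += delta
        return np.clip(out, 0, 255).astype(np.uint8)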
The process of using the generation model 82a10 is an example of "1 st AI process", "1 st change process", and "1 st gradation adjustment process" according to the technique of the present invention. The 1 st gradation adjustment image 86a10 is an example of the "1 st modified image" and the "1 st gradation adjustment image" according to the technology of the present invention. "generating the 1 st gradation adjustment image 86a10" is an example of "acquiring the 1 st image" according to the technique of the present invention.
The processing target image 75a10 is input to the generation model 82a10. The generation model 82a10 generates and outputs the 1 st gradation adjustment image 86a10 from the inputted processing target image 75a 10.
The non-AI-scheme processing unit 62B10 performs a non-AI-scheme process on the processing target image 75a10. The processing in the non-AI mode refers to processing that does not use a neural network. In modification 9, the processing that does not use a neural network includes, for example, processing that does not use the generation model 82a10.
As an example of the non-AI-mode processing for the processing target image 75a10, processing using the digital filter 84a10 is given. The digital filter 84a10 is a digital filter configured to adjust the gradation of the processing target image 75a10. For example, the digital filter 84a10 is used when the processing target image 75a10 includes the person region 124. In this case, for example, the non-AI-scheme processing unit 62B10 performs a well-known person detection process on the processing target image 75a10 to determine whether or not the processing target image 75a10 includes the person region 124. When it is determined that the processing target image 75a10 includes the person region 124, the non-AI-scheme processing unit 62B10 performs the processing using the digital filter 84a10 on the processing target image 75a10.
In this case, for example, the non-AI-mode processing unit 62B10 may acquire the characteristics of the person represented by the person region 124 by performing a known image recognition process on the processing target image 75a10, and may perform, on the processing target image 75a10, processing using the digital filter 84a10 that corresponds to the acquired characteristics.
The non-AI-scheme processing unit 62B10 generates the 2 nd gradation-adjusted image 88a10 by performing processing (i.e., filtering) using the digital filter 84a10 on the processing target image 75a 10. In other words, the non-AI-scheme processing unit 62B10 generates the 2 nd gradation adjustment image 88a10 by adjusting the non-noise element of the processing target image 75a10 (here, the gradation of the processing target image 75a10 is an example) in a non-AI scheme.
The process using the digital filter 84a10 is an example of "a process of a non-AI method that does not use a neural network" and "a 2 nd changing process of a non-AI method changing factor" according to the technique of the present invention. "generating the 2 nd gradation adjustment image 88a10" is an example of "acquiring the 2 nd image" according to the technique of the present invention.
The processing target image 75a10 is input to the digital filter 84a10. The digital filter 84a10 generates a 2 nd gradation adjustment image 88a10 from the inputted processing target image 75a 10. The 2 nd gradation adjustment image 88a10 is an image obtained by changing the non-noise element by the digital filter 84a10 (i.e., an image obtained by changing the non-noise element by the processing using the digital filter 84a10 for the processing target image 75a 10). In other words, the 2 nd gradation adjustment image 88a10 is an image in which the gradation of the processing target image 75a10 is changed by the digital filter 84a10 (i.e., an image in which the gradation is changed by the processing using the digital filter 84a10 for the processing target image 75a 10). The 2 nd gradation adjustment image 88a10 is an example of the "2 nd image", "2 nd modified image", and "2 nd gradation adjustment image" according to the technique of the present invention.
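For reference, the following is a minimal sketch of one way a non-AI gradation-adjusting filter such as the digital filter 84a10 could be realized: a gamma tone curve applied through a lookup table, and applied only when a person region has been found by a separate, conventional detector. The detection flag, the function name, and the gamma value are illustrative assumptions and are not taken from the present specification.

    import numpy as np

    def adjust_gradation_if_person(image, person_detected, gamma=0.85):
        # Non-AI gradation adjustment: remap every pixel value through a gamma tone
        # curve, but only when a separate (conventional) detector found a person.
        if not person_detected:
            return image
        lut = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.uint8)
        return lut[image]  # image: uint8 array of shape H x W x 3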
The gradation of the 1 st gradation adjustment image 86a10 obtained by performing the AI-mode processing on the processing target image 75a10 may be a gradation different from the preference of the user due to the characteristics (for example, the number of intermediate layers and/or the learning amount) of the generated model 82a 10. If the influence of the AI-based processing is excessively reflected on the processing target image 75a10, a case is also conceivable in which a gradation different from the preference of the user becomes apparent.
In view of this, in the image pickup apparatus 10, as an example, as shown in fig. 37, the 1 st gradation adjustment image 86a10 and the 2 nd gradation adjustment image 88a10 are synthesized by performing the processing of the image adjustment section 62C10 and the processing of the synthesis section 62D10 on the 1 st gradation adjustment image 86a10 and the 2 nd gradation adjustment image 88a10.
As an example, as shown in fig. 37, the NVM64 stores a proportion of 90J. The ratio 90J is a ratio at which the 1 st gradation adjustment image 86a10 and the 2 nd gradation adjustment image 88a10 are synthesized, and is set to adjust the excessive or insufficient processing of the AI scheme (i.e., processing using the generation model 82a 10) by the AI scheme processing section 62a 10.
The proportion 90J is roughly divided into a1 st proportion 90J1 and a 2 nd proportion 90J2. The 1 st proportion 90J1 is a value of 0 to 1, and the 2 nd proportion 90J2 is a value obtained by subtracting the 1 st proportion 90J1 from "1". That is, the 1 st proportion 90J1 and the 2 nd proportion 90J2 are set so that the sum of the 1 st proportion 90J1 and the 2 nd proportion 90J2 becomes "1". The 1 st scale 90J1 and the 2 nd scale 90J2 are variable values that can be changed according to an instruction from a user.
The image adjustment unit 62C10 adjusts the 1 st gradation adjustment image 86a10 generated by the AI-scheme processing unit 62a10 using the 1 st scale 90J 1. For example, the image adjustment unit 62C10 multiplies the 1 st scale 90J1 by the pixel value of each pixel of the 1 st gradation adjustment image 86a10 to adjust the pixel value of each pixel of the 1 st gradation adjustment image 86a10.
The image adjustment unit 62C10 adjusts the 2 nd gradation adjustment image 88a10 generated by the non-AI-scheme processing unit 62B10 using the 2 nd scale 90J 2. For example, the image adjustment unit 62C10 multiplies the 2 nd ratio 90J2 by the pixel value of each pixel of the 2 nd gradation adjustment image 88a10 to adjust the pixel value of each pixel of the 2 nd gradation adjustment image 88a10.
The combining section 62D10 generates a combined image 92J by combining the 1 st gradation adjustment image 86a10 adjusted by the 1 st scale 90J1 through the image adjusting section 62C10 and the 2 nd gradation adjustment image 88a10 adjusted by the 2 nd scale 90J2 through the image adjusting section 62C 10. That is, the combining unit 62D10 combines the 1 st gradation adjustment image 86a10 adjusted by the 1 st scale 90J1 and the 2 nd gradation adjustment image 88a10 adjusted by the 2 nd scale 90J2 to adjust the excessive or insufficient processing of the AI scheme by the AI scheme processing unit 62a 10. In other words, the combining unit 62D10 combines the 1 st gradation adjustment image 86a10 adjusted in the 1 st scale 90J1 and the 2 nd gradation adjustment image 88a10 adjusted in the 2 nd scale 90J2 to adjust the non-noise element (here, the gradation of the image 75a10 to be processed is an example). In other words, the combining unit 62D10 combines the 1 st gradation adjustment image 86a10 adjusted in the 1 st scale 90J1 and the 2 nd gradation adjustment image 88a10 adjusted in the 2 nd scale 90J2 to adjust elements (for example, pixel values of pixels whose gradation is changed by the generation model 82a 10) derived from the processing using the generation model 82a 10.
The combination by the combining unit 62D10 is the addition of the pixel values at the corresponding pixel positions between the 1 st gradation adjustment image 86a10 and the 2 nd gradation adjustment image 88a 10. The synthesis by the synthesis unit 62D10 is performed in the same manner as the synthesis by the synthesis unit 62D1 shown in fig. 5. The composite image 92J is also subjected to various image processing by the compositing unit 62D10 in the same manner as the composite image 92A shown in fig. 5. The synthesized image 92J subjected to various image processing is output to a predetermined output destination by the synthesizing unit 62D 10.
Fig. 38 shows an example of the flow of the image synthesis processing according to modification 9. The flowchart shown in fig. 38 differs from the flowchart shown in fig. 6 in that steps ST450 to ST468 are applied instead of steps ST12 to ST 30.
In the image combining process shown in fig. 38, in step ST450, the AI-mode processing unit 62a10 and the non-AI-mode processing unit 62B10 acquire the processing target image 75a10 from the image sensor 20. After the process of step ST450 is performed, the image synthesis process proceeds to step ST452.
In step ST452, the AI-scheme processing unit 62a10 inputs the processing-target image 75a10 acquired in step ST450 into the generation model 82a10. After the process of step ST452 is executed, the image synthesis process proceeds to step ST454.
In step ST454, the AI-mode processing unit 62a10 acquires the 1 ST gradation adjustment image 86a10, and the 1 ST gradation adjustment image 86a10 is output from the generation model 82a10 by inputting the processing target image 75a10 into the generation model 82a10 in step ST 452. After the process of step ST454 is executed, the image synthesis process proceeds to step ST456.
In step ST456, the non-AI-scheme processing unit 62B10 adjusts the gradation of the processing target image 75a10 by performing the processing using the digital filter 84a10 on the processing target image 75a10 acquired in step ST 450. After the process of step ST456 is performed, the image synthesis process proceeds to step ST458.
In step ST458, the non-AI-mode processing unit 62B10 acquires the 2 nd gradation-adjusted image 88a10, and the 2 nd gradation-adjusted image 88a10 is obtained by performing the processing using the digital filter 84a10 on the processing target image 75a10 in step ST456. After the process of step ST458 is performed, the image synthesis process proceeds to step ST460.
In step ST460, the image adjustment unit 62C10 acquires the 1 ST scale 90J1 and the 2 nd scale 90J2 from the NVM 64. After the process of step ST460 is performed, the image synthesis process proceeds to step ST462.
In step ST462, the image adjustment unit 62C10 adjusts the 1 ST gradation adjustment image 86a10 using the 1 ST scale 90J1 acquired in step ST460. After the process of step ST462 is performed, the image synthesis process proceeds to step ST464.
In step ST464, the image adjustment section 62C10 adjusts the 2 nd gradation adjustment image 88a10 using the 2 nd scale 90J2 acquired in step ST 460. After the process of step ST464 is executed, the image synthesis process proceeds to step ST466.
In step ST466, the combining unit 62D10 combines the 1 ST gradation adjustment image 86a10 adjusted in step ST462 and the 2 nd gradation adjustment image 88a10 adjusted in step ST464 to adjust the excessive or insufficient processing of the AI scheme by the AI scheme processing unit 62a 10. The synthesized image 92J is generated by synthesizing the 1 ST gradation adjustment image 86a10 adjusted in step ST462 and the 2 nd gradation adjustment image 88a10 adjusted in step ST 464. After the process of step ST466 is executed, the image synthesizing process proceeds to step ST468.
In step ST468, the combining unit 62D10 performs various image processing on the combined image 92J. Then, the combining unit 62D10 outputs an image obtained by performing various image processing on the combined image 92J as a processed image 75B to a predetermined output destination. After the process of step ST468 is performed, the image synthesis process proceeds to step ST32.
As described above, in the image pickup apparatus 10 according to the present modification 9, the 1 st gradation adjustment image 86a10 is generated by adjusting the gradation of the processing target image 75a10 in the AI manner. Then, the 2 nd gradation adjustment image 88a10 is generated by adjusting the gradation of the processing target image 75a10 in the non-AI manner. Then, the 1 st gradation adjustment image 86a10 and the 2 nd gradation adjustment image 88a10 are synthesized. This can suppress an excess or deficiency of the gradation adjustment by the AI method in the composite image 92J. As a result, the composite image 92J is an image in which the gradation adjustment by the AI method is less noticeable than in the 1 st gradation adjustment image 86a10, and an appropriate image can be provided to a user who does not like the amount of gradation adjustment by the AI method being excessively noticeable.
In modification 9, the 1 st gradation adjustment image 86a10 is generated by adjusting, in the AI manner, the gradation of the processing target image 75a10 according to the person region 124. Then, the 2 nd gradation adjustment image 88a10 is generated by adjusting, in the non-AI manner, the gradation of the processing target image 75a10 according to the person region 124. Then, the 1 st gradation adjustment image 86a10 and the 2 nd gradation adjustment image 88a10 are synthesized. This can suppress an excess or deficiency of the amount of gradation adjustment according to the person region 124 by the AI method in the composite image 92J.
The embodiment of adjusting the gradation according to the human region 124 is described here, but this is merely an example, and the gradation may be adjusted according to the background region 126. The gradation may be adjusted according to the combination of the character region 124 and the background region 126. The gradation may be adjusted according to regions other than the person region 124 and the background region 126 (for example, a region in which a specific vehicle is shown, a region in which a specific animal is shown, a region in which a specific plant is shown, a region in which a specific building is shown, and/or a region in which a specific airplane is shown).
In addition, the embodiment of synthesizing the 1 st gradation adjustment image 86a10 obtained by processing the processing target image 75a10 using the generation model 82a10 and the 2 nd gradation adjustment image 88a10 obtained by processing the processing target image 75a10 using the digital filter 84a10 is described here, but the technique of the present invention is not limited to this. For example, the 1 st gradation adjustment image 86a10 obtained by performing the process using the generation model 82a10 on the processing target image 75a10 and the processing target image 75a10 (i.e., an image in which the non-noise element is not adjusted) may be synthesized. In this case, the same effect can be expected.
[ modification 10 ]
As an example, as shown in fig. 39, the processor 62 according to the present modification 10 differs from the processor 62 shown in fig. 4 in that an AI-scheme processing unit 62a11 is provided in place of the AI-scheme processing unit 62A1. In the present modification 10, description of items that are the same as those already described above is omitted, and only items different from those already described above are described.
The processing target image 75a11 is input to the AI-mode processing unit 62a11. The processing target image 75A11 is an example of the processing target image 75A shown in fig. 2. The processing target image 75a11 is a color image. Here, a color image is illustrated as the processing target image 75a11, but this is only an example, and the processing target image 75a11 may be an achromatic image.
The AI-scheme processing unit 62a11 performs AI-scheme processing on the processing target image 75a11. As an example of the AI-mode processing for the processing target image 75a11, processing using the generation model 82a11 is given. The generation model 82A11 is an example of the generation model 82A shown in fig. 3. The generation model 82a11 is a generation network in which learning to change the picture style of the processing target image 75a11 has been performed.
Here, the picture style of the processing target image 75a11 is an example of "a non-noise element of the processing target image", "a factor that controls a visual impression given by the processing target image", and "a picture style of the processing target image" according to the technique of the present invention.
The AI-mode processing unit 62a11 changes, in the AI manner, a factor that controls the visual impression given by the processing target image 75a11. That is, the AI-scheme processing section 62a11 changes, as a non-noise element of the processing target image 75a11, a factor that controls the visual impression given by the processing target image 75a11 by performing the processing using the generation model 82a11 on the processing target image 75a11. The factor that controls the visual impression given by the processing target image 75a11 is the picture style of the processing target image 75a11. In the example shown in fig. 39, the AI-scheme processing unit 62a11 generates the picture-style change image 86a11 by performing the processing using the generation model 82a11 on the processing target image 75a11. The picture-style change image 86a11 is an image in which the picture style of the processing target image 75a11 is changed. In the example shown in fig. 39, the picture-style change image 86a11 differs in picture style from the processing target image 75a11 in that a plurality of swirl patterns are added.
The process using the generation model 82a11 is an example of "1 st AI process", "1 st change process", and "picture-style change process" according to the technique of the present invention. The picture-style change image 86a11 is an example of the "1 st modified image" and the "picture-style change image" according to the technology of the present invention. The processing target image 75a11 is an example of the "2 nd image" according to the technique of the present invention. "Generating the picture-style change image 86a11" is an example of "acquiring the 1 st image" according to the technique of the present invention.
The processing target image 75a11 is input to the generation model 82a11. The generation model 82a11 generates and outputs the picture-style change image 86a11 from the input processing target image 75a11.
The picture style of the picture-style change image 86a11 obtained by performing the AI-mode processing on the processing target image 75a11 may be different from the user's preference due to the characteristics (for example, the number of intermediate layers and/or the learning amount) of the generation model 82a11. If the influence of the processing of the AI scheme is excessively reflected on the processing target image 75a11, it is also conceivable that a picture style different from the user's preference becomes obvious.
In view of this, in the image pickup apparatus 10, as shown in fig. 40, for example, the picture-style change image 86a11 and the processing target image 75a11 are synthesized by performing the processing of the image adjustment unit 62C11 and the processing of the synthesis unit 62D11 on the picture-style change image 86a11 and the processing target image 75a11.
As an example, as shown in fig. 40, the NVM64 stores a proportion 90K. The proportion 90K is a proportion at which the picture-style change image 86a11 and the processing target image 75a11 are synthesized, and is set to adjust the excessive or insufficient processing of the AI scheme (i.e., the processing using the generation model 82a11) by the AI-scheme processing unit 62a11.
The proportion 90K is roughly divided into a1 st proportion 90K1 and a 2 nd proportion 90K2. The 1 st proportion 90K1 is a value of 0 to 1, and the 2 nd proportion 90K2 is a value obtained by subtracting the 1 st proportion 90K1 from "1". That is, the 1 st proportion 90K1 and the 2 nd proportion 90K2 are set so that the sum of the 1 st proportion 90K1 and the 2 nd proportion 90K2 becomes "1". The 1 st scale 90K1 and the 2 nd scale 90K2 are variable values that can be changed according to an instruction from a user.
The image adjustment unit 62C11 adjusts the picture-style change image 86a11 generated by the AI-scheme processing unit 62a11 using the 1 st scale 90K1. For example, the image adjustment unit 62C11 multiplies the 1 st scale 90K1 by the pixel value of each pixel of the picture-style change image 86a11 to adjust the pixel value of each pixel of the picture-style change image 86a11.
The image adjustment unit 62C11 adjusts the processing target image 75a11 using the 2 nd scale 90K2. For example, the image adjustment unit 62C11 multiplies the 2 nd scale 90K2 by the pixel value of each pixel of the processing target image 75a11 to adjust the pixel value of each pixel of the processing target image 75a11.
The combining unit 62D11 combines the picture-style change image 86a11 adjusted by the 1 st scale 90K1 through the image adjusting unit 62C11 and the processing target image 75a11 adjusted by the 2 nd scale 90K2 through the image adjusting unit 62C11 to generate a combined image 92K. That is, the combining unit 62D11 combines the picture-style change image 86a11 adjusted by the 1 st scale 90K1 and the processing target image 75a11 adjusted by the 2 nd scale 90K2 to adjust the excessive or insufficient AI-mode processing performed by the AI-mode processing unit 62a11. In other words, the combining unit 62D11 combines the picture-style change image 86a11 adjusted in the 1 st scale 90K1 and the processing target image 75a11 adjusted in the 2 nd scale 90K2 to adjust the non-noise element (here, the picture style of the processing target image 75a11 is an example). In other words, the combining unit 62D11 combines the picture-style change image 86a11 adjusted in the 1 st scale 90K1 and the processing target image 75a11 adjusted in the 2 nd scale 90K2 to adjust elements derived from the processing using the generation model 82a11 (for example, pixel values of pixels whose picture style is changed by the generation model 82a11).
The synthesis performed by the synthesis unit 62D11 is the addition of the pixel values at the corresponding pixel positions between the picture-style change image 86a11 and the processing target image 75a11. The synthesis by the synthesis unit 62D11 is performed in the same manner as the synthesis by the synthesis unit 62D1 shown in fig. 5. The composite image 92K is also subjected to various image processing by the compositing unit 62D11 in the same manner as the composite image 92A shown in fig. 5. The synthesized image 92K subjected to various image processing is output to a predetermined output destination by the synthesizing unit 62D11.
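In terms of the weighted-sum sketch given above for modification 8, the only structural difference in the present modification is that the 2 nd input of the synthesis is the unprocessed processing target image 75a11 itself. A purely illustrative usage, reusing the synthesize() sketch from that earlier example (the variable names and the ratio value are assumptions), would be:

    # Blend the AI picture-style change image with the unmodified original so that
    # the changed picture style is toned down rather than fully applied.
    composite = synthesize(picture_style_change_image, processing_target_image, first_ratio=0.4)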
Fig. 41 shows an example of the flow of the image synthesis processing according to the present modification 10. The flowchart shown in fig. 41 differs from the flowchart shown in fig. 6 in that steps ST500 to ST514 are applied instead of steps ST12 to ST 30.
In the image combining process shown in fig. 41, in step ST500, the AI-scheme processing section 62a11 acquires the processing target image 75a11 from the image sensor 20. After the process of step ST500 is performed, the image synthesis process proceeds to step ST502.
In step ST502, the AI-scheme processing unit 62a11 inputs the processing-target image 75a11 acquired in step ST500 into the generation model 82a11. After the process of step ST502 is executed, the image synthesis process proceeds to step ST504.
In step ST504, the AI-scheme processing unit 62a11 acquires the picture-style change image 86a11, and the picture-style change image 86a11 is output from the generation model 82a11 by inputting the processing-target image 75a11 into the generation model 82a11 in step ST502. After the process of step ST504 is performed, the image synthesis process proceeds to step ST506.
In step ST506, the image adjustment unit 62C11 acquires the 1 ST scale 90K1 and the 2 nd scale 90K2 from the NVM 64. After the process of step ST506 is performed, the image synthesis process proceeds to step ST508.
In step ST508, the image adjustment unit 62C11 adjusts the picture-style change image 86a11 using the 1 st scale 90K1 acquired in step ST506. After the process of step ST508 is performed, the image synthesis process proceeds to step ST510.
In step ST510, the image adjustment unit 62C11 adjusts the processing target image 75a11 using the 2 nd scale 90K2 acquired in step ST 506. After the process of step ST510 is performed, the image synthesis process proceeds to step ST512.
In step ST512, the combining unit 62D11 combines the picture-style change image 86a11 adjusted in step ST508 and the processing target image 75a11 adjusted in step ST510 to adjust the excessive or insufficient AI-mode processing performed by the AI-mode processing unit 62a11. The synthesized image 92K is generated by synthesizing the picture-style change image 86a11 adjusted in step ST508 and the processing target image 75a11 adjusted in step ST510. After the process of step ST512 is performed, the image synthesis process proceeds to step ST514.
In step ST514, the combining unit 62D11 performs various image processing on the combined image 92K. Then, the combining unit 62D11 outputs an image obtained by performing various image processing on the combined image 92K as a processed image 75B to a predetermined output destination. After the process of step ST514 is performed, the image synthesis process proceeds to step ST32.
As described above, in the image pickup apparatus 10 according to the present modification 10, the picture-style change image 86a11 is generated by adjusting the picture style of the processing target image 75a11 in the AI manner. Then, the picture-style change image 86a11 and the processing target image 75a11 are synthesized. This can suppress an excess or deficiency of the picture style changed by the AI method in the composite image 92K. As a result, the composite image 92K is an image in which the picture style changed by the AI method is less noticeable than in the picture-style change image 86a11, and an appropriate image can be provided to a user who does not like the picture style changed by the AI method being excessively noticeable.
[ modification 11 ]
As an example, as shown in fig. 42, the processor 62 according to the present modification 11 differs from the processor 62 shown in fig. 4 in that an AI-scheme processing unit 62a12 is provided in place of the AI-scheme processing unit 62A1. In the present modification 11, description of items that are the same as those already described above is omitted, and only items different from those already described above are described.
The processing target image 75a12 is input to the AI-mode processing unit 62a12. The processing target image 75A12 is an example of the processing target image 75A shown in fig. 2. The processing target image 75a12 is a color image. Here, a color image is illustrated as the processing target image 75a12, but this is only an example, and the processing target image 75a12 may be an achromatic image.
The processing target image 75a12 has a person region 128. The person region 128 is an image region representing a person. The person region 128 has a skin region 128A representing skin. Further, the skin region 128A includes a stain region 128A1. The stain region 128A1 is an image region representing a stain generated on the skin. A stain is exemplified here, but the element is not limited to a stain and may be a mole, a scar, or the like, as long as it is an element that detracts from the beauty of the skin.
The AI-scheme processing unit 62a12 performs AI-scheme processing on the processing target image 75a 12. As an example of the AI-mode processing for the processing target image 75a12, a processing using the generation model 82a12 is given. The generative model 82A12 is an example of the generative model 82A shown in fig. 3. The generation model 82a12 is a generation network in which learning has been performed to adjust the image quality related to the skin (i.e., the image quality of the skin region 128A) that is mapped to the processing target image 75a 12. The adjustment of the image quality related to the skin is, for example, correction to make the stain region 128A1 in the processing target image 75a12 inconspicuous (for example, deleting the stain region 128 A1).
Here, the image quality of the skin region 128A is an example of "a non-noise element of the image to be processed", "a factor that controls the visual impression given by the image to be processed", and "an image quality related to skin" according to the technique of the present invention.
The AI-mode processing unit 62a12 changes, in the AI manner, a factor that controls the visual impression given by the processing target image 75a12. That is, the AI-scheme processing section 62a12 changes, as a non-noise element of the processing target image 75a12, a factor that controls the visual impression given by the processing target image 75a12 by performing the processing using the generation model 82a12 on the processing target image 75a12. The factor that controls the visual impression given by the processing target image 75a12 is the image quality of the skin region 128A. In the example shown in fig. 42, the AI-scheme processing unit 62a12 generates the skin image quality adjustment image 86a12 by performing the processing using the generation model 82a12 on the processing target image 75a12. The skin image quality adjustment image 86a12 is an image in which the image quality of the skin region 128A included in the processing target image 75a12 is adjusted. In the example shown in fig. 42, the skin image quality adjustment image 86a12 differs from the processing target image 75a12 in that the stain region 128A1 is deleted.
The process of using the generation model 82a12 is an example of "1 st AI process", "1 st change process", and "skin image quality adjustment process" according to the technique of the present invention. The skin image quality adjustment image 86a12 is an example of the "1 st modified image" and the "skin image quality adjustment image" according to the technology of the present invention. The processing target image 75a12 is an example of the "2 nd image" according to the technique of the present invention. The "generation of the skin image quality adjustment image 86a12" is an example of "acquisition of the 1 st image" according to the technique of the present invention.
The processing target image 75a12 is input to the generation model 82a12. The generation model 82a12 generates and outputs a skin image quality adjustment image 86a12 from the input processing target image 75a12.
The image quality of the skin region 128A of the skin image quality adjustment image 86a12 obtained by performing the AI-method processing on the processing target image 75a12 may be different from the preference of the user due to the characteristics (for example, the number of intermediate layers and/or the learning amount) of the generated model 82a12. If the influence of the AI-based processing is excessively reflected on the processing target image 75a12, it is also conceivable that an image quality different from the preference of the user becomes obvious. For example, the stain region 128A1 may be completely deleted, which may be an unnatural image.
In view of this, in the image pickup apparatus 10, as shown in fig. 43, for example, the skin image quality adjustment image 86a12 and the processing target image 75a12 are synthesized by performing the processing of the image adjustment unit 62C12 and the processing of the synthesis unit 62D12 on the skin image quality adjustment image 86a12 and the processing target image 75a12.
As an example, as shown in fig. 43, the NVM64 stores a proportion 90L. The ratio 90L is a ratio between the synthesized skin image quality adjustment image 86a12 and the processing target image 75a12, and is set to adjust the excessive or insufficient processing of the AI scheme (i.e., the processing using the generation model 82a 12) performed by the AI scheme processing unit 62a 12.
The proportion 90L is roughly divided into a1 st proportion 90L1 and a 2 nd proportion 90L2. The 1 st proportion 90L1 is a value of 0 to 1, and the 2 nd proportion 90L2 is a value obtained by subtracting the 1 st proportion 90L1 from "1". That is, the 1 st proportion 90L1 and the 2 nd proportion 90L2 are set so that the sum of the 1 st proportion 90L1 and the 2 nd proportion 90L2 becomes "1". The 1 st scale 90L1 and the 2 nd scale 90L2 are variable values that can be changed according to an instruction from a user.
The image adjustment unit 62C12 adjusts the skin image quality adjustment image 86a12 generated by the AI-scheme processing unit 62a12 using the 1 st scale 90L 1. For example, the image adjustment unit 62C12 multiplies the 1 st scale 90L1 by the pixel value of each pixel of the skin-image quality adjustment image 86a12 to adjust the pixel value of each pixel of the skin-image quality adjustment image 86a12.
The image adjustment unit 62C12 adjusts the processing target image 75a12 using the 2 nd scale 90L2. For example, the image adjustment unit 62C12 multiplies the 2 nd ratio 90L2 by the pixel value of each pixel of the processing target image 75a12 to adjust the pixel value of each pixel of the processing target image 75a12.
The combining unit 62D12 combines the skin image quality adjustment image 86a12 adjusted by the image adjusting unit 62C12 at the 1 st scale 90L1 and the processing target image 75a12 adjusted by the image adjusting unit 62C12 at the 2 nd scale 90L2 to generate a combined image 92L. That is, the combining unit 62D12 combines the skin image quality adjustment image 86a12 adjusted in the 1 st scale 90L1 and the processing target image 75a12 adjusted in the 2 nd scale 90L2 to adjust the excessive or insufficient AI-mode processing performed by the AI-mode processing unit 62a 12. In other words, the combining unit 62D12 combines the skin image quality adjustment image 86a12 adjusted at the 1 st scale 90L1 and the processing target image 75a12 adjusted at the 2 nd scale 90L2 to adjust the non-noise element (here, the image quality of the skin region 128A is an example). In other words, the combining unit 62D12 combines the skin image quality adjustment image 86a12 adjusted in the 1 st scale 90L1 and the processing target image 75a12 adjusted in the 2 nd scale 90L2 to adjust elements derived from the processing using the generation model 82a12 (for example, pixel values of pixels whose image quality is changed by the generation model 82a 12).
The synthesis by the synthesis unit 62D12 is the addition of pixel values at corresponding pixel positions between the skin-image quality adjustment image 86a12 and the processing target image 75a12. The synthesis by the synthesis unit 62D12 is performed in the same manner as the synthesis by the synthesis unit 62D1 shown in fig. 5. The composite image 92L is also subjected to various image processing by the compositing unit 62D12 in the same manner as the composite image 92A shown in fig. 5. The synthesized image 92L subjected to various image processing is output to a predetermined output destination by the synthesizing unit 62D 12.
Fig. 44 shows an example of the flow of the image synthesis processing according to the present modification 11. The flowchart shown in fig. 44 differs from the flowchart shown in fig. 6 in that steps ST550 to ST564 are applied instead of steps ST12 to ST 30.
In the image synthesizing process shown in fig. 44, the AI-mode processing section 62a12 acquires the processing-target image 75a12 from the image sensor 20 in step ST 550. After the process of step ST550 is performed, the image synthesis process proceeds to step ST552.
In step ST552, the AI-scheme processing unit 62a12 inputs the processing-target image 75a12 acquired in step ST550 into the generation model 82a12. After the process of step ST552 is performed, the image synthesis process proceeds to step ST554.
In step ST554, the AI-scheme processing unit 62a12 acquires the skin image quality adjustment image 86a12, and the skin image quality adjustment image 86a12 is output from the generation model 82a12 by inputting the processing target image 75a12 into the generation model 82a12 in step ST552. After the process of step ST554 is performed, the image synthesis process proceeds to step ST556.
In step ST556, the image adjustment unit 62C12 acquires the 1 ST scale 90L1 and the 2 nd scale 90L2 from the NVM 64. After the process of step ST556 is executed, the image synthesis process proceeds to step ST558.
In step ST558, the image adjustment unit 62C12 adjusts the skin-image quality adjustment image 86a12 using the 1 ST scale 90L1 acquired in step ST556. After the process of step ST558 is performed, the image synthesis process proceeds to step ST560.
In step ST560, the image adjustment unit 62C12 adjusts the processing target image 75a12 using the 2 nd scale 90L2 acquired in step ST556. After the process of step ST560 is performed, the image synthesis process proceeds to step ST562.
In step ST562, the combining unit 62D12 combines the skin image quality adjustment image 86a12 adjusted in step ST558 and the processing target image 75a12 adjusted in step ST560 to adjust the excessive or insufficient AI-mode processing performed by the AI-mode processing unit 62a12. The synthesized image 92L is generated by synthesizing the skin image quality adjustment image 86a12 adjusted in step ST558 and the processing target image 75a12 adjusted in step ST560. After the process of step ST562 is executed, the image synthesis process proceeds to step ST564.
In step ST564, the combining unit 62D12 performs various image processing on the composite image 92L. The combining unit 62D12 then outputs the image obtained by performing the various image processing on the composite image 92L to a predetermined output destination as the processed image 75B. After the process of step ST564 is performed, the image synthesis process proceeds to step ST32.
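For orientation, steps ST550 to ST564 can be summarized as the following sketch. It is a simplified illustration under the assumption that the generation model and the subsequent image processing are available as Python callables, and that the processing target image has already been acquired from the image sensor (ST550) and the scales read from the NVM (ST556); it is not the firmware of the image pickup apparatus 10.

```python
import numpy as np

def image_synthesis_flow(target, generation_model, scale_1, scale_2, postprocess):
    # ST552/ST554: input the processing target image into the generation model
    # and receive the skin image quality adjustment image it outputs
    skin_adjusted = generation_model(target)
    # ST558/ST560: adjust each image with the 1st scale and the 2nd scale
    adjusted_ai = skin_adjusted.astype(np.float32) * scale_1
    adjusted_target = target.astype(np.float32) * scale_2
    # ST562: combine the two adjusted images into the composite image
    composite = np.clip(adjusted_ai + adjusted_target, 0, 255).astype(np.uint8)
    # ST564: apply the remaining image processing and output the processed image
    return postprocess(composite)
```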
As described above, in the image pickup apparatus 10 according to the present modification 11, the image quality of the skin region 128A of the processing target image 75A12 is adjusted in the AI scheme to generate the skin image quality adjustment image 86A12. Then, the skin image quality adjustment image 86A12 and the processing target image 75A12 are synthesized. This makes it possible to suppress, in the composite image 92L, an excess or deficiency in the amount of image quality adjustment performed by the AI scheme. As a result, the composite image 92L is an image in which the AI-scheme image quality adjustment is less noticeable than in the skin image quality adjustment image 86A12, and can be provided to a user who does not like the AI-scheme image quality adjustment to stand out excessively (for example, a user who does not want the color spot region 128A1 to be deleted completely).
Here, the embodiment in which the color spot region 128A1 is deleted has been described as an example, but the technique of the present invention is not limited to this. For example, the skin in the processing target image 75A12 may be beautified by changing the brightness of the skin region 128A or changing the color of the skin region 128A in the AI scheme. In this case as well, the processing from step ST556 to step ST564 is performed so that excessive beautification does not make the appearance of the skin of the person appearing in the image unnatural.
Hereinafter, for convenience of explanation, the processing target images 75A1 to 75A12 will be referred to as "processing target image 75A" unless they need to be described separately. In the following, for convenience of explanation, the ratios 90A to 90L will be referred to as "ratio 90" unless otherwise specified. In the following description, for convenience of description, the 1 st aberration correction image 86A1, the 1 st coloring image 86A2, the 1 st contrast adjustment image 86A3, the 1 st resolution adjustment image 86A4, the 1 st HDR image 86A5, the 1 st edge emphasized image 86A6, the 1 st point image adjustment image 86A7, the 1 st blurring image 86A8, the 1 st circular blurring image 86A9, the 1 st gradation adjustment image 86A10, the style change image 86A11, and the skin image quality adjustment image 86A12 will be referred to as "1 st image 86A". When the 2 nd aberration correction image 88A1, the 2 nd coloring image 88A2, the 2 nd contrast adjustment image 88A3, the 2 nd resolution adjustment image 88A4, the 2 nd HDR image 88A5, the 2 nd edge emphasis image 88A6, the 2 nd point image adjustment image 88A7, the 2 nd blur image 88A8, the 2 nd circular blur image 88A9, the 2 nd gradation adjustment image 88A10, the processing target image 75A11, and the processing target image 75A12 need not be described separately, they are referred to as "the 2 nd image 88A". In the following, for convenience of explanation, the generation models 82A1 to 82A12 will be referred to as "generation model 82A" unless otherwise specified. In the following, for convenience of explanation, the AI-scheme processing units 62A1 to 62A12 will be referred to as "AI-scheme processing unit 62A" unless otherwise specified. In the following, for convenience of explanation, the composite images 92A to 92L will be referred to as "composite image 92" unless they need to be described separately.
[ modification 12 ]
In the example shown in fig. 1 to 44, the description has been given of the embodiment in which the processor 62 generates the 1 st image 86A by performing a single process according to the purpose in the AI system, but the technique of the present invention is not limited thereto. For example, the processor 62 may perform a plurality of processes in the AI mode.
In this case, as an example, as shown in fig. 45, the AI-scheme processing unit 62A13 performs a plurality of processes 130 according to purposes on the processing target image 75A in the AI scheme. That is, the AI-scheme processing unit 62A13 performs processing using a plurality of generation models 82A on the processing target image 75A. A multiprocessing image 132 is generated by performing the plurality of processes 130 according to purposes on the processing target image 75A in the AI scheme. The multiprocessing image 132 and the 2 nd image 88A are synthesized at the ratio 90.
The plurality of processes 130 according to purposes include aberration correction processing 130A, point image adjustment processing 130B, gradation adjustment processing 130C, contrast adjustment processing 130D, dynamic range adjustment processing 130E, resolution adjustment processing 130F, edge emphasis processing 130G, sharpness adjustment processing 130H, circular blur generation processing 130I, blur application processing 130J, skin image quality adjustment processing 130K, coloring adjustment processing 130L, and style change processing 130M.
As an example of the aberration correction processing 130A, the processing performed by the AI-scheme processing unit 62A1 shown in fig. 4 is given. As an example of the point image adjustment processing 130B, the processing performed by the AI-scheme processing unit 62A7 shown in fig. 25 is given. As an example of the gradation adjustment processing 130C, the processing performed by the AI-scheme processing unit 62A10 shown in fig. 36 is given. As an example of the contrast adjustment processing 130D, the processing performed by the AI-scheme processing unit 62A3 is given. As an example of the dynamic range adjustment processing 130E, the processing performed by the AI-scheme processing unit 62A5 shown in fig. 19 is given. As an example of the resolution adjustment processing 130F, the processing performed by the AI-scheme processing unit 62A4 shown in fig. 16 is given. As an example of the edge emphasis processing 130G, the processing performed by the AI-scheme processing unit 62A6 shown in fig. 22 is given. As an example of the sharpness adjustment processing 130H, the processing performed by the AI-scheme processing unit 62A3 shown in fig. 14 is given. As an example of the circular blur generation processing 130I, the processing performed by the AI-scheme processing unit 62A9 shown in fig. 31 is given. As an example of the blur application processing 130J, the processing performed by the AI-scheme processing unit 62A8 shown in fig. 28 is given. As an example of the skin image quality adjustment processing 130K, the processing performed by the AI-scheme processing unit 62A12 shown in fig. 42 is given. As an example of the coloring adjustment processing 130L, the processing performed by the AI-scheme processing unit 62A2 shown in fig. 7 is given. As an example of the style change processing 130M, the processing performed by the AI-scheme processing unit 62A11 shown in fig. 39 is given.
The plurality of processes 130 according to purposes are performed in an order based on the degree of influence on the processing target image 75A. For example, the plurality of processes 130 according to purposes are performed stepwise from the process 130 having a small degree of influence on the processing target image 75A to the process 130 having a large degree of influence on the processing target image 75A. In the example shown in fig. 45, the processes are performed on the processing target image 75A in the order of the aberration correction processing 130A, the point image adjustment processing 130B, the gradation adjustment processing 130C, the contrast adjustment processing 130D, the dynamic range adjustment processing 130E, the resolution adjustment processing 130F, the edge emphasis processing 130G, the sharpness adjustment processing 130H, the circular blur generation processing 130I, the blur application processing 130J, the skin image quality adjustment processing 130K, the coloring adjustment processing 130L, and the style change processing 130M.
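The ordering rule can be illustrated as follows. The sketch assumes that each purpose-specific process is available as a callable paired with a numeric degree of influence; these names and scores are not part of the embodiment.

```python
def apply_in_order_of_influence(target_image, processes):
    """Apply the processes from the smallest to the largest degree of
    influence on the processing target image.

    processes: list of (influence_degree, process_fn) pairs, for example
        [(0.1, aberration_correction), (0.9, style_change), ...]
    Returns the multiprocessing image (132 in the embodiment).
    """
    image = target_image
    for _, process_fn in sorted(processes, key=lambda pair: pair[0]):
        image = process_fn(image)
    return image
```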
As described above, in the present modification 12, the multiprocessing image 132 obtained by performing the plurality of processes 130 according to purposes on the processing target image 75A in the AI scheme is combined with the 2 nd image 88A at the ratio 90, so that the same effects as those of the examples shown in fig. 1 to 44 can be obtained.
In addition, in the present modification 12, the plurality of processes 130 according to purposes are performed in an order based on the degree of influence on the processing target image 75A. Therefore, compared to the case where the plurality of processes 130 according to purposes are performed on the processing target image 75A in an order that does not take the degree of influence on the processing target image 75A into consideration, it is possible to suppress the appearance of the multiprocessing image 132 from becoming unnatural.
In the present modification 12, the plurality of processes 130 according to purposes are performed stepwise from the process 130 having a small degree of influence on the processing target image 75A to the process 130 having a large degree of influence on the processing target image 75A. Therefore, compared to the case where the plurality of processes 130 according to purposes are performed stepwise from the process having a large degree of influence to the process having a small degree of influence, it is possible to suppress the appearance of the multiprocessing image 132 from becoming unnatural.
[ modification 13 ]
In the examples shown in fig. 1 to 45, the embodiment in which the ratio 90 is determined according to an instruction from the user has been described, but the technique of the present invention is not limited to this, and the ratio 90 may be determined by another method. For example, when the difference between the processing target image 75A and the 1 st image 86A, or the difference between the 1 st image 86A and the 2 nd image 88A, is large, it can be determined that the influence of the AI-scheme processing on the 1 st image 86A is large, and when the difference is small, it can be determined that the influence is small. Therefore, the ratio 90 may also be determined from the difference between the processing target image 75A and the 1 st image 86A or the difference between the 1 st image 86A and the 2 nd image 88A.
In this case, for example, as shown in fig. 46, the processor 62 derives the ratio 90 from the difference 134 between the processing target image 75A and the 1 st image 86A. The ratio 90 may be calculated from an arithmetic expression having the difference 134 as an independent variable and the ratio 90 as a dependent variable, or the ratio 90 may be derived from a table in which the difference 134 and the ratio 90 are associated with each other. Also, a division value may be used instead of the difference 134. As an example of the division value used in place of the difference 134, there is a ratio of one of a statistical value (for example, an average pixel value) of the pixel values of a plurality of pixels included in the processing target image 75A (for example, all pixels, or a plurality of pixels constituting a region in which a main subject appears) and a statistical value (for example, an average pixel value) of the pixel values of a plurality of pixels included in the 1 st image 86A (for example, all pixels, or a plurality of pixels constituting a region in which a main subject appears) to the other.
For example, as shown in fig. 47, the processor 62 may derive the ratio 90 from the difference 136 between the 1 st image 86A and the 2 nd image 88A. In this case, the ratio 90 may be calculated from an arithmetic expression having the difference 136 as an independent variable and the ratio 90 as a dependent variable, or the ratio 90 may be derived from a table in which the difference 136 and the ratio 90 are associated with each other. Also, a division value may be used instead of the difference 136. As an example of the division value used in place of the difference 136, there is a ratio of one of a statistical value (for example, an average pixel value) of the pixel values of a plurality of pixels included in the 1 st image 86A (for example, all pixels, or a plurality of pixels constituting a region in which a main subject appears) and a statistical value (for example, an average pixel value) of the pixel values of a plurality of pixels included in the 2 nd image 88A (for example, all pixels, or a plurality of pixels constituting a region in which a main subject appears) to the other.
The ratio 90 may also be derived from both the difference 134 and the difference 136. In this case, the ratio 90 may be calculated from an arithmetic expression having the difference 134 and the difference 136 as independent variables and the ratio 90 as a dependent variable, or the ratio 90 may be derived from a table in which the difference 134 and the difference 136 are associated with the ratio 90.
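As one possible reading of figs. 46 and 47, the following sketch derives the ratio from the differences 134 and/or 136 using the mean absolute pixel difference. The mapping from difference to ratio is an arbitrary stand-in for the arithmetic expression or look-up table mentioned above, not a formula taken from the embodiment.

```python
import numpy as np

def derive_ratio(target, first_image, second_image=None):
    """Derive the 1st and 2nd scales from the difference 134 (target vs. 1st
    image) and, if given, the difference 136 (1st image vs. 2nd image)."""
    diff_134 = np.mean(np.abs(target.astype(np.float32) - first_image.astype(np.float32)))
    diff = diff_134
    if second_image is not None:
        diff_136 = np.mean(np.abs(first_image.astype(np.float32) - second_image.astype(np.float32)))
        diff = 0.5 * (diff_134 + diff_136)
    # The larger the difference (the stronger the influence of the AI-scheme
    # processing), the smaller the weight given to the 1st image.
    scale_1 = float(np.clip(1.0 - diff / 255.0, 0.0, 1.0))
    return scale_1, 1.0 - scale_1
```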
As described above, according to the present modification 13, the ratio 90 is determined based on the difference between the processing target image 75A and the 1 st image 86A and/or the difference between the 1 st image 86A and the 2 nd image 88A. Therefore, compared to the case where the ratio 90 is a fixed value determined without considering the 1 st image 86A, it is possible to suppress the appearance of the image obtained by combining the 1 st image 86A and the 2 nd image 88A from becoming unnatural due to the influence of the AI-scheme processing.
[ modification 14 ]
As an example, as shown in fig. 48, the processor 62 may adjust the ratio 90 based on the related information 138 related to the processing target image 75A. Here, as a 1 st example of the related information 138, information related to the sensitivity of the image sensor 20 (for example, ISO sensitivity) is given. As a 2 nd example of the related information 138, information related to the brightness of the processing target image 75A (for example, an average value, a median value, or a mode of the pixel values of the processing target image 75A) is given. As a 3 rd example of the related information 138, information indicating the spatial frequency of the processing target image 75A is given. As a 4 th example of the related information 138, a subject image appearing in the processing target image 75A (for example, a person image, a region image, or the like) is given.
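A possible sketch of such an adjustment is shown below; the thresholds, the adjustment amounts, and the dictionary keys are hypothetical examples, not values from the embodiment.

```python
def adjust_ratio_by_related_info(scale_1, related_info):
    """Adjust the weight of the 1st image using information related to the
    processing target image, e.g. {"iso": 6400, "mean_brightness": 40.0}."""
    if related_info.get("iso", 100) >= 3200:
        scale_1 -= 0.1  # high sensitivity: rely less on the AI-adjusted image
    if related_info.get("mean_brightness", 128.0) < 50.0:
        scale_1 -= 0.1  # dark scene: likewise reduce the AI contribution
    scale_1 = min(max(scale_1, 0.0), 1.0)
    return scale_1, 1.0 - scale_1

# Example: adjust_ratio_by_related_info(0.7, {"iso": 6400, "mean_brightness": 45.0})
# returns approximately (0.5, 0.5).
```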
As described above, according to the present modification 14, the ratio 90 is adjusted based on the related information 138 related to the processing target image 75A. Therefore, compared with the case where the ratio 90 is changed without considering the related information 138 at all, it is possible to suppress a decrease in the image quality of the image obtained by combining the 1 st image 86A and the 2 nd image 88A that is attributable to the conditions indicated by the related information 138.
[ other modifications ]
While the description has been given above by taking, as an example, the embodiment in which the AI-scheme processing unit 62A performs the processing using the generation model 82A, the AI-scheme processing unit 62A may selectively use a plurality of types of generation models 82A according to conditions. For example, the generation model 82A used by the AI-scheme processing unit 62A may be switched according to the imaging scene captured by the imaging device 10. The ratio 90 may also be changed according to the generation model 82A used by the AI-scheme processing unit 62A.
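Such scene-dependent switching could be expressed with simple look-up tables, as in the sketch below; the scene names, model identifiers, and ratio values are hypothetical.

```python
# Hypothetical tables mapping an imaging scene to a generation model and to
# the pair (1st scale, 2nd scale).
MODELS_BY_SCENE = {"portrait": "generation_model_portrait",
                   "landscape": "generation_model_landscape"}
SCALES_BY_SCENE = {"portrait": (0.6, 0.4),
                   "landscape": (0.8, 0.2)}

def select_model_and_scales(scene, default_scales=(0.5, 0.5)):
    model = MODELS_BY_SCENE.get(scene)  # None if no dedicated model exists
    scales = SCALES_BY_SCENE.get(scene, default_scales)
    return model, scales
```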
In the above, the embodiment in which the color image or the achromatic image captured by the imaging device 10 is used as the processing target image 75A has been described, but the technique of the present invention is not limited to this, and the processing target image 75A may be a distance image.
While the description has been given of the embodiment in which the 2 nd image 88A is obtained by performing the non-AI-scheme processing on the processing target image 75A, the technique of the present invention is not limited to this, and an image obtained by performing, on the processing target image 75A, the non-AI-scheme processing and processing that uses a learned model different from the generation model 82A may be used as the 2 nd image 88A.
While the embodiment in which the processor 62 of the image processing engine 12 included in the image pickup apparatus 10 performs the image combining process has been described above, the technique of the present invention is not limited to this, and the apparatus that performs the image combining process may be provided outside the image pickup apparatus 10. In this case, as an example, as shown in fig. 49, an imaging system 140 may be used. The imaging system 140 includes the imaging device 10 and an external device 142. The external device 142 is, for example, a server. The server is implemented, for example, by cloud computing. Here, cloud computing is illustrated, but this is only an example, and for example, a server may be implemented by a mainframe computer, or may be implemented by network computing such as fog computing, edge computing, or grid computing. Here, a server is exemplified as an example of the external device 142, but this is only an example, and at least one personal computer or the like may be used as the external device 142 instead of the server.
The external device 142 includes a processor 144, an NVM 146, a RAM 148, and a communication I/F 150, and the processor 144, the NVM 146, the RAM 148, and the communication I/F 150 are connected to each other via a bus 152. The communication I/F 150 is connected to the image pickup apparatus 10 via a network 154. The network 154 is, for example, the internet. The network 154 is not limited to the internet, and may be a WAN and/or a LAN such as an intranet.
The NVM 146 stores the image synthesis processing program 80, the generation model 82A, and the digital filter 84A. The processor 144 executes the image synthesis processing program 80 on the RAM 148. The processor 144 performs the above-described image synthesis processing in accordance with the image synthesis processing program 80 executed on the RAM 148. When performing the image synthesis processing, the processor 144 processes the processing target image 75A using the generation model 82A and the digital filter 84A as described in the above examples. The processing target image 75A is transmitted from the image pickup apparatus 10 to the external device 142 via the network 154, for example. The communication I/F 150 of the external device 142 receives the processing target image 75A. The processor 144 performs the image synthesis processing on the processing target image 75A received by the communication I/F 150. The processor 144 generates a composite image 92 by performing the image synthesis processing, and transmits the generated composite image 92 to the image pickup apparatus 10. The image pickup apparatus 10 receives the composite image 92 transmitted from the external device 142 by using the communication I/F 52 (refer to fig. 2).
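The exchange between the image pickup apparatus 10 and the external device 142 could, for example, run over HTTP as sketched below. The endpoint URL, field name, and use of the requests library are assumptions for illustration; the embodiment only requires that the processing target image 75A be sent over the network 154 and the composite image 92 be returned.

```python
import requests  # assumed HTTP client; any transport over the network 154 would do

def request_composite(image_bytes, server_url="https://example.com/synthesize"):
    """Send the processing target image to the external device and receive the
    composite image it generates (hypothetical endpoint and payload format)."""
    response = requests.post(server_url,
                             files={"processing_target": image_bytes},
                             timeout=30)
    response.raise_for_status()
    return response.content  # bytes of the composite image 92
```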
In the example shown in fig. 49, the external device 142 is an example of the "image processing device" and the "computer" according to the technology of the present invention, and the processor 144 is an example of the "processor" according to the technology of the present invention.
The image combining process may be performed by a plurality of devices including the imaging device 10 and the external device 142.
In the above, the processor 62 has been illustrated, but at least one other CPU, at least one GPU, and/or at least one TPU may be used instead of the processor 62 or in addition to the processor 62.
In the above, the embodiment in which the image synthesis processing program 80 is stored in the NVM 64 has been described, but the technique of the present invention is not limited to this. For example, the image synthesis processing program 80 may be stored in a portable non-transitory storage medium such as an SSD or a USB memory. The image synthesis processing program 80 stored in the non-transitory storage medium is installed in the image processing engine 12 of the image pickup apparatus 10. The processor 62 performs the image synthesis processing in accordance with the image synthesis processing program 80.
The image synthesis processing program 80 may be stored in a storage device of another computer, a server device, or the like connected to the image pickup apparatus 10 via a network, and the image synthesis processing program 80 may be downloaded and installed in the image processing engine 12 in response to a request from the image pickup apparatus 10.
It is not necessary to store the entire image synthesis processing program 80 in the storage device of another computer, a server device, or the like connected to the image pickup apparatus 10, or in the NVM 64; a part of the image synthesis processing program 80 may be stored instead.
The image pickup apparatus 10 shown in fig. 1 and 2 has the image processing engine 12 built therein, but the technique of the present invention is not limited to this, and the image processing engine 12 may be provided outside the image pickup apparatus 10, for example.
Although the image processing engine 12 is described above, the technique of the present invention is not limited to this, and a device including an ASIC, FPGA, and/or PLD may be applied instead of the image processing engine 12. Also, a combination of hardware and software structures may be used instead of the image processing engine 12.
As hardware resources for executing the above-described image synthesis processing, various processors shown below can be used. Examples of the processor include a CPU, which is a general-purpose processor that functions as a hardware resource for executing the image synthesis processing by executing software (i.e., a program). The processor may also be, for example, a dedicated circuit having a circuit configuration specifically designed to execute specific processing, such as an FPGA, a PLD, or an ASIC. A memory is built into or connected to each of the processors, and each processor executes the image synthesis processing by using the memory.
The hardware resource for performing the image synthesizing process may be constituted by one of these various processors, or may be constituted by a combination of two or more processors of the same type or different types (for example, a combination of a plurality of FPGAs or a combination of a CPU and an FPGA). Also, the hardware resource for performing the image synthesizing process may be one processor.
As an example of the configuration realized by one processor, first, there is a mode in which one processor is constituted by a combination of one or more CPUs and software, and this processor functions as the hardware resource for executing the image synthesis processing. Second, as typified by an SoC, there is a mode in which a processor that realizes, with one IC chip, the functions of the entire system including the plurality of hardware resources for executing the image synthesis processing is used. As described above, the image synthesis processing is realized by using one or more of the above-described various processors as hardware resources.
Further, as the hardware structure of these various processors, more specifically, an electric circuit in which circuit elements such as semiconductor elements are combined can be used. The above image synthesis processing is merely an example. Therefore, unnecessary steps may be deleted, new steps may be added, or the processing order may be changed without departing from the gist of the present invention.
The description and the illustrations shown above are detailed descriptions of the portions related to the technology of the present invention, and are merely examples of the technology of the present invention. For example, the above description of the configurations, functions, operations, and effects is a description of an example of the configurations, functions, operations, and effects of the portions related to the technology of the present invention. Therefore, needless to say, unnecessary parts may be deleted, new elements may be added, or replacements may be made in the description contents and the illustration contents shown above without departing from the gist of the technology of the present invention. In addition, in order to avoid complication and to facilitate understanding of the portions related to the technology of the present invention, descriptions of common technical knowledge and the like that do not require particular explanation in order to implement the technology of the present invention are omitted from the descriptions and illustrations shown above.
In the present specification, "A and/or B" has the same meaning as "at least one of A and B". That is, "A and/or B" means A alone, B alone, or a combination of A and B. In the present specification, when three or more items are expressed by being connected with "and/or", the same concept as "A and/or B" applies.
All documents, patent applications and technical standards described in this specification are incorporated by reference into this specification to the same extent as if each document, patent application and technical standard was specifically and individually indicated to be incorporated by reference.
Symbol description
10-image pickup apparatus, 12-image processing engine, 16-image pickup apparatus main body, 18-interchangeable lens, 18A-focus ring, 20-image sensor, 22-release button, 24-dial, 26-index key, 28-display, 30-touch panel, 32-touch screen display, 36-control device, 37-1 st actuator, 38-2 nd actuator, 39-3 rd actuator, 40-imaging lens, 40A-objective lens, 40B-focus lens, 40C-zoom lens, 40D-diaphragm, 40D1-aperture, 40D2-diaphragm blade, 44-system controller, 46-image memory, 48-UI-system device, 50-external I/F, 52, 150-communication I/F, 54-photoelectric conversion element driver, 62, 144-processor, 62A1-62A12-AI-scheme processing unit, 62B1-62B10-non-AI-scheme processing unit, 62C1-62C12-image adjustment unit, 62D1-62D12-combining unit, 64, 146-NVM, 66, 148-RAM, 68, 152-bus, 70-input/output interface, 72-photoelectric conversion element, 72A-light receiving surface, 74-A/D converter, 75A1-75A12-processing target image, 75A1a-image area, 75B-processed image, 76-receiving device, 78-hard key section, 80-image synthesis processing program, 82A1-82A12, 82A3a, 82A3b-generation model, 84A1-84A10, 84A2a, 84A3b-digital filter, 86A-1 st image, 88A-2 nd image, 90-ratio, 90A1-90L1-1 st scale, 90A2-90L2-2 nd scale, 92-composite image, 94, 98, 110, 116, 124, 128-person region, 96, 100, 126-background region, 104A, 106A-center pixel, 104B, 106B-adjacent pixel, 108-vehicle region, 112-edge region, 114-point image, 118-1 st circular blur, 120-2 nd circular blur, 128A-skin region, 128A1-color spot region, 130-processing according to purpose, 130A-aberration correction processing, 130B-point image adjustment processing, 130C-gradation adjustment processing, 130D-contrast adjustment processing, 130E-dynamic range adjustment processing, 130F-resolution adjustment processing, 130G-edge emphasis processing, 130H-sharpness adjustment processing, 130I-circular blur generation processing, 130J-blur application processing, 130K-skin image quality adjustment processing, 130L-coloring adjustment processing, 130M-style change processing, 132-multiprocessing image, 134, 136-difference, 138-related information, 140-imaging system, 142-external device, 154-network, OA-optical axis.

Claims (40)

1. An image processing apparatus includes a processor,
the processor performs the following processing:
acquiring a 1 st image and a 2 nd image, wherein the 1 st image is obtained by performing 1 st AI processing on a processing target image, and the 2 nd image is obtained without performing the 1 st AI processing on the processing target image; and
An excess or deficiency of the 1 st AI processing is adjusted by synthesizing the 1 st image and the 2 nd image.
2. The image processing apparatus according to claim 1, wherein,
the 2 nd image is an image obtained by performing a non-AI method process that does not use a neural network on the process target image.
3. An image processing apparatus includes a processor,
the processor performs the following processing:
acquiring a 1 st image and a 2 nd image, wherein the 1 st image is obtained by performing 1 st AI processing on a processing target image to adjust a non-noise element of the processing target image, and the 2 nd image is obtained without performing the 1 st AI processing on the processing target image; and
The non-noise element is adjusted by synthesizing the 1 st image and the 2 nd image.
4. The image processing apparatus according to claim 3, wherein,
the 2 nd image is an image in which the non-noise element is adjusted by performing a non-AI method process that does not use a neural network on the processing target image.
5. The image processing apparatus according to claim 3, wherein,
the 2 nd image is an image in which the non-noise element is not adjusted.
6. The image processing apparatus according to any one of claims 1 to 5, wherein,
the processor synthesizes the 1 st image and the 2 nd image in a ratio that adjusts the excess or deficiency of the 1 st AI process.
7. The image processing apparatus according to claim 6, wherein,
the processing target image is an image obtained by photographing by an imaging device,
the 1 st AI process includes a 1 st correction process of correcting, in an AI manner, a phenomenon that occurs in the processing target image due to characteristics of the image pickup device,
the 1 st image includes a 1 st correction image, the 1 st correction image being obtained by performing the 1 st correction processing,
the processor adjusts elements derived from the 1 st correction process by synthesizing the 1 st correction image and the 2 nd image at the ratio.
8. The image processing apparatus according to claim 7, wherein,
the processor performs a 2 nd correction process of correcting the phenomenon in a non-AI manner,
the 2 nd image includes a 2 nd correction image, the 2 nd correction image being obtained by performing the 2 nd correction processing,
The processor adjusts elements derived from the 1 st correction process by synthesizing the 1 st correction image and the 2 nd correction image at the ratio.
9. The image processing apparatus according to claim 7 or 8, wherein,
the characteristic includes an optical characteristic of the image pickup device.
10. The image processing apparatus according to claim 6, wherein,
the 1 st AI process includes a 1 st change process of changing, in an AI manner, factors that control a visual impression given to the processing target image,
the 1 st image includes a 1 st change image, the 1 st change image being obtained by performing the 1 st change process,
the processor adjusts an element derived from the 1 st modification process by synthesizing the 1 st modification image and the 2 nd image at the scale.
11. The image processing apparatus according to claim 10, wherein,
the processor performs a 2 nd change process of changing the factor in a non-AI manner,
the 2 nd image includes a 2 nd change image obtained by performing the 2 nd change process,
the processor adjusts an element derived from the 1 st modification process by synthesizing the 1 st modification image and the 2 nd modification image at the ratio.
12. The image processing apparatus according to claim 10 or 11, wherein,
the factors include sharpness, color, gradation, resolution, blur, a degree of emphasis of an edge region, a style, and/or image quality related to skin.
13. The image processing apparatus according to claim 6, wherein,
the processing target image is a captured image obtained by capturing, by the image capturing device, subject light imaged on the light receiving surface by a lens of the image capturing device,
the 1 st image includes a 1 st aberration correction image obtained by performing, as a process included in the 1 st AI process, an aberration region correction process that corrects, in an AI manner, a region of the captured image in which aberration of the lens is reflected,
the 2 nd image includes a 2 nd aberration correction image obtained by performing a process of correcting, in a non-AI manner, a region in the captured image in which an aberration of the lens is reflected,
the processor adjusts an element derived from the aberration region correction process by synthesizing the 1 st aberration correction image and the 2 nd aberration correction image in the ratio.
14. The image processing apparatus according to claim 6, wherein,
the 1 st image includes a 1 st coloring image, the 1 st coloring image being obtained by performing a coloring process as a process included in the 1 st AI process, the coloring process coloring the processing target image in an AI manner so that a 1 st region and a 2 nd region are distinguishable from each other, the 2 nd region being a region different from the 1 st region,
the 2 nd image includes a 2 nd coloring image obtained by performing a process of changing the color of the processing target image in a non-AI manner,
the processor adjusts elements derived from the coloring process by synthesizing the 1 st coloring image and the 2 nd coloring image at the ratio.
15. The image processing apparatus according to claim 14, wherein,
the 2 nd coloring image is an image obtained by performing a process of coloring the processing target image in a non-AI manner so that the 1 st region and the 2 nd region can be distinguished.
16. The image processing apparatus according to claim 14 or 15, wherein,
the processing target image is an image obtained by capturing a 1 st subject,
the 1 st region is a region within the processing target image in which a specific subject included in the 1 st subject appears.
17. The image processing apparatus according to claim 6, wherein,
the 1 st image includes a 1 st contrast adjustment image obtained by performing a 1 st contrast adjustment process as a process included in the 1 st AI process, the 1 st contrast adjustment process adjusting the contrast of the processing target image in an AI manner,
the 2 nd image includes a 2 nd contrast adjustment image obtained by performing a 2 nd contrast adjustment process of adjusting the contrast of the processing target image in a non-AI manner,
the processor adjusts an element derived from the 1 st contrast adjustment process by synthesizing the 1 st contrast adjustment image and the 2 nd contrast adjustment image at the ratio.
18. The image processing apparatus according to claim 17, wherein,
the processing target image is an image obtained by capturing a 2 nd subject,
the 1 st contrast adjustment process includes a 3 rd contrast adjustment process of adjusting a contrast of the processing object image in accordance with the 2 nd subject in an AI manner,
The 2 nd contrast adjustment process includes a 4 th contrast adjustment process of adjusting a contrast of the processing object image in accordance with the 2 nd subject in a non-AI manner,
the 1 st image includes a 3 rd contrast image, the 3 rd contrast image being obtained by performing the 3 rd contrast adjustment process,
the 2 nd image includes a 4 th contrast image, the 4 th contrast image being obtained by performing the 4 th contrast adjustment process,
the processor adjusts elements derived from the 3 rd contrast adjustment process by synthesizing the 3 rd contrast image and the 4 th contrast image at the ratio.
19. The image processing apparatus according to claim 17 or 18, wherein,
the 1 st contrast adjustment process includes a 5 th contrast adjustment process of adjusting, in an AI manner, the contrast of a center pixel included in the processing target image and a plurality of adjacent pixels surrounding the center pixel,
the 2 nd contrast adjustment process includes a 6 th contrast adjustment process that adjusts the contrast of the center pixel and the plurality of adjacent pixels in a non-AI manner,
The 1 st image includes a 5 th contrast image, the 5 th contrast image being obtained by performing the 5 th contrast adjustment process,
the 2 nd image includes a 6 th contrast image, the 6 th contrast image being obtained by performing the 6 th contrast adjustment process,
the processor adjusts elements derived from the 5 th contrast adjustment process by synthesizing the 5 th contrast image and the 6 th contrast image at the ratio.
20. The image processing apparatus according to claim 6, wherein,
the 1 st image includes a 1 st resolution adjustment image obtained by performing a 1 st resolution adjustment process as a process included in the 1 st AI process, the 1 st resolution adjustment process adjusting the resolution of the processing object image in an AI manner,
the 2 nd image includes a 2 nd resolution adjustment image, the 2 nd resolution adjustment image being obtained by performing a 2 nd resolution adjustment process that adjusts the resolution in a non-AI manner,
the processor adjusts an element derived from the 1 st resolution adjustment process by synthesizing the 1 st resolution adjustment image and the 2 nd resolution adjustment image in the ratio.
21. The image processing apparatus according to claim 20, wherein,
the 1 st resolution adjustment process is a process of super-resolving the processing target image in an AI manner,
the 2 nd resolution adjustment process is a process of super-resolving the processing target image in a non-AI manner.
22. The image processing apparatus according to claim 6, wherein,
the 1 st image includes a 1 st high dynamic range image, the 1 st high dynamic range image being obtained by performing an expansion process as a process included in the 1 st AI process, the expansion process expanding a dynamic range of the processing target image in an AI manner,
the 2 nd image includes a 2 nd high dynamic range image obtained by performing a process of expanding the dynamic range of the processing target image in a non-AI manner,
the processor adjusts an element derived from the expansion process by synthesizing the 1 st high dynamic range image and the 2 nd high dynamic range image in the ratio.
23. The image processing apparatus according to claim 6, wherein,
the 1 st image includes a 1 st edge emphasized image, the 1 st edge emphasized image being obtained by performing an emphasis process as a process included in the 1 st AI process, the emphasis process emphasizing, in an AI manner, an edge region within the processing target image more than a non-edge region, the non-edge region being a region different from the edge region,
The 2 nd image includes a 2 nd edge-emphasized image obtained by performing a process of emphasizing the edge region more than the non-edge region in a non-AI manner,
the processor adjusts an element derived from the emphasis process by synthesizing the 1 st edge emphasized image and the 2 nd edge emphasized image in the ratio.
24. The image processing apparatus according to claim 6, wherein,
the 1 st image includes a 1 st point image adjustment image obtained by performing a point image adjustment process as a process included in the 1 st AI process, the point image adjustment process adjusting, in an AI manner, a blur amount of a point image in the processing target image,
the 2 nd image includes a 2 nd point image adjustment image, the 2 nd point image adjustment image being obtained by performing a process of adjusting the blur amount in a non-AI manner,
the processor adjusts an element derived from the point image adjustment process by synthesizing the 1 st point image adjustment image and the 2 nd point image adjustment image in the ratio.
25. The image processing apparatus according to claim 6, wherein,
the processing target image is an image obtained by capturing a 3 rd subject,
The 1 st image includes a 1 st blurred image obtained by performing a blurring process as a process included in the 1 st AI process, the blurring process imparting a blur corresponding to the 3 rd subject to the processing target image in an AI manner,
the 2 nd image includes a 2 nd blurred image obtained by performing a process of imparting the blur to the processing target image in a non-AI manner,
the processor adjusts elements derived from the blurring process by synthesizing the 1 st blurred image and the 2 nd blurred image in the ratio.
26. The image processing apparatus according to claim 6, wherein,
the 1 st image includes a 1 st circular blur image obtained by performing a circular blur process as a process included in the 1 st AI process, the circular blur process imparting a 1 st circular blur to the processing target image in an AI manner,
the 2 nd image includes a 2 nd circular blur image obtained by performing processing of adjusting the 1 st circular blur from the processing target image in a non-AI manner or imparting a 2 nd circular blur to the processing target image in a non-AI manner,
The processor adjusts elements derived from the circular blur process by synthesizing the 1 st circular blur image and the 2 nd circular blur image in the ratio.
27. The image processing apparatus according to claim 6, wherein,
the 1 st image includes a 1 st gradation adjustment image obtained by performing a 1 st gradation adjustment process as a process included in the 1 st AI process, the 1 st gradation adjustment process adjusting the gradation of the processing target image in an AI manner,
the 2 nd image includes a 2 nd gradation adjustment image obtained by performing a 2 nd gradation adjustment process of adjusting the gradation of the processing target image in a non-AI manner,
the processor adjusts an element derived from the 1 st gradation adjustment process by synthesizing the 1 st gradation adjustment image and the 2 nd gradation adjustment image at the ratio.
28. The image processing apparatus according to claim 27, wherein,
the processing target image is an image obtained by capturing a 4 th subject,
the 1 st gradation adjustment process is a process of adjusting the gradation of the processing target image according to the 4 th subject in an AI manner,
The 2 nd gradation adjustment process is a process of adjusting the gradation of the processing target image according to the 4 th subject in a non-AI manner.
29. The image processing apparatus according to claim 6, wherein,
the 1 st image includes a style change image obtained by performing a style change process as a process included in the 1 st AI process, the style change process changing a style of the processing target image in an AI manner,
the processor adjusts an element derived from the style change process by synthesizing the style change image and the 2 nd image at the ratio.
30. The image processing apparatus according to claim 6, wherein,
the processing target image is an image obtained by photographing skin,
the 1 st image includes a skin image quality adjustment image obtained by performing a skin image quality adjustment process as a process included in the 1 st AI process, the skin image quality adjustment process adjusting, in an AI manner, image quality related to the skin appearing in the processing target image,
the processor adjusts an element derived from the skin image quality adjustment process by synthesizing the skin image quality adjustment image and the 2 nd image at the ratio.
31. The image processing apparatus according to claim 6, wherein,
the 1 st AI process includes a plurality of processes according to purposes performed in an AI manner,
the 1 st image includes a multiprocessing image obtained by subjecting the processing target image to the plurality of processes according to purposes,
the processor synthesizes the multiprocessing image and the 2 nd image at the ratio.
32. The image processing apparatus according to claim 31, wherein,
the plurality of processing according to the purpose is performed in order based on the degree of influence on the processing target image.
33. The image processing apparatus according to claim 32, wherein,
the plurality of processing according to the purpose is performed stepwise from the processing according to the purpose having a small degree of influence to the processing according to the purpose having a large degree of influence.
34. The image processing apparatus according to claim 6, wherein,
the ratio is determined from a difference between the processing object image and the 1 st image and/or a difference between the 1 st image and the 2 nd image.
35. The image processing apparatus according to claim 6, wherein,
The processor adjusts the scale according to related information related to the processing object image.
36. An image pickup device is provided with:
the image processing apparatus of any one of claims 1 to 35; and
The image sensor is used for detecting the position of the object,
the processing target image is an image obtained by photographing by the image sensor.
37. An image processing method, comprising the steps of:
acquiring a 1 st image and a 2 nd image, wherein the 1 st image is obtained by performing 1 st AI processing on a processing target image, and the 2 nd image is obtained without performing the 1 st AI processing on the processing target image; and
An excess or deficiency of the 1 st AI processing is adjusted by synthesizing the 1 st image and the 2 nd image.
38. An image processing method, comprising the steps of:
acquiring a 1 st image and a 2 nd image, wherein the 1 st image is obtained by performing 1 st AI processing on a processing target image to adjust a non-noise element of the processing target image, and the 2 nd image is obtained without performing the 1 st AI processing on the processing target image; and
The non-noise element is adjusted by synthesizing the 1 st image and the 2 nd image.
39. A storage medium storing a program for causing a computer to execute a process comprising the steps of:
acquiring a 1 st image and a 2 nd image, wherein the 1 st image is obtained by performing 1 st AI processing on a processing target image, and the 2 nd image is obtained without performing the 1 st AI processing on the processing target image; and
An excess or deficiency of the 1 st AI processing is adjusted by synthesizing the 1 st image and the 2 nd image.
40. A storage medium storing a program for causing a computer to execute a process comprising the steps of:
acquiring a 1 st image and a 2 nd image, wherein the 1 st image is obtained by performing 1 st AI processing on a processing target image to adjust a non-noise element of the processing target image, and the 2 nd image is obtained without performing the 1 st AI processing on the processing target image; and
The non-noise element is adjusted by synthesizing the 1 st image and the 2 nd image.
CN202310755658.2A 2022-06-30 2023-06-26 Image processing device, image capturing device, image processing method, and storage medium Pending CN117336423A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-106600 2022-06-30
JP2022106600A JP2024006056A (en) 2022-06-30 2022-06-30 Image processing device, image capturing device, image processing method, and program

Publications (1)

Publication Number Publication Date
CN117336423A true CN117336423A (en) 2024-01-02

Family

ID=89274316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310755658.2A Pending CN117336423A (en) 2022-06-30 2023-06-26 Image processing device, image capturing device, image processing method, and storage medium

Country Status (3)

Country Link
US (1) US20240005467A1 (en)
JP (1) JP2024006056A (en)
CN (1) CN117336423A (en)

Also Published As

Publication number Publication date
US20240005467A1 (en) 2024-01-04
JP2024006056A (en) 2024-01-17


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication