CN103731660A - System and method for optimizing image quality in a digital camera - Google Patents
- Publication number
- CN103731660A (application number CN201310479312.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- algorithm
- machine learning
- collection
- original image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/60—Image enhancement or restoration using machine learning, e.g. neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
The invention discloses a system and method for optimizing image quality in a digital camera. A digital camera includes an image optimization engine configured to generate an optimized image based on a raw image captured by the digital camera. The image optimization engine implements one or more machine learning engines in order to select rendering algorithms and rendering algorithm arguments that may then be used to render the raw image.
Description
Technical field
The present invention relates generally to digital photography and, more specifically, to a system and method for optimizing image quality in a digital camera.
Background
The popularity of digital photography has grown by leaps and bounds over the past ten years, largely because digital cameras are now included in mobile devices such as mobile phones. Improvements in digital camera technology have increased digital image resolution, and more and more people have come to rely entirely on phone-based digital cameras for all of their photography needs. Users expect the ability to quickly take photos of professional quality, and consequently most modern digital cameras now include an image signal processor (ISP) that implements numerous image processing algorithms intended to improve digital image quality.
Typically, the process by which a camera obtains an image from a sensor is called "image acquisition," and the process of applying image processing algorithms to a raw image to produce an optimized image is called "image rendering." The algorithms that provide this processing are called "rendering algorithms" and include, without limitation, noise reduction, automatic white balance adjustment, tint correction, sharpening, and color enhancement. The precise operation of each rendering algorithm is typically controlled by specific arguments. For example, a color saturation algorithm may be controlled by an argument giving the percentage above or below some standard saturation value. These algorithms may be implemented entirely in hardware, in firmware on a digital signal processor, in dedicated code on a programmable engine such as a graphics processing unit (GPU), in software, or in some combination of the above.
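To illustrate how an argument controls a rendering algorithm in the manner described above, the following is a minimal sketch of a color saturation adjustment driven by a percentage argument, where 100 means "no change." The function name and the per-pixel approach are illustrative assumptions, not taken from the patent.

```python
def adjust_saturation(pixel, percent):
    """Scale a pixel's saturation by `percent` relative to 100 (no change).

    A pixel is an (r, g, b) tuple with components in [0, 255]. Saturation is
    changed by moving each channel toward (percent < 100) or away from
    (percent > 100) the pixel's luma.
    """
    r, g, b = pixel
    luma = 0.299 * r + 0.587 * g + 0.114 * b  # ITU-R BT.601 luma weights
    scale = percent / 100.0
    clamp = lambda v: max(0, min(255, round(v)))
    return tuple(clamp(luma + (c - luma) * scale) for c in (r, g, b))

# percent=100 leaves the pixel unchanged.
print(adjust_saturation((200, 50, 50), 100))  # (200, 50, 50)
# percent=0 collapses the pixel to a neutral gray (r == g == b).
print(adjust_saturation((200, 50, 50), 0))
```

A real ISP would apply such an adjustment per pixel across the whole frame, usually in fixed-function hardware or GPU code rather than scalar Python.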
Each such algorithm is typically designed and implemented manually by an algorithm designer, based on that designer's experience with computational photography. Consequently, many designers become experts in narrow fields related to image quality; a given designer might, for example, be regarded as an expert in designing noise-reduction algorithms or white-balance control algorithms. Given the enormous number of algorithms required to produce high-quality images and to handle the variety of problems found in raw images from digital sensors, many digital camera vendors employ large teams of algorithm designers, each with deep experience designing and tuning algorithms that correct or enhance one particular problem. In many cases, these algorithms require input from the user of the digital camera in order to operate correctly. For example, a given algorithm may require information describing the type of scene the user wishes to photograph, such as "beach" or "forest," and then adjust certain parameters according to that user-supplied information.
One problem with the current state of digital photography is that digital camera vendors typically spend considerable time and money designing the algorithms to be included in an ISP, and yet many digital cameras still require significant user interaction to produce images of acceptable quality. This arrangement is not ideal because, despite the large investment of time and money in ISP development, most digital camera vendors have not achieved the "point and shoot" functionality users expect, in which high-quality images are produced with minimal user involvement.
What is needed in the art, therefore, is a more effective technique for implementing image processing algorithms in a digital camera.
Along another line of technological development, a rich collection of machine learning techniques has emerged that can train a machine learning engine (MLE) to implement arbitrarily complex functions and algorithms. Using techniques such as supervised learning, an MLE such as an artificial neural network (ANN), a support vector machine (SVM), or one of numerous other MLEs can be trained to classify new, previously unseen data. An MLE can also be trained to implement an input-to-output transformation on new, previously unseen inputs, producing output that closely matches the desired ideal output represented by the training data.
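As a concrete illustration of supervised learning on labeled data, the sketch below trains a minimal perceptron, one of the simplest artificial neural network models, on a few labeled examples and then applies it to inputs it has never seen. All names and the choice of the perceptron are illustrative assumptions; the patent does not prescribe any specific learning algorithm.

```python
def train_perceptron(samples, labels, epochs=20):
    """Fit a perceptron to labeled (input, label) pairs via the classic
    error-driven update rule; labels are 0 or 1."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # nonzero only when the prediction is wrong
            w = [wi + err * xi for wi, xi in zip(w, x)]
            b += err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Train on four labeled points (logical AND), then classify unseen inputs.
w, b = train_perceptron([[0, 0], [1, 0], [0, 1], [1, 1]], [0, 0, 0, 1])
print(predict(w, b, [0.9, 0.9]))  # 1
print(predict(w, b, [0.1, 0.1]))  # 0
```

The same train-then-generalize pattern underlies the far more capable ANNs and SVMs mentioned above.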
Summary of the invention
One embodiment of the present invention sets forth a method for rendering an image, comprising acquiring a raw image via an optical sensor included in a digital camera, generating a set of image statistics for the raw image based on a set of pixel values associated with the raw image, causing a first machine learning engine to select a rendering algorithm and a set of rendering algorithm arguments corresponding to the selected rendering algorithm, and rendering an optimized image by processing the raw image using the rendering algorithm and the set of rendering algorithm arguments.
One advantage of the techniques described herein is that the machine learning engines within the image optimization engine can be trained to generate images without requiring a team of designers to create and tune a large collection of algorithms. Further, the user of the digital camera can obtain superior optimized images with fewer manual camera controls, thereby improving the user experience.
Brief description of the drawings
So that the manner in which the above-recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
Fig. 1 is a block diagram illustrating a computer system configured to implement one or more aspects of the present invention;
Fig. 2 is a block diagram of a parallel processing subsystem for the computer system of Fig. 1, according to one embodiment of the present invention;
Fig. 3 is a block diagram illustrating a digital camera, according to one embodiment of the present invention;
Fig. 4A is a schematic diagram illustrating an image optimization engine configured to process digital images, according to one embodiment of the present invention;
Fig. 4B is a schematic diagram illustrating a technique for training the image optimization engine shown in Fig. 4A, according to one embodiment of the present invention;
Fig. 5A is a schematic diagram illustrating another embodiment of the image optimization engine shown in Fig. 3, according to one embodiment of the present invention;
Fig. 5B is a schematic diagram illustrating a technique for training the image optimization engine shown in Fig. 5A, according to one embodiment of the present invention;
Fig. 6 is a flow diagram of method steps for processing a digital image using the image optimization engine of Figs. 4A and 4B, according to one embodiment of the present invention;
Fig. 7 is a flow diagram of method steps for processing a digital image using the image optimization engine of Figs. 5A and 5B, according to one embodiment of the present invention;
Fig. 8 is a flow diagram of method steps for training the image optimization engine of Figs. 4A and 4B, according to one embodiment of the present invention; and
Fig. 9 is a flow diagram of method steps for training the image optimization engine of Figs. 5A and 5B, according to one embodiment of the present invention.
Detailed description
In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one of skill in the art that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the present invention.
System overview
Fig. 1 is a block diagram illustrating a computer system 100 configured to implement one or more aspects of the present invention. Computer system 100 includes a central processing unit (CPU) 102 and a system memory 104 that includes a device driver 103. CPU 102 and system memory 104 communicate via an interconnection path that may include a memory bridge 105. Memory bridge 105, which may be, for example, a Northbridge chip, is connected via a bus or other communication path 106 (for example, a HyperTransport link) to an input/output (I/O) bridge 107. I/O bridge 107, which may be, for example, a Southbridge chip, receives user input from one or more user input devices 108 (for example, a keyboard or a mouse) and forwards the input to CPU 102 via path 106 and memory bridge 105. A parallel processing subsystem 112 is coupled to memory bridge 105 via a bus or other communication path 113 (for example, a Peripheral Component Interconnect (PCI) Express, Accelerated Graphics Port (AGP), or HyperTransport link); in one embodiment, parallel processing subsystem 112 is a graphics subsystem that delivers pixels to a display device 110 (for example, a conventional monitor based on a cathode ray tube (CRT) or liquid crystal display (LCD)). A system disk 114 is also connected to I/O bridge 107. A switch 116 provides connections between I/O bridge 107 and other components such as a network adapter 118 and various add-in cards 120 and 121. Other components (not explicitly shown), including Universal Serial Bus (USB) or other port connections, compact disc (CD) drives, digital video disc (DVD) drives, film recording devices, and the like, may also be connected to I/O bridge 107. The communication paths interconnecting the various components in Fig. 1 may be implemented using any suitable protocols, such as PCI, PCI Express (PCIe), AGP, HyperTransport, or any other bus or point-to-point communication protocol, and, as is known in the art, connections between different devices may use different protocols.
In one embodiment, parallel processing subsystem 112 incorporates circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitutes a graphics processing unit (GPU). In another embodiment, parallel processing subsystem 112 incorporates circuitry optimized for general-purpose processing while preserving the underlying computational architecture, described in greater detail herein. In yet another embodiment, parallel processing subsystem 112 may be integrated with one or more other system elements, such as memory bridge 105, CPU 102, and I/O bridge 107, to form a system on chip (SoC).
It will be appreciated that the system shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of CPUs 102, and the number of parallel processing subsystems 112, may be modified as desired. For instance, in some embodiments, system memory 104 is connected to CPU 102 directly rather than through a bridge, and other devices communicate with system memory 104 via memory bridge 105 and CPU 102. In other alternative topologies, parallel processing subsystem 112 is connected to I/O bridge 107 or directly to CPU 102, rather than to memory bridge 105. In still other embodiments, I/O bridge 107 and memory bridge 105 might be integrated into a single chip. Large embodiments may include two or more CPUs 102 and two or more parallel processing subsystems 112. The particular components shown herein are optional; for instance, any number of add-in cards or peripheral devices might be supported. In some embodiments, switch 116 is eliminated, and network adapter 118 and add-in cards 120, 121 connect directly to I/O bridge 107.
Fig. 2 illustrates a parallel processing subsystem 112, according to one embodiment of the present invention. As shown, parallel processing subsystem 112 includes one or more parallel processing units (PPUs) 202, each of which is coupled to a local parallel processing (PP) memory 204. In general, a parallel processing subsystem includes a number U of PPUs, where U ≥ 1. (Herein, multiple instances of like objects are denoted with reference numbers identifying the object and parenthetical numbers identifying the instance where needed.) PPUs 202 and parallel processing memories 204 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or memory devices, or in any other technically feasible fashion.
Referring again to Fig. 1, in some embodiments, some or all of the PPUs 202 in parallel processing subsystem 112 are graphics processors with rendering pipelines that can be configured to perform various tasks related to generating pixel data from graphics data supplied by CPU 102 and/or system memory 104 via memory bridge 105 and bus 113, interacting with local parallel processing memory 204 (which can be used as graphics memory including, for example, a conventional frame buffer) to store and update pixel data, delivering pixel data to display device 110, and the like. In some embodiments, parallel processing subsystem 112 may include one or more PPUs 202 that operate as graphics processors and one or more other PPUs 202 that are used for general-purpose computations. The PPUs may be identical or different, and each PPU may have its own dedicated parallel processing memory device(s) or no dedicated parallel processing memory device(s). One or more PPUs 202 may output data to display device 110, or each PPU 202 may output data to one or more display devices 110.
In operation, CPU 102 is the master processor of computer system 100, controlling and coordinating the operations of other system components. In particular, CPU 102 issues commands that control the operation of PPUs 202. In some embodiments, CPU 102 writes a stream of commands for each PPU 202 to a pushbuffer (not explicitly shown in Fig. 1 or Fig. 2), which may be located in system memory 104, parallel processing memory 204, or another storage location accessible to both CPU 102 and PPU 202. PPU 202 reads the command stream from the pushbuffer and then executes commands asynchronously relative to the operation of CPU 102.
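The pushbuffer mechanism described above can be modeled conceptually as a command queue that the CPU fills and the PPU later drains on its own schedule. The toy sketch below (hypothetical names, not actual driver code) captures only this producer/consumer decoupling:

```python
from collections import deque

class Pushbuffer:
    """Toy model of the command stream described above: the CPU appends
    commands, and the PPU drains them asynchronously with respect to the
    CPU's own execution."""
    def __init__(self):
        self.commands = deque()

    def write(self, cmd):
        # CPU side: enqueue a command and return immediately.
        self.commands.append(cmd)

    def execute_pending(self):
        # PPU side: drain and "execute" whatever has accumulated so far.
        executed = []
        while self.commands:
            executed.append(self.commands.popleft())
        return executed

pb = Pushbuffer()
pb.write("set_state")
pb.write("draw")
print(pb.execute_pending())  # ['set_state', 'draw']
```

In a real system the buffer lives in memory shared between CPU and PPU, and both sides run concurrently; the sketch serializes that for clarity.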
Referring back now to Fig. 2, each PPU 202 includes an I/O unit 205 that communicates with the rest of computer system 100 via communication path 113, which connects to memory bridge 105 (or, in one alternative embodiment, directly to CPU 102). The connection of PPU 202 to the rest of computer system 100 may also be varied. In some embodiments, parallel processing subsystem 112 is implemented as an add-in card that can be inserted into an expansion slot of computer system 100. In other embodiments, a PPU 202 can be integrated on a single chip with a bus bridge, such as memory bridge 105 or I/O bridge 107. In still other embodiments, some or all elements of PPU 202 may be integrated on a single chip with CPU 102.
In one embodiment, communication path 113 is a PCIe link, in which dedicated lanes are allocated to each PPU 202, as is known in the art. Other communication paths may also be used. I/O unit 205 generates packets (or other signals) for transmission on communication path 113 and also receives all incoming packets (or other signals) from communication path 113, directing the incoming packets to appropriate components of PPU 202. For example, commands related to processing tasks may be directed to a host interface 206, while commands related to memory operations (for example, reading from or writing to parallel processing memory 204) may be directed to a memory crossbar unit 210. Host interface 206 reads each pushbuffer and outputs the work specified by the pushbuffer to a front end 212.
Advantageously, each PPU 202 implements a highly parallel processing architecture. As shown in detail, PPU 202(0) includes a processing cluster array 230 having a number C of general processing clusters (GPCs) 208, where C ≥ 1. Each GPC 208 is capable of executing a large number (for example, hundreds or thousands) of threads concurrently, where each thread is an instance of a program. In various applications, different GPCs 208 may be allocated for processing different types of programs or for performing different types of computations. For example, in a graphics application, a first set of GPCs 208 may be allocated to perform tessellation operations and to produce primitive topologies for patches, and a second set of GPCs 208 may be allocated to perform tessellation shading to evaluate patch parameters for the primitive topologies and to determine vertex positions and other per-vertex attributes. The allocation of GPCs 208 may vary depending on the workload arising for each type of program or computation.
GPCs 208 receive processing tasks to be executed via a work distribution unit 200, which receives commands defining processing tasks from front end unit 212. Processing tasks include indices of data to be processed, for example, surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how the data is to be processed (for example, what program is to be executed). Work distribution unit 200 may be configured to fetch the indices corresponding to the tasks, or work distribution unit 200 may receive the indices from front end 212. Front end 212 ensures that GPCs 208 are configured to a valid state before the processing specified by the pushbuffers is initiated.
When PPU 202 is used for graphics processing, for example, the processing workload for each patch is divided into approximately equal-sized tasks to enable distribution of the tessellation processing to multiple GPCs 208. Work distribution unit 200 may be configured to produce tasks at a frequency capable of providing tasks to multiple GPCs 208 for processing. By contrast, in conventional systems, processing is typically performed by a single processing engine, while the other processing engines remain idle, waiting for the single processing engine to complete its tasks before beginning their processing tasks. In some embodiments of the present invention, portions of GPCs 208 are configured to perform different types of processing. For example, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading in screen space to produce a rendered image. Intermediate data produced by GPCs 208 may be stored in buffers to allow the intermediate data to be transmitted between GPCs 208 for further processing.
Memory interface 214 includes a number D of partition units 215 that are each directly coupled to a portion of parallel processing memory 204, where D ≥ 1. As shown, the number of partition units 215 generally equals the number of DRAMs 220; in other embodiments, the number of partition units 215 may not equal the number of memory devices. Persons skilled in the art will appreciate that dynamic random access memories (DRAMs) 220 may be replaced with other suitable storage devices and can be of generally conventional design; a detailed description is therefore omitted. Render targets, such as frame buffers or texture maps, may be stored across DRAMs 220, allowing partition units 215 to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processing memory 204.
Any one of GPCs 208 may process data to be written to any of the DRAMs 220 within parallel processing memory 204. Crossbar unit 210 is configured to route the output of each GPC 208 to the input of any partition unit 215 or to another GPC 208 for further processing. GPCs 208 communicate with memory interface 214 through crossbar unit 210 to read from or write to various external memory devices. In one embodiment, crossbar unit 210 has a connection to memory interface 214 to communicate with I/O unit 205, as well as a connection to local parallel processing memory 204, thereby enabling the processing cores within the different GPCs 208 to communicate with system memory 104 or other memory that is not local to PPU 202. In the embodiment shown in Fig. 2, crossbar unit 210 is directly connected with I/O unit 205. Crossbar unit 210 may use virtual channels to separate traffic streams between the GPCs 208 and partition units 215.
In addition, GPCs 208 can be programmed to execute processing tasks relating to a wide variety of applications, including but not limited to linear and nonlinear data transforms, filtering of video and/or audio data, modeling operations (for example, applying laws of physics to determine the position, velocity, and other attributes of objects), image rendering operations (for example, tessellation shader, vertex shader, geometry shader, and/or pixel shader programs), and so on. PPUs 202 may transfer data from system memory 104 and/or local parallel processing memories 204 into internal (on-chip) memory, process the data, and write result data back to system memory 104 and/or local parallel processing memories 204, where such data can be accessed by other system components, including CPU 102 or another parallel processing subsystem 112.
A PPU 202 may be provided with any amount of local parallel processing memory 204, including no local memory, and may use local memory and system memory in any combination. For instance, a PPU 202 can be a graphics processor in a unified memory architecture (UMA) embodiment. In such embodiments, little or no dedicated graphics (parallel processing) memory would be provided, and PPU 202 would use system memory exclusively or almost exclusively. In UMA embodiments, a PPU 202 may be integrated into a bridge chip or processor chip, or may be provided as a discrete chip with a high-speed link (for example, PCIe) connecting the PPU 202 to system memory via a bridge chip or other communication means.
As noted above, any number of PPUs 202 can be included in a parallel processing subsystem 112. For instance, multiple PPUs 202 can be provided on a single add-in card, multiple add-in cards can be connected to communication path 113, or one or more PPUs 202 can be integrated into a bridge chip. PPUs 202 in a multi-PPU system may be identical to or different from one another; for instance, different PPUs 202 might have different numbers of processing cores, different amounts of local parallel processing memory, and so on. Where multiple PPUs 202 are present, those PPUs may be operated in parallel to process data at a higher throughput than is possible with a single PPU 202. Systems incorporating one or more PPUs 202 may be implemented in a variety of configurations and form factors, including desktop, laptop, or handheld personal computers, servers, workstations, game consoles, embedded systems, and the like.
Optimizing digital images
Fig. 3 is a block diagram 300 illustrating a digital camera 302, according to one embodiment of the present invention. Digital camera 302 may be included within a mobile device, such as a cell phone or a tablet computer, or may represent a device dedicated to digital photography. By operating digital camera 302, a user may capture digital images. As shown, digital camera 302 includes a CPU 304, a PPU 306, an optical sensor 308, a memory 312, and input/output (I/O) devices 310. CPU 304 may be substantially similar to CPU 102 shown in Fig. 1, and PPU 306 may be substantially similar to PPU 202 shown in Fig. 2.
Memory 312 may be any type of unit capable of storing data, including random access memory (RAM), read-only memory (ROM), one or more hardware/software registers, a buffer, and so forth. As shown, memory 312 includes raw images 320, an image optimization engine 322, and optimized images 324. When a user of digital camera 302 wishes to capture a digital image, the user manipulates I/O devices 310 to cause optical sensor 308 to receive light waves (that is, the user may press the shutter release button to "take a picture"). Optical sensor 308 then causes a representation of the collected light waves to be written to memory 312 as one of raw images 320. CPU 304 and/or PPU 306 may participate in processing the signals generated by optical sensor 308 to produce raw images 320.
The image optimization engine (IOE) 322 may then post-process a raw image 320 to render an optimized image 324. IOE 322 may be a software program that resides in memory 312, as shown, and may be executed by CPU 304 and/or PPU 306 to process raw images 320. Alternatively, IOE 322 may be a hardware unit embedded within digital camera 302 and coupled to CPU 304/PPU 306, optical sensor 308, memory 312, and I/O devices 310. As described in greater detail below in conjunction with Figs. 4A-10, IOE 322 may implement a variety of different techniques for processing raw images 320 to produce optimized images 324.
Fig. 4A is a schematic diagram 400 illustrating one embodiment of the IOE 322 shown in Fig. 3. As shown, schematic diagram 400 includes a raw image 320, IOE 322, and an optimized image 324, also shown in Fig. 3. As mentioned above, IOE 322 is configured to generate the optimized image 324 by processing the raw image 320.
For a given raw image 320, IOE 322 first generates raw image statistics 402. Raw image statistics 402 may include a set of different values, where each value corresponds to a different statistic that can be computed for a given image. Raw image statistics 402 may include a wide variety of statistics, including values representing the color distribution, luminance distribution, contrast, saturation, amount of exposure, and/or other statistics associated with the image.
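The kinds of statistics described above can be sketched concretely. The following example (illustrative names and choices, not taken from the patent) computes mean luminance, contrast as standard deviation, and a coarse luminance histogram for a grayscale image given as a flat list of intensities:

```python
def image_statistics(pixels):
    """Compute a small statistics set for a grayscale image given as a flat
    list of intensities in [0, 255]: mean luminance, contrast (standard
    deviation), and a coarse 4-bin histogram of the luminance distribution."""
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    hist = [0, 0, 0, 0]
    for p in pixels:
        hist[min(p // 64, 3)] += 1  # 64-wide bins; 255 falls in the last bin
    return {"mean": mean, "contrast": variance ** 0.5, "histogram": hist}

stats = image_statistics([0, 64, 128, 255])
print(stats["mean"])       # 111.75
print(stats["histogram"])  # [1, 1, 1, 1]
```

For a color image, analogous statistics would be computed per channel to capture the color distribution as well.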
IOE 322 is configured to process raw image 320 and raw image statistics 402 using a machine learning engine (MLE) 404. MLE 404 implements one or more machine learning algorithms that, generally speaking, may be used to compute one or more output values based on one or more input values. MLE 404 may include decision trees, artificial neural networks, Bayesian networks, combinations of different machine learning algorithms, and so forth, and may be trained using one or more supervised learning techniques, as discussed in greater detail below. IOE 322 implements MLE 404 to generate a set of render control parameters 406 based on raw image statistics 402 and, optionally, based on raw image 320.

A subset of render control parameters 406 includes render algorithm selectors 406A, each of which corresponds to a different render algorithm 408; the remaining render control parameters 406 include render algorithm arguments 406B to be provided to render algorithms 408. IOE 322 selects the algorithms 408 that correspond to render algorithm selectors 406A and then applies the selected algorithms, with the specified arguments 406B, to the raw image to render the optimized image. IOE 322 may repeat this process to render an optimized image 324 for each of raw images 320. In one embodiment, IOE 322 processes a given raw image 320 using the raw image statistics 402 associated with that image as well as the raw image statistics associated with previous and/or subsequent images.
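The selector/argument split can be sketched as a dispatch table: the selector indexes into a set of rendering algorithms, and the arguments are forwarded to whichever algorithm is chosen. All names and the two toy algorithms below are illustrative assumptions:

```python
def render_gamma(pixel, gamma=1.0):
    """Per-pixel gamma correction on a value in [0, 1]."""
    return pixel ** gamma

def render_gain(pixel, gain=1.0):
    """Per-pixel linear gain, clamped to [0, 1]."""
    return min(1.0, pixel * gain)

# Analogous to render algorithms 408: selector values map to algorithms.
RENDER_ALGORITHMS = {0: render_gamma, 1: render_gain}

def render(raw_pixels, selector, arguments):
    """Apply the algorithm chosen by `selector` (analogous to 406A),
    with the supplied arguments (analogous to 406B), to every pixel."""
    algorithm = RENDER_ALGORITHMS[selector]
    return [algorithm(p, **arguments) for p in raw_pixels]

out = render([0.25, 0.5], selector=1, arguments={"gain": 2.0})
```

The learning engine's job, in this framing, is simply to emit the `selector` and `arguments` values for each image.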
As described above, MLE 404 is configured to implement one or more machine learning algorithms and is trained by implementing one or more supervised learning techniques, as described in greater detail below in conjunction with Figure 4B.

Figure 4B is a conceptual diagram 450 illustrating a technique for training MLE 404, according to one embodiment of the invention. For each raw image 320, an annotated "ideal" optimized image 424 is provided for training MLE 404. As shown, for each raw image 320, raw image statistics 402 are generated and provided to MLE 404, optionally together with the raw image itself. MLE 404 is initialized with a set of values that may be random or more carefully selected in a manner consistent with the state of the art in machine learning. MLE 404 then generates render algorithm selectors 406A, used to select render algorithms 408, and generates the render algorithm arguments 406B supplied to those algorithms. Algorithms 408 then render an optimized image 324, which is compared with the annotated "ideal" optimized image 424, i.e., the image that would ideally be rendered from raw image 320. Training engine 405 then computes the deviation between optimized image 324 and annotated optimized image 424 and, based on that deviation, computes improved parameters for MLE 404. MLE 404 may subsequently generate improved render control parameters 406.

The process outlined above is iterated until the deviation computed by training engine 405 falls below some desired tolerance, at which point the optimized image 324 derived from render control parameters 406 approximates the annotated "ideal" optimized image 424. For example, this process may be applied to train MLE 404 to select a tone-mapping curve that causes the rendered optimized image 324 to most closely approximate the annotated "ideal" optimized image 424 in the training set, so that when a raw image 320 that is not in the training set is provided to MLE 404, MLE 404 generates render control parameters 406 that will produce an optimized image similar to an "ideal" optimized image.

In one embodiment, the annotated optimized images 424 are generated manually, by collecting human-generated ratings of images produced by alternative sets of render control parameters 406. In another embodiment, MLE 404 may be trained continuously by receiving input from the user of digital camera 302, where that input reflects the perceived quality of the optimized images 324 generated using a given set of render control parameters 406. By repeating this process for different render control parameters and different raw images, training engine 405 may adjust the weights associated with MLE 404 to more effectively select render algorithm selectors 406A and render algorithm arguments 406B. In this manner, MLE 404 may be trained to predict the preferences of the user.
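The iterate-until-tolerance loop of Figure 4B reduces, in the simplest case, to adjusting a parameter until the rendered output is close enough to the annotated target. Below is a toy sketch in which a single learnable gain stands in for the weights of MLE 404, and gradient descent on the squared deviation stands in for the training engine; all of this is an illustrative assumption, not the patent's method:

```python
def train_gain(raw, target, lr=0.1, tolerance=1e-6, max_iters=1000):
    """Fit a gain so that the rendered image (raw * gain) approaches
    the annotated 'ideal' target, stopping at the desired tolerance."""
    gain = 1.0  # initialization, per the training-engine description
    for _ in range(max_iters):
        rendered = [p * gain for p in raw]
        # Deviation between rendered and annotated images (MSE).
        deviation = sum((r - t) ** 2 for r, t in zip(rendered, target)) / len(raw)
        if deviation < tolerance:
            break
        # d(deviation)/d(gain) for the squared-error deviation above.
        grad = sum(2 * (r - t) * p
                   for r, t, p in zip(rendered, target, raw)) / len(raw)
        gain -= lr * grad
    return gain

# Target is the raw image brightened by 1.5x; training should recover ~1.5.
learned = train_gain([0.2, 0.4, 0.6], [0.3, 0.6, 0.9])
```

A real MLE would adjust many weights rather than one scalar, but the stopping criterion (deviation below a tolerance) is the same.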
In one embodiment, MLE 404 is trained in place within digital camera 302. In another embodiment, MLE 404 is trained offline on computer system 100 or parallel processing subsystem 112, and the trained MLE 404 is then implemented within digital camera 302. In yet another embodiment, MLE 404 is trained continuously on a remotely located, cloud-based computing system. By implementing the techniques described above, the quality of the output images rendered by digital camera 302 can be significantly improved. Figure 5A outlines another technique for improving image quality.

Figure 5A is a conceptual diagram 500 illustrating another embodiment of the IOE 322 shown in Figure 3, according to one embodiment of the invention. As shown, IOE 322 is configured to render optimized images 324 based on raw images 320. IOE 322 is configured to generate raw image statistics 502 for a given raw image 320 in the same manner described in conjunction with Figures 4A-4B. Accordingly, raw image statistics 502 may be substantially similar to raw image statistics 402 shown in Figures 4A-4B. IOE 322 is configured to process raw image statistics 502, and optionally raw image 320, using MLE 504 to generate derived image statistics 512. Derived image statistics 512 represent statistics inferred based on raw image statistics 502 and may represent qualities of the raw image, such as scene type (e.g., "beach", "forest", etc.), depth of focus, and so forth. Generally speaking, derived image statistics 512 represent properties of the external environment.
MLE 504 may be substantially similar to MLE 404 shown in Figures 4A-4B and, accordingly, may be implemented with a number of different machine learning algorithms and may be trained to generate derived image statistics 512 by applying various supervised learning techniques. Figure 5B is a conceptual diagram 550 illustrating a technique for training MLE 504, according to one embodiment of the invention. For each raw image 320, an annotated "ideal" optimized image 524 is provided for training MLE 504. As shown, for each raw image 320, raw image statistics 502 are generated and, optionally together with the raw image, provided to MLE 504, which is initialized in substantially the same manner as MLE 404 in Figure 4A. Based on raw image statistics 502 and, optionally, raw image 320, MLE 504 computes derived statistics 512, which are then provided to MLE 514, which in turn computes render control parameters 506 that select render algorithms 508 and supply their input arguments. The algorithms again compute an optimized image 324 that is compared with the annotated "ideal" optimized image 524; the result of that comparison is fed back to MLE 504, and the process is iterated until the optimized image approximates the annotated "ideal" optimized image 524 as desired.

Once derived image statistics 512 have been generated by MLE 504, those statistics, along with raw image 320 and raw image statistics 502, are processed by MLE 514 to generate render control parameters 506. Similar to render control parameters 406 shown in Figure 4, each of render algorithm selectors 506A is a value corresponding to a render algorithm 508, and each of render algorithm arguments 506B is provided to a selected render algorithm 508. In the same manner described in conjunction with Figure 4A, IOE 322 selects algorithms 508, provides their arguments, and then applies the algorithms to the raw image to render the optimized image. IOE 322 may repeat this process to generate an optimized image 324 for each of raw images 320.

MLE 514 may likewise be substantially similar to MLE 404 shown in Figures 4A-4B and, accordingly, may be implemented with a number of different machine learning algorithms and may be trained to generate render control parameters 506 by applying various supervised learning techniques. For example, MLE 514 may be trained using a set of "exemplary" raw images 320 with corresponding annotated "ideal" optimized images 524. For each raw image 320, raw statistics 502 and derived statistics 512 are generated and provided as input to MLE 514, which generates render control parameters 506 that select render algorithms 508 and supply their inputs; an optimized image 324 is then generated. Optimized image 324 is again "compared" by training engine 505 with the annotated ideal optimized image 524. Training engine 505 computes the deviation between optimized image 324 and annotated optimized image 524 and, based on that deviation, computes improved parameters for MLEs 504 and 514. MLE 514 may subsequently generate improved render control parameters 506. The process is again iterated until the deviation computed by training engine 505 falls below some desired tolerance, at which point the optimized image 324 derived from render control parameters 506 approximates the annotated "ideal" optimized image 524.
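The two-stage arrangement, with MLE 504 inferring derived statistics and MLE 514 mapping those statistics to render control parameters, can be sketched as a cascade of two functions. The scene labels, thresholds, and parameter values below are illustrative assumptions, not values from the patent:

```python
def derive_stats(raw_stats):
    """Stage one (analogous to MLE 504): infer higher-level properties
    of the external environment from the raw statistics."""
    bright = raw_stats["mean_luminance"] > 0.6
    colorful = raw_stats["mean_saturation"] > 0.5
    scene = "beach" if (bright and colorful) else "indoor"
    return {"scene": scene}

def control_params(raw_stats, derived):
    """Stage two (analogous to MLE 514): map raw + derived statistics
    to a render algorithm selector and its arguments."""
    if derived["scene"] == "beach":
        return {"selector": "tone_map", "arguments": {"gamma": 0.8}}
    return {"selector": "gain", "arguments": {"gain": 1.2}}

raw = {"mean_luminance": 0.7, "mean_saturation": 0.6}
params = control_params(raw, derive_stats(raw))
```

In the patent both stages are learned rather than hand-written rules, but the data flow (raw statistics into stage one, raw plus derived statistics into stage two) is the same.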
In one embodiment of this process, MLE 514 may be trained within a system that includes an already-trained MLE 504. In another embodiment, MLE 504 may be trained within a system that includes an already-trained MLE 514. In yet another embodiment, MLE 504 and MLE 514 may be trained together simultaneously. As with MLE 404, MLE 504 and MLE 514 may be trained in place within digital camera 302, trained offline on computer system 100 or parallel processing subsystem 112 and subsequently implemented within digital camera 302, or trained continuously on a remotely located, cloud-based computing system. By implementing the techniques described above, the quality of the output images rendered by digital camera 302 can be significantly improved.

In one embodiment, the annotated optimized images 524 are generated manually, by collecting human-generated ratings of images produced by alternative sets of render control parameters 506. In another embodiment, MLE 504 and/or MLE 514 may be trained continuously by receiving input from the user of digital camera 302, where that input reflects the perceived quality of the optimized images 324 generated using a given set of render control parameters 506. By repeating this process for different render control parameters and different raw images, training engine 505 may adjust the weights associated with MLE 504 and/or MLE 514 to better select render algorithm selectors 506A and render algorithm arguments 506B. In this manner, MLE 514 may be trained to predict the preferences of the user.

By implementing the techniques described above, the quality of the optimized images rendered by digital camera 302 can be significantly improved.

Persons skilled in the art will recognize that a wide variety of machine learning algorithms and associated training techniques may be applied to implement each of MLEs 404, 504, and 514. Further, the different embodiments of IOE 322 described above in conjunction with Figures 3-5 may be combined in any technically feasible fashion. Each of these different embodiments is described in greater detail below in conjunction with Figures 7-10.
Figure 6 is a flow diagram of method steps for processing a digital image with the IOE 322 shown in Figures 4A-4B, according to one embodiment of the invention. Although the method steps are described in conjunction with the systems of Figures 1-4B, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the invention.

As shown, a method 600 begins at step 602, where IOE 322 receives a raw image. The raw image may be captured by, for example, the optical sensor 308 within digital camera 302 shown in Figure 3. At step 604, IOE 322 generates raw image statistics for the raw image. The raw image statistics generated by IOE 322 may include different values, each of which corresponds to a different statistic that can be computed for the raw image. The raw image statistics may include a wide variety of statistics, including quantities representing the color distribution, luminance distribution, contrast, saturation, exposure, and/or other statistics associated with the image.

At step 606, IOE 322 uses MLE 404 to generate render control parameters based on the raw image statistics and, optionally, based on the raw image. The render control parameters select the algorithms to be applied to the raw image and specify the arguments to be given to those algorithms. As described above in conjunction with Figure 4A, MLE 404 may be trained by applying any technically feasible supervised learning algorithm, as described in conjunction with Figure 4B.

At step 608, IOE 322 renders an optimized image by processing the raw image with the selected algorithms and their arguments. The method 600 then ends. By implementing method 600, IOE 322 can generate an optimized image that has improved quality relative to the raw image. Figure 7 outlines another method for generating an optimized image.
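Steps 602-608 can be summarized as a single pipeline: receive the raw image, compute statistics, ask the learning engine for control parameters, then render. The stub functions below stand in for components the patent leaves abstract; their specific behavior is an assumption for illustration only:

```python
def method_600(raw_image, mle):
    """Sketch of method 600: statistics -> control parameters -> render."""
    # Step 604: generate raw image statistics (here, just the mean).
    stats = {"mean": sum(raw_image) / len(raw_image)}
    # Step 606: the MLE maps statistics to render control parameters.
    params = mle(stats)
    # Step 608: apply the selected algorithm with its arguments
    # (a clamped gain, standing in for an arbitrary render algorithm).
    return [min(1.0, p * params["gain"]) for p in raw_image]

# A stand-in MLE: brighten dark images, leave bright ones untouched.
toy_mle = lambda s: {"gain": 2.0 if s["mean"] < 0.5 else 1.0}
optimized = method_600([0.1, 0.2, 0.3], toy_mle)
```

Swapping `toy_mle` for a trained model and the clamped gain for real render algorithms recovers the structure of Figure 6 without changing the control flow.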
Figure 7 is a flow diagram of method steps for processing a digital image with the IOE 322 shown in Figures 5A-5B, according to one embodiment of the invention. Although the method steps are described in conjunction with the systems of Figures 1-3 and 5A, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the invention.

As shown, a method 700 begins at step 702, where IOE 322 receives a raw image. At step 704, IOE 322 generates raw image statistics for the raw image. Steps 702 and 704 may be substantially similar to steps 602 and 604, respectively, of the method 600 shown in Figure 6.

At step 706, IOE 322 generates derived image statistics for the raw image based on the raw image statistics and, optionally, based on the raw image. The derived image statistics represent statistics that can be inferred based on the raw image statistics and may represent qualities of the raw image, such as scene type (e.g., "beach", "forest", etc.), luminance, color, depth of focus, and so forth. Generally speaking, the derived image statistics represent properties of the external environment. As described above in conjunction with Figure 5A, IOE 322 implements MLE 504 to generate the derived image statistics. As also described above, MLE 504 may be trained by applying any technically feasible supervised learning algorithm to MLE 504 in order to adjust the weights within MLE 504. In doing so, "exemplary" raw images may be used together with their corresponding annotated "ideal" optimized images to train the system that includes MLE 504.

At step 708, IOE 322 uses MLE 514 to generate render control parameters based on the derived statistics and, optionally, also based on the raw image statistics and, optionally, all or part of the raw image. The render control parameters select the algorithms to be applied to the raw image and specify the arguments to be given to those algorithms. As with MLE 504, MLE 514 may be trained by applying any technically feasible supervised learning algorithm, as described in conjunction with Figure 5B.

At step 710, IOE 322 applies the selected algorithms, with the specified arguments, to the raw image to render an optimized image. The method 700 then ends. By implementing method 700, IOE 322 can generate an optimized image that has improved quality relative to the raw image. Figures 8 and 9 outline two embodiments of methods for training MLEs 404, 504, and 514.
Figure 8 is a flow diagram of method steps for training the IOE 322 shown in Figures 4A-4B, according to one embodiment of the invention. Although the method steps are described in conjunction with the systems of Figures 1-4B, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the invention.

As shown, a method 800 begins at step 802, where training engine 405 within IOE 322 initializes MLE 404. Training engine 405 may initialize MLE 404 with a set of values that may be random or more carefully selected in a manner consistent with the state of the art in machine learning.

At step 804, IOE 322 receives a raw image. The raw image may be captured by, for example, the optical sensor 308 within digital camera 302 shown in Figure 3. At step 806, IOE 322 generates raw image statistics for the raw image. The raw image statistics may include different values, each of which corresponds to a different statistic that can be computed for the raw image. The raw image statistics may include a wide variety of statistics, including quantities representing the color distribution, luminance distribution, contrast, saturation, exposure, and/or other statistics associated with the image.

At step 808, MLE 404 within IOE 322 generates render control parameters based on the raw image statistics and, optionally, based on the raw image. The render control parameters select the algorithms to be applied to the raw image and specify the arguments to be given to those algorithms. At step 810, IOE 322 renders an optimized image by processing the raw image with the render algorithm selectors and render algorithm arguments within the render control parameters.

At step 812, training engine 405 computes the deviation between the optimized image and an annotated optimized image. The annotated optimized image may be generated manually, by collecting ratings of images produced by render control parameters. At step 814, training engine 405 determines whether the deviation computed at step 812 exceeds a threshold value. If so, training engine 405 adjusts the weights within MLE 404 based on the difference between the optimized image and the annotated optimized image, and method 800 then returns to step 804 and proceeds as described above. The adjusted values may subsequently be used by MLE 404 to compute improved render control parameters. If, at step 814, training engine 405 determines that the deviation falls below the threshold, then method 800 ends.
Figure 9 is a flow diagram of method steps for training the IOE 322 shown in Figures 5A-5B, according to one embodiment of the invention. Although the method steps are described in conjunction with the systems of Figures 1-3 and 5A-5B, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the invention.

As shown, a method 900 begins at step 902, where training engine 505 within IOE 322 initializes MLEs 504 and 514. Training engine 505 may initialize MLEs 504 and 514 with sets of values that may be random or more carefully selected in a manner consistent with the state of the art in machine learning.

At step 904, IOE 322 receives a raw image. The raw image may be captured by, for example, the optical sensor 308 within digital camera 302 shown in Figure 3. At step 906, IOE 322 generates raw image statistics for the raw image. The raw image statistics may include different values, each of which corresponds to a different statistic that can be computed for the raw image. The raw image statistics may include a wide variety of statistics, including quantities representing the color distribution, luminance distribution, contrast, saturation, exposure, and/or other statistics associated with the image. At step 908, MLE 504 within IOE 322 generates derived image statistics based on the raw image statistics. The derived image statistics represent statistics that can be inferred based on the raw image statistics and may represent qualities of the raw image, such as scene type (e.g., "beach", "forest", etc.), depth of focus, and so forth. Generally speaking, the derived image statistics represent properties of the external environment.

At step 910, MLE 514 within IOE 322 generates render control parameters based on the raw image statistics and the derived image statistics and, optionally, based on the raw image. The render control parameters select the algorithms to be applied to the raw image and specify the arguments to be given to those algorithms. At step 912, IOE 322 renders an optimized image by processing the raw image with the render algorithm selectors and render algorithm arguments within the render control parameters.

At step 914, training engine 505 computes the deviation between the optimized image and an annotated optimized image. The annotated optimized image may be generated manually, by collecting ratings of images produced by render control parameters. At step 916, training engine 505 determines whether the deviation computed at step 914 exceeds a threshold. If so, training engine 505 adjusts the weights within MLEs 504 and 514 based on the difference between the optimized image and the annotated optimized image. This adjustment may occur simultaneously, or training engine 505 may adjust the weights within MLEs 504 and 514 during different training cycles involving different raw images. The adjusted values may subsequently be used by MLEs 504 and 514 to compute improved render control parameters. Method 900 then returns to step 904 and proceeds as described above. If, at step 916, training engine 505 determines that the deviation falls below the threshold, then method 900 ends.
Persons skilled in the art will recognize that the techniques described above in conjunction with Figures 3-9 may be combined in any technically feasible fashion. For example, MLEs 404, 504, and 514 may all be incorporated into IOE 322 and implemented as part of an image processing pipeline configured to generate optimized images from raw images. Further, the MLEs described herein may be initially trained before digital camera 302 is distributed to market and may undergo continued training based on user input.

In sum, a digital camera includes an image optimization engine configured to generate optimized images based on raw images captured by the digital camera. The image optimization engine implements one or more machine learning engines to select render algorithms and to provide arguments to those algorithms in order to render optimized images from the raw images. The image optimization engine is configured to generate an optimized image by processing a raw image with the selected algorithms and corresponding arguments.

Advantageously, the machine learning engines within the image optimization engine can be trained to generate images without requiring a team of image processing algorithm designers to produce hand-tuned sets of algorithms. Further, the user of the digital camera is no longer required to provide extensive manual input to the digital camera regarding the qualities of the external environment, thereby improving the user experience.
One embodiment of the invention may be implemented as a program product for use with a computer system. The program(s) of the program product define functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer, such as CD-ROM disks readable by a CD-ROM drive, flash memory, ROM chips, or any type of solid-state non-volatile semiconductor memory) on which permanent information is stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive, or any type of solid-state random-access semiconductor memory) on which alterable information is stored.

The invention has been described above with reference to specific embodiments. Persons skilled in the art, however, will understand that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The foregoing description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Claims (10)
1. A computer-implemented method for rendering an image, the method comprising:
capturing a raw image via an optical sensor included within a digital camera;
generating a set of image statistics for the raw image based on a set of pixel values associated with the raw image;
causing a first machine learning engine to select a render algorithm and a set of render algorithm arguments corresponding to the selected render algorithm; and
rendering the image by processing the raw image with the render algorithm and the set of render algorithm arguments.
2. The computer-implemented method of claim 1, wherein the first machine learning engine is trained by:
receiving a target image;
computing a deviation value by comparing the image with the target image;
determining that the deviation value exceeds a threshold value; and
adjusting a set of weights within the first machine learning engine based on differences between pixel values associated with the image and the target image.
3. The computer-implemented method of claim 2, wherein the first machine learning engine comprises an artificial neural network, and wherein adjusting the set of weights associated with the first machine learning engine comprises applying a backpropagation learning algorithm to the set of weights.
4. The computer-implemented method of claim 1, wherein the set of image statistics includes raw image statistics representing a white balance, a contrast, a saturation, and/or an exposure associated with the raw image.
5. The computer-implemented method of claim 4, wherein the set of image statistics further includes derived image statistics generated by a second machine learning engine based on the raw image statistics, and wherein the derived image statistics indicate properties of an external environment.
6. The computer-implemented method of claim 5, wherein the first and second machine learning engines are trained by:
receiving a target image;
computing a deviation value by comparing the image with the target image;
determining that the deviation value exceeds a threshold value;
adjusting a set of weights within the first machine learning engine based on differences between pixel values associated with the image and the target image; and
adjusting a set of weights within the second machine learning engine based on differences between pixel values associated with the image and the target image.
7. The computer-implemented method of claim 6, wherein the first machine learning engine comprises an artificial neural network and adjusting the set of weights associated with the first machine learning engine comprises applying a backpropagation learning algorithm to that set of weights, and wherein the second machine learning engine comprises an artificial neural network and adjusting the set of weights associated with the second machine learning engine comprises applying a backpropagation learning algorithm to that set of weights.
8. A computing device configured to render an image, comprising:
a processing unit configured to:
capture a raw image via an optical sensor included within a digital camera;
generate a set of image statistics for the raw image based on a set of pixel values associated with the raw image;
cause a first machine learning engine to select a render algorithm and a set of render algorithm arguments corresponding to the selected render algorithm; and
render the image by processing the raw image with the render algorithm and the set of render algorithm arguments.
9. The computing device of claim 8, further comprising:
a memory coupled to the processing unit and storing program instructions that, when executed by the processing unit, cause the processing unit to:
capture the raw image,
generate the set of image statistics,
cause the first machine learning engine to select the render algorithm and the set of render algorithm arguments, and
render the image.
10. The computing device of claim 8, wherein the first machine learning engine is trained by:
receiving a target image;
computing a deviation value by comparing the image with the target image;
determining that the deviation value exceeds a threshold value; and
adjusting a set of weights within the first machine learning engine based on differences between pixel values associated with the image and the target image, wherein the first machine learning engine comprises an artificial neural network, and wherein adjusting the set of weights associated with the first machine learning engine comprises applying a backpropagation learning algorithm to the set of weights.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/651,342 | 2012-10-12 | ||
US13/651,342 US9741098B2 (en) | 2012-10-12 | 2012-10-12 | System and method for optimizing image quality in a digital camera |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103731660A true CN103731660A (en) | 2014-04-16 |
Family
ID=50383317
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310479312.0A Pending CN103731660A (en) | 2012-10-12 | 2013-10-14 | System and method for optimizing image quality in a digital camera |
Country Status (4)
Country | Link |
---|---|
US (1) | US9741098B2 (en) |
CN (1) | CN103731660A (en) |
DE (1) | DE102013016872A1 (en) |
TW (1) | TWI512680B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107038698A (en) * | 2015-10-13 | 2017-08-11 | 西门子保健有限责任公司 | The framework based on study for personalized image quality evaluation and optimization |
CN107622473A (en) * | 2017-09-22 | 2018-01-23 | 广东欧珀移动通信有限公司 | Image rendering method, device, terminal and computer-readable recording medium |
CN108769543A (en) * | 2018-06-01 | 2018-11-06 | 北京壹卡行科技有限公司 | The determination method and device of time for exposure |
CN109791688A (en) * | 2016-06-17 | 2019-05-21 | 华为技术有限公司 | Expose relevant luminance transformation |
CN110663045A (en) * | 2017-11-01 | 2020-01-07 | 谷歌有限责任公司 | Automatic exposure adjustment for digital images |
WO2021018001A1 (en) * | 2019-07-26 | 2021-02-04 | 惠州视维新技术有限公司 | Method and device for adjusting white balance value of television, and computer readable storage medium |
CN114697555A (en) * | 2022-04-06 | 2022-07-01 | 百富计算机技术(深圳)有限公司 | Image processing method, device, equipment and storage medium |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9741098B2 (en) * | 2012-10-12 | 2017-08-22 | Nvidia Corporation | System and method for optimizing image quality in a digital camera |
CN106462401B (en) | 2014-06-19 | 2019-09-10 | 富士通株式会社 | Program creating device and program creating method |
US10362288B2 (en) * | 2015-01-09 | 2019-07-23 | Sony Corporation | Method and system for improving detail information in digital images |
US10210594B2 (en) * | 2017-03-03 | 2019-02-19 | International Business Machines Corporation | Deep learning via dynamic root solvers |
WO2019028472A1 (en) * | 2017-08-04 | 2019-02-07 | Outward, Inc. | Machine learning based image processing techniques |
CN108198124B (en) * | 2017-12-27 | 2023-04-25 | 上海联影医疗科技股份有限公司 | Medical image processing method, medical image processing device, computer equipment and storage medium |
GB2610712B (en) * | 2018-06-15 | 2023-07-19 | Canon Kk | Medical image processing apparatus, optical coherence tomography apparatus, learned model, learning apparatus, medical image processing method and program |
EP3654251A1 (en) | 2018-11-13 | 2020-05-20 | Siemens Healthcare GmbH | Determining a processing sequence for processing an image |
US11704891B1 (en) | 2021-12-29 | 2023-07-18 | Insight Direct Usa, Inc. | Dynamically configured extraction, preprocessing, and publishing of a region of interest that is a subset of streaming video data |
US11509836B1 (en) | 2021-12-29 | 2022-11-22 | Insight Direct Usa, Inc. | Dynamically configured processing of a region of interest dependent upon published video data selected by a runtime configuration file |
US11778167B1 (en) | 2022-07-26 | 2023-10-03 | Insight Direct Usa, Inc. | Method and system for preprocessing optimization of streaming video data |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5465308A (en) * | 1990-06-04 | 1995-11-07 | Datron/Transoc, Inc. | Pattern recognition system |
US7372595B1 (en) * | 2001-08-20 | 2008-05-13 | Foveon, Inc. | Flexible image rendering system utilizing intermediate device-independent unrendered image data |
CN101375315A (en) * | 2006-01-27 | 2009-02-25 | 图象公司 | Methods and systems for digitally re-mastering of 2D and 3D motion pictures for exhibition with enhanced visual quality |
US20100086182A1 (en) * | 2008-10-07 | 2010-04-08 | Hui Luo | Diagnostic image processing with automatic self image quality validation |
US20110142335A1 (en) * | 2009-12-11 | 2011-06-16 | Bernard Ghanem | Image Comparison System and Method |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5694224A (en) * | 1994-12-08 | 1997-12-02 | Eastman Kodak Company | Method and apparatus for tone adjustment correction on rendering gray level image data |
US6400841B1 (en) * | 1999-02-11 | 2002-06-04 | General Electric Company | Method for evaluating three-dimensional rendering systems |
US6356646B1 (en) * | 1999-02-19 | 2002-03-12 | Clyde H. Spencer | Method for creating thematic maps using segmentation of ternary diagrams |
US7245754B2 (en) * | 2000-06-30 | 2007-07-17 | Hitachi Medical Corporation | image diagnosis supporting device |
US20020194161A1 (en) * | 2001-04-12 | 2002-12-19 | Mcnamee J. Paul | Directed web crawler with machine learning |
US7321674B2 (en) * | 2002-02-22 | 2008-01-22 | Agfa Healthcare, N.V. | Method of normalising a digital signal representation of an image |
US7505604B2 (en) * | 2002-05-20 | 2009-03-17 | Simmonds Precision Prodcuts, Inc. | Method for detection and recognition of fog presence within an aircraft compartment using video images |
US7266229B2 (en) * | 2003-07-24 | 2007-09-04 | Carestream Health, Inc. | Method for rendering digital radiographic images for display based on independent control of fundamental image quality parameters |
US8098964B2 (en) * | 2006-02-06 | 2012-01-17 | Microsoft Corp. | Raw image processing |
US7796812B2 (en) * | 2006-10-17 | 2010-09-14 | Greenparrotpictures, Limited | Method for matching color in images |
US20080095306A1 (en) * | 2006-10-18 | 2008-04-24 | Vernon Thomas Jensen | System and method for parameter selection for image data displays |
US8287281B2 (en) * | 2006-12-06 | 2012-10-16 | Microsoft Corporation | Memory training via visual journal |
AU2008202672B2 (en) * | 2008-06-17 | 2011-04-28 | Canon Kabushiki Kaisha | Automatic layout based on visual context |
US8477246B2 (en) * | 2008-07-11 | 2013-07-02 | The Board Of Trustees Of The Leland Stanford Junior University | Systems, methods and devices for augmenting video content |
US20100268223A1 (en) * | 2009-04-15 | 2010-10-21 | Tyco Health Group Lp | Methods for Image Analysis and Visualization of Medical Image Data Suitable for Use in Assessing Tissue Ablation and Systems and Methods for Controlling Tissue Ablation Using Same |
US8358834B2 (en) * | 2009-08-18 | 2013-01-22 | Behavioral Recognition Systems | Background model for complex and dynamic scenes |
JP5844263B2 (en) * | 2009-10-05 | 2016-01-13 | ビーマル イメージング リミテッドBeamr Imaging Ltd. | Apparatus and method for recompressing digital images |
US8565554B2 (en) * | 2010-01-09 | 2013-10-22 | Microsoft Corporation | Resizing of digital images |
WO2011143223A2 (en) * | 2010-05-10 | 2011-11-17 | Board Of Regents, The University Of Texas System | Determining quality of an image or a video using a distortion classifier |
US8786625B2 (en) | 2010-09-30 | 2014-07-22 | Apple Inc. | System and method for processing image data using an image signal processor having back-end processing logic |
US8681222B2 (en) * | 2010-12-08 | 2014-03-25 | GM Global Technology Operations LLC | Adaptation for clear path detection with additional classifiers |
US9741098B2 (en) * | 2012-10-12 | 2017-08-22 | Nvidia Corporation | System and method for optimizing image quality in a digital camera |
US20140152848A1 (en) * | 2012-12-04 | 2014-06-05 | Nvidia Corporation | Technique for configuring a digital camera |
2012
- 2012-10-12 US US13/651,342 patent/US9741098B2/en active Active
2013
- 2013-10-11 TW TW102136782A patent/TWI512680B/en active
- 2013-10-11 DE DE102013016872.4A patent/DE102013016872A1/en active Pending
- 2013-10-14 CN CN201310479312.0A patent/CN103731660A/en active Pending
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107038698A (en) * | 2015-10-13 | 2017-08-11 | 西门子保健有限责任公司 | The framework based on study for personalized image quality evaluation and optimization |
CN109791688B (en) * | 2016-06-17 | 2021-06-01 | 华为技术有限公司 | Exposure dependent luminance conversion |
CN109791688A (en) * | 2016-06-17 | 2019-05-21 | 华为技术有限公司 | Expose relevant luminance transformation |
CN107622473B (en) * | 2017-09-22 | 2020-01-21 | Oppo广东移动通信有限公司 | Image rendering method, device, terminal and computer readable storage medium |
CN107622473A (en) * | 2017-09-22 | 2018-01-23 | 广东欧珀移动通信有限公司 | Image rendering method, device, terminal and computer-readable recording medium |
CN110663045A (en) * | 2017-11-01 | 2020-01-07 | 谷歌有限责任公司 | Automatic exposure adjustment for digital images |
CN110663045B (en) * | 2017-11-01 | 2021-04-16 | 谷歌有限责任公司 | Method, electronic system and medium for automatic exposure adjustment of digital images |
US11210768B2 (en) | 2017-11-01 | 2021-12-28 | Google Llc | Digital image auto exposure adjustment |
CN108769543A (en) * | 2018-06-01 | 2018-11-06 | 北京壹卡行科技有限公司 | The determination method and device of time for exposure |
CN108769543B (en) * | 2018-06-01 | 2020-12-18 | 北京壹卡行科技有限公司 | Method and device for determining exposure time |
WO2021018001A1 (en) * | 2019-07-26 | 2021-02-04 | 惠州视维新技术有限公司 | Method and device for adjusting white balance value of television, and computer readable storage medium |
CN114697555A (en) * | 2022-04-06 | 2022-07-01 | 百富计算机技术(深圳)有限公司 | Image processing method, device, equipment and storage medium |
CN114697555B (en) * | 2022-04-06 | 2023-10-27 | 深圳市兆珑科技有限公司 | Image processing method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
TW201423666A (en) | 2014-06-16 |
US20140104450A1 (en) | 2014-04-17 |
US9741098B2 (en) | 2017-08-22 |
DE102013016872A1 (en) | 2014-04-17 |
TWI512680B (en) | 2015-12-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103731660A (en) | System and method for optimizing image quality in a digital camera | |
US10534998B2 (en) | Video deblurring using neural networks | |
US11961431B2 (en) | Display processing circuitry | |
EP3923248A1 (en) | Image processing method and apparatus, electronic device and computer-readable storage medium | |
CN102905058B (en) | Produce the apparatus and method for eliminating the fuzzy high dynamic range images of ghost image | |
CN102834849A (en) | Image drawing device for drawing stereoscopic image, image drawing method, and image drawing program | |
CN107113373A (en) | Pass through the exposure calculating photographed based on depth calculation | |
CN108229360A (en) | A kind of method of image procossing, equipment and storage medium | |
JP2023519728A (en) | 2D image 3D conversion method, apparatus, equipment, and computer program | |
US11315306B2 (en) | Systems and methods for processing volumetric data | |
CN103888669A (en) | Approach for camera control | |
JP6980913B2 (en) | Learning device, image generator, learning method, image generation method and program | |
US11361448B2 (en) | Image processing apparatus, method of controlling image processing apparatus, and storage medium | |
EP4020372A1 (en) | A writing/drawing-to-digital asset extractor | |
KR20230022153A (en) | Single-image 3D photo with soft layering and depth-aware restoration | |
CN106469437A (en) | Image processing method and image processing apparatus | |
CN109919851A (en) | A kind of flating obscures removing method and device | |
TWI817335B (en) | Stereoscopic image playback apparatus and method of generating stereoscopic images thereof | |
US11043035B2 (en) | Methods and systems for simulating image capture in an extended reality system | |
WO2023037451A1 (en) | Image processing device, method, and program | |
KR101943424B1 (en) | Apparatus and method for producing image | |
US20240135673A1 (en) | Machine learning model training using synthetic data for under-display camera (udc) image restoration | |
US20240137483A1 (en) | Image processing method and virtual reality display system | |
Zhang et al. | Robust luminance and chromaticity for matte regression in polynomial texture mapping | |
KR20230018398A (en) | Generating Machine Learning Predictions Using Multi-Domain Datasets |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 2014-04-16 |