GB2597873A - Method and system for auto-setting of cameras - Google Patents

Method and system for auto-setting of cameras

Info

Publication number
GB2597873A
GB2597873A GB2116127.8A GB202116127A
Authority
GB
United Kingdom
Prior art keywords
function
camera
values
condition
gain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB2116127.8A
Other versions
GB2597873B (en)
Inventor
Citerin Johann
Kergourlay Gérald
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to GB2116127.8A priority Critical patent/GB2597873B/en
Publication of GB2597873A publication Critical patent/GB2597873A/en
Application granted granted Critical
Publication of GB2597873B publication Critical patent/GB2597873B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B7/00Control of exposure by setting shutters, diaphragms or filters, separately or conjointly
    • G03B7/08Control effected solely on the basis of the response, to the intensity of the light received by the camera, of a built-in light-sensitive device
    • G03B7/091Digital circuits
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B7/00Control of exposure by setting shutters, diaphragms or filters, separately or conjointly
    • G03B7/28Circuitry to measure or to take account of the object contrast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/71Circuitry for evaluating the brightness variation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/76Circuitry for compensating brightness variation in the scene by influencing the image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Abstract

Controlling settings of a camera includes learning 300 a first function representing a relationship between image characteristic values and camera parameters at a first condition. A second function is determined by adapting the first function to a second condition based on image characteristic values at the second condition and on the first function. Camera parameter values are selected for the second condition based on the determined second function and the camera is set at operation 315 according to the selected camera parameter values. The first function may be learnt with characteristic values corresponding to a change of the camera parameter(s). The first/second function may be a function calculating a quality value of an image captured by the camera in the first/second condition, the second condition being different from the first condition. The camera parameters may be selected by the quality value obtained from the second function. The first and second functions may be learned in accordance with a mission chosen by the user 305. The mission may be represented by a scene-dependent parameter containing a target size and a target velocity. Determining the second function may include determining relationships between the image characteristic value(s) and the camera parameter(s).

Description

METHOD AND SYSTEM FOR AUTO-SETTING OF CAMERAS
FIELD OF THE INVENTION
The present invention relates to the technical field of camera setting and to a method and a system for auto-setting of cameras, for example auto-setting of cameras within video surveillance systems.
BACKGROUND OF THE INVENTION
Video surveillance is currently a fast-growing market tending to become increasingly widespread for ubiquitous applications. It can be used today in numerous areas such as crime prevention, private and public areas for security purposes, abnormal event detection, traffic monitoring, customer behaviour, or general data gathering.
The ever-increasing use of network cameras for such purposes has led in particular to increasing image quality, especially to improving image resolution, contrast, and colour.
However, it has been observed that image quality improvement has been slowing down recently. Indeed, while the camera sensors embedded in recent cameras may provide high quality outputs, image quality highly depends on camera settings that are often not optimal. Motion blur, bad exposure, and a wrong choice of network settings very often lead to poor images.
Moreover, it is noted that environmental conditions may change significantly over a few hours. For example, day versus night, rain versus sun, and light intensity changes are typical environmental changes that have a huge impact on image quality and resource consumption. Therefore, using only one fixed camera setting leads to very poor image quality on average.
To address such changes of environmental conditions, there exist in-camera auto-setting methods such as auto-focus and auto-exposure for adapting camera settings dynamically. Such an auto-setting capability may be further improved thanks to additional manual settings and profiles, making it possible to adapt the auto-setting to the particular camera environment and to choose a suitable trade-off, e.g. a suitable trade-off between image quality and network consumption.
Below, the in-camera embedded auto-setting is referred to as the "camera auto-mode" or the "auto-mode".
Although the camera auto-mode makes it possible to improve image quality by adapting camera settings dynamically, the settings may still be improved. In particular, the camera auto-mode is not so reliable for the following reasons:
- fine-tuning camera settings to improve the quality of the auto-mode is time-consuming and requires particular skills and a good knowledge of the camera's capabilities and settings interface;
- most camera installers do not modify the settings and keep the default factory auto-mode;
- some issues such as motion blur are not solvable through auto-setting;
- very few (if any) camera auto-modes are dedicated to optimizing the image in a region of interest (ROI), which leads to bad exposure issues and suboptimal quality; and
- the camera auto-mode is not adapted to specific tasks or missions, which do not necessarily have the same constraints as the mainstream usage that the camera auto-mode is suited for.
Moreover, it is noted that the quality of images obtained from network cameras, as well as the deployment ease and cost of the latter, would benefit from a more effective auto-setting. This would make it possible for non-specialists, e.g. the customer's own staff, to install cameras, and this should be efficient in any situation.
It is to be recalled that the three main physical settings that are used to control the quality of images obtained from a camera, in terms of contrast, brightness, sharpness (or blur), and noise level are the aperture, the gain, and the shutter speed (corresponding to the exposure time, generally expressed in seconds).
Generally, the camera auto-mode determines values for the aperture, the gain, and the shutter speed as a function of contrast and global exposure analysis criteria. Many combinations of aperture, gain, and shutter speed values lead to the same contrast. Indeed, increasing the aperture value, the gain value, and/or the shutter speed value (i.e. increasing the exposure time) results in a brighter image. However, increasing these values does not only result in a brighter image but also affects depth-of-field, noise, and motion blur:
- increasing the aperture value means increasing the amount of light that reaches the sensor, which results in a brighter image but also in an image having a smaller depth-of-field (which increases the defocus blur);
- increasing the gain value means increasing the dynamic of the image, which results in a brighter image but also in an image having more noise; and
- increasing the shutter speed value (i.e. increasing the exposure time) means increasing the amount of light that reaches the sensor, which results in a brighter image but also increases the motion blur.
Accordingly, a trade-off should be reached between the aperture, gain, and shutter speed values so as to maximize the contrast while minimizing noise and blur (defocus blur and motion blur).
However, since most network cameras monitor distant targets, the aperture value is generally set so that focus is achieved for any objects positioned more than about one meter from the cameras. As a result, the trade-off to be attained is mainly directed to gain and shutter speed that is to say to noise and motion blur. It is made on assumptions and arbitrary choices which are deemed to meet the environmental conditions of the real scene associated with the field of view of the corresponding camera, but which actually do not.
Consequently, there is a need to improve auto-setting of cameras, in particular for dynamically configuring cameras of video-surveillance systems, without disrupting the system while it is running.
SUMMARY OF THE INVENTION
The present invention has been devised to address one or more of the foregoing concerns.
In this context, there is provided a solution for auto-setting cameras, for example for auto-setting cameras in video surveillance systems.
According to a first aspect of the invention, there is provided a method of controlling settings of a camera, the method comprising: learning a first function representing a relationship between a plurality of image characteristic values and camera parameters at a first condition; determining a second function by adapting the first function to a second condition based on the plurality of the image characteristic values at the second condition and on the first function; selecting camera parameter values for the second condition based on the determined second function; and setting the camera according to the selected camera parameter values.
According to the method of the invention, selecting camera parameter values of a camera is rapid, efficient and minimally-invasive for the camera (i.e. the camera does not freeze during the auto-setting and remains operational).
Optional features of the invention are further defined in the dependent appended claims.
According to a second aspect of the invention, there is provided a device for controlling settings of a camera, the device comprising a microprocessor configured for carrying out the steps of: learning a first function representing a relationship between a plurality of image characteristic values and camera parameters at a first condition; determining a second function by adapting the first function to a second condition based on a plurality of the image characteristic values at the second condition and on the first function; selecting camera parameter values for the second condition based on the determined second function; and setting the camera according to the selected camera parameter values.
The second aspect of the present invention has optional features and advantages similar to the first above-mentioned aspect.
At least parts of the methods according to the invention may be computer implemented. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system". Furthermore, the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
Since the present invention can be implemented in software, the present invention can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium. A tangible carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device and the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g. a microwave or RF signal.
BRIEF DESCRIPTION OF THE DRAWINGS
Other features and advantages of the invention will become apparent from the following description of non-limiting exemplary embodiments, with reference to the appended drawings, in which: Figure 1 schematically illustrates an example of a video surveillance system wherein embodiments of the invention may be implemented; Figure 2 is a schematic block diagram of a computing device for implementing embodiments of the invention; Figure 3 is a block diagram illustrating an example of an auto-setting method making it possible to set automatically parameters of a source device according to embodiments of the invention; Figure 4 is a block diagram illustrating a first example of steps carried out during a calibration phase of an auto-setting method as illustrated in Figure 3; Figure 5 illustrates an example of the distribution of the target velocity; Figure 6 illustrates an example of steps for determining new camera settings during the operational use of a camera, without perturbing the use of the camera; and Figure 7 is a block diagram illustrating a second example of steps carried out during a calibration phase of an auto-setting method as illustrated in Figure 3.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
According to embodiments, a new auto-setting method is provided. It comprises several phases among which a learning phase and a calibration phase for obtaining information and an operation phase for dynamically auto-setting a camera in any situation, when environmental conditions change.
Figure 1 schematically illustrates an example of a video surveillance system wherein embodiments of the invention may be implemented.
Video surveillance system 100 includes a plurality of network cameras denoted 110a, 110b, and 110c, for example network cameras of the Internet Protocol (IP) type, generically referred to as IP cameras 110.
Network cameras 110, also referred to as source devices, are connected to a central site 140 via a backbone network 130. In a large video surveillance system, backbone network 130 is typically a wide area network (WAN) such as the Internet.
According to the illustrated example, central site 140 comprises a video manager system (VMS) 150 used to manage the video surveillance system, an auto-setting server 160 used to perform an automatic setting of cameras 110, and a set of recording servers 170 configured to store the received video streams, a set of video content analytics (VCA) servers 180 configured to analyse the received video streams, and a set of displays 185 configured to display received video streams. All the modules are interconnected via a dedicated infrastructure network 145 that is typically a local area network (LAN), for example a local area network based on Gigabit Ethernet.
Video manager system 150 may be a device containing a software module that makes it possible to configure, control, and manage the video surveillance system, for example via an administration interface. Such tasks are typically carried out by an administrator (e.g. administrator 190) who is in charge of configuring the overall video surveillance system. In particular, administrator 190 may use video manager system 150 to select a source encoder configuration for each source device of the video surveillance system. In the state of the art, it is the only means to configure the source video encoders.
The set of displays 185 may be used by operators (e.g. operators 191) to watch the video streams corresponding to the scenes shot by the cameras of the video surveillance system.
The auto-setting server 160 contains a module for setting automatically or almost automatically parameters of cameras 110. It is described in more detail by reference to Figure 2.
Administrator 190 may use the administration interface of video manager system 150 to set input parameters of the auto-setting algorithm described with reference to Figures 3 to 7, carried out in auto-setting server 160.
Figure 2 is a schematic block diagram of a computing device for implementing embodiments of the invention. It may be embedded in auto-setting server 160 described with reference to Figure 1.
The computing device 200 comprises a communication bus connected to:
- a central processing unit 210, such as a microprocessor, denoted CPU;
- an I/O module 220 for receiving data from and sending data to external devices. In particular, it may be used to retrieve images from source devices;
- a read only memory 230, denoted ROM, for storing computer programs for implementing embodiments;
- a hard disk 240, denoted HD;
- a random access memory 250, denoted RAM, for storing the executable code of the method of embodiments of the invention, in particular an auto-setting algorithm, as well as registers adapted to record variables and parameters; and
- a user interface 260, denoted UI, used to configure input parameters of embodiments of the invention. As mentioned above, an administration user interface may be used by an administrator of the video surveillance system.
The executable code may be stored either in random access memory 250, in hard disk 240, or in a removable digital medium (not represented) such as a disk of a memory card.
The central processing unit 210 is adapted to control and direct the execution of the instructions or portions of software code of the program or programs according to embodiments of the invention, which instructions are stored in one of the aforementioned storage means. After powering on, CPU 210 may execute instructions from main RAM memory 250 relating to a software application after those instructions have been loaded, for example, from the program ROM 230 or hard disk 240.
Figure 3 is a block diagram illustrating an example of an auto-setting method making it possible to set automatically parameters of a source device, typically a camera, according to embodiments of the invention.
As illustrated, a first phase is a learning phase (reference 300). According to embodiments, it is performed before the installation of the considered camera, for example during the development of a software application for processing images. Preferably, the learning phase is not specific to a type of camera (i.e. it is advantageously generic). During this phase, a relation or a function is established between a quality value (relating to the result of the image processing) and all or most of the relevant variables that are needed to estimate such a processing result quality. These relevant variables may include image quality-dependent parameters and/or scene-dependent parameters. As described hereafter, this relation or function, denoted quality function, may depend on a type of the missions that can be handled by any camera.
An objective of the learning phase is to obtain a quality function which is able to state prima facie the quality of an image in the context of a particular mission, as a function of parameters which have an impact on the mission.
According to particular embodiments, the output of the learning phase is a quality function that may be expressed as follows:
f_quality(mission)(image quality, scene)
where mission is a type of mission; image quality is a set of parameters that may comprise a blur value, a noise value, and a contrast value; and scene is a set of parameters that may comprise a target size and a target velocity.
Therefore, in particular embodiments, the output of the learning phase may be expressed as follows:
f_quality(mission)(noise, blur, contrast, target size, target velocity)
The quality function f_quality may be a mathematical relation or an n-dimensional array associating a quality value with a set of n parameter values, e.g. values of noise, blur, contrast, target size, and target velocity.
As denoted with reference 305, the type of mission to be handled by the camera may be chosen by a user (or an installer) during installation of the camera or later on. Likewise, a user may select a region of interest (ROI) corresponding to a portion of an image to be processed. As illustrated with the use of dotted lines, this step is optional.
As illustrated, after a user has selected a type of mission, the quality function obtained from the learning phase may be written as follows:
f_quality(image quality, scene)
or, according to the given example:
f_quality(noise, blur, contrast, target size, target velocity)
Alternatively, the auto-setting algorithm may be configured for a particular type of mission and the whole captured scene may be considered.
A second phase (reference 310) is directed to calibration. This is typically carried out during installation of the camera and aims at measuring scene values from the actual scene according to the settings of the camera, as well as at obtaining parameter values depending on the camera settings. This may take from a few minutes to a few tens of minutes. As explained hereafter, in particular with reference to Figures 4 and 7, it makes it possible to determine quality processing values according to the actual scene and the current camera settings. According to embodiments, the calibration phase is run only once.
The outputs of this phase may comprise: scene values (for example target size and target velocity); image quality values (for example noise, blur, and contrast) that may be determined as a function of the camera settings (for example gain and shutter speed); and image metrics (for example luminance) that may be determined as a function of the camera settings (for example gain and shutter speed). They can be expressed as follows:
scene-related parameters: target size, target velocity
image quality:
noise = f_noise_calibration(gain, shutter speed)
blur = f_blur_calibration(gain, shutter speed)
contrast = f_contrast_calibration(gain, shutter speed)
image metrics:
luminance = f_luminance_calibration(gain, shutter speed)
The functions (f_noise_calibration, f_blur_calibration, f_contrast_calibration, f_luminance_calibration) may be mathematical relations or 2-dimensional arrays associating values with sets of 2 parameter values (gain and shutter speed).
A third phase (reference 315) is directed to operation. It is performed during the operational use of the camera to improve its settings. It is preferably executed in a very short period of time, for example less than one second, and without perturbation for the camera, except for changing camera settings (i.e. it is a non-invasive phase). It is used to select suitable camera settings, preferably the most suitable camera settings.
To that end, data obtained during the calibration phase are used to calculate good settings, preferably the best settings, according to the quality function determined during the learning phase, in view of the current environmental conditions. Indeed, the environmental conditions, typically lighting, may be different from the environmental conditions corresponding to the calibration. Accordingly, the calibration data must be adjusted to fit the current environmental conditions. Next, the adjusted data are used to calculate the best settings. This may be an iterative process since the adjustments of the calibration data are more accurate when camera settings get closer to the optimal settings. Such an operation phase is preferably carried out each time a new change of camera settings is needed.
The output of the operation phase is a camera setting, for example a set of gain and shutter speed values.
Learning phase
Video surveillance cameras can be used in quite different contexts, that is to say to conduct different "missions" or "tasks". For example, some cameras may be used to provide an overall view, making it possible to analyse wide areas, for example for crowd management or detection of intruders, while others may be used to provide detailed views, making it possible, for example, to recognize faces or license plates. Depending on the type of mission, the constraints associated with the camera may be quite different. In particular, the impact of the noise, blur, and/or contrast is not the same depending on the mission. For example, the blur generally has a high impact on missions for which details are of importance, e.g. for face or license plate readability. In other cases, the noise may have more impact, for example when scenes are monitored continuously by humans (due to the higher eye strain experienced on noisy videos). As set forth above, an objective of the learning phase is to get a quality function which is able to state prima facie the quality of an image in the context of a particular type of mission, as a function of parameters which have an impact on the mission.
According to embodiments, such parameters may be the following:
- the parameters which represent a quality of images provided by the camera, which depend on the camera settings. Such parameters may comprise the noise, the blur, and/or the contrast; and
- the parameters that are directed to the scene and the mission to be performed, referred to as scene-dependent parameters hereafter, their values being referred to as scene values. Their number and their nature depend on the type of mission. These parameters may comprise a size of the targets and/or a velocity of the targets. The values of these parameters may be predetermined, may be determined by a user, or may be estimated, for example by image analysis. They do not have a direct impact on the image quality but play a role in how difficult it is to fulfil a mission. For example, the noise has more impact on smaller targets than on larger targets, so the perceived quality of noisy images will be worse when targets are smaller.
Regarding the image quality, it has been observed that the noise, the blur, and the contrast are generally the most relevant parameters. Nevertheless, camera settings have an impact on other parameters that may be considered as representative of the image quality, for example on the depth-of-field and/or on the white balance. However, due to hyperfocal settings in video surveillance systems, the depth of field is usually not very relevant. Likewise, the white balance is generally efficiently handled by the camera auto-mode. Accordingly and for the sake of clarity, the following description is based on the noise, the blur, and the contrast as image quality parameters. However, it must be understood that other parameters may be used. Regarding the scene-dependent parameters, it has been observed that the target size and the target velocity are generally the most relevant parameters. Therefore, for the sake of clarity, although other parameters may be used, the following description is based on these two parameters.
Accordingly, the quality function determined in the learning phase may generally be expressed as follows:
f_quality(mission)(noise, blur, contrast, target size, target velocity)
or as a set of functions (one function per type of mission, denoted mission<i>):
f_quality(noise, blur, contrast, target size, target velocity) for mission<i>
or as a function corresponding to a predetermined type of mission for which a video surveillance system is to be used:
f_quality(noise, blur, contrast, target size, target velocity)
Such a function makes it possible, during the operation phase, to select efficient camera settings for the mission to be carried out, in view of the noise, blur, contrast, target velocity, and target size corresponding to the current camera settings (according to the results obtained during the calibration phase).
For the sake of illustration, this function may be scaled between 0 (very low quality) and 1 (very high quality).
According to embodiments, the quality function is set by an expert who determines how to penalize the noise, blur, and contrast for a considered type of mission.
For the sake of illustration, the quality function may be the following:
f_quality = 3 × (V_noise × V_blur × V_contrast) / (V_noise + V_blur + V_contrast)
where V_noise, V_blur, and V_contrast represent values for the noise, blur, and contrast parameters, respectively.
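As an illustration only, such an expert-defined quality function can be evaluated directly from the three per-parameter values. The following sketch (Python, with hypothetical names, assuming each of V_noise, V_blur, and V_contrast has already been normalised to [0, 1]) implements the relation as reconstructed above:

    def quality_score(v_noise, v_blur, v_contrast):
        # Expert-defined quality function (as reconstructed above): the product of
        # the three values, penalised by their sum, so that a single bad parameter
        # pulls the overall quality down. Assumes values normalised to [0, 1].
        total = v_noise + v_blur + v_contrast
        if total == 0.0:
            return 0.0
        return 3.0 * (v_noise * v_blur * v_contrast) / total

    # Example: good noise and contrast but strong motion blur gives a low score.
    print(quality_score(0.9, 0.2, 0.8))   # ~0.23
    print(quality_score(1.0, 1.0, 1.0))   # 1.0 (very high quality)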
The quality function f_quality makes it possible to determine a quality value as a function of general image characteristics such as the noise, blur, and contrast, and of scene characteristics such as target size, for a particular mission. However, this function cannot be used directly since it is not possible to determine a priori the noise, blur, and contrast, because these parameters cannot be set directly on a camera.
Calibration phase
The objective of the calibration phase is to measure in-situ, on the actual camera and the actual scene, all the data that are required to calculate a quality value from an f_quality function as determined during the learning phase.
Accordingly, the calibration phase comprises three objectives:
- determining or measuring the scene-dependent parameters, for example a target size and a target velocity;
- estimating functions to establish a link between each of the image quality parameters (for example the noise, blur, and contrast) and the camera settings (for example the gain (G) and the shutter speed (S)) as follows:
noise = f_noise_calibration(G, S), in short noise_cal(G, S)
blur = f_blur_calibration(G, S), in short blur_cal(G, S)
contrast = f_contrast_calibration(G, S), in short contrast_cal(G, S)
- estimating a function to establish a link between an image metric (for example the luminance) and the camera settings (for example the gain (G) and the shutter speed (S)). According to embodiments, luminance is used during the operation phase to infer new calibration functions when scene lighting is modified. It may be expressed as follows:
luminance = f_luminance_calibration(G, S), in short I_cal(G, S)
Figure 4 is a block diagram illustrating a first example of steps carried out during a calibration phase of an auto-setting method as illustrated in Figure 3.
As illustrated, a first step (step 400) is directed to selecting camera settings. According to embodiments, this step comprises exploring the manifold of all camera setting values, for example all pairs of gain and shutter speed values, and selecting a set of representative pairs in order to reduce the number of camera settings to analyse.
For the sake of illustration, the shutter speed values to be used may be selected as follows: S_0 = min(S) and S_(i+1) = S_i × 2, with index i varying from 0 to n so that S_n ≤ max(S) and S_(n+1) > max(S), and where min(S) is the smallest shutter speed and max(S) is the highest shutter speed.
If the shutter speeds that the camera accepts are discrete values, the shutter speeds are selected so that their values are the closest to the ones selected according to the previous relation (corresponding to a logarithmic scale).
Similarly, the gain values to be used may be selected according to a uniform linear scale as follows: G_0 = min(G) and G_(i+1) is determined such that I(G_(i+1)) / I(G_i) = I(S_(i+1)) / I(S_i), with index i varying from 0 to n such that G_n ≤ max(G) and G_(n+1) > max(G), and where I is the luminance of the image, min(G) is the smallest gain, and max(G) is the highest gain.
As a consequence, the gain and shutter speed values have an equivalent scale in terms of impact on the luminance. In other words, if luminance of the image is increased by a value A when shutter speed value goes from one value to the next, gain value is selected such that the luminance is also increased by the value A when moving from the current gain value to the next one.
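As a purely illustrative sketch (Python; the function names and the snapping strategy are assumptions, not part of the description above), the doubling shutter-speed ladder of step 400 might be built as follows, with an optional snap to the discrete shutter speeds a camera actually supports:

    import math

    def shutter_ladder(s_min, s_max, supported=None):
        # S0 = min(S), S(i+1) = 2 * S(i), stopping before exceeding max(S).
        values, s = [], s_min
        while s <= s_max:
            values.append(s)
            s *= 2.0
        if supported is not None:
            # If the camera only accepts discrete shutter speeds, keep the closest
            # supported value for each ladder step (closeness on a logarithmic scale).
            values = sorted({min(supported, key=lambda x: abs(math.log(x) - math.log(v)))
                             for v in values})
        return values

    # Example: a camera ranging from 1/8000 s to 1/25 s.
    print(shutter_ladder(1 / 8000, 1 / 25))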
After having selected a set of gain and shutter speed values at step 400, images are obtained from the camera set to these values (step 405). For the sake of illustration, three to ten images may be obtained, preferably during a short period of time, for each pair (G, S) of gain and shutter speed values.
In order to optimize the time for obtaining these images and the stability of the camera during acquisition of the images, the change of camera settings is preferably minimized, i.e. the settings of the camera are preferably changed from one gain and/or shutter value to the next ones (since it takes a longer time for a camera to proceed to large changes in gain and shutter speed).
Therefore, according to embodiments, images are obtained as follows for each of the selected gain and shutter speed values (a sketch of the resulting acquisition order is given below):
- the gain is set to its minimum value (min(G)) and all the selected values of the shutter speed are set one after the other in ascending order (from min(S) to max(S)), a number of three to ten images being obtained for each pair of values (G, S);
- the value of the gain is set to the next selected one and all the selected values of the shutter speed are set one after the other in descending order (from max(S) to min(S)), a number of three to ten images being obtained for each pair of values (G, S); and
- these two previous steps are repeated with the next values of the gain until images have been obtained for all selected values of the gain and shutter speed.
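A minimal sketch of that acquisition ordering (Python, hypothetical names) simply alternates the sweep direction of the shutter speed at each gain step, so that consecutive acquisitions always differ by a single setting step:

    def acquisition_order(gains, shutters):
        # Snake ordering: shutter speeds ascending at the first gain, descending at
        # the next gain, and so on, to minimise the change between consecutive settings.
        order = []
        for i, g in enumerate(gains):
            sweep = shutters if i % 2 == 0 else list(reversed(shutters))
            order.extend((g, s) for s in sweep)
        return order

    # Example: 3 gains x 4 shutter speeds gives 12 (G, S) pairs in snake order.
    print(acquisition_order([0, 6, 12], [0.001, 0.002, 0.004, 0.008]))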
Next, after having obtained images for all the selected values of the gain and shutter speed, an image metric is measured for all the obtained images (step 410), here the luminance, and an image quality analysis is performed for each of these images (step 415).
The measurement of the luminance aims at determining a relation between the luminance of an image and the camera settings used when obtaining this image, for example a gain and a shutter value. For each obtained image, the luminance is computed and associated with the corresponding gain and shutter speed values so as to determine the corresponding function or to build a 2-dimensional array wherein a luminance is associated with a pair of gain and shutter speed values (denoted I_cal(G, S)). According to embodiments, the luminance corresponds to the mean of pixel values (i.e. intensity values) for each pixel of the image.
According to embodiments, the entropy of the images is also computed during measurement of the luminance for making it possible to determine a contrast value during the image quality analysis. Like the luminance, the entropy is computed for each of the obtained images and associated with the corresponding gain and shutter speed values so as to determine the corresponding function or to build a 2-dimensional array wherein an entropy is associated with a pair of gain and shutter speed values (denoted E_cal(G, S)). According to embodiments, measurement of the entropy comprises the steps of:
- determining the histogram of the image pixel values, for each channel (i.e. for each component), that is to say counting the number of pixels c_m for each possible pixel value m (for example m varying from 0 to 255 if each component is coded with 8 bits); and
- computing the Shannon entropy according to the following relation:
E = −Σ_m (c_m / n) × log2(c_m / n)
where n is the total number of pixels in all channels.
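For illustration, the luminance and Shannon entropy measurements of steps 410 and 415 might be computed as follows (Python with numpy; the 8-bit assumption and function names are illustrative):

    import numpy as np

    def luminance(image):
        # Mean of the pixel intensity values over every pixel and every channel.
        return float(image.mean())

    def shannon_entropy(image, bits_per_channel=8):
        # Histogram of pixel values pooled over all channels, then
        # E = -sum_m (c_m / n) * log2(c_m / n).
        levels = 2 ** bits_per_channel
        counts, _ = np.histogram(image, bins=levels, range=(0, levels))
        n = counts.sum()
        p = counts[counts > 0] / n
        return float(-(p * np.log2(p)).sum())

    # Example with a synthetic 8-bit RGB image.
    img = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
    print(luminance(img), shannon_entropy(img))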
As described hereafter, the entropy may be determined as a function of the luminance (and not of the camera settings, e.g. gain and shutter speed). Such a relationship between the entropy and the luminance can be considered as valid for any environmental conditions (and not only the environmental conditions associated with the calibration).
Therefore, after having computed an entropy and a luminance for each of the obtained images, the entropy values are associated with the corresponding luminance values so as to determine the corresponding function or to build a 1-dimensional array wherein entropy is associated with luminance (denoted E(I)).
Turning back to Figure 4 and as described above, the image quality analysis (step 415) aims at determining image quality parameter values, for example values of noise, blur, and contrast from the images obtained at step 405, in order to establish a relationship between each of these parameters and the camera settings used for obtaining the corresponding images. During this step, a relationship between the contrast and the luminance is also established.
Noise values are measured for the obtained images and the measured values are associated with the corresponding gain and shutter speed values so as to determine the corresponding function or to build a 2-dimensional array wherein a noise value is associated with a pair of gain and shutter speed values (denoted noise_cal(G, S)).
According to an embodiment, the noise of an image is determined as a function of a set of several images (obtained in a short period of time) corresponding to the same camera settings and as a result of the following steps:
- removing the motion pixels, i.e. the pixels corresponding to objects in motion or, in other words, removing the foreground;
- computing a temporal variance for each pixel (i.e. the variance of the fluctuation of each pixel value over time, for each channel); and
- computing a global noise value for the set of images as the mean value of the computed variances over all pixels and all channels.
The obtained values make it possible to establish a relationship between the noise and the camera settings.
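A minimal sketch of this temporal-variance noise measurement is given below (Python with numpy; the crude frame-difference motion mask stands in for a real foreground-removal step and is an assumption of this sketch):

    import numpy as np

    def temporal_noise(frames, motion_threshold=10.0):
        # frames: burst of 3 to 10 images captured with the same (gain, shutter) pair.
        stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
        # Per-pixel temporal variance (fluctuation of each pixel value over time).
        variance = stack.var(axis=0)
        # Crude motion mask: drop pixels whose excursion over the burst is large
        # (a stand-in for removing the foreground / motion pixels).
        excursion = stack.max(axis=0) - stack.min(axis=0)
        static = excursion < motion_threshold
        if not static.any():
            return float(variance.mean())
        # Global noise value: mean variance over all static pixels and channels.
        return float(variance[static].mean())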
Likewise, blur values are computed for the obtained images so as to establish a relationship between the blur and the camera settings.
According to embodiments, a blur value is determined as a function of a target velocity and of a shutter speed according to the following relation:
blur = ||V_target|| × shutter_speed
where V_target is the target velocity, the blur value being given in pixels, the target velocity being given in pixels/second, and the shutter speed being given in seconds. Therefore, in view of the environmental conditions associated with the calibration phase (denoted "calibration environmental conditions"), the blur may be determined as follows:
blur_cal(S) = ||V_target|| × S
The target velocity may be predetermined, set by a user, or measured from a sequence of images as described hereafter.
The blur is computed for each of the obtained images according to this relation and the obtained values are associated with the corresponding shutter speed values (the gain does not affect the blur) so as to determine the corresponding function or to build a 1-dimensional array wherein a blur value is associated with shutter speed values (blur_cal(S)).
Similarly, the contrast is computed for each of the obtained images. It may be obtained from the entropy according to the following relation:
contrast = 2^entropy / 2^max_entropy
where, for example, max_entropy is equal to 8 when the processed images are RGB images and each component is encoded over 8 bits.
Accordingly, the contrast contrast_cal(G, S) may be obtained from the entropy E_cal(G, S). In other words, contrast values may be expressed as a function of the gain and of the shutter speed values from the entropy expressed as a function of the gain and of the shutter speed values.
Likewise, the contrast contrast(I) expressed as a function of the luminance may be obtained from the entropy E(I) that is also expressed as a function of the luminance. This can be done as a result of the following steps:
- measuring the entropy of each of the obtained images;
- determining the relationships between the measured entropy values and the camera settings, for example the gain and the shutter speed, denoted E_cal(G, S);
- obtaining the previously determined relationships between the luminance values and the camera settings, for example the gain and the shutter speed, denoted I_cal(G, S);
- discarding selected camera settings corresponding to gain values leading to noise values that exceed a predetermined noise threshold (the noise may have an impact on the entropy when the noise is too large and thus, by limiting noise to variance values below a predetermined threshold, for example 5 to 10, the impact is significantly reduced);
- gathering the remaining entropy values and luminance values, that are associated with gain and shutter speed values, to obtain a reduced data collection of entropy and luminance values sharing the same camera settings. This data collection makes it possible to establish the relationships between entropy and luminance values, for example by using simple regression functions such as a linear interpolation on the entropy and luminance values; and
- determining the relationships between the contrast and the entropy as a function of the luminance, for example according to the following relation:
contrast(I) = 2^E(I) / 2^max_entropy
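A sketch of these two contrast relationships is given below (Python with numpy; the threshold value and the linear fit are illustrative choices consistent with the description above):

    import numpy as np

    MAX_ENTROPY = 8.0  # 8-bit components

    def contrast_from_entropy(entropy):
        # contrast = 2^entropy / 2^max_entropy
        return 2.0 ** entropy / 2.0 ** MAX_ENTROPY

    def fit_contrast_of_luminance(luminances, entropies, noises, noise_threshold=10.0):
        # Keep only calibration points whose noise is below the threshold, fit E(I)
        # with a simple linear regression, and return contrast(I) = 2^E(I) / 2^max.
        lum = np.asarray(luminances, dtype=float)
        ent = np.asarray(entropies, dtype=float)
        keep = np.asarray(noises, dtype=float) < noise_threshold
        slope, intercept = np.polyfit(lum[keep], ent[keep], 1)
        return lambda i: contrast_from_entropy(slope * i + intercept)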
Turning back to Figure 4, it is illustrated how scene-dependent parameter values, for example target size and/or target velocity, may be obtained. To that end, short sequences of consecutive images, also called chunks, are obtained. For the sake of illustration, ten to twenty chunks representative of the natural diversity of the targets are obtained.
According to particular embodiments, chunks are recorded by using the auto-mode (although the result is not perfect, the chunk analysis is robust to the blur and to the noise and thus, does not lead to significant errors). A motion detector of the camera can be used to detect motion and thus, to select chunks to be obtained.
The recording duration depends on the time it takes to get enough targets to reach statistical significance (10 to 20 targets is generally enough). Depending on the case, it can take only a few minutes to several hours (if very few targets are spotted per hour).
In order to avoid waiting, it is possible to use chunk fetching instead of chunk recording (i.e. if the camera had already been used prior to the calibration step, the corresponding videos may be retrieved and used).
After being obtained, the chunks are analyzed to detect targets (step 425) to make it possible to estimate their size and optionally their velocity (step 430). This estimating step may comprise performing a statistical analysis of the values of the parameters of interest (e.g. target size, target velocity). Next, the mean, median, or any other suitable value extracted from the distribution of parameter values is computed and used as the value of reference.
The velocity of targets can be very accurately derived by tracking some points of interest of the target. By using this in combination with a background subtraction method (e.g. the known MOG or MOG2 method described, for example, in Zoran Zivkovic and Ferdinand van der Heijden, "Efficient adaptive density estimation per image pixel for the task of background subtraction", Pattern Recognition Letters, 27(7):773-780, 2006), it is possible to avoid the detection of the fixed points of interest from the background and thus to determine velocity with high accuracy even with blurry targets. The target velocity is simply the mean velocity of the points of interest.
Figure 5 illustrates an example of the distribution of the target velocity (or, similarly, the distribution of the velocity of the points of interest). From such a representation, a target velocity value may be obtained. For the sake of illustration, it can be chosen so as to correspond to the mean velocity of the given targets. Alternatively, one can choose a value corresponding to the "median 80%", i.e. a velocity value such that 80% of velocities are under this value and 20% of velocities are over this value.
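A one-line statistic such as the "median 80%" mentioned above may be extracted as follows (Python with numpy; illustrative sketch only):

    import numpy as np

    def reference_velocity(velocities, fraction=0.8):
        # "Median 80%": the velocity below which 80% of observed velocities fall.
        return float(np.quantile(np.asarray(velocities, dtype=float), fraction))

    # Example: velocities (pixels/second) of tracked points of interest.
    print(reference_velocity([12, 15, 18, 22, 25, 30, 31, 40, 55, 80]))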
The target size can be obtained through methods as simple as background subtraction, or more sophisticated ones like target detection algorithms (e.g. face recognition, human detection, or license plate recognition), which are more directly related to the detection of the targets corresponding to the task. Deep learning methods are also very effective. Outliers can be removed by using consensus-derived methods, or by using combinations of background subtraction and target detection at the same time. However, since only statistical results are obtained, it does not matter if some errors exist with such algorithms, since the errors should be averaged out to zero. This tolerance to errors makes such methods robust.
Operation phase
As described previously, the operation phase aims at improving camera settings, preferably at determining optimal (or near-optimal) camera settings for a current mission and current environmental conditions, without perturbing significantly the use of the camera. To that end, the operation phase is based on a prediction mechanism (and not on an exploration / measurement mechanism). It uses, in particular, the quality function (f_quality) determined in the learning phase, the relationships between image quality parameters and camera settings (e.g. noise_cal(G, S), blur_cal(G, S), and contrast_cal(G, S)) determined during the calibration phase, scene-dependent parameters also determined during the calibration phase, and image metrics relating to images obtained with the current camera settings.
Indeed, since the environmental conditions of the calibration phase and the current environmental conditions (i.e. during the operation phase) are not the same, the new relationships between image quality parameters and camera settings should be predicted so as to determine camera settings as a function of the quality function, without perturbing the camera.
According to embodiments, the noise may be predicted from the gain, independently from the shutter speed. Moreover, it is independent from lighting conditions. Therefore, the relationships between the noise and the gain for the current environmental conditions may be expressed as follows:
noise_current(G) = noise_cal(G)
wherein the noise value associated with a given gain value corresponds to the mean noise for this gain and all the shutter speed values associated with it.
If a noise value should be determined for a gain value that has not been selected during the calibration phase (i.e., if there is a gain value for which there is no corresponding noise value), a linear interpolation may be carried out.
Table 1 in the Appendix gives an example of the relationships between the noise and the gain.
Still according to embodiments, the blur may be determined as a function of the target velocity and the shutter speed as described above. It does not depend on lighting conditions. Accordingly, the relationships between the blur and the shutter speed for the current environmental conditions may be expressed as follows:
blur_current(S) = blur_cal(S)
Table 2 in the Appendix gives an example of the relationships between the blur and the shutter speed.
Still according to embodiments, prediction of the contrast as a function of the camera settings according to the current environmental conditions (denoted contrast_current(G, S)) comprises prediction of the luminance as a function of the camera settings for the current environmental conditions (denoted I_current(G, S)) and the use of the relationships between the contrast and the luminance (contrast(I)) according to the following relation:
contrast_current(G, S) = contrast(I_current(G, S))
Prediction of the luminance as a function of the camera settings for the current environmental conditions (I_current(G, S)) may be based on the luminance expressed as a function of the camera settings for the calibration environmental conditions (denoted I_cal(G, S)) and on a so-called shutter shift method.
The latter is based on the assumption that there is a formal similarity between a change in lighting conditions and a change in shutter speed. Based on this assumption, the current luminance I_act may be expressed as follows:
I_act = I_current(G_act, S_act) = I_cal(G_act, S_act + ΔS)
where (G_act, S_act) are the current camera settings and ΔS is a shutter speed variation.
Therefore, the relationship between the luminance and the camera settings for the current environmental conditions may be determined as follows (a sketch is given below):
- interpolating the computed luminance values I_cal(G, S) to obtain a continuous or pseudo-continuous function;
- for the current gain G_act, determining ΔS so that I_cal(G_act, S_act + ΔS) = I_act, for example by using the inverse function of the luminance expressed as a function of the shutter speed (for the current gain G_act), i.e. the shutter speed expressed as a function of the luminance, and computing ΔS as ΔS = S_cal(I_act) − S_act; and
- determining the whole function I_current(G, S) by using the formula I_current(G, S) = I_cal(G, S + ΔS).
However, while the assumption that there is a formal similarity between a change in lighting conditions and a change in shutter speed is correct in the vicinity of the current camera settings, it is not always true for distant camera settings. Accordingly, an iterative process may be used to determine the camera settings to be used, as described hereafter.
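The shutter shift prediction may be sketched as follows (Python with numpy and scipy; the interpolation scheme and the numerical inversion on a fine shutter grid are implementation assumptions):

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def predict_current_luminance(gains, shutters, lum_cal, g_act, s_act, i_act):
        # lum_cal[i, j] = I_cal(gains[i], shutters[j]) measured during calibration.
        interp = RegularGridInterpolator((gains, shutters), lum_cal,
                                         bounds_error=False, fill_value=None)
        # Invert I_cal(G_act, .) numerically: find the shutter speed giving I_act.
        fine_s = np.linspace(shutters[0], shutters[-1], 2000)
        lum_at_g_act = interp(np.column_stack([np.full_like(fine_s, g_act), fine_s]))
        s_for_i_act = fine_s[int(np.argmin(np.abs(lum_at_g_act - i_act)))]
        delta_s = s_for_i_act - s_act
        # I_current(G, S) = I_cal(G, S + delta_s)
        def i_current(g, s):
            return float(interp([[g, s + delta_s]])[0])
        return i_current, delta_s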
Table 3 in the Appendix gives an example of the relationships between the contrast and the gain and the shutter speed.
After having predicted the image quality parameters for the current environmental conditions, optimization of the current camera settings may be carried out. It may be based on a grid search algorithm according to the following steps (a sketch is given below):
- sampling the manifold of possible gain and shutter speed values to create a 2D grid of different (G_pred, S_pred) pairs;
- for each of the (G_pred, S_pred) pairs, denoted (G_pred_i, S_pred_i), computing the values of the image quality parameters according to the previous predictions (noise_current(G_pred_i), blur_current(S_pred_i), and contrast_current(I(G_pred_i, S_pred_i)));
- for each (G_pred_i, S_pred_i) pair, computing a score as a function of the quality function determined during the learning phase, of the current mission (mission_act), and of the computed values of the image quality parameters as follows:
score_i = f_quality(mission_act)(noise_current(G_pred_i), blur_current(S_pred_i), contrast_current(I(G_pred_i, S_pred_i)), target size, target velocity)
where the target size and target velocity values have been calculated during the calibration phase; and
- identifying the best score (or one of the best scores), i.e. max(score_i), to determine the camera settings to be used, i.e. (G_next, S_next) = argmax(score_i).
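The grid search itself reduces to a double loop over candidate settings; a sketch is given below (Python; all callables are assumed to be the predicted relationships and the quality function described above):

    def select_next_settings(gains, shutters, noise_of_gain, blur_of_shutter,
                             contrast_of_luminance, luminance_of_settings,
                             f_quality, target_size, target_velocity):
        # Score every candidate (G_pred, S_pred) pair and keep the argmax.
        best_score, best_pair = float("-inf"), None
        for g in gains:
            for s in shutters:
                score = f_quality(noise_of_gain(g),
                                  blur_of_shutter(s),
                                  contrast_of_luminance(luminance_of_settings(g, s)),
                                  target_size, target_velocity)
                if score > best_score:
                    best_score, best_pair = score, (g, s)
        return best_pair, best_score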
Table 4 in the Appendix gives an example of the relationships between the score and the gain and the shutter speed.
In order to improve the accuracy of the camera settings, the latter may be determined on an iterative basis (in particular to take into account that the assumption that there is a formal similarity between a change in lighting conditions and a change in shutter speed is not always true for distant camera settings).
Accordingly, after the next camera settings have been determined, as described above, and set, the luminance corresponding to these next camera settings is predicted (I_pred = I_current(G_next, S_next)), a new image corresponding to these camera settings is obtained, and the luminance of this image is computed. The predicted luminance and the computed luminance are compared.
If the difference between the predicted luminance and the computed luminance exceeds a threshold, for example a predetermined threshold, the process is repeated to determine new camera settings. The process may be repeated until the difference between the predicted luminance and the computed luminance is less than the threshold or until the camera settings are stable.
It is to be noted that regions of interest (ROIs) may be taken into account for determining image quality parameter values (in such a case, the image quality parameter values are determined from the ROIs only) and for optimizing camera settings.
Figure 6 illustrates an example of steps for determining new camera settings during the operational use of a camera, without perturbing the use of the camera. This may correspond at least partially to step 315 in Figure 3.
As illustrated, the first steps are directed to:
- obtaining images (step 600) from a camera set with the current camera settings, from which an actual luminance (I_act) may be computed;
- obtaining these camera settings (step 605), i.e. the actual gain and shutter speed (G_act and S_act) in the given example; and
- obtaining the relationships (step 615) between the contrast and the camera settings for the calibration environmental conditions (contrast_cal(G, S)), between the contrast and the luminance (contrast(I)), and between the luminance and the camera settings for the calibration environmental conditions (I_cal(G, S)).
Next, the relationships between the luminance and the camera settings for the current environmental conditions (I_current(G, S)) and the relationship between the contrast and the camera settings for the current environmental conditions (contrast_current(G, S)) are predicted (step 620), for example using the method and formula described above.
In parallel, before, or after, the quality function (f_quality), the relationships between the noise and the camera settings for the calibration environmental conditions (noise_cal(G, S)), the relationships between the blur and the camera settings for the calibration environmental conditions (blur_cal(G, S)), and the scene-dependent parameter values, e.g. the target size and preferably the target velocity, are obtained (step 625).
Next, these relationships as well as the relationships between the contrast and the camera settings for the current environmental conditions (contrast_current(G, S)) are used to predict image quality parameter values for possible gain and shutter speed values (step 630). As described above, these image quality parameter values may be computed for different (G_pred, S_pred) pairs forming a 2D grid.
These image quality parameter values are then used with the scene-dependent parameter values to compute scores according to the previously obtained quality function (step 635). According to embodiments, a score is computed for each of the predicted image quality parameter values.
Next, optimized camera settings are selected as a function of the obtained scores and the settings of the camera are modified accordingly (step 640).
According to embodiments, it is determined whether or not predetermined criteria are met (step 645), for example whether or not the actual luminance of an obtained image is close to the predicted luminance.
If the criteria are met, the process is stopped until a new optimization of the camera settings should be made. Otherwise, if the criteria are not met, new camera settings are estimated, as described above.
While the process described above aims at optimizing camera settings on a request basis, for example upon request of a user, it is possible to control automatically the triggering of the process of auto-setting camera parameters. It is also possible to pre-determine camera settings so that as soon as conditions have changed significantly, new settings are applied instantaneously without calculations. Such an automatic process presents several advantages among which are: the whole operation phase is automated and can be run continuously without any user decision; the time needed to make changes of camera settings is much reduced between the decision to change and the change itself; and such an auto-setting-monitored system is able to react very quickly to a sudden change of environment conditions such as on/off lighting.
To that end, the current camera setting values and the luminance value should be obtained on a regular basis. The other steps of the operation phase remain basically the same since computations are based on these values and on values determined during the calibration phase.
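A minimal sketch of such a continuously running monitor is given below, assuming that the camera can be polled periodically for its settings and for the luminance of the current image; the polling period, the tolerance and the re_optimize callable are assumptions, not elements of the disclosed embodiments.

```python
import time

def monitor(camera, re_optimize, luminance_tolerance=10.0, period_s=1.0):
    """Sketch of an automatic trigger: poll (G_act, S_act, l_act) on a regular
    basis and re-run the operation phase when conditions have changed."""
    g_ref = s_ref = l_ref = None
    while True:
        g_act, s_act = camera.get_settings()
        l_act = camera.measure_luminance()
        changed = (l_ref is None
                   or abs(l_act - l_ref) > luminance_tolerance    # e.g. lights switched on/off
                   or (g_act, s_act) != (g_ref, s_ref))
        if changed:
            re_optimize(g_act, s_act, l_act)                      # operation phase of Figure 6
            g_ref, s_ref = camera.get_settings()
            l_ref = camera.measure_luminance()
        time.sleep(period_s)
```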
According to particular embodiments, predicting image quality parameter values (steps 620 and 630 in Figure 6), determining scores for camera settings (step 635 in Figure 6), and enabling selection of camera settings are carried out in advance, for example at the end of the calibration phase, for all (or many) possible measurement values such as the gain, shutter speed and luminance (G, S, l).
This leads to a best camera setting function that gives optimized camera settings as a function of camera settings and luminance in view of the values obtained during the calibration phase. Such a best camera setting function may be expressed as follows: (G_next, S_next) = best_camera_settings(G, S, l). To determine such a continuous function, a simple data regression or an interpolation may be used.
The operation phase mainly consists in measuring the current camera setting values and the luminance of the current image (G_act, S_act, l_act) and in determining optimized camera settings using the best camera setting function determined during the calibration phase. If the determined optimal camera setting values (G_next, S_next) are different from the current values (G_act, S_act), the camera settings are changed.
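One possible realisation of this precomputed function is sketched below: the optimized settings are stored for every grid point (G, S, l) at calibration time, and a nearest-grid-point lookup (one simple stand-in for the regression or interpolation mentioned above) is used at operation time. All names are illustrative assumptions.

```python
import numpy as np

def build_best_settings_table(g_values, s_values, l_values, select_for_measurement):
    """Calibration time: store the optimized (G_next, S_next) for every (G, S, l)
    grid point. select_for_measurement is a hypothetical callable implementing
    the score-based selection described above."""
    return {(g, s, l): select_for_measurement(g, s, l)
            for g in g_values for s in s_values for l in l_values}

def best_camera_settings(table, g_values, s_values, l_values, g_act, s_act, l_act):
    """Operation time: map the measured (G_act, S_act, l_act) to the nearest grid
    point and return the precomputed optimized settings."""
    nearest = lambda grid, x: grid[int(np.argmin(np.abs(np.asarray(grid) - x)))]
    key = (nearest(g_values, g_act), nearest(s_values, s_act), nearest(l_values, l_act))
    return table[key]      # (G_next, S_next); applied only if different from (G_act, S_act)
```

A smoother behaviour could be obtained by replacing the nearest-grid-point lookup with an actual interpolation or regression over the grid, as suggested in the description.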
Figure 7 is a block diagram illustrating a second example of steps carried out during a calibration phase of an auto-setting method as illustrated in Figure 3.
The steps illustrated in Figure 7 differ as a whole from those of Figure 4 in that they comprise steps of predicting image quality parameter values (step 700), of determining scores for camera settings and luminance values (step 705), and of determining a function for determining camera settings (step 710), for all possible camera setting values and for all possible luminance values (G, S, l).
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive, the invention being not restricted to the disclosed embodiment. Other variations on the disclosed embodiment can be understood and performed by those skilled in the art, in carrying out the claimed invention, from a study of the drawings, the disclosure and the appended claims.
Such variations may derive, in particular, from combining embodiments as set forth in the summary of the invention and/or in the appended claims.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used. Any reference signs in the claims should not be construed as limiting the scope of the invention.
APPENDIX
Table 1: relationships between the noise and the gain

Gain:  G0 | G1 | G2 | Gn
Noise: noise_current(G0) | noise_current(G1) | noise_current(G2) | noise_current(Gn)

Table 2: relationships between the blur and the shutter speed

Shutter speed: S0 | S1 | S2 | Sn
Blur:          blur_current(S0) | blur_current(S1) | blur_current(S2) | blur_current(Sn)

Table 3: relationships between the contrast and the gain and the shutter speed

Gain \ Shutter speed: S0 | S1 | S2 | Sn
G0: contrast_current(G0, S0) | contrast_current(G0, S1) | contrast_current(G0, S2) | contrast_current(G0, Sn)
G1: contrast_current(G1, S0) | contrast_current(G1, S1) | contrast_current(G1, S2) | contrast_current(G1, Sn)
G2: contrast_current(G2, S0) | contrast_current(G2, S1) | contrast_current(G2, S2) | contrast_current(G2, Sn)
Gn: contrast_current(Gn, S0) | contrast_current(Gn, S1) | contrast_current(Gn, S2) | contrast_current(Gn, Sn)

Table 4: relationships between the score and the gain and the shutter speed

Gain \ Shutter speed: S0 | S1 | S2 | Sn
G0: score(G0, S0) | score(G0, S1) | score(G0, S2) | score(G0, Sn)
G1: score(G1, S0) | score(G1, S1) | score(G1, S2) | score(G1, Sn)
G2: score(G2, S0) | score(G2, S1) | score(G2, S2) | score(G2, Sn)
Gn: score(Gn, S0) | score(Gn, S1) | score(Gn, S2) | score(Gn, Sn)
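For illustration only, the four tables above could be held in memory as simple lookup structures, as sketched below; the gain and shutter speed values are placeholders, not values taken from the description, and the entries are filled in during the calibration or operation phase.

```python
gains = [1, 2, 4, 8]                            # G0 .. Gn (placeholder values)
shutter_speeds = [1/500, 1/250, 1/125, 1/60]    # S0 .. Sn (placeholder values)

noise_by_gain = {g: None for g in gains}                      # Table 1: noise_current(G)
blur_by_shutter = {s: None for s in shutter_speeds}           # Table 2: blur_current(S)
contrast_by_pair = {(g, s): None                              # Table 3: contrast_current(G, S)
                    for g in gains for s in shutter_speeds}
score_by_pair = {(g, s): None                                 # Table 4: score(G, S)
                 for g in gains for s in shutter_speeds}
```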

Claims (14)

  1. A method of controlling settings of a camera, the method comprising:
learning a first function representing a relationship between a plurality of image characteristic values and camera parameters at a first condition;
determining a second function by adapting the first function to a second condition based on the plurality of the image characteristic values at the second condition and on the first function;
selecting camera parameter values for the second condition based on the determined second function; and
setting the camera according to the selected camera parameter values.
  2. The method of claim 1, wherein the first function is learned in accordance with the characteristic values corresponding to change of at least one of the camera parameters.
  3. The method of any one of claims 1 and 2, wherein the first function is a function for calculating a quality value of an image captured by the camera in the first condition, and the second function is a function for calculating a quality value of an image captured by the camera in the second condition different from the first condition.
  4. The method of claim 3, wherein the camera parameters are selected by the quality value obtained by the second function.
  5. The method of any one of claims 1 to 4, wherein the first function and the second function are learned in accordance with a mission chosen by a user.
  6. The method of claim 5, wherein the mission is represented by a scene-dependent parameter containing a target size and a target velocity.
  7. The method of any one of claims 1 to 6, wherein the determining of the second function comprises determining relationships between at least one of the image characteristic values and at least one of the camera parameters.
  8. The method of any one of claims 1 to 7, wherein the determining of the second function comprises determining relationships between conditions and at least one of the camera parameters.
  9. The method of claim 8, wherein the determining of the relationships between conditions and at least one of the camera parameters comprises determining relationships between the first condition and the second condition as a function of at least one of the camera parameters.
  10. The method of any one of claims 1 to 9, wherein the image characteristics comprise noise, blur, and/or contrast.
  11. The method of any one of claims 1 to 10, wherein the camera parameters comprise a gain and/or a shutter speed.
  12. A computer program product for a programmable apparatus, the computer program product comprising instructions for carrying out each step of the method according to any one of claims 1 to 11 when the program is loaded and executed by a programmable apparatus.
  13. A non-transitory computer-readable storage medium storing instructions of a computer program for implementing the method according to any one of claims 1 to 11.
  14. A device for controlling settings of a camera, the device comprising a microprocessor configured for carrying out the steps of:
learning a first function representing a relationship between a plurality of image characteristic values and camera parameters at a first condition;
determining a second function by adapting the first function to a second condition based on a plurality of the image characteristic values at the second condition and on the first function;
selecting camera parameter values for the second condition based on the determined second function; and
setting the camera according to the selected camera parameter values.
GB2116127.8A 2017-07-03 2017-07-03 Method and system for auto-setting of cameras Active GB2597873B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2116127.8A GB2597873B (en) 2017-07-03 2017-07-03 Method and system for auto-setting of cameras

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1710675.8A GB2564387B (en) 2017-07-03 2017-07-03 Method and system for auto-setting of cameras
GB2116127.8A GB2597873B (en) 2017-07-03 2017-07-03 Method and system for auto-setting of cameras

Publications (2)

Publication Number Publication Date
GB2597873A true GB2597873A (en) 2022-02-09
GB2597873B GB2597873B (en) 2022-07-20

Family

ID=59592375

Family Applications (2)

Application Number Title Priority Date Filing Date
GB2116127.8A Active GB2597873B (en) 2017-07-03 2017-07-03 Method and system for auto-setting of cameras
GB1710675.8A Active GB2564387B (en) 2017-07-03 2017-07-03 Method and system for auto-setting of cameras

Family Applications After (1)

Application Number Title Priority Date Filing Date
GB1710675.8A Active GB2564387B (en) 2017-07-03 2017-07-03 Method and system for auto-setting of cameras

Country Status (1)

Country Link
GB (2) GB2597873B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2570448B (en) * 2018-01-23 2021-12-22 Canon Kk Method and system for improving auto-setting of cameras
EP3649774A1 (en) 2017-07-03 2020-05-13 C/o Canon Kabushiki Kaisha Method and system for auto-setting cameras
CN108629814B (en) * 2018-05-14 2022-07-08 北京小米移动软件有限公司 Camera adjusting method and device
GB2586653B (en) * 2019-09-02 2023-04-12 Milestone Systems As Method and system for improving settings of a camera images of which are used to perform a particular task

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5751844A (en) * 1992-04-20 1998-05-12 International Business Machines Corporation Method and apparatus for image acquisition with adaptive compensation for image exposure variation
US6301440B1 (en) * 2000-04-13 2001-10-09 International Business Machines Corp. System and method for automatically setting image acquisition controls
EP3574644A1 (en) * 2017-01-28 2019-12-04 Microsoft Technology Licensing, LLC Real-time semantic-aware camera exposure control

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4345825A (en) * 1979-12-13 1982-08-24 Eastman Kodak Company Apparatus for automatic control of a photographic camera
JP4388906B2 (en) * 2005-03-01 2009-12-24 株式会社リコー Imaging device
JP2008275826A (en) * 2007-04-27 2008-11-13 Nikon Corp Camera system and electronic camera
SE1150505A1 (en) * 2011-05-31 2012-12-01 Mobile Imaging In Sweden Ab Method and apparatus for taking pictures
US10148890B2 (en) * 2015-05-01 2018-12-04 Olympus Corporation Image pickup apparatus and method for controlling the same to prevent display of a through image from being stopped when a shutter unit is not completely opened
GB2570448B (en) * 2018-01-23 2021-12-22 Canon Kk Method and system for improving auto-setting of cameras

Also Published As

Publication number Publication date
GB2564387A (en) 2019-01-16
GB2597873B (en) 2022-07-20
GB201710675D0 (en) 2017-08-16
GB2564387B (en) 2021-12-22

Similar Documents

Publication Publication Date Title
US11943541B2 (en) Method and system for auto-setting of cameras
GB2597873A (en) Method and system for auto-setting of cameras
US11288101B2 (en) Method and system for auto-setting of image acquisition and processing modules and of sharing resources in large scale video systems
KR101369062B1 (en) Motion information assisted 3a techniques
US10630889B1 (en) Automatic camera settings configuration for image capture
US11558549B2 (en) Methods and devices for capturing high-speed and high-definition videos
US10410065B2 (en) Dynamic parametrization of video content analytics systems
US11050924B2 (en) Method and system for auto-setting of cameras
US11418701B2 (en) Method and system for auto-setting video content analysis modules
US8798369B2 (en) Apparatus and method for estimating the number of objects included in an image
GB2570448A (en) Method and system for improving auto-setting of cameras
GB2587769A (en) Method and system for updating auto-setting of cameras
WO2021053069A1 (en) Method, device, and computer program for setting parameters values of a video source device
KR20180112335A (en) Method for defoging and Defog system
US11696014B2 (en) Method and system for improving settings of a camera images of which are used to perform a particular task
US11924404B2 (en) Method, device, and non-transitory computer-readable medium for setting parameters values of a video source device
JP2021093694A (en) Information processing apparatus and method for controlling the same
JP2019028802A (en) Image processing apparatus, method of controlling the same, and program therefor
KR102149270B1 (en) Method and system for reducing noise by controlling lens iris
JP2002152669A (en) Moving picture processor, moving picture processing method and recording medium
Kim et al. Visual model of human blur perception for scene adaptive capturing
CN117911278A (en) Large scene monitoring image definition processing method and processing system