CN111629141B - Video camera


Info

Publication number
CN111629141B
CN111629141B (application CN201910144740.5A)
Authority
CN
China
Prior art keywords
sample
regression model
class
sample points
points
Prior art date
Legal status
Active
Application number
CN201910144740.5A
Other languages
Chinese (zh)
Other versions
CN111629141A
Inventor
Wang Huan (王欢)
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910144740.5A
Publication of CN111629141A
Application granted
Publication of CN111629141B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Automatic Focus Adjustment (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a camera, a focus control method, and a focus control device. Regression models are created separately for each sample class of angle coordinates; whenever a motion command drives the movement to rotate, the sample class matching the angle coordinate in the command is identified and the corresponding regression model is called to predict a focus value suited to that angle coordinate. Because the focus value is predicted without the search-and-adjust cycle and definition evaluation of a search algorithm, and without a range finder, the real-time performance and environmental interference resistance of the camera's auto-focus can be improved without instrument assistance.

Description

Video camera
Technical Field
The invention relates to the technical field of security monitoring, and in particular to a camera and to a focus control method and focus control device suitable for the camera.
Background
A camera in the security field needs to pan and tilt across its monitoring range, and since the distance between a target in that range and the camera is uncertain, the camera must auto-focus in order to capture images of sufficient brightness and definition.
A common auto-focus approach is based on a search algorithm: a focus value that brings the image to its definition peak is searched within a certain focusing range, and the focus value corresponding to that peak is used for focusing.
However, this approach takes a long time to adjust focus and is easily affected by the environment in which the camera is located. For example, harsh conditions such as night, heavy rain, or fog strongly interfere with image definition, so an accurate definition peak cannot be found and the resulting focus value may leave the image out of focus.
There is also a focusing method that relies on a range finder: it measures the actual distance to the photographed target and focuses with reference to the measured distance. However, introducing a range finder increases the cost of security monitoring, and if the range finder fails, auto-focus is disabled outright.
Therefore, how to improve the real-time performance and environmental interference resistance of camera auto-focus without instrument assistance is a technical problem that urgently needs to be solved in the prior art.
Disclosure of Invention
In view of the above, the present application provides a camera, a focus control method, and a focus control apparatus that can improve the real-time performance and environmental interference resistance of camera auto-focus without instrument assistance.
In one embodiment, there is provided a camera comprising a movement and a processor, wherein the processor is configured to:
extracting an angle coordinate from a motion instruction indicating the rotation of the movement;
determining a sample class matched with the extracted angle coordinate;
calling a regression model corresponding to the determined sample class to predict a focus value;
the drive cartridge uses the predicted focus value for focusing.
Optionally, the processor is further configured to:
detecting the brightness and definition of an image shot by the movement;
and when the brightness and the definition of the image are detected to reach the standard, storing the angle coordinate and the focus value corresponding to the image as sample points.
Optionally, the processor is further configured to:
when the total number of the stored sample points reaches a preset clustering threshold value, clustering all the sample points according to the angle coordinates of all the sample points;
after all the sample points are clustered into a plurality of sample classes through clustering processing, a corresponding regression model is respectively created for each sample class according to the angle coordinates and focus values of all the sample points contained in each sample class.
Optionally, the processor is further configured to:
when the updating time arrives, identifying whether a new sample point is stored after the last updating time;
when newly added sample points are identified, clustering all the sample points according to the angle coordinates of all the sample points;
after all current sample points are re-clustered into a plurality of sample classes through clustering processing, a corresponding regression model is respectively created for each sample class according to the angle coordinates and focus values of all sample points contained in each sample class.
Optionally, the processor is further configured to:
when the updating time arrives, identifying whether a new sample point is stored after the last updating time;
when a newly added sample point is identified, classifying the newly added sample point into a matched sample class;
when a sample class is classified into a newly added sample point, reconstructing a corresponding regression model for the sample class according to angle coordinates and focus values of all sample points currently contained in the sample class;
and after the regression model is reconstructed for the sample class, calculating the regression loss of the reconstructed regression model and the existing regression model, and keeping the regression model with relatively low regression loss as the regression model corresponding to the sample class.
Optionally, the processor is further configured to:
when the total number of the stored sample points reaches a clustering threshold value, clustering all the sample points according to the angle coordinates of all the sample points;
after all the sample points are clustered into a plurality of sample classes through clustering processing, counting the number of the sample points contained in each sample class;
and when the number of the sample points included in the sample class reaches a preset modeling threshold value, establishing a corresponding regression model for the sample class according to the angle coordinates and the focus values of all the sample points included in the sample class.
Optionally, the processor is further configured to:
when the updating time arrives, identifying whether a new sample point is stored after the last updating time;
when newly added sample points are identified, clustering all the sample points according to the angle coordinates of all the sample points;
after all current sample points are re-clustered into a plurality of sample classes through clustering processing, counting the number of the sample points contained in each sample class;
and when the number of the sample points contained in the sample class reaches a preset modeling threshold value, establishing a corresponding regression model for the sample class according to the angle coordinates and the focus values of all the sample points contained in the sample class.
Optionally, the processor is further configured to:
when the updating time arrives, identifying whether a new sample point is stored after the last updating time;
when a newly added sample point is identified, classifying the newly added sample point into a matched sample class;
when a sample class contains newly added sample points, counting the number of the sample points currently contained in the sample class;
when the number of the sample points in the sample class containing the newly added sample points reaches a preset modeling threshold value, establishing a corresponding regression model for the sample class according to the angle coordinates and focus values of all the sample points currently contained in the sample class;
when a sample class simultaneously corresponds to the currently created regression model and the existing regression model, calculating the regression loss amount of the reconstructed regression model and the existing regression model, and keeping the regression model with relatively low regression loss amount as the regression model corresponding to the sample class.
Optionally, the processor is further configured to:
during the initialization period of the camera, calculating the regression accuracy of each regression model;
when the regression accuracy of the regression model corresponding to the sample class is lower than a preset calibration value, according to the angle coordinates and the focus values of all sample points contained in the sample class, the corresponding regression model is created for the sample class again, and the regression model with the accuracy lower than the calibration value is replaced.
In another embodiment, there is provided a focus control method including:
extracting an angle coordinate from a motion instruction indicating the rotation of the movement;
determining a sample class to which the extracted angle coordinate belongs;
calling a regression model corresponding to the determined sample class to predict a focus value;
driving the movement to focus using the predicted focus value.
In another embodiment, there is provided a focus control apparatus including:
the coordinate extraction module is used for extracting an angle coordinate from a motion instruction indicating the rotation of the movement;
the sample matching module is used for determining a sample class matched with the extracted angle coordinates;
the model prediction module is used for calling a regression model corresponding to the determined sample class to predict the focus value;
and the focusing driving module is used for driving the movement to focus by using the predicted focus value.
In another embodiment, a non-transitory computer-readable storage medium is provided that stores instructions that, when executed by a processor, cause the processor to:
extracting an angle coordinate from a motion instruction indicating the rotation of the movement;
determining a sample class matched with the extracted angle coordinate;
calling a regression model corresponding to the determined sample class to predict a focus value;
the drive core is focused using the predicted focus value.
As can be seen from the above, the embodiments create regression models separately for each sample class of angle coordinates; each time a motion command drives the movement to rotate, the sample class matching the angle coordinate in the command can be identified and the corresponding regression model called to predict a focus value suited to that angle coordinate. Because the focus value is predicted without the search-and-adjust cycle and definition evaluation of a search algorithm, and without a range finder, the real-time performance and environmental interference resistance of the camera's auto-focus can be improved without instrument assistance.
Moreover, the sample classes and regression models are obtained and updated from sample points whose image brightness and definition meet the requirements, and such sample points accumulate gradually as the camera is used, so the focusing effect of the focus values predicted by the regression models grows stronger the longer the camera is used.
Drawings
The following drawings are only schematic illustrations and explanations of the present invention, and do not limit the scope of the present invention:
fig. 1 is an exemplary configuration diagram of a video camera in an embodiment of the present invention;
FIG. 2 is a schematic diagram of the spatial coordinate system of the camera shown in FIG. 1;
FIG. 3 is a diagram illustrating an example of sample point clustering in an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating the creation of a regression model according to an embodiment of the present invention;
FIG. 5 is an exemplary flowchart of a focus control method in an embodiment of the present invention;
FIGS. 6a and 6b are schematic diagrams illustrating an initial model creation process of a focus control method according to an embodiment of the present invention;
FIGS. 7a and 7b are diagrams illustrating a flow of an example focus control of a focus control method according to an embodiment of the present invention;
FIGS. 8a and 8b are schematic diagrams illustrating an extended flow of sample collection in a focus control method according to an embodiment of the present invention;
FIGS. 9a and 9b are schematic diagrams illustrating a model enhancement updating process of a focus control method according to an embodiment of the present invention;
FIGS. 10a and 10b are schematic diagrams illustrating an initialization verification process of a focus control method according to an embodiment of the present invention;
FIG. 11 is a schematic view showing an exemplary configuration of a focus control apparatus in the embodiment of the present invention;
fig. 12a to 12d are schematic views illustrating an expanded structure of the focus control device in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and examples.
Fig. 1 is an exemplary configuration diagram of a video camera in an embodiment of the present invention. Referring to fig. 1, in one embodiment, the camera includes a base 11 and a cartridge 12, and the cartridge 12 is rotatably mounted to the base 11.
The rotation coordinates of the movement 12 include an angle coordinate in the horizontal direction (Pan) and an angle coordinate in the vertical direction (Tilt); accordingly, these two angle coordinates may be referred to simply as PT angle coordinates.
The movement 12 also contains a lens group and an image sensor, and focusing the camera may be regarded as adjusting the position of the lens group relative to the image sensor so that the focal point of the lens group falls on the image sensor. The focal point is the point where parallel rays converge after being refracted by the lenses in the lens group, and the focus value used for focusing can be regarded as a coordinate value for adjusting the position of the lens group relative to the image sensor.
Fig. 2 is a schematic view of the spatial coordinate system of the camera shown in fig. 1. Referring to fig. 2, in an angular coordinate system with the movement 12 as the coordinate origin O, the position of a target such as the target point A in fig. 2 is specified by its PT angle coordinates, expressed as (P_A, T_A).
Referring back to fig. 1, a processor 10 is also provided in the base 11 for driving the rotation of the movement 12 according to the PT angular coordinate in the motion command sent to the camera and for controlling the focusing of the movement 12. The processor 10 may control the manner in which the movement 12 focuses based on the manner in which the focus value is predicted.
In particular, predicting the focus value relies on clustering sample points and creating regression models. A sample point is a PT coordinate point at which the brightness and definition of the captured image satisfy the requirements; that is, each sample point's position is represented by its PT angle coordinates, accompanied by the focus value that was in use when the image captured at that point was focused.
Fig. 3 is a schematic diagram of an example of sample point clustering in the embodiment of the present invention. Referring to fig. 3, all sample points may be clustered according to the PT angle coordinates of each sample point (for example, using any one of the clustering algorithms such as kMeans, etc.) to cluster sample points with close PT angle coordinates into one class, and fig. 3 is only an example of clustering all sample points into three sample classes 20a, 20b, and 20c, but this does not mean that the number of sample classes is limited.
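For illustration only, the following sketch shows how this clustering step might look in Python, assuming scikit-learn's KMeans; the SamplePoint structure, the fixed class count, and all names are assumptions of this example, not part of the patent.

```python
from dataclasses import dataclass

import numpy as np
from sklearn.cluster import KMeans


@dataclass
class SamplePoint:
    pan: float    # horizontal angle coordinate (P)
    tilt: float   # vertical angle coordinate (T)
    focus: float  # focus value used when the image shot at (P, T) met the standard


def cluster_sample_points(samples, n_classes=3):
    """Group sample points into sample classes by their PT angle coordinates only."""
    coords = np.array([[s.pan, s.tilt] for s in samples])
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(coords)
    classes = [[] for _ in range(n_classes)]
    for sample, label in zip(samples, labels):
        classes[label].append(sample)
    return classes
```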
Fig. 4 is a schematic diagram of regression model creation in the embodiment of the present invention. Referring to fig. 4, for the same sample class 20, there may be a certain relationship between the PT angle coordinates and the focus values of the sample points included therein, and a regression model 30 representing such a relationship may be obtained by performing regression analysis S200 on the PT angle coordinates and the focus values of the sample points in the same sample class. The regression model 30 obtained by the regression analysis S200 may be a linear regression model or a nonlinear regression model.
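A corresponding sketch of regression-model creation (the regression analysis S200), continuing the assumed SamplePoint type above; a linear model over (P, T) is used here for brevity, though as the text notes the model may equally be nonlinear.

```python
import numpy as np
from sklearn.linear_model import LinearRegression


def create_regression_model(sample_class):
    """Fit focus = f(pan, tilt) over the sample points of one sample class."""
    coords = np.array([[s.pan, s.tilt] for s in sample_class])
    focus_values = np.array([s.focus for s in sample_class])
    return LinearRegression().fit(coords, focus_values)
```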
Based on the sample-class clustering and regression-model creation described above, each time a motion command drives the movement 12 to rotate, the sample class matching the PT angle coordinate in the command may be identified first: for example, the offset of the command's PT angle coordinate from the central PT angle coordinate of each sample class is computed, and the sample class with the smallest offset is taken as the match. Then, provided the regression model corresponding to the matched sample class can be called successfully, a focus value suited to that PT angle coordinate can be predicted: the PT angle coordinate in the motion command is fed to the regression model as input, and the regression model produces the predicted focus value by regression.
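Continuing the sketch, matching by smallest offset from each class's central PT coordinate and the subsequent prediction could look as follows; the centroid array and model list are assumed bookkeeping, not structures named by the patent.

```python
import numpy as np


def match_sample_class(pan, tilt, class_centers):
    """Return the index of the sample class whose central PT coordinate is nearest."""
    offsets = np.linalg.norm(np.asarray(class_centers) - np.array([pan, tilt]), axis=1)
    return int(np.argmin(offsets))


def predict_focus(pan, tilt, class_centers, models):
    """Feed the motion command's PT angle coordinate to the matched class's model."""
    idx = match_sample_class(pan, tilt, class_centers)
    return float(models[idx].predict([[pan, tilt]])[0])
```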
Because the focus value is predicted without the search adjustment and definition judgment of a search algorithm or a distance meter, the real-time performance and the environmental interference resistance of the automatic focusing of the camera can be improved on the premise of not assisting by the instrument.
Fig. 5 is an exemplary flowchart of a focus control method in an embodiment of the present invention. Referring to fig. 5, the focus control method performed by the processor 10 shown in fig. 1 may include:
S510: extracting an angle coordinate (hereinafter, PT angle coordinate is simply referred to as angle coordinate) from a motion command indicating the rotation of the movement;
S520: determining a sample class matching the extracted angle coordinate;
S530: calling a regression model corresponding to the determined sample class to predict a focus value;
S540: driving the movement to focus using the predicted focus value.
At this point, the focus control process according to one motion instruction ends.
In actual use, a corresponding regression model may be created for every sample class, so that regression prediction is available for all of them; alternatively, regression models may be created only for sample classes with enough samples, mainly because the relation between angle coordinates and focus values derived from a class with few samples is neither accurate nor representative.
In either way, the sample classes and regression models used are obtained based on sample points, which may be collected locally by the camera or collected by other means and introduced to the camera from the outside.
Sample-point collection in the camera's initial use stage counts the total number of collected sample points and terminates when that total reaches a preset clustering threshold. That is, the initial acquisition phase of sample points may be considered complete when the total number of collected sample points reaches the threshold.
Fig. 6a and 6b are schematic diagrams of the initial model creation process of the focus control method according to the embodiment of the present invention.
Referring to fig. 6a, taking as an example creating a corresponding regression model for every sample class, the focus control method may include the following steps for collecting sample points and creating models for the first time:
S611: detecting the brightness and definition of an image shot after the movement focuses; if both meet the standard, proceeding to S612, otherwise waiting for the image shot after the next focusing;
S612: when both the brightness and the definition of the image meet the standard, saving the angle coordinate and the focus value corresponding to the image as a sample point;
S621: when a new sample point is saved, checking whether the total number of currently saved sample points reaches a preset clustering threshold; if so, proceeding to S622, otherwise waiting for the next sample point to be saved;
S622: when the total number of saved sample points reaches the preset clustering threshold, clustering all sample points according to their angle coordinates;
S623: after all sample points are grouped into a plurality of sample classes by the clustering process, creating a corresponding regression model for each sample class according to the angle coordinates and focus values of all sample points it contains.
Referring now to fig. 6b, taking as an example creating regression models only for sample classes with a sufficient number of sample points, the focus control method may include the following steps for collecting sample points and creating models for the first time:
S611: detecting the brightness and definition of an image shot after the movement focuses;
S612: when both the brightness and the definition of the image meet the standard, saving the angle coordinate and the focus value corresponding to the image as a sample point;
S631: when a new sample point is saved, checking whether the total number of currently saved sample points reaches a preset clustering threshold; if so, proceeding to S632, otherwise waiting for the next sample point to be saved;
S632: when the total number of saved sample points reaches the clustering threshold, clustering all sample points according to their angle coordinates;
S633: after all sample points are grouped into a plurality of sample classes by the clustering process, counting the number of sample points contained in each sample class;
S634: for each sample class whose number of sample points reaches a preset modeling threshold, creating a corresponding regression model according to the angle coordinates and focus values of all sample points it contains.
After the flow of fig. 6a or fig. 6b, the camera can be considered to be in prediction mode. The focus control flow in prediction mode differs slightly between the two approaches shown in fig. 6a and fig. 6b.
Fig. 7a and 7b are schematic diagrams of a flow of an example focus control of the focus control method according to the embodiment of the present invention.
Referring to fig. 7a, the focus control method may specifically include the following steps:
S711: extracting an angle coordinate from a motion command indicating the rotation of the movement;
S712: determining a sample class matching the extracted angle coordinate;
S713: querying the regression model corresponding to the determined sample class, for example in a correspondence table of sample classes and regression models created in advance;
S714: calling the queried regression model to predict a focus value;
S715: driving the movement to focus using the predicted focus value.
At this point, the focus control process according to one motion instruction ends.
Referring to fig. 7b, the focus control method may specifically include the following steps:
S721: extracting an angle coordinate from a motion command indicating the rotation of the movement;
S722: determining a sample class matching the extracted angle coordinate;
S723: querying the regression model corresponding to the determined sample class; if the query succeeds, executing S724, otherwise jumping to S751;
S724: calling the queried regression model to predict a focus value;
S725: driving the movement to focus using the predicted focus value;
S751: adjusting the focus value using a search algorithm;
S752: driving the movement to focus using the adjusted focus value.
At this point, the focus control process according to one motion instruction ends.
In practical applications, sample collection may continue both before and after sample-class clustering and regression-model creation are completed, giving the camera a continuously reinforced self-learning capability. That is, sample-point collection for reinforcement updates may continue after the initial sample collection performed for the first clustering of sample classes and the first creation of the regression models is completed.
To better understand the continuity of sample-point acquisition, the following description combines the flows shown in fig. 6a and 6b with those shown in fig. 7a and 7b.
Fig. 8a and 8b are schematic diagrams of a sample collection expansion flow of the focus control method in the embodiment of the present invention. Fig. 8a illustrates an example of creating a regression model for each sample class, and fig. 8b illustrates an example of creating a regression model for a sample class with a sufficient number of sample points.
Referring to fig. 8a, in the initial operating stage after the camera is installed, each time a motion command is received, S810 determines that prediction mode is not yet on. The focus value is then adjusted by a search algorithm in S811, the movement is driven in S812 to focus with the adjusted focus value, and the brightness and definition of the image shot by the movement are detected in S611 of fig. 6a. When both are detected to meet the standard, the flow jumps to S612 of fig. 6a and saves the angle coordinate and focus value corresponding to the image as a sample point, achieving initial sample-point acquisition.
Thereafter, when it is determined through S621 shown in fig. 6a that the total number of the saved sample points reaches the preset clustering threshold, the principle of S622 to S623 shown in fig. 6a may be referred to, and the clustering process may be performed on all the sample points according to the angle coordinates of each sample point, and a corresponding regression model may be created for each sample class. At this time, the camera turns on the prediction mode.
Still referring to fig. 8a, after the prediction mode is turned on, each time a motion command is received, S711 to S715 shown in fig. 7a may be executed by the decision jump of S810, and after S725, sample points are selectively collected by S611 and S612 shown in fig. 6a, so as to implement sample point collection for intensive update. The sample point acquisition for the enhancement update has no termination condition and may be a continuous process as the camera is used.
Referring now to fig. 8b, in the initial operating stage after the camera is installed, each time a motion command is received, S850 determines that prediction mode is not yet on. The focus value is then adjusted by a search algorithm in S751 of fig. 7b, the movement is driven in S752 of fig. 7b to focus with the adjusted focus value, and the angle coordinate and focus value at which both the brightness and definition of the image meet the standard are saved as a sample point through S611 and S612 of fig. 6b, achieving initial sample-point acquisition.
Thereafter, when it is determined through S631 in fig. 6b that the total number of saved sample points reaches the preset clustering threshold, all the sample points are clustered with reference to the principles of S632 to S634 as shown in fig. 6b, and a corresponding regression model is created for the sample class in which the number of sample points reaches the preset modeling threshold. At this time, the camera turns on the prediction mode.
Still referring to fig. 8b, after the prediction mode is turned on, each time a motion command is received, S721 to S725 shown in fig. 7b may be executed by the judgment jump of S850, or S751 to S752 may be executed by the jump after S721 to S723, and after focusing is performed by S752 or S725, sample points are still selectively collected by S611 and S612 shown in fig. 6b, so as to implement sample point collection for enhanced update.
For updates after the camera enters prediction mode, as noted above in connection with fig. 6a and 6b, an update may be triggered when a predetermined update time arrives; the update time may be a preset periodic time point, or the occurrence of a specific non-periodic event, such as the processor 10 becoming idle.
For the condition that a corresponding regression model is created for each sample class, when the updating time arrives (for example, the processor is idle), whether a new sample point is saved after the last updating time can be identified; when a newly added sample point is identified, clustering processing may be performed on all currently stored sample points (including the newly added sample point) according to the angle coordinates of each sample point by referring to the same manner as S622 in fig. 6a again; after all the current sample points are regrouped into a plurality of sample classes through the clustering process, a corresponding regression model is created for each sample class according to the angle coordinates and the focus values of all the sample points included in each sample class, in substantially the same manner as in S623 in fig. 6 a.
For the case that the regression model is only created for the sample classes with enough sample point numbers, when the updating time arrives (for example, the processor is idle), whether a new sample point is saved after the last updating time is identified; when a newly added sample point is identified, clustering all currently stored sample points (including the newly added sample point) again according to the angle coordinates of each sample point in substantially the same manner as S632 in fig. 6 b; after all the current sample points are re-clustered into a plurality of sample classes through the clustering process, the number of sample points included in each sample class may be counted in substantially the same manner as S633 in fig. 6 b; then, referring to the same manner as S634 in fig. 6b, for each sample class for which the counted number of sample points reaches the preset modeling threshold, a corresponding regression model is created for each sample class according to the angle coordinates and focus values of all sample points included in the sample class.
Besides the full-reconstruction approach described above, the reinforcement update may also adopt a partial-correction approach.
Fig. 9a and 9b are schematic diagrams of a model enhancement updating process of the focus control method according to the embodiment of the present invention.
Fig. 9a illustrates an example of creating a regression model for each sample class, and fig. 9b illustrates an example of creating a regression model for a sample class with a sufficient number of sample points.
Referring first to fig. 9a, the focus control method may include the following steps for model enhancement update:
S910: when the update time arrives, identifying whether a new sample point has been saved since the last update time; if so, proceeding to S911, otherwise no update is needed and the flow waits for the next update time;
S911: when a newly added sample point is identified, classifying it into the matched sample class, for example by computing the offset of the new sample point's PT angle coordinate from the central angle coordinate of each sample class and assigning it to the class with the smallest offset;
S912: when a sample class receives newly added sample points, reconstructing a corresponding regression model for that class according to the angle coordinates and focus values of all sample points it currently contains;
S913: after the regression model is reconstructed for the sample class, calculating the regression losses of the reconstructed regression model and the existing regression model;
S914: comparing the two regression losses and, according to the result, keeping the regression model with the lower regression loss as the regression model corresponding to the sample class and discarding the other.
This completes the update process.
Referring now to fig. 9b, the focus control method may include the following steps for model reinforcement update:
S920: when the update time arrives, identifying whether a new sample point has been saved since the last update time;
S921: when a newly added sample point is identified, classifying it into the matched sample class;
S922: when a sample class receives newly added sample points, counting the number of sample points that class currently contains;
S923: when the number of sample points in a class containing newly added sample points reaches a preset modeling threshold, creating a corresponding regression model for that class according to the angle coordinates and focus values of all sample points it currently contains;
The sample class for which a regression model is created in S923 may be one that previously had no regression model because it contained too few sample points; for such a class the model created in S923 is its initial model, and the subsequent steps need not be executed. The class may instead already have an existing regression model; for such a class the model created in S923 is a candidate update, and the flow proceeds to S924 and S925 for the update decision;
S924: when a sample class corresponds to both the newly created regression model and an existing regression model, calculating the regression losses of both;
S925: comparing the two regression losses and, according to the result, keeping the regression model with the lower regression loss as the regression model corresponding to the sample class and discarding the other.
This completes the update process.
In the flows shown in fig. 9a and 9b, the regression loss measures the error of a regression model. Specifically, each regression model may be used to predict the focus values of all sample points in its sample class, the loss difference between the predicted focus value of each sample point and the saved focus value of the same sample point is calculated, and the sum or average of these loss differences over the sample points is taken as the regression loss of that model. Since the reconstructed regression model and the existing regression model will almost certainly not predict identical focus values for every sample point, their regression losses differ, and the two can be compared to find the lower one.
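As a sketch under the same assumptions as the earlier examples, the regression loss and the keep-the-better-model decision might be implemented as follows; the mean absolute difference is one of the sum-or-average choices the text allows, not a value the patent prescribes.

```python
import numpy as np


def regression_loss(model, sample_class):
    """Average loss difference between predicted and saved focus values over one class."""
    coords = np.array([[s.pan, s.tilt] for s in sample_class])
    saved = np.array([s.focus for s in sample_class])
    return float(np.mean(np.abs(model.predict(coords) - saved)))


def keep_better_model(rebuilt, existing, sample_class):
    """Keep whichever of the rebuilt and existing models has the lower regression loss."""
    if regression_loss(rebuilt, sample_class) <= regression_loss(existing, sample_class):
        return rebuilt
    return existing
```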
Based on regression-model updating, the camera gains a continuously reinforced self-learning capability. Where every sample class has a corresponding regression model, continuous reinforcement updating makes the regression predictions ever more accurate and better adapted to the camera's environment. Where only sample classes with enough sample points have regression models, focusing real-time performance for some sample classes is temporarily sacrificed to avoid using models of low accuracy or low representativeness; continuous reinforcement updating gradually fills in all the regression models and likewise makes their predictions more accurate and better adapted to the camera's environment.
In another case, the camera may be restarted after a shutdown. To prevent the camera from losing the sample points, clustering results, regression models, and other information when it is shut down, the camera may periodically save this information to a nonvolatile storage medium. During initialization after a restart, the saved sample points, clustering results, and regression models may all be loaded from the nonvolatile storage medium, and the loaded regression models verified.
Fig. 10a and 10b are schematic diagrams illustrating an initialization verification process of the focus control method according to an embodiment of the present invention.
Referring to fig. 10a, the focus control method may include the following steps for model verification:
S1011: during camera initialization, loading the saved regression models and sample points (for example, from the nonvolatile storage medium);
S1012: calculating the regression accuracy of each regression model against the saved sample points;
S1013: when the regression accuracy of the regression model corresponding to a sample class is lower than a preset calibration value, re-creating the corresponding regression model for that class according to the angle coordinates and focus values of all sample points it contains, and replacing the model whose accuracy was below the calibration value.
At this point, the initialization verification process ends.
Referring to fig. 10b, as a more preferred solution compared to fig. 10a, the focus control method may include the following steps for model verification:
S1021: during camera initialization, loading the saved regression models and sample points;
S1022: calculating the regression accuracy of each regression model against the saved sample points;
S1023: judging whether any regression model has a regression accuracy lower than the preset calibration value; if so, proceeding to S1024, otherwise ending the verification flow;
S1024: when the regression accuracy of the regression model corresponding to a sample class is lower than the preset calibration value, re-creating the corresponding regression model for that class according to the angle coordinates and focus values of all sample points it contains, and replacing the model whose accuracy was below the calibration value.
After S1024, the flow may return to S1022 and continue verification until the regression accuracy of every regression model is no lower than the preset calibration value.
In the flows shown in fig. 10a and 10b, calculating the regression accuracy may proceed as follows: each regression model is used to predict the focus values of all sample points in its sample class, the predicted focus value of each sample point is compared with the saved focus value of the same sample point for consistency, and the accuracy of the regression model is determined from the proportion of sample points in the class whose predicted focus values are consistent with the saved ones.
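A sketch of this regression-accuracy check, again under the assumptions of the earlier examples; since focus values are continuous, "consistent" is interpreted here as agreement within an assumed tolerance, which the patent does not specify.

```python
import numpy as np


def regression_accuracy(model, sample_class, tolerance=1.0):
    """Proportion of sample points whose predicted focus value matches the saved one."""
    coords = np.array([[s.pan, s.tilt] for s in sample_class])
    saved = np.array([s.focus for s in sample_class])
    consistent = np.abs(model.predict(coords) - saved) <= tolerance
    return float(np.mean(consistent))


# A model whose accuracy falls below the calibration value would be re-created:
# if regression_accuracy(model, cls) < calibration_value:
#     model = create_regression_model(cls)
```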
The above regression-accuracy-based verification may also be applied after each regression model is created: after a regression model is created, its regression accuracy is calculated against the saved sample points of the corresponding sample class; if the accuracy is below the preset calibration value, the regression model is reconstructed and verification continues until a regression model whose regression accuracy is not below the calibration value is obtained.
The focus control method described above may be performed by the processor 10 of the camera, for which purpose a non-transitory computer readable storage medium may also be included in the camera, the non-transitory computer readable storage medium storing instructions that, when executed by the processor, cause the processor to perform the steps of the focus control method as described above.
In the embodiments described below, a focus control apparatus is also provided.
Fig. 11 is a schematic diagram of an exemplary structure of a focus control apparatus in an embodiment of the present invention. Referring to fig. 11, in one embodiment, the focus control device includes:
a coordinate extraction module 1110 for extracting an angle coordinate from a motion instruction indicating the rotation of the movement;
a sample matching module 1120, configured to determine a sample class matching the extracted angle coordinate;
a model prediction module 1130, configured to invoke a regression model corresponding to the determined sample class to predict a focus value;
and a focus driving module 1140 for driving the movement to focus using the predicted focus value.
Fig. 12a to 12d are schematic views illustrating an expanded structure of the focus control device in the embodiment of the present invention.
Referring to fig. 12a, in order to make the focus control apparatus have a self-learning function, the focus control apparatus may further include, based on the structure shown in fig. 11:
the image detection module 1210 is used for detecting the brightness and the definition of an image shot by the movement;
the sample collection module 1220 is configured to, when it is detected that both the brightness and the sharpness of the image meet the criteria, store the angle coordinate and the focus value corresponding to the image as a sample point;
the clustering processing module 1230 is configured to perform clustering processing on all the sample points according to the angle coordinates of each sample point when the total number of the stored sample points reaches a preset clustering threshold;
the model creating module 1240 is configured to, after all the sample points are clustered into a plurality of sample classes through clustering, create a corresponding regression model for each sample class according to the angle coordinates and the focus values of all the sample points included in each sample class;
a sample screening module 1250 configured to identify whether a new sample point is saved after the last update time when the update time arrives;
the clustering module 1230 is further configured to perform clustering on all sample points according to the angle coordinates of each sample point when a newly added sample point is identified;
the model creating module 1240 is further configured to, after all current sample points are re-grouped into a plurality of sample classes through clustering, create a corresponding regression model for each sample class according to the angle coordinates and focus values of all sample points it contains.
Referring now to fig. 12b, as an alternative to fig. 12a, on the basis of the structure shown in fig. 11, the focus control device may instead further include:
the image detection module 1210 is used for detecting the brightness and the definition of an image shot by the movement;
the sample collection module 1220 is configured to, when it is detected that both the brightness and the definition of the image reach the standard, store the angle coordinate and the focus value corresponding to the image as a sample point;
the clustering processing module 1230 is configured to perform clustering processing on all the sample points according to the angle coordinates of each sample point when the total number of the stored sample points reaches a preset clustering threshold;
a model creating module 1240, configured to create a corresponding regression model for each sample class according to the angle coordinates and the focus values of all sample points included in each sample class after clustering all sample points into multiple sample classes;
the sample screening module 1250 is configured to identify whether a new sample point is saved after the last update time when the update time arrives;
the sample classifying module 1260 is used for classifying the newly added sample points after the last updating moment into the matched sample class when the newly added sample points are identified;
the model creating module 1240 is further configured to reconstruct a corresponding regression model for the sample class according to the angle coordinates and the focus values of all sample points currently included in the sample class when the sample class is classified into a newly added sample point;
the model updating module 1270 is configured to, after the regression model is reconstructed for the sample class, calculate the regression loss amounts of the reconstructed regression model and the existing regression models, and keep a regression model with a relatively low regression loss amount as the regression model corresponding to the sample class.
Referring now to fig. 12c, as another alternative to fig. 12a, on the basis of the structure shown in fig. 11, the focus control device may instead further include:
the image detection module 1210 is used for detecting the brightness and the definition of an image shot by the movement;
the sample collection module 1220 is configured to, when it is detected that both the brightness and the sharpness of the image meet the criteria, store the angle coordinate and the focus value corresponding to the image as a sample point;
the clustering processing module 1230 is configured to perform clustering processing on all the sample points according to the angle coordinates of each sample point when the total number of the stored sample points reaches a clustering threshold;
a sample counting module 1280, configured to count the number of sample points included in each sample class after all the sample points are clustered into a plurality of sample classes through clustering;
the model creating module 1240 is used for creating a corresponding regression model for the sample class according to the angle coordinates and the focus values of all the sample points included in the sample class when the number of the sample points included in the sample class reaches a preset modeling threshold value;
the sample screening module 1250 is configured to identify whether a new sample point is saved after the last update time when the update time arrives;
the clustering module 1230 is further configured to perform clustering on all sample points according to the angle coordinates of each sample point when a newly added sample point is identified;
the sample counting module 1280 is further configured to count the number of sample points included in each sample class after all current sample points are re-clustered into a plurality of sample classes through clustering;
the model creating module 1240 is further configured to create a corresponding regression model for the sample class according to the angle coordinates and the focus value of all the sample points included in the sample class when the number of the sample points included in the sample class reaches a preset modeling threshold.
Referring now to fig. 12d, as a further alternative to fig. 12a, on the basis of the structure shown in fig. 11, the focus control device may instead further include:
the image detection module 1210 is used for detecting the brightness and the definition of an image shot by the movement;
the sample collection module 1220 is configured to, when it is detected that both the brightness and the sharpness of the image meet the criteria, store the angle coordinate and the focus value corresponding to the image as a sample point;
the clustering processing module 1230 is configured to perform clustering processing on all the sample points according to the angle coordinates of each sample point when the total number of the stored sample points reaches a clustering threshold;
a sample counting module 1280, configured to count the number of sample points included in each sample class after all the sample points are clustered into a plurality of sample classes through clustering;
the model creating module 1240 is configured to create a corresponding regression model for the sample class according to the angle coordinates and the focus values of all the sample points included in the sample class when the number of the sample points included in the sample class reaches a preset modeling threshold;
the sample screening module 1250 is configured to identify whether a new sample point is saved after the last update time when the update time arrives;
the sample classifying module 1260 is used for classifying the newly added sample points into the matched sample class when the newly added sample points are identified;
the sample counting module 1280 is further configured to count the number of sample points currently included in a sample class when the sample class is classified into a newly added sample point;
the model creating module 1240 is further configured to create a corresponding regression model for the sample class according to the angle coordinates and the focus values of all sample points currently included in the sample class when the number of sample points included in the sample class of the newly added sample points reaches a preset modeling threshold;
the model updating module 1270 is configured to, when a sample class corresponds to the currently created regression model and the existing regression model at the same time, calculate the regression loss amounts of the reconstructed regression model and the existing regression model, and keep a regression model with a relatively low regression loss amount as the regression model corresponding to the sample class.
In addition, the focus control apparatus may further include the following modules not shown in the drawings:
the search algorithm module is used for adjusting the focus value with a search algorithm when the regression model cannot be called (for example, because there are not yet enough sample points to cluster, or the matched sample class does not contain enough sample points to model);
the regression verification module is configured to calculate a regression accuracy of the regression model during initialization of the camera or after the regression model is created, and when the regression accuracy of the regression model corresponding to a sample class is lower than a preset calibration value, the trigger model creation module 1240 recreates the corresponding regression model for the sample class according to the angle coordinates and the focus value of all sample points included in the sample class, and the trigger model update module 1270 replaces the regression model having the accuracy lower than the calibration value with the recreated regression model.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (6)

1. A camera comprising a movement and a processor, wherein the processor is configured to:
extracting an angle coordinate from a motion instruction indicating the rotation of the movement;
determining a sample class matched with the extracted angle coordinate;
calling a regression model corresponding to the determined sample class to predict a focus value for focusing;
driving the movement to focus by using the predicted focus value;
wherein the creation of the regression model comprises:
detecting the brightness and definition of an image shot by the movement;
when the brightness and the definition of the image are detected to reach the standard, storing the angle coordinate and the focus value corresponding to the image as a sample point;
when the total number of the stored sample points reaches a preset clustering threshold value, clustering all the sample points according to their angle coordinates;
after the clustering processing groups all the sample points into a plurality of sample classes, creating a corresponding regression model for each sample class according to the angle coordinates and focus values of all the sample points contained in that sample class.
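For illustration only, the creation flow of claim 1 may be sketched as follows, assuming a two-dimensional angle coordinate (pan, tilt), k-means clustering, and a linear least-squares regression per sample class; the threshold values and the names build_models and predict_focus are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

CLUSTER_THRESHOLD = 200   # preset clustering threshold (illustrative value)
N_CLASSES = 8             # number of sample classes formed by clustering

def build_models(samples):
    """samples: (N, 3) array of [pan, tilt, focus] sample points that passed
    the brightness/definition check.  Returns (kmeans, models) or (None, None)."""
    if len(samples) < CLUSTER_THRESHOLD:
        return None, None                       # not enough points to cluster
    angles, focus = samples[:, :2], samples[:, 2]
    kmeans = KMeans(n_clusters=N_CLASSES, n_init=10).fit(angles)
    models = {}
    for c in range(N_CLASSES):
        mask = kmeans.labels_ == c
        # per-class linear regression: focus ~ w0*pan + w1*tilt + w2
        A = np.c_[angles[mask], np.ones(mask.sum())]
        w, *_ = np.linalg.lstsq(A, focus[mask], rcond=None)
        models[c] = w
    return kmeans, models

def predict_focus(kmeans, models, pan, tilt):
    """Match an angle coordinate to its sample class and predict a focus value."""
    c = int(kmeans.predict(np.array([[pan, tilt]]))[0])
    w = models[c]
    return w[0] * pan + w[1] * tilt + w[2]
```

On a motion instruction carrying (pan, tilt), predict_focus would then replace the search pass whenever a model exists for the matched class.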
2. The camera of claim 1, wherein the processor is further configured to:
when the update time arrives, identifying whether any new sample points have been stored since the last update time;
when newly added sample points are identified, re-clustering all current sample points according to their angle coordinates;
after the clustering processing re-groups all current sample points into a plurality of sample classes, creating a corresponding regression model for each sample class according to the angle coordinates and focus values of all the sample points contained in that sample class.
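For illustration only, this update strategy reduces to re-running the creation flow over all current points when the update time arrives; a minimal sketch, reusing the build_models helper assumed after claim 1:

```python
import numpy as np

def periodic_update(samples, kmeans, models, count_at_last_update):
    """Called at each update time: if new sample points were stored since the
    last update, re-cluster all points and rebuild every class model."""
    if len(samples) > count_at_last_update:          # new points saved?
        kmeans, models = build_models(np.asarray(samples))
        count_at_last_update = len(samples)
    return kmeans, models, count_at_last_update
```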
3. The camera of claim 1, wherein the processor is further configured to:
when the update time arrives, identifying whether any new sample points have been stored since the last update time;
when a newly added sample point is identified, classifying it into its matched sample class;
when a newly added sample point is classified into a sample class, reconstructing the corresponding regression model for that sample class according to the angle coordinates and focus values of all sample points currently contained in the class;
and after the regression model is reconstructed for the sample class, calculating the regression loss of both the reconstructed regression model and the existing regression model, and retaining the regression model with the lower regression loss as the regression model corresponding to the sample class.
4. The camera of claim 1, wherein the processor is further configured to:
when the total number of the stored sample points reaches the clustering threshold value, clustering all the sample points according to their angle coordinates;
after the clustering processing groups all the sample points into a plurality of sample classes, counting the number of sample points contained in each sample class;
and when the number of sample points included in a sample class reaches a preset modeling threshold value, creating a corresponding regression model for that sample class according to the angle coordinates and focus values of all the sample points it includes.
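For illustration only, the per-class gating of claim 4 may be sketched as follows, assuming the same linear fit_class helper as above; only classes whose point count reaches the modeling threshold receive a model, and an angle coordinate falling into a model-less class would fall back to the search algorithm:

```python
import numpy as np

MODEL_THRESHOLD = 30   # preset modeling threshold (illustrative value)

def build_models_gated(samples, kmeans):
    """Create a regression model only for sample classes whose point count
    reaches the modeling threshold; sparse classes stay model-less."""
    models = {}
    for c in range(kmeans.n_clusters):
        pts = samples[kmeans.labels_ == c]
        if len(pts) >= MODEL_THRESHOLD:
            models[c] = fit_class(pts)    # fit_class as sketched earlier
    return models
```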
5. The camera of claim 4, wherein the processor is further configured to:
when the update time arrives, identifying whether any new sample points have been stored since the last update time;
when newly added sample points are identified, re-clustering all current sample points according to their angle coordinates;
after the clustering processing re-groups all current sample points into a plurality of sample classes, counting the number of sample points contained in each sample class;
and when the number of sample points contained in a sample class reaches a preset modeling threshold value, creating a corresponding regression model for that sample class according to the angle coordinates and focus values of all the sample points it contains.
6. The camera of claim 4, wherein the processor is further configured to:
when the update time arrives, identifying whether any new sample points have been stored since the last update time;
when a newly added sample point is identified, classifying it into its matched sample class;
when a newly added sample point is classified into a sample class, counting the number of sample points currently contained in that sample class;
when the number of sample points in the sample class containing the newly added sample points reaches a preset modeling threshold value, creating a corresponding regression model for that sample class according to the angle coordinates and focus values of all the sample points it currently contains;
and when a sample class corresponds to both the newly created regression model and an existing regression model, calculating the regression loss of each, and retaining the regression model with the lower regression loss as the regression model corresponding to the sample class.
CN201910144740.5A 2019-02-27 2019-02-27 Video camera Active CN111629141B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910144740.5A 2019-02-27 2019-02-27 Video camera

Publications (2)

Publication Number Publication Date
CN111629141A CN111629141A (en) 2020-09-04
CN111629141B 2023-04-18

Family

ID=72258749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910144740.5A Video camera 2019-02-27 2019-02-27 (Active)

Country Status (1)

Country Link
CN (1) CN111629141B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202111787U (en) * 2011-06-07 2012-01-11 上海芯启电子科技有限公司 Automatic multi-target tracking picture pick-up system
CN104184985A (en) * 2013-05-27 2014-12-03 华为技术有限公司 Method and device for acquiring image

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101530255B1 (en) * 2014-09-04 2015-06-24 주식회사 다이나맥스 Cctv system having auto tracking function of moving target
CN105554387A (en) * 2015-12-23 2016-05-04 北京奇虎科技有限公司 Zoom tracking curve correction method and device
CN105763795B (en) * 2016-03-01 2017-11-28 苏州科达科技股份有限公司 A kind of focus method and device, video camera and camera system
CN108981670B (en) * 2018-09-07 2021-05-11 成都川江信息技术有限公司 Method for automatically positioning coordinates of scene in real-time video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant