CN112969027B - Focusing method and device of electric lens, storage medium and electronic equipment - Google Patents
- Publication number
- CN112969027B (application CN202110362434.6A)
- Authority
- CN
- China
- Prior art keywords
- sub
- scene
- image
- weight
- variation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/958—Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
- H04N23/959—Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Focusing (AREA)
- Automatic Focus Adjustment (AREA)
Abstract
The invention discloses a focusing method and device for an electric lens, a storage medium, and electronic equipment. The method includes the following steps: when an operation instruction triggering focusing is received, acquiring the scene parameters corresponding to the sub-images contained in a currently acquired target image; determining the scene type of the target image according to the scene parameters of the sub-images and generating a reference weight matrix matched with the target image; when the scene type of the target image indicates a preset scene, executing a first focusing operation according to enabling weights assigned to the sub-images; during the first focusing operation, counting the variation of the scene parameter of each sub-image; when the variation of the scene parameters reaches the adjustment condition, adjusting the enabling weights according to that variation to obtain target weights; and executing a second focusing operation according to the target weights of the sub-images. The invention solves the technical problem of low focusing efficiency.
Description
Technical Field
The invention relates to the field of image processing, in particular to a focusing method and device of an electric lens, a storage medium and electronic equipment.
Background
Currently, auto-focusing technology is widely applied in various types of image pickup apparatus. For an electric lens, automatic focusing usually performs a sharp-point search by presetting a recording point, moving the focusing lens from that recording point, and locating the in-focus point according to the variation trend of the high-frequency component output by a filter during the image search.
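The conventional search described above can be sketched as a simple hill climb. In the sketch below, `sharpness` is a hypothetical callable standing in for the filter's high-frequency output at a given focus-motor position; it is an illustration of the general technique, not the patent's own implementation.

```python
def hill_climb_focus(sharpness, positions):
    """Scan focus-motor positions and stop once sharpness starts to drop.

    `sharpness` is a hypothetical callable standing in for the high-frequency
    component output by the filter at a given focus position.
    """
    best_pos, best_val = positions[0], sharpness(positions[0])
    for pos in positions[1:]:
        val = sharpness(pos)
        if val < best_val:  # trend reversed: the sharp point was passed
            break
        best_pos, best_val = pos, val
    return best_pos

# Toy sharpness curve peaking at position 5.
peak = hill_climb_focus(lambda p: -(p - 5) ** 2, list(range(11)))
```

When the sharpness curve is nearly flat, as the next paragraph notes, the trend reversal is hard to detect and such a search stalls or misfires.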
However, in some scenes the overall high-frequency component of the image shows only a slight variation trend, so the sharp point must be confirmed many times, focusing is slow, and focusing may fail, forcing the focusing operation to be repeated; focusing efficiency is therefore low.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a focusing method and device of an electric lens, a storage medium and electronic equipment, and aims to at least solve the technical problem of low focusing efficiency.
According to an aspect of an embodiment of the present invention, there is provided a focusing method of an electromotive lens, including: under the condition that an operation instruction for triggering focusing is received, acquiring scene parameters corresponding to sub-images contained in a currently acquired target image; determining the scene type of the target image according to the scene parameters corresponding to the sub-images respectively, and generating a reference weight matrix matched with the target image; executing a first focusing operation according to the enabling weight distributed to each sub-image under the condition that the scene type of the target image is indicated as a preset scene; in the process of executing the first focusing operation, counting the variation of the scene parameters corresponding to each sub-image; under the condition that the variation of the scene parameters reaches the adjustment condition, adjusting the enabling weight according to the variation of the scene parameters to obtain a target weight; performing a second focusing operation according to the target weight of each of the sub-images.
According to another aspect of the embodiments of the present invention, there is also provided a focusing apparatus of an electromotive lens, including: the acquisition module is used for acquiring scene parameters corresponding to sub-images contained in a currently acquired target image under the condition of receiving an operation instruction for triggering focusing; a determining module, configured to determine a scene type of the target image according to the scene parameters corresponding to the sub-images, and generate a reference weight matrix matching the target image; a first executing module, configured to execute a first focusing operation according to an enabling weight allocated to each sub-image when a scene type of the target image indicates a preset scene; a counting module, configured to count variation of the scene parameter corresponding to each sub-image in a process of performing the first focusing operation; the adjusting module is used for adjusting the enabling weight according to the variation of the scene parameters under the condition that the variation of the scene parameters reaches the adjusting condition to obtain the target weight; and a second executing module, configured to execute a second focusing operation according to the target weight of each of the sub-images.
According to still another aspect of embodiments of the present invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to execute the above-mentioned focusing method of an electro-dynamic lens when running.
According to still another aspect of embodiments of the present invention, there is also provided an electronic apparatus including a memory in which a computer program is stored and a processor configured to execute the above-described focusing method of an electromotive lens by the computer program.
In the embodiment of the invention, the scene type of the target image is determined from the scene parameters of the sub-images in the acquired target image. When the scene type is judged to be the preset scene, the variation of the scene parameters is counted during the first focusing operation, and the enabling weights of the sub-images are adjusted according to that variation to obtain the target weights used for the second focusing operation. By judging the scene type from the scene parameters, assigning enabling weights according to the scene type, and refining those weights with the statistics gathered during the first focusing, the enabling weights are adjusted into target weights better suited to the scene type and the focusing operation. This re-weighting of the sub-images by scene type adjusts the focus, achieves the technical effect of improved focusing efficiency, and solves the technical problem of low focusing efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a schematic application environment diagram of an alternative focusing method for an electric lens according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating an alternative focusing method for a motorized lens according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a target image of an alternative method of focusing a motorized lens according to an embodiment of the invention;
fig. 4 is a flow chart illustrating an alternative focusing method for a motorized lens according to an embodiment of the present invention;
FIG. 5 is a flow chart of an alternative method of focusing an electromotive lens according to an embodiment of the present invention;
fig. 6 is a flowchart illustrating an alternative focusing method for a motorized lens according to an embodiment of the present invention;
fig. 7 is a flowchart illustrating an alternative focusing method for a motorized lens according to an embodiment of the present invention;
fig. 8 is a flowchart illustrating an alternative focusing method for a motorized lens according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an alternative focusing apparatus for an electro-dynamic lens according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an alternative electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of the embodiments of the present invention, there is provided a focusing method of an electric lens, which may be optionally but not limited to be applied in an environment as shown in fig. 1. The terminal device 102 interacts with the server 112 via the network 110. The terminal device 102 is a device that includes an electric lens and needs to perform focusing. The server 112 is a server corresponding to the terminal apparatus 102. The terminal device 102 acquires the currently acquired target image when receiving the operation instruction for triggering focusing, and uploads the target image to the server 112 through the network 110. The server 112 stores the target image in the database 114, and sequentially executes S102 to S110 through the processing engine 116. The processing engine 116 obtains the sub-images included in the target image and the scene parameters corresponding to the sub-images according to the target image. And determining the scene type of the target image according to the scene parameters respectively corresponding to the sub-images, and generating a reference weight matrix matched with the target image. In a case where it is determined that the scene type of the target image indicates a preset scene, an enable weight is assigned to each sub-image, and the enable weight is transmitted to the terminal device 102. In the process that the terminal device 102 performs the first focusing operation, the server 112 counts the variation of the scene parameter corresponding to each sub-image. And under the condition that the variation of the scene parameters reaches the adjustment condition, adjusting the enabling weight according to the variation of the scene parameters to obtain the target weight. And sends the target weights to terminal device 102 to cause terminal device 102 to perform a second focusing operation in accordance with the target weights of the respective sub-images.
Optionally, in this embodiment, the terminal device 102 is a terminal device configured with an electric lens, and may include, but is not limited to, at least one of the following: a mobile phone (such as an Android or iOS phone), a notebook computer, a tablet computer, a palmtop computer, a desktop computer, an electronic camera, a camera device, a monitoring device, and the like. The network 110 may include, but is not limited to, a wired network or a wireless network, where the wired network includes local area networks, metropolitan area networks, and wide area networks, and the wireless network includes Bluetooth, WiFi, and other networks enabling wireless communication. The server 112 may be a single server, a server cluster composed of multiple servers, or a cloud server. The above is merely an example, and this embodiment is not limited thereto.
As an alternative embodiment, as shown in fig. 2, the focusing method of the motorized lens includes:
s202, under the condition that an operation instruction for triggering focusing is received, acquiring scene parameters corresponding to sub-images contained in a currently acquired target image;
s204, determining the scene type of the target image according to the respective corresponding scene parameters of the sub-images, and generating a reference weight matrix matched with the target image;
s206, under the condition that the scene type of the target image indicates a preset scene, executing a first focusing operation according to the enabling weight distributed to each sub-image;
s208, in the process of executing the first focusing operation, counting the variation of the scene parameters corresponding to each sub-image;
s210, under the condition that the variation of the scene parameters reaches the adjustment condition, adjusting the enabling weight according to the variation of the scene parameters to obtain a target weight;
s212, a second focusing operation is performed according to the target weights of the respective sub-images.
Alternatively, the operation instruction to trigger focusing may be, but is not limited to, performing a focusing operation on an electric lens. The triggering mode can be but is not limited to contact triggering and remote triggering. The touch trigger may be, but is not limited to, a touch screen trigger, a key trigger. The remote trigger may be, but is not limited to being, triggered by a remote control terminal.
Alternatively, the currently captured target image may be, but is not limited to, an image captured by a motorized lens, an image collected within a motorized lens. The target image is a base image on which a focusing operation is performed, and the focusing operation is performed on the target image to find a focus in the target image.
Alternatively, the sub-images are a plurality of image blocks obtained by dividing the target image. The sub-images may be, but are not limited to being, uniformly divided according to the size of the target image, and the image size of each sub-image may be, but is not limited to being, the same.
Alternatively, the scene parameter of the sub-image may be, but is not limited to, an operator value of the sub-image. The calculation of the operator value can be, but is not limited to, Sobel operator, Roberts operator, Prewitt operator, Canny operator.
The Sobel operator is a discrete differentiation operator used to approximate the gradient of the image brightness function. It performs edge detection by taking weighted differences of the gray values in the neighborhood of each pixel, the response reaching an extremum at edges.
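As an illustration only (not the method's own implementation), a pure-Python Sobel-based operator value for one sub-image might look like the following; the summed gradient magnitude serves as a simple sharpness score:

```python
# 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_focus_value(img):
    """Sum of |Gx| + |Gy| over interior pixels: a simple sharpness score."""
    h, w = len(img), len(img[0])
    total = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            total += abs(gx) + abs(gy)
    return total

flat = [[10] * 5 for _ in range(5)]              # uniform patch: no edges
edged = [[0, 0, 0, 255, 255] for _ in range(5)]  # vertical edge
```

A uniform patch scores zero while an edge scores high, which is why such operator values track the amount of detail in each sub-image.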
The Roberts operator approximates the gradient magnitude using the differences between diagonally adjacent pixels; it is an algorithm that detects edges through this local difference.
The Prewitt operator is a first-order differential edge-detection algorithm; it exploits the gray differences of neighboring pixels by convolving the image with two directional templates in image space.
The Canny operator is a multi-stage edge-detection algorithm that uses the calculus of variations to find a function satisfying its optimality criteria.
Optionally, the obtaining of the scene parameters corresponding to the sub-images may be, but is not limited to, implemented by a computing chip. The target image, sub-images, and scene parameters may be, but are not limited to, as shown in fig. 3. The whole image is a target image which is equally divided into 15 × 17 sub-images, and the operator value change curve of each sub-image is marked in the sub-image area.
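A minimal sketch of the uniform division described above, assuming the image is a plain list of pixel rows (the helper name is hypothetical):

```python
def split_into_blocks(img, rows, cols):
    """Evenly divide an image (a list of pixel rows) into rows x cols blocks."""
    h, w = len(img), len(img[0])
    bh, bw = h // rows, w // cols
    return [[[r[c * bw:(c + 1) * bw] for r in img[rr * bh:(rr + 1) * bh]]
             for c in range(cols)]
            for rr in range(rows)]

# A 30x34 toy image split into the 15 x 17 grid used in the example figure.
img = [[(x + y) % 256 for x in range(34)] for y in range(30)]
blocks = split_into_blocks(img, 15, 17)
```

Each block can then be scored with an operator such as Sobel to give the per-sub-image scene parameter.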
Alternatively, the scene type may be, but is not limited to, a type indicating the scene composed of the image elements contained in the target image. The scene type may be, but is not limited to, a low-detail scene, a detail-free scene, or a high-detail scene. A low-detail scene indicates that the image contains image elements in only a few regions, for example, a grass hut in an open field: compared with the large field, the hut occupies a small, concentrated area. Taking fig. 3 as an example, the image shows a traffic sign in rainy and foggy weather; the darker region in the lower right corner is the traffic sign 302. Because of the rain and fog, the sign's details are concentrated in a small part of the image and most of the image is background, so the scene in fig. 3 is a low-detail scene.
Optionally, a high-detail scene indicates that many image elements are distributed across the image, for example, a night sky studded with stars: the stars appear in every region of the sky, occupying many dispersed areas. A detail-free scene indicates that the image contains no distinct image elements, e.g. a clear sky without clouds or birds. Because, viewed over the whole image, the elements of high-detail and detail-free scenes are distributed relatively uniformly, both may be referred to as general scenes. The preset scene may be, but is not limited to, the low-detail scene.
Alternatively, the reference weight matrix may be, but is not limited to, a weight matrix indicating the degree of attention each sub-image of the target image receives during the focusing operation; its number of entries matches the number of sub-images. When focusing on the target image according to the reference weight matrix, attention is allocated to sub-images in proportion to their weight values, so the focus can be found in the target image more quickly.
Optionally, the enabling weight may be, but is not limited to, a weight value that is re-assigned to a corresponding weight value of the sub-image in the reference weight matrix in the case of a preset scene. The target weight may be, but is not limited to, an adjusted weight value obtained by numerically adjusting the enable weight.
Alternatively, the focusing operation may be divided into, but not limited to, coarse and fine adjustments. The coarse tuning can be divided into, but is not limited to, a first coarse tuning (coarse tuning first stage) process and a reverse coarse tuning (coarse tuning second stage) process. The sequence of execution of the focusing operation is generally first coarse adjustment, reverse coarse adjustment, and fine adjustment. The first focusing operation may be, but is not limited to, a first coarse adjustment, i.e., a coarse first stage, and the second focusing operation may be, but is not limited to, a reverse coarse adjustment and a fine adjustment.
Alternatively, the variation of the scene parameter may be, but is not limited to, measured by a statistic such as the range, variance, or information entropy of the parameter.
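These three statistics can be computed directly. The sketch below is an illustration, assuming the samples are positive so the entropy term is defined:

```python
import math

def variation_stats(samples):
    """Range, variance, and information entropy of scene-parameter samples
    collected while the first focusing operation runs."""
    rng = max(samples) - min(samples)
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    total = sum(samples)  # entropy over the normalised (positive) samples
    ent = -sum((s / total) * math.log2(s / total) for s in samples if s > 0)
    return rng, var, ent

rng, var, ent = variation_stats([4.0, 4.0, 4.0, 4.0])
```

A constant series has zero range and variance, while its normalised distribution is uniform, giving maximal entropy for its length.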
In the embodiment of the application, the scene type of the target image is determined by acquiring the scene parameters of the sub-images in the target image. When the scene type is judged to be the preset scene, the variation of the scene parameters is counted during the first focusing operation, and the enabling weights of the sub-images are adjusted according to that variation to obtain the target weights used for the second focusing operation. By judging the scene type from the scene parameters, allocating enabling weights according to the scene type, and refining those weights with the statistics gathered during the first focusing, the enabling weights are adjusted into target weights better suited to the scene type and the focusing operation. This re-weighting of the sub-images by scene type adjusts the focus, achieves the technical effect of improved focusing efficiency, and solves the technical problem of low focusing efficiency.
As an alternative implementation, as shown in fig. 4, the determining the scene type of the target image according to the scene parameters corresponding to the sub-images includes:
s402, calculating global parameters and average parameters of the target image according to the scene parameters of the sub-images;
s404, under the condition that the scene parameters corresponding to the sub-images are smaller than the average parameters and the scene parameters are smaller than a first threshold value, determining the sub-images corresponding to the scene parameters to be non-candidate sub-images;
s406, determining the sub-image as a candidate sub-image under the condition that the scene parameter corresponding to the sub-image is greater than or equal to the average parameter or the scene parameter is greater than or equal to the first threshold;
and S408, determining the scene type of the target image according to the global parameters of the target image and the number of the candidate sub-images.
The non-candidate sub-images are sub-images which do not carry image detail information, and the candidate sub-images are sub-images which carry image detail information.
Alternatively, the global parameter may be, but is not limited to, the sum of the scene parameters of all sub-images of the target image, and the average parameter the average of those scene parameters. The scene parameter of the sub-image in row i, column j of the target image may be written Fv_ij, the global parameter as Fv_total = Σ Fv_ij, and the average parameter as Fv_avg = Fv_total / N, where N is the number of sub-images.
Alternatively, let δ1 denote the first threshold. When Fv_ij < Fv_avg and Fv_ij < δ1, the sub-image in row i, column j contains no image detail information and is a detail-free sub-image, so it is judged a non-candidate sub-image. When Fv_ij ≥ Fv_avg or Fv_ij ≥ δ1, the sub-image in row i, column j is determined to contain image detail information and is a detail sub-image, so it is judged a candidate sub-image.
In the embodiment of the application, the scene parameters of the sub-images are classified and judged according to the average parameters of the target image and the first threshold value, so that whether the sub-images contain image detail information or not is determined, the scene type of the target image is judged according to the sub-images containing the image detail information, and the interference of the sub-images without the detail information on the judgment of the scene type is reduced.
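The classification of S404/S406 can be sketched as follows; `delta1` stands for the first threshold, whose value is an assumed tuning constant:

```python
def classify_subimages(fv, delta1):
    """Mark each sub-image as candidate (True) or non-candidate (False).

    fv is a 2-D grid of per-block operator values Fv_ij; delta1 is the
    first threshold (an assumed tuning constant).
    """
    values = [v for row in fv for v in row]
    fv_avg = sum(values) / len(values)
    # Candidate iff Fv_ij >= Fv_avg or Fv_ij >= delta1; otherwise detail-free.
    return [[v >= fv_avg or v >= delta1 for v in row] for row in fv]

mask = classify_subimages([[1.0, 9.0], [1.0, 1.0]], delta1=8.0)
```

Only the block whose value exceeds both the average and the threshold survives as a candidate here.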
As an optional implementation manner, the determining the scene type of the target image according to the global parameter of the target image and the number of the candidate sub-images includes:
determining the candidate sub-image as a target candidate sub-image under the condition that the scene parameter corresponding to the candidate sub-image is greater than the product of the first coefficient and the average parameter, wherein the target candidate sub-image is a sub-image carrying target image detail information;
and under the condition that the global parameter is smaller than the second threshold and the number of the target candidate sub-images is smaller than the third threshold, determining that the scene type of the target image is a preset scene.
Optionally, the target candidate sub-image is a sub-image that is obtained by further screening the candidate sub-images and contains the detail information of the target image. The target image detail information may be, but is not limited to, low detail image information, and the target candidate sub-image may be, but is not limited to, a low detail sub-image.
Optionally, the first coefficient is a value greater than 1 and less than 2; further, it may take a value between 1.3 and 1.6. Denoting the first coefficient by α1: when Fv_ij > α1 · Fv_avg, the sub-image in row i, column j of the target image is determined to be a target candidate sub-image.
Alternatively, the number of target candidate sub-images in the target image is counted; this number may be, but is not limited to being, denoted S.
Alternatively, δ2 may denote the second threshold, a preset value whose magnitude is not limited here, and δ3 the third threshold, which may be, but is not limited to, a value less than 1; further, the third threshold may be less than 0.3. When Fv_total < δ2 and S < δ3, the scene type of the target image is determined to be the preset scene, i.e., the target image is a low-detail scene.
In the embodiment of the application, the target candidate sub-image is determined from the candidate sub-images, and the scene type of the target image is judged according to the target candidate sub-image and the global parameter, so that the scene type of the target image is judged before the focusing operation is executed, different focusing operation modes can be executed according to different scene types, the focusing operation based on the scene types is executed, and the focusing efficiency is improved.
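A possible reading of the decision above, sketched in Python. The thresholds δ1, δ2, δ3 and the coefficient α1 are tuning constants whose values here are illustrative assumptions; since the third threshold is below 1, S is treated as the fraction of sub-images that are target candidates, which is an interpretation rather than something the text fixes:

```python
def is_preset_scene(fv, delta1, delta2, delta3, alpha1=1.5):
    """Return True when the image is judged a preset (low-detail) scene."""
    values = [v for row in fv for v in row]
    fv_total = sum(values)
    fv_avg = fv_total / len(values)
    candidates = [v for v in values if v >= fv_avg or v >= delta1]
    # Target candidates carry target image detail: Fv_ij > alpha1 * Fv_avg.
    s = sum(1 for v in candidates if v > alpha1 * fv_avg) / len(values)
    # Preset scene: little detail overall AND few detail-carrying blocks.
    return fv_total < delta2 and s < delta3

# One strong detail block in an otherwise flat 3x3 grid: a low-detail scene.
fv = [[0.0, 0.0, 0.0], [0.0, 9.0, 0.0], [0.0, 0.0, 0.0]]
preset = is_preset_scene(fv, delta1=5.0, delta2=100.0, delta3=0.3)
```

Raising the global detail (or lowering δ2) flips the decision to a general scene.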
As an alternative implementation, as shown in fig. 5, the generating the reference weight matrix matched with the target image includes:
s502, under the condition that the sub-image is the candidate sub-image, taking the first weight value as the weight value corresponding to the sub-image;
s504, under the condition that the sub-image is a non-candidate sub-image, taking the second weight value as the weight value corresponding to the sub-image;
s506, generating a reference weight matrix of the target image by using the first weight value and the second weight value.
Optionally, the first weight value is greater than the second weight value. The first weight value may be, but is not limited to, a non-0 positive integer, and the second weight value may be, but is not limited to, 0. The corresponding reference weight may be set to 0 for sub-images that do not contain image detail information, thereby reducing the impact of detail-free sub-images on the focusing operation.
Optionally, the first weight value and the second weight value are filled in the matrix according to the position of the corresponding sub-image in the target image, so as to generate a reference weight matrix.
In the embodiment of the present application, the reference weight matrix is generated according to the initial scene parameters of the target image, and represents the initial classification of the sub-image and the initial judgment of the scene type of the target image.
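A sketch of S502 to S506, using 1 and 0 as the first and second weight values, consistent with the text's suggestion that detail-free blocks may receive weight 0; the function name is hypothetical:

```python
def reference_weight_matrix(fv, delta1, w_first=1, w_second=0):
    """Fill the matrix with the first weight value for candidate blocks and
    the second weight value (here 0) for detail-free blocks, each weight
    placed at its sub-image's grid position."""
    values = [v for row in fv for v in row]
    fv_avg = sum(values) / len(values)
    return [[w_first if (v >= fv_avg or v >= delta1) else w_second
             for v in row] for row in fv]

weights = reference_weight_matrix([[1.0, 9.0], [1.0, 1.0]], delta1=8.0)
```

Zero-weight entries silence detail-free sub-images during the focusing operation.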
As an alternative embodiment, in the case where the scene type of the target image indicates a general scene, the first focusing operation and the second focusing operation are performed in accordance with the reference weight matrix.
Alternatively, in a case where the scene type of the target image is determined according to the number of target candidate sub-images and the global parameter, and the scene type of the target image is determined as a general scene, the first focusing operation and the second focusing operation are performed according to the reference weight matrix.
Alternatively, when Fv_total ≥ δ2 or S ≥ δ3, the scene type of the target image is determined to be a general scene.
Alternatively, performing the first focusing operation and the second focusing operation according to the reference weight matrix may be, but is not limited to, performing the first focusing operation and the second focusing operation with the weight values corresponding to the respective sub-images in the reference weight matrix as reference weights.
In the embodiment of the application, in a general scene, the success rate of focusing according to the reference weight matrix is high. Therefore, in the case where a general scene is judged, the first focusing operation and the second focusing operation are directly performed according to the reference weight matrix, which further improves the execution efficiency of the focusing operation in a general scene.
As an alternative embodiment, the performing the first focusing operation according to the enabling weights allocated to the respective sub-images includes:
assigning a third weight value to each sub-image as an enabling weight of the sub-image;
the first focusing operation is performed according to the enabling weight.
Optionally, when the scene type of the target image is a preset scene, the third weight value is used to reassign the reference weight corresponding to each sub-image in the reference weight matrix; for clear distinction, the reassigned weight is referred to as the enabling weight. The third weight value may be, but is not limited to, a non-zero value, and may be a value equal to or not equal to the first weight value.
It should be noted that the third weight value has no association with the first weight value and the second weight value.
In the embodiment of the application, under the condition that the scene type of the target image is judged to be the preset scene for the first time, the reference weight values of the sub-images are unified into the enabling weight, so that the secondary judgment is carried out through data statistics in the first focusing operation process, the influence of the initial judgment is eliminated, and the accuracy of the scene judgment is improved.
As an alternative implementation, as shown in fig. 6, in the process of performing the first focusing operation, counting the variation of the scene parameter corresponding to each sub-image includes:
s602, acquiring the maximum value of the corresponding scene parameter and the minimum value of the scene parameter of each sub-image in the first focusing operation process;
s604, calculating the difference value between the maximum value of the scene parameter and the minimum value of the scene parameter as the variation of the scene parameter;
and S606, counting the variation of the scene parameters of the sub-images.
Optionally, a difference between the maximum value of the scene parameter and the minimum value of the scene parameter is a range of the scene parameter, and the range of the scene parameter is used as a variation of the scene parameter.
In the embodiment of the application, the range of the scene parameter is used as the variation of the scene parameter, so that the variation of the scene parameter in the first focusing operation process can be better reflected, the variation corresponding to the sub-image is determined according to the variation of the scene parameter, and the target weight of the sub-image can be determined more quickly.
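The range computation of S602 to S606 can be sketched as follows (illustrative only; the function names and sample values are hypothetical):

```python
def scene_parameter_variation(samples):
    """S602-S604: the variation of one sub-image's scene parameter is its
    range over the first focusing operation, i.e. maximum minus minimum."""
    return max(samples) - min(samples)

def collect_variations(histories):
    """S606: collect the variation for every sub-image position."""
    return {pos: scene_parameter_variation(vals) for pos, vals in histories.items()}

# Example: scene-parameter values sampled for two sub-images while the
# lens moves during the first focusing operation (values hypothetical).
histories = {(0, 0): [10.0, 14.5, 12.0],
             (0, 1): [3.0, 3.2, 3.1]}
variations = collect_variations(histories)
```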
As an alternative embodiment, in a case where the variation of the scene parameter fails to reach the adjustment condition, the scene type of the target image is adjusted to a general scene.
Optionally, after counting the variation of the scene parameter of each sub-image, the method further includes: and calculating the global variation of the target image, and determining whether the adjustment condition is reached according to the global variation.
Alternatively, the global variation may be, but is not limited to, the variation of a global parameter of the target image during the first focusing process, and the variation of the global parameter may be, but is not limited to, the range, variance, or entropy of the global parameter.
Optionally, taking as an example the case where the global variation is the range of the global parameter: in the first focusing process, the global parameter of the target image is calculated according to the scene parameters of the sub-images, and the maximum value and the minimum value of the global parameter in the first focusing process are acquired.
Optionally, a ratio of the minimum value to the maximum value of the global parameter is calculated as the adjustment parameter. And under the condition that the adjusting parameter is larger than the adjusting threshold, determining that the scene type of the target image is matched with a preset scene, thereby determining that the adjusting condition is reached.
Alternatively, in the case where the adjustment parameter is less than or equal to the adjustment threshold, it is determined that the scene type of the target image does not match the preset scene, that is, the scene type of the target image should be a general scene, and thus it is determined that the adjustment condition is not reached.
Alternatively, in a case where the amount of change in the scene parameter fails to reach the adjustment condition, after the scene type of the target image is adjusted to a general scene, the second focusing operation is performed in accordance with the reference weight matrix.
Alternatively, performing the second focusing operation according to the reference weight matrix may be, but is not limited to, performing the second focusing operation by taking the weight values corresponding to the respective sub-images in the reference weight matrix as the target weights of the sub-images.
In the embodiment of the application, after the variation of the scene parameters is counted, the adjustment condition is judged by calculating the variation of the global parameter of the target image, so that the scene type is judged again according to the variation of the parameters during the first focusing operation. In the case where the parameter variation matches the scene type, it is determined that the adjustment condition is reached, and the enabling weight is adjusted according to the variation of the scene parameters. In the case where the parameter variation does not match the scene type, it is determined that the judgment of the scene type should be adjusted, and the target image is processed as a general scene, so that the second focusing operation is performed according to the reference weight matrix. The secondary judgment of the scene type through the parameter variation during the first focusing operation improves the accuracy of scene judgment, and performing the second focusing operation according to the target weight matched with the scene type improves the focusing efficiency.
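A sketch of the adjustment-condition check described above, assuming Python; the threshold value in the example is hypothetical, and the comparison direction follows the embodiment (ratio greater than the threshold confirms the preset scene):

```python
def adjustment_condition_reached(global_history, adjustment_threshold):
    """The adjustment parameter is the ratio of the minimum to the maximum
    of the global parameter observed during the first focusing operation.
    Returns True when the condition is reached (preset scene confirmed),
    False when the scene type should be adjusted to a general scene."""
    g_max, g_min = max(global_history), min(global_history)
    if g_max == 0:
        return False  # degenerate case, not covered by the embodiment
    adjustment_parameter = g_min / g_max
    return adjustment_parameter > adjustment_threshold

# A nearly constant global parameter keeps the ratio close to 1.
stable = adjustment_condition_reached([8.0, 9.0, 10.0], 0.7)
# A strongly varying global parameter drives the ratio down.
varying = adjustment_condition_reached([2.0, 10.0], 0.7)
```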
As an alternative implementation, as shown in fig. 7, the adjusting the enabling weight according to the variation of the scene parameter to obtain the target weight includes:
s702, acquiring the variation of the corresponding scene parameters of each sub-image in the first focusing operation process;
s704, calculating the average variable quantity of the variable quantities of the scene parameters of all the sub-images;
and S706, adjusting the enabling weight corresponding to the sub-image according to the comparison result of the variation of the scene parameter and the average variation to obtain the target weight.
Alternatively, the average variation of the scene parameter of all the sub-images is obtained by dividing the sum of the variations of the scene parameters of all the sub-images by the number of the sub-images.
Alternatively, comparing the variation of the scene parameter with the average variation may be, but is not limited to, comparing the variation of the scene parameter with a threshold related to the average variation. The threshold related to the average variation may be, but is not limited to, the average variation itself, or a threshold obtained by applying a coefficient to the average variation.
Optionally, adjusting the enabling weight of the sub-image according to the comparison result may be, but is not limited to, reassigning the enabling weight of the corresponding sub-image as the target weight according to the interval, defined by the thresholds related to the average variation, into which the variation falls.
In the embodiment of the application, the enabling weight of the sub-image is adjusted through the variation of the scene parameter in the first focusing operation process, so that the enabling weight of the sub-image is determined according to the scene parameter in the first focusing operation process, and the second focusing operation is executed according to the target weight obtained through adjustment.
As an optional implementation manner, the adjusting the enabling weight corresponding to the sub-image according to the comparison result of the variation of the scene parameter and the average variation to obtain the target weight includes:
taking the first target weight value as the target weight of the corresponding sub-image under the condition that the variation of the scene parameter is larger than the product of the second coefficient and the average variation;
taking a third target weight value as the target weight of the corresponding sub-image under the condition that the variation of the scene parameter is smaller than the product of the third coefficient and the average variation;
and in the case that the variation of the scene parameter is between the product of the third coefficient and the average variation and the product of the second coefficient and the average variation, taking the second target weight value as the target weight of the corresponding sub-image, wherein the second coefficient is greater than the third coefficient.
Alternatively, the second coefficient and the third coefficient are both values applied to the average variation, so as to determine the thresholds from the average variation.
Alternatively, the average variation is denoted by ΔFv_avg, the variation of the scene parameter of the sub-image in row i and column j is denoted by ΔFv_ij, the target weight of that sub-image is denoted by ω_ij, and the second coefficient and the third coefficient are denoted by α2 and α3 respectively. The method of determining the weight value of the target weight of the sub-image may be, but is not limited to, according to the following:
ω_ij = ω1, if ΔFv_ij > α2 · ΔFv_avg;
ω_ij = ω2, if α3 · ΔFv_avg ≤ ΔFv_ij ≤ α2 · ΔFv_avg;
ω_ij = ω3, if ΔFv_ij < α3 · ΔFv_avg.
Optionally, the target weight values satisfy ω1 > ω2 > ω3, and the coefficients satisfy α2 > α3 > 1.
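The piecewise rule above can be sketched as follows; the concrete weight values and coefficients are hypothetical placeholders that merely satisfy ω1 > ω2 > ω3 and α2 > α3 > 1:

```python
def target_weight(delta, avg_delta, w1=3, w2=2, w3=1, alpha2=2.0, alpha3=1.2):
    """Piecewise rule of the embodiment: w1 when the variation exceeds
    alpha2 * average, w3 when it falls below alpha3 * average, w2 in
    between (all concrete values here are hypothetical)."""
    if delta > alpha2 * avg_delta:
        return w1
    if delta < alpha3 * avg_delta:
        return w3
    return w2

def adjust_enabling_weights(variations):
    """S702-S706: average the variations of all sub-images, then map each
    sub-image's variation to its target weight."""
    avg = sum(variations.values()) / len(variations)
    return {pos: target_weight(d, avg) for pos, d in variations.items()}

# Example: sub-image "a" changed far more than average and is promoted;
# "b" and "c" fall below alpha3 * average and are demoted.
weights = adjust_enabling_weights({"a": 15.0, "b": 1.0, "c": 5.0})
```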
In the embodiment of the application, the larger the variation of the scene parameter is in the first focusing operation process, the larger the value corresponding to the target weight is, so that the target weight used by the corresponding sub-image in the second focusing operation process is larger than other sub-images, and the more important sub-image is determined, so that the focusing efficiency is improved, and the focus can be found more quickly.
Alternatively, the focusing method of the motorized lens may be performed as shown in fig. 8. In the case where the target image captured by the motorized lens is acquired, S802 is executed: the scene type of the target image is determined and a reference weight matrix is generated. That is, the scene type of the target image is determined according to the scene parameters of the sub-images into which the target image is divided, and a reference weight matrix recording the reference weight of each sub-image is generated. In the case where the scene type of the target image is determined, S804 is executed to determine whether the target image is a low-detail scene. If yes, that is, the target image is of the low-detail scene type, S806 is executed: an enabling weight is assigned to each sub-image, and the first focusing operation is performed on the target image. If the determination at S804 is no, that is, the target image is not of the low-detail scene type but of the general scene type, S814 is executed to determine the target weight of each sub-image according to the reference weight matrix, namely, the reference weight value corresponding to each sub-image in the reference weight matrix is used as the target weight of each sub-image.
In the first focusing operation, the variation of the scene parameter of each sub-image is counted, and the global parameter variation of the target image is obtained; the adjustment condition is then judged according to the global parameter variation. S808 is executed to determine whether the adjustment condition is satisfied. If yes, that is, the global parameter variation of the target image satisfies the adjustment condition, S810 is executed: the enabling weight is adjusted according to the variation of the scene parameter of each sub-image to obtain the target weight of each sub-image. If no, that is, the global parameter variation of the target image does not satisfy the adjustment condition, it is determined that the scene type of the target image was judged incorrectly and should be the general type, and S814 is executed to determine the target weight of each sub-image according to the reference weight matrix.
In the case where the target weights of the respective sub-images are determined, S812 is performed, and the second focusing operation is performed according to the target weights.
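The overall control flow of fig. 8 can be sketched as below. The per-step work is passed in as callables because the embodiment leaves those details implementation-defined; all function names are hypothetical:

```python
def motorized_lens_focus(scene_positions, is_low_detail_scene,
                         reference_weights, run_first_focus,
                         adjustment_reached, adjust_weights,
                         run_second_focus, enabling_weight=1):
    """Control-flow sketch of S802-S814 (fig. 8)."""
    if is_low_detail_scene():                                       # S804: yes
        enable = {pos: enabling_weight for pos in scene_positions}  # S806
        variations, global_variation = run_first_focus(enable)
        if adjustment_reached(global_variation):                    # S808: yes
            target = adjust_weights(variations)                     # S810
        else:                                                       # S808: no
            target = dict(reference_weights)                        # S814
    else:                                                           # S804: no
        target = dict(reference_weights)                            # S814
    run_second_focus(target)                                        # S812
    return target

# Example run with stub callables standing in for the real focusing steps.
calls = []
result = motorized_lens_focus(
    [(0, 0)], lambda: True, {(0, 0): 5},
    run_first_focus=lambda enable: ({(0, 0): 2.0}, 0.9),
    adjustment_reached=lambda g: True,
    adjust_weights=lambda v: {(0, 0): 7},
    run_second_focus=calls.append)
```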
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiments of the present invention, there is also provided a focusing apparatus of an electromotive lens for implementing the focusing method of an electromotive lens described above. As shown in fig. 9, the apparatus includes:
an obtaining module 902, configured to obtain, when an operation instruction for triggering focusing is received, scene parameters corresponding to sub-images included in a currently acquired target image;
a determining module 904, configured to determine a scene type of the target image according to the respective corresponding scene parameters of the sub-images, and generate a reference weight matrix matched with the target image;
a first executing module 906, configured to, in a case where the scene type of the target image indicates a preset scene, execute a first focusing operation according to the enabling weights allocated to the respective sub-images;
a counting module 908, configured to count variation of scene parameters corresponding to each sub-image in a process of performing the first focusing operation;
an adjusting module 910, configured to adjust the enabling weight according to the variation of the scene parameter when the variation of the scene parameter reaches an adjusting condition, to obtain a target weight;
a second performing module 912 for performing a second focusing operation according to the target weight of each sub-image.
Optionally, the determining module 904 includes:
the calculating unit is used for calculating the global parameter and the average parameter of the target image according to the scene parameter of the sub-image;
the first determining unit is used for determining the sub-images corresponding to the scene parameters as non-candidate sub-images under the condition that the scene parameters corresponding to the sub-images are smaller than the average parameters and the scene parameters are smaller than a first threshold, wherein the non-candidate sub-images are sub-images which do not carry image detail information;
the second determining unit is used for determining the sub-image as a candidate sub-image under the condition that the scene parameter corresponding to the sub-image is greater than or equal to the average parameter or the scene parameter is greater than or equal to the first threshold, wherein the candidate sub-image is a sub-image carrying image detail information;
and the scene unit is used for determining the scene type of the target image according to the global parameter of the target image and the number of the candidate sub-images.
The scene unit includes:
the third determining unit is used for determining the candidate sub-image as a target candidate sub-image under the condition that the scene parameter corresponding to the candidate sub-image is greater than the product of the first coefficient and the average parameter, wherein the target candidate sub-image is a sub-image carrying target image detail information;
and the preset unit is used for determining the scene type of the target image as a preset scene under the condition that the global parameter is smaller than the second threshold and the number of the target candidate sub-images is smaller than the third threshold.
Optionally, the determining module 904 includes:
the first assignment unit is used for taking the first weight value as the weight value corresponding to the sub-image under the condition that the sub-image is the candidate sub-image;
the second assignment unit is used for taking the second weight value as the weight value corresponding to the sub-image under the condition that the sub-image is a non-candidate sub-image;
a generating unit configured to generate a reference weight matrix of the target image using the first weight value and the second weight value.
The first executing module 906 includes:
the third assignment unit is used for assigning a third weight value to each sub-image as an enabling weight of the sub-image;
a first performing unit for performing a first focusing operation according to the enabling weight.
The statistic module 908 includes:
the first acquisition unit is used for acquiring the maximum value of the corresponding scene parameter and the minimum value of the corresponding scene parameter of each sub-image in the first focusing operation process;
a difference unit for calculating a difference between the maximum value of the scene parameter and the minimum value of the scene parameter as a variation of the scene parameter;
and the statistical unit is used for counting the variation of the scene parameters of each sub-image.
The adjusting module 910 includes:
the second acquisition unit is used for acquiring the variation of the corresponding scene parameter of each sub-image in the first focusing operation process;
an averaging unit configured to calculate an average variation of variations of scene parameters of all the sub-images;
and the adjusting unit is used for adjusting the enabling weight according to the comparison result of the average variation and the variation of the scene parameters so as to obtain the target weight.
The adjusting unit includes:
the first assignment unit is used for taking the first target weight value as the target weight of the corresponding sub-image under the condition that the variation of the scene parameter is larger than the product of the second coefficient and the average variation;
the second assignment unit is used for taking the third target weight value as the target weight of the corresponding sub-image under the condition that the variation of the scene parameter is smaller than the product of the third coefficient and the average variation;
and the third assignment unit is used for taking the second target weight value as the target weight of the corresponding sub-image under the condition that the variation of the scene parameter is between the product of the third coefficient and the average variation and the product of the second coefficient and the average variation, wherein the second coefficient is larger than the third coefficient.
The focusing apparatus of the above-mentioned electric lens further includes:
and a general module, configured to, in a case where the scene type of the target image indicates a general scene, perform a second focusing operation by taking the weight values corresponding to the respective sub-images in the reference weight matrix as the target weights.
The focusing apparatus of the above-mentioned electric lens further includes:
and the scene adjusting module is used for adjusting the scene type of the target image into a common scene under the condition that the variation of the scene parameters cannot reach the adjusting condition.
In the embodiment of the application, the scene type of the target image is determined by acquiring the scene parameters corresponding to the sub-images in the target image. In the case where the scene type is judged to be the preset scene, the variation of the scene parameters is counted during the first focusing operation, the enabling weight corresponding to each sub-image is adjusted according to the variation of the scene parameters to obtain the target weight, and the second focusing operation is performed according to the target weight of each sub-image. In this manner, the target weight for the second focusing operation is obtained by judging the scene type according to the scene parameters, allocating the enabling weight of each sub-image according to the scene type, and adjusting the enabling weight according to the variation of the scene parameters counted during the first focusing operation. Since the enabling weight of each sub-image is adjusted, according to the judgment of the scene type and the statistics of the parameters during the first focusing operation, to a target weight better suited to the scene type and the focusing operation, the purpose of executing the second focusing operation is achieved, the technical effect of adjusting the weight of the sub-images according to the scene type so as to adjust the focusing focus and improve the focusing efficiency is achieved, and the technical problem of low focusing efficiency is solved.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device for implementing the above-described focusing method of a motorized lens, which may be a terminal device or a server shown in fig. 1. The present embodiment takes the electronic device as a terminal device as an example for explanation. As shown in fig. 10, the electronic device comprises a memory 1002 and a processor 1004, the memory 1002 having stored therein a computer program, the processor 1004 being arranged to execute the steps of any of the method embodiments described above by means of the computer program.
Optionally, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, under the condition that an operation instruction for triggering focusing is received, acquiring scene parameters corresponding to sub-images contained in a currently acquired target image;
s2, determining the scene type of the target image according to the respective corresponding scene parameters of the sub-images, and generating a reference weight matrix matched with the target image;
s3, in a case where the scene type of the target image indicates a preset scene, performing a first focusing operation according to the enabling weights allocated to the respective sub-images;
s4, in the process of executing the first focusing operation, counting the variation of the scene parameters corresponding to each sub-image;
s5, when the variation of the scene parameters reaches the adjusting condition, adjusting the enabling weight according to the variation of the scene parameters to obtain the target weight;
s6, a second focusing operation is performed according to the target weights of the respective sub-images.
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 10 is only an illustration, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 10 does not limit the structure of the above electronic device. For example, the electronic device may also include more or fewer components (e.g., network interfaces, etc.) than shown in fig. 10, or have a different configuration from that shown in fig. 10.
The memory 1002 may be used to store software programs and modules, such as program instructions/modules corresponding to the focusing method and apparatus of the electric lens in the embodiment of the present invention. The processor 1004 executes various functional applications and data processing by running the software programs and modules stored in the memory 1002, that is, implements the focusing method of the electric lens. The memory 1002 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1002 may further include memory located remotely from the processor 1004, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 1002 may be specifically, but not limited to, used for storing information such as the sub-images of the target image, the scene parameters, and the scene type. As an example, as shown in fig. 10, the memory 1002 may include, but is not limited to, the obtaining module 902, the determining module 904, the first executing module 906, the counting module 908, the adjusting module 910, and the second executing module 912 of the focusing apparatus of the above electric lens. The memory 1002 may further include, but is not limited to, other module units of the focusing apparatus of the above electric lens, which are not described in detail in this example.
Optionally, the above-mentioned transmission device 1006 is used for receiving or sending data via a network. Specific examples of the network may include a wired network and a wireless network. In one example, the transmission device 1006 includes a Network Interface Controller (NIC), which can be connected to a router and other network devices via a network cable so as to communicate with the internet or a local area network. In one example, the transmission device 1006 is a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In addition, the electronic device further includes: a display 1008 for displaying the sub-images of the target image, the scene parameters and the scene type; and a connection bus 1010 for connecting the module parts in the above-described electronic apparatus.
In other embodiments, the terminal device or the server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting a plurality of nodes through a network communication. Nodes can form a Peer-To-Peer (P2P, Peer To Peer) network, and any type of computing device, such as a server, a terminal, and other electronic devices, can become a node in the blockchain system by joining the Peer-To-Peer network.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the methods provided in the various alternative implementations of the focusing aspect of the motorized lens described above. Wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
s1, under the condition that an operation instruction for triggering focusing is received, acquiring scene parameters corresponding to sub-images contained in a currently acquired target image;
s2, determining the scene type of the target image according to the respective corresponding scene parameters of the sub-images, and generating a reference weight matrix matched with the target image;
s3, in a case where the scene type of the target image indicates a preset scene, performing a first focusing operation in accordance with the enabling weights assigned to the respective sub-images;
s4, in the process of executing the first focusing operation, counting the variation of the scene parameters corresponding to each sub-image;
s5, when the variation of the scene parameters reaches the adjusting condition, adjusting the enabling weight according to the variation of the scene parameters to obtain the target weight;
s6, a second focusing operation is performed according to the target weight of each sub-image.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiment of the present invention.
In the above embodiments of the present invention, the descriptions of the embodiments have different emphasis, and reference may be made to related descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or the units may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications should also fall within the protection scope of the present invention.
Claims (13)
1. A focusing method of an electric lens, comprising:
under the condition that an operation instruction for triggering focusing is received, acquiring scene parameters corresponding to sub-images contained in a currently acquired target image, wherein the scene parameters of the sub-images indicate edge detection operator values of the sub-images;
determining the scene type of the target image according to the scene parameters corresponding to the sub-images respectively, and generating a reference weight matrix matched with the target image, wherein the scene type comprises a few-detail scene, a no-detail scene and a many-detail scene, the reference weight matrix is a matrix whose number of elements is consistent with the number of the sub-images, and the reference weight matrix is a weight matrix corresponding to attention indices of the sub-images in the target image during the focusing operation;
under the condition that the scene type of the target image indicates a preset scene, executing a first focusing operation according to an enabling weight allocated to each of the sub-images, wherein the preset scene is the few-detail scene, and allocating the enabling weight to the sub-images comprises allocating a same initial weight value to each of the sub-images;
in the process of executing the first focusing operation, counting a variation of the scene parameter corresponding to each sub-image, wherein the variation of the scene parameter is a statistic capable of characterizing how the scene parameter changes, and comprises at least one of: range, variance, information entropy;
under the condition that the variation of the scene parameters reaches an adjusting condition, adjusting the enabling weight according to the variation of the scene parameters to obtain a target weight;
performing a second focusing operation in accordance with the target weight of each of the sub-images.
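The per-sub-image "scene parameter" in claim 1 is an edge-detection operator value. The following minimal sketch uses a plain sum of absolute horizontal and vertical pixel differences as a stand-in for the (unspecified) edge operator; the function name, the operator, and the pixel data are illustrative assumptions, not the patented implementation:

```python
def scene_parameter(block):
    """Illustrative edge-operator value for one sub-image: the sum of
    absolute horizontal and vertical pixel differences. A high value
    means the sub-image carries edge (detail) information."""
    h, w = len(block), len(block[0])
    total = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:                          # horizontal neighbor
                total += abs(block[y][x + 1] - block[y][x])
            if y + 1 < h:                          # vertical neighbor
                total += abs(block[y + 1][x] - block[y][x])
    return total

flat = [[10, 10], [10, 10]]   # uniform region: no detail
edge = [[0, 255], [0, 255]]   # strong vertical edge
```

With such an operator, a flat sub-image yields a value near zero while an edge-rich sub-image yields a large value, which is what the scene-type classification in the following claims relies on.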
2. The focusing method according to claim 1, wherein the determining the scene type of the target image according to the scene parameters corresponding to the sub-images respectively comprises:
calculating the global parameter and the average parameter of the target image according to the scene parameters of the sub-images;
under the condition that the scene parameter corresponding to the sub-image is smaller than the average parameter and the scene parameter is smaller than a first threshold value, determining the sub-image corresponding to the scene parameter as a non-candidate sub-image, wherein the non-candidate sub-image is a sub-image which does not carry image detail information;
determining the sub-image as a candidate sub-image when the scene parameter corresponding to the sub-image is greater than or equal to the average parameter or the scene parameter is greater than or equal to the first threshold, wherein the candidate sub-image is a sub-image carrying image detail information;
and determining the scene type of the target image according to the global parameters of the target image and the number of the candidate sub-images.
3. The focusing method of claim 2, wherein the determining the scene type of the target image according to the global parameter of the target image and the number of candidate sub-images comprises:
determining the candidate sub-image as a target candidate sub-image under the condition that the scene parameter corresponding to the candidate sub-image is greater than the product of a first coefficient and the average parameter, wherein the target candidate sub-image is a sub-image carrying target image detail information;
and under the condition that the global parameter is smaller than a second threshold and the number of the target candidate sub-images is smaller than a third threshold, determining the scene type of the target image as the preset scene.
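The classification in claims 2 and 3 can be sketched as follows. All thresholds, coefficients, and the choice of the global parameter as a simple sum are illustrative assumptions, not values fixed by the claims:

```python
def classify_subimages(params, first_threshold):
    """Claim 2 sketch: a sub-image is a candidate (carries detail) when its
    scene parameter is >= the average parameter or >= a first threshold."""
    avg = sum(params) / len(params)
    candidates = [p for p in params if p >= avg or p >= first_threshold]
    return avg, candidates

def is_preset_scene(params, first_threshold, first_coeff,
                    second_threshold, third_threshold):
    """Claim 3 sketch: the image is a few-detail (preset) scene when the
    global parameter is small and few candidates clearly exceed the
    average (scene parameter > first_coeff * average)."""
    global_param = sum(params)            # global parameter (assumed: sum)
    avg, candidates = classify_subimages(params, first_threshold)
    targets = [p for p in candidates if p > first_coeff * avg]
    return global_param < second_threshold and len(targets) < third_threshold
```

For example, with `params = [1.0, 2.0, 3.0, 2.0]`, an average of 2.0 and a first threshold of 1.5, only the first sub-image is non-candidate; whether the scene counts as few-detail then depends only on the two remaining thresholds.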
4. The focusing method of claim 2, wherein the generating a reference weight matrix matching the target image comprises:
taking a first weight value as a weight value corresponding to the sub-image under the condition that the sub-image is the candidate sub-image;
taking a second weight value as a weight value corresponding to the sub-image under the condition that the sub-image is the non-candidate sub-image;
generating the reference weight matrix for the target image using the first and second weight values.
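Claim 4's reference weight matrix can be sketched as below; a flat list stands in for the matrix, and the candidate test and the two weight values are illustrative assumptions:

```python
def reference_weight_matrix(params, first_threshold,
                            first_weight=2, second_weight=1):
    """Claim 4 sketch: candidate sub-images (scene parameter >= average
    or >= first_threshold) get first_weight; non-candidate sub-images
    get second_weight."""
    avg = sum(params) / len(params)
    return [first_weight if (p >= avg or p >= first_threshold)
            else second_weight
            for p in params]
```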
5. The focusing method according to claim 1, wherein the performing of the first focusing operation according to the enabling weight assigned to each of the sub-images includes:
assigning a third weight value to each of the sub-images as the enabling weight of the sub-image;
performing the first focusing operation according to the enabling weight.
6. The focusing method of claim 1, wherein the counting the variation of the scene parameter corresponding to each sub-image during the performing of the first focusing operation comprises:
acquiring the maximum value of the scene parameter and the minimum value of the scene parameter corresponding to each sub-image in the first focusing operation process;
calculating a difference value between the maximum value of the scene parameter and the minimum value of the scene parameter as the variation of the scene parameter;
and counting the variation of the scene parameters of each sub-image.
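The range-based variation of claim 6 reduces to a maximum-minus-minimum over each sub-image's parameter samples during the first focusing sweep; the data layout below is an illustrative assumption:

```python
def range_variation(param_history):
    """Claim 6 sketch: the variation of each sub-image's scene parameter
    over the first focusing sweep, measured as the range (max - min).
    `param_history` holds one list of sampled values per sub-image."""
    return [max(h) - min(h) for h in param_history]

# Illustrative samples: the first sub-image responds strongly to the
# focus sweep, the second barely changes.
history = [[3.0, 9.0, 5.0], [2.0, 2.5, 2.1]]
variations = range_variation(history)
```

A large range indicates the sub-image's sharpness reacts to the lens movement, which is why the later claims give such sub-images higher weight.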
7. The focusing method of claim 6, wherein the adjusting the enabling weight according to the variation of the scene parameter to obtain a target weight comprises:
acquiring the variation of the scene parameters corresponding to the sub-images in the first focusing operation process;
calculating an average variation of the variations of the scene parameters of all the sub-images;
and adjusting the enabling weight corresponding to the sub-image according to the comparison result of the variation of the scene parameter and the average variation to obtain the target weight.
8. The focusing method of claim 7, wherein the adjusting the enabling weight corresponding to the sub-image to obtain the target weight according to the comparison result between the variation of the scene parameter and the average variation comprises:
taking a first target weight value as the target weight of the corresponding sub-image when the variation of the scene parameter is larger than the product of a second coefficient and the average variation;
taking a third target weight value as the target weight of the corresponding sub-image when the variation of the scene parameter is smaller than the product of a third coefficient and the average variation;
and taking a second target weight value as the target weight of the corresponding sub-image when the variation of the scene parameter is between the product of the third coefficient and the average variation and the product of the second coefficient and the average variation, wherein the second coefficient is larger than the third coefficient.
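The three-way decision of claim 8 can be sketched as a single comparison function. The coefficient and weight values are illustrative assumptions; the claims only require that the second coefficient exceed the third:

```python
def target_weight(variation, avg_variation,
                  second_coeff=1.5, third_coeff=0.5,
                  w_high=4, w_mid=2, w_low=1):
    """Claim 8 sketch: choose one of three target weight values by
    comparing a sub-image's variation against the average variation
    (second_coeff > third_coeff)."""
    if variation > second_coeff * avg_variation:
        return w_high   # first target weight: strongly varying sub-image
    if variation < third_coeff * avg_variation:
        return w_low    # third target weight: barely varying sub-image
    return w_mid        # second target weight: in-between
```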
9. The focusing method according to claim 1, characterized in that:
in a case where the scene type of the target image indicates a general scene, the first focusing operation and the second focusing operation are performed in accordance with the reference weight matrix.
10. The focusing method according to claim 9, characterized in that:
and adjusting the scene type of the target image to the general scene if the variation of the scene parameter fails to reach an adjustment condition.
11. A focusing apparatus for an electromotive lens, comprising:
an acquisition module, configured to acquire, when an operation instruction for triggering focusing is received, scene parameters corresponding to sub-images contained in a currently acquired target image, wherein the scene parameters of the sub-images indicate edge detection operator values of the sub-images;
the determining module is used for determining scene types of the target image according to the scene parameters corresponding to the sub-images respectively and generating a reference weight matrix matched with the target image, wherein the scene types comprise a few-detail scene, a no-detail scene and a many-detail scene, the reference weight matrix is a matrix with the data volume consistent with the number of the sub-images, and the reference weight matrix is a weight matrix corresponding to the attention indexes of the sub-images in the target image in the focusing operation process;
a first executing module, configured to, when a scene type of the target image indicates a preset scene, execute a first focusing operation according to an enabling weight allocated to each sub-image, where the preset scene is the less-detail scene, and the enabling weight allocated to each sub-image includes allocating a same initial weight value to each sub-image;
a statistics module, configured to count, during the first focusing operation, a variation of the scene parameter corresponding to each of the sub-images, wherein the variation of the scene parameter is a statistic capable of characterizing how the scene parameter changes, and comprises at least one of: range, variance, information entropy;
the adjusting module is used for adjusting the enabling weight according to the variation of the scene parameters under the condition that the variation of the scene parameters reaches an adjusting condition to obtain a target weight;
a second execution module for executing a second focusing operation according to the target weight of each of the sub-images.
12. A computer-readable storage medium comprising a stored program, characterized in that the program when executed performs the method of any of claims 1 to 10.
13. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 10 by means of the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110362434.6A CN112969027B (en) | 2021-04-02 | 2021-04-02 | Focusing method and device of electric lens, storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112969027A CN112969027A (en) | 2021-06-15 |
CN112969027B true CN112969027B (en) | 2022-08-16 |
Family
ID=76281096
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110362434.6A Active CN112969027B (en) | 2021-04-02 | 2021-04-02 | Focusing method and device of electric lens, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112969027B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115190242B (en) * | 2022-07-08 | 2024-02-13 | 杭州海康威视数字技术股份有限公司 | Focusing triggering method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012076992A1 (en) * | 2010-12-07 | 2012-06-14 | Hiok Nam Tay | Auto-focus image system |
CN106934790A (en) * | 2015-12-30 | 2017-07-07 | 浙江大华技术股份有限公司 | A kind of evaluation method of image definition, the automatic method for focusing on and related device |
CN110324536A (en) * | 2019-08-19 | 2019-10-11 | 杭州图谱光电科技有限公司 | A kind of image change automatic sensing focusing method for micro- camera |
CN110572573A (en) * | 2019-09-17 | 2019-12-13 | Oppo广东移动通信有限公司 | Focusing method and device, electronic equipment and computer readable storage medium |
CN111225162A (en) * | 2020-01-21 | 2020-06-02 | 厦门亿联网络技术股份有限公司 | Image exposure control method, system, readable storage medium and camera equipment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105578045A (en) * | 2015-12-23 | 2016-05-11 | 努比亚技术有限公司 | Terminal and shooting method of terminal |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108898171B (en) | Image recognition processing method, system and computer readable storage medium | |
CN111950543B (en) | Target detection method and device | |
CN111402170B (en) | Image enhancement method, device, terminal and computer readable storage medium | |
CN113190757A (en) | Multimedia resource recommendation method and device, electronic equipment and storage medium | |
JP2013534342A (en) | Object recognition using incremental feature extraction | |
CN107808116B (en) | Wheat and wheat spider detection method based on deep multilayer feature fusion learning | |
CN111091592A (en) | Image processing method, image processing apparatus, electronic device, and readable storage medium | |
JP6174894B2 (en) | Image processing apparatus and image processing method | |
CN105869175A (en) | Image segmentation method and system | |
CN110321892B (en) | Picture screening method and device and electronic equipment | |
CN111310727A (en) | Object detection method and device, storage medium and electronic device | |
CN103353881A (en) | Method and device for searching application | |
CN110009664A (en) | A kind of infrared object tracking method and device based on response diagram fusion | |
CN112969027B (en) | Focusing method and device of electric lens, storage medium and electronic equipment | |
CN113076159A (en) | Image display method and apparatus, storage medium, and electronic device | |
CN114611635B (en) | Object identification method and device, storage medium and electronic device | |
CN111598176B (en) | Image matching processing method and device | |
CN111629146A (en) | Shooting parameter adjusting method, shooting parameter adjusting device, shooting parameter adjusting equipment and storage medium | |
CN111402301B (en) | Water accumulation detection method and device, storage medium and electronic device | |
CN111309946A (en) | Established file optimization method and device | |
CN110689565A (en) | Depth map determination method and device and electronic equipment | |
CN117994375A (en) | Processing method and device of thermal point, storage medium and electronic equipment | |
CN116168045B (en) | Method and system for dividing sweeping lens, storage medium and electronic equipment | |
CN110135422B (en) | Dense target detection method and device | |
CN115471574B (en) | External parameter determination method and device, storage medium and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||