CN115567778A - Automatic focusing method and device, electronic equipment and storage medium


Info

Publication number: CN115567778A
Application number: CN202110742860.2A
Authority: CN (China)
Legal status: Pending
Prior art keywords: sample image, lens, definition, focusing, preset
Other languages: Chinese (zh)
Inventor: 蔡飞飞 (Cai Feifei)
Original and current assignee: Lumi United Technology Co Ltd
Application filed by Lumi United Technology Co Ltd


Landscapes

  • Automatic Focus Adjustment (AREA)

Abstract

The embodiments of the application relate to the technical field of automatic focusing and provide an automatic focusing method and device, an electronic device, and a storage medium. The method includes: moving the lens multiple times within a preset focusing range according to a first preset step length, and acquiring a first sample image after each movement of the lens; calculating the definition of each first sample image to obtain the initial focusing position of the lens corresponding to the first sample image with the maximum definition; taking the initial focusing position as a starting point, moving the lens multiple times within a preset range of the starting point by a second preset step length, obtaining a second sample image after each movement, and stopping moving the lens once a preset condition is met, finally obtaining multiple second sample images; and determining the position of the lens corresponding to the second sample image with the maximum definition as the final focusing position. The embodiments of the application can realize focusing quickly and accurately.

Description

Automatic focusing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of auto-focusing technologies, and in particular, to an auto-focusing method, an auto-focusing apparatus, an electronic device, and a storage medium.
Background
The auto-focusing technology is one of the key technologies of computer vision and various imaging systems, and has wide application in imaging systems such as cameras, video cameras, microscopes, endoscopes and the like.
In recent years, with the rapid development of smart homes, cameras are increasingly used in smart home devices. For example, an intelligent door lock with a camera needs to focus automatically when someone approaches it.
The existing automatic focusing techniques cannot balance focusing accuracy against focusing efficiency.
Disclosure of Invention
The embodiments of the application provide an automatic focusing method and device, an electronic device, and a storage medium, aiming to solve the problem that focusing accuracy and focusing efficiency cannot both be achieved at the same time.
In a first aspect, an embodiment of the present application provides an auto-focusing method, the method including: moving the lens multiple times within a preset focusing range according to a first preset step length, and acquiring a first sample image after each movement of the lens; calculating the definition of each first sample image to obtain the initial focusing position of the lens corresponding to the first sample image with the maximum definition; taking the initial focusing position as a starting point, moving the lens multiple times within a preset range of the starting point by a second preset step length, obtaining a second sample image after each movement, and stopping moving the lens once a preset condition is met, finally obtaining multiple second sample images; and determining the position of the lens corresponding to the second sample image with the maximum definition as the final focusing position.
In a second aspect, an embodiment of the present application provides an automatic focusing apparatus, including: an acquisition module, configured to move the lens multiple times within a preset focusing range according to a first preset step length and acquire a first sample image after each movement of the lens; a calculating module, configured to calculate the definition of each first sample image to obtain the initial focusing position of the lens corresponding to the first sample image with the maximum definition; and a focusing module, configured to take the initial focusing position as a starting point, move the lens multiple times within a preset range of the starting point by a second preset step length, obtain a second sample image after each movement, and stop moving the lens once a preset condition is met, finally obtaining multiple second sample images, where the second preset step length is smaller than the first preset step length. The focusing module is further configured to determine the position of the lens corresponding to the second sample image with the maximum definition as the final focusing position.
In a third aspect, an embodiment of the present application provides an electronic device including a lens, a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the auto-focusing method described above.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the auto-focusing method as described above.
In the embodiment of the application, the initial focusing position is quickly determined by adopting the first preset step length, and after the initial focusing position is determined, the fine focusing is carried out in the preset range of the initial focusing position by adopting the second preset step length, so that the focusing can be quickly and accurately realized finally.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 shows a schematic diagram of an application environment suitable for the embodiment of the present application.
Fig. 2 shows another schematic application environment applicable to the embodiment of the present application.
Fig. 3 is a diagram showing an example of a variation relationship between the focus lens position and the focus evaluation function value in the hill-climbing search process in a single-peak scene.
Fig. 4 is a diagram showing an example of a variation relationship between the focus lens position and the focus evaluation function value in the hill-climbing search process in a bimodal scene.
Fig. 5 shows a flowchart of an auto-focusing method provided in an embodiment of the present application.
Fig. 6 is a diagram showing an example of a variation relationship between the focus lens position and the focus evaluation function value when the auto-focusing method is used.
Fig. 7 is a flowchart illustrating another autofocus method provided in an embodiment of the present application.
Fig. 8 shows a flowchart of another auto-focusing method provided in an embodiment of the present application.
Fig. 9 is a diagram illustrating a process flow of an auto-focusing method according to an embodiment of the present application.
Fig. 10 is a block diagram illustrating an example of an autofocus apparatus according to an embodiment of the present application.
Fig. 11 shows a hardware structure block diagram of an electronic device provided in an embodiment of the present application.
Reference numerals: 10 - intelligent home system; 100 - gateway device; 200 - home device; 200a - door and window sensor; 200b - intelligent switch; 200c - light bulb; 200d - intelligent door lock; 200e - intelligent air conditioner; 300 - server; 400 - terminal device; 500 - router; 1100 - electronic device; 1110 - processor; 1120 - storage medium; 1121 - operating system; 1122 - data; 1123 - applications; 1130 - memory; 1140 - input/output interface; 1150 - wired or wireless network interface; 2100 - autofocus device; 2101 - acquisition module; 2102 - calculating module; 2103 - focusing module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present application, it should be noted that terms such as "upper", "lower", "inside", and "outside", if used, indicate orientations or positional relationships based on those shown in the drawings or those the product of the present invention usually assumes in use. They are used only for convenience and simplicity of description and do not indicate or imply that the referred device or element must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present application.
Furthermore, the appearances of the terms "first," "second," and the like, if any, are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
It should be noted that the features of the embodiments of the present application may be combined with each other without conflict.
An application environment to which the present application relates will be described below.
Referring to fig. 1, fig. 1 is a schematic diagram of an application environment suitable for the embodiment of the present application. Fig. 1 provides an intelligent home system 10, where the intelligent home system 10 includes a gateway device 100, a home device 200 connected to the gateway device 100, and a server 300 connected to the gateway device 100. The number of the gateway devices 100 may be at least one, and the number of the home devices 200 may be at least one. When the number of gateway apparatuses 100 is plural, communication connection between different gateway apparatuses 100 is also possible.
In this embodiment, the gateway device 100 may be an intelligent gateway for smart home control, implementing functions such as system information acquisition, information input, information output, centralized control, remote control, and linkage control. The gateway device can be responsible for security alarms, household appliance control, and electricity consumption information acquisition. The gateway device 100 can also exchange information wirelessly with products such as intelligent interactive terminals. The gateway device 100 also has a wireless routing function, with good wireless performance, network security, and coverage.
In this embodiment, the home device 200 may include various intelligent household devices, sensing devices, detection devices, and the like disposed in an indoor space, for example, an intelligent television, an intelligent refrigerator, an intelligent air conditioner, a temperature and humidity sensor, a pressure sensor, a smoke sensor, a human body sensor, a door and window sensor, an intelligent switch, a socket, an electric lamp, an infrared emission device, a camera device, and so on. The home devices 200 connected to the gateway device 100 may exchange information and instructions with the gateway device 100. The gateway device 100 and the home device 200 may be connected through communication means such as Bluetooth, WiFi (Wireless Fidelity), ZigBee, and the like; of course, the connection manner of the gateway device 100 and the home device 200 is not limited in this embodiment of the application.
In this embodiment, the server 300 may be a local server, a cloud server, or the like, and a specific server type may not be limited in this embodiment. The server 300 connected to the gateway apparatus 100 may wirelessly interact with the gateway apparatus 100. The gateway apparatuses 100 disposed in different indoor spaces may be communicatively connected to the same server 300 through a network to perform information interaction between the server 300 and the gateway apparatus 100.
Further, the smart home system 10 may further include a terminal device 400. The terminal device 400 may include a Personal Computer (PC), a tablet PC, a smartphone, a Personal Digital Assistant (PDA), and the like, which are not limited herein. The terminal device 400 can exchange information with the server 300 wirelessly, for example via 2G/3G/4G/5G or WiFi; of course, the connection mode between the terminal device 400 and the server 300 is not limited in the embodiment of the present application. In some embodiments, the terminal device 400 may also be used for interaction with a user, so that the user may communicate wirelessly with the gateway device 100 through the terminal device 400 via the router 500. In addition, the user may register the same account information on both the gateway device 100 and the terminal device 400, and use this account information to synchronize information between the gateway device 100 and the terminal device 400.
In some embodiments, the user may set different trigger scenarios or automations through an application (APP) on the terminal device 400. In one manner, the terminal device 400 may upload the scenario configuration information or automation scheme to the server 300, so that when the trigger condition of the scenario or automation is reached, the server 300 can look up, in the stored scenario configuration information or automation scheme, the device corresponding to the execution action and notify that device to perform the action, thereby fulfilling the scenario or automation. Alternatively, the server 300 may send the scenario configuration information or automation scheme to the gateway device 100, and the gateway device 100 looks up the device corresponding to the execution action in the stored scenario configuration information or automation scheme. Meanwhile, the gateway device 100 may feed the device's execution result back to the server 300.
For example, referring to fig. 2, as an embodiment, when the automation scheme set by the user through the APP of the terminal device 400 is "turn on the light automatically when the door or window opens", the trigger condition of the automation scheme is "door or window opened" and the execution action is "intelligent switch turns on the light bulb". In this case, the triggering device is a door and window sensor 200a and the executing device is an intelligent switch 200b connected to a light bulb 200c. The automation scheme may be stored in the gateway device 100 or in the server 300, and the automation linkage may be executed through a local area network or a wide area network path.
If the automatic execution is performed locally in the gateway device 100 through the lan path, the door/window sensor 200a senses that the door/window is opened, reports an information event that the door/window is opened to the gateway device 100, and after receiving the information event, the gateway device 100 can find the device corresponding to the execution action in the automation scheme, in this case, the intelligent switch 200b, and notify the intelligent switch 200b to control the lighting, thereby realizing the automatic linkage of the automatic lighting when the door/window is opened.
If the automatic execution is performed on the server 300 through the wan path, the door/window sensor 200a senses that the door/window is opened, reports an information event of the door/window opening to the gateway device 100, the gateway device 100 reports the event to the server 300 after receiving the event, and the server 300 finds the device corresponding to the execution action in the automation scheme according to the stored automation scheme, in this case, the intelligent switch 200b, and notifies the intelligent switch 200b to control the lighting through the gateway device 100, thereby realizing the automatic linkage of the automatic lighting when the door/window is opened.
Further, after the lighting is turned on, the execution result of the successful lighting may be fed back to the gateway device 100, and after the gateway device 100 receives the information, the current time, the Identifier (ID) of the automation scheme, and the execution result of the automation scheme may be reported to the server 300 and stored by the server 300. The ID may be a symbol uniquely identifying the automation scheme, may be a number, a character, and the like, and is not limited herein.
As another embodiment, the terminal device 400 includes a camera (also called a lens); the user inputs a face image through the camera of the terminal device 400 and performs face recognition to control the door and window sensor 200a or the intelligent switch 200b. As another embodiment, the home device 200 may further include an intelligent door lock 200d with a camera; the intelligent door lock 200d is in communication connection with the gateway device 100, a face image is input or face recognition is performed through the intelligent door lock 200d, and the door lock opens after the face is recognized. As another embodiment, the home device 200 may further include an intelligent air conditioner 200e with a camera; the intelligent air conditioner 200e is in communication connection with the gateway device 100, images of the user indoors are acquired through the intelligent air conditioner 200e and recognized by the server 300, and a decision is made according to the recognition result; for example, if it is determined that the user has kicked off the quilt during sleep, the wind speed or temperature of the intelligent air conditioner 200e is adjusted appropriately.
When a face image is input through the terminal device 400 or the intelligent door lock 200d, focusing must be performed first; a clear face image can be obtained only after focusing succeeds, and the face image can then be recognized.
The existing focusing technique is to adjust the distance between the lens and a Charge Coupled Device (CCD) so that the image plane falls on the imaging surface of the CCD. From a basic principle, the autofocus techniques can be divided into two broad categories: one is range finding autofocus based on distance measurement between a lens and a subject being photographed, and the other is focus detection autofocus based on sharp imaging on a focusing screen. For the latter, in order to quickly find a focusing position satisfying a definition requirement in a focusing process, search methods generally used include exhaustive search and hill-climbing search.
For exhaustive search, the lens is controlled to move by a fixed step length across the whole focusing range, and the sharpest focusing position is found according to the definition.
For the hill-climbing search, the process of climbing a hill is simulated: a position is selected at random, and the search moves toward a higher position each time until the peak is reached; that is, in each step the best solution in the neighboring space is chosen as the current solution until a local optimum is obtained. The algorithm can become trapped in a local optimum, and whether the global optimum can be found depends on the position of the initial point: if the initial point is near the global optimum, the global optimum can be obtained. Hill climbing is a local optimization method; it uses a heuristic, is an improvement on depth-first search, and uses feedback information to help generate decisions. Its advantage is that traversal is avoided and efficiency is improved by heuristically selecting only some of the nodes. Its disadvantage is that, because it is not a full search, the result may not be globally optimal.
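Purely as an illustration, the following is a minimal sketch of such a hill-climbing focus search; `capture_sharpness`, the position bounds, and the halving of the step on each reversal are hypothetical stand-ins, not details from the patent:

```python
# Minimal hill-climbing focus search (illustrative sketch).
# capture_sharpness(pos) is a hypothetical callback: move the lens to
# pos and return the focus evaluation function value of the new frame.

def hill_climb(capture_sharpness, start_pos, step, lo, hi):
    pos = start_pos
    best = capture_sharpness(pos)
    direction = 1                      # start by moving right
    while step >= 1:
        nxt = max(lo, min(hi, pos + direction * step))
        val = capture_sharpness(nxt)
        if val > best:                 # still climbing: keep going
            pos, best = nxt, val
        else:                          # passed the peak: reverse, refine
            direction = -direction
            step //= 2
    return pos, best                   # a local optimum, as noted above
```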
To explain the hill-climbing search process more pictorially, please refer to fig. 3, which shows an example of the relationship between the focus lens position and the focus evaluation function value (i.e., sharpness) during a hill-climbing search in a single-peak scene. In fig. 3, the lens starts from the leftmost position. Before reaching point B, the lens keeps moving rightward because the sharpness value keeps increasing. After passing point B, the sharpness value keeps decreasing; once it has decreased to a certain degree, the moving direction of the lens is reversed and the lens begins to move leftward. Point B is therefore finally obtained as the final focusing position, i.e., the lens position with the highest image sharpness.
Referring to fig. 4, fig. 4 shows an example of the relationship between the focus lens position and the focus evaluation function value during a hill-climbing search in a bimodal scene. In fig. 4, point A is the initial position of the hill-climbing search. The lens position is gradually moved rightward to reach point B, whose sharpness is higher than that of point A, so the search continues to adjust the lens position rightward and reaches point C. The sharpness of point C is found to be lower than that of point B, so the adjustment direction is reversed and the lens position is adjusted leftward to point D, whose sharpness is higher than that of point C. The lens position continues to be adjusted in this way until point P1 is finally reached, and the lens position corresponding to point P1 is taken as the final focusing position. As can be seen from fig. 4, point P1 is in fact not the optimal final focusing position; point P is. Owing to the limitations of the hill-climbing search algorithm, the point P1 finally obtained is a local optimum rather than the global optimum.
It should be noted that an exhaustive search can also be used in a bimodal or multimodal scene. However, the computation required by exhaustive search is large when the focusing accuracy requirement is high, and it grows further in bimodal or multimodal scenes; if the computation is reduced, the focusing accuracy suffers.
In view of the advantages and disadvantages of the two approaches, the embodiments of the application combine them, which not only exploits the advantages of both but also weakens their disadvantages to a certain extent.
Referring to fig. 5, fig. 5 is a flowchart illustrating an auto-focusing method according to an embodiment of the present application, the method including the following steps:
Step S100: move the lens multiple times within a preset focusing range according to a first preset step length, and acquire a first sample image after each movement of the lens.
In this embodiment, the preset focusing range is a range of focal lengths that can be adjusted when the lens is focused, and the focal lengths are continuously adjusted along with the movement of the lens, so that first sample images with different degrees of sharpness in the focusing window at different focal lengths can be obtained.
In this embodiment, the focusing window may be determined as needed. For example, the focusing window may be a 50 × 50 region at the center of the captured image, or a 60 × 60 region at its lower right corner. There may be one focusing window or multiple focusing windows; for example, the 50 × 50 region at the center and the 60 × 60 region at the lower right corner of the captured image may both serve as focusing windows at the same time.
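As an illustration of cropping such a window, here is a minimal sketch; the function name and the default size are assumptions, not from the patent:

```python
import numpy as np

def center_window(image, size=50):
    """Crop an assumed size x size focusing window at the image center."""
    h, w = image.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return image[top:top + size, left:left + size]
```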
In this embodiment, as a specific implementation, the lens may be moved multiple times according to the first preset step length, starting from the lens position with the smallest focal length until the lens position with the largest focal length is reached. In another implementation, the lens may start from the position with the largest focal length and move until the position with the smallest focal length is reached. To avoid the large amount of data an exhaustive search would cause, the first preset step length may be set according to actual needs; for example, it may be set slightly larger to reduce the amount of data, since at this stage only a position reasonably close to the optimal focusing position needs to be found at coarse granularity.
Step S110: calculate the sharpness of each first sample image to obtain the initial focusing position of the lens corresponding to the first sample image with the highest sharpness.
In this embodiment, the sharpness of the first sample image is an important index for measuring the quality of the first sample image, and the sharpness of the image may be calculated through various functions to evaluate the sharpness of the image, and the function for calculating the sharpness of the image may be, but is not limited to, a gradient function, a gray variance function, an entropy function, and the like.
In this embodiment, the initial focusing position is a position of the lens corresponding to the first sample image with the maximum definition determined after moving the lens according to the first preset step length, and therefore, the initial focusing position is only a rough position relatively close to the final focusing position. That is, the final focus position is necessarily near the initial focus position.
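As an illustration of steps S100 and S110 together, the following is a minimal coarse-scan sketch; `capture_image` and `sharpness` are hypothetical callbacks standing in for "move the lens and grab a frame" and the definition evaluation of step S110:

```python
def coarse_scan(capture_image, sharpness, focus_min, focus_max, step1):
    """Steps S100/S110 (sketch): scan the whole preset focusing range
    with the coarse first preset step and return the lens position whose
    first sample image has the maximum definition."""
    best_pos = focus_min
    best_score = float("-inf")
    pos = focus_min
    while pos <= focus_max:
        score = sharpness(capture_image(pos))   # first sample image at pos
        if score > best_score:
            best_pos, best_score = pos, score
        pos += step1
    return best_pos                             # initial focusing position
```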
Step S120: taking the initial focusing position as a starting point, move the lens multiple times within a preset range of the starting point by a second preset step length, obtaining a second sample image after each movement, and stop moving the lens once a preset condition is met, finally obtaining multiple second sample images.
In this embodiment, since the final focusing position is near the initial focusing position, as a better implementation manner, the initial focusing position is used as a starting point, and the lens is moved for multiple times within a preset range near the initial focusing position by using a granularity finer than the first preset step length (that is, the second preset step length is smaller than the first preset step length), so that the final focusing position meeting the requirement can be finally obtained.
In one embodiment, the preset condition may be a number of lens movements predetermined by the user according to actual needs. Normally, the more times the lens is moved, the closer the final focusing position is to the ideal, but the lower the focusing efficiency of the lens; the user may therefore set a preset condition that achieves the expected result within an acceptable range. In another embodiment, the preset condition may be that the maximum sharpness value among the second sample images falls within a preset range or reaches a preset value, and so on.
Step S130: determine the position of the lens corresponding to the second sample image with the maximum sharpness as the final focusing position.
In this embodiment, the manner of calculating the sharpness of the second sample image may be the same as or different from the manner of calculating the sharpness of the first sample image.
According to the method provided by the embodiment of the application, the initial focusing position is quickly determined by adopting the first preset step length, the fine focusing range is narrowed, the fine focusing is carried out within the preset range of the initial focusing position by adopting the second preset step length, and finally the focusing can be quickly and accurately realized.
To facilitate comparison with the prior art, an embodiment of the present application further provides an example of the relationship between the focus lens position and the focus evaluation function value when the auto-focusing method is used; please refer to fig. 6.
In this embodiment, if the first sample image contains noise, especially noise with a large gradient, and the image sharpness is calculated from the gradient alone, such noise is usually mistaken for a strong edge, which reduces the accuracy of the sharpness judgment. To judge the image sharpness more accurately, on the basis of fig. 5, the embodiment of the present application further provides an implementation for calculating the sharpness of the first sample image; please refer to fig. 7, which shows a flowchart of another auto-focusing method provided in the embodiment of the present application. Step S110 includes the following sub-steps:
Sub-step S1101: calculate the gradient value of each pixel in each first sample image.
In this embodiment, the first sample image includes a plurality of pixel points, and as a specific implementation manner, the method for calculating the gradient value of each pixel point in any one first sample image may be:
First, mask templates in the lateral and longitudinal directions are set in advance, for example:

[Equation image in the original: the lateral and longitudinal mask templates.]

where F1 denotes the lateral mask template and F2 denotes the longitudinal mask template.
Then, the mask template is convolved with the first sample image to obtain the gradient of each pixel point.
The following formula can be adopted:
H1 = image * F1, H2 = image * F2, where H1 is the lateral gradient, H2 is the longitudinal gradient, and image denotes the pixel values of the pixel points.
And finally, obtaining the gradient value of each pixel point according to the transverse gradient and the longitudinal gradient of each pixel point.
As a specific embodiment, the gradient value may be calculated using the following formula:

g = sqrt(H1^2 + H2^2),

where g denotes the gradient value.
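A minimal sketch of this gradient computation follows; since the patent's actual mask templates are shown only as an image, simple first-difference masks are assumed here purely for illustration:

```python
import numpy as np
from scipy.signal import convolve2d

# Assumed stand-ins for the patent's mask templates F1 and F2, whose
# actual values appear only as an image in the original document.
F1 = np.array([[-1, 1]])          # assumed lateral (horizontal) mask
F2 = np.array([[-1], [1]])        # assumed longitudinal (vertical) mask

def gradient_map(image):
    """Per-pixel gradient magnitude g = sqrt(H1^2 + H2^2)."""
    img = image.astype(np.float64)
    h1 = convolve2d(img, F1, mode="same")   # lateral gradient H1
    h2 = convolve2d(img, F2, mode="same")   # longitudinal gradient H2
    return np.sqrt(h1 ** 2 + h2 ** 2)
```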
Sub-step S1102: calculate the gradient change rate of each first sample image from the gradient values of all the pixels in that image.
In this embodiment, the gradient change rate is used to represent the change degree of the gradient, and each first sample image corresponds to one gradient change rate.
Sub-step S1103: calculate the sharpness of each first sample image from the maximum of the gradient values of all the pixels in that image and the image's gradient change rate.
In this embodiment, for any first sample image, the sharpness of the first sample image is calculated according to the maximum value of the gradient values of all the pixels in the first sample image and the gradient change rate of the first sample image.
According to the method provided by the embodiment of the application, the definition of the first sample image is calculated according to the gradient and the gradient change rate, so that the judgment of the image definition is more accurate, and meanwhile, the misjudgment that the image definition is judged only through the gradient under an extreme condition is avoided.
In this embodiment, in order to make the value of the gradient change rate more reasonable, the embodiment of the present application further provides a specific implementation manner for calculating the gradient change rate:
first, the average value of the gradient values of all the pixel points in each first sample image is calculated.
And secondly, acquiring the maximum value and the minimum value of the gradient values of all pixel points in each first sample image.
And finally, calculating the gradient change rate of each first sample image according to the maximum value, the minimum value and the average value of each first sample image.
In this embodiment, as a specific implementation, the gradient change rate of any first sample image can be calculated by the following formula:

V_G = (MAX(g) - MIN(g)) / MEAN(g),

where g denotes the gradient values of the pixel points in the first sample image, MAX(g) the maximum gradient value, MIN(g) the minimum gradient value, and MEAN(g) the average gradient value.
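As a sketch under the reconstruction above (the original formula appears only as an image), the gradient change rate can be computed as:

```python
def gradient_change_rate(g):
    """V_G = (MAX(g) - MIN(g)) / MEAN(g), per the reconstructed formula.
    g is the per-pixel gradient map from gradient_map above."""
    return float((g.max() - g.min()) / g.mean())
```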
According to the method provided by the embodiment of the application, the gradient change rate is calculated according to the maximum value, the minimum value and the average value, so that the calculated gradient change rate is more reasonable, and finally, the image definition calculated according to the gradient and the gradient change rate is more accurate.
In this embodiment, when calculating the sharpness according to the maximum value and the gradient change rate, in order to better balance the influence weight of the gradient and the gradient change rate on the sharpness, the embodiment of the present application further provides a specific calculation formula, and for any first sample image, the sharpness may be calculated by using the following formula:
For any first sample image, based on the maximum of the gradient values of all its pixels and its gradient change rate, the definition is calculated using the formula IQA = G^α · V_G^(1−α), where IQA denotes the definition of the first sample image, G denotes the maximum gradient value of all pixel points in the image, V_G denotes the gradient change rate, and α denotes a weighting factor.
In this embodiment, the formula combines the maximum gradient value with the gradient change rate; α and 1 − α are the exponents of the maximum gradient value and of the gradient change rate, respectively, and they sum to 1, meaning that the maximum gradient value carries a little more weight and the gradient change rate a little less.
In this embodiment, α may be obtained through experimental verification. For example, in the embodiment of the present application, with α = 0.61 the definition computed by the above formula evaluates the clarity, or image quality, of the first sample image well.
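Combining the pieces above into one definition function (a sketch only, reusing the assumed gradient_map and gradient_change_rate helpers; α = 0.61 per the experimental value quoted above):

```python
def sharpness(image, alpha=0.61):
    """IQA = G**alpha * V_G**(1 - alpha) over the focusing window."""
    g = gradient_map(image)          # per-pixel gradient values
    big_g = g.max()                  # G: maximum gradient value
    v_g = gradient_change_rate(g)    # V_G: gradient change rate
    return big_g ** alpha * v_g ** (1 - alpha)
```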
According to the method provided by the embodiment of the application, the weight factors are introduced, and the weight occupied by the gradient and the gradient change rate in the definition calculation can be properly balanced so as to adapt to different application scenes.
In this embodiment, in order to focus at a finer granularity within the preset range of the initial focusing position once that position has been obtained, the embodiment of the present application further provides a specific implementation that takes the initial focusing position as the starting point and focuses within a preset range of the starting point using the second preset step length, obtaining multiple second sample images. Please refer to fig. 8, which shows a flowchart of another automatic focusing method provided in the embodiment of the present application. Step S120 includes the following sub-steps:
Sub-step S1201: take the second preset step length as the target step length, take the preset moving direction as the target direction, and take the definition of the second sample image acquired when the lens is at the starting point as the target definition, where the second preset step length is smaller than the first preset step length.
In this embodiment, as a specific implementation manner, the preset moving direction may be to rotate and move the lens leftwards or rightwards.
In this embodiment, the method for calculating the sharpness of the second sample image may be the same as or different from that used for the first sample image. For example, to achieve the technical effect of step S110 or its sub-steps, the sharpness of the second sample image may also be calculated in the manner described in step S110 or its sub-steps; the specific process is as described above and is not repeated here.
Sub-step S1202: move the lens once within the preset range according to the target step length and the target direction, and take the second sample image acquired after the movement as the current image.
In this embodiment, since the second preset step is smaller than the first preset step, the preset range is inevitably within the preset focusing range and near the initial focusing position.
Sub-step S1203: determine the new target direction, new target step length, and new target definition for the next lens movement according to the target definition and the current definition of the current image.
In this embodiment, the target definition is the maximum definition among the second sample images acquired so far. If the target definition is greater than or equal to the current definition, the target definition does not need to be updated; otherwise, the current definition becomes the new target definition, ensuring that the target definition is always the maximum definition among the acquired second sample images.
In this embodiment, to control autofocus more finely so that each lens movement is more reasonable and the acquired second sample images become sharper, the target direction and target step length of each movement may be changed dynamically according to the definition of the current image acquired after the lens moves.
Sub-step S1204: continue moving the lens according to the new target direction, new target step length, and new target definition until the preset condition is met, finally obtaining multiple second sample images.
In this embodiment, the lens is moved again according to the new target direction and the new target step length to obtain a new current image acquired after the movement, and the new target definition together with the definition of the new current image is used to determine the target direction, target step length, and target definition of the next movement, continuing until the preset condition is met. For example, if the preset condition is 100 movements, 100 second sample images are finally obtained.
According to the method provided by the embodiment of the application, the new target direction and the new target step length of the next movement of the lens are determined according to the current definition and the target definition of the current image, so that the lens is more reasonable to move each time, the obtained second sample image can be clearer, and finally the final focusing position with the best automatic focusing effect can be determined according to the position of the second sample image with the highest definition.
In this embodiment, according to the difference in the relationship between the current definition and the target definition, the determined new target direction, new target step size, and new target definition may also be different, and the embodiment of the present application further provides an implementation manner of specifically determining the new target direction, the new target step size, and the new target definition:
first, if the current definition is greater than or equal to the target definition, the target direction is taken as a new target direction, the target step length is taken as a new target step length, and the current definition is taken as a new target definition.
In this embodiment, if the current sharpness is greater than or equal to the target sharpness, it means that the position greater than the target sharpness can be found again by moving the lens along the current target direction, and at this time, the current sharpness serves as a new target sharpness without changing the target direction or the target step size, and the lens continues to be moved to find the position greater than the new target sharpness.
And secondly, if the current definition is smaller than the target definition, taking the direction opposite to the target direction as a new target direction, reducing the target step length by a preset value, taking the reduced target step length as a new target step length, and taking the target definition as the new target definition.
In this embodiment, if the current definition is smaller than the target definition, the position with definition greater than the target lies in the direction opposite to the target direction, so the lens must be moved in the opposite direction next time; that is, the direction opposite to the target direction becomes the new target direction. To avoid skipping past that position when moving in the opposite direction, the target step length is reduced by a preset value, i.e., the preset value is subtracted from the target step length, so that focusing is controlled at a finer granularity. The preset value may be a fixed value or a value determined by a preset function.
In this embodiment, since the current sharpness is smaller than the target sharpness, the target sharpness does not need to be updated at this time, so the target sharpness is directly used as the new target sharpness.
According to the method provided by the embodiment of the application, when the current definition is smaller than the target definition, the moving direction of the lens is changed, the target step length is reduced, otherwise, the lens is continuously moved according to the current moving direction and the current step length, and the finer focusing control can be realized.
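Putting sub-steps S1201 to S1204 and the update rules above together, the following is a minimal fine-search sketch; `capture_image`, `sharpness`, `decrement`, and `max_moves` are assumed names (the patent leaves the preset value and the preset stop condition open), and clamping the position to the preset range is omitted for brevity:

```python
def fine_search(capture_image, sharpness, start_pos, step2, decrement,
                max_moves, direction=1):
    """Step S120 (sketch): fine search around the initial focusing
    position, reversing direction and shrinking the step whenever the
    current definition drops below the target definition."""
    pos = start_pos
    target = sharpness(capture_image(pos))       # target definition (S1201)
    best_pos, best = pos, target
    step = step2                                 # target step length
    for _ in range(max_moves):                   # assumed preset condition
        pos += direction * step                  # move the lens once (S1202)
        current = sharpness(capture_image(pos))  # current image's definition
        if current >= target:                    # keep direction and step,
            target = current                     # update target definition
        else:                                    # reverse direction and
            direction = -direction               # shrink the step by the
            step = max(step - decrement, 1)      # preset value
        if current > best:                       # track the sharpest image
            best_pos, best = pos, current
    return best_pos                              # final focusing position (S130)
```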
To explain the overall processing flow of the auto-focusing method clearly, the embodiment of the present application further provides an exemplary processing flow diagram of the method; please refer to fig. 9.
To describe the application of the above auto-focusing method in a specific scene more concretely, the embodiment of the present application takes the application scene of fig. 2 as an example, with the electronic device 1100 being the intelligent door lock 200d of fig. 2. When a user approaches the intelligent door lock 200d, the intelligent door lock 200d controls its camera to automatically move the lens multiple times within the preset focusing range, and the user's face image acquired after each movement of the lens is taken as a first sample image. The intelligent door lock 200d calculates the definition of each first sample image to obtain the initial focusing position. The intelligent door lock 200d then controls its camera to move the lens multiple times, taking the initial focusing position as the starting point, and the user's face image acquired after each movement is taken as a second sample image, until the preset condition is met and multiple second sample images have been obtained. Finally, the position of the lens corresponding to the second sample image with the greatest definition is determined as the final focusing position. After the final focusing position is determined, the intelligent door lock 200d controls the camera to acquire the user's face image at the final focusing position and performs face recognition; after the face is recognized, the door lock opens automatically. In this application scene, the automatic focusing method can focus quickly and accurately and obtain a user face image of higher definition, which greatly improves the accuracy of the subsequent face recognition and thus the safety and reliability of the intelligent door lock.
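As a usage illustration tying the sketches above together (`capture_image` remains the hypothetical lens-move-and-grab callback, and all numeric values are made up, not taken from the patent):

```python
# Hypothetical end-to-end run of the two-stage search sketched above:
# coarse scan first, then fine search around the coarse result.
initial_pos = coarse_scan(capture_image, sharpness,
                          focus_min=0, focus_max=1000, step1=50)
final_pos = fine_search(capture_image, sharpness, initial_pos,
                        step2=10, decrement=2, max_moves=30)
```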
To perform the corresponding steps in the above embodiments and their possible implementations, an implementation of the auto-focusing apparatus 2100 is given below. Referring to fig. 10, fig. 10 is a block diagram illustrating an auto-focusing apparatus 2100 according to an embodiment of the present disclosure. It should be noted that the basic principle and technical effects of the autofocus device 2100 provided in this embodiment are the same as those of the foregoing embodiments; for brevity, points not mentioned in this embodiment may refer to the corresponding description above.
The autofocus device 2100 includes an acquisition module 2101, a calculation module 2102, and a focusing module 2103.
The obtaining module 2101 is configured to move the lens multiple times within a preset focusing range according to a first preset step length, and obtain a first sample image acquired after moving the lens each time.
The calculating module 2102 is configured to calculate the sharpness of each first sample image, and obtain an initial focusing position where a lens corresponding to the first sample image with the highest sharpness is located.
As a specific embodiment, the first sample image includes a plurality of pixel points, and the calculating module 2102 is specifically configured to: calculating the gradient value of each pixel point in the first sample image; calculating the gradient change rate according to the gradient values of all pixel points in the first sample image; and calculating the definition of the first sample image according to the maximum value of the gradient values of all the pixel points and the gradient change rate.
As a specific embodiment, when the calculating module 2102 calculates the gradient change rate according to the gradient values of all the pixel points in the first sample image, the calculating module is specifically configured to: calculating the average value of the gradient values of all pixel points in the first sample image; acquiring the maximum value and the minimum value of gradient values of all pixel points in a first sample image; and calculating the gradient change rate according to the maximum value, the minimum value and the average value.
As a specific embodiment, when calculating the sharpness of the first sample image according to the maximum of the gradient values of all the pixel points and the gradient change rate, the calculating module 2102 is specifically configured to: calculate the definition of each first sample image using the formula IQA = G^α · V_G^(1−α), based on the maximum of the gradient values of all the pixel points in each first sample image and the gradient change rate, where IQA denotes the definition of the first sample image, G denotes the maximum gradient value of all pixel points in each first sample image, V_G denotes the gradient change rate, and α denotes a weighting factor.
A focusing module 2103, configured to take the initial focusing position as a starting point, and move the lens multiple times within a preset range of the starting point by using a second preset step length, to obtain a second sample image after moving the lens each time, and stop moving the lens until a preset condition is met, to finally obtain multiple second sample images, where the second preset step length is smaller than the first preset step length; and the position of the lens corresponding to the second sample image with the maximum definition is determined as the final focusing position.
As a specific embodiment, the focusing module 2103 is specifically configured to: taking a second preset step length as a target step length, taking a preset moving direction as a target direction, and taking the definition of a second sample image acquired when the lens is at a starting point as a target definition, wherein the second preset step length is smaller than the first preset step length; moving the lens once within a preset range according to the target step length and the target direction, and taking a second sample image acquired after moving as a current image; determining a new target direction, a new target step length and a new target definition of the next movement of the lens according to the target definition and the current definition of the current image; and continuously moving the lens according to the new target direction, the new target step length and the new target definition until preset conditions are met, and finally obtaining a plurality of second sample images.
As a specific implementation manner, when the focusing module 2103 determines a new target direction, a new target step length, and a new target definition of the next lens movement according to the target definition and the current definition of the current image, the focusing module is specifically configured to: if the current definition is greater than or equal to the target definition, taking the target direction as a new target direction, taking the target step length as a new target step length, and taking the current definition as the new target definition; and if the current definition is smaller than the target definition, taking the direction opposite to the target direction as a new target direction, reducing the target step length by a preset value, taking the reduced target step length as a new target step length, and taking the target definition as new target definition.
The autofocus device 2100 according to the embodiment of the present application can implement each process implemented by the autofocus method in the method embodiments of fig. 5 and fig. 7 to fig. 9, and for avoiding repetition, details are not repeated here.
An embodiment of the present application provides an electronic device 1100, where the electronic device 1100 may be the home device 200 or the terminal device 400 with a camera or having a camera function or a photographing function in fig. 1, or may also be the intelligent door lock 200d or the terminal device 400 in fig. 2, and the electronic device 1100 includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the auto-focusing method provided in the foregoing method embodiment.
The memory may be used to store software programs and modules, and the processor may execute various functional applications and data processing by operating the software programs and modules stored in the memory. The memory can mainly comprise a program storage area and a data storage area, wherein the program storage area can store an operating system, application programs needed by functions and the like; the storage data area may store data created according to use of the apparatus, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory may also include a memory controller to provide the processor access to the memory.
Fig. 11 is a block diagram of a hardware structure of an electronic device implementing the auto-focusing method according to an embodiment of the present application. As shown in fig. 11, the electronic device 1100 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 1110 (the processor 1110 may include, but is not limited to, a microprocessor MCU, a programmable logic device FPGA, or other processing devices), a memory 1130 for storing data, and one or more storage media 1120 (e.g., one or more mass storage devices) for storing applications 1123 or data 1122. The memory 1130 and the storage medium 1120 may be transient storage or persistent storage. The program stored in the storage medium 1120 may include one or more modules, each of which may include a series of instruction operations for the device. Further, the processor 1110 may be configured to communicate with the storage medium 1120 and execute the series of instruction operations in the storage medium 1120 on the electronic device 1100. The electronic device 1100 may also include one or more power supplies 1160, one or more wired or wireless network interfaces 1150, one or more input/output interfaces 1140, and/or one or more operating systems 1121, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on.
The input/output interface 1140 may be used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the electronic device 1100. In one example, the input/output interface 1140 includes a network adapter (NIC) that may be coupled to other network devices via a base station to communicate with the internet. In one example, the input/output interface 1140 may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
It will be understood by those of ordinary skill in the art that the structure shown in FIG. 11 is merely illustrative and does not limit the structure of the electronic device 1100. For example, the electronic device 1100 may include more or fewer components than those shown in FIG. 11, or may have a configuration different from that shown in FIG. 11.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the processes of the foregoing autofocus method embodiments are implemented with the same technical effects; to avoid repetition, the details are not described here again. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
To sum up, the embodiments of the present application provide an autofocus method and apparatus, an electronic device, and a storage medium. The method includes: moving the lens multiple times within a preset focusing range by a first preset step length, and acquiring a first sample image after each movement of the lens; calculating the definition of each first sample image to obtain the initial focusing position of the lens corresponding to the first sample image with the maximum definition; taking the initial focusing position as a starting point, moving the lens multiple times within a preset range of the starting point by a second preset step length smaller than the first preset step length, obtaining a second sample image after each movement of the lens, and stopping moving the lens once a preset condition is met, so that multiple second sample images are finally obtained; and determining the lens position corresponding to the second sample image with the maximum definition as the final focusing position. Compared with the prior art, the initial focusing position is determined quickly with the larger first preset step length, and fine focusing is then performed within the preset range of the initial focusing position with the smaller second preset step length, so focusing can be achieved both quickly and accurately.
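For readers who want the control flow in code, the following minimal Python sketch mirrors this two-stage search. It is an illustration under stated assumptions, not the disclosed implementation: move_lens_to and capture_image are hypothetical stand-ins for a real camera driver, sharpness may be any definition metric (one candidate is sketched after the claims below), and every numeric parameter (focus range, step lengths, search radius) is a placeholder.

# Illustrative sketch of the two-stage autofocus search; the driver calls
# (move_lens_to, capture_image) and all numeric parameters are assumptions.

def coarse_focus(move_lens_to, capture_image, sharpness,
                 focus_range=(0, 1000), coarse_step=100):
    """Stage 1: scan the whole preset focusing range with the larger first
    step length and return the lens position whose image has the highest
    definition (the initial focusing position)."""
    best_pos, best_score = focus_range[0], float("-inf")
    for pos in range(focus_range[0], focus_range[1] + 1, coarse_step):
        move_lens_to(pos)
        score = sharpness(capture_image())
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos

def fine_focus(move_lens_to, capture_image, sharpness, start_pos,
               fine_step=20, step_decrement=5, min_step=1, search_radius=100):
    """Stage 2: hill-climb around the initial focusing position with the
    smaller second step length; on a drop in definition, reverse direction
    and shrink the step; stop once the step can no longer be reduced."""
    pos, step, direction = start_pos, fine_step, +1
    move_lens_to(pos)
    target = sharpness(capture_image())           # target definition at the start
    best_pos, best_score = pos, target
    while step >= min_step:
        nxt = pos + direction * step
        if abs(nxt - start_pos) > search_radius:  # would leave the preset range
            direction, step = -direction, step - step_decrement
            continue
        move_lens_to(nxt)
        score = sharpness(capture_image())
        if score > best_score:
            best_pos, best_score = nxt, score
        if score >= target:                       # still improving: keep direction and step
            pos, target = nxt, score
        else:                                     # got worse: reverse and shrink the step
            direction, step = -direction, step - step_decrement
    return best_pos                               # final focusing position

def autofocus(move_lens_to, capture_image, sharpness):
    start = coarse_focus(move_lens_to, capture_image, sharpness)
    return fine_focus(move_lens_to, capture_image, sharpness, start)

In this sketch the "preset condition" that stops the fine search is taken to be the step length shrinking below a minimum; the embodiments leave that condition open, so a bound on the number of lens movements or on the definition gain would serve equally well.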
It should be noted that, in this document, the terms "comprise" and "comprising", and any other variation thereof, are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a/an ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes that element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments may be implemented by software together with a necessary general-purpose hardware platform, and certainly may also be implemented by hardware, although in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present application may be embodied in the form of a software product that is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods described in the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An auto-focusing method, the method comprising:
moving the lens for multiple times within a preset focusing range according to a first preset step length, and acquiring a first sample image acquired after moving the lens for each time;
calculating the definition of each first sample image to obtain the initial focusing position of the lens corresponding to the first sample image with the maximum definition;
taking the initial focusing position as a starting point, moving the lens multiple times within a preset range of the starting point by a second preset step length, obtaining a second sample image after each movement of the lens, and stopping moving the lens when a preset condition is met, so that multiple second sample images are finally obtained;
and determining the position of the lens corresponding to the second sample image with the maximum definition as a final focusing position.
2. The auto-focusing method of claim 1, wherein the first sample image includes a plurality of pixel points, and the step of calculating the sharpness of each of the first sample images includes:
calculating the gradient value of each pixel point in each first sample image;
calculating the gradient change rate of each first sample image according to the gradient values of all pixel points in each first sample image;
and calculating the definition of each first sample image according to the maximum value of the gradient values of all the pixel points in each first sample image and the gradient change rate of each first sample image.
3. The auto-focusing method of claim 2, wherein the step of calculating the gradient change rate of each of the first sample images according to the gradient values of all the pixels in each of the first sample images comprises:
calculating the average value of the gradient values of all pixel points in each first sample image;
obtaining the maximum value and the minimum value of the gradient values of all pixel points in each first sample image;
calculating a gradient change rate of each of the first sample images according to the maximum value, the minimum value and the average value of each of the first sample images.
4. The auto-focusing method of claim 2, wherein the step of calculating the sharpness of each of the first sample images according to the maximum value of the gradient values of all the pixels in each of the first sample images and the gradient change rate of each of the first sample images comprises:
according to the maximum value of the gradient values of all the pixel points in each first sample image and the gradient change rate, calculating the definition of each first sample image by using the formula IQA = G^α · V_G^(1−α), wherein IQA represents the definition of the first sample image, G represents the maximum value of the gradient values of all the pixel points in the first sample image, V_G represents the gradient change rate, and α represents a weighting factor.
5. The auto-focusing method of claim 1, wherein the step of taking the initial focusing position as a starting point, moving the lens a plurality of times within a preset range of the starting point by a second preset step length, obtaining a second sample image after each movement of the lens, and stopping moving the lens when a preset condition is met, so that a plurality of second sample images are finally obtained, comprises:
taking the second preset step length as a target step length, taking a preset moving direction as a target direction, and taking the definition of a second sample image acquired when the lens is positioned at the starting point as a target definition, wherein the second preset step length is smaller than the first preset step length;
moving the lens once within the preset range according to the target step length and the target direction, and taking a second sample image acquired after moving as a current image;
determining a new target direction, a new target step length and a new target definition of the next movement of the lens according to the target definition and the current definition of the current image;
and continuously moving the lens according to the new target direction, the new target step length and the new target definition until the preset condition is met, and finally obtaining a plurality of second sample images.
6. The auto-focusing method of claim 5, wherein the step of determining a new target direction, a new target step size and a new target sharpness for the next movement of the lens according to the target sharpness and the current sharpness of the current image comprises:
if the current definition is greater than or equal to the target definition, taking the target direction as the new target direction, taking the target step length as a new target step length, and taking the current definition as a new target definition;
and if the current definition is smaller than the target definition, taking the direction opposite to the target direction as a new target direction, reducing the target step length by a preset value, taking the reduced target step length as a new target step length, and taking the target definition as new target definition.
7. An auto-focusing apparatus, comprising:
the acquisition module is used for moving the lens for multiple times within a preset focusing range according to a first preset step length and acquiring a first sample image acquired after the lens is moved each time;
the calculating module is used for calculating the definition of each first sample image to obtain the initial focusing position of the lens corresponding to the first sample image with the maximum definition;
the focusing module is used for taking the initial focusing position as a starting point, moving the lens multiple times within a preset range of the starting point by a second preset step length, obtaining a second sample image after each movement of the lens, and stopping moving the lens when a preset condition is met, so that multiple second sample images are finally obtained;
and the focusing module is also used for determining the position of the lens corresponding to the second sample image with the maximum definition as a final focusing position.
8. The autofocus apparatus of claim 7, wherein the first sample image includes a plurality of pixel points, and wherein the computing module is specifically configured to:
calculating the gradient value of each pixel point in each first sample image;
calculating the gradient change rate of each first sample image according to the gradient values of all pixel points in each first sample image;
and calculating the definition of each first sample image according to the maximum value of the gradient values of all the pixel points in each first sample image and the gradient change rate of each first sample image.
9. An electronic device, characterized in that it comprises a lens, a processor, a memory and a computer program stored on said memory and executable on said processor, said computer program, when executed by said processor, implementing the steps of the auto-focusing method according to any one of claims 1 to 6.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the autofocus method as claimed in one of claims 1 to 6.
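As an illustrative companion to claims 2 to 4 above, the following minimal NumPy sketch computes the claimed definition metric. It is a sketch under assumptions, not a normative implementation: the finite-difference gradient and the choice of (max − min) / mean for the gradient change rate are assumed (claim 3 requires only that the rate be derived from the maximum, minimum, and average gradient values), and the weighting factor α is a free parameter.

import numpy as np

def gradient_magnitude(img):
    """Per-pixel gradient value (claim 2); plain finite differences are
    used here as one common choice of gradient operator."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy)

def gradient_change_rate(grad):
    """Gradient change rate from the maximum, minimum, and average gradient
    values (claim 3); (max - min) / mean is an assumed instance, since the
    claim does not fix a particular combination."""
    g_max, g_min, g_mean = grad.max(), grad.min(), grad.mean()
    return (g_max - g_min) / (g_mean + 1e-12)   # epsilon guards against division by zero

def definition(img, alpha=0.7):
    """IQA = G**alpha * V_G**(1 - alpha), the metric of claim 4, with G the
    maximum gradient value and V_G the gradient change rate; alpha = 0.7 is
    an arbitrary illustrative weighting factor."""
    grad = gradient_magnitude(img)
    return (grad.max() ** alpha) * (gradient_change_rate(grad) ** (1 - alpha))

A grayscale frame passed in as a 2-D array yields a single score, which is exactly what the sharpness callable in the search sketch earlier in the description expects.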
CN202110742860.2A 2021-07-01 2021-07-01 Automatic focusing method and device, electronic equipment and storage medium Pending CN115567778A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110742860.2A CN115567778A (en) 2021-07-01 2021-07-01 Automatic focusing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115567778A true CN115567778A (en) 2023-01-03

Family

ID=84737793

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110742860.2A Pending CN115567778A (en) 2021-07-01 2021-07-01 Automatic focusing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115567778A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116320748A (en) * 2023-05-22 2023-06-23 深圳明锐理想科技有限公司 Automatic focusing method and device, electronic equipment and automatic optical detection equipment
CN117782998A (en) * 2024-02-27 2024-03-29 宁德时代新能源科技股份有限公司 Battery detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination