CN111641780A - Method, apparatus, device and medium for determining target state of camera lens


Info

Publication number
CN111641780A
CN111641780A (Application No. CN202010423384.3A)
Authority
CN
China
Prior art keywords
target state
image data
current image
determining
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010423384.3A
Other languages
Chinese (zh)
Inventor
Ma Zhuhui
Yang Jiping
Yang Xiaoyu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunshan Myzy Fixture Technology Co Ltd
Original Assignee
Kunshan Myzy Fixture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunshan Myzy Fixture Technology Co Ltd filed Critical Kunshan Myzy Fixture Technology Co Ltd
Priority to CN202010423384.3A priority Critical patent/CN111641780A/en
Publication of CN111641780A publication Critical patent/CN111641780A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The embodiments of the invention disclose a method, an apparatus, a device, and a medium for determining a camera lens target state. The method comprises the following steps: if a target state discrimination trigger event is detected, acquiring the current image data of the camera lens; inputting the current image data into a pre-trained target state determination model, and determining the target state of the current image according to the output result; wherein the target states include in-focus and out-of-focus. The embodiments of the invention can judge the target state of the current image during camera shooting without manual intervention or complex extraction of image features, thereby effectively improving the efficiency of judging the target state of the current image and rapidly and accurately determining the sharpness of the current image.

Description

Method, apparatus, device and medium for determining target state of camera lens
Technical Field
The present invention relates to image processing technology, and in particular, to a method, an apparatus, a device, and a medium for determining a target state of a camera lens.
Background
Focus state evaluation is widely used in imaging systems such as cameras, video cameras, microscopes, and endoscopes. Focusing is an important means of improving camera measurement accuracy, especially in measurements with a small depth of field and high accuracy requirements. Cameras are widely used in industrial and intelligent devices, and a camera in the in-focus state is an important means of acquiring a clear image; however, if the lens is at an out-of-focus position at the moment an image is acquired, the acquired picture has no practical value, and the blur caused by the loss of focus defeats the purpose of acquiring the image. Currently, an image sharpness function is usually used to evaluate a local area of the currently acquired image in order to determine whether the lens is in the in-focus state.
This scheme has the following defect: a local area is selected to determine whether the image is in focus, but the selection process is unregulated; if the selected local area is not representative, the judgment result contains a large error, making it difficult to evaluate image sharpness effectively.
Disclosure of Invention
The embodiments of the present application provide a method, an apparatus, a device, and a medium for determining a target state of a camera lens, which can determine the state of the camera lens in real time during shooting according to a target state determination model, thereby achieving the effect of controlling the focusing state of the camera lens.
In a first aspect, an embodiment of the present invention provides a method for determining a camera lens target state, including:
if a target state discrimination trigger event is detected, acquiring current image data of the camera lens;
inputting the current image data into a pre-trained target state determination model, and determining the target state of the current image according to an output result; wherein the target states include in-focus and out-of-focus.
Optionally, the training process of the target state determination model includes:
acquiring sample image data, and dividing the sample image data into a training set and a verification set;
inputting the sample image data of the training set into an initial dual residual network model, and training the initial dual residual network model;
and after the training is finished, verifying with the sample image data of the verification set; if the model meets the verification standard, obtaining the trained target state determination model.
Optionally, the initial dual residual network model includes ten convolutional layers;
a ReLU activation function is connected after each of the convolutional layers for nonlinear processing.
Optionally, a Dropout function is further connected after each convolutional layer and is used to randomly discard neurons according to a preset probability.
Optionally, the inputting the current image data into a pre-trained target state determination model, and determining the target state of the current image according to the output result includes:
inputting the current image data into a target state determination model to obtain a first output result and a second output result of the target state determination model; the first output result is the focusing probability of the current image, and the second output result is the defocusing probability of the current image;
and determining the target state of the current image according to the focusing probability and the defocusing probability.
In a second aspect, an embodiment of the present invention provides an apparatus for determining a camera lens target state, including:
the image data acquisition module is used for acquiring current image data of the camera lens if a target state discrimination trigger event is detected;
the target state determining module is used for inputting the current image data into a pre-trained target state determining model and determining the target state of the current image according to an output result; wherein the target states include in-focus and out-of-focus.
Optionally, the apparatus further comprises a model training module; the model training module is specifically configured to:
acquiring sample image data, and dividing the sample image data into a training set and a verification set;
inputting the sample image data of the training set into an initial dual residual network model, and training the initial dual residual network model;
and after the training is finished, verifying with the sample image data of the verification set; if the model meets the verification standard, obtaining the trained target state determination model.
Optionally, the initial dual residual network model includes ten convolutional layers;
a ReLU activation function is connected after each of the convolutional layers for nonlinear processing.
Optionally, a Dropout function is further connected after each convolutional layer and is used to randomly discard neurons according to a preset probability.
Optionally, the target state determining module is specifically configured to:
inputting the current image data into a target state determination model to obtain a first output result and a second output result of the target state determination model; the first output result is the focusing probability of the current image, and the second output result is the defocusing probability of the current image;
and determining the target state of the current image according to the focusing probability and the defocusing probability.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for determining a target state of a camera lens according to any one of the embodiments of the present invention.
In a fourth aspect, the present invention further provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method for determining a target state of a camera lens according to any one of the embodiments of the present invention.
In the embodiments of the invention, the current image data of the camera lens is acquired when a target state discrimination trigger event is detected; the current image data is input into a pre-trained target state determination model, and the target state of the current image is determined according to the output result, where the target states include in-focus and out-of-focus. The embodiments of the invention can judge the target state of the current image during camera shooting without manual intervention or complex extraction of image features, thereby effectively improving the efficiency of judging the target state of the current image and rapidly and accurately determining the sharpness of the current image.
Drawings
Fig. 1 is a flowchart illustrating a method for determining a target status of a camera lens according to a first embodiment of the invention;
fig. 2 is a flowchart illustrating a method for determining a target state of a camera lens according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a device for determining a target state of a camera lens according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device in a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart illustrating a method for determining a target state of a camera lens according to a first embodiment of the invention. The embodiment can be applied to the situation of judging the target state of the camera lens in real time. The method of this embodiment may be executed by a device for determining a target state of a camera lens, where the device may be implemented in a hardware/software manner, and may be configured in an electronic device, so as to implement the method for determining a target state of a camera lens according to any embodiment of the present application. As shown in fig. 1, the method specifically includes the following steps:
and S110, if the target state judging trigger event is detected, acquiring the current image data of the camera lens.
In this embodiment, the target state discrimination trigger event is a change in the image data seen by the camera lens, indicating that the image data acquired by the camera lens at this moment differs from the image data of the previous moment. For example, the trigger event may include the camera lens deflecting by a certain angle, or a change in the distance between the camera lens and the entity to be imaged, that is, a decrease or an increase in that distance.
The current image data of the camera lens is the image representation of the entity currently being shot and may be the complete image currently captured. In this embodiment the acquired current image is not processed; it represents only the raw image data currently collected, so subsequent operations are performed directly on the raw image, reducing the complexity of image processing. Meanwhile, using the target state discrimination trigger event as the condition for acquiring the next group of images allows the state of the acquired current image data to be judged rapidly, in real time, during camera shooting.
S120, inputting the current image data into a pre-trained target state determination model, and determining the target state of the current image according to the output result; wherein the target states include in-focus and out-of-focus.
In this embodiment, the target state is the positional state of the camera lens relative to the image to be acquired and may include an in-focus state and an out-of-focus state. The in-focus state indicates that the distance between the camera lens and the image to be acquired matches the focal distance determined by the camera; the out-of-focus state indicates that it does not. The in-focus and out-of-focus states allow the sharpness of the current image to be judged clearly and effectively, so that the photographer can decide whether to capture the current image; this reduces erroneous acquisitions during shooting and effectively improves the camera's acquisition efficiency.
The target state determination model is a neural network model trained by deep learning. It processes the entire acquired image to judge the target state of the current image, which solves the low judgment efficiency caused by judging the lens state only from a local area in conventional methods. Throughout the judgment process, the acquired raw image is input directly into the target state determination model without manual intervention or feature extraction, and the target state of the current image is determined; the sharpness of the current image can therefore be determined accurately and effectively from its target state.
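To make step S120 concrete, the following is a minimal Python/Keras sketch of the inference step, not code from the patent: the model file name, the helper name, and the ordering of the two softmax outputs are illustrative assumptions, while the 224 × 224 input size and the labels (in-focus = 1, out-of-focus = 0) follow the training description in the second embodiment below.

    import numpy as np
    import cv2
    from tensorflow.keras.models import load_model

    # Hypothetical file name; the embodiment only states that the trained
    # checkpoint is persisted to a local disk.
    model = load_model("focus_model.h5")

    def classify_frame(frame_bgr):
        """Return (focusing probability, defocusing probability) for one raw frame."""
        img = cv2.resize(frame_bgr, (224, 224)).astype(np.float32) / 255.0
        probs = model.predict(img[np.newaxis, ...])[0]  # softmax over the two states
        # Assumed output ordering: index 1 = in-focus (label 1), index 0 = out-of-focus.
        return probs[1], probs[0]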
In the embodiment of the invention, the current image data of the camera lens is acquired when a target state discrimination trigger event is detected; the current image data is input into a pre-trained target state determination model, and the target state of the current image is determined according to the output result, where the target states include in-focus and out-of-focus. The embodiment of the invention can thus judge the target state of the current image during camera shooting without manual intervention or complex extraction of image features, thereby effectively improving the efficiency of judging the target state of the current image and rapidly and accurately determining the sharpness of the current image.
Example two
Fig. 2 is a flowchart illustrating a method for determining a target state of a camera lens according to a second embodiment of the invention. This embodiment is further expanded and optimized on the basis of the first embodiment and can be combined with any of the optional technical solutions above. As shown in fig. 2, the method includes:
and S210, if the target state judgment trigger event is detected, acquiring the current image data of the camera lens.
S220, inputting the current image data into the target state determination model to obtain a first output result and a second output result of the target state determination model; the first output result is the focusing probability of the current image, and the second output result is the defocusing probability of the current image.
In this embodiment, a set of image data is input into the target state determination model, and the model outputs two results: the focusing probability of the current image and the defocusing probability of the current image. In essence, the target state determination model determines, from the input image data, the probabilities that the image belongs to the in-focus state and to the out-of-focus state, and these probability values are used to judge the target state of the current image.
S230, determining the target state of the current image according to the focusing probability and the defocusing probability.
In this embodiment, the target state of the current image may be determined from the magnitudes of the focusing probability and the defocusing probability, where the target state includes in-focus and out-of-focus. Specifically, the focusing probability and the defocusing probability may be compared, and the larger value judged against a preset threshold; if that maximum value is greater than the preset threshold, the state corresponding to it is taken as the target state of the current image. For example, if the focusing probability is greater than the defocusing probability, the focusing probability is compared with the preset threshold, and if it exceeds the threshold, the target state of the current image is determined to be in-focus. The preset threshold may be adjusted according to the actual requirements of the photographer; for example, in this embodiment it may be set to 0.5.
Alternatively, in this embodiment, the target state of the current image may be determined by directly comparing the focusing probability and the defocusing probability; for example, if the focusing probability is greater than the defocusing probability, the target state of the current image is determined to be in-focus, as shown in the sketch below.
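Expressed as code, the two decision strategies above might look like the following Python sketch; the function name and the fallback when neither probability clears the threshold are assumptions (the embodiment does not specify that case), while the preset threshold of 0.5 comes from this embodiment.

    def decide_target_state(p_focus, p_defocus, threshold=0.5):
        """Take the state with the larger probability as the target state of the
        current image, provided it exceeds the preset threshold."""
        if max(p_focus, p_defocus) <= threshold:
            return "undetermined"  # assumption: behavior for this case is not specified
        return "in focus" if p_focus >= p_defocus else "out of focus"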
In this embodiment, optionally, the training process of the target state determination model includes:
acquiring sample image data, and dividing the sample image data into a training set and a verification set;
inputting the sample image data of the training set into an initial dual residual network model, and training the initial dual residual network model;
and after the training is finished, verifying with the sample image data of the verification set; if the model meets the verification standard, obtaining the trained target state determination model.
In the present embodiment, the sample image data is historical image data acquired by a camera and includes in-focus images and out-of-focus images. Dividing the sample image data into a training set and a verification set allows the sample data to be used effectively and gives the trained target state determination model strong adaptability.
Specifically, the target state determination model may be trained by the following steps:
firstly, data acquisition and construction.
a. Divide the sample image data into a training set and a verification set. The training set contains 700 images and is used for data training of the target state determination model; the verification set contains 350 images and is used to verify the accuracy of the trained target state determination model. The training set and verification set may be randomly selected from the DIV2K dataset. The state of every image in the training and verification sets must be calibrated, that is, a class label is established; in-focus may be set to 1 and out-of-focus to 0.
b. Apply Gaussian blur to the training set and the verification set to simulate pictures in different defocus states, with a random convolution kernel size for each blur; for example, the random kernel size may be one of (3 × 3, 5 × 5, 7 × 7, 9 × 9, 11 × 11, 13 × 13, 15 × 15, 17 × 17, 19 × 19, 21 × 21, 23 × 23, 25 × 25, 27 × 27, 29 × 29, 31 × 31, 33 × 33, 35 × 35, 37 × 37, 39 × 39). A code sketch of this step follows.
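The sketch below illustrates the blur step with OpenCV in Python; it is an illustration under the assumption that cv2.GaussianBlur with sigma derived from the kernel size is an acceptable stand-in for the Gaussian blur described, and the odd kernel sizes from 3 × 3 to 39 × 39 follow the list above.

    import random
    import cv2

    KERNEL_SIZES = list(range(3, 41, 2))  # odd sizes: 3x3, 5x5, ..., 39x39

    def simulate_defocus(image):
        """Apply Gaussian blur with a randomly chosen kernel size to simulate
        a picture in a defocus state."""
        k = random.choice(KERNEL_SIZES)
        return cv2.GaussianBlur(image, (k, k), 0)  # sigma 0: derived from kernel size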
Secondly, training a target state determination model.
a. A single convolutional neural network layer (i.e., an initial dual residual network model with only a single convolutional layer) can be expressed simply as y = w · x + b, where w is the weight parameter of the convolutional layer, b is the bias parameter of the convolutional layer, x is the input data matrix (e.g., picture data), and y is the resulting output of the convolutional layer.
b. Train the initial dual residual network model with the training set to obtain the weight parameter w and the bias parameter b of each convolutional layer; then verify the trained model with the verification set, and if it meets the verification standard, the trained target state determination model is obtained. The verification standard in this embodiment may be determined from the verification results on the verification set; that is, if the success rate on the verification set exceeds a specified threshold, the target state determination model is considered successfully trained.
In this embodiment, the target state determination model is trained on 224 × 224 images from the training and verification sets, with a batch_size of 32 pictures input to the network model at a time, so that a relatively large learning rate (lr = 1e-3) can shorten training time while preserving training accuracy; the result of training is a checkpoint, that is, the data set of the network model's weights (w) and biases (b). The network model may employ an RMSProp (Root Mean Square Prop) optimizer, which applies an exponentially weighted average of the squared gradients of the weight w and the bias b; this reduces the excessive oscillation of the loss function during updates and accelerates convergence. During training, the weight and bias parameters are updated in the direction that decreases the network model's loss function; after 250 epochs of training, the model checkpoint (ModelCheckPoint) is persisted to a local disk and used to distinguish pictures in the in-focus and out-of-focus states. The loss function loss is expressed as follows:
loss(x) = -\sum_{i=1}^{n} C_i \log f(x)_i
where x is an input sample, n is the total number of categories to be classified (in this embodiment, n = 2: in-focus and out-of-focus), C_i is the true label of the i-th category, and f(x)_i is the corresponding output value of the network model.
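The training configuration above maps naturally onto Keras. The sketch below is an illustration under stated assumptions, not the patent's code: build_model() is a placeholder for the dual residual network (a sketch of it appears after the architecture discussion below), and the checkpoint file name and dataset variables are hypothetical, while the optimizer, learning rate, batch size, epoch count, loss, and ModelCheckpoint usage follow the text.

    from tensorflow.keras.optimizers import RMSprop
    from tensorflow.keras.callbacks import ModelCheckpoint

    model = build_model()  # placeholder for the dual residual network sketched below
    model.compile(optimizer=RMSprop(learning_rate=1e-3),   # lr = 1e-3
                  loss="categorical_crossentropy",         # loss(x) = -sum_i C_i log f(x)_i
                  metrics=["accuracy"])

    # Persist the model checkpoint (the weights w and biases b) to a local disk.
    checkpoint = ModelCheckpoint("checkpoint.h5", monitor="val_loss",
                                 save_best_only=True)
    model.fit(train_images, train_labels,                  # 700 training images, 224 x 224
              validation_data=(val_images, val_labels),    # 350 verification images
              batch_size=32, epochs=250, callbacks=[checkpoint])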
In this embodiment, optionally, the initial dual residual network model includes ten convolutional layers;
a ReLU activation function is connected after each convolutional layer for nonlinear processing.
In this embodiment, the ReLU activation function ensures that the weight w and the bias b retain a nonlinear feature expression; a purely linear feature expression can hardly represent an image completely, so the nonlinearity represents the data of the complete image more vividly and concretely.
In this embodiment, optionally, a Dropout function is further connected after each convolutional layer and is used to randomly discard neurons according to a preset probability.
In the embodiment, the Dropout function randomly discards a portion of the neurons according to the preset probability, which makes the network easier to train and improves its generalization ability. The target state determination model may include three average pooling layers (pool), used mainly for feature dimensionality reduction and for compressing the amount of data and the number of parameters, thereby reducing overfitting and improving the fault tolerance of the model. It may further include two fully connected layers (fc1, fc2) and skip connections spanning pairs of convolutional layers: an outer skip connection and an inner skip connection, where the inner skip connection forms a residual unit. The purpose of a residual unit is to extract blur feature information as the number of feature channels and the depth increase; by skipping connections, it prevents gradients from vanishing during training and thus preserves the integrity of the automatic feature extraction performed by the convolutional layers. Adopting dual residual units when training the target state determination model therefore effectively solves the gradient vanishing problem during network training and improves the network's ability to discriminate between in-focus and out-of-focus images.
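One way to read the dual residual layout described above is an inner residual unit (a skip connection over two convolutional layers) wrapped in an outer skip connection. The Python/Keras sketch below is a hedged interpretation rather than the patented architecture: the text fixes ten convolutional layers, a ReLU and a Dropout after each, three average pooling layers, two fully connected layers, and a two-way softmax, but the channel widths, Dropout rate, fc1 width, and exact block arrangement here are illustrative assumptions.

    from tensorflow.keras import layers, models

    def conv_unit(x, filters):
        """Convolution -> ReLU -> Dropout, as specified for every convolutional layer."""
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.ReLU()(x)            # nonlinear processing
        return layers.Dropout(0.25)(x)  # preset discard probability (0.25 is assumed)

    def dual_residual_block(x, filters):
        outer = x                       # outer skip connection
        inner = conv_unit(x, filters)
        x = layers.add([inner, conv_unit(inner, filters)])  # inner skip: a residual unit
        x = conv_unit(x, filters)
        return layers.add([outer, x])   # outer skip wraps the whole block

    def build_model(input_shape=(224, 224, 3)):
        inp = layers.Input(shape=input_shape)
        x = conv_unit(inp, 32)          # stem; 1 + 3 blocks x 3 = ten conv layers
        for _ in range(3):
            x = dual_residual_block(x, 32)
            x = layers.AveragePooling2D()(x)  # three average pooling layers (pool)
        x = layers.Flatten()(x)
        x = layers.Dense(128, activation="relu")(x)     # fc1 (width assumed)
        out = layers.Dense(2, activation="softmax")(x)  # fc2: two-way softmax
        return models.Model(inp, out)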
According to this embodiment of the invention, the target state determination model outputs the focusing probability and the defocusing probability of the current image, and the target state of the current image is determined from the two probabilities. The probability values output by the target state determination model thus allow the target state of the current image to be judged quickly and effectively.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a device for determining a target state of a camera lens according to a third embodiment of the present invention, applicable to judging the target state of a camera lens in real time. The device is configured in an electronic device and can implement the method for determining the target state of a camera lens of any embodiment of the present application. The device specifically comprises the following modules:
an image data obtaining module 310, configured to obtain current image data of the camera lens if a target state discrimination trigger event is detected;
a target state determining module 320, configured to input the current image data to a pre-trained target state determining model, and determine a target state of the current image according to an output result; wherein the target states include in-focus and out-of-focus.
Optionally, the device further comprises a model training module; the model training module is specifically configured to:
acquiring sample image data, and dividing the sample image data into a training set and a verification set;
inputting the sample image data of the training set into an initial dual residual network model, and training the initial dual residual network model;
and after the training is finished, verifying with the sample image data of the verification set; if the model meets the verification standard, obtaining the trained target state determination model.
Optionally, the initial dual residual network model includes ten convolutional layers;
a ReLU activation function is connected after each of the convolutional layers for nonlinear processing.
Optionally, a Dropout function is further connected after each convolutional layer and is used to randomly discard neurons according to a preset probability.
Optionally, the target state determining module is specifically configured to:
inputting the current image data into a target state determination model to obtain a first output result and a second output result of the target state determination model; the first output result is the focusing probability of the current image, and the second output result is the defocusing probability of the current image;
and determining the target state of the current image according to the focusing probability and the defocusing probability.
The device for determining the target state of a camera lens provided by the embodiment of the invention can judge the target state of the current image during camera shooting without manual intervention or complex extraction of image features, thereby effectively improving the efficiency of judging the target state of the current image and rapidly and accurately determining the sharpness of the current image.
The device for determining the target state of the camera lens provided by the embodiment of the invention can execute the method for determining the target state of the camera lens provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention, as shown in fig. 4, the electronic device includes a processor 410, a memory 420, an input device 430, and an output device 440; the number of the processors 410 in the electronic device may be one or more, and one processor 410 is taken as an example in fig. 4; the processor 410, the memory 420, the input device 430 and the output device 440 in the electronic apparatus may be connected by a bus or other means, and the bus connection is exemplified in fig. 4.
The memory 420 serves as a computer-readable storage medium, and may be used to store software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the method for determining the object state of the camera lens in the embodiment of the present invention. The processor 410 executes various functional applications and data processing of the electronic device by executing software programs, instructions and modules stored in the memory 420, that is, implements the method for determining the target state of the camera lens provided by the embodiment of the present invention.
The memory 420 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 420 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 420 may further include memory located remotely from processor 410, which may be connected to an electronic device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 430 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus, and may include a keyboard, a mouse, and the like. The output device 440 may include a display device such as a display screen.
EXAMPLE five
The present embodiment provides a storage medium containing computer-executable instructions which, when executed by a computer processor, implement the method for determining a target state of a camera lens provided by the embodiments of the present invention.
Of course, the storage medium containing the computer-executable instructions provided by the embodiments of the present invention is not limited to the method operations described above, and may also perform related operations in the method for determining the target state of the camera lens provided by any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the above apparatus embodiment, the included units and modules are divided merely according to functional logic, and the division is not limited thereto as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A method for determining a target state of a camera lens, the method comprising:
if a target state discrimination trigger event is detected, acquiring current image data of the camera lens;
inputting the current image data into a pre-trained target state determination model, and determining the target state of the current image according to an output result; wherein the target states include in-focus and out-of-focus.
2. The method of claim 1, wherein the training process of the target state determination model comprises:
acquiring sample image data, and dividing the sample image data into a training set and a verification set;
inputting the sample image data of the training set into an initial dual residual network model, and training the initial dual residual network model;
and after the training is finished, verifying with the sample image data of the verification set; if the model meets the verification standard, obtaining the trained target state determination model.
3. The method of claim 2, wherein the initial dual residual network model comprises ten convolutional layers;
a ReLU activation function is connected after each of the convolutional layers for nonlinear processing.
4. The method of claim 3, wherein a Dropout function is connected after each convolutional layer for randomly discarding neurons according to a preset probability.
5. The method of claim 1, wherein inputting the current image data into a pre-trained target state determination model, and determining the target state of the current image according to the output result comprises:
inputting the current image data into a target state determination model to obtain a first output result and a second output result of the target state determination model; the first output result is the focusing probability of the current image, and the second output result is the defocusing probability of the current image;
and determining the target state of the current image according to the focusing probability and the defocusing probability.
6. An apparatus for determining a target state of a camera lens, the apparatus comprising:
the image data acquisition module is used for acquiring current image data of the camera lens if a target state discrimination trigger event is detected;
the target state determining module is used for inputting the current image data into a pre-trained target state determining model and determining the target state of the current image according to an output result; wherein the target states include in-focus and out-of-focus.
7. The apparatus of claim 6, further comprising a model training module; the model training module is specifically configured to:
acquiring sample image data, and dividing the sample image data into a training set and a verification set;
inputting the sample image data of the training set into an initial dual residual network model, and training the initial dual residual network model;
and after the training is finished, verifying with the sample image data of the verification set; if the model meets the verification standard, obtaining the trained target state determination model.
8. The apparatus of claim 6, wherein the target state determination module is specifically configured to:
inputting the current image data into a target state determination model to obtain a first output result and a second output result of the target state determination model; the first output result is the focusing probability of the current image, and the second output result is the defocusing probability of the current image;
and determining the target state of the current image according to the focusing probability and the defocusing probability.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement a method of determining a camera lens objective state as claimed in any one of claims 1 to 5.
10. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing a method for determining a lens objective state of a camera according to any one of claims 1 to 5.
CN202010423384.3A 2020-05-19 2020-05-19 Method, apparatus, device and medium for determining target state of camera lens Pending CN111641780A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010423384.3A CN111641780A (en) 2020-05-19 2020-05-19 Method, apparatus, device and medium for determining target state of camera lens


Publications (1)

Publication Number Publication Date
CN111641780A true CN111641780A (en) 2020-09-08

Family

ID=72331159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010423384.3A Pending CN111641780A (en) 2020-05-19 2020-05-19 Method, apparatus, device and medium for determining target state of camera lens

Country Status (1)

Country Link
CN (1) CN111641780A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007074033A (en) * 2005-09-02 2007-03-22 Canon Inc Imaging apparatus and control method thereof, computer program, and storage medium
CN103152520A (en) * 2009-08-20 2013-06-12 佳能株式会社 Image capture apparatus and method
CN110533683A (en) * 2019-08-30 2019-12-03 东南大学 A kind of image group analysis method merging traditional characteristic and depth characteristic
CN110611761A (en) * 2018-06-14 2019-12-24 奥林巴斯株式会社 Image pickup apparatus, focus adjustment method, and storage medium
CN111083365A (en) * 2019-12-24 2020-04-28 陈根生 Method and device for rapidly detecting optimal focal plane position



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Chen Ruilin

Inventor after: Zhang Dieming

Inventor after: Tao Zhenqiang

Inventor after: Li Zhiming

Inventor after: Zhou Yumin

Inventor after: Shi Xuping

Inventor before: Ma Zhuhui

Inventor before: Yang Jiping

Inventor before: Yang Xiaoyu

RJ01 Rejection of invention patent application after publication

Application publication date: 20200908