CN110992327A - Lens contamination state detection method and device, terminal and storage medium

Lens contamination state detection method and device, terminal and storage medium

Info

Publication number
CN110992327A
Authority
CN
China
Prior art keywords
lens
camera
original image
terminal
target
Prior art date
Legal status
Pending
Application number
CN201911185425.3A
Other languages
Chinese (zh)
Inventor
任家锐
章佳杰
李马丁
Current Assignee
Reach Best Technology Co Ltd
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Reach Best Technology Co Ltd
Application filed by Reach Best Technology Co Ltd filed Critical Reach Best Technology Co Ltd
Priority to CN201911185425.3A
Publication of CN110992327A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/40 Image enhancement or restoration by the use of histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure relates to a method and device for detecting a lens contamination state, a terminal, and a storage medium, and belongs to the field of computer technology. In the method, an original image collected by a camera is preprocessed to obtain a target image, a difference feature between the original image and the target image is obtained, and the original image and the difference feature are input into a classification model, which classifies them to obtain a prediction probability. When the prediction probability is greater than a probability threshold, the camera is determined to be in a lens contamination state. If the camera is in a lens contamination state, the user can be prompted to wipe the camera lens in time. This eliminates the adverse effect of a contaminated lens on the terminal's shooting results, improves the quality of the images or videos shot by the terminal, and optimizes the user's shooting experience.

Description

Lens contamination state detection method and device, terminal and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for detecting a lens contamination state, a terminal, and a storage medium.
Background
With the development of computer technology, users can shoot images or videos through a terminal. The camera of a terminal is usually exposed and, unlike dedicated photographic equipment, is not fitted with a lens cover, so fingerprints, dust, and dirt easily accumulate on it in daily use. A contaminated lens leaves a white haze in the shot images or videos, adversely affects the terminal's shooting results, degrades the quality of the images or videos shot by the terminal, and worsens the user's shooting experience.
Disclosure of Invention
The present disclosure provides a method, a device, a terminal, and a storage medium for detecting a lens contamination state, so as to at least solve the problems in the related art that a contaminated lens easily degrades the terminal's shooting results, lowers the quality of images or videos shot by the terminal, and worsens the user's shooting experience. The technical solution of the present disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a method for detecting a lens contamination state, including:
preprocessing an original image currently collected by a camera to obtain a target image;
acquiring a difference feature between the original image and the target image;
inputting the original image and the difference feature into a classification model, and classifying them through the classification model to obtain a prediction probability of whether the camera is in a lens contamination state;
and when the prediction probability is greater than a probability threshold, determining that the camera is in a lens contamination state.
In a possible embodiment, the preprocessing of the original image currently collected by the camera to obtain the target image includes:
performing at least one of edge extraction, blurring, sharpening, defogging, or histogram equalization on the original image to obtain the target image.
In one possible embodiment, the acquiring of the difference feature between the original image and the target image includes:
performing difference processing on the original image and the target image to obtain the difference feature.
In one possible embodiment, the acquiring of the difference feature between the original image and the target image includes:
determining the difference between the number of edge pixels of the original image and the number of edge pixels of the target image as the difference feature.
In a possible implementation, after determining that the camera is in a lens contamination state, the method further includes:
performing the lens contamination state detection operation at a preset frequency, and if the number of consecutive determinations that the camera is in a lens contamination state reaches a target number, displaying prompt information in a shooting interface, wherein the prompt information is used to prompt the user to wipe the lens of the camera;
and when the display duration of the prompt information reaches a first target duration, stopping displaying the prompt information in the shooting interface.
In one possible embodiment, after the displaying of the prompt information in the shooting interface, the method further includes:
re-executing the lens contamination state detection operation at a target time after the time at which the prompt information was displayed;
if the camera is determined to be in a lens contamination state, executing the operation of displaying the prompt information in the shooting interface;
otherwise, not displaying the prompt information within a second target duration after the target time.
In a possible implementation, after the operation of displaying the prompt information in the shooting interface is executed upon determining that the camera is in a lens contamination state, the method further includes:
when the accumulated display count of the prompt information reaches a count threshold, not displaying the prompt information within a third target duration after the current time, and resetting the accumulated display count to 0.
In a possible embodiment, after the re-execution of the lens contamination state detection operation, the method further includes:
sending detection data of the detection operation to a server, wherein the detection data is used to trigger the server to adjust the parameters of the classification model, and the detection data includes at least one of the original image, the target image, the difference feature, or the detection result.
In a possible embodiment, before the preprocessing of the original image currently collected by the camera to obtain the target image, the method further includes:
when the camera is in a started state and a contamination detection condition is met, acquiring the original image currently collected by the camera.
In one possible embodiment, the contamination detection condition includes at least one of the acceleration of the camera being less than an acceleration threshold, the camera being in a focused state, or the visibility in current weather information being less than a target threshold.
According to a second aspect of the embodiments of the present disclosure, there is provided a device for detecting a lens contamination state, including:
a preprocessing unit configured to preprocess an original image currently collected by a camera to obtain a target image;
an acquisition unit configured to acquire a difference feature between the original image and the target image;
a classification unit configured to input the original image and the difference feature into a classification model and classify them through the classification model to obtain a prediction probability of whether the camera is in a lens contamination state;
a determination unit configured to determine that the camera is in a lens contamination state when the prediction probability is greater than a probability threshold.
In one possible embodiment, the preprocessing unit is configured to perform:
performing at least one of edge extraction, blurring, sharpening, defogging, or histogram equalization on the original image to obtain the target image.
In one possible implementation, the obtaining unit is configured to perform:
performing difference processing on the original image and the target image to obtain the difference feature.
In one possible implementation, the obtaining unit is configured to perform:
determining the difference between the number of edge pixels of the original image and the number of edge pixels of the target image as the difference feature.
In one possible embodiment, the apparatus is further configured to perform:
performing the lens contamination state detection operation at a preset frequency, and if the number of consecutive determinations that the camera is in a lens contamination state reaches a target number, displaying prompt information in a shooting interface, wherein the prompt information is used to prompt the user to wipe the lens of the camera;
and when the display duration of the prompt information reaches a first target duration, stopping displaying the prompt information in the shooting interface.
In one possible embodiment, the apparatus is further configured to perform:
re-executing the lens contamination state detection operation at a target time after the time at which the prompt information was displayed;
if the camera is determined to be in a lens contamination state, executing the operation of displaying the prompt information in the shooting interface;
otherwise, not displaying the prompt information within a second target duration after the target time.
In one possible embodiment, the apparatus is further configured to perform:
when the accumulated display count of the prompt information reaches a count threshold, not displaying the prompt information within a third target duration after the current time, and resetting the accumulated display count to 0.
In one possible embodiment, the apparatus is further configured to perform:
sending detection data of the detection operation to a server, wherein the detection data is used to trigger the server to adjust the parameters of the classification model, and the detection data includes at least one of the original image, the target image, the difference feature, or the detection result.
In one possible embodiment, the apparatus is further configured to perform:
when the camera is in a started state and a contamination detection condition is met, acquiring the original image currently collected by the camera.
In one possible embodiment, the contamination detection condition includes at least one of the acceleration of the camera being less than an acceleration threshold, the camera being in a focused state, or the visibility in current weather information being less than a target threshold.
According to a third aspect of the embodiments of the present disclosure, there is provided a terminal, including:
one or more processors;
one or more memories for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the lens contamination state detection method of any one of the above first aspect and its possible implementations.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a storage medium, wherein when at least one instruction in the storage medium is executed by one or more processors of a terminal, the terminal is enabled to perform the lens contamination state detection method of any one of the above first aspect and its possible implementations.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product comprising one or more instructions which, when executed by one or more processors of a terminal, enable the terminal to perform the lens contamination state detection method of any one of the above first aspect and its possible implementations.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
the method comprises the steps of preprocessing an original image collected by a camera to obtain a target image, obtaining difference characteristics between the original image and the target image, inputting the original image and the difference characteristics into a classification model, classifying the original image and the difference characteristics through the classification model to obtain a prediction probability, and determining that the camera is in a lens dirty state when the prediction probability is larger than a probability threshold value, namely, the terminal can detect whether the camera is in the lens dirty state according to the original image collected by the camera, and if the camera is in the lens dirty state, prompting can be performed on a user, so that the user can timely wipe the lens of the camera, adverse effects on the shooting effect of the terminal due to the lens dirty state are eliminated, the image quality of the image or video shot by the terminal is improved, and the shooting experience of the user is optimized.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a flowchart illustrating a lens contamination state detection method according to an exemplary embodiment;
FIG. 2 is a flowchart illustrating a lens contamination state detection method according to an exemplary embodiment;
FIG. 3 is a block diagram illustrating the logical structure of a lens contamination state detection apparatus according to an exemplary embodiment;
FIG. 4 is a block diagram of a terminal according to an exemplary embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The user information to which the present disclosure relates may be information authorized by the user or sufficiently authorized by each party.
Fig. 1 is a flowchart illustrating a lens contamination state detection method according to an exemplary embodiment, and referring to fig. 1, the lens contamination state detection method is applied to a terminal, which is described in detail below.
In step 101, the terminal preprocesses the original image currently collected by the camera to obtain a target image.
In step 102, the terminal acquires a difference feature between the original image and the target image.
In step 103, the terminal inputs the original image and the difference feature into a classification model, and classifies them through the classification model to obtain a prediction probability of whether the camera is in a lens contamination state.
In step 104, when the prediction probability is greater than the probability threshold, the terminal determines that the camera is in a lens contamination state.
The method provided by the embodiments of the present disclosure preprocesses the original image collected by the camera to obtain a target image, obtains the difference feature between the original image and the target image, inputs the original image and the difference feature into a classification model, and classifies them through the classification model to obtain a prediction probability; when the prediction probability is greater than the probability threshold, the camera is determined to be in a lens contamination state. In other words, the terminal can detect, from the original image collected by the camera, whether the camera is in a lens contamination state, and if it is, the user can be prompted to wipe the lens of the camera in time. This eliminates the adverse effect of a contaminated lens on the terminal's shooting results, improves the quality of the images or videos shot by the terminal, and optimizes the user's shooting experience.
In a possible embodiment, the preprocessing of the original image currently collected by the camera to obtain the target image includes:
performing at least one of edge extraction, blurring, sharpening, defogging, or histogram equalization on the original image to obtain the target image.
In one possible embodiment, acquiring the difference feature between the original image and the target image includes:
performing difference processing on the original image and the target image to obtain the difference feature.
In one possible embodiment, acquiring the difference feature between the original image and the target image includes:
determining the difference between the number of edge pixels of the original image and the number of edge pixels of the target image as the difference feature.
In a possible embodiment, after determining that the camera is in a lens contamination state, the method further includes:
performing the lens contamination state detection operation at a preset frequency, and if the number of consecutive determinations that the camera is in a lens contamination state reaches a target number, displaying prompt information in a shooting interface, wherein the prompt information is used to prompt the user to wipe the lens of the camera;
and when the display duration of the prompt information reaches a first target duration, stopping displaying the prompt information in the shooting interface.
In one possible embodiment, after the prompt information is displayed in the shooting interface, the method further includes:
re-executing the lens contamination state detection operation at a target time after the time at which the prompt information was displayed;
if the camera is determined to be in a lens contamination state, executing the operation of displaying the prompt information in the shooting interface;
otherwise, not displaying the prompt information within a second target duration after the target time.
In a possible implementation, after the operation of displaying the prompt information in the shooting interface is executed upon determining that the camera is in a lens contamination state, the method further includes:
when the accumulated display count of the prompt information reaches a count threshold, not displaying the prompt information within a third target duration after the current time, and resetting the accumulated display count to 0.
In one possible embodiment, after the detecting operation of the lens contamination state is re-executed, the method further includes:
sending detection data of the detection operation to a server, wherein the detection data is used to trigger the server to adjust the parameters of the classification model, and the detection data includes at least one of the original image, the target image, the difference feature, or the detection result.
In a possible embodiment, before the preprocessing of the original image currently collected by the camera to obtain the target image, the method further includes:
when the camera is in a started state and a contamination detection condition is met, acquiring the original image currently collected by the camera.
In one possible embodiment, the contamination detection condition includes at least one of the acceleration of the camera being less than an acceleration threshold, the camera being in a focused state, or the visibility in current weather information being less than a target threshold.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 2 is a flowchart illustrating a method for detecting a lens contamination state according to an exemplary embodiment, and referring to fig. 2, the method for detecting a lens contamination state is applied to a terminal, which is described in detail below.
In step 201, when the camera of the terminal is in a started state and a contamination detection condition is met, the terminal acquires an original image currently collected by the camera.
Optionally, the contamination detection condition may include at least one of the acceleration of the camera being less than an acceleration threshold, the camera being in a focused state, or the visibility in the current weather information of the terminal's geographic location being less than a target threshold. The target threshold may be any value greater than or equal to 0.
In this process, the terminal can determine whether the camera is in a started state by detecting whether the process corresponding to the camera has been launched; if so, it checks whether the contamination detection condition is met, and when the condition is met, it acquires the currently collected image frame from the image sensor of the camera and uses it as the original image.
In some embodiments, if the contamination detection condition is that the acceleration of the camera is less than the acceleration threshold, the terminal may monitor the acceleration in real time through the acceleration sensor once it determines that the camera is in a started state, and perform the operation of acquiring the original image only when the detected acceleration is less than the acceleration threshold. This avoids image blur caused by shaking of the user's hand, prevents such blur from being misjudged as lens contamination, and improves the accuracy of lens contamination detection.
In some embodiments, if the contamination detection condition is that the camera is in a focused state, the terminal may monitor in real time, through the process corresponding to the camera, whether the camera is focused once it determines that the camera is in a started state, and perform the operation of acquiring the original image only when the camera is focused. This avoids image blur caused by inaccurate focusing, prevents such blur from being misjudged as lens contamination, and improves the accuracy of lens contamination detection.
In some embodiments, if the contamination detection condition concerns the visibility in the current weather information of the terminal's geographic location, the terminal may obtain the location information of its geographic location through Location Based Services (LBS), obtain the local current weather information based on the location information, and then check whether the visibility in the current weather information is less than the target threshold; if the condition is met, it performs the operation of acquiring the original image. This prevents white haze caused by low local visibility from being misjudged as lens contamination, and improves the accuracy of lens contamination detection.
In this process, before acquiring the original image, the terminal checks whether the contamination detection condition is met and performs lens contamination detection only when it is. This avoids image blur or white haze caused by shaking of the phone, inaccurate focusing, or low weather visibility, eliminates confounding factors that could be mistaken for lens contamination, and greatly improves the accuracy of lens contamination state detection.
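To make the gating concrete, the following is a minimal sketch of the step 201 logic. It is an assumption-laden illustration rather than the patent's implementation: the sensor readings are passed in as plain values because the real sources (acceleration sensor, focus state, LBS weather query) are platform APIs, the thresholds are example values, the three conditions are combined conjunctively although the disclosure requires only at least one of them, and the visibility comparison follows the claim wording.

```python
# Sketch of the contamination detection gate (step 201); all names and
# thresholds here are illustrative assumptions, not from the patent.

ACCEL_THRESHOLD = 0.5        # m/s^2, example acceleration threshold
VISIBILITY_THRESHOLD = 10.0  # km, example target threshold for visibility

def should_detect(camera_started: bool, accel: float,
                  in_focus: bool, visibility_km: float) -> bool:
    """True only when the camera is started and the contamination
    detection condition is met."""
    if not camera_started:
        return False
    if accel >= ACCEL_THRESHOLD:   # hand shake would blur the frame
        return False
    if not in_focus:               # defocus blur could look like contamination
        return False
    # Per the claim wording, detection proceeds when the reported
    # visibility is less than the target threshold.
    if visibility_km >= VISIBILITY_THRESHOLD:
        return False
    return True
```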
In step 202, the terminal preprocesses the original image currently collected by the camera to obtain a target image.
In some embodiments, the terminal may perform at least one of edge extraction, blurring, sharpening, defogging, or histogram equalization on the original image to obtain the target image.
Optionally, in performing edge extraction on the original image, the terminal may convert the original image to grayscale, perform Canny edge extraction on the grayscale image to obtain an edge gradient feature map of the original image, and use the edge gradient feature map as the target image. Of course, the terminal may also perform edge extraction with the Laplacian or Sobel operators, among others; the embodiments of the present disclosure do not limit the edge extraction method.
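As a concrete illustration, a minimal OpenCV sketch of this edge-extraction variant might look as follows; the Canny thresholds (50, 150) are illustrative assumptions rather than values given in the disclosure.

```python
# Minimal sketch of the edge-extraction preprocessing (step 202).
import cv2

def edge_target_image(original_bgr):
    gray = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2GRAY)  # gray processing
    edges = cv2.Canny(gray, 50, 150)  # edge gradient feature map
    return edges                      # used as the target image
```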
Optionally, in blurring the original image, the terminal may apply at least one of Gaussian filtering, median filtering, maximum filtering, minimum filtering, or bilateral filtering to obtain the target image, thereby producing a degraded (smoothed) version of the original image.
Optionally, in sharpening the original image, the terminal may extract the high-frequency component of the original image through a high-pass filter and superimpose that high-frequency component onto the original image, thereby enhancing the details of the high-frequency portions and achieving image sharpening and image enhancement.
Optionally, in defogging the original image, the terminal may perform defogging based on a dark channel prior algorithm. Specifically, the terminal obtains the dark channel image of the original image, estimates the transmission and the atmospheric light from the dark channel image, substitutes them into the haze imaging model, and outputs the defogged image (i.e., the target image), thereby achieving an image enhancement effect.
Optionally, in performing histogram equalization on the original image, the terminal may nonlinearly stretch the original image and redistribute its pixel values so that the numbers of pixels in each gray-scale range become roughly even, obtaining the target image and thereby achieving an image enhancement effect.
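The remaining preprocessing variants can be sketched in the same spirit. The snippet below is a hedged illustration, not the patent's implementation: kernel sizes, the sharpening weights, and the dark-channel parameters (omega, t0, the erosion patch size, and the 0.1% atmospheric-light heuristic) are all assumed values.

```python
# Hedged sketches of the blurring, sharpening, defogging, and histogram
# equalization variants of step 202; all parameters are illustrative.
import cv2
import numpy as np

def blur_target(img):
    # Gaussian filtering; median/maximum/minimum/bilateral filtering
    # can be substituted in the same way.
    return cv2.GaussianBlur(img, (9, 9), 0)

def sharpen_target(img):
    # Superimpose the high-frequency component (unsharp masking): the
    # high-pass part is the image minus its low-pass (blurred) version.
    low = cv2.GaussianBlur(img, (9, 9), 0)
    return cv2.addWeighted(img, 1.5, low, -0.5, 0)

def defog_target(img, omega=0.95, t0=0.1):
    # Compact dark-channel-prior defogging.
    f = img.astype(np.float32) / 255.0
    dark = cv2.erode(f.min(axis=2), np.ones((15, 15), np.uint8))
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
    A = f.reshape(-1, 3)[idx].mean(axis=0)
    t = np.maximum(1.0 - omega * dark, t0)  # transmission estimate
    J = (f - A) / t[..., None] + A          # invert the haze imaging model
    return (np.clip(J, 0.0, 1.0) * 255).astype(np.uint8)

def equalize_target(img):
    # Histogram equalization on the luma channel only.
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    ycrcb[..., 0] = cv2.equalizeHist(ycrcb[..., 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```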
In the above process, whether the terminal performs edge extraction, image enhancement, or image degradation on the original image, it obtains a target image that differs from the original image in a certain way, and the difference feature between the two can then be obtained by performing step 203 below.
In step 203, the terminal performs difference processing on the original image and the target image to obtain a difference feature between them.
In the difference processing, the terminal may directly subtract, pixel by pixel, the values of corresponding pixels in the original image and the target image to generate a difference image, which is used as the difference feature.
In some embodiments, step 203 may be replaced by: the terminal determines the difference between the number of edge pixels of the original image and the number of edge pixels of the target image as the difference feature. When the target image is the edge gradient feature map of the original image, this alternative expresses the difference between the two images more simply and clearly; in this case the difference feature is not a difference image but a numerical value.
Through step 203, the terminal obtains a difference feature between the original image and the target image, which may be a difference image or, as noted, a single numerical value.
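Both forms of the difference feature are easy to sketch. The following assumes the original and target images have the same size and type, and the Canny thresholds are again illustrative.

```python
# Sketches of the two difference features from step 203.
import cv2
import numpy as np

def difference_image(original, target):
    # Pixel-wise subtraction of corresponding pixels; the difference
    # image itself serves as the difference feature.
    return cv2.absdiff(original, target)

def edge_count_difference(original_bgr, target_bgr):
    # Alternative: a single number, the difference between the
    # edge-pixel counts of the two images.
    e1 = cv2.Canny(cv2.cvtColor(original_bgr, cv2.COLOR_BGR2GRAY), 50, 150)
    e2 = cv2.Canny(cv2.cvtColor(target_bgr, cv2.COLOR_BGR2GRAY), 50, 150)
    return int(np.count_nonzero(e1)) - int(np.count_nonzero(e2))
```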
In step 204, the terminal inputs the original image and the difference feature into a classification model, and classifies them through the classification model to obtain a prediction probability of whether the camera is in a lens contamination state.
In the above process, the classification model is trained on a plurality of training samples, each of which includes a sample image and the sample difference feature corresponding to that image; the sample difference feature is obtained by preprocessing and difference-processing the sample image through operations similar to those in steps 202-203 above.
During training, each training sample is input into an initial model, which classifies the training sample and outputs its prediction probability. A loss function value for the training pass is computed from the prediction probabilities and the ground truth of each training sample (whether the lens was in a contaminated state). If the loss function value does not satisfy the convergence condition, the parameters of the initial model are adjusted, and the training process is iterated until the convergence condition is met, yielding the classification model. It should be noted that the training process may be deployed on a server, which sends the classification model to each terminal after training. Of course, the training may also be performed by the terminal; the embodiments of the present disclosure do not limit the executing entity of the training process.
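As a rough illustration of this procedure, the PyTorch sketch below trains a small binary classifier. Everything about it is an assumption, since the patent does not specify an architecture: the sketch stacks the image with its difference image as a 4-channel input, uses cross-entropy loss, and tests convergence crudely.

```python
# Minimal PyTorch training sketch; architecture and hyperparameters are
# illustrative assumptions, not the patent's specification.
import torch
import torch.nn as nn

class SmudgeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # 3 image channels + 1 difference-image channel (an assumption)
            nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)  # logits for [clean, contaminated]

    def forward(self, x):
        return self.head(self.features(x))

def train(model, loader, epochs=10, tol=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:              # x: stacked sample, y: ground truth
            opt.zero_grad()
            loss = loss_fn(model(x), y)  # prediction vs. true label
            loss.backward()
            opt.step()                   # parameter adjustment
        if loss.item() < tol:            # crude convergence condition
            return model
    return model
```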
In some embodiments, the classification model may be a deep learning model. For example, it may be a lightweight neural network such as MobileNet, ShuffleNet, or SqueezeNet; of course, it may also be a CNN (Convolutional Neural Network), a Siamese convolutional network, a pseudo-Siamese convolutional network, or the like. The present disclosure does not limit the form of the classification model.
In the above process, taking a CNN as the classification model as an example, the terminal inputs the original image and the difference feature into the CNN; several hidden layers perform convolution, pooling, and activation on them, with the output feature map of each hidden layer serving as the input of the next; the output feature map of the last convolutional layer is fed into a normalization layer, which applies exponential normalization (softmax) to it, yielding the prediction probability.
In step 205, when the prediction probability is greater than the probability threshold, the terminal determines that the camera is in a lens contamination state.
The probability threshold may be issued by the server to the terminal and stored locally. Different terminals may share the same probability threshold, which reduces computation on the server side; of course, different terminals may also be given different probability thresholds, so that a threshold can be configured individually for each terminal.
In this process, the terminal compares the prediction probability output by the classification model with the locally stored probability threshold: if the prediction probability is greater than the probability threshold, it determines that the camera is in a lens contamination state; otherwise, it determines that the camera is not in a lens contamination state.
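Continuing the sketch from step 204, inference and the threshold comparison of step 205 might look as follows; the threshold value is a placeholder for whatever the server issues.

```python
# Inference sketch for steps 204-205, reusing the SmudgeClassifier above.
import torch

PROB_THRESHOLD = 0.8  # placeholder; in practice issued by the server

def is_lens_contaminated(model, image_and_diff):
    model.eval()
    with torch.no_grad():
        logits = model(image_and_diff.unsqueeze(0))  # add a batch dimension
        prob = torch.softmax(logits, dim=1)[0, 1]    # exponential normalization
    return prob.item() > PROB_THRESHOLD
```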
In step 206, the terminal performs the lens contamination state detection operation at a preset frequency, and if the number of consecutive determinations that the camera is in a lens contamination state reaches a target number, it displays prompt information in the shooting interface, the prompt information being used to prompt the user to wipe the lens of the camera.
In the above process, the terminal may maintain a contamination count with an initial value of 0 and repeat the operations of steps 201-205 at the preset frequency. Each time the camera is detected to be in a lens contamination state, the contamination count is increased by 1; if at some point the camera is detected not to be in a lens contamination state while the current contamination count is below the target number, the count is reset to 0. If the current contamination count reaches or exceeds the target number, the prompt information is displayed in the shooting interface and the count is reset to 0. The preset frequency may be any frequency, such as once per second or twice per minute; the embodiments of the present disclosure do not limit its value.
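A minimal sketch of this accumulation loop, with the preset period and target number as assumed example values:

```python
# Sketch of the consecutive-detection counter from step 206.
import time

PRESET_PERIOD = 1.0  # seconds between detections (preset frequency), example
TARGET_COUNT = 3     # consecutive positives required, example

def detection_loop(detect_once, show_prompt):
    """detect_once runs steps 201-205 and returns True if contaminated;
    show_prompt displays the wipe prompt in the shooting interface."""
    dirty_count = 0
    while True:
        if detect_once():
            dirty_count += 1
            if dirty_count >= TARGET_COUNT:
                show_prompt()
                dirty_count = 0
        else:
            dirty_count = 0  # any negative result resets the counter
        time.sleep(PRESET_PERIOD)
```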
When displaying the prompt information in the shooting interface, the terminal may use at least one of a floating layer, a pop-up window, a subtitle, a magic expression (sticker effect), a voice prompt, or an interactive button; for example, the prompt information may be text displayed as a floating layer reading "please wipe the front/rear camera with a cotton cloth". The embodiments of the present disclosure do not limit the display form of the prompt information.
Optionally, the prompt information may be displayed with some transparency, so as not to cover the shot picture or affect the user's shooting experience. In addition, the prompt information may carry a close option; when a trigger operation by the user on the close option is detected, the terminal stops displaying the prompt information in the shooting interface.
In this process, the prompt information is displayed only when the lens is detected to be contaminated several times in a row, which reduces prompts triggered by occasional false detections and improves the accuracy of lens contamination state detection.
In step 207, when the display duration of the prompt information reaches the first target duration, the terminal stops displaying the prompt information in the shooting interface.
The first target duration may be any value greater than 0, for example 3 seconds; the embodiments of the present disclosure do not limit its value.
In this process, the terminal assigns the prompt information a display duration equal to the first target duration. If the user does not trigger the close option within that duration, the terminal stops displaying the prompt information in the shooting interface once the first target duration has elapsed, thereby avoiding covering the shot picture for a long time and optimizing the user's shooting experience.
In step 208, the terminal re-executes the lens contamination state detection operation at a target time after the time at which the prompt information was displayed.
The target time may be the display time plus a fixed duration, where the fixed duration may be greater than, equal to, or less than the first target duration; that is, it may be any value greater than 0, for example 8 seconds. The embodiments of the present disclosure do not limit its value.
In the above process, when the target time is reached, the terminal performs the operations of steps 201-205 again. If the camera is determined to be in a lens contamination state, the following step 209 is executed; otherwise (that is, if the camera is determined not to be in a lens contamination state), the prompt information is not displayed within a second target duration after the target time. The rationale is that the user has most likely wiped the lens in response to the prompt; if the re-detection after the prompt finds the lens clean, the lens can be assumed to stay clean for a period of time, so the prompt need not be displayed within the second target duration. The second target duration is any value greater than 0, for example 15 days.
In some embodiments, the terminal may send detection data of the detection operation to the server, where the detection data is used to trigger the server to perform parameter adjustment on the classification model, and the detection data includes at least one of an original image, a target image, a difference feature, or a detection result.
Optionally, after receiving the detection data, the server may add the original image, the target image, and the difference feature to the training samples and adjust the parameters of the classification model on the expanded training set, thereby obtaining a more accurate classification model.
Optionally, the detection result may be used to gauge the user's acceptance of the lens contamination detection service. If the detection result is not a lens contamination state, the user's acceptance of the service is likely high, and the server may issue a higher detection frequency to the terminal; if the detection result is a lens contamination state, the user's acceptance is low, or a scratch on the lens cannot be removed by wiping, and the server may issue a lower detection frequency. The detection frequency here may be the preset frequency mentioned in step 206 above.
In step 209, if the camera is determined to still be in a lens contamination state, the terminal performs the operation of displaying the prompt information in the shooting interface.
Step 209 is similar to step 206 and is not repeated here. It should be noted that after the prompt information is displayed, it is again removed once its display duration reaches the first target duration.
In step 210, when the accumulated display count of the prompt information reaches the count threshold, the prompt information is not displayed within a third target duration after the current time, and the accumulated display count is reset to 0.
In the above process, the terminal may maintain an accumulated display count with an initial value of 0 and add 1 to it each time the prompt information is displayed. When the accumulated display count reaches the count threshold, this indicates that the user declines to wipe the lens of the camera, or that the lens itself is scratched and cannot be improved by wiping; the prompt information is therefore not displayed within the third target duration, and the accumulated display count is reset to 0. The third target duration is any value greater than 0, for example 7 days.
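The prompt throttling described in steps 207-210 amounts to a small piece of state. The sketch below illustrates it, with all constants standing in for the patent's first, second, and third target durations and count threshold.

```python
# Sketch of the prompt-throttling state from steps 207-210; the constants
# are illustrative stand-ins, not values from the disclosure.
import time

FIRST_TARGET = 3            # s: on-screen time (used by the UI hide timer)
SECOND_TARGET = 15 * 86400  # s: mute window after a clean re-check
THIRD_TARGET = 7 * 86400    # s: mute window after too many prompts
COUNT_THRESHOLD = 3         # accumulated displays before muting

class PromptThrottle:
    def __init__(self):
        self.display_count = 0
        self.muted_until = 0.0

    def may_prompt(self) -> bool:
        return time.time() >= self.muted_until

    def on_prompt_shown(self):
        self.display_count += 1
        if self.display_count >= COUNT_THRESHOLD:
            # User declines to wipe, or the lens is scratched: go quiet.
            self.muted_until = time.time() + THIRD_TARGET
            self.display_count = 0

    def on_recheck(self, still_contaminated: bool):
        if not still_contaminated:
            # Lens was presumably wiped clean; suppress further prompts.
            self.muted_until = time.time() + SECOND_TARGET
```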
The method provided by the embodiments of the present disclosure preprocesses the original image collected by the camera to obtain a target image, obtains the difference feature between the original image and the target image, inputs the original image and the difference feature into a classification model, and classifies them through the classification model to obtain a prediction probability; when the prediction probability is greater than the probability threshold, the camera is determined to be in a lens contamination state. In other words, the terminal can detect, from the original image collected by the camera, whether the camera is in a lens contamination state, and if it is, the user can be prompted to wipe the lens of the camera in time. This eliminates the adverse effect of a contaminated lens on the terminal's shooting results, improves the quality of the images or videos shot by the terminal, and optimizes the user's shooting experience.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 3 is a block diagram illustrating a logical structure of a lens contamination state detection apparatus according to an exemplary embodiment. Referring to fig. 3, the apparatus includes a preprocessing unit 301, an acquisition unit 302, a classification unit 303, and a determination unit 304.
a preprocessing unit 301 configured to preprocess an original image currently collected by the camera to obtain a target image;
an acquisition unit 302 configured to acquire a difference feature between the original image and the target image;
a classification unit 303 configured to classify the original image and the difference feature through a classification model to obtain a prediction probability of whether the camera is in a lens contamination state;
a determination unit 304 configured to determine that the camera is in a lens contamination state when the prediction probability is greater than a probability threshold.
The device provided by the embodiments of the present disclosure preprocesses the original image collected by the camera to obtain a target image, obtains the difference feature between the original image and the target image, inputs the original image and the difference feature into a classification model, and classifies them through the classification model to obtain a prediction probability; when the prediction probability is greater than the probability threshold, the camera is determined to be in a lens contamination state. In other words, the terminal can detect, from the original image collected by the camera, whether the camera is in a lens contamination state, and if it is, the user can be prompted to wipe the lens of the camera in time. This eliminates the adverse effect of a contaminated lens on the terminal's shooting results, improves the quality of the images or videos shot by the terminal, and optimizes the user's shooting experience.
In one possible implementation, the preprocessing unit 301 is configured to perform:
performing at least one of edge extraction, blurring, sharpening, defogging, or histogram equalization on the original image to obtain the target image.
In one possible implementation, the obtaining unit 302 is configured to perform:
performing difference processing on the original image and the target image to obtain the difference feature.
In one possible implementation, the obtaining unit 302 is configured to perform:
determining the difference between the number of edge pixels of the original image and the number of edge pixels of the target image as the difference feature.
In one possible embodiment, the apparatus is further configured to perform:
performing the lens contamination state detection operation at a preset frequency, and if the number of consecutive determinations that the camera is in a lens contamination state reaches a target number, displaying prompt information in a shooting interface, wherein the prompt information is used to prompt the user to wipe the lens of the camera;
and when the display duration of the prompt information reaches a first target duration, stopping displaying the prompt information in the shooting interface.
In one possible embodiment, the apparatus is further configured to perform:
re-executing the lens contamination state detection operation at a target time after the time at which the prompt information was displayed;
if the camera is determined to be in a lens contamination state, executing the operation of displaying the prompt information in the shooting interface;
otherwise, not displaying the prompt information within a second target duration after the target time.
In one possible embodiment, the apparatus is further configured to perform:
when the accumulated display count of the prompt information reaches a count threshold, not displaying the prompt information within a third target duration after the current time, and resetting the accumulated display count to 0.
In one possible embodiment, the apparatus is further configured to perform:
sending detection data of the detection operation to a server, wherein the detection data is used to trigger the server to adjust the parameters of the classification model, and the detection data includes at least one of the original image, the target image, the difference feature, or the detection result.
In one possible embodiment, the apparatus is further configured to perform:
when the camera is in a started state and a contamination detection condition is met, acquiring the original image currently collected by the camera.
In one possible embodiment, the contamination detection condition includes at least one of the acceleration of the camera being less than an acceleration threshold, the camera being in a focused state, or the visibility in current weather information being less than a target threshold.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
With regard to the apparatus in the above embodiment, the specific manner in which each unit performs its operations has been described in detail in the embodiments of the lens contamination state detection method and is not detailed here.
Fig. 4 shows a block diagram of a terminal 400 according to an exemplary embodiment of the present disclosure. The terminal 400 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 400 may also be referred to by other names, such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, the terminal 400 includes: a processor 401 and a memory 402.
Processor 401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 401 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 401 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 402 is used to store at least one instruction for execution by processor 401 to implement the lens smudge state detection methods provided by various embodiments herein.
In some embodiments, the terminal 400 may further optionally include: a peripheral interface 403 and at least one peripheral. The processor 401, memory 402 and peripheral interface 403 may be connected by bus or signal lines. Each peripheral may be connected to the peripheral interface 403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 404, touch screen display 405, camera 406, audio circuitry 407, positioning components 408, and power supply 409.
The peripheral interface 403 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 401 and the memory 402. In some embodiments, processor 401, memory 402, and peripheral interface 403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 401, the memory 402 and the peripheral interface 403 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 404 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 404 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 404 may communicate with other terminals via at least one wireless communication protocol, including but not limited to metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 404 may further include NFC (Near Field Communication) related circuits, which is not limited by the present disclosure.
The display screen 405 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 405 is a touch display screen, it also has the ability to capture touch signals on or over its surface; such a touch signal may be input to the processor 401 as a control signal for processing. At this point, the display screen 405 may also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 405, forming the front panel of the terminal 400; in other embodiments, there may be at least two display screens 405, respectively disposed on different surfaces of the terminal 400 or in a folded design; in still other embodiments, the display screen 405 may be a flexible display disposed on a curved or folded surface of the terminal 400. The display screen 405 may even be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display screen 405 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 406 is used to capture images or video. Optionally, camera assembly 406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone is used for collecting sound waves from the user and the environment and converting them into electrical signals, which are input to the processor 401 for processing or to the radio frequency circuit 404 to realize voice communication. For stereo sound collection or noise reduction, multiple microphones may be provided at different parts of the terminal 400. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 401 or the radio frequency circuit 404 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker; a piezoelectric ceramic speaker can not only convert an electrical signal into sound waves audible to humans, but also convert an electrical signal into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 407 may also include a headphone jack.
The positioning component 408 is used to determine the current geographic position of the terminal 400 for navigation or LBS (Location Based Service). The positioning component 408 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 409 is used to supply power to the various components in the terminal 400. The power supply 409 may be an alternating-current supply, a direct-current supply, a disposable battery, or a rechargeable battery. When the power supply 409 includes a rechargeable battery, the battery may support wired or wireless charging and may also support fast-charging technology.
In some embodiments, the terminal 400 also includes one or more sensors 410. The one or more sensors 410 include, but are not limited to: acceleration sensor 411, gyro sensor 412, pressure sensor 413, fingerprint sensor 414, optical sensor 415, and proximity sensor 416.
The acceleration sensor 411 may detect the magnitude of acceleration on the three axes of a coordinate system established with respect to the terminal 400. For example, the acceleration sensor 411 may be used to detect the components of gravitational acceleration on the three axes. The processor 401 may control the touch display screen 405 to display the user interface in landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 411. The acceleration sensor 411 may also be used to collect game or user motion data.
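As a minimal sketch (not from the patent) of how gravity components can drive the landscape/portrait decision described above: the axis convention and the function name are illustrative assumptions.

```python
def choose_orientation(gx: float, gy: float) -> str:
    """gx, gy: gravity components (m/s^2) along the device's screen
    x-axis (short edge) and y-axis (long edge)."""
    # When gravity pulls mostly along the long edge, the device is upright.
    return "portrait" if abs(gy) >= abs(gx) else "landscape"

print(choose_orientation(0.3, 9.7))   # portrait
print(choose_orientation(9.6, 0.5))   # landscape
```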
The gyro sensor 412 may detect the body orientation and rotation angle of the terminal 400, and may cooperate with the acceleration sensor 411 to capture the user's 3D manipulation of the terminal 400. From the data collected by the gyro sensor 412, the processor 401 may implement functions such as motion sensing (for example, changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 413 may be disposed on a side bezel of the terminal 400 and/or beneath the touch display screen 405. When disposed on the side bezel, it can detect the user's grip on the terminal 400, and the processor 401 can perform left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 413. When disposed beneath the touch display screen 405, the processor 401 can control the operable controls on the UI according to the pressure the user applies to the screen. The operable controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 414 is used to collect the user's fingerprint, and either the processor 401 or the fingerprint sensor 414 itself identifies the user from the collected fingerprint. Upon recognizing the user's identity as trusted, the processor 401 authorizes the user to perform sensitive operations such as unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 414 may be disposed on the front, back, or side of the terminal 400; when a physical key or manufacturer logo is provided on the terminal 400, the fingerprint sensor 414 may be integrated with it.
The optical sensor 415 is used to collect the ambient light intensity. In one embodiment, the processor 401 may control the display brightness of the touch display screen 405 according to the ambient light intensity collected by the optical sensor 415: when the ambient light intensity is high, the display brightness is increased; when it is low, the display brightness is decreased. In another embodiment, the processor 401 may also dynamically adjust the shooting parameters of the camera assembly 406 according to the collected ambient light intensity.
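A sketch of one plausible lux-to-brightness mapping for the adjustment just described; the lux breakpoints and the log-shaped curve are assumptions, not specified by the patent.

```python
import math

def backlight_level(lux: float, lo: float = 1.0, hi: float = 10_000.0) -> int:
    """Return a backlight level in 0..255 for an ambient intensity in lux."""
    lux = min(max(lux, lo), hi)
    # Perceived brightness is roughly logarithmic in light intensity,
    # so interpolate on a log scale rather than linearly.
    frac = math.log(lux / lo) / math.log(hi / lo)
    return round(255 * frac)

for lux in (5, 300, 10_000):
    print(lux, backlight_level(lux))   # dim indoors -> ~45, office -> ~158, sunlight -> 255
```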
The proximity sensor 416, also known as a distance sensor, is typically disposed on the front panel of the terminal 400 and is used to measure the distance between the user and the front of the terminal 400. In one embodiment, when the proximity sensor 416 detects that this distance is gradually decreasing, the processor 401 controls the touch display screen 405 to switch from the screen-on state to the screen-off state; when it detects that the distance is gradually increasing, the processor 401 controls the touch display screen 405 to switch from the screen-off state back to the screen-on state.
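A minimal sketch of the screen-state switching above, assuming two thresholds (hysteresis) so the screen does not flicker near a single cutoff; the distances and names are illustrative.

```python
def next_screen_state(distance_cm: float, screen_on: bool,
                      near_cm: float = 3.0, far_cm: float = 5.0) -> bool:
    """Return the new screen state given the measured proximity distance."""
    if screen_on and distance_cm < near_cm:
        return False   # phone raised to the ear: turn the screen off
    if not screen_on and distance_cm > far_cm:
        return True    # phone moved away: turn the screen back on
    return screen_on   # inside the hysteresis band: keep the current state

state = True
for d in (10.0, 2.0, 4.0, 6.0):
    state = next_screen_state(d, state)
    print(d, state)    # 10.0 True, 2.0 False, 4.0 False, 6.0 True
```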
Those skilled in the art will appreciate that the configuration shown in Fig. 4 does not constitute a limitation of the terminal 400, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
In an exemplary embodiment, there is also provided a storage medium, for example a memory, comprising instructions executable by a processor of a terminal to perform the lens contamination state detection method described in the embodiments above. Optionally, the storage medium may be a non-transitory computer-readable storage medium, for example a ROM (Read-Only Memory), a RAM (Random-Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided, comprising one or more instructions executable by a processor of a terminal to perform the lens contamination state detection method described in the embodiments above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for detecting a lens contamination state, the method comprising:
preprocessing an original image currently acquired by a camera to obtain a target image;
acquiring difference features between the original image and the target image;
inputting the original image and the difference features into a classification model, and classifying the original image and the difference features through the classification model to obtain a prediction probability of whether the camera is in a lens contamination state; and
when the prediction probability is greater than a probability threshold, determining that the camera is in the lens contamination state.
2. The method for detecting a lens contamination state according to claim 1, wherein the preprocessing an original image currently acquired by a camera to obtain a target image comprises:
performing at least one of edge extraction, blurring, sharpening, defogging, or histogram equalization on the original image to obtain the target image.
3. The method for detecting a lens contamination state according to claim 1, wherein the acquiring difference features between the original image and the target image comprises:
performing difference processing on the original image and the target image to obtain the difference features.
4. The method for detecting a lens contamination state according to claim 1, wherein the acquiring difference features between the original image and the target image comprises:
determining, as the difference feature, the difference between the number of edge pixels in the original image and the number of edge pixels in the target image.
5. The method for detecting a lens contamination state according to claim 1, wherein after determining that the camera is in the lens contamination state, the method further comprises:
performing the lens contamination state detection at a preset frequency, and when the camera is consecutively determined to be in the lens contamination state a target number of times, displaying prompt information in a shooting interface, the prompt information being used to prompt the user to wipe the lens of the camera; and
when the display duration of the prompt information reaches a first target duration, ceasing to display the prompt information in the shooting interface.
6. The method for detecting a lens contamination state according to claim 5, wherein after the prompt information is displayed in the shooting interface, the method further comprises:
at a target time after the time at which the prompt information was displayed, performing the lens contamination state detection again;
if the camera is determined to be in the lens contamination state, performing the operation of displaying the prompt information in the shooting interface; and
otherwise, not displaying the prompt information within a second target duration after the target time.
7. The method according to claim 6, wherein, after the operation of displaying the prompt information in the shooting interface is performed upon determining that the camera is in the lens contamination state, the method further comprises:
when the accumulated number of times the prompt information has been displayed reaches a count threshold, not displaying the prompt information within a third target duration after the current time, and resetting the accumulated count to 0.
8. An apparatus for detecting a lens contamination state, comprising:
a preprocessing unit configured to preprocess an original image currently acquired by a camera to obtain a target image;
an acquisition unit configured to acquire difference features between the original image and the target image;
a classification unit configured to input the original image and the difference features into a classification model, and classify the original image and the difference features through the classification model to obtain a prediction probability of whether the camera is in a lens contamination state; and
a determination unit configured to determine that the camera is in the lens contamination state when the prediction probability is greater than a probability threshold.
9. A terminal, comprising:
one or more processors; and
one or more memories for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to execute the instructions to implement the method for detecting a lens contamination state according to any one of claims 1 to 7.
10. A storage medium, wherein at least one instruction in the storage medium, when executed by one or more processors of a terminal, enables the terminal to perform the method for detecting a lens contamination state according to any one of claims 1 to 7.
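A minimal sketch of the detection pipeline in claims 1-4, assuming OpenCV/NumPy and a scikit-learn-style binary classifier exposing `predict_proba`. The blur kernel, Canny thresholds, thumbnail size, feature layout, and the `model` object are all illustrative choices; the claims do not fix a particular preprocessing method or classifier architecture.

```python
import cv2
import numpy as np

def detect_lens_contamination(original_bgr: np.ndarray, model,
                              threshold: float = 0.5) -> bool:
    gray = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2GRAY)

    # Preprocessing (claim 2): blurring is one of the listed options; edge
    # extraction, sharpening, defogging, or histogram equalization would
    # slot in the same way.
    target = cv2.GaussianBlur(gray, (9, 9), 0)

    # Difference features (claims 3 and 4): pixel-wise difference plus the
    # difference in edge-pixel counts. A contaminated lens diffuses light,
    # so the original image carries fewer sharp edges and blurring changes
    # it less than it would change a clean shot.
    diff = cv2.absdiff(gray, target)
    edge_delta = (np.count_nonzero(cv2.Canny(gray, 100, 200))
                  - np.count_nonzero(cv2.Canny(target, 100, 200)))
    feats = np.array([diff.mean(), diff.std(), edge_delta], dtype=np.float32)

    # Classification (claim 1): the model receives both the original image
    # (here reduced to a small thumbnail) and the difference features.
    thumb = cv2.resize(gray, (32, 32)).astype(np.float32).ravel() / 255.0
    x = np.concatenate([thumb, feats]).reshape(1, -1)
    prob = float(model.predict_proba(x)[0, 1])
    return prob > threshold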
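A sketch, under stated assumptions, of the prompting policy in claims 5-7: detect at a preset frequency, prompt only after a target number of consecutive positive detections, and suppress the prompt for a third target duration once it has been shown a threshold number of times. All counts, durations, and names here are illustrative, and hiding the prompt after the first target duration of claim 5 is left to the caller's UI.

```python
import time
from typing import Optional

class ContaminationPromptPolicy:
    """Tracks consecutive detections and prompt-display bookkeeping."""

    def __init__(self, target_times: int = 3, max_displays: int = 3,
                 suppress_secs: float = 600.0):
        self.target_times = target_times    # consecutive detections before prompting
        self.max_displays = max_displays    # "count threshold" of claim 7
        self.suppress_secs = suppress_secs  # "third target duration" of claim 7
        self.consecutive = 0
        self.displays = 0
        self.suppress_until = 0.0

    def on_detection(self, contaminated: bool,
                     now: Optional[float] = None) -> bool:
        """Called at the preset detection frequency; returns True when the
        wipe-the-lens prompt should be shown."""
        now = time.monotonic() if now is None else now
        self.consecutive = self.consecutive + 1 if contaminated else 0
        if now < self.suppress_until or self.consecutive < self.target_times:
            return False
        self.displays += 1
        if self.displays >= self.max_displays:
            # Claim 7: after the prompt has been shown a threshold number of
            # times, suppress it for a while and reset the accumulated count.
            self.suppress_until = now + self.suppress_secs
            self.displays = 0
        return True
```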
CN201911185425.3A 2019-11-27 2019-11-27 Lens contamination state detection method and device, terminal and storage medium Pending CN110992327A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911185425.3A CN110992327A (en) 2019-11-27 2019-11-27 Lens contamination state detection method and device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN110992327A true CN110992327A (en) 2020-04-10

Family

ID=70087551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911185425.3A Pending CN110992327A (en) 2019-11-27 2019-11-27 Lens contamination state detection method and device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110992327A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108668080A (en) * 2018-06-22 2018-10-16 北京小米移动软件有限公司 Prompt method and device, the electronic equipment of camera lens degree of fouling
CN109360362A (en) * 2018-10-25 2019-02-19 中国铁路兰州局集团有限公司 A kind of railway video monitoring recognition methods, system and computer-readable medium
CN109523527A (en) * 2018-11-12 2019-03-26 北京地平线机器人技术研发有限公司 The detection method in dirty region, device and electronic equipment in image
CN109800654A (en) * 2018-12-24 2019-05-24 百度在线网络技术(北京)有限公司 Vehicle-mounted camera detection processing method, apparatus and vehicle
CN110245697A (en) * 2019-05-31 2019-09-17 厦门大学 A kind of dirty detection method in surface, terminal device and storage medium

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111524125B (en) * 2020-04-28 2023-09-01 京东科技信息技术有限公司 Equipment cleaning method, device, system, electronic equipment and storage medium
CN111524125A (en) * 2020-04-28 2020-08-11 北京海益同展信息科技有限公司 Equipment cleaning method, device and system, electronic equipment and storage medium
CN111666840A (en) * 2020-05-25 2020-09-15 维沃移动通信有限公司 Information prompting method and device and electronic equipment
CN111669611B (en) * 2020-06-19 2022-02-22 广州繁星互娱信息科技有限公司 Image processing method, device, terminal and storage medium
CN111669611A (en) * 2020-06-19 2020-09-15 广州繁星互娱信息科技有限公司 Image processing method, device, terminal and storage medium
CN112261403A (en) * 2020-09-22 2021-01-22 深圳市豪恩汽车电子装备股份有限公司 Device and method for detecting dirt of vehicle-mounted camera
CN112261403B (en) * 2020-09-22 2022-06-28 深圳市豪恩汽车电子装备股份有限公司 Device and method for detecting dirt of vehicle-mounted camera
CN112351168A (en) * 2020-10-21 2021-02-09 惠州市德赛西威智能交通技术研究院有限公司 Camera laser self-cleaning device and system
CN112348784A (en) * 2020-10-28 2021-02-09 北京市商汤科技开发有限公司 Method, device and equipment for detecting state of camera lens and storage medium
WO2022088620A1 (en) * 2020-10-28 2022-05-05 北京市商汤科技开发有限公司 State detection method and apparatus for camera lens, device and storage medium
CN112583999A (en) * 2020-12-02 2021-03-30 广州立景创新科技有限公司 Lens contamination detection method for camera module
TWI779948B (en) * 2020-12-02 2022-10-01 大陸商廣州立景創新科技有限公司 Lens dirt detection method for camera module
CN112583999B (en) * 2020-12-02 2024-03-15 广州立景创新科技有限公司 Method for detecting lens dirt of camera module
CN113076997A (en) * 2021-03-31 2021-07-06 南昌欧菲光电技术有限公司 Lens band fog identification method, camera module and terminal equipment
CN113758579A (en) * 2021-09-26 2021-12-07 中国纺织科学研究院有限公司 Method for detecting temperature of spinning assembly and spinning equipment
CN113758579B (en) * 2021-09-26 2024-01-09 中国纺织科学研究院有限公司 Method for detecting temperature of spinning assembly and spinning equipment
CN114040194A (en) * 2021-11-26 2022-02-11 信利光电股份有限公司 Method and device for testing dirt of camera module and readable storage medium
CN114531542A (en) * 2022-01-18 2022-05-24 华为技术有限公司 Lens contamination detection method and electronic equipment
CN114550123A (en) * 2022-04-25 2022-05-27 江苏日盈电子股份有限公司 Pollution judgment method, pollution judgment system and cleaning method for vehicle-mounted camera
CN115225814A (en) * 2022-06-17 2022-10-21 苏州蓝博控制技术有限公司 Camera assembly and video processing method thereof
CN115225814B (en) * 2022-06-17 2023-09-05 苏州蓝博控制技术有限公司 Camera assembly and video processing method thereof

Similar Documents

Publication Publication Date Title
CN110992327A (en) Lens contamination state detection method and device, terminal and storage medium
CN109829456B (en) Image identification method and device and terminal
CN110572711B (en) Video cover generation method and device, computer equipment and storage medium
CN110650379B (en) Video abstract generation method and device, electronic equipment and storage medium
CN109360222B (en) Image segmentation method, device and storage medium
CN109325924B (en) Image processing method, device, terminal and storage medium
CN110933468A (en) Playing method, playing device, electronic equipment and medium
CN109302632B (en) Method, device, terminal and storage medium for acquiring live video picture
CN111752817A (en) Method, device and equipment for determining page loading duration and storage medium
CN111127509A (en) Target tracking method, device and computer readable storage medium
CN110827195A (en) Virtual article adding method and device, electronic equipment and storage medium
CN111062248A (en) Image detection method, device, electronic equipment and medium
CN111754386A (en) Image area shielding method, device, equipment and storage medium
CN110189348B (en) Head portrait processing method and device, computer equipment and storage medium
CN111586279B (en) Method, device and equipment for determining shooting state and storage medium
CN110675473A (en) Method, device, electronic equipment and medium for generating GIF dynamic graph
CN111325701A (en) Image processing method, device and storage medium
CN112749590B (en) Object detection method, device, computer equipment and computer readable storage medium
CN111327819A (en) Method, device, electronic equipment and medium for selecting image
CN110853124A (en) Method, device, electronic equipment and medium for generating GIF dynamic graph
CN111860064A (en) Target detection method, device and equipment based on video and storage medium
CN109561215B (en) Method, device, terminal and storage medium for controlling beautifying function
CN110717365B (en) Method and device for obtaining picture
CN110263695B (en) Face position acquisition method and device, electronic equipment and storage medium
CN111757146B (en) Method, system and storage medium for video splicing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination