CN112804522A - Method and device for detecting abnormal conditions of camera


Info

Publication number
CN112804522A
Authority
CN
China
Prior art keywords
detected
image
camera
sample image
abnormal condition
Prior art date
Legal status
Granted
Application number
CN202110402956.4A
Other languages
Chinese (zh)
Other versions
CN112804522B (en)
Inventor
方高
沈梓欣
吴迪
Current Assignee
Momenta Suzhou Technology Co Ltd
Original Assignee
Momenta Suzhou Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Momenta Suzhou Technology Co Ltd filed Critical Momenta Suzhou Technology Co Ltd
Priority to CN202110402956.4A priority Critical patent/CN112804522B/en
Publication of CN112804522A publication Critical patent/CN112804522A/en
Application granted granted Critical
Publication of CN112804522B publication Critical patent/CN112804522B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the invention discloses a method and a device for detecting abnormal conditions of a camera. The method comprises: obtaining an image shot by a camera to be detected in a static state as an image to be detected; and determining a condition detection result corresponding to the camera to be detected by using a current abnormal condition detection model and the image to be detected, wherein the condition detection result comprises: a result of whether an abnormal condition exists in the camera to be detected and/or an abnormal condition type, and the current abnormal condition detection model is: a model trained based on positive sample images representing that the corresponding camera has no abnormal condition and negative sample images representing that the corresponding camera has an abnormal condition. The detection scenes of the camera are thereby expanded, and the camera condition of an autonomous driving object can be accurately detected before driving.

Description

Method and device for detecting abnormal conditions of camera
Technical Field
The invention relates to the technical field of image detection, in particular to a method and a device for detecting abnormal conditions of a camera.
Background
Before and during the operation of an autonomous driving object such as a vehicle or a robot, if an abnormal condition such as water staining, dirt, overexposure, or excessive darkness occurs on a camera mounted on the autonomous driving object, the perception result produced by the object's perception system from the images acquired by that camera can deviate significantly from the actual situation, which affects the reliability of the perception system.
To ensure the accuracy of the perception system's results, the condition of the camera of the autonomous driving object needs to be detected first. The current way of detecting abnormal conditions of a camera is generally as follows: while the autonomous driving object is driving, continuous frame images are collected by its camera; one frame is taken as a detection frame and divided into regions; within a detection period, each target detected in the detection frame is tracked with an image tracking algorithm and its motion features are extracted; and whether the camera is in an abnormal state is determined from the extracted motion features of each target.
Although this process does detect the camera condition of the autonomous driving object, it can only rely on images collected by the camera while the autonomous driving object is in a driving state; if the camera is already in an abnormal condition at that point, the detection itself poses a potential safety hazard to the autonomous driving object.
Disclosure of Invention
The invention provides a method and a device for detecting abnormal conditions of a camera, which expand the detection scenes of the camera and allow the camera condition of an autonomous driving object to be accurately detected before driving. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for detecting an abnormal condition of a camera, where the method includes:
obtaining an image shot by a camera to be detected in a static state as an image to be detected;
determining a condition detection result corresponding to the camera to be detected by using a current abnormal condition detection model and the image to be detected, wherein the condition detection result comprises: a result of whether an abnormal condition exists in the camera to be detected and/or an abnormal condition type, and the current abnormal condition detection model is: a model obtained by training based on positive sample images representing that the corresponding camera has no abnormal condition and negative sample images representing that the corresponding camera has an abnormal condition.
Optionally, the abnormal condition type includes at least one of the following types: water staining, stains, black screens, and overexposure.
Optionally, the current abnormal condition detection model includes a single-frame detection sub-model and a multi-frame judgment sub-model; the image to be detected is a continuous frame image;
the step of determining the condition detection result corresponding to the camera to be detected by using the current abnormal condition detection model and the image to be detected comprises the following steps:
for each image to be detected, inputting the image to be detected into the single-frame detection sub-model, and determining detection information corresponding to the image to be detected, wherein the detection information comprises: a probability value, for each abnormal condition type, representing that the camera to be detected has that abnormal condition type, and a probability value representing that the camera to be detected has no abnormal condition;
inputting the detection information corresponding to each image to be detected into the multi-frame judgment sub-model, and counting, as a first number, the number of images to be detected whose detection information represents that the camera to be detected has an abnormal condition; determining, based on the first number, whether the camera to be detected has an abnormal condition; and, when it is determined that the camera to be detected has an abnormal condition, determining the target abnormal condition type corresponding to the camera to be detected based on the probability values corresponding to the abnormal condition types in the detection information representing that the camera to be detected has an abnormal condition, so as to obtain the condition detection result corresponding to the camera to be detected.
Optionally, before the step of determining the condition detection result corresponding to the camera to be detected by using the current abnormal condition detection model and the image to be detected, the method further includes:
acquiring a positive sample image and label information corresponding to the positive sample image, and a negative sample image and label information corresponding to the negative sample image, wherein the positive sample image is an image for representing that the corresponding camera has no abnormal condition, and the negative sample image is an image for representing that the corresponding camera has an abnormal condition;
determining target negative sample images by using a preset data enhancement algorithm together with the positive sample images and/or the negative sample images, wherein the target negative sample images comprise: the negative sample images and images obtained by deforming the negative sample images, and/or the target negative sample images comprise: the negative sample images and images obtained by performing style conversion on each positive sample image;
obtaining an initial abnormal condition detection model corresponding to each group of preset hyper-parameters;
aiming at each group of preset hyper-parameters, training the initial abnormal condition detection model by utilizing a first positive sample image and label information thereof, a first target negative sample image and label information thereof as a training set until the initial abnormal condition detection model reaches a preset convergence condition, and determining an intermediate abnormal condition detection model corresponding to the group of preset hyper-parameters, wherein the first positive sample image is a partial image in the positive sample image, and the first target negative sample image is a partial image in the target negative sample image;
and determining, from all the intermediate abnormal condition detection models, the intermediate abnormal condition detection model with the optimal detection result as the current abnormal condition detection model by using a second positive sample image and label information thereof, and a second target negative sample image and label information thereof, as an evaluation set, wherein the second positive sample image is a partial image in the positive sample image, and the second target negative sample image is a partial image in the target negative sample image.
Optionally, the preset data enhancement algorithm is a pre-established data enhancement model, the pre-established data enhancement model being: a network model obtained by training based on labeled specified positive sample images and specified negative sample images, and the pre-established data enhancement model is used for performing data enhancement on an input image;
the step of determining a target negative sample image using a preset data enhancement algorithm and the positive sample image and/or the negative sample image includes:
inputting each positive sample image into the pre-established data enhancement model to obtain a newly added sample image corresponding to each positive sample image;
and determining the negative sample image and the newly added sample image corresponding to each positive sample image as a target negative sample image.
Optionally, the method further includes:
and outputting abnormal alarm information under the condition that the abnormal condition of the camera to be detected is determined.
Optionally, the method further includes:
and under the condition that the condition detection result corresponding to the camera to be detected is determined to be accurate, updating the current abnormal condition detection model based on the image to be detected and the condition detection result corresponding to the image to be detected.
In a second aspect, an embodiment of the present invention provides an apparatus for detecting an abnormal condition of a camera, where the apparatus includes:
the first obtaining module is configured to obtain an image shot by a camera to be detected in a static state as an image to be detected;
a first determining module, configured to determine a condition detection result corresponding to the camera to be detected by using a current abnormal condition detection model and the image to be detected, where the condition detection result includes: a result of whether an abnormal condition exists in the camera to be detected and/or an abnormal condition type, and the current abnormal condition detection model is: a model obtained by training based on positive sample images representing that the corresponding camera has no abnormal condition and negative sample images representing that the corresponding camera has an abnormal condition.
Optionally, the abnormal condition type includes at least one of the following types: water staining, stains, black screens, and overexposure.
Optionally, the current abnormal condition detection model includes a single-frame detection sub-model and a multi-frame judgment sub-model; the image to be detected is a continuous frame image;
the first determining module is specifically configured to, for each image to be detected, input the image to be detected into the single-frame detection sub-model, and determine detection information corresponding to the image to be detected, where the detection information includes: a probability value, for each abnormal condition type, representing that the camera to be detected has that abnormal condition type, and a probability value representing that the camera to be detected has no abnormal condition;
inputting the detection information corresponding to each image to be detected into the multi-frame judgment sub-model, and counting, as a first number, the number of images to be detected whose detection information represents that the camera to be detected has an abnormal condition; determining, based on the first number, whether the camera to be detected has an abnormal condition; and, when it is determined that the camera to be detected has an abnormal condition, determining the target abnormal condition type corresponding to the camera to be detected based on the probability values corresponding to the abnormal condition types in the detection information representing that the camera to be detected has an abnormal condition, so as to obtain the condition detection result corresponding to the camera to be detected.
Optionally, the apparatus further comprises:
a second obtaining module, configured to obtain a positive sample image and label information corresponding to the positive sample image, and a negative sample image and label information corresponding to the negative sample image before determining a condition detection result corresponding to the camera to be detected by using the current abnormal condition detection model and the image to be detected, where the positive sample image is an image that represents that the corresponding camera does not have an abnormal condition, and the negative sample image is an image that represents that the corresponding camera has an abnormal condition;
a second determining module configured to determine target negative sample images using a preset data enhancement algorithm together with the positive sample images and/or the negative sample images, wherein the target negative sample images comprise: the negative sample images and images obtained by deforming the negative sample images, and/or the target negative sample images comprise: the negative sample images and images obtained by performing style conversion on each positive sample image;
a third obtaining module configured to obtain an initial abnormal condition detection model corresponding to each group of preset hyper-parameters;
a training module configured to train the initial abnormal condition detection model by using a first positive sample image and tag information thereof, a first target negative sample image and tag information thereof as a training set for each group of preset hyper-parameters until the initial abnormal condition detection model reaches a preset convergence condition, and determine an intermediate abnormal condition detection model corresponding to the group of preset hyper-parameters, wherein the first positive sample image is a partial image in the positive sample image, and the first target negative sample image is a partial image in the target negative sample image;
and a third determining module, configured to determine, from all the intermediate abnormal condition detection models, an intermediate abnormal condition detection model with an optimal detection result as a current abnormal condition detection model by using a second positive sample image and label information thereof, and a second target negative sample image and label information thereof as an evaluation set, where the second positive sample image is a partial image in the positive sample image, and the second target negative sample image is a partial image in the target negative sample image.
Optionally, the preset data enhancement algorithm is a pre-established data enhancement model, the pre-established data enhancement model being: a network model obtained by training based on labeled specified positive sample images and specified negative sample images, and the pre-established data enhancement model is used for performing data enhancement on an input image;
the second determining module is specifically configured to input each positive sample image into the pre-established data enhancement model to obtain a new sample image corresponding to each positive sample image;
and determining the negative sample image and the newly added sample image corresponding to each positive sample image as a target negative sample image.
Optionally, the apparatus further comprises:
and the output module is configured to output abnormal alarm information under the condition that the abnormal condition of the camera to be detected is determined.
Optionally, the apparatus further comprises:
and the updating module is configured to update the current abnormal condition detection model based on the image to be detected and the corresponding condition detection result thereof under the condition that the condition detection result corresponding to the camera to be detected is determined to be accurate.
As can be seen from the above, the method and the device for detecting abnormal conditions of a camera provided in the embodiments of the present invention obtain an image captured by a camera to be detected in a static state as an image to be detected, and determine a condition detection result corresponding to the camera to be detected by using a current abnormal condition detection model and the image to be detected, wherein the condition detection result comprises: a result of whether an abnormal condition exists in the camera to be detected and/or an abnormal condition type, and the current abnormal condition detection model is: a model obtained by training based on positive sample images representing that the corresponding camera has no abnormal condition and negative sample images representing that the corresponding camera has an abnormal condition.
By applying the embodiment of the invention, the condition detection result corresponding to the camera to be detected can be determined directly from the current abnormal condition detection model and the image to be detected, including an image to be detected acquired while the camera to be detected is in a static state, which expands the detection scenes for abnormal conditions of the camera to be detected to a certain extent. Moreover, because abnormal condition detection can be performed while the camera to be detected is static, the detection fits the user-experience logic of a pre-drive check for an autonomous vehicle or robot and helps improve the driving safety of the object on which the camera to be detected is mounted. The model is also easy to deploy and easy to extend to multiple abnormal scenes. Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
The innovation points of the embodiment of the invention comprise:
1. The condition detection result corresponding to the camera to be detected can be determined directly from the current abnormal condition detection model and the image to be detected, including an image to be detected acquired while the camera to be detected is in a static state, which expands the detection scenes for abnormal conditions of the camera to be detected to a certain extent. Moreover, because abnormal condition detection can be performed while the camera to be detected is static, the detection fits the user-experience logic of a pre-drive check for an autonomous vehicle or robot and helps improve the driving safety of the object on which the camera to be detected is mounted. The model is also easy to deploy and easy to extend to multiple abnormal scenes.
2. Detection information of the camera to be detected is first determined for each frame of the images to be detected, and the condition of the camera to be detected during the period in which the images to be detected were acquired is then determined from the detection information corresponding to all the images to be detected, which improves the accuracy of the detection result to a certain extent.
3. Data enhancement is performed on the negative sample images to increase their volume and obtain more target negative sample images, so that the trained abnormal condition detection model has a better detection capability, in particular a higher capability of detecting cameras with abnormal conditions; and a current abnormal condition detection model with higher accuracy is selected through validation, so as to ensure the accuracy of the subsequent actual detection process.
4. Newly added sample images with a better expansion effect are obtained by using the pre-established data enhancement model, so as to ensure the accuracy of the detection results of the subsequently trained current abnormal condition detection model.
5. When it is determined that the camera to be detected has an abnormal condition, an early warning is issued, which improves safety.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is to be understood that the drawings in the following description are merely exemplary of some embodiments of the invention. For a person skilled in the art, without inventive effort, further figures can be obtained from these figures.
Fig. 1 is a schematic flow chart of a method for detecting an abnormal condition of a camera according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a model training process according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a device for detecting an abnormal condition of a camera according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The invention provides a method and a device for detecting abnormal conditions of a camera. The following provides a detailed description of embodiments of the invention.
Fig. 1 is a schematic flow chart of a method for detecting an abnormal condition of a camera according to an embodiment of the present invention. The method may comprise the steps of:
s101: and obtaining an image shot by the camera to be detected in a static state as an image to be detected.
The method for detecting the abnormal condition of the camera provided by the embodiment of the invention can be applied to any electronic equipment with computing capability, and the electronic equipment can be a terminal or a server. In one implementation, the functional software for implementing the method may exist in the form of separate client software, or may exist in the form of a plug-in to the currently relevant client software.
The camera to be detected may be, for example, a fisheye camera or a common camera; either type can be used.
The camera to be detected may be a camera provided on an autonomous vehicle, a camera provided on a robot, or any other camera on an object that performs corresponding tasks based on the images the camera acquires. The corresponding tasks may include, but are not limited to: positioning, environment sensing, navigation, and decision making.
The electronic equipment is connected with the camera to be detected and can obtain images collected by the camera to be detected as the images to be detected, where the images to be detected may be one or more frames. In one case, if the frame rate of the camera to be detected is f, the continuous frame images acquired by the camera over a time period t can be obtained as the images to be detected.
In one implementation, the image to be detected may be an image captured by the camera to be detected in a stationary state. In another case, the image to be detected may also be an image acquired by the camera to be detected in a moving state; both are acceptable.
S102: and determining a condition detection result corresponding to the camera to be detected by using the current abnormal condition detection model and the image to be detected.
Wherein, the condition detection result comprises: a result of whether an abnormal condition exists in the camera to be detected and/or an abnormal condition type, and the current abnormal condition detection model is: a model obtained by training based on positive sample images representing that the corresponding camera has no abnormal condition and negative sample images representing that the corresponding camera has an abnormal condition.
In this step, the electronic device may pre-store a current abnormal condition detection model, which is used to detect images collected by cameras in order to determine whether an image represents an abnormal condition of the corresponding camera and, when it does, to determine the specific type of that abnormal condition.
In one embodiment, the exception condition type may include at least one of the following types: water staining, stains, black screens, and overexposure.
After the electronic equipment obtains the images to be detected, it inputs them into the current abnormal condition detection model and determines the condition detection result corresponding to the camera to be detected through the current abnormal condition detection model, wherein the condition detection result comprises: a result of whether an abnormal condition exists in the camera to be detected and/or an abnormal condition type.
By applying the embodiment of the invention, the condition detection result corresponding to the camera to be detected can be determined directly from the current abnormal condition detection model and the image to be detected, including an image to be detected acquired while the camera to be detected is in a static state, which expands the detection scenes for abnormal conditions of the camera to be detected to a certain extent. Moreover, because abnormal condition detection can be performed while the camera to be detected is static, the detection fits the user-experience logic of a pre-drive check for an autonomous vehicle or robot and helps improve the driving safety of the object on which the camera to be detected is mounted. The model is also easy to deploy and easy to extend to multiple abnormal scenes.
In another embodiment of the present invention, the current abnormal condition detection model includes: a single-frame detection sub-model and a multi-frame judgment sub-model, and the images to be detected are continuous frame images.
The step S102 may include the following steps 011-012:
011: and inputting the image to be detected into the single-frame detection sub-model aiming at each image to be detected, and determining the detection information corresponding to the image to be detected.
Wherein the detection information includes: the probability value corresponding to the type of each abnormal condition of the camera to be detected is represented, and the probability value corresponding to the absence of the abnormal condition of the camera to be detected is represented.
012: Inputting the detection information corresponding to each image to be detected into the multi-frame judgment sub-model, and counting, as a first number, the number of images to be detected whose detection information represents that the camera to be detected has an abnormal condition; determining, based on the first number, whether the camera to be detected has an abnormal condition; and, when it is determined that the camera to be detected has an abnormal condition, determining the target abnormal condition type corresponding to the camera to be detected based on the probability values corresponding to the abnormal condition types in the detection information representing that the camera to be detected has an abnormal condition, so as to obtain the condition detection result corresponding to the camera to be detected.
In this implementation, the current abnormal condition detection model may include: a single-frame detection sub-model and a multi-frame judgment sub-model. The single-frame detection submodel can be a convolutional neural network model, and model parameters of the single-frame detection submodel can be obtained through training. The multi-frame judgment sub-model comprises preset judgment logic, and the preset judgment logic is used for determining whether the abnormal condition exists in the camera to be detected or not based on the output result of the single-frame detection sub-model and a preset quantity threshold value, and determining the type of the abnormal condition of the camera to be detected under the condition that the abnormal condition exists in the camera to be detected.
Correspondingly, for each image to be detected, the electronic equipment inputs the image into the single-frame detection sub-model and obtains the detection information corresponding to the image; it then inputs the detection information corresponding to each image to be detected into the multi-frame judgment sub-model; through the preset judgment logic of the multi-frame judgment sub-model, it counts, as the first number, the number of images to be detected whose detection information represents that the camera to be detected has an abnormal condition; it determines, based on the first number, whether the camera to be detected has an abnormal condition; and, when the camera to be detected has an abnormal condition, it further determines the target abnormal condition type corresponding to the camera to be detected based on the probability values corresponding to the abnormal condition types in the detection information representing that the camera to be detected has an abnormal condition.
In one case, the process of determining, based on the first number, whether the camera to be detected has an abnormal condition may be: comparing the first number with a preset quantity threshold; if the first number is not less than the preset quantity threshold, determining that the camera to be detected has an abnormal condition; and if the first number is smaller than the preset quantity threshold, determining that the camera to be detected does not have an abnormal condition.
The detection information comprises the probability values corresponding to each abnormal condition type of the camera to be detected and the probability value corresponding to the camera to be detected having no abnormal condition. In one case, if the probability value corresponding to the camera to be detected having no abnormal condition is smaller than a preset probability threshold, the detection information represents that the camera to be detected has an abnormal condition; otherwise, if that probability value is not smaller than the preset probability threshold, the detection information represents that the camera to be detected has no abnormal condition.
In another case, if the probability value corresponding to the camera to be detected having no abnormal condition is larger than the probability value corresponding to every abnormal condition type, the detection information indicates that the camera to be detected has no abnormal condition; conversely, if at least one of the probability values corresponding to the abnormal condition types is not smaller than the probability value corresponding to the camera to be detected having no abnormal condition, the detection information represents that the camera to be detected has an abnormal condition.
The process of determining the target abnormal condition type corresponding to the camera to be detected based on the probability values corresponding to the abnormal condition types in the detection information representing that the camera to be detected has an abnormal condition may be as follows: for each such image to be detected, determine the abnormal condition type represented by that image based on the probability values corresponding to the abnormal condition types in its detection information; for example, the abnormal condition type corresponding to the largest probability value in the detection information may be taken as the abnormal condition type of the camera to be detected represented by that image. Then, based on the abnormal condition types represented by the images to be detected, count the number of images to be detected corresponding to each abnormal condition type, and determine the abnormal condition type with the largest number of corresponding images to be detected as the target abnormal condition type of the camera to be detected.
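The sketch below shows one way this multi-frame judgment logic could look in code. It uses the second per-frame decision rule (comparing the abnormal-type probabilities with the "no abnormality" probability) and a majority vote over the abnormal frames; the dictionary keys and the threshold value are hypothetical, not fixed by the patent.

from collections import Counter

NUM_THRESHOLD = 5  # assumed preset quantity threshold

def judge_multi_frame(detections):
    """detections: one dict per frame, e.g.
    {"normal": 0.90, "water": 0.03, "stain": 0.02, "black": 0.01, "overexposure": 0.04},
    where "normal" is the probability that no abnormal condition exists."""
    # A frame represents an abnormal condition when some abnormal-type
    # probability is not smaller than the "no abnormal condition" probability.
    abnormal_frames = [
        d for d in detections
        if max(v for k, v in d.items() if k != "normal") >= d["normal"]
    ]
    first_number = len(abnormal_frames)
    if first_number < NUM_THRESHOLD:
        return {"abnormal": False, "type": None}
    # Per-frame abnormal type = type with the largest probability; the target
    # type is the one represented by the most abnormal frames.
    per_frame_types = [
        max(((k, v) for k, v in d.items() if k != "normal"), key=lambda kv: kv[1])[0]
        for d in abnormal_frames
    ]
    target_type, _ = Counter(per_frame_types).most_common(1)[0]
    return {"abnormal": True, "type": target_type}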
In another embodiment of the present invention, as shown in fig. 2, the method may further include the following steps S201 to S205:
s201: and obtaining the positive sample image and the corresponding label information thereof, and the negative sample image and the label information thereof.
The positive sample image is an image representing that the corresponding camera has no abnormal condition, and the negative sample image is an image representing that the corresponding camera has the abnormal condition. In one case, the positive sample image and the negative sample image may be images acquired by a camera corresponding to the positive sample image and the negative sample image in a static state. In another case, the positive sample image and the negative sample image may also be images acquired by the corresponding cameras in a moving state.
S202: and determining a target negative sample image by using a preset data enhancement algorithm and the positive sample image and/or the negative sample image.
Wherein the target negative sample images include: the negative sample images and images obtained by deforming the negative sample images, and/or the target negative sample images include: the negative sample images and images obtained by performing style conversion on each positive sample image.
S203: and obtaining an initial abnormal condition detection model corresponding to each group of preset hyper-parameters.
S204: and aiming at each group of preset hyper-parameters, training the initial abnormal condition detection model by utilizing the first positive sample image and the label information thereof, and the first target negative sample image and the label information thereof which are used as a training set until the initial abnormal condition detection model reaches a preset convergence condition, and determining a middle abnormal condition detection model corresponding to the group of preset hyper-parameters.
The first positive sample image is a partial image in the positive sample image, and the first target negative sample image is a partial image in the target negative sample image.
S205: and determining a middle abnormal condition detection model with the optimal detection result from all the middle abnormal condition detection models by using the second positive sample image and the label information thereof, the second target negative sample image and the label information thereof as the evaluation set, and determining the middle abnormal condition detection model as the current abnormal condition detection model.
The second positive sample image is a partial image in the positive sample image, and the second target negative sample image is a partial image in the target negative sample image.
In order to ensure the accuracy of the camera abnormal condition detection results, the embodiment of the invention also provides a training process for the current abnormal condition detection model. Specifically, the electronic device may first obtain the positive sample images and their corresponding label information, and the negative sample images and their label information, where the label information corresponding to a positive sample image or a negative sample image may be annotated manually or by a specific program.
The process by which the electronic device obtains the positive sample images and the negative sample images may be: obtaining at least one group of continuous frame images collected by at least one sample camera; and, for each group of continuous frame images, annotating each image in the group, determining whether it is a positive sample image or a negative sample image, and determining the label information corresponding to it.
The at least one group of continuous frame images may include continuous frame images acquired by the same sample camera in different time periods, and may also include continuous frame images acquired by different sample cameras in the same time period; both are possible.
Data enhancement is then performed on the negative sample images and/or the positive sample images by using the preset data enhancement algorithm to increase the number of negative sample images, and the target negative sample images are determined by combining these with the original negative sample images. In one case, the preset data enhancement algorithm may be any algorithm in the related art that can change the style of an image so as to increase the number of negative sample images.
In one case, the preset data enhancement algorithm may be an algorithm for performing a preset process on the negative sample image to obtain a new related negative sample image through the negative sample image, and the preset process may include, but is not limited to: copying, color enhancement, scaling, random image interpolation, rotational translation, noise, and the like.
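As a small illustration of this preset-processing route (the style-conversion route is described next), the snippet below applies color enhancement, scaling, rotation/translation, and additive noise to a negative sample image using torchvision; the specific operations and parameter values are assumptions, not prescribed by the patent.

import torch
from torchvision import transforms

def add_gaussian_noise(img_tensor, std=0.02):
    # img_tensor: float tensor in [0, 1]
    return (img_tensor + std * torch.randn_like(img_tensor)).clamp(0.0, 1.0)

# Pipeline producing a deformed copy of a negative sample (PIL image in, tensor out).
augment_negative = transforms.Compose([
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
    transforms.RandomAffine(degrees=10, translate=(0.05, 0.05), scale=(0.9, 1.1)),
    transforms.ToTensor(),
    transforms.Lambda(add_gaussian_noise),
])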
In another case, the preset data enhancement algorithm may be a model for performing style conversion on the positive sample image, so as to obtain a new sample image with a negative sample style by performing style conversion on the positive sample image.
Subsequently, the electronic equipment obtains an initial abnormal condition detection model, where the initial abnormal condition detection model comprises an initial single-frame detection sub-model and a multi-frame judgment sub-model; the initial single-frame detection sub-model is a convolutional neural network model comprising a feature extraction layer and a feature classification layer, and the multi-frame judgment sub-model comprises the preset judgment logic used to determine, based on the output of the single-frame detection sub-model, whether the corresponding camera has an abnormal condition.
In one case, the feature extraction layer includes a plurality of convolution blocks that can perform convolution, pooling, and down-sampling operations on an input image to extract its image features. The image features extracted by the feature extraction layer are then input into the feature classification layer, which determines the detection result of the input image based on those features, that is, a probability value representing that each abnormal condition type exists in the corresponding camera and a probability value representing that no abnormal condition exists in the corresponding camera.
In one case, the input of the feature extraction layer is a 448 × 448 three-channel image, that is, the sample images and the images to be detected are three-channel images of size 448 × 448; the input may be a cylindrical image captured by a fisheye camera or an ordinary image captured by an ordinary pinhole camera.
After the 448 × 448 three-channel image is input into the feature extraction layer, a convolution with a 7 × 7 kernel is first applied to the three-channel image, and the convolution result is fed into a series of convolution blocks. The operations within each block are: first a convolution operation on the input to increase the number of channels, then pooling, i.e. down-sampling, yielding the output of that block, which is the input of the next block. After the convolution blocks, 128 × 7 × 7 feature data are obtained; these feature data are down-sampled with a 7 × 7 convolution kernel and reduced to dimension 1 × N, where N denotes the number of abnormal condition types, giving the output of the feature extraction layer. This output is then fed into the feature classification layer, i.e. the subsequent fully connected layer, to obtain the detection information of the three-channel image.
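A rough PyTorch sketch of this single-frame detection sub-model is given below: a 448 × 448 three-channel input, an initial 7 × 7 convolution, a stack of convolution-plus-downsampling blocks ending in 128 × 7 × 7 features, a 7 × 7 convolution used for the final down-sampling, and a fully connected classification layer. The channel widths, block count, and the output dimensionality (N abnormal types plus one "no abnormality" class) are assumptions for illustration.

import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """One block: convolution to increase the channel count, then pooling (down-sampling)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        return self.pool(self.act(self.conv(x)))

class SingleFrameDetector(nn.Module):
    def __init__(self, num_abnormal_types=4):
        super().__init__()
        self.stem = nn.Conv2d(3, 16, kernel_size=7, stride=2, padding=3)  # 448 -> 224
        self.blocks = nn.Sequential(
            ConvBlock(16, 32),   # 224 -> 112
            ConvBlock(32, 64),   # 112 -> 56
            ConvBlock(64, 96),   # 56  -> 28
            ConvBlock(96, 128),  # 28  -> 14
            nn.MaxPool2d(2),     # 14  -> 7, i.e. 128 x 7 x 7 feature data
        )
        self.reduce = nn.Conv2d(128, num_abnormal_types, kernel_size=7)   # 7x7 kernel -> 1 x N
        self.classifier = nn.Linear(num_abnormal_types, num_abnormal_types + 1)

    def forward(self, x):                        # x: (B, 3, 448, 448)
        feats = self.blocks(self.stem(x))        # (B, 128, 7, 7)
        reduced = self.reduce(feats).flatten(1)  # (B, N)
        return self.classifier(reduced)          # logits for N abnormal types + "no abnormality";
                                                 # softmax at inference gives the probability values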
The training data required for training and evaluating the abnormal condition detection model, namely the numbers of positive sample images and negative sample images, can be on the order of millions; to a certain extent, the more training data is used, the more accurate the detection results of the trained camera abnormal condition detection model become.
The preset hyper-parameters may include, but are not limited to: learning rate, epoch, and batch size. In different groups of preset hyper-parameters, the specific values corresponding to at least one preset hyper-parameter are different, that is, the specific values corresponding to at least one preset hyper-parameter in the initial abnormal condition detection models corresponding to different groups of preset hyper-parameters are different, and the models have the same structure. In one case, the learning rate, epoch, and batch size can all be set empirically, for example: the learning rate may be set to 0.01, the epoch may be set to 10, 15, or 20, the batch size may be set to 100, 150, or 200, etc.
In order to train an abnormal condition detection model with a more accurate detection result, different groups of preset hyper-parameters can be set, wherein each group of preset hyper-parameters corresponds to an initial abnormal condition detection model. The initial abnormal condition detection model comprises an initial single-frame detection sub-model and a multi-frame judgment sub-model. The preset hyper-parameter is the hyper-parameter of the initial single-frame detection submodel, and in the training process, the model parameter of the initial single-frame detection submodel is trained and adjusted, so that the detection result of the single-frame detection submodel after the parameter adjustment is more accurate, and the accuracy of the corresponding abnormal condition detection result is further ensured.
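For illustration only, a set of such groups of preset hyper-parameters might look like the following; the concrete values simply echo the examples given above and are not prescribed by the patent.

# Each group trains its own copy of the initial abnormal condition detection model.
preset_hyperparameter_groups = [
    {"learning_rate": 0.01, "epochs": 10, "batch_size": 100},
    {"learning_rate": 0.01, "epochs": 15, "batch_size": 150},
    {"learning_rate": 0.01, "epochs": 20, "batch_size": 200},
]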
The electronic device may first divide the positive sample images and the target negative sample images into a portion used as a training set and a portion used as an evaluation set, and in one case may further set aside a number of positive sample images and target negative sample images as a test set. Images belonging to the same group of continuous frame images are assigned to the same data set, where the data sets comprise the training set, the test set, and the evaluation set.
For each group of preset hyper-parameters, each group of continuous frame images in the training set is randomly input, as a unit, into the initial single-frame detection sub-model corresponding to that group of preset hyper-parameters; the feature extraction layer of the initial single-frame detection sub-model performs feature extraction on each sample image in the training set to obtain its image features; and the feature classification layer of the initial single-frame detection sub-model determines, from those image features, the first detection information corresponding to each sample image in the training set.
For each group of continuous frame images in the training set, the current detection information corresponding to each image in the group is input into the multi-frame judgment sub-model, which counts, as a second number, the number of images whose current detection information represents that the corresponding camera has an abnormal condition, and determines, based on the second number, whether the corresponding camera has an abnormal condition, thereby obtaining the second detection information of that group of continuous frame images in the training set.
For each group of preset hyper-parameters, the current loss value of the model is calculated with a cross-entropy loss function from the first detection information, the second detection information, and the label information corresponding to a number of groups of continuous frame images in the training set equal to the batch size. The calculated current loss value is used to adjust the model parameters of the feature extraction layer and the feature classification layer of the initial single-frame detection sub-model corresponding to that group of preset hyper-parameters; the process then returns to the step of randomly inputting the first positive sample images and their label information and the first target negative sample images and their label information, as the training set and in units of groups of continuous frame images, into the initial single-frame detection sub-model corresponding to that group of preset hyper-parameters, and iterates continuously. Through continuous iteration, the subsequently calculated current loss values become smaller and smaller, and when the initial abnormal condition detection model reaches the preset convergence condition, the intermediate abnormal condition detection model corresponding to that group of preset hyper-parameters is determined.
In one case, reaching the preset convergence condition may mean that the number of iterations reaches the number corresponding to the epoch setting. If there is a test set, reaching the preset convergence condition may mean that the number of iterations reaches the number corresponding to the epoch setting and that the accuracy of the trained abnormal condition detection model is determined, on the positive sample images and target negative sample images used as the test set, to meet the requirement, that is, the calculated current loss value is smaller than a preset threshold.
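The following is a hedged sketch of this per-group training loop: each group of preset hyper-parameters trains its own copy of the single-frame detection sub-model with a cross-entropy loss until the epoch count (the convergence condition mentioned above) is reached. The dataset and model objects, the SGD optimizer, and the restriction to the per-image cross-entropy term are assumptions for illustration.

# Assumed: `model` maps a (B, 3, 448, 448) batch to (B, num_classes) logits, and
# `train_dataset` yields (image_tensor, class_index) pairs, where the class is one
# of the abnormal condition types or "no abnormality".
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_one_group(model, train_dataset, hp):
    loader = DataLoader(train_dataset, batch_size=hp["batch_size"], shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=hp["learning_rate"])
    criterion = nn.CrossEntropyLoss()
    for _ in range(hp["epochs"]):                    # epoch count as the preset convergence condition
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)  # current loss value
            loss.backward()
            optimizer.step()                         # adjust feature extraction / classification layers
    return model                                     # the intermediate abnormal condition detection model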
After the intermediate abnormal condition detection model corresponding to each group of preset hyper-parameters is obtained, the intermediate abnormal condition detection models are evaluated with the second positive sample images and their label information and the second target negative sample images and their label information, which serve as the evaluation set. For each intermediate abnormal condition detection model, the second positive sample images and their label information and the second target negative sample images and their label information are input, in units of groups of continuous frame images, into that intermediate abnormal condition detection model, obtaining third detection information corresponding to each sample image of the evaluation set and fourth detection information corresponding to each group of continuous frame images of the evaluation set, where the fourth detection information corresponding to a group of continuous frame images is determined based on the third detection information corresponding to each sample image in that group. The sample images of the evaluation set include the second positive sample images and the second target negative sample images.
For each intermediate abnormal condition detection model, the precision and recall of that model are determined based on the third detection information and fourth detection information corresponding to each group of continuous frame images of the evaluation set and the label information corresponding to each group of continuous frame images, and a corresponding PR (precision-recall) curve is drawn.
The intermediate abnormal condition detection model with the optimal detection result is then determined as the current abnormal condition detection model based on the PR curve corresponding to each intermediate abnormal condition detection model. In one case, at the same recall determined from the PR curves of the intermediate abnormal condition detection models, the intermediate abnormal condition detection model with the highest corresponding precision is the one with the optimal detection result.
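As a rough illustration of this PR-curve comparison, the sketch below scores each intermediate model by its precision at a common recall level on the evaluation set and keeps the best one; the specific recall level and the use of scikit-learn are assumptions, not part of the patent.

from sklearn.metrics import precision_recall_curve

def precision_at_recall(y_true, abnormal_scores, recall_level=0.9):
    # y_true: 1 if the group of continuous frames came from a camera with an
    # abnormal condition, else 0; abnormal_scores: the model's abnormality score per group.
    precision, recall, _ = precision_recall_curve(y_true, abnormal_scores)
    feasible = precision[recall >= recall_level]
    return float(feasible.max()) if feasible.size else 0.0

def select_current_model(intermediate_models, y_true, scores_per_model):
    best = max(range(len(intermediate_models)),
               key=lambda i: precision_at_recall(y_true, scores_per_model[i]))
    return intermediate_models[best]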
Wherein, the third detection information corresponding to each group of continuous frame images comprises the third detection information corresponding to each image in that group of continuous frame images, and the label information corresponding to each group of continuous frame images comprises the label information corresponding to each image in that group of continuous frame images.
The first detection information and the third detection information include information on whether the corresponding image represents that the corresponding camera has an abnormal condition, that is, whether the camera had an abnormal condition when acquiring the image; the second detection information and the fourth detection information include information on whether the camera corresponding to each group of continuous frame images had an abnormal condition during the time period in which that group of continuous frame images was acquired.
The label information corresponding to each sample image includes: annotated information on whether the sample image represents that its corresponding camera has an abnormal condition, information on whether the camera corresponding to the group of continuous frame images to which the sample image belongs had an abnormal condition during the time period in which that group was acquired, and the abnormal condition type of the corresponding camera when the sample image represents that the camera has an abnormal condition. The sample images include the positive sample images and the negative sample images.
The sample images as the training set may contain images different from the sample images as the evaluation set.
In another embodiment of the present invention, the preset data enhancement algorithm is a pre-established data enhancement model, the pre-established data enhancement model being a network model obtained by training on labeled specified positive sample images and specified negative sample images; the pre-established data enhancement model is used for performing data enhancement on an input image.
the S202 may include the following steps:
and inputting each positive sample image into a pre-established data enhancement model to obtain a newly added sample image corresponding to each positive sample image.
And determining the negative sample image and the newly added sample image corresponding to each positive sample image as a target negative sample image.
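A minimal sketch of these two steps is shown below, assuming the pre-established data enhancement model is available as a PyTorch module that maps a batch of positive-style images to negative-style images; the function and variable names are hypothetical.

import torch

@torch.no_grad()
def build_target_negative_samples(enhancement_model, positive_images, negative_images):
    # positive_images / negative_images: tensors of shape (B, 3, H, W)
    enhancement_model.eval()
    newly_added = enhancement_model(positive_images)          # newly added, negative-style images
    return torch.cat([negative_images, newly_added], dim=0)   # target negative sample set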
In this implementation, the acquisition scenes corresponding to the specified positive sample images and specified negative sample images are similar to the acquisition scenes corresponding to the positive sample images and negative sample images. For example, if the acquisition scene corresponding to the positive sample images and negative sample images is a rainy scene, the positive sample images are images acquired in rainy weather by a camera without water staining and the negative sample images are images acquired in rainy weather by a camera with water staining; correspondingly, the acquisition scene corresponding to the specified positive sample images and specified negative sample images is also rainy, the specified positive sample images are images acquired in rainy weather by a camera without water staining, and the specified negative sample images are images acquired in rainy weather by a camera with water staining.
Another example is: the acquisition scenes corresponding to the positive sample image and the negative sample image are sunny days, the positive sample image is an image acquired by a camera which is in a non-water-hanging state in rainy days, and the negative sample image is an image acquired by the camera which is in a water-hanging state in rainy days; correspondingly, the acquisition scene corresponding to the designated positive sample image and the designated negative sample image is rainy, the image acquired by the camera without stains in a sunny day of the designated positive sample image is designated, and the image acquired by the camera with stains in the sunny day of the designated negative sample image is designated. And so on.
In one case, the pre-established data enhancement model is a CycleGAN (Cycle-Consistent Generative Adversarial Network) model, which includes a first generator, a first discriminator, a second generator, and a second discriminator. In the process of training the pre-established data enhancement model, an initial data enhancement model is first obtained; a designated positive sample image and its calibration information are randomly selected and input into the first generator of the initial data enhancement model to obtain a false negative sample image with the negative sample style; the second discriminator is used to determine whether the false negative sample image matches the learned negative sample style and to output a corresponding real/fake score; a designated negative sample image and its calibration information are randomly selected and input into the second generator of the initial data enhancement model to obtain a false positive sample image with the positive sample style; and the first discriminator is used to determine whether the false positive sample image matches the learned positive sample style and to output a corresponding real/fake score.
The current loss value of the initial data enhancement model is then determined using the designated positive sample image, its calibration information and the real/fake score corresponding to the false negative sample image, together with the designated negative sample image, its calibration information and the real/fake score corresponding to the false positive sample image. The parameters of the first generator, the first discriminator, the second generator and the second discriminator are adjusted using the current loss value, and the process returns to the step of randomly inputting a designated positive sample image and its calibration information into the first generator of the initial data enhancement model to obtain a false negative sample image with the negative sample style. When the number of iterations reaches a preset number, the pre-established data enhancement model is obtained.
The loss functions used when calculating the loss value include: the generator loss functions, namely the first generator loss function $L_{G_1}$ and the second generator loss function $L_{G_2}$, the cycle-consistency loss function $L_{cyc}$, and the identity loss function $L_{id}$. In one case, the generator loss functions may be multi-class cross-entropy loss functions, and the cycle-consistency loss function and the identity loss function may be L1 distance loss functions. The formula for calculating the current loss value can be expressed as:

$$L = L_{G_1} + L_{G_2} + L_{cyc} + L_{id}$$

where

$$L_{G_1} = \frac{1}{n}\sum_{i=1}^{n}\log D_Y(y_i) + \frac{1}{n}\sum_{i=1}^{n}\log\bigl(1 - D_Y(G(x_i))\bigr)$$

$$L_{G_2} = \frac{1}{n}\sum_{i=1}^{n}\log D_X(x_i) + \frac{1}{n}\sum_{i=1}^{n}\log\bigl(1 - D_X(F(y_i))\bigr)$$

$$L_{cyc} = \frac{1}{n}\sum_{i=1}^{n}\bigl\|F(G(x_i)) - x_i\bigr\|_1 + \frac{1}{n}\sum_{i=1}^{n}\bigl\|G(F(y_i)) - y_i\bigr\|_1$$

$$L_{id} = \frac{1}{n}\sum_{i=1}^{n}\bigl\|G(y_i) - y_i\bigr\|_1 + \frac{1}{n}\sum_{i=1}^{n}\bigl\|F(x_i) - x_i\bigr\|_1$$

where n represents the number of sample images for which the loss value is calculated; x represents a designated positive sample image in the positive sample style and y represents a designated negative sample image in the negative sample style; $G(x)$ represents the false negative sample image with the negative sample style obtained by inputting the designated positive sample image x into the first generator; $D_Y(G(x))$ represents the score output by the second discriminator after the false negative sample image corresponding to x is input into it, i.e. the difference between the false negative sample image and the learned negative sample style; $F(y)$ represents the false positive sample image with the positive sample style obtained by inputting the designated negative sample image y into the second generator; $D_X(F(y))$ represents the score output by the first discriminator after the false positive sample image corresponding to y is input into it, i.e. the difference between the false positive sample image and the learned positive sample style; $D_Y(y)$ represents the difference between the designated negative sample image y and the learned negative sample style when y is input into the second discriminator; and $D_X(x)$ represents the difference between the designated positive sample image x and the learned positive sample style when x is input into the first discriminator. Through the identity loss term $L_{id}$, the influence of hue can be eliminated.
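A compact code sketch of this combined loss is given below, using PyTorch with binary adversarial terms and L1 cycle/identity terms. Following the patent's description, a single combined loss value is computed and can then be used to adjust all four sub-networks; the weighting coefficients, the binary (rather than multi-class) adversarial form, and the stand-in convolutional "networks" in the usage example are assumptions made for this sketch, not the patent's exact formulation.

```python
# Sketch of the combined CycleGAN-style loss described above (PyTorch).
# G: first generator (positive -> negative style), F: second generator,
# d_x / d_y: first / second discriminators. lam_cyc / lam_id are assumed weights.
import torch
import torch.nn.functional as Fn

def enhancement_loss(G, F, d_x, d_y, x, y, lam_cyc=10.0, lam_id=5.0):
    fake_y = G(x)                      # false negative-style image from positive sample x
    fake_x = F(y)                      # false positive-style image from negative sample y

    # adversarial terms: discriminators score real vs. generated images
    real = lambda s: Fn.binary_cross_entropy_with_logits(s, torch.ones_like(s))
    fake = lambda s: Fn.binary_cross_entropy_with_logits(s, torch.zeros_like(s))
    loss_g1 = real(d_y(y)) + fake(d_y(fake_y))   # first generator / second discriminator
    loss_g2 = real(d_x(x)) + fake(d_x(fake_x))   # second generator / first discriminator

    # cycle-consistency: translating there and back should recover the input
    loss_cyc = Fn.l1_loss(F(fake_y), x) + Fn.l1_loss(G(fake_x), y)
    # identity term: an image already in the target style should change little
    loss_id = Fn.l1_loss(G(y), y) + Fn.l1_loss(F(x), x)

    return loss_g1 + loss_g2 + lam_cyc * loss_cyc + lam_id * loss_id

# Tiny runnable example with stand-in "networks" on 4x4 single-channel images.
if __name__ == "__main__":
    conv = lambda: torch.nn.Conv2d(1, 1, 3, padding=1)
    gen_g, gen_f, disc_x, disc_y = conv(), conv(), conv(), conv()
    x = torch.rand(2, 1, 4, 4)
    y = torch.rand(2, 1, 4, 4)
    print(enhancement_loss(gen_g, gen_f, disc_x, disc_y, x, y).item())
```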
Subsequently, each positive sample image is input into the first generator of the pre-established data enhancement model to obtain a newly added sample image with the negative sample style. A label indicating an image acquired by a camera in the abnormal state is then added to the newly added sample image corresponding to each positive sample image; that is, each newly added sample image is labeled as a negative sample image.
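A short sketch of this generation-and-labeling step, assuming the trained first generator is available as a PyTorch module and that the positive samples are already loaded as tensors; the dictionary structure used for the new samples is an illustrative assumption:

```python
# Sketch: run each positive sample image through the trained first generator
# and record the output as an additional negative sample.
import torch

@torch.no_grad()
def generate_new_negatives(first_generator, positive_images, device="cpu"):
    """positive_images: iterable of CxHxW float tensors of positive samples."""
    first_generator.eval().to(device)
    new_negatives = []
    for img in positive_images:
        fake_negative = first_generator(img.unsqueeze(0).to(device)).squeeze(0).cpu()
        # the generated image is labelled as a negative sample,
        # i.e. as if acquired by a camera in the abnormal state
        new_negatives.append({"image": fake_negative, "label": "negative"})
    return new_negatives

# Usage: the target negative samples are the original negatives plus the
# newly generated images, e.g.
# target_negatives = original_negatives + generate_new_negatives(G, positives)
```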
In another embodiment of the present invention, in order to improve the safety of an object, such as an autonomous vehicle, where the camera to be detected is located, the method may further include:
and outputting abnormal alarm information under the condition that the abnormal condition of the camera to be detected is determined.
The abnormal alarm information may be sound alarm information, text alarm information, light change alarm information, or other information which can attract the attention of the user.
In another embodiment of the present invention, the method may further include:
and under the condition that the condition detection result corresponding to the camera to be detected is determined to be accurate, updating the current abnormal condition detection model based on the image to be detected and the condition detection result corresponding to the image to be detected.
To ensure the accuracy of the abnormal condition detection results for the camera, the current abnormal condition detection model needs to be updated periodically or from time to time. In this embodiment, when it is determined that the condition detection result corresponding to the camera to be detected is accurate, the current abnormal condition detection model is updated using the image to be detected and its corresponding condition detection result. Alternatively, when it is determined that the condition detection result corresponding to the camera to be detected is accurate and the number of accumulated historical detection images and their corresponding condition detection results reaches a preset threshold, the current abnormal condition detection model is updated using the historical detection images and their corresponding condition detection results.
The historical detection images may be images acquired by the camera to be detected or by another camera, and the condition detection results corresponding to the historical detection images may be results determined based on the current abnormal condition detection model or based on another algorithm for detecting whether an abnormal condition occurs in a camera; either is possible.
In one implementation, the model parameters of the current abnormal condition detection model are optimized using the image to be detected and its corresponding condition detection result to obtain an optimized abnormal condition detection model; the accuracy of the optimized abnormal condition detection model and of the current abnormal condition detection model at the same recall rate is then verified using a third positive sample image and its label information and a third negative sample image and its label information. If, at the same recall rate, the accuracy of the optimized abnormal condition detection model is not lower than that of the current abnormal condition detection model, the optimized abnormal condition detection model is used as the new current abnormal condition detection model for subsequent detection of camera abnormal conditions; if, at the same recall rate, the accuracy of the optimized abnormal condition detection model is lower than that of the current abnormal condition detection model, the current abnormal condition detection model continues to be used for subsequent detection of camera abnormal conditions.
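The comparison between the optimized model and the current model at the same recall rate could be sketched as follows; the evaluation helper passed in and the reference recall value are placeholder assumptions:

```python
# Sketch: keep the optimized model only if its accuracy (precision) at the
# shared reference recall is not lower than the current model's.
def choose_updated_model(current_model, optimized_model, eval_set,
                         precision_at_recall, target_recall=0.9):
    """precision_at_recall(model, eval_set, target_recall) -> float (assumed helper)."""
    p_current = precision_at_recall(current_model, eval_set, target_recall)
    p_optimized = precision_at_recall(optimized_model, eval_set, target_recall)
    if p_optimized >= p_current:
        return optimized_model      # becomes the new current detection model
    return current_model            # keep using the current detection model
```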
The third positive sample image may include some of the positive sample images, or may include other obtained images representing that the corresponding camera has no abnormal condition; the third negative sample image may include some of the negative sample images, or may include other obtained images representing that the corresponding camera has an abnormal condition.
In another implementation, the model parameters of the current abnormal condition detection model may be optimized using the image to be detected and its corresponding condition detection result to obtain an optimized abnormal condition detection model, and the intermediate abnormal condition detection models corresponding to the other groups of preset hyper-parameters may likewise be optimized using the image to be detected and its corresponding condition detection result to obtain other optimized abnormal condition detection models. Further, the accuracy of the optimized abnormal condition detection model, the current abnormal condition detection model and the other optimized abnormal condition detection models at the same recall rate is verified using the third positive sample image and its label information and the third negative sample image and its label information, and the abnormal condition detection model with the highest accuracy at the same recall rate is used as the new current abnormal condition detection model.
Corresponding to the foregoing method embodiment, an embodiment of the present invention provides an apparatus for detecting an abnormal condition of a camera, where as shown in fig. 3, the apparatus may include:
a first obtaining module 310 configured to obtain an image captured by a camera to be detected in a static state as an image to be detected;
a first determining module 320, configured to determine a condition detection result corresponding to the camera to be detected by using the current abnormal condition detection model and the image to be detected, where the condition detection result includes: a result of whether the camera to be detected has an abnormal condition and/or the type of the abnormal condition, and the current abnormal condition detection model is: a model obtained by training based on positive sample images representing that the corresponding camera has no abnormal condition and negative sample images representing that the corresponding camera has an abnormal condition.
By applying the embodiment of the invention, the condition detection result corresponding to the camera to be detected can be determined directly using the current abnormal condition detection model and the image to be detected acquired while the camera to be detected is in a static state, which expands the detection scenarios for abnormal conditions of the camera to be detected to a certain extent. Because abnormal condition detection can be performed on a camera in a static state, the method fits the user-experience logic of pre-driving checks for autonomous vehicles or robots and helps improve the driving safety of the object on which the camera to be detected is located. The model is also easy to deploy and easy to extend to multiple abnormal scenarios.
In another embodiment of the present invention, the abnormal situation type includes at least one of the following types: water staining, stains, black screens, and overexposure.
In another embodiment of the present invention, the current abnormal situation detection model includes a single-frame detection sub-model and a multi-frame judgment sub-model; the image to be detected is a continuous frame image;
The first determining module 320 is specifically configured to: for each image to be detected, input the image to be detected into the single-frame detection sub-model and determine detection information corresponding to the image to be detected, where the detection information includes: a probability value corresponding to each abnormal condition type of the camera to be detected and a probability value representing that the camera to be detected has an abnormal condition;
input the detection information corresponding to each image to be detected into the multi-frame judgment sub-model, and count, as a first number, the number of images to be detected whose corresponding detection information represents that the camera to be detected has an abnormal condition; determine, based on the first number, whether the camera to be detected has an abnormal condition; and, when it is determined that the camera to be detected has an abnormal condition, determine the target abnormal condition type corresponding to the camera to be detected based on the probability values corresponding to the abnormal condition types in the detection information representing that the camera to be detected has an abnormal condition, so as to obtain the condition detection result corresponding to the camera to be detected.
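As an illustration of this multi-frame judgment logic, a minimal sketch is given below; the probability threshold, the frame-count threshold and the field names are assumptions made for this sketch:

```python
# Sketch of the multi-frame judgment: count how many frames are flagged as
# abnormal, decide on the camera condition, and pick the most likely abnormal
# type from the flagged frames. Thresholds and field names are illustrative.
from collections import defaultdict

def judge_camera(per_frame_info, min_abnormal_frames=3):
    """per_frame_info: list of dicts like
       {"abnormal_prob": float, "type_probs": {"water": p, "stain": p, ...}}"""
    abnormal_frames = [f for f in per_frame_info if f["abnormal_prob"] >= 0.5]
    first_number = len(abnormal_frames)          # the "first number" in the text
    if first_number < min_abnormal_frames:
        return {"abnormal": False, "type": None}

    # accumulate per-type probabilities over the flagged frames
    type_scores = defaultdict(float)
    for frame in abnormal_frames:
        for abnormal_type, prob in frame["type_probs"].items():
            type_scores[abnormal_type] += prob
    target_type = max(type_scores, key=type_scores.get)
    return {"abnormal": True, "type": target_type}

frames = [
    {"abnormal_prob": 0.9, "type_probs": {"water": 0.7, "stain": 0.2}},
    {"abnormal_prob": 0.8, "type_probs": {"water": 0.6, "stain": 0.3}},
    {"abnormal_prob": 0.2, "type_probs": {"water": 0.1, "stain": 0.1}},
    {"abnormal_prob": 0.95, "type_probs": {"water": 0.8, "stain": 0.1}},
]
print(judge_camera(frames))
```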
In another embodiment of the present invention, the apparatus further comprises:
a second obtaining module (not shown in the figure), configured to obtain a positive sample image and label information corresponding to the positive sample image, and a negative sample image and label information corresponding to the negative sample image before determining a condition detection result corresponding to the camera to be detected by using the current abnormal condition detection model and the image to be detected, where the positive sample image is an image representing that the corresponding camera does not have an abnormal condition, and the negative sample image is an image representing that the corresponding camera has an abnormal condition;
a second determining module (not shown in the figures) configured to determine a target negative sample image using a preset data enhancement algorithm and the positive sample image and/or the negative sample image, wherein the target negative sample image comprises: the negative sample image and the image after deforming the negative sample image, and/or the target negative sample image comprises: the negative sample image and each positive sample image are subjected to style conversion to obtain an image;
a third obtaining module (not shown in the figure) configured to obtain an initial abnormal condition detection model corresponding to each set of the preset hyper-parameters;
a training module (not shown in the drawings) configured to train the initial abnormal condition detection model by using a first positive sample image and tag information thereof, a first target negative sample image and tag information thereof as a training set for each set of preset hyper-parameters until the initial abnormal condition detection model reaches a preset convergence condition, and determine an intermediate abnormal condition detection model corresponding to the set of preset hyper-parameters, wherein the first positive sample image is a partial image in the positive sample image, and the first target negative sample image is a partial image in the target negative sample image;
and a third determining module (not shown in the figures) configured to determine, from all the intermediate abnormal condition detection models, an intermediate abnormal condition detection model with an optimal detection result as a current abnormal condition detection model by using a second positive sample image and label information thereof, and a second target negative sample image and label information thereof as an evaluation set, wherein the second positive sample image is a partial image in the positive sample image, and the second target negative sample image is a partial image in the target negative sample image.
In another embodiment of the present invention, the preset data enhancement algorithm is a pre-established data enhancement model, and the pre-established data enhancement model is: the network model is obtained by training based on the marked appointed positive sample image and the appointed negative sample image, and the pre-established data enhancement model is used for performing data enhancement on the input image;
the second determination module (not shown in the figure) is specifically configured to
Inputting each positive sample image into the pre-established data enhancement model to obtain a newly added sample image corresponding to each positive sample image;
and determining the negative sample image and the newly added sample image corresponding to each positive sample image as a target negative sample image.
In another embodiment of the present invention, the apparatus further comprises:
and the output module (not shown in the figure) is configured to output abnormal alarm information under the condition that the abnormal condition of the camera to be detected is determined.
In another embodiment of the present invention, the apparatus further comprises:
an updating module (not shown in the figure), configured to update the current abnormal condition detection model based on the image to be detected and its corresponding condition detection result when it is determined that the condition detection result corresponding to the camera to be detected is accurate.
The apparatus embodiment corresponds to the method embodiment and has the same technical effects as the method embodiment; for the specific description, refer to the method embodiment. The apparatus embodiment is obtained based on the method embodiment, and details can be found in the method embodiment section, which are not repeated here. Those of ordinary skill in the art will understand that the figures are merely schematic representations of one embodiment, and the modules or flows in the figures are not necessarily required to practice the present invention.
Those of ordinary skill in the art will understand that: modules in the devices in the embodiments may be distributed in the devices in the embodiments according to the description of the embodiments, or may be located in one or more devices different from the embodiments with corresponding changes. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for detecting abnormal conditions of a camera, the method comprising:
obtaining an image shot by a camera to be detected in a static state as an image to be detected;
determining a condition detection result corresponding to the camera to be detected by using a current abnormal condition detection model and the image to be detected, wherein the condition detection result comprises: a result of whether the camera to be detected has an abnormal condition and/or a type of the abnormal condition, and the current abnormal condition detection model is: a model obtained by training based on a positive sample image which represents that the corresponding camera has no abnormal condition and a negative sample image which represents that the corresponding camera has an abnormal condition.
2. The method of claim 1, wherein the abnormal-condition type comprises at least one of: water staining, stains, black screens, and overexposure.
3. The method of claim 1, wherein the current abnormal-condition detection model includes a single-frame detection submodel and a multi-frame judgment submodel; the image to be detected is a continuous frame image;
the step of determining the condition detection result corresponding to the camera to be detected by using the current abnormal condition detection model and the image to be detected comprises the following steps:
for each image to be detected, inputting the image to be detected into the single-frame detection sub-model, and determining detection information corresponding to the image to be detected, wherein the detection information comprises: representing the probability value corresponding to each abnormal condition type of the camera to be detected and representing the probability value corresponding to the abnormal condition of the camera to be detected;
inputting the detection information corresponding to each image to be detected into the multi-frame judgment sub-model, and counting the number of the images to be detected, which are used for representing the abnormal condition of the camera to be detected, of the corresponding detection information as a first number; and determining whether the camera to be detected has abnormal conditions or not based on the first quantity, and determining the target abnormal condition type corresponding to the camera to be detected based on the probability values corresponding to the abnormal condition types in the detection information representing the abnormal conditions of the camera to be detected under the condition that the abnormal conditions of the camera to be detected are determined, so as to obtain the condition detection result corresponding to the camera to be detected.
4. The method as claimed in claim 1, wherein before the step of determining the condition detection result corresponding to the camera to be detected by using the current abnormal condition detection model and the image to be detected, the method further comprises:
acquiring a positive sample image and label information corresponding to the positive sample image, and a negative sample image and label information corresponding to the negative sample image, wherein the positive sample image is an image for representing that the corresponding camera has no abnormal condition, and the negative sample image is an image for representing that the corresponding camera has an abnormal condition;
determining a target negative sample image by using a preset data enhancement algorithm and the positive sample image and/or the negative sample image, wherein the target negative sample image comprises: the negative sample image and the image after deforming the negative sample image, and/or the target negative sample image comprises: the negative sample image and each positive sample image are subjected to style conversion to obtain an image;
obtaining an initial abnormal condition detection model corresponding to each group of preset hyper-parameters;
aiming at each group of preset hyper-parameters, training the initial abnormal condition detection model by utilizing a first positive sample image and label information thereof, a first target negative sample image and label information thereof as a training set until the initial abnormal condition detection model reaches a preset convergence condition, and determining an intermediate abnormal condition detection model corresponding to the group of preset hyper-parameters, wherein the first positive sample image is a partial image in the positive sample image, and the first target negative sample image is a partial image in the target negative sample image;
and determining, from all the intermediate abnormal condition detection models, the intermediate abnormal condition detection model with the optimal detection result as the current abnormal condition detection model by using a second positive sample image and label information thereof and a second target negative sample image and label information thereof as an evaluation set, wherein the second positive sample image is a partial image in the positive sample image, and the second target negative sample image is a partial image in the target negative sample image.
5. The method of claim 4, wherein the predetermined data enhancement algorithm is a pre-established data enhancement model that is: the network model is obtained by training based on the marked appointed positive sample image and the appointed negative sample image, and the pre-established data enhancement model is used for performing data enhancement on the input image;
the step of determining a target negative sample image using a preset data enhancement algorithm and the positive sample image and/or the negative sample image includes:
inputting each positive sample image into the pre-established data enhancement model to obtain a newly added sample image corresponding to each positive sample image;
and determining the negative sample image and the newly added sample image corresponding to each positive sample image as a target negative sample image.
6. The method of any one of claims 1-5, further comprising:
and outputting abnormal alarm information under the condition that the abnormal condition of the camera to be detected is determined.
7. The method of any one of claims 1-5, further comprising:
and under the condition that the condition detection result corresponding to the camera to be detected is determined to be accurate, updating the current abnormal condition detection model based on the image to be detected and the condition detection result corresponding to the image to be detected.
8. An apparatus for detecting an abnormal condition of a camera, the apparatus comprising:
the first obtaining module is configured to obtain an image shot by a camera to be detected in a static state as an image to be detected;
a first determining module, configured to determine a condition detection result corresponding to the camera to be detected by using a current abnormal condition detection model and the image to be detected, wherein the condition detection result comprises: a result of whether the camera to be detected has an abnormal condition and/or a type of the abnormal condition, and the current abnormal condition detection model is: a model obtained by training based on a positive sample image which represents that the corresponding camera has no abnormal condition and a negative sample image which represents that the corresponding camera has an abnormal condition.
9. The apparatus of claim 8, wherein the abnormal-condition type comprises at least one of: water staining, stains, black screens, and overexposure.
10. The apparatus of claim 8, wherein the current abnormal-condition detection model comprises a single-frame detection submodel and a multi-frame judgment submodel; the image to be detected is a continuous frame image;
the first determining module is specifically configured to, for each image to be detected, input the image to be detected into the single-frame detection sub-model, and determine detection information corresponding to the image to be detected, where the detection information includes: representing the probability value corresponding to each abnormal condition type of the camera to be detected and representing the probability value corresponding to the abnormal condition of the camera to be detected;
inputting the detection information corresponding to each image to be detected into the multi-frame judgment sub-model, and counting the number of the images to be detected, which are used for representing the abnormal condition of the camera to be detected, of the corresponding detection information as a first number; and determining whether the camera to be detected has abnormal conditions or not based on the first quantity, and determining the target abnormal condition type corresponding to the camera to be detected based on the probability values corresponding to the abnormal condition types in the detection information representing the abnormal conditions of the camera to be detected under the condition that the abnormal conditions of the camera to be detected are determined, so as to obtain the condition detection result corresponding to the camera to be detected.
CN202110402956.4A 2021-04-15 2021-04-15 Method and device for detecting abnormal conditions of camera Active CN112804522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110402956.4A CN112804522B (en) 2021-04-15 2021-04-15 Method and device for detecting abnormal conditions of camera

Publications (2)

Publication Number Publication Date
CN112804522A true CN112804522A (en) 2021-05-14
CN112804522B CN112804522B (en) 2021-07-20

Family

ID=75811405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110402956.4A Active CN112804522B (en) 2021-04-15 2021-04-15 Method and device for detecting abnormal conditions of camera

Country Status (1)

Country Link
CN (1) CN112804522B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170109590A1 (en) * 2014-05-27 2017-04-20 Robert Bosch Gmbh Detection, identification, and mitigation of lens contamination for vehicle mounted camera systems
KR101848312B1 (en) * 2017-03-28 2018-04-13 (주) 모토텍 Sensor fusion system for autonomous emergency braking system in car
CN108898592A (en) * 2018-06-22 2018-11-27 北京小米移动软件有限公司 Prompt method and device, the electronic equipment of camera lens degree of fouling
CN110855976A (en) * 2019-10-08 2020-02-28 南京云计趟信息技术有限公司 Camera abnormity detection method and device and terminal equipment
CN111932596A (en) * 2020-09-27 2020-11-13 深圳佑驾创新科技有限公司 Method, device and equipment for detecting camera occlusion area and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116320387A (en) * 2023-04-06 2023-06-23 深圳博时特科技有限公司 Camera module detection system and detection method
CN116320387B (en) * 2023-04-06 2023-09-29 深圳博时特科技有限公司 Camera module detection system and detection method
CN116996665A (en) * 2023-09-28 2023-11-03 深圳天健电子科技有限公司 Intelligent monitoring method, device, equipment and storage medium based on Internet of things
CN116996665B (en) * 2023-09-28 2024-01-26 深圳天健电子科技有限公司 Intelligent monitoring method, device, equipment and storage medium based on Internet of things

Also Published As

Publication number Publication date
CN112804522B (en) 2021-07-20

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant