CN113205489A - Monitoring image detection method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN113205489A
CN113205489A
Authority
CN
China
Prior art keywords
image
monitoring image
monitoring
acquiring
obtaining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110417876.6A
Other languages
Chinese (zh)
Inventor
冯代洲
梁天海
刘莹影
谢文焱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Haiwen Communication Co ltd
Original Assignee
Guangdong Haiwen Communication Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Haiwen Communication Co ltd filed Critical Guangdong Haiwen Communication Co ltd
Priority to CN202110417876.6A
Publication of CN113205489A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the field of computers and provides a monitoring image detection method and device, computer equipment, and a storage medium. The method comprises the following steps: acquiring a first monitoring image and a second monitoring image, the two images being captured in different time periods; detecting an image similarity parameter between the first monitoring image and the second monitoring image; and obtaining detection result information according to the image similarity parameter. Because the detection result information is obtained from the image similarity parameter, whether the real-time picture of the monitoring equipment has changed can be detected without manual inspection, which saves manpower and material resources, reduces equipment maintenance cost, and improves operation and maintenance efficiency.

Description

Monitoring image detection method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computers, and in particular, to a method and an apparatus for detecting a monitoring image, a computer device, and a storage medium.
Background
An existing policing system may comprise many camera devices installed along roads for shooting. Because these devices are numerous, their operation and maintenance are difficult and costly: whether a camera's shooting angle has shifted, or whether its lens is blocked, is generally checked by manual visual inspection, which consumes considerable manpower and material resources.
Disclosure of Invention
In view of the above-mentioned deficiencies of the prior art, the present invention provides a method and an apparatus for detecting a monitoring image, a computer device and a storage medium.
A method of detecting a surveillance image, the method comprising:
acquiring a first monitoring image and a second monitoring image; the first monitoring image and the second monitoring image belong to images of different time periods;
detecting image similarity parameters between the first monitoring image and the second monitoring image;
and obtaining detection result information according to the image similarity parameters.
In an embodiment of the present invention, the acquiring the first monitoring image and the second monitoring image includes:
acquiring a first monitoring image;
and acquiring a second monitoring image after a preset time period.
In an embodiment of the present invention, the detecting the image similarity parameter between the first monitored image and the second monitored image includes:
obtaining image quality training data according to the first monitoring image and the second monitoring image;
and inputting the first monitoring image, the second monitoring image and the image quality training data into a neural network model to obtain the trained neural network model.
In an embodiment of the present invention, the obtaining of the image quality training data according to the first monitoring image and the second monitoring image includes:
respectively extracting image characteristic information of the first monitoring image and the second monitoring image;
and obtaining image quality training data according to the image characteristic information.
In an embodiment of the present invention, the detecting the image similarity parameter between the first monitored image and the second monitored image includes:
acquiring a new first monitoring image and a new second monitoring image;
and inputting the new first monitoring image and the new second monitoring image into the trained neural network model to obtain the output image similarity parameters.
In an embodiment of the present invention, the obtaining of the detection result information according to the image similarity parameter includes:
and inquiring to obtain detection result information according to the image similarity parameters.
An apparatus for detecting a surveillance image, the apparatus comprising:
the image acquisition module is used for acquiring a first monitoring image and a second monitoring image; the first monitoring image and the second monitoring image belong to images of different time periods;
the detection module is used for detecting image similarity parameters between the first monitoring image and the second monitoring image;
and the detection result information obtaining module is used for obtaining detection result information according to the image similarity parameters.
In one embodiment of the present invention, the image acquisition module includes:
the first image acquisition sub-module is used for acquiring a first monitoring image;
and the second image acquisition sub-module is used for acquiring a second monitoring image after a preset time period.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the above method are implemented when the processor executes the computer program.
A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method as described above.
In the monitoring image detection method, a first monitoring image and a second monitoring image are obtained, the two images being captured in different time periods; an image similarity parameter between the first monitoring image and the second monitoring image is detected; and detection result information is obtained according to the image similarity parameter. In this way, whether the real-time picture of the monitoring equipment has changed can be detected without manual inspection, saving manpower and material resources, reducing equipment maintenance cost, and improving operation and maintenance efficiency.
Drawings
FIG. 1 is a schematic diagram of an application environment provided in one embodiment of the present invention;
fig. 2 is a schematic flow chart of a monitoring image detection method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating the steps of detecting image similarity parameters according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating the steps of detecting image similarity parameters according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating the steps of obtaining image quality training data according to an embodiment of the present invention;
fig. 6 is a block diagram of a monitoring image detecting apparatus according to an embodiment of the present invention;
fig. 7 is an internal structural diagram of a computer apparatus provided in one embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more apparent, exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be noted that the term "first \ second" referred to in the embodiments of the present invention is only used for distinguishing similar objects, and does not represent a specific ordering for the objects, and it should be understood that "first \ second" may exchange a specific order or sequence order if allowed. It should be understood that "first \ second" distinct objects may be interchanged under appropriate circumstances such that embodiments of the invention described herein may be practiced in sequences other than those illustrated or described herein.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination", depending on the context.
The monitoring image detection method provided by the application can be applied to the application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The terminal 102 may be, but is not limited to, various image capturing devices, such as an infrared night vision camera, a daytime camera, and the like, and the server 104 may be implemented by an independent server or a server cluster formed by a plurality of servers.
In an embodiment, as shown in fig. 2, a method for detecting a monitoring image is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
s110, acquiring a first monitoring image and a second monitoring image; the first monitoring image and the second monitoring image belong to images of different time periods.
The terminal 102 may include a plurality of image acquisition devices, such as an infrared night-vision camera or a daytime camera, which are not particularly limited here. An image acquisition device may capture different monitoring images, for example several monitoring images in different time periods; it may also capture a monitoring video and extract different frames from it to obtain monitoring images of different time periods.
In a specific implementation, the daytime camera may capture a monitoring video in real time; one monitoring image is extracted, another monitoring image is extracted 0.3 seconds later, and the two are determined as the first monitoring image and the second monitoring image respectively.
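As a sketch of the frame-extraction step above (not part of the patent), the following helper computes which two frame indices of a monitoring video are roughly 0.3 seconds apart; the function name and the 30 fps assumption are illustrative.

```python
def frame_pair_indices(fps: float, interval_s: float, start: int = 0) -> tuple[int, int]:
    """Return the indices of two frames separated by roughly interval_s
    seconds in a video captured at fps frames per second; the two frames
    are always at least one frame apart."""
    gap = max(1, round(fps * interval_s))
    return start, start + gap

# For a 30 fps stream and the 0.3 s interval from the text, the second
# frame is 9 frames after the first.
first_idx, second_idx = frame_pair_indices(30, 0.3)
```

The two returned indices can then be used to pull the first and second monitoring images out of a decoded frame sequence.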
And S120, detecting image similarity parameters between the first monitoring image and the second monitoring image.
In the embodiment of the application, after the first monitoring image and the second monitoring image are obtained, image processing may be performed on the first monitoring image and the second monitoring image to obtain image similarity parameters between the first monitoring image and the second monitoring image.
It should be noted that the image similarity parameter represents the degree of similarity between two images. If the degree of similarity between the first monitoring image and the second monitoring image (i.e., the image similarity parameter) is 90%, the monitored picture can be considered not to have changed abruptly; that is, the shooting angle of the image capturing device has not changed significantly and the device is not blocked.
In a specific implementation, an image recognition algorithm can divide the two monitoring images into blocks, compute the similarity between each pair of corresponding blocks, and then aggregate the block similarities into an overall similarity, which serves as the image similarity parameter of the two monitoring images.
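The block-segmentation approach described above can be sketched as follows. This is a minimal illustration rather than the patent's algorithm: it uses 1 minus the mean absolute pixel difference as the per-block score, which is only one of many possible similarity measures.

```python
import numpy as np

def block_similarity(img_a: np.ndarray, img_b: np.ndarray, block: int = 8) -> float:
    """Divide two equally sized grayscale images (pixel values in [0, 1])
    into block x block tiles, score each tile pair as 1 minus the mean
    absolute pixel difference, and average the tile scores into one
    overall similarity in [0, 1]."""
    assert img_a.shape == img_b.shape
    h, w = img_a.shape
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile_a = img_a[y:y + block, x:x + block]
            tile_b = img_b[y:y + block, x:x + block]
            scores.append(1.0 - float(np.abs(tile_a - tile_b).mean()))
    return float(np.mean(scores))
```

Identical frames score 1.0, while an all-white frame against an all-black one scores 0.0, so the result maps directly onto the percentage-style similarity parameter used in the text.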
The above-mentioned manner of calculating the monitoring image is only an example of the embodiment of the present application, and the image similarity parameters of the first monitoring image and the second monitoring image may also be calculated by other algorithms, which is not limited in this embodiment of the present application.
For example, several neural network algorithm models may also be trained on sample images to obtain a trained model, which is then used to recognize the first monitoring image and the second monitoring image and output their image similarity parameter.
As for the type of the model, it may be any of several supervised machine learning algorithm models, such as a linear regression model, a BP (back-propagation) neural network model, a decision tree model, a support vector machine model, or a KNN (K-Nearest Neighbors) model; the embodiments of the present application do not particularly limit the type of machine learning algorithm model.
And S130, obtaining detection result information according to the image similarity parameters.
Further, in the embodiment of the application, after the image similarity parameter is obtained, detection result information can be obtained according to it.
In specific implementation, a corresponding relationship between the image similarity parameter and the detection result information can be established, and the corresponding detection result information can be inquired through a mapping table between the image similarity parameter and the detection result information.
For example, the mapping table may specify that when the image similarity parameter is greater than a preset threshold (e.g., 70%), the corresponding detection result information is normal; when the image similarity parameter is less than 70%, the corresponding detection result information is that the monitoring picture has changed abruptly, meaning the shooting direction of the monitoring equipment may have changed or the equipment may be blocked.
The preset threshold may be any value set by a person skilled in the art according to the practical situation, such as 60% or 90%; the embodiments of the present application do not limit it.
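The mapping from similarity parameter to detection result information can be sketched as a simple threshold lookup. The function name, default threshold, and result strings below are illustrative, not taken from the patent.

```python
def detection_result(similarity: float, threshold: float = 0.70) -> str:
    """Query a threshold rule (standing in for the mapping table) to turn
    an image similarity parameter into detection result information."""
    if similarity > threshold:
        return "normal"
    return "abrupt change: camera direction may have changed or lens may be blocked"
```

A 90% similarity therefore maps to "normal", while a 50% similarity maps to the abnormal result, matching the example values in the text.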
It should be noted that the foregoing steps may be executed on the terminal or on the server side, and may be executed synchronously or asynchronously; the embodiments of the present application do not limit this.
In the above monitoring image detection method, a first monitoring image and a second monitoring image are obtained; the first monitoring image and the second monitoring image belong to images of different time periods; image similarity parameters between the first monitoring image and the second monitoring image are detected; and detection result information is obtained according to the image similarity parameters. In this way, whether the real-time picture of the monitoring equipment has changed can be detected without manual inspection, saving manpower and material resources, reducing equipment maintenance cost, and improving operation and maintenance efficiency.
In one embodiment, the step of acquiring the first monitoring image and the second monitoring image in step S110 includes: acquiring a first monitoring image; and acquiring a second monitoring image after a preset time period.
In this embodiment, the terminal or the server may store the monitoring video or the monitoring image, first extract the first monitoring image, and extract the second monitoring image after a preset time period.
For example, the preset time period may be 5 seconds, that is, the first monitoring image and the second monitoring image are acquired before and after 5 seconds apart.
The preset time period may be any time interval set by a person skilled in the art according to the practical situation, such as 8 seconds or 10 seconds; the embodiments of the present application do not limit it.
In one embodiment, the step of detecting the image similarity parameter between the first monitored image and the second monitored image in step S120 includes:
a substep S11, obtaining image quality training data according to the first monitoring image and the second monitoring image;
and a substep S12, inputting the first monitoring image, the second monitoring image and the image quality training data into a neural network model to obtain a trained neural network model.
In the embodiment of the application, some first monitoring images and some second monitoring images are used as training samples and input into the neural network model to train the neural network model.
First, image quality training data is obtained according to the first monitoring image and the second monitoring image; in a specific implementation, the image quality training data can be obtained through probability model analysis.
It should be noted that the image quality training data may refer to a degree of similarity between images, and may be determined manually or identified by an algorithm, which is not limited in this embodiment of the present application.
The first monitoring image, the second monitoring image, and the corresponding image quality training data are input as samples into the neural network model for training; when the model's iterations satisfy a preset condition, sample input stops and the trained neural network model is obtained.
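As a toy stand-in for the training procedure just described (the patent does not pin down a model), the sketch below fits a logistic model on the absolute difference between paired feature vectors, stopping after a preset number of iterations. The function names and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(feats_a, feats_b, labels, epochs=200, lr=0.5):
    """Toy training loop: a logistic model over the absolute difference of
    two images' feature vectors. Label 1.0 marks a 'similar' pair and 0.0
    a 'changed' pair (standing in for the image quality training data).
    Training stops after a preset number of iterations (epochs)."""
    x = np.abs(feats_a - feats_b)                 # per-pair feature difference
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))    # predicted similarity
        grad = p - labels                         # binary cross-entropy gradient
        w -= lr * (x.T @ grad) / len(labels)
        b -= lr * grad.mean()
    return w, b

# Synthetic sample pairs: similar pairs differ by small noise; changed pairs
# are unrelated random feature vectors.
fa = rng.random((40, 4))
feats_a = np.vstack([fa, fa])
feats_b = np.vstack([fa + rng.normal(0, 0.01, fa.shape), rng.random((40, 4))])
labels = np.concatenate([np.ones(40), np.zeros(40)])
w, b = train(feats_a, feats_b, labels)
```

After training, the model assigns higher predicted similarity to the noisy (similar) pairs than to the unrelated (changed) pairs, which is the behavior the patent's trained network is meant to exhibit.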
Further, in an embodiment, the step of detecting the image similarity parameter between the first monitored image and the second monitored image in step S120 includes:
a substep S13 of acquiring a new first monitoring image and a new second monitoring image;
and a substep S14, inputting the new first monitoring image and the new second monitoring image into the trained neural network model to obtain the output image similarity parameters.
After the trained neural network model is obtained, a new first monitoring image and a new second monitoring image can be extracted and input into the trained neural network model, which then outputs the image similarity parameter of the two new images.
For example, the neural network model may be a deep convolutional neural network (CNN), which may comprise three kinds of structures: convolution, activation, and pooling. In the embodiment of the present application, the CNN maps each monitoring image into a specific feature space. For an image classification task, the feature space output by the CNN is fed into a fully connected layer or fully connected network (FCN), which completes the mapping from the input image to the label set.
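The three structures named above (convolution, activation, pooling) can be illustrated with a minimal NumPy sketch. This is a didactic single-channel example, not the patent's network; the kernel and input are arbitrary.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Single-channel 'valid' 2-D convolution (really cross-correlation,
    as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (img[y:y + kh, x:x + kw] * kernel).sum()
    return out

def relu(x):
    """Activation: clamp negative responses to zero."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Pooling: keep the maximum of each size x size window."""
    h, w = x.shape[0] // size * size, x.shape[1] // size * size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.ones((3, 3)) / 9.0               # simple averaging filter
feat = max_pool(relu(conv2d_valid(img, kernel)))
```

The 6x6 input becomes a 4x4 feature map after the 3x3 convolution and a 2x2 map after pooling; in a real CNN this feature map would feed the fully connected layer mentioned above.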
In one embodiment, the step of obtaining the image quality training data according to the first monitor image and the second monitor image in the sub-step S11 includes:
the substep S111 is used for respectively extracting image characteristic information of the first monitoring image and the second monitoring image;
and a substep S112, obtaining image quality training data according to the image characteristic information.
The image characteristic information of the first monitoring image and the second monitoring image can be extracted in various ways, and the image quality training data is then obtained from this characteristic information.
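One of the "various ways" of extracting image characteristic information might be a normalized intensity histogram, compared via histogram intersection to produce a crude similarity label. This sketch is an illustrative assumption, not the patent's feature extractor.

```python
import numpy as np

def hist_features(img: np.ndarray, bins: int = 16) -> np.ndarray:
    """One possible form of image characteristic information: the
    normalized intensity histogram of a grayscale image whose pixel
    values lie in [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def hist_intersection(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Histogram intersection: 1.0 for identical feature vectors, lower
    for dissimilar ones; usable as a rough similarity label when building
    image quality training data."""
    return float(np.minimum(feat_a, feat_b).sum())
```

Two identical images intersect at 1.0; images whose intensities fall into disjoint histogram bins intersect at 0.0.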
In one embodiment, the step of obtaining the detection result information according to the image similarity parameter in step S130 includes: querying according to the image similarity parameter to obtain the detection result information.
After the image similarity parameter is obtained, the mapping table between image similarity parameters and detection result information can be queried and the corresponding detection result information output.
For example, when the image similarity parameter is 50%, the corresponding detection result information in the mapping table is: the shooting direction of the monitoring equipment may have changed or the equipment may be blocked. This detection result information is output to a display screen on the server side.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
In order to more clearly illustrate the scheme of the present application, the detection of the monitoring image in the above embodiments of the present application may be explained in a specific monitoring scene.
The server may construct a training picture set in advance. Specifically, it may obtain monitoring video information, extract monitoring images from it, and perform distortion type analysis through a support vector machine model or in other ways. Common distortion types may include aliasing distortion, blurring distortion, blocking distortion, and noise interference distortion, where blurring distortion includes motion blur and defocus blur; noise interference distortion further includes optical shot noise distortion, readout noise distortion, impulse noise distortion, and ringing distortion. The embodiments of the present application do not limit the distortion types.
The server can then perform probability model analysis and convolutional neural network model training based on the training picture set. Specifically, a regression analysis model is established for a specific distortion type, and a statistical probability model between image features and image quality is built; an overall quality evaluation index is obtained from the degree to which an image matches the probability model (e.g., the distance between features). Features in a certain image transform domain or in image space are extracted, and the neural network model is validated against known image distortion types to obtain the trained convolutional neural network model.
When detecting the monitoring image, the server can obtain the video stream shot by the camera and extract from it a first monitoring image and a second monitoring image captured in different time periods. The first monitoring image and the second monitoring image are input into the convolutional neural network model for detection, obtaining the image detection result corresponding to the monitoring image. In some embodiments, the server may split the continuous video stream into a plurality of monitoring images, which are input to the convolutional neural network model in sequence for detection.
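Splitting a continuous sequence of frames into consecutive image pairs for sequential input to the model can be sketched with the standard library; the helper name is illustrative.

```python
from itertools import tee

def consecutive_pairs(frames):
    """Pair each frame of a (possibly lazy) frame sequence with its
    successor, so adjacent monitoring images can be fed to the detection
    model one pair at a time."""
    a, b = tee(frames)
    next(b, None)  # advance the second iterator by one frame
    return zip(a, b)

pairs = list(consecutive_pairs(["f0", "f1", "f2", "f3"]))
```

Each yielded tuple plays the role of the (first monitoring image, second monitoring image) pair described above.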
In one embodiment, as shown in fig. 6, there is provided a detection apparatus for monitoring an image, the apparatus 600 including:
an image obtaining module 301, configured to obtain a first monitoring image and a second monitoring image; the first monitoring image and the second monitoring image belong to images of different time periods;
a detection module 302, configured to detect an image similarity parameter between the first monitored image and the second monitored image;
a detection result information obtaining module 303, configured to obtain detection result information according to the image similarity parameter.
In one embodiment, the image acquisition module comprises:
the first image acquisition sub-module is used for acquiring a first monitoring image;
and the second image acquisition sub-module is used for acquiring a second monitoring image after a preset time period.
In one embodiment, the detection module comprises:
the image quality training data obtaining submodule is used for obtaining image quality training data according to the first monitoring image and the second monitoring image;
and the neural network model training submodule is used for inputting the first monitoring image, the second monitoring image and the image quality training data into the neural network model to obtain the trained neural network model.
In one embodiment, the image quality training data obtaining sub-module includes:
the extraction unit is used for respectively extracting image characteristic information of the first monitoring image and the second monitoring image;
and the image quality training data obtaining unit is used for obtaining image quality training data according to the image characteristic information.
In one embodiment, the detection module comprises:
the new image acquisition sub-module is used for acquiring a new first monitoring image and a new second monitoring image;
and the image similarity parameter obtaining submodule is used for inputting the new first monitoring image and the new second monitoring image into the trained neural network model to obtain the output image similarity parameter.
In one embodiment, the detection result information obtaining module includes:
and the detection result information obtaining submodule is used for obtaining detection result information according to the image similarity parameter query.
For specific limitations of the monitoring image detection apparatus, refer to the limitations on the monitoring image detection method above; they are not repeated here. Each module in the monitoring image detection apparatus can be implemented wholly or partly in software, hardware, or a combination of the two. Each module can be embedded in hardware in, or be independent of, a processor in a computer device, such as the processor of an intelligent camera or of a server, or can be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
The invention can be applied to computer equipment capable of executing programs, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server, a tower server, or a cabinet server (including an independent server or a server cluster formed by a plurality of servers). The computer device of this embodiment includes at least, but is not limited to, a memory and a processor communicatively coupled to each other via a system bus, as shown in fig. 7. It should be noted that fig. 7 only shows a computer device with the memory and processor components; not all of the shown components are required, and more or fewer components may be implemented instead. The memory (i.e., readable storage medium) includes flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory may be an internal storage unit of the computer device, such as its hard disk or internal memory. In other embodiments, the memory may be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the computer device. Of course, the memory may also include both the internal storage unit and an external storage device of the computer device. In this embodiment, the memory is generally used to store the operating system and the various application software installed in the computer device, and may also be used to temporarily store various data that have been output or are to be output.
In some embodiments, the processor may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor is typically used to control the overall operation of the computer device. In this embodiment, the processor is configured to run program code stored in the memory or to process data, so as to implement the method for detecting a monitoring image.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the above-described method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing related hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments described above.
The above description is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention; all equivalent-structure and equivalent-process modifications made using the contents of the present specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, are likewise included in the scope of the present invention.

Claims (10)

1. A method for detecting a surveillance image, the method comprising:
acquiring a first monitoring image and a second monitoring image; the first monitoring image and the second monitoring image belong to images of different time periods;
detecting image similarity parameters between the first monitoring image and the second monitoring image;
and obtaining detection result information according to the image similarity parameters.
2. The method of claim 1, wherein the acquiring the first monitor image and the second monitor image comprises:
acquiring a first monitoring image;
and acquiring a second monitoring image after a preset time period.
3. The method of claim 1, wherein detecting image similarity parameters between the first monitored image and the second monitored image comprises:
obtaining image quality training data according to the first monitoring image and the second monitoring image;
and inputting the first monitoring image, the second monitoring image and the image quality training data into a neural network model to obtain the trained neural network model.
4. The method of claim 1, wherein obtaining image quality training data from the first monitor image and the second monitor image comprises:
respectively extracting image characteristic information of the first monitoring image and the second monitoring image;
and obtaining image quality training data according to the image characteristic information.
5. The method of claim 3, wherein detecting image similarity parameters between the first monitored image and the second monitored image comprises:
acquiring a new first monitoring image and a new second monitoring image;
and inputting the new first monitoring image and the new second monitoring image into the trained neural network model to obtain the output image similarity parameters.
6. The method according to claim 1, wherein the obtaining detection result information according to the image similarity parameter comprises:
and inquiring to obtain detection result information according to the image similarity parameters.
7. An apparatus for detecting a surveillance image, the apparatus comprising:
the image acquisition module is used for acquiring a first monitoring image and a second monitoring image; the first monitoring image and the second monitoring image belong to images of different time periods;
the detection module is used for detecting image similarity parameters between the first monitoring image and the second monitoring image;
and the detection result information obtaining module is used for obtaining detection result information according to the image similarity parameters.
8. The apparatus of claim 7, wherein the image acquisition module comprises:
the first image acquisition sub-module is used for acquiring a first monitoring image;
and the second image acquisition sub-module is used for acquiring a second monitoring image after a preset time period.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 6 are implemented by the processor when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
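The flow of claims 1, 2, and 6 can be illustrated with a minimal sketch: acquire a first and a second monitoring image from different time periods, compute an image similarity parameter, and map it to detection result information. The similarity here is a plain grayscale histogram intersection standing in for the trained neural network of claims 3 to 5, and the images, threshold, and function names are illustrative assumptions rather than the patented implementation.

```python
import numpy as np

def image_similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Similarity parameter in [0, 1] via normalized grayscale histogram
    intersection (a simple stand-in for the trained model's output)."""
    ha, _ = np.histogram(img_a, bins=32, range=(0, 255))
    hb, _ = np.histogram(img_b, bins=32, range=(0, 255))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return float(np.minimum(ha, hb).sum())  # 1.0 when histograms match exactly

def detect(first: np.ndarray, second: np.ndarray, threshold: float = 0.5) -> str:
    """Query detection result information from the similarity parameter."""
    sim = image_similarity(first, second)
    return "normal" if sim >= threshold else "abnormal"

# A first monitoring image, and a second one captured after a preset period:
rng = np.random.default_rng(0)
scene = rng.integers(0, 256, size=(120, 160))       # ordinary scene content
covered = np.zeros((120, 160), dtype=np.int64)      # near-black frame, e.g. lens covered
print(detect(scene, scene.copy()))   # unchanged scene -> "normal"
print(detect(scene, covered))        # occluded camera -> "abnormal"
```

A low similarity between the two periods would correspond to an anomaly such as the covered camera of the cited prior art, while a high similarity indicates an unchanged scene.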
CN202110417876.6A 2021-04-16 2021-04-16 Monitoring image detection method and device, computer equipment and storage medium Pending CN113205489A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110417876.6A CN113205489A (en) 2021-04-16 2021-04-16 Monitoring image detection method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113205489A true CN113205489A (en) 2021-08-03

Family

ID=77027397

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110417876.6A Pending CN113205489A (en) 2021-04-16 2021-04-16 Monitoring image detection method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113205489A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104240235A (en) * 2014-08-26 2014-12-24 北京君正集成电路股份有限公司 Method and system for detecting whether camera is covered or not
CN111723644A (en) * 2020-04-20 2020-09-29 北京邮电大学 Method and system for detecting occlusion of surveillance video
CN111898486A (en) * 2020-07-14 2020-11-06 浙江大华技术股份有限公司 Method and device for detecting abnormity of monitoring picture and storage medium
CN112597864A (en) * 2020-12-16 2021-04-02 佳都新太科技股份有限公司 Monitoring video abnormity detection method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination