CN111753860B - Analysis anomaly detection method and device - Google Patents


Info

Publication number
CN111753860B
CN111753860B (application number CN201910239272.XA)
Authority
CN
China
Prior art keywords
analysis, camera, preset, judging, analysis result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910239272.XA
Other languages
Chinese (zh)
Other versions
CN111753860A (en
Inventor
李林森
曾挥毫
赵世范
闫春
浦世亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910239272.XA
Publication of CN111753860A
Application granted
Publication of CN111753860B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/5866 - Retrieval characterised by using metadata, using information manually generated, e.g. tags, keywords, comments, manually generated location and time information

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Alarm Systems (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide an analysis anomaly detection method and apparatus. The method includes: acquiring images collected by a camera within a preset duration; analyzing the acquired images for a preset analysis object to obtain analysis data characterizing the camera with respect to the preset analysis object within the preset duration; and judging that the image analysis for the camera is abnormal when the analysis data does not match the analysis result of the camera for the preset analysis object in a normal working state. In the technical scheme provided by the embodiments, the analysis result characterizes a comprehensive index of the camera for the preset analysis object in the normal working state; using this analysis result as the criterion for judging whether the image analysis for the camera is abnormal, and comparing it with the analysis data, allows analysis anomaly detection to be performed more comprehensively for the camera.

Description

Analysis anomaly detection method and device
Technical Field
The present disclosure relates to the field of data analysis technologies, and in particular, to a method and an apparatus for detecting analysis anomalies.
Background
Based on the large number of cameras deployed in public places and the image and video data they collect, intelligent analysis is widely applied in fields such as security and public safety. Specifically, an intelligent analysis device is associated with a large number of cameras, acquires multimedia data such as images and videos from the associated cameras, and then performs analysis processing such as query and judgment on the acquired multimedia data to obtain corresponding analysis results.
Intelligent analysis may target different objects such as faces, vehicles, and license plates. For example, for an intelligent analysis device that analyzes vehicles, the associated cameras are deployed on an expressway; the cameras photograph vehicles on the expressway and send the captured vehicle images to the intelligent analysis device, which then performs intelligent analysis on the vehicles according to the received vehicle images.
Intelligent analysis relies on the pictures taken by the cameras, and those pictures directly affect the analysis results. Therefore, to ensure that intelligent analysis runs normally and that the obtained analysis results are valid, anomaly detection must be performed on the intelligent analysis; the obtained analysis results can be used only when the intelligent analysis is normal. At present, whether intelligent analysis is abnormal is judged by evaluating the imaging quality of the images collected by the camera: if the imaging effect is poor, for example the images are blurred or low-resolution, the intelligent analysis is judged to be abnormal.
The current way of judging whether intelligent analysis is abnormal considers only the imaging and visual effect of the images. However, besides imaging quality, factors such as the camera's shooting angle, exposure time, and aperture can also make the intelligent analysis abnormal. For example, a camera that should photograph vehicles on an expressway may have its shooting angle deviate away from the road and photograph somewhere else; even though the images it captures are clear, no vehicles on the expressway are captured, and the analysis results of the intelligent analysis become abnormal. How to detect intelligent analysis anomalies more comprehensively than the prior art is therefore a problem to be solved.
Disclosure of Invention
The embodiments of the present application aim to provide an analysis anomaly detection method and apparatus, so as to detect intelligent analysis anomalies more comprehensively than the prior art. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present application provides an analysis anomaly detection method, including:
acquiring images collected by a camera within a preset duration;
analyzing the acquired images for a preset analysis object to obtain analysis data characterizing the camera with respect to the preset analysis object within the preset duration;
judging whether the analysis data matches the analysis result of the camera for the preset analysis object in a normal working state, wherein the time period to which the analysis result applies is the same as the time period to which the analysis data applies, and the analysis result characterizes a comprehensive index for the preset analysis object over all the time periods concerned;
if they do not match, judging that the image analysis for the camera is abnormal.
Optionally, the method further comprises:
if the analysis data matches the analysis result of the camera for the preset analysis object in the normal working state, judging that the image analysis for the camera is normal, and adding the analysis data to the analysis result to obtain a new analysis result of the camera for the preset analysis object in the normal working state.
Optionally, the preset analysis object is the number of shooting objects, and the analysis result is the average number of shooting objects shot by the camera in the normal working state;
the analyzing the acquired images for the preset analysis object to obtain analysis data characterizing the camera with respect to the preset analysis object within the preset duration includes:
performing image recognition on the acquired images, and determining the number of shooting objects in the acquired images;
the judging whether the analysis data matches the analysis result of the camera for the preset analysis object in the normal working state includes:
judging whether the determined number is smaller than the average number; if so, judging that the analysis data does not match the analysis result; if not, judging that the analysis data matches the analysis result.
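As a minimal illustrative sketch of this count-based check (the figures and the function name are hypothetical stand-ins for illustration, not part of the patent text):

```python
def count_matches(detected_count: int, average_count: float) -> bool:
    """Count-based matching as described above: a detected number of
    shooting objects below the historical average is a mismatch."""
    return detected_count >= average_count

# Hypothetical figures: subjects detected vs. a historical average of 40.
print(count_matches(42, 40))  # True: analysis data matches the analysis result
print(count_matches(31, 40))  # False: image analysis judged abnormal
```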
Optionally, the preset analysis object is the matching rate between shooting objects and preset tag objects, and the analysis result is the average matching rate between the shooting objects shot by the camera in the normal working state and the tag objects;
the analyzing the acquired images for the preset analysis object to obtain analysis data characterizing the camera with respect to the preset analysis object within the preset duration includes:
identifying the shooting objects in the acquired images, and counting the number of identified shooting objects as a first number;
matching the identified shooting objects with the tag objects;
counting the number of successfully matched shooting objects among the identified shooting objects as a second number;
determining the ratio of the second number to the first number as the matching rate;
the judging whether the analysis data matches the analysis result of the camera for the preset analysis object in the normal working state includes:
judging whether the matching rate is smaller than the average matching rate; if so, judging that the analysis data does not match the analysis result; if not, judging that the analysis data matches the analysis result.
Optionally, the tag objects are obtained by:
when an identified shooting object fails to match, marking the shooting object that failed to match with a new tag;
adding the shooting object carrying the new tag to the tag objects.
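The optional tag-update step above can be sketched as follows. The feature representation and the `matcher` predicate are assumptions for illustration, since the patent does not fix a particular matching algorithm:

```python
def match_or_register(subject, tag_objects, matcher, new_label):
    """Match a recognized shooting object against the known tag objects;
    on failure, mark it with a new tag and add it to the tag objects."""
    for label, known in tag_objects.items():
        if matcher(subject, known):
            return label, False          # matched an existing tag object
    tag_objects[new_label] = subject     # register under the new tag
    return new_label, True

# Toy matcher: exact feature equality (a real system would use similarity).
tags = {"object-1": (1.0, 2.0)}
label, added = match_or_register((3.0, 4.0), tags,
                                 lambda a, b: a == b, "object-2")
print(label, added, len(tags))  # object-2 True 2
```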
Optionally, the preset analysis object is the association strength between cameras, and the analysis result is a cumulative association strength map characterizing the association strength between the camera and other cameras in the normal working state;
the analyzing the acquired images for the preset analysis object to obtain analysis data characterizing the camera with respect to the preset analysis object within the preset duration includes:
identifying the shooting objects in the acquired images, and classifying and marking the identified shooting objects with tags;
extracting the shooting objects with the same tag from the marked shooting objects, and generating a motion trajectory for those shooting objects;
mapping the generated motion trajectory onto a preset camera association map to obtain an incremental association strength map for the preset duration, wherein the camera association map characterizes the positional relationship between the cameras, and the incremental association strength map characterizes the association strength between the cameras as reflected by the motion trajectories of the shooting objects within the preset duration;
the judging whether the analysis data matches the analysis result of the camera for the preset analysis object in the normal working state includes:
comparing the incremental association strength of each camera in the incremental association strength map with the cumulative association strength of that camera in the cumulative association strength map;
judging whether the relationship between the incremental association strength and the cumulative association strength satisfies an association strength condition; if so, judging that the analysis data matches the analysis result; if not, judging that the analysis data does not match the analysis result.
Optionally, the judging whether the relationship between the incremental association strength and the cumulative association strength satisfies the association strength condition includes:
calculating the average value of the cumulative association strength over all the time periods concerned;
calculating a variance based on the average value and the cumulative association strength;
calculating whether the absolute value of the difference between the incremental association strength and the average value is larger than a preset multiple of the variance;
if so, judging that the relationship between the incremental association strength and the cumulative association strength does not satisfy the association strength condition; if not, judging that the relationship satisfies the association strength condition.
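Under the wording above, the condition fails when the incremental strength deviates from the historical average by more than a preset multiple of the variance. A sketch, where the multiple of 2 is an assumed preset and plain population variance is assumed, since the patent leaves both unspecified:

```python
def strength_condition_met(incremental, accumulated, multiple=2.0):
    """Association strength condition from the steps above: fails when
    |incremental - mean| exceeds `multiple` times the variance of the
    cumulative association strengths over all time periods."""
    mean = sum(accumulated) / len(accumulated)
    variance = sum((s - mean) ** 2 for s in accumulated) / len(accumulated)
    return abs(incremental - mean) <= multiple * variance

history = [10, 12, 11, 13]                  # cumulative strengths per period
print(strength_condition_met(12, history))  # True: within 2x variance of mean
print(strength_condition_met(20, history))  # False: deviation too large
```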
In a second aspect, an embodiment of the present application provides an analysis anomaly detection apparatus, including:
the acquisition module is used for acquiring images collected by the camera within a preset duration;
the analysis module is used for analyzing the acquired images for a preset analysis object to obtain analysis data characterizing the camera with respect to the preset analysis object within the preset duration;
the judging module is used for judging whether the analysis data matches the analysis result of the camera for the preset analysis object in a normal working state, wherein the time period to which the analysis result applies is the same as the time period to which the analysis data applies, and the analysis result characterizes a comprehensive index for the preset analysis object over all the time periods concerned;
and the determining module is used for determining that the image analysis for the camera is abnormal when the judgment result of the judging module is negative.
Optionally, the apparatus further comprises:
the adding module is used for judging that the image analysis for the camera is normal if the analysis data matches the analysis result of the camera for the preset analysis object in the normal working state, and adding the analysis data to the analysis result to obtain a new analysis result of the camera for the preset analysis object in the normal working state.
Optionally, the preset analysis object is the number of shooting objects, and the analysis result is the average number of shooting objects shot by the camera in the normal working state;
the analysis module is specifically configured to:
perform image recognition on the acquired images, and determine the number of shooting objects in the acquired images;
the judging module is specifically configured to:
judge whether the determined number is smaller than the average number; if so, judge that the analysis data does not match the analysis result; if not, judge that the analysis data matches the analysis result.
Optionally, the preset analysis object is the matching rate between shooting objects and preset tag objects, and the analysis result is the average matching rate between the shooting objects shot by the camera in the normal working state and the tag objects; the analysis module is specifically configured to:
identify the shooting objects in the acquired images, and count the number of identified shooting objects as a first number;
match the identified shooting objects with the tag objects;
count the number of successfully matched shooting objects among the identified shooting objects as a second number;
determine the ratio of the second number to the first number as the matching rate;
the judging module is specifically configured to:
judge whether the matching rate is smaller than the average matching rate; if so, judge that the analysis data does not match the analysis result; if not, judge that the analysis data matches the analysis result.
Optionally, the tag objects are obtained by:
when an identified shooting object fails to match, marking the shooting object that failed to match with a new tag;
adding the shooting object carrying the new tag to the tag objects.
Optionally, the preset analysis object is the association strength between cameras, and the analysis result is a cumulative association strength map characterizing the association strength between the camera and other cameras in the normal working state;
the analysis module is specifically configured to:
identify the shooting objects in the acquired images, and classify and mark the identified shooting objects with tags;
extract the shooting objects with the same tag from the marked shooting objects, and generate a motion trajectory for those shooting objects;
map the generated motion trajectory onto a preset camera association map to obtain an incremental association strength map for the preset duration, wherein the camera association map characterizes the positional relationship between the cameras, and the incremental association strength map characterizes the association strength between the cameras as reflected by the motion trajectories of the shooting objects within the preset duration;
the judging module is specifically configured to:
compare the incremental association strength of each camera in the incremental association strength map with the cumulative association strength of that camera in the cumulative association strength map;
judge whether the relationship between the incremental association strength and the cumulative association strength satisfies the association strength condition; if so, judge that the analysis data matches the analysis result; if not, judge that the analysis data does not match the analysis result.
Optionally, the judging module is specifically configured to:
calculate the average value of the cumulative association strength over all the time periods concerned;
calculate a variance based on the average value and the cumulative association strength;
calculate whether the absolute value of the difference between the incremental association strength and the average value is larger than a preset multiple of the variance;
if so, judge that the relationship between the incremental association strength and the cumulative association strength does not satisfy the association strength condition; if not, judge that the relationship satisfies the association strength condition.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory, wherein:
the memory is used for storing a computer program;
and the processor is used for implementing the steps of any of the above analysis anomaly detection methods when executing the program stored in the memory.
In a fourth aspect, embodiments of the present application provide a machine-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the above-described analysis anomaly detection methods.
In the technical scheme provided by the embodiments of the present application, images collected by a camera within a preset duration are acquired; the acquired images are analyzed for a preset analysis object to obtain analysis data characterizing the camera with respect to the preset analysis object within the preset duration; whether the analysis data matches the analysis result of the camera for the preset analysis object in a normal working state is judged, wherein the time period to which the analysis result applies is the same as the time period to which the analysis data applies, and the analysis result characterizes a comprehensive index for the preset analysis object over all the time periods concerned; if they do not match, the image analysis for the camera is judged to be abnormal.
With the technical scheme provided by the embodiments of the present application, analyzing the images collected within the preset duration for the preset analysis object yields corresponding analysis data, which characterizes the camera with respect to the preset analysis object during that duration; whether the analysis for the preset analysis object within the preset duration is abnormal or normal is therefore reflected in the obtained analysis data. The analysis result characterizes a comprehensive index of the camera for the preset analysis object in the normal working state, and in general the analysis result for each preset analysis object remains relatively stable across the time periods of a camera in the normal working state. Using the analysis result as the criterion for judging whether the image analysis for the camera is abnormal, and comparing it with the analysis data, therefore allows analysis anomaly detection to be performed more comprehensively for the camera.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an analysis anomaly detection method according to an embodiment of the present application;
FIG. 2 (a) is a schematic illustration of an analysis object acquired by a camera;
FIG. 2 (b) is a schematic diagram of the motion trajectory of the analysis object;
FIG. 3 is a schematic illustration of an association strength map of a camera;
FIG. 4 (a) is a scene graph provided in an embodiment of the present application;
FIG. 4 (b) is a correlation strength chart according to an embodiment of the present application;
FIG. 4 (c) is another correlation strength graph provided in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an analysis anomaly detection apparatus provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present application with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the present disclosure without inventive effort fall within the scope of the present disclosure.
In order to detect intelligent analysis anomalies more comprehensively than the prior art, the embodiments of the present application provide an analysis anomaly detection method and apparatus, wherein the analysis anomaly detection method includes:
acquiring images collected by a camera within a preset duration;
analyzing the acquired images for a preset analysis object to obtain analysis data characterizing the camera with respect to the preset analysis object within the preset duration;
judging whether the analysis data matches the analysis result of the camera for the preset analysis object in a normal working state, wherein the time period to which the analysis result applies is the same as the time period to which the analysis data applies, and the analysis result characterizes a comprehensive index for the preset analysis object over all the time periods concerned;
if they do not match, determining that the image analysis for the camera is abnormal.
In the technical scheme provided by the embodiments of the present application, the images collected by the camera within the preset duration are acquired and analyzed for the preset analysis object to obtain analysis data; whether the analysis data matches the analysis result of the camera for the preset analysis object in the normal working state is then judged, the analysis result being an index, obtained from the camera's historical image data, that characterizes the camera's normal state; if they do not match, the analysis is determined to be abnormal. With this scheme, the analysis data is compared against a reference that characterizes the normal state, so that not only unclear images fail the comparison: when factors such as the camera's shooting angle, exposure time, and aperture affect the intelligent analysis, that influence is also reflected in the obtained analysis data, the analysis data fails to match the reference, and the analysis anomaly can be judged.
The following first describes an analysis anomaly detection method provided in an embodiment of the present application. As shown in fig. 1, the analysis anomaly detection method provided in the embodiment of the present application includes the following steps.
S101, acquiring images acquired by a camera within a preset duration.
One camera may be selected, or a plurality of cameras. The images collected by the cameras are the images that need to be intelligently analyzed, and the cameras are connected to the device that performs the intelligent analysis.
The intelligent analysis targets shooting objects, such as faces, vehicles, and license plates, within a region, so in one implementation the selected cameras may be the cameras in that region. For example, if the analysis object of the intelligent analysis is the people in an office building, the selected cameras may be the cameras in that office building; when the analyzed shooting objects are the vehicles on an expressway, the selected cameras may be the cameras on that expressway.
The preset duration may be set in a user-defined manner; for example, it may be one day or one month. The camera collects images within the preset duration, and intelligent analysis is performed on the images collected within that duration.
S102, analyzing the acquired images for a preset analysis object to obtain analysis data characterizing the camera with respect to the preset analysis object within the preset duration.
The preset analysis object may be customized; for example, it may be the number of shooting objects, the matching rate between shooting objects and preset tag objects, or the association strength between cameras. The preset analysis object is not limited to these three and may include other analysis objects; no limitation is imposed here.
In one embodiment, the acquired images may be analyzed for only one preset analysis object, so that the resulting analysis data is for that analysis object. In another embodiment, the preset analysis object includes a plurality of analysis objects; the acquired images are then analyzed for each of them, and corresponding analysis data is obtained for each analysis object, yielding a plurality of pieces of analysis data.
For example, if the acquired images are analyzed for both the number of shooting objects and the association strength between cameras, analyzing the images for the number of shooting objects yields analysis data for that number, and analyzing them for the association strength between cameras yields analysis data for that association strength.
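A sketch of running several preset analysis objects over the same image batch; the analyzer functions, the image representation, and the placeholder rate are illustrative assumptions, not part of the patent text:

```python
def analyze(images, analyzers):
    """Run each configured analysis object over the same acquired images,
    producing one piece of analysis data per preset analysis object."""
    return {name: fn(images) for name, fn in analyzers.items()}

# Hypothetical analyzers: subject counting and a placeholder match rate.
analyzers = {
    "subject_count": lambda imgs: sum(len(i["subjects"]) for i in imgs),
    "match_rate": lambda imgs: 0.8,  # stand-in for the real matching logic
}
images = [{"subjects": ["car", "car"]}, {"subjects": ["car"]}]
print(analyze(images, analyzers))  # {'subject_count': 3, 'match_rate': 0.8}
```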
S103, judging whether the analysis data matches the analysis result of the camera for the preset analysis object in the normal working state. If not, step S104 is performed.
The time period to which the analysis result applies is the same as the time period to which the analysis data applies. For example, if the period unit is one day and the analysis data covers the morning peak from 7:00 to 9:00, the matched analysis result also covers 7:00 to 9:00. For another example, if the period unit is one month and the analysis data covers the five days from the 1st to the 5th, the matched analysis result also covers the five days from the 1st to the 5th.
The analysis result characterizes a comprehensive index for the preset analysis object over all the time periods concerned, where the comprehensive index is an index of the camera for the preset analysis object in the normal working state.
The comprehensive index may be an average value or a cumulative value; different preset analysis objects correspond to different types of comprehensive index. For example, when the preset analysis object is the number of shooting objects, the comprehensive index characterized by the analysis result is the average number; when it is the matching rate between shooting objects and preset tag objects, the comprehensive index is the average matching rate; when it is the association strength between cameras, the comprehensive index is the cumulative association strength.
For a camera in the normal working state, the analysis result for each preset analysis object can be considered stable. The analysis result can therefore be obtained, using a normalization method, from a large number of historical images of the camera; the historical images are all images collected while the camera was in the normal working state.
In matching the analysis data with the analysis result, a matching value range including the analysis result may be set based on the analysis result. If the analysis data is within the matching value range, the analysis data may be considered to match the analysis result. If the analysis data is not within the matching value range, the analysis data may be considered to be mismatched with the analysis result.
In one embodiment, the analysis result is taken as the middle value: a value A smaller than the analysis result is obtained by subtracting a preset interval value from the analysis result, a value B larger than the analysis result is obtained by adding the preset interval value to the analysis result, and the range between A and B is determined as the matching value range.
For example, suppose the preset analysis object is the number of shooting objects, so the analysis result is the average number of shooting objects. A number range can then be determined with the average number as the middle value. Judging whether the analysis data matches the analysis result amounts to judging whether the analysis data is within this number range: if it is, the analysis data matches the analysis result; if not, it does not.
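The matching-value-range check described above can be expressed as a minimal sketch; the function name and the interval value of 5 are illustrative assumptions, not from the text:

```python
def matches_result(analysis_data, analysis_result, interval):
    """Return True when the analysis data lies in the matching value
    range [analysis_result - interval, analysis_result + interval]."""
    value_a = analysis_result - interval  # value A, below the result
    value_b = analysis_result + interval  # value B, above the result
    return value_a <= analysis_data <= value_b

# e.g. an average number of 95 with a preset interval of 5 -> range [90, 100]
print(matches_result(92, 95, 5))  # inside the range: True (matches)
print(matches_result(10, 95, 5))  # outside the range: False (no match)
```

The same check applies unchanged whether the analysis result is an average number or an average matching rate, since only the units of the interval differ.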
That is, for a camera in the normal state, the number of shooting objects captured in one time period stays close to the average number, i.e., within the number range built around it. If analyzing the images captured by the camera for the number of shooting objects yields analysis data outside that range, the analysis data can be considered not to match the average number.
As another example, suppose the preset analysis object is the matching rate between the shooting objects and the preset tag objects, so the analysis result is the average matching rate between shooting objects and tag objects. A matching rate range can then be determined with the average matching rate as the middle value. Judging whether the analysis data matches the analysis result amounts to judging whether the analysis data is within this matching rate range: if it is, the analysis data matches the analysis result; if not, it does not.
That is, for a camera in the normal state, the matching rate in one time period, i.e., the proportion of shooting objects that can be recognized and successfully matched with tag objects in the database, stays close to the average matching rate, i.e., within the matching rate range built around it. If analyzing the images collected by the camera in the same time period for the matching rate yields analysis data outside that range, the analysis data can be considered not to match the average matching rate.
As another example, suppose the preset analysis object is the association strength between cameras, where the association strength between two cameras reflects how frequently motion trajectories of shooting objects occur between them. The analysis result is the accumulated association strength between the cameras, and the analysis data is the number of motion trajectories of shooting objects between the two cameras. Judging whether the analysis data matches the analysis result amounts to judging whether the analysis data and the accumulated association strength satisfy a preset condition, which can be set as needed.
When there are multiple preset analysis objects, the analysis performed for each preset analysis object yields corresponding analysis data, and each piece of analysis data is matched against its corresponding analysis result.
For example, suppose the preset analysis objects include the number of shooting objects, the matching rate between shooting objects and preset tag objects, and the association strength between cameras. The corresponding analysis data are then a first number, a first matching rate, and a first association strength, and the corresponding analysis results are a target number, a target matching rate, and a target association strength. Matching then consists of judging whether the first number matches the target number, whether the first matching rate matches the target matching rate, and whether the first association strength matches the target association strength.
S104, judging that the image analysis of the camera is abnormal.
If the analysis data matches the analysis result of the camera for the preset analysis object in the normal operating state, the image analysis for the camera can be considered normal. If the analysis data does not match that analysis result, it can be judged that an abnormality has occurred in the image analysis for the camera.

In one embodiment, if the analysis data is judged to match the analysis result of the camera for the preset analysis object in the normal operating state, it can be determined that the image analysis of the camera is normal and that the camera is in the normal operating state. The analysis data may then be merged into the analysis result to obtain a new analysis result, which likewise represents the comprehensive index of the camera for the preset analysis object over all covered time periods in the normal operating state. The new analysis result may be used as the analysis result for the next match against analysis data for this preset analysis object.
In one embodiment, the preset analysis object may be the number of shooting objects, and the analysis result is the average number of shooting objects captured by the camera in the normal operating state.
After the images acquired by the camera within the preset time period are acquired, the acquired images can be subjected to image recognition. For example, when the photographing object is a face, image recognition is performed on the face in the acquired image. When the photographic subject is a vehicle, then image recognition is performed on the vehicle in the acquired image.
After image recognition is performed on each acquired image, the recognized shooting objects can be counted to determine the number of shooting objects contained in the acquired images; this number represents the number of shooting objects captured by the camera within the preset time period.
It is then judged whether the determined number is smaller than the average number, where the average number is derived from images acquired by the camera in the normal operating state. In one implementation of obtaining the average number, the numbers of shooting objects in each historical time period are summed and averaged, and the resulting mean is the average number.

In another implementation, the number of shooting objects captured by the camera in a time period tends to be stable, basically falling within a value range. The minimum of this value range may then be taken as the average number: when the number of shooting objects captured in a time period is smaller than this average number, i.e., outside the value range, an image analysis abnormality of the camera can be determined.

If the determined number is smaller than the average number, it is judged that the analysis data does not match the analysis result, i.e., an abnormality has occurred in the image analysis for the camera. If the determined number is not smaller than the average number, it is judged that the analysis data matches the analysis result, i.e., the image analysis for the camera is normal.
For example, a section of road is provided with camera 1, camera 2, camera 3, and camera 4, and the 4 cameras photograph people passing along the road, i.e., the shooting object is a person. Analyzing the images acquired by camera 1 for the number of people photographed, a large number of historical analysis results lead to the conclusion that the number of people photographed by camera 1 in one day tends to be stable, basically within the value range [90, 100], i.e., at least 90 and at most 100. Based on these historical analysis results, 90 can be taken as the average number for camera 1. Then, when the number of people photographed by camera 1 in one day is 10, it can be determined that an abnormality has occurred in the image analysis for camera 1; when the number is 92, the image analysis for camera 1 can be determined to be normal.
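The camera 1 example can be expressed as a short sketch; the function name is hypothetical, and the average number 90 is the minimum of the stable value range [90, 100] from the example:

```python
def count_is_abnormal(detected_count, average_number):
    """Flag an image analysis anomaly when the number of shooting objects
    detected in the period is smaller than the average number."""
    return detected_count < average_number

AVERAGE_NUMBER = 90  # minimum of camera 1's stable range [90, 100]
print(count_is_abnormal(10, AVERAGE_NUMBER))  # True: analysis abnormal
print(count_is_abnormal(92, AVERAGE_NUMBER))  # False: analysis normal
```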
In addition, the number of photographed objects is affected by external factors, including different time periods, holidays, weather, major events, and so on. For example, vehicles on a road increase significantly during the morning peak (7:00 to 9:00) and evening peak (17:00 to 19:00) periods and decrease during the 12:00 to 14:00 midday period; the flow of people in an office building increases markedly on working days and decreases markedly on weekends.

Thus, based on such periodic variation, different analysis results may be set for different external factors. For example, for vehicles on a road, the average number for the morning peak (7:00 to 9:00) and evening peak (17:00 to 19:00) periods may be set to 1000, and the average number for the 12:00 to 14:00 midday period may be set to 100. As another example, for the flow of people in an office building, the average number on working days may be set to 400 and the average number on weekends to 50.
In one embodiment, the preset analysis object is a matching rate between a shooting object and a preset tag object, and the analysis result is an average matching rate between the shooting object and the tag object shot by the camera in a normal working state.
After the images acquired by the camera within the preset time period are acquired, the shooting objects in the acquired images can be identified, and the number of the identified shooting objects is calculated as a first number. The first number represents the number of shooting objects shot by the camera within a preset time period.
After the shooting objects are recognized, they may be matched with the tag objects. A tag object is a preset object, namely a shooting object that has been photographed by a camera and marked with a tag. Each tag object corresponds to at least one tag, different tag objects may have different tags, and the tag can be considered a valid identifier of the tag object. For example, when the tag object is a person or a face, the tag may be a name; when the tag object in the database is a vehicle or a license plate, the tag may be the license plate number.
In one implementation, the tag objects may serve as base database data, i.e., the base database data contains the tag objects carrying tags. The base database data may be preset, as may the tag data it contains: an administrator marks known shooting objects, i.e., tag objects, with tags.
In addition, the tag objects can be expanded. In one embodiment, when a recognized shooting object fails to match any existing tag object, the shooting object is marked with a new tag, and the shooting object carrying the new tag becomes a tag object. Thus, when the camera photographs this shooting object again, it can be successfully matched with the newly added tag object.
For example, a camera photographs a face belonging to Zhang San. After the face is recognized from the image, it is matched against the existing tag objects, in which each face image is marked with the corresponding name. When the face fails to match any tag object in the database, it is marked with the tag "Zhang San", and the marked face carrying this tag becomes a new tag object.
After matching the identified photographic subjects with the tag subjects, the number of photographic subjects that have successfully matched among the identified photographic subjects may be counted and used as the second number. In one implementation, when there is a successful match between a photographic subject and a tag subject, the photographic subject is tagged with the tag of the tag subject. For a tagged subject, the subject can be considered to match successfully. For example, the identified photographed object includes an object 1, and when the object 1 is successfully matched with the tag object 1, the tag 1 of the tag object 1 is used to mark the object 1, so that the object 1 carries the tag 1. After the successfully matched subjects are marked, the number of subjects marked in the identified subjects is calculated as a second number.
After the first number and the second number are acquired, a ratio of the second number to the first number may be determined as a matching rate between the photographic subject and the tag subject.
It is then judged whether the matching rate is smaller than the average matching rate. In one implementation, the number of shooting objects captured by the camera tends to be stable, and so does the number of shooting objects that can be successfully matched with tag objects. For example, the people photographed by a camera in an office building on a working day are essentially the people who work there, and they essentially come to work every working day; the camera therefore photographs them every working day, this group of people is relatively stable for the camera, and they can be successfully matched in the intelligent analysis.
Since both the number of shooting objects captured by the camera in a time period and the number successfully matched are stable, the camera's matching rate over that time period also tends to be stable and, after normalization processing, can be considered to lie within a matching rate range; when the matching rate lies within this range, the image analysis of the camera is determined to be normal. The minimum of the matching rate range may be taken as the average matching rate: when the camera's matching rate is smaller than the average matching rate, i.e., outside the range, an image analysis abnormality of the camera can be determined.
If the matching rate is smaller than the average matching rate, it can be judged that the analysis data does not match the analysis result, i.e., an abnormality has occurred in the image analysis for the camera. If the matching rate is not smaller than the average matching rate, it can be judged that the analysis data matches the analysis result, i.e., the image analysis for the camera is normal.
For example, a road section includes camera 1, camera 2, camera 3, and camera 4, and the 4 cameras photograph people passing through the section, i.e., the shooting object is a person. Analyzing the images acquired by camera 1 for the matching rate between shooting objects and tag objects, the historical analysis results show that the number of people photographed by camera 1 in one day is stable and that the matching rate for people is also stable, basically within the value range [85%, 95%]. Based on these historical analysis results, 85% can be taken as the average matching rate of camera 1. Then, when the analysis data obtained by analyzing the images captured by camera 1 in one day for the matching rate is 30%, it can be determined that an abnormality has occurred in the image analysis for camera 1; when it is 90%, the image analysis for camera 1 can be determined to be normal.
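Using the numbers from the camera 1 example (with the average matching rate 85% taken as the minimum of the stable range [85%, 95%]), the matching rate check can be sketched as follows; the function names and the count of 100 recognized objects are illustrative assumptions:

```python
def matching_rate(first_number, second_number):
    """Ratio of successfully matched shooting objects (second number)
    to all recognized shooting objects (first number)."""
    return second_number / first_number

def rate_is_abnormal(rate, average_rate):
    """Anomaly when the period's matching rate falls below the average."""
    return rate < average_rate

AVERAGE_RATE = 0.85  # minimum of camera 1's stable range [85%, 95%]
print(rate_is_abnormal(matching_rate(100, 30), AVERAGE_RATE))  # True: abnormal
print(rate_is_abnormal(matching_rate(100, 90), AVERAGE_RATE))  # False: normal
```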
In addition, the analysis of the matching rate between shooting objects and preset tag objects is also affected by external factors, including different time periods, holidays, weather, major events, and so on. Different average matching rates can therefore be set for the periodic variation. For example, for vehicles on a road, the average matching rate may be 85% in the morning peak (7:00 to 9:00) period and 60% in the 12:00 to 14:00 midday period. Furthermore, when the matching rate is not smaller than the average matching rate, i.e., the analysis data is judged to match the analysis result, the average matching rate may be updated according to the matching rate, and the updated average matching rate is used as the analysis result in the next matching rate analysis.
In one embodiment, the preset analysis object is the association strength between the cameras, and the analysis result is an accumulated association strength graph for representing the association strength between the camera and other cameras in a normal working state.
After the images acquired by the camera within the preset time period are obtained, the shooting objects in them can be recognized and then classified and marked with tags. In one implementation, a shooting object in the acquired images is recognized and matched with the tag objects; when the match succeeds, the shooting object is marked with the tag of the matched tag object.
The tag is the same for the same shooting object. For example, when the shooting object is a person, the same person always carries the same tag; when the shooting object is a vehicle, the tag is the license plate of that vehicle.
Taking fig. 2 (a) as an example, different shapes in the figure represent different shooting objects, and the same shape represents the same shooting object. Fig. 2 (a) includes three shooting objects, each represented by its own shape (one of them by "■"). Each occurrence of a shape indicates one capture of the corresponding shooting object by a camera; thus each of the three shooting objects is photographed 5 times.
And extracting the shooting objects with the same label from the marked shooting objects, and generating a motion trail aiming at the shooting objects. That is, each photographic subject corresponds to a motion trajectory, which is used to represent the motion trajectory of the photographic subject.
Taking fig. 2 (b) as an example, for the shooting object "■", the 5 captures of "■" are connected in chronological order to form its motion trajectory: as shown in fig. 2 (b), the object appears first at position 1, then moves to position 2, then to position 3, then from position 3 to position 4, and finally to position 5. The motion trajectories of the other two shooting objects are formed in the same way, by connecting their respective 5 captures in chronological order.
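The group-by-tag-and-sort step can be sketched as below; the representation of a detection as a (tag, timestamp, position) tuple is an assumption for illustration, not specified in the text:

```python
from collections import defaultdict

def build_trajectories(detections):
    """Group tagged detections by tag, sort each group by timestamp, and
    return one chronological motion trajectory per shooting object.
    `detections` is an iterable of (tag, timestamp, position) tuples."""
    by_tag = defaultdict(list)
    for tag, timestamp, position in detections:
        by_tag[tag].append((timestamp, position))
    return {tag: [pos for _, pos in sorted(points)]
            for tag, points in by_tag.items()}

detections = [("■", 3, "position 3"), ("■", 1, "position 1"),
              ("■", 2, "position 2"), ("●", 1, "position 4")]
print(build_trajectories(detections)["■"])
# ['position 1', 'position 2', 'position 3']
```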
After the motion trajectory of each shooting object is generated, the generated trajectories are mapped onto a preset camera association graph to obtain an incremental association strength graph for the preset duration.
The camera association diagram is used for representing the position relation among the cameras, and the camera association diagram can be preset. The camera association diagram includes cameras required for intelligent analysis. The camera in the camera association diagram can be adjusted according to actual conditions so as to keep the camera association diagram synchronous with the camera arranged in the actual scene. For example, when a camera for intelligent analysis is newly added, the newly added camera is correspondingly added to the camera association diagram. When the cameras used for intelligent analysis are reduced, then the reduced cameras are deleted accordingly in the camera association diagram.
The association strength graph for the cameras represents the association strength between cameras as reflected by the motion trajectories of shooting objects over a period of time. In general, the same camera yields different association strength graphs for different time periods, because the objects it photographs differ between periods.
The association strength between two cameras reflects how frequently motion trajectories of shooting objects occur between them, i.e., how often the two cameras photograph a shooting object consecutively: after one of them photographs the shooting object, the next camera to photograph it is the other one.
For example, after the camera a captures the object a, the camera that next captures the object a is the camera B, and then the motion trajectory of the object a is mapped to the correlation strength between the camera a and the camera B. If the camera a shoots the object a and then the camera C shoots the object a, then the camera B shoots the object a, then the motion trail of the object a is mapped to the association strength between the camera a and the camera C and the association strength between the camera C and the camera B.
If the association strength between two cameras is higher, it can be considered that the shooting object appears more frequently between the two cameras; if the association strength between two cameras is lower, it can be considered that the number of times the photographic subject appears between the two cameras is smaller.
As shown in fig. 3, in an association strength graph the direction of the line segment between two cameras indicates the moving direction of the shooting object; for example, the line from camera 1 to camera 2 indicates movement from camera 1 to camera 2. The number on the connection between two cameras represents the accumulated count of mapped motion trajectories; for example, the number 2780 on the line from camera 1 to camera 2 indicates that 2780 motion trajectories have been mapped onto that edge of the association strength graph. When one more trajectory from camera 1 to camera 2 is mapped, the number on that line becomes 2781.
For example, when one movement trace of the photographic subject 1 is from the camera 1 to the camera 2 and then to the camera 3, the number of the line of the camera 1 directed to the camera 2 in the correlation intensity map of fig. 3 becomes 2781 and the number of the line of the camera 2 directed to the camera 3 becomes 1821 after the movement trace is mapped to the correlation intensity map of fig. 3.
In one way of calculating the association strength between two cameras, the ratio of the count on the connection between them to the total count on all connections from the source camera to other cameras is computed; this ratio can be taken as the association strength between the two cameras. Taking fig. 3 as an example, the count on the line from camera 1 to camera 2 is 2780, and the total count on all lines from camera 1 to other cameras is 2780 + 3680 + 2110 = 8570, so the association strength of camera 1 toward camera 2 is 2780/8570, approximately 32.4%.
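The ratio calculation from fig. 3 can be sketched as below; the edge-count dictionary is an assumed representation of the graph, and the destination cameras for the 3680 and 2110 counts are illustrative guesses, since the text only gives the totals:

```python
def association_strength(edge_counts, src, dst):
    """Ratio of the trajectory count on the src->dst connection to the
    total count on all connections leaving src."""
    total = sum(count for (s, _), count in edge_counts.items() if s == src)
    return edge_counts[(src, dst)] / total

# counts on the lines leaving camera 1 in fig. 3 (total 8570)
edges = {(1, 2): 2780, (1, 3): 3680, (1, 4): 2110}
print(round(association_strength(edges, 1, 2), 3))  # 0.324, i.e. ~32.4%
```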
Following this description of the association strength graph, mapping the motion trajectories of shooting objects within the preset time period yields an incremental association strength graph, which represents the association strength between cameras as reflected by those trajectories. The number on the line between two cameras in the incremental graph is the count of trajectories mapped between them within the preset duration, and from the incremental graph the association strength between any two cameras can be calculated.
After the incremental association strength graph is obtained, the incremental association strength of each camera in it is compared with that camera's accumulated association strength in the preset accumulated association strength graph.
The accumulated association strength graph represents the accumulated association strength between cameras in the normal state, obtained from the cameras' historical image data; that is, the association strengths in the accumulated graph have stabilized, and the association strengths between cameras in the normal state over a period of time can be considered close to those in the accumulated graph.
It is then judged whether the relationship between the incremental association strength and the accumulated association strength satisfies an association strength condition, which can be set as needed. If the condition is satisfied, the analysis data is judged to match the analysis result; if not, the analysis data is judged not to match the analysis result.
In one embodiment, the average of the accumulated association strengths over all covered time periods is calculated. Taking fig. 3 as an example, for the association strength of camera 1 toward camera 2: if all covered time periods span N days, one such association strength is obtained each day, giving N values over the N days; summing them and dividing by N yields the average association strength of camera 1 toward camera 2.
After the average is calculated, the variance can be calculated from the average and the accumulated association strength of each time period. It is then checked whether the absolute difference between the incremental association strength and the average exceeds a preset multiple of the variance: if so, the relationship between the incremental and accumulated association strengths is judged not to satisfy the association strength condition; if not, it is judged to satisfy the condition.
Taking fig. 4 (a) as an example, fig. 4 (a) shows a road section provided with 4 cameras: camera 1, camera 2, camera 3, and camera 4, which photograph the people on the road, i.e., the shooting object is a person. If all covered time periods span N days, then with the 4 cameras operating normally, the historical images of the previous N days are acquired and association strength graphs for the 4 cameras are obtained. Fig. 4 (b) shows the association strength graph for one of the N days: the association strength of camera 1 toward camera 2 is 0.99, camera 1 toward camera 3 is 0.01, camera 2 toward camera 3 is 1, and camera 3 toward camera 4 is 1. After some people pass camera 1, they may be missed by camera 2 due to its shooting angle and be photographed directly by camera 3, which is why camera 1 toward camera 2 is 0.99 and camera 1 toward camera 3 is 0.01. The association strength graphs obtained on different days for these 4 cameras may differ.
Suppose that from the first day to the Nth day, the association strengths of camera 1 toward camera 2 are respectively: 0.99, 0.99, 0.98, 0.95, 0.99, …, 0.99. The N-day average avg_edge_weight of the association strength of camera 1 toward camera 2 is then (0.99 + 0.99 + 0.98 + 0.95 + 0.99 + … + 0.99)/N.
After calculating the average of the association strengths, the variance is calculated using the following formula:

σ = √( (1/N) · Σᵢ₌₁ᴺ (Xᵢ − μ)² )

where σ is the variance, μ is the average association strength (μ = avg_edge_weight), Xᵢ is the association strength of camera 1 toward camera 2 on day i, i is any integer from 1 to N, and N is a positive integer.
Analysis anomaly detection is then performed for camera 1, camera 2, camera 3, and camera 4 with a preset time period of 1 day: the images collected by each camera on the (N+1)th day are analyzed for association strength. The resulting incremental association strengths are as shown in fig. 4 (c): the incremental association strength of camera 1 toward camera 2 is 0.1, and that of camera 1 toward camera 3 is 0.9.
The absolute difference between the incremental association strength of camera 1 toward camera 2 and the average association strength of camera 1 toward camera 2 is calculated as |0.1 − avg_edge_weight|, and this absolute value is compared with the preset multiple of the variance.
For example, with a preset multiple of 3: when |0.1 − avg_edge_weight| is greater than 3 times the variance, the relationship between the incremental and accumulated association strengths is judged not to satisfy the association strength condition, and alarm information for the analysis abnormality can be sent. When |0.1 − avg_edge_weight| is not greater than 3 times the variance, the relationship is judged to satisfy the condition, i.e., the analysis for the (N+1)th day is normal.
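The mean-and-deviation check above can be sketched as follows, taking σ as the square root of the mean squared deviation (consistent with a k·σ comparison); the six-day history is a shortened stand-in for the N daily values in the example:

```python
import math

def strength_is_abnormal(daily_strengths, incremental, multiple=3):
    """Compare |incremental - mean| against `multiple` times sigma, where
    mean and sigma are computed from the N daily accumulated strengths."""
    n = len(daily_strengths)
    mu = sum(daily_strengths) / n                              # avg_edge_weight
    sigma = math.sqrt(sum((x - mu) ** 2 for x in daily_strengths) / n)
    return abs(incremental - mu) > multiple * sigma

history = [0.99, 0.99, 0.98, 0.95, 0.99, 0.99]  # camera 1 -> camera 2, per day
print(strength_is_abnormal(history, 0.1))   # True: condition not satisfied
print(strength_is_abnormal(history, 0.99))  # False: analysis normal
```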
In one embodiment, when the relationship between the incremental correlation strength and the cumulative correlation strength meets the correlation strength condition, the incremental correlation strength can be increased to the cumulative correlation strength so as to update the cumulative correlation strength, and a new cumulative correlation strength can be obtained after updating, and the new cumulative correlation strength can be used as the next correlation strength analysis for the cameras.
Specifically, the motion trajectories between cameras represented by the incremental association strength are added to the motion trajectories between cameras represented by the accumulated association strength. After the incremental association strength is added to the accumulated association strength, a new association strength map is obtained, which is the updated association strength map; based on the new map, the association strength between the cameras can be recalculated, and the resulting association strength is the new accumulated association strength.
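A minimal sketch of this update step, assuming the association strength maps are represented as dictionaries keyed by ordered camera pairs (a representation the patent does not prescribe):

```python
def update_cumulative_strengths(cumulative, incremental):
    """Add the incremental association strengths (one period's motion
    trajectories between cameras) into the cumulative association
    strength map, returning the updated map."""
    updated = dict(cumulative)  # leave the original map untouched
    for edge, strength in incremental.items():
        updated[edge] = updated.get(edge, 0.0) + strength
    return updated

cumulative = {("cam1", "cam2"): 5.4, ("cam1", "cam3"): 2.1}
incremental = {("cam1", "cam2"): 0.8, ("cam2", "cam4"): 0.3}
new_map = update_cumulative_strengths(cumulative, incremental)
```

Edges present only in the incremental map (here cam2 pointing to cam4) are created in the new accumulated map.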
With the technical solution provided by the embodiments of the present application, images within the preset duration are analyzed for the preset analysis object to obtain corresponding analysis data, which characterize the camera with respect to the preset analysis object within the preset duration; therefore, whether the analysis for the preset analysis object within the preset duration is abnormal or normal is reflected in the obtained analysis data. The analysis result characterizes a comprehensive index of the camera for the preset analysis object in the normal working state, and in general the analysis result for each preset analysis object in any time period remains relatively stable while the camera works normally. Using the analysis result as the criterion for judging whether image analysis for the camera is abnormal, and comparing it with the analysis data, therefore allows analysis anomaly detection to be carried out more comprehensively for the camera.
Corresponding to the above embodiment of the analysis anomaly detection method, an embodiment of the present application further provides an analysis anomaly detection apparatus, as shown in fig. 5, including: an acquisition module 510, an analysis module 520, a judging module 530 and a determining module 540.
An acquisition module 510, configured to acquire images acquired by the camera within a preset duration;
the analysis module 520 is configured to perform analysis on the acquired image for a preset analysis object, so as to obtain analysis data for characterizing the camera for the preset analysis object within a preset duration;
the judging module 530 is configured to judge whether the analysis data match the analysis result of the camera for the preset analysis object in the normal working state, where the time period targeted by the analysis result has the same duration as the time period targeted by the analysis data, and the analysis result is used for characterizing a comprehensive index for the preset analysis object over all targeted time periods;
a determining module 540, configured to determine that an abnormality occurs in the image analysis for the camera when the judgment result of the judging module 530 is no.
In one embodiment, the analysis abnormality detection apparatus may further include:
the adding module is configured to determine that the image analysis for the camera is normal if the analysis data match the analysis result of the camera for the preset analysis object in the normal working state, and to add the analysis data to the analysis result to obtain a new analysis result of the camera for the preset analysis object in the normal working state.
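The adding module's update can be sketched as an incremental running average; the patent does not fix the aggregation, so the incremental-mean formula below is an assumption:

```python
def update_analysis_result(average, period_count, new_data):
    """Fold one matching period's analysis data into the running average
    that serves as the analysis result for later periods (an
    incremental-mean sketch; the patent does not fix the aggregation)."""
    new_count = period_count + 1
    new_average = average + (new_data - average) / new_count
    return new_average, new_count

# Average object count was 10.0 over 4 periods; a new matching period reports 15.0
avg, n = update_analysis_result(10.0, 4, 15.0)
```

Only data judged to match the existing result are folded in, so an abnormal period does not shift the baseline.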
In one embodiment, the preset analysis object is the number of shooting objects, and the analysis result is the average number of shooting objects captured by the camera in the normal working state; the analysis module 520 is specifically configured to:
performing image recognition on the acquired image, and determining the number of shooting objects in the acquired image;
the judging module 530 is specifically configured to:
judging whether the determined number is smaller than the average number; if so, judging that the analysis data do not match the analysis result, and if not, judging that the analysis data match the analysis result.
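A sketch of this count-based check, under the assumption that per-image detection counts are already available from the image recognition step:

```python
def detect_count_anomaly(per_image_counts, average_count):
    """Return True (analysis data do not match the analysis result) when
    the number of shooting objects recognized within the preset duration
    falls below the camera's historical average count."""
    total = sum(per_image_counts)  # objects recognized across all images
    return total < average_count
```

For example, `detect_count_anomaly([3, 4, 2], 15)` flags a mismatch, while a total at or above the average does not.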
In one embodiment, the preset analysis object is the matching rate between a shooting object and a preset tag object, and the analysis result is the average matching rate between the shooting objects captured by the camera in the normal working state and the tag objects; the analysis module 520 is specifically configured to:
identifying shooting objects in the acquired image, and counting the number of the identified shooting objects to be used as a first number;
matching the identified shooting object with the tag object;
counting the number of successfully matched shooting objects in the identified shooting objects, and taking the number as a second number;
determining the ratio of the second quantity to the first quantity as a matching rate;
The judging module 530 is specifically configured to:
judging whether the matching rate is smaller than the average matching rate; if so, judging that the analysis data do not match the analysis result, and if not, judging that the analysis data match the analysis result.
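The first-number/second-number computation above can be sketched as follows; representing shooting objects and tag objects as plain identifiers is an illustrative assumption:

```python
def compute_match_rate(recognized_objects, tag_objects):
    """Matching rate = second number / first number, where the first
    number counts all recognized shooting objects and the second number
    counts those that successfully match a preset tag object."""
    first_number = len(recognized_objects)
    if first_number == 0:
        return 0.0
    second_number = sum(1 for obj in recognized_objects if obj in tag_objects)
    return second_number / first_number

def match_rate_mismatch(match_rate, average_match_rate):
    """Analysis data fail to match the analysis result when the current
    matching rate is smaller than the historical average matching rate."""
    return match_rate < average_match_rate

rate = compute_match_rate(["a", "b", "c", "d"], {"a", "b", "c"})  # 3 of 4 match
```

In practice the membership test would be replaced by a feature-similarity match; set membership stands in for it here.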
In one embodiment, the tag object is obtained by:
when a shooting object among the identified shooting objects fails to match, marking the failed shooting object with a new label;
the shooting object carrying the new tag is added to the tag object.
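A sketch of this tag-update step, assuming the tag objects are kept in a dictionary mapping generated labels to object features; the `tag_<n>` label format is an assumption:

```python
import itertools

def update_tag_objects(failed_objects, tag_objects):
    """Mark each shooting object that failed to match with a fresh label
    and add it to the tag-object collection, so it can be matched in
    later analysis periods."""
    counter = itertools.count(len(tag_objects))  # continue label numbering
    for obj in failed_objects:
        tag_objects[f"tag_{next(counter)}"] = obj
    return tag_objects

tags = update_tag_objects(["featA", "featB"], {"tag_0": "featX"})
```

The collection therefore grows over time: an object unseen in earlier periods becomes a known tag object for subsequent matching.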
In one embodiment, the preset analysis object is the association strength between cameras, and the analysis result is an accumulated association strength map used for characterizing the association strength between the camera and other cameras in the normal working state; the analysis module 520 is specifically configured to:
identifying shooting objects in the acquired images, and classifying and marking the identified shooting objects by using labels;
extracting a shooting object of the same label from the marked shooting objects, and generating a motion trail aiming at the shooting object;
mapping the generated motion trail to a preset camera association graph to obtain an incremental association strength graph within a preset time period, wherein the camera association graph is used for representing the position relationship among cameras, and the incremental association strength graph is used for representing the association strength among cameras reflected based on the motion trail of a shooting object within the preset time period;
The judging module 530 is specifically configured to:
comparing the incremental correlation intensity of each camera in the incremental correlation intensity map with the cumulative correlation intensity of the camera in the cumulative correlation intensity map;
judging whether the relation between the incremental association strength and the accumulated association strength meets the association strength condition, if so, judging that the analysis data is matched with the analysis result, and if not, judging that the analysis data is not matched with the analysis result.
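A sketch of mapping motion trajectories onto an incremental association strength map, assuming each trajectory is the ordered list of cameras that one labeled shooting object passed through, and that each camera-to-camera transition contributes one unit of strength (the unit weighting is an assumption):

```python
def build_incremental_strength_map(trajectories):
    """Map motion trajectories onto camera-to-camera edges: each
    consecutive camera pair in a trajectory adds one unit of
    association strength for that directed edge."""
    strengths = {}
    for cameras in trajectories:
        for src, dst in zip(cameras, cameras[1:]):  # consecutive pairs
            strengths[(src, dst)] = strengths.get((src, dst), 0) + 1
    return strengths

# Two labeled objects: one moves cam1 -> cam2 -> cam3, the other cam1 -> cam2
incremental = build_incremental_strength_map(
    [["cam1", "cam2", "cam3"], ["cam1", "cam2"]])
```

Each edge of the resulting map can then be compared against the corresponding edge of the accumulated map, as the judging module describes.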
In one embodiment, the judging module 530 is specifically configured to:
calculating an average value of the accumulated association strength in all the aimed time periods;
calculating a variance based on the average and the accumulated correlation strength;
calculating whether the absolute value of the difference between the incremental correlation strength and the average value is larger than a preset multiple of the variance;
if yes, judging that the relation between the incremental association strength and the accumulated association strength does not meet the association strength condition; if not, judging that the relation between the incremental association strength and the accumulated association strength meets the association strength condition.
With the technical solution provided by the embodiments of the present application, images within the preset duration are analyzed for the preset analysis object to obtain corresponding analysis data, which characterize the camera with respect to the preset analysis object within the preset duration; therefore, whether the analysis for the preset analysis object within the preset duration is abnormal or normal is reflected in the obtained analysis data. The analysis result characterizes a comprehensive index of the camera for the preset analysis object in the normal working state, and in general the analysis result for each preset analysis object in any time period remains relatively stable while the camera works normally. Using the analysis result as the criterion for judging whether image analysis for the camera is abnormal, and comparing it with the analysis data, therefore allows analysis anomaly detection to be carried out more comprehensively for the camera.
Corresponding to the above embodiment of the analysis anomaly detection method, the embodiment of the present application further provides an electronic device, as shown in fig. 6, including a processor 610, a communication interface 620, a memory 630, and a communication bus 640, where the processor 610, the communication interface 620, and the memory 630 complete communication with each other through the communication bus 640;
a memory 630 for storing a computer program;
the processor 610, when executing the program stored in the memory 630, performs the following steps:
acquiring images acquired by a camera within a preset time period;
analyzing the acquired images for a preset analysis object to obtain analysis data characterizing the camera for the preset analysis object within the preset duration;
judging whether the analysis data match the analysis result of the camera for the preset analysis object in the normal working state, wherein the time period targeted by the analysis result has the same duration as the time period targeted by the analysis data, and the analysis result characterizes a comprehensive index for the preset analysis object over all targeted time periods;
if they do not match, determining that an abnormality has occurred in the image analysis for the camera.
In the technical solution provided by this embodiment of the application, images collected by the camera within the preset duration are acquired; the collected images are analyzed in a preset analysis mode to obtain an analysis result; it is judged whether the analysis result satisfies the analysis index corresponding to the preset analysis mode, the analysis index being an index, obtained from the historical image data of the camera for the preset analysis mode, that characterizes the normal state of the camera; if not, an analysis anomaly is determined. By comparing the analysis result with an analysis index characterizing the normal state, an analysis result obtained from unclear images fails to satisfy the index; thus, when factors such as the shooting angle, exposure time and aperture of the camera affect intelligent analysis, the influence is reflected in the obtained analysis result, which then fails to satisfy the analysis index, and the analysis anomaly can be determined.
The communication bus of the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include Random Access Memory (RAM) or Non-Volatile Memory (NVM), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
Corresponding to the above embodiment of the analysis anomaly detection method, the embodiment of the application further provides a machine-readable storage medium storing machine executable instructions, which when invoked and executed by a processor, cause the processor to implement the above analysis anomaly detection method.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the analysis anomaly detection apparatus, electronic device, and machine-readable storage medium embodiments, since they are substantially similar to the analysis anomaly detection method embodiments, the description is relatively simple, and reference should be made to the description of the method embodiments in part.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. that are within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (10)

1. A method for detecting an analysis anomaly, the method comprising:
acquiring images acquired by a camera within a preset time period;
analyzing the acquired image aiming at a preset analysis object to obtain analysis data which characterizes the camera aiming at the preset analysis object in the preset duration;
judging whether the analysis data match the analysis result of the camera for the preset analysis object in a normal working state, wherein the time period targeted by the analysis result has the same duration as the time period targeted by the analysis data, and the analysis result is used for characterizing a comprehensive index for the preset analysis object over all targeted time periods;
if the analysis data do not match, determining that an abnormality occurs in the image analysis for the camera;
the preset analysis mode is the matching rate between a shooting object and a preset label object, and the analysis result is the average matching rate of the shooting object and the label object shot by the camera in a normal working state; analyzing the acquired image for a preset analysis object to obtain analysis data representing the camera for the preset analysis object in the preset duration, wherein the analysis data comprises the following steps: identifying the shooting objects in the acquired image, and counting the number of the identified shooting objects as a first number; matching the identified shooting object with the tag object; counting the number of successfully matched shooting objects in the identified shooting objects, and taking the number as a second number; determining a ratio of the second number to the first number as the match rate; the judging whether the analysis data is matched with the analysis result of the camera aiming at the preset analysis object in the normal working state comprises the following steps: judging whether the matching rate is smaller than the average matching rate, if so, judging that the analysis data is not matched with the analysis result, and if not, judging that the analysis data is matched with the analysis result;
or alternatively,
the preset analysis object is the association strength between the cameras, and the analysis result is an accumulated association strength graph for representing the association strength between the camera and other cameras in a normal working state; analyzing the acquired image for a preset analysis object to obtain analysis data representing the camera for the preset analysis object in the preset duration, wherein the analysis data comprises the following steps: identifying the shooting objects in the acquired images, and classifying and marking the identified shooting objects by using labels; extracting the shooting objects of the same label from the marked shooting objects, and generating a motion trail aiming at the shooting objects; mapping the generated motion trail to a preset camera association graph to obtain an incremental association strength graph within the preset duration, wherein the camera association graph is used for representing the position relationship among the cameras, and the incremental association strength graph is used for representing the association strength among the cameras reflected based on the motion trail of the shooting object within the preset duration; the judging whether the analysis data is matched with the analysis result of the camera aiming at the preset analysis object in the normal working state comprises the following steps: comparing the incremental correlation intensity of each camera in the incremental correlation intensity map with the cumulative correlation intensity of the camera in the cumulative correlation intensity map; judging whether the relation between the incremental association strength and the accumulated association strength meets an association strength condition, if so, judging that the analysis data is matched with the analysis result, and if not, judging that the analysis data is not matched with the analysis result.
2. The method according to claim 1, wherein the method further comprises:
if the analysis data is matched with the analysis result of the camera aiming at the preset analysis object in the normal working state, judging that the image analysis aiming at the camera is normal; and adding the analysis data to the analysis result to obtain a new analysis result of the camera aiming at the preset analysis object in a normal working state.
3. The method of claim 1, wherein the tag object is obtained by:
when the matching of the identified shooting objects fails, marking the shooting objects with failed matching by using a new label;
and adding the shooting object carrying the new label into the label object.
4. The method of claim 1, wherein the determining whether the relationship of the incremental strength of association and the cumulative strength of association satisfies the strength of association condition comprises:
calculating an average value of the cumulative correlation strength over all targeted time periods;
calculating a variance based on the average and the cumulative correlation strength;
Calculating whether the absolute value of the difference between the incremental correlation strength and the average value is larger than a preset multiple of the variance;
if yes, judging that the relation between the incremental association strength and the accumulated association strength does not meet the association strength condition; if not, judging that the relation between the incremental association strength and the accumulated association strength meets the association strength condition.
5. An analysis abnormality detection apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring images acquired by the camera within a preset time length;
the analysis module is used for analyzing the acquired image aiming at a preset analysis object to obtain analysis data which characterizes the camera aiming at the preset analysis object in the preset duration;
the judging module is configured to judge whether the analysis data match the analysis result of the camera for the preset analysis object in a normal working state, wherein the time period targeted by the analysis result has the same duration as the time period targeted by the analysis data, and the analysis result is used for characterizing a comprehensive index for the preset analysis object over all targeted time periods;
the determining module is configured to determine that the image analysis for the camera is abnormal when the judgment result of the judging module is no;
the preset analysis mode is the matching rate between a shooting object and a preset label object, and the analysis result is the average matching rate of the shooting object and the label object shot by the camera in a normal working state; the analysis module is specifically used for: identifying the shooting objects in the acquired image, and counting the number of the identified shooting objects as a first number; matching the identified shooting object with the tag object; counting the number of successfully matched shooting objects in the identified shooting objects, and taking the number as a second number; determining a ratio of the second number to the first number as the match rate; the judging module is specifically configured to: judging whether the matching rate is smaller than the average matching rate, if so, judging that the analysis data is not matched with the analysis result, and if not, judging that the analysis data is matched with the analysis result;
or alternatively, the first and second heat exchangers may be,
the preset analysis object is the association strength between the cameras, and the analysis result is an accumulated association strength graph for representing the association strength between the camera and other cameras in a normal working state; the analysis module is specifically used for: identifying the shooting objects in the acquired images, and classifying and marking the identified shooting objects by using labels; extracting the shooting objects of the same label from the marked shooting objects, and generating a motion trail aiming at the shooting objects; mapping the generated motion trail to a preset camera association graph to obtain an incremental association strength graph within the preset duration, wherein the camera association graph is used for representing the position relationship among the cameras, and the incremental association strength graph is used for representing the association strength among the cameras reflected based on the motion trail of the shooting object within the preset duration; the judging module is specifically configured to: comparing the incremental correlation intensity of each camera in the incremental correlation intensity map with the cumulative correlation intensity of the camera in the cumulative correlation intensity map; judging whether the relation between the incremental association strength and the accumulated association strength meets an association strength condition, if so, judging that the analysis data is matched with the analysis result, and if not, judging that the analysis data is not matched with the analysis result.
6. The apparatus of claim 5, wherein the apparatus further comprises:
the adding module is used for judging that the image analysis of the camera is normal if the analysis data is matched with the analysis result of the camera aiming at the preset analysis object in the normal working state; and adding the analysis data to the analysis result to obtain a new analysis result of the camera aiming at the preset analysis object in a normal working state.
7. The apparatus of claim 5, wherein the tag object is obtained by:
when the matching of the identified shooting objects fails, marking the shooting objects with failed matching by using a new label;
and adding the shooting object carrying the new label into the label object.
8. The apparatus of claim 5, wherein the determining module is specifically configured to:
calculating an average value of the accumulated association strength in all the aimed time periods;
calculating a variance based on the average and the cumulative correlation strength;
calculating whether the absolute value of the difference between the incremental correlation strength and the average value is larger than a preset multiple of the variance;
If yes, judging that the relation between the incremental association strength and the accumulated association strength does not meet the association strength condition; if not, judging that the relation between the incremental association strength and the accumulated association strength meets the association strength condition.
9. An electronic device comprising a processor and a memory, wherein,
a memory for storing a computer program;
a processor for carrying out the method steps of any one of claims 1-4 when executing a program stored on a memory.
10. A machine-readable storage medium, characterized in that it has stored therein a computer program which, when executed by a processor, implements the method steps of any of claims 1-4.
CN201910239272.XA 2019-03-27 2019-03-27 Analysis anomaly detection method and device Active CN111753860B (en)

Publications (2)

Publication Number Publication Date
CN111753860A CN111753860A (en) 2020-10-09
CN111753860B true CN111753860B (en) 2024-02-02


