CN117576634A - Anomaly analysis method, device and storage medium based on density detection


Info

Publication number
CN117576634A
Authority
CN
China
Prior art keywords
target
image
target area
area
anomaly analysis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410062419.3A
Other languages
Chinese (zh)
Other versions
CN117576634B (en)
Inventor
高美
蔡旗
李中振
冯长驹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202410062419.3A
Publication of CN117576634A
Application granted
Publication of CN117576634B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an anomaly analysis method, an anomaly analysis device and a storage medium based on density detection. The anomaly analysis method based on density detection comprises the following steps: performing feature detection processing on the acquired image to be detected to obtain a target area in the image to be detected, wherein the target area contains a target object; determining the actual area of the target area based on the image area of the target area and a preset perspective transformation parameter, wherein the perspective transformation parameter is used for representing the perspective relation between the image area and the actual area of the same target area; determining a target density of target objects in the target area based on the actual area and the number of target objects; and in response to the target density being greater than a preset density threshold, performing anomaly analysis on the target area to obtain an anomaly analysis result. This scheme can improve the analysis efficiency for abnormal events.

Description

Anomaly analysis method, device and storage medium based on density detection
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an anomaly analysis method, apparatus, and storage medium based on density detection.
Background
With the continuous improvement of socioeconomic development, public areas such as tourist attractions, squares and parks have become important places for people's daily leisure and entertainment; during holidays in particular, crowd flow in these public areas is even more concentrated.
To improve safety protection, current practice analyzes whether an abnormal event occurs in a target area by counting the trend of crowd density changes in that area, so as to better detect and control pedestrian flow, thereby greatly reducing the risk of crowd-related accidents and achieving the goal of safety protection.
However, in scenes where the crowd is very dense, detecting abnormal events with current image processing methods is prone to reduced detection efficiency because of the many influencing factors present in such scenes.
Disclosure of Invention
The present application provides an anomaly analysis method based on density detection, as well as a corresponding apparatus, electronic device and computer-readable storage medium.
The first aspect of the present application provides an anomaly analysis method based on density detection, including: performing feature detection processing on the acquired image to be detected to obtain a target area in the image to be detected, wherein the target area contains a target object; determining the actual area of the target area based on the image area of the target area and a preset perspective transformation parameter, wherein the perspective transformation parameter is used for representing the perspective relation between the image area and the actual area of the same target area; determining a target density of the target object in the target area based on the actual area and the number of target objects; and in response to the target density being greater than a preset density threshold, performing anomaly analysis on the target area to obtain an anomaly analysis result.
In an embodiment, the step of determining the actual area of the target area based on the image area of the target area and a preset perspective transformation parameter includes: determining the image area of the target area based on the pixel proportion of the target area in the image to be detected; and carrying out a product operation based on the image area and the perspective transformation parameter to obtain the actual area of the target area, wherein the perspective transformation parameter is determined by the image pixel distance and the actual physical distance between pixel points at different positions in the acquired calibration image.
In an embodiment, the perspective transformation parameters include a perspective transformation matrix, and before the step of multiplying based on the image area and the perspective transformation parameters to obtain the actual area of the target area, the method further includes: acquiring pixel size data of a plurality of calibration areas in the calibration image, wherein the calibration areas comprise calibration points, and the calibration points have corresponding pixel coordinate data; and performing matrix operation based on the obtained actual size data of each calibration area, the pixel size data and the pixel coordinate data to obtain the perspective transformation matrix.
In an embodiment, the step of performing feature detection processing on the obtained image to be detected to obtain the target area in the image to be detected includes: inputting the image to be detected into a pre-trained segmentation model to obtain image characteristics of the image to be detected, which are output by the segmentation model; and determining a target area in the image to be detected and a target object in the target area based on the image characteristics.
In an embodiment, the step of performing anomaly analysis on the target area in response to the target density being greater than a preset density threshold to obtain an anomaly analysis result includes: determining that a congestion event occurs in the target area in response to the target density of the target area being greater than the density threshold; and respectively carrying out static anomaly analysis and dynamic anomaly analysis on the target area with the congestion event to obtain the anomaly analysis result.
In an embodiment, the step of performing static anomaly analysis and dynamic anomaly analysis on the target area where the congestion event occurs to obtain the anomaly analysis result includes: inputting the target area into a pre-trained abnormal target detection model for static abnormality analysis to obtain a static detection result output by the abnormal target detection model; inputting the target area into a pre-trained behavior detection model for dynamic anomaly analysis to obtain a dynamic detection result output by the behavior detection model; and determining the abnormal analysis result based on the static detection result and the dynamic detection result.
In an embodiment, the step of determining the anomaly analysis result based on the static detection result and the dynamic detection result includes: if the static detection result represents that a preset abnormal object exists in the target area, and/or if the dynamic detection result represents that a preset abnormal behavior exists in the target object in the target area, the abnormal analysis result is obtained, and the abnormal analysis result represents that an abnormal event occurs in the target area; and reporting the target area where the congestion event occurs and the abnormal event.
In an embodiment, the step of performing static anomaly analysis and dynamic anomaly analysis on the target area where the congestion event occurs to obtain the anomaly analysis result includes: carrying out feature decomposition processing on the image features of the target area in each image to be detected, which are acquired in a preset period, so as to obtain branch features with multiple scales; weighting and fusing all the branch characteristics based on the fusion weights corresponding to all the branch characteristics to obtain fusion characteristics; and determining a dynamic detection result of the dynamic abnormality analysis based on the fusion characteristic.
A second aspect of the present application provides an anomaly analysis device based on density detection, including: the detection module is used for carrying out feature detection processing on the acquired image to be detected to obtain a target area in the image to be detected, wherein the target area contains a target object; the area determining module is used for determining the actual area of the target area based on the image area of the target area and a preset perspective transformation parameter, wherein the perspective transformation parameter is used for representing the perspective relation between the image area and the actual area of the same target area; a density determination module for determining a target density of the target object in the target area based on the actual area and the number of target objects; and the anomaly analysis module is used for responding to the condition that the target density is larger than a preset density threshold value, and carrying out anomaly analysis on the target area to obtain an anomaly analysis result.
A third aspect of the present application provides an electronic device, including a memory and a processor, where the processor is configured to execute program instructions stored in the memory, so as to implement the anomaly analysis method based on density detection.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions which, when executed by a processor, implement the above-described anomaly analysis method based on density detection.
According to this scheme, feature detection processing is performed on the acquired image to be detected to obtain a target area containing a target object in the image to be detected; the image area of the target area in the image to be detected is converted into the actual area of the target area in the real scene based on preset perspective transformation parameters; the target density of target objects in the target area is determined based on the number of target objects in the target area and the actual area of the target area; whether the target area is abnormal is judged according to the magnitude of the target density; and anomaly analysis is then performed on target areas whose target density is greater than the density threshold, obtaining the anomaly analysis result and improving the accuracy and efficiency of the anomaly analysis.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the technical aspects of the application.
FIG. 1 is a flow diagram of an exemplary embodiment of a density detection based anomaly analysis method of the present application;
FIG. 2 is a schematic perspective calibration diagram of an exemplary anomaly analysis method based on density detection of the present application;
FIG. 3 is a schematic diagram of an exemplary anomaly analysis flow in the anomaly analysis method based on density detection of the present application;
FIG. 4 is a schematic diagram of branch feature fusion in the density detection-based anomaly analysis method of the present application;
FIG. 5 is a schematic illustration of the attention feature fusion process of a behavior detection model in the density detection-based anomaly analysis method of the present application;
FIG. 6 is a schematic diagram of an attention feature fusion module in the density detection-based anomaly analysis method of the present application;
FIG. 7 is a block diagram of an anomaly analysis device based on density detection, as shown in an exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of an embodiment of an electronic device of the present application;
FIG. 9 is a schematic structural diagram of an embodiment of the computer-readable storage medium of the present application.
Detailed Description
The following describes the embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship. Further, "a plurality" herein means two or more than two. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
For ease of understanding, one application scenario of the density-detection-based anomaly analysis method of the present application is first described by way of example. The method judges whether an abnormal event exists in a region under observation by means of density detection; it may include, but is not limited to, acquiring the region where target objects are located with a segmentation model, calculating the density of target objects in that region, detecting whether abnormal objects exist in the region with an abnormal-target detection model, and detecting whether abnormal behavior exists in the region with a behavior detection model. Because the region is analyzed for anomalies from multiple dimensions, abnormal events occurring there can be detected accurately and effectively, improving security efficiency.
Referring to fig. 1, fig. 1 is a flow chart illustrating an exemplary embodiment of an anomaly analysis method based on density detection according to the present application. Specifically, the method may include the steps of:
step S110, performing feature detection processing on the acquired image to be detected to obtain a target area in the image to be detected, wherein the target area contains a target object.
There may be one image to be detected or several; that is, images to be detected may be extracted from an acquired image sequence (such as continuously captured video frames), or a single image may be acquired on its own, which is not limited here.
Feature detection processing means obtaining the features of the target object to be detected in the image, determining the region where the target object is located, and thereby obtaining the target area, that is, the region where abnormal events are prone to occur. Detection of the target object may include, but is not limited to, using a pre-trained segmentation model to separate the detected region containing the target object from the other regions of the image to be detected, where the segmentation model is trained on sample images containing features of the target object. After the image to be detected is input into the segmentation model, a feature map output by the model is obtained; each pixel point or pixel region (comprising several pixel points) in this feature map has a corresponding feature value (including but not limited to a computed maximum, extremum, mean, etc.) indicating the probability that a target object is present at that point or region.
During training of the segmentation model, the target objects in the sample images are labeled and a density map of the target objects is generated as the ground truth of the training samples; the model is trained until convergence, yielding the final segmentation model.
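As an illustration of this ground-truth construction, the following sketch places one unit of mass at each annotated head position and spreads it with a Gaussian kernel; the per-head kernel size is taken from the perspective calibration, while the sigma = size/4 heuristic, the function name and the SciPy-based implementation are assumptions for illustration rather than details fixed by this disclosure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(shape, head_points, head_sizes):
    """Build a ground-truth density map: one unit of mass per annotated head.

    shape       -- (rows, cols) of the sample image
    head_points -- list of (row, col) head-center annotations
    head_sizes  -- per-head kernel size in pixels (e.g. estimated from the
                   perspective calibration); sigma below is an assumed heuristic
    """
    dmap = np.zeros(shape, dtype=np.float32)
    for (r, c), size in zip(head_points, head_sizes):
        impulse = np.zeros(shape, dtype=np.float32)
        impulse[int(r), int(c)] = 1.0
        # Convolve the impulse with a Gaussian whose spread follows the
        # estimated head size at this image row (assumed: sigma = size / 4).
        dmap += gaussian_filter(impulse, sigma=max(size / 4.0, 1.0))
    return dmap  # dmap.sum() approximately equals the number of heads
```

The sum of the map stays close to the annotated head count, which is what allows the segmentation model's output to be read as per-pixel presence probabilities later on.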
It can be understood that, taking the case where the target object is a person as an example, traditional density detection methods mainly determine the distribution of target objects by detecting head features; for example, the standard deviation of the Gaussian kernel is set using the average distance between a head position and its k nearest neighboring head positions, so as to determine the Gaussian distribution of the target object. Such a method cannot accurately represent the actual distribution of head features (target objects) in a region, which affects model accuracy. In addition, when counting the density of target objects in an area, the conventional approach is to judge whether a target area is dense by checking whether the number of target objects in the target area of the image to be detected reaches a number threshold; in practice, however, because the "near objects appear large, far objects appear small" characteristic formed by the perspective relation during image acquisition is not considered, the actual physical area of the dense region is unknown, and the real crowd density (the distribution of target objects) cannot be reflected.
Further, in the process of labeling the target objects in a sample image, the perspective relation between the sample image and the real scene can be calibrated based on the image pixel distance between at least two pixel points in the sample image and the actual physical distance between the position coordinates in the real scene corresponding to those pixel points; the target objects are then labeled on the basis of this perspective relation. In this way, measurement and learning during model training are performed with respect to the perspective relation between image and reality, the Gaussian distribution of each target object (such as a head region) can be computed adaptively, and the accuracy of the training data is ensured; accordingly, the trained segmentation model also takes the perspective relation of the image to be detected into account when processing it.
Step S120, determining an actual area of the target area based on the image area of the target area and a preset perspective transformation parameter, where the perspective transformation parameter is used to characterize a perspective relationship between the image area and the actual area of the same target area.
As described in connection with the preceding steps, so that the data used in the method and the results obtained accord with the real scene, training during the model training process is performed on training data that has undergone perspective calibration, yielding the perspective transformation parameters; when density detection is performed on images to be detected after model training is finished, the perspective transformation parameters are likewise used to process the relevant data. The perspective transformation parameter may be a perspective transformation matrix (similar to a homography matrix). Each image acquisition point has its corresponding perspective transformation parameters, which may be stored in the image acquisition device at that point; alternatively, a binding relation between each image acquisition point (or device) and its perspective transformation parameters may be established and stored centrally in a database, so that devices such as terminals and/or servers can query the perspective transformation parameters of a given image acquisition device in the database and, based on them, perform perspective transformation processing on all or part of an image to be detected acquired by that device.
For example, suppose the image coordinates of the central pixel point of a target area in the image to be detected are (x, y), the physical height and width of the target area in the corresponding real scene are (H, W), and the pixel height and width of the target area in the image are (h, w). With the calibrated perspective transformation parameters written as the matrix

M = [[k1, b1], [k2, b2]],

the perspective relationship between the image size and the actual size of the same target area can be expressed as

H = (k1·y + b1)·h,  W = (k2·y + b2)·w.

Therefore, perspective transformation processing is performed based on the image area occupied by the target area in the image to be detected (for example, determined from the number of pixels of the target area) and the acquired perspective transformation parameters, so as to obtain the actual area of the target area in the real scene.
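A minimal sketch of this conversion, assuming the linear row-dependent scale model written above (k1, b1, k2, b2 are the calibrated parameters; the function names are illustrative):

```python
def real_size(y, h, w, k1, b1, k2, b2):
    """Map a box of pixel size (h, w) centered at image row y to real metres,
    per the calibrated relation H = (k1*y + b1)*h, W = (k2*y + b2)*w."""
    return (k1 * y + b1) * h, (k2 * y + b2) * w

def real_area(y, pixel_count, k1, b1, k2, b2):
    """Approximate the actual area (square metres) of a region occupying
    `pixel_count` pixels around row y: each pixel is assumed to cover
    (k1*y + b1) by (k2*y + b2) metres at that row."""
    return pixel_count * (k1 * y + b1) * (k2 * y + b2)
```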
Step S130, determining the target density of the target object in the target area based on the actual area and the number of target objects.
In the description of the above steps, before training the segmentation model, known feature information of the target object may be labeled in advance when labeling the target object in the sample image. For example, when detecting head features (one head region counts as one person), the actual height of an adult head is about 0.25 m and its width about 0.23 m; these values can serve as one item of head feature information. For a detected head region (which can be represented by a rectangular detection frame), the coordinates (x, y) of the center point of the head region in the image to be detected can be obtained, and the pixel size (h, w) occupied by the head rectangle in the image to be detected can be estimated from the perspective relation as h = 0.25/(k1·y + b1) and w = 0.23/(k2·y + b2). Further, in this embodiment, the density map used to train the neural network model may be defined by convolving an impulse function with a Gaussian kernel, taking the estimated head pixel size (h, w) as the Gaussian kernel size and a standard deviation derived from that size. The generated density map is taken as the ground truth of the training samples, and training yields the segmentation model, in whose output feature map the value of each pixel point or pixel region represents the predicted probability that a head is present at that point. If the predicted probability is greater than a preset probability threshold, it is judged that a head (equivalently, a target object) exists there, and the number of target objects in the target area is counted; the target density of target objects in the target area can then be obtained from the ratio between the number of target objects in the target area and the actual area of that target area.
In the method provided by the application, feature extraction is performed on the image to be detected using the trained segmentation model to obtain a feature map; connected domains whose feature values are non-zero are searched in the feature map to obtain target connected domains, that is, the region where the target object is located (the target area) is determined based on the features of the target object. The number of target objects (for example, the number of people) in each target area is counted, and the real area of the target area is calculated from the perspective relation calibrated in advance for the image acquisition point and the image area of the target area, thereby obtaining the target density (crowd density) of every connected domain.
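A sketch of this region-wise computation under the same assumptions (the 0.5 probability threshold, the use of the mean row as the representative position, and counting by integrating the map are illustrative choices, not values fixed by the disclosure):

```python
import numpy as np
from scipy import ndimage

def region_densities(prob_map, k1, b1, k2, b2, prob_thresh=0.5):
    """Label connected domains in the segmentation output and compute each
    domain's target density (targets per square metre)."""
    mask = prob_map > prob_thresh        # pixels judged to contain a target
    labels, n = ndimage.label(mask)      # target connected domains
    results = []
    for idx in range(1, n + 1):
        rows, cols = np.nonzero(labels == idx)
        y = rows.mean()                  # representative image row
        # Actual area via the calibrated perspective scale (see sketch above).
        area_m2 = rows.size * (k1 * y + b1) * (k2 * y + b2)
        count = prob_map[rows, cols].sum()  # integrating the map ~ head count
        results.append((idx, count / max(area_m2, 1e-6)))
    return results  # each density is then compared with the density threshold
```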
It should be noted that the above embodiments only take people-flow density detection as an example and do not limit the specific scenarios to which the present application applies. The anomaly analysis based on density detection of the present application can also be applied to animals other than humans, or to objects without vital signs, without limitation; for example, groups of animals such as rodent colonies, ant colonies or flocks of birds can be detected.
And step S140, performing anomaly analysis on the target area in response to the target density being greater than a preset density threshold value to obtain an anomaly analysis result.
If the target density of one or more target areas is greater than the preset density threshold, this indicates that the excessive density of target objects in those areas is likely to cause, or is currently causing, an abnormal event, so anomaly analysis is performed on those areas; if the target density of one or more target areas is less than or equal to the density threshold, those target areas are considered normal. Anomaly analysis is thus performed on the regions of the image to be detected whose target density is greater than the density threshold, thereby obtaining the anomaly analysis result.
There may be one or more target areas (target connected domains) in an image to be detected, so anomaly analysis can be performed on several areas of the same image simultaneously; the preset density threshold may likewise be one or several, so all target areas may share one density threshold, or each target area may be assigned its own.
It can be seen that the present application obtains a target area containing a target object by performing feature detection processing on the acquired image to be detected; converts the image area of the target area in the image to be detected into the actual area of the target area in the real scene based on preset perspective transformation parameters; determines the target density of target objects in the target area from the number of target objects and the actual area; judges whether the target area is abnormal from the magnitude of the target density; and then performs anomaly analysis on target areas whose target density exceeds the density threshold to obtain the anomaly analysis result, improving both the accuracy and the efficiency of the anomaly analysis.
On the basis of the above embodiments, the present embodiment describes a step of determining the actual area of the target region based on the image area of the target region and the perspective transformation parameters set in advance. Specifically, the method of the embodiment comprises the following steps:
determining the image area of the target area based on the pixel proportion of the target area in the image to be detected; and carrying out a product operation based on the image area and the perspective transformation parameters to obtain the actual area of the target area, wherein the perspective transformation parameters are determined by the image pixel distances and the actual physical distances between pixel points at different positions in the acquired calibration image.
As described in connection with the foregoing embodiment, the image area of the target area corresponds to the number of pixels the target area occupies in the image to be detected. For example, if the pixel size of the image to be detected is 800×800 and the target area occupies 800 pixels, the image area of the target area is 1/800 of the image size. The size of the image to be detected should match the size of the calibration image, so that perspective transformation can be performed with the calibrated perspective transformation parameters.
For example, reference may be made to fig. 2, which is an exemplary perspective calibration schematic diagram of the anomaly analysis method based on density detection of the present application, depicting the process of calibrating perspective transformation parameters on a sample image (i.e., the calibration image). Specifically, three target objects 1/2/3 with known real physical sizes, located at different positions, are selected from the calibration image acquired at the image acquisition point to be calibrated; their respective calibration frames can be displayed with different representations. The center-point coordinates of the three calibration frames are (x1, y1), (x2, y2) and (x3, y3) respectively; the pixel sizes (i.e., image pixel distances) of the three calibration frames are (h1, w1), (h2, w2) and (h3, w3); and the real physical sizes (i.e., actual physical distances) of the three calibration frames are known to be (H1, W1), (H2, W2) and (H3, W3), in meters (m). Based on the perspective relation described in the foregoing embodiment and the above size information, k1, k2 and b1, b2 in the perspective transformation matrix can be calculated, specifically from:

Hi = (k1·yi + b1)·hi,  Wi = (k2·yi + b2)·wi,  i = 1, 2, 3.
Solving these equations yields the perspective transformation matrix for that image acquisition point. When an image to be detected is acquired at this image acquisition point, perspective transformation processing can be performed on it based on this perspective transformation matrix, for example to perform perspective transformation on a target area.
On the basis of the above embodiments, this embodiment describes the steps performed before the product operation on the image area and the perspective transformation parameters (which include a perspective transformation matrix) to obtain the actual area of the target area. Specifically, the method of this embodiment includes the following steps:
acquiring pixel size data of a plurality of calibration areas in a calibration image, wherein the calibration areas comprise calibration points, and the calibration points have corresponding pixel coordinate data; and performing matrix operation based on the obtained actual size data, pixel size data and pixel coordinate data of each calibration area to obtain a perspective transformation matrix.
In combination with the above steps, the perspective transformation parameter may be a perspective transformation matrix, and performing the product operation on data in the image dimension with the perspective transformation matrix yields the corresponding data in the real dimension. Before this, a calibration image is selected for calibrating each parameter of the perspective transformation matrix; several calibration areas (such as 3 rectangular calibration frames) of known actual size, together with the pixel coordinate data of the calibration points (such as center points) of those areas, are determined in the calibration image. Based on the perspective relation described in the foregoing embodiments:
using the pixel coordinate data (x1, y1), (x2, y2), (x3, y3) of the three calibration points, the pixel size data (h1, w1), (h2, w2), (h3, w3) of the calibration areas in the image dimension, and the actual size data (H1, W1), (H2, W2), (H3, W3) of the calibration areas in the real dimension, k1, k2, b1 and b2 are calculated, which yields the perspective transformation matrix.
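Under the linear relation above, the four parameters can be estimated from the three calibration frames by least squares; the sketch below fits k1, b1 from the height ratios and k2, b2 from the width ratios (the function name and the least-squares choice are illustrative assumptions):

```python
import numpy as np

def calibrate(centers, pixel_sizes, real_sizes):
    """Fit H_i/h_i = k1*y_i + b1 and W_i/w_i = k2*y_i + b2.

    centers     -- [(x1, y1), (x2, y2), (x3, y3)] calibration-point coords
    pixel_sizes -- [(h1, w1), ...] calibration-frame sizes in pixels
    real_sizes  -- [(H1, W1), ...] calibration-frame sizes in metres
    """
    ys = np.array([y for _, y in centers], dtype=float)
    A = np.stack([ys, np.ones_like(ys)], axis=1)   # rows of [y_i, 1]
    h_ratio = np.array([H / h for (h, _), (H, _) in zip(pixel_sizes, real_sizes)])
    w_ratio = np.array([W / w for (_, w), (_, W) in zip(pixel_sizes, real_sizes)])
    sol_h, *_ = np.linalg.lstsq(A, h_ratio, rcond=None)
    sol_w, *_ = np.linalg.lstsq(A, w_ratio, rcond=None)
    k1, b1 = sol_h
    k2, b2 = sol_w
    return k1, b1, k2, b2
```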
On the basis of the above embodiment, the steps of performing feature detection processing on the acquired image to be detected to obtain the target area in the image to be detected will be described in the embodiment of the present application. Specifically, the method of the embodiment comprises the following steps:
inputting the image to be detected into a pre-trained segmentation model to obtain the image characteristics of the image to be detected output by the segmentation model; a target region in the image to be detected and a target object in the target region are determined based on the image features.
In the foregoing description, the feature detection processing of the present application may include the step of inputting the image to be detected into a pre-trained segmentation model for feature extraction. Because the perspective relation is taken into account when the segmentation model is trained, a certain perspective transformation is applied to the image features. For example, when crowd density is to be detected, this can be realized by detecting head regions: the actual height of an adult head region is about 0.25 m and its width about 0.23 m, and given the center-point coordinates (x, y) of a head region's calibration frame, the pixel size (h, w) of the rectangular calibration frame in the image to be detected can be calculated by combining the perspective relation. Taking (h, w) as the Gaussian kernel size and a standard deviation derived from it, an impulse function is convolved with the Gaussian kernel to define the density map used for model training. The generated density map is used as the ground truth of the training samples, and the resulting segmentation model can be used to acquire the head regions (i.e., target regions) in the image to be detected, where each head region represents one target object.
Based on the above embodiments, the embodiments of the present application describe a step of performing an anomaly analysis on a target area in response to the target density being greater than a preset density threshold value, to obtain an anomaly analysis result. Specifically, the method of the embodiment comprises the following steps:
determining that a congestion event occurs in the target area in response to the target density of the target area being greater than a density threshold; and respectively carrying out static anomaly analysis and dynamic anomaly analysis on the target area with the congestion event to obtain an anomaly analysis result.
In connection with the foregoing embodiment, if the density of target objects in a target area is greater than the corresponding density threshold, it can be determined that a congestion event has occurred in that area, which means the area may become abnormal and dangerous. The target area where the congestion event occurs can first be reported and handled; static anomaly analysis and dynamic anomaly analysis are then performed on it to obtain the anomaly analysis result for that area.
Here, static anomaly analysis refers to detecting certain abnormal objects in the target area where the congestion event occurs, while dynamic anomaly analysis refers to detecting abnormal behaviors of the target objects in that area. Combining static and dynamic analysis allows abnormal events in the target area to be identified more accurately.
Before static anomaly analysis and dynamic anomaly analysis are performed on a target area where a congestion event occurs, the maximum circumscribed rectangle of the target area may be computed, and this circumscribed rectangular region is taken as the region to be processed and input, respectively, into the abnormal-target detection model for static anomaly analysis and the behavior detection model for dynamic anomaly analysis.
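One straightforward way to realize this cropping step is sketched below (OpenCV-based; the mask input and helper name are assumptions for illustration):

```python
import cv2
import numpy as np

def crop_max_rect(frame, region_mask):
    """Crop the maximum circumscribed rectangle of a target connected domain
    so the patch can be fed to the static and dynamic anomaly models."""
    ys, xs = np.nonzero(region_mask)                 # pixels of the domain
    pts = np.stack([xs, ys], axis=1).astype(np.int32)
    x, y, w, h = cv2.boundingRect(pts)               # circumscribed rectangle
    return frame[y:y + h, x:x + w]
```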
Based on the above embodiments, the steps of performing static anomaly analysis and dynamic anomaly analysis on the target area where the congestion event occurs to obtain the anomaly analysis result will be described in the embodiments of the present application. Specifically, the method of the embodiment comprises the following steps:
inputting the target area into a pre-trained abnormal target detection model for static abnormality analysis to obtain a static detection result output by the abnormal target detection model; inputting the target area into a pre-trained behavior detection model for dynamic anomaly analysis to obtain a dynamic detection result output by the behavior detection model; and determining an abnormal analysis result based on the static detection result and the dynamic detection result.
In the description of the foregoing embodiments, for static anomaly analysis and dynamic anomaly analysis, corresponding detection models are required to detect the target areas where congestion events occur, respectively.
Specifically, referring to fig. 3, a schematic diagram of an exemplary anomaly analysis flow in the anomaly analysis method based on density detection of the present application, the same target area is input, respectively, into a pre-trained abnormal-target detection model for static anomaly analysis and a pre-trained behavior detection model for dynamic anomaly analysis, yielding the static detection result output by the abnormal-target detection model and the dynamic detection result output by the behavior detection model. The static detection result indicates whether an abnormal object (such as a banner or billboard) exists in the target area where the congestion event occurs; if the abnormal-target detection model finds such an object, the target area is judged abnormal. The dynamic detection result indicates whether a target object with abnormal behavior exists in that target area (for crowd detection, behaviors may include, but are not limited to, normal walking, lying down, crawling and running); if the behavior detection model determines that a target object in the area exhibits abnormal behavior, the target area is judged abnormal.
Alternatively, static and dynamic anomaly detection may both operate on the image sequence following the congestion event; for example, the target area in consecutive frames of images to be detected is input into each detection model for anomaly detection. The static anomaly detection model can then also judge, from the abnormal objects detected in two or more adjacent frames, whether an abnormal object in motion exists, and only judge the target area abnormal once a moving abnormal object is confirmed, so that a normally hung banner is not identified as abnormal.
In summary, if either the static detection result or the dynamic detection result is abnormal, the anomaly analysis result of the target area can be determined to be abnormal.
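The merge itself is a simple disjunction; a sketch (names illustrative):

```python
def merge_results(region_id, static_abnormal, dynamic_abnormal):
    """An abnormal event is reported if the static branch found a preset
    abnormal object and/or the dynamic branch found abnormal behavior."""
    abnormal = static_abnormal or dynamic_abnormal
    branches = [name for name, hit in (("static", static_abnormal),
                                       ("dynamic", dynamic_abnormal)) if hit]
    return {"region": region_id,
            "event": "abnormal" if abnormal else "normal",
            "flagged_by": branches}
```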
In addition, if objects such as banners and billboards are to be detected, training images may be collected and the rectangular frames of the banners and billboards in the training images annotated when training the abnormal-target detection model, and frameworks such as YOLO, CenterNet or Darknet may be used to train it.
On the basis of the above embodiments, the present embodiment describes a step of determining an abnormality analysis result based on a static detection result and a dynamic detection result. Specifically, the method of the embodiment comprises the following steps:
If the static detection result indicates that a preset abnormal object exists in the target area, and/or the dynamic detection result indicates that a target object in the target area exhibits preset abnormal behavior, an anomaly analysis result is obtained indicating that an abnormal event has occurred in the target area; the target area where the congestion event occurred and the abnormal event are then reported.
In the foregoing description, if an abnormal object (such as a banner or billboard) is detected in the target area and/or abnormal behavior (such as lying down, crawling or running) is detected there, an anomaly analysis result indicating that an abnormal event has occurred in the target area is obtained, and the target area where the congestion event occurred and the abnormal event are reported. If the static detection result indicates that no abnormal object exists in the target area and the dynamic detection result indicates that no target object in the area exhibits abnormal behavior, an anomaly analysis result indicating that no abnormal event has occurred in the target area is obtained.
Based on the above embodiments, the steps of performing static anomaly analysis and dynamic anomaly analysis on the target area where the congestion event occurs to obtain the anomaly analysis result will be described in the embodiments of the present application. Specifically, the method of the embodiment comprises the following steps:
Carrying out feature decomposition processing on the image features of the target area in each image to be detected, which are acquired in a preset period, so as to obtain branch features with multiple scales; weighting and fusing all the branch characteristics based on the fusion weights corresponding to all the branch characteristics to obtain fusion characteristics; and determining a dynamic detection result of the dynamic abnormality analysis based on the fusion characteristic.
In dynamically analyzing the target area where a congestion event occurs, as in the foregoing embodiment, dynamic anomaly analysis is performed on the target area across the multiple images to be detected of an image sequence; that is, dynamic anomaly analysis is performed on the target areas of the multiple images to be detected acquired within a preset period.
It should be noted that training the behavior detection model also uses image sequences over a period of time as training sample data; for crowd detection, the sample images in each sequence need to cover various behaviors such as normal walking, lying down, crawling and running. In addition, for each frame of sample image, the features after 2D convolution are decomposed into three branch features (xy, xt and yt), where x and y refer to the pixel coordinate axes of the image and t refers to the temporal information carried by the image sequence, characterizing how features vary between images acquired at different times. Then, referring to fig. 4, a schematic diagram of branch feature fusion in the anomaly analysis method based on density detection of the present application (the attention feature fusion modules shown in fig. 4 may be the same module), the decomposed branch features are input into the attention feature fusion module of the behavior detection model for weighted fusion, finally yielding the fusion feature; the attention feature fusion module determines the weight of each branch feature with the configured attentional feature fusion method, and the category of the target object's current behavior is judged from the fusion feature.
Specifically, for the attention feature fusion module, reference may be made to fig. 5, a schematic diagram of the attention feature fusion process of the behavior detection model in the anomaly analysis method based on density detection of the present application. Two branch features F1 and F2 to be fused (such as the branch features xt and yt) are input into the attention feature fusion module, which adds the two branch features and feeds the sum into a multi-scale channel attention module; the weights of the two branch features are determined with the attentional feature fusion method, and the two branch features are then weighted and fused by those weights, giving the initial fusion feature of the pair. Referring to figs. 4 and 5, the initial fusion feature and the remaining branch feature are then input as the features to be fused into the attention feature fusion module for a second weighted fusion, yielding the final fusion feature.
Further, the processing of the attention feature fusion module may be as shown in fig. 6, a schematic diagram of the attention feature fusion module in the anomaly analysis method based on density detection of the present application. After a branch feature is input into the attention feature fusion module, the module performs pooling and convolution using a 1D temporal pooling layer (such as GlobalAvgPooling, global average pooling); in the convolution stage, a point-wise convolution followed by a ReLU activation and a point-wise convolution (PWConv) module can be configured, and the weight of the branch is determined with the attentional feature fusion method, where a value between 0 and 1 computed by a Sigmoid function serves as the weight.
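A minimal PyTorch sketch of such a fusion module, assuming an AFF-style design consistent with figs. 5 and 6 (sum the two branches, squeeze the temporal axis with global average pooling, two point-wise convolutions with a ReLU between them, a Sigmoid weight, then a convex combination); only the pooled global branch of the multi-scale channel attention is sketched, and the channel count and reduction ratio are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AttentionFeatureFusion(nn.Module):
    """Fuse two branch features of shape (B, C, T) with a learned weight."""

    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),                # 1D temporal global avg pool
            nn.Conv1d(channels, channels // r, 1),  # point-wise conv
            nn.ReLU(inplace=True),                  # ReLU activation
            nn.Conv1d(channels // r, channels, 1),  # point-wise conv (PWConv)
            nn.Sigmoid(),                           # weight in (0, 1)
        )

    def forward(self, f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
        w = self.attn(f1 + f2)          # additive merge, then channel attention
        return w * f1 + (1.0 - w) * f2  # weighted fusion of the two branches

# Pairwise use matches fig. 4: fuse the xt and yt branches first, then fuse
# the initial fusion feature with the remaining xy branch.
```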
Thus, after the target area in each image to be detected is input into the trained behavior detection model, the model likewise performs feature decomposition on the image features of the target area to obtain branch features of multiple scales; determines the fusion weight of each branch feature with the attentional fusion method and weights and fuses the branch features accordingly to obtain the fusion feature; and determines the dynamic detection result of the dynamic anomaly analysis from the fusion feature. Specifically, whether a target object in the target area exhibits abnormal behavior is judged from the fusion feature: if so, the dynamic detection result is abnormal; if not, the dynamic detection result is normal.
According to the above method, by calibrating the perspective relation between pixel distances in the image and actual physical distances, a target area that accords with the real scene can be obtained, and the Gaussian distribution of target objects in the target area can be computed adaptively, which guarantees the accuracy of the training data and optimizes the data processing of the segmentation model. Computing the target density only within the target connected domains that contain target objects removes the influence of other regions on the detection result, ensuring the accuracy of the density detection result and improving the accuracy of the subsequent anomaly analysis. For high-density regions, the image features of each frame in the image sequence to be detected are decomposed after 2D convolution into three branch features, and an attentional feature fusion mechanism learns the weight of each branch feature, so that the model learns the dynamic characteristics of high-density regions and the behavior detection model can be trained; a preset abnormal object is detected in combination with the abnormal-target detection model, and the category of the abnormal event is determined. The behavior detection model can analyze a high-density connected domain (i.e., an aggregation area of target objects) and attend to the overall motion trend of the target objects without extracting the motion trajectory of any single target object, thereby avoiding the detection failures and high tracking difficulty of single targets in scenes where targets are dense and heavily occluded.
It should further be noted that the execution subject of the anomaly analysis method based on density detection may be an anomaly analysis device based on density detection; for example, the method may be executed by a terminal device, a server, or another processing device, where the terminal device may be user equipment (UE), a computer, a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the anomaly analysis method based on density detection may be implemented by a processor invoking computer-readable instructions stored in a memory.
Fig. 7 is a block diagram of an anomaly analysis device based on density detection, as shown in an exemplary embodiment of the present application. As shown in fig. 7, the exemplary anomaly analysis device 700 based on density detection includes: a detection module 710, an area determination module 720, a density determination module 730, and an anomaly analysis module 740. Specifically:
the detection module 710 is configured to perform feature detection processing on the acquired image to be detected to obtain a target area in the image to be detected, where the target area contains a target object.
The area determination module 720 is configured to determine the actual area of the target area based on the image area of the target area and a preset perspective transformation parameter, where the perspective transformation parameter is used to characterize the perspective relationship between the image area and the actual area of the same target area.
The density determination module 730 is configured to determine the target density of the target object in the target area based on the actual area and the number of target objects.
The anomaly analysis module 740 is configured to perform anomaly analysis on the target area in response to the target density being greater than a preset density threshold, to obtain an anomaly analysis result.
In the exemplary density detection-based anomaly analysis device, a target area containing a target object is obtained by performing feature detection processing on the acquired image to be detected; the image area of the target area in the image to be detected is converted into the actual area of the target area in the real scene based on the preset perspective transformation parameter; the target density of the target objects in the target area is determined based on the number of target objects in the target area and the actual area of the target area; and whether the target area is abnormal is judged from the magnitude of the target density, with anomaly analysis then performed on target areas whose target density exceeds the density threshold to obtain an anomaly analysis result, thereby improving both the accuracy and the efficiency of the anomaly analysis.
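For concreteness, a minimal sketch of how the four modules could be wired together is given below (Python). The names `detector`, `perspective_scale`, and `anomaly_analyzer`, as well as the threshold value, are illustrative assumptions; in particular, the application describes a perspective transformation parameter characterizing a perspective relationship, which this sketch simplifies to a single scalar scale.

```python
import numpy as np

DENSITY_THRESHOLD = 4.0  # illustrative value, e.g. persons per square metre

def analyze_frame(frame, detector, perspective_scale, anomaly_analyzer):
    """Minimal sketch of the four-module pipeline described above."""
    results = []
    for mask, count in detector(frame):               # detection module 710
        image_area = float(np.count_nonzero(mask))    # region size in pixels
        actual_area = image_area * perspective_scale  # area determination module 720
        density = count / max(actual_area, 1e-6)      # density determination module 730
        if density > DENSITY_THRESHOLD:               # anomaly analysis module 740
            results.append(anomaly_analyzer(frame, mask))
    return results
```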
For the functions of the individual modules, reference may be made to the embodiments of the density detection-based anomaly analysis method, which are not repeated here.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an embodiment of an electronic device of the present application. The electronic device 800 includes a memory 801 and a processor 802, where the processor 802 is configured to execute program instructions stored in the memory 801 to implement the steps of any of the above embodiments of the density detection-based anomaly analysis method. In one specific implementation scenario, the electronic device 800 may include, but is not limited to, mobile devices such as a notebook computer or a tablet computer, and no limitation is imposed here.
Specifically, the processor 802 is configured to control itself and the memory 801 to implement the steps of any of the above embodiments of the density detection-based anomaly analysis method. The processor 802 may also be referred to as a CPU (Central Processing Unit) and may be an integrated circuit chip with signal processing capability. The processor 802 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 802 may be jointly implemented by multiple integrated circuit chips.
With the electronic device provided by this solution, feature detection processing can be performed on the acquired image to be detected to obtain a target area containing a target object in the image to be detected; the image area of the target area in the image to be detected is converted into the actual area of the target area in the real scene based on the preset perspective transformation parameter; the target density of the target objects in the target area is determined based on the number of target objects in the target area and the actual area of the target area; and whether the target area is abnormal is judged from the magnitude of the target density, with anomaly analysis then performed on target areas whose target density exceeds the density threshold to obtain an anomaly analysis result, thereby improving both the accuracy and the efficiency of the anomaly analysis.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of a computer-readable storage medium of the present application. The computer-readable storage medium 910 stores program instructions 911 executable by a processor, the program instructions 911 being used to implement the steps of any of the above embodiments of the density detection-based anomaly analysis method.
The storage medium provided by this solution can store the program instructions of the above method. When the program instructions are executed, feature detection processing is performed on the acquired image to be detected to obtain a target area containing a target object in the image to be detected; the image area of the target area in the image to be detected is converted into the actual area of the target area in the real scene based on the preset perspective transformation parameter; the target density of the target objects in the target area is determined based on the number of target objects in the target area and the actual area of the target area; and whether the target area is abnormal is judged from the magnitude of the target density, with anomaly analysis then performed on target areas whose target density exceeds the density threshold to obtain an anomaly analysis result, thereby improving both the accuracy and the efficiency of the anomaly analysis.
In some embodiments, the functions of, or the modules included in, the apparatus provided by the embodiments of the present disclosure may be used to perform the methods described in the foregoing method embodiments; for specific implementations, reference may be made to the descriptions of those embodiments, which are not repeated here for brevity.
The foregoing description of the various embodiments tends to emphasize the differences between them; for what is the same or similar, the embodiments may be referred to one another, and the details are not repeated here for brevity.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of modules or units is merely a logical functional division, and there may be other division manners in actual implementation; for example, units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (10)

1. An anomaly analysis method based on density detection, the method comprising:
performing feature detection processing on the acquired image to be detected to obtain a target area in the image to be detected, wherein the target area contains a target object;
determining the actual area of the target area based on the image area of the target area and a preset perspective transformation parameter, wherein the perspective transformation parameter is used for representing the perspective relation between the image area and the actual area of the same target area;
determining a target density of the target object in the target area based on the actual area and the number of target objects;
and in response to the target density being greater than a preset density threshold, performing anomaly analysis on the target area to obtain an anomaly analysis result.
2. The method of claim 1, wherein the step of determining the actual area of the target area based on the image area of the target area and a preset perspective transformation parameter comprises:
determining the image area of the target area based on the pixel proportion of the target area in the image to be detected;
and carrying out a product operation based on the image area and the perspective transformation parameter to obtain the actual area of the target area, wherein the perspective transformation parameter is determined from the image pixel distances and the actual physical distances between pixel points at different positions in an acquired calibration image.
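A sketch of one way the product operation of claim 2 could use a position-dependent factor derived from the perspective transformation matrix of claim 3 (Python with OpenCV; the function name and the step size `eps` are assumptions): a small pixel square is warped through the matrix, and the ratio of the warped area to the original area gives the local pixel-to-ground scale.

```python
import cv2
import numpy as np

def local_area_scale(H, x, y, eps=1.0):
    """Estimate the pixel-to-ground area ratio at image point (x, y).

    A square of side `eps` pixels is pushed through the 3x3 perspective
    transformation matrix H; the warped quadrilateral's area divided by
    eps^2 approximates the factor by which an image area at (x, y) is
    multiplied to obtain an actual area.
    """
    square = np.float32([[x, y], [x + eps, y], [x + eps, y + eps], [x, y + eps]])
    warped = cv2.perspectiveTransform(square.reshape(-1, 1, 2), H).reshape(-1, 2)
    xs, ys = warped[:, 0], warped[:, 1]
    # shoelace formula for the area of the warped quadrilateral
    area = 0.5 * abs(np.dot(xs, np.roll(ys, -1)) - np.dot(ys, np.roll(xs, -1)))
    return area / (eps * eps)
```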
3. The method of claim 2, wherein the perspective transformation parameters comprise a perspective transformation matrix, and wherein, prior to the step of carrying out the product operation based on the image area and the perspective transformation parameters to obtain the actual area of the target area, the method further comprises:
acquiring pixel size data of a plurality of calibration areas in the calibration image, wherein the calibration areas comprise calibration points, and the calibration points have corresponding pixel coordinate data;
and performing a matrix operation based on the obtained actual size data of each calibration area, the pixel size data, and the pixel coordinate data to obtain the perspective transformation matrix.
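A sketch of the matrix operation of claim 3, assuming four calibration points whose ground-plane positions were measured; every coordinate value here is made up for illustration.

```python
import cv2
import numpy as np

# Pixel coordinates of four calibration points in the calibration image,
# and their measured ground-plane positions in metres (illustrative values).
pixel_pts  = np.float32([[420, 710], [880, 705], [955, 380], [360, 385]])
ground_pts = np.float32([[0.0, 0.0], [5.0, 0.0], [5.0, 10.0], [0.0, 10.0]])

# Perspective transformation matrix mapping image pixels to ground metres;
# with more than four point pairs, cv2.findHomography would give a
# least-squares estimate instead.
H = cv2.getPerspectiveTransform(pixel_pts, ground_pts)
```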
4. The method according to claim 1, wherein the step of performing feature detection processing on the acquired image to be detected to obtain the target area in the image to be detected includes:
inputting the image to be detected into a pre-trained segmentation model to obtain the image features of the image to be detected output by the segmentation model;
and determining a target area in the image to be detected and a target object in the target area based on the image features.
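A sketch of the region extraction of claim 4, assuming the segmentation model yields a binary foreground mask from which each sufficiently large connected component is taken as one target area; `min_pixels` is an illustrative threshold.

```python
import cv2
import numpy as np

def extract_target_areas(seg_mask, min_pixels=500):
    """Split a binary foreground mask into per-region target areas."""
    num, labels = cv2.connectedComponents(seg_mask.astype(np.uint8))
    areas = []
    for label in range(1, num):            # label 0 is the background
        region = (labels == label)
        if region.sum() >= min_pixels:     # drop spurious small components
            areas.append(region)
    return areas
```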
5. The method of claim 1, wherein the step of performing anomaly analysis on the target area in response to the target density being greater than a preset density threshold to obtain an anomaly analysis result comprises:
determining that a congestion event occurs in the target area in response to the target density of the target area being greater than the density threshold;
and respectively carrying out static anomaly analysis and dynamic anomaly analysis on the target area where the congestion event occurs to obtain the anomaly analysis result.
6. The method according to claim 5, wherein the step of performing static anomaly analysis and dynamic anomaly analysis on the target area where the congestion event occurs to obtain the anomaly analysis result includes:
inputting the target area into a pre-trained abnormal target detection model for static anomaly analysis to obtain a static detection result output by the abnormal target detection model;
inputting the target area into a pre-trained behavior detection model for dynamic anomaly analysis to obtain a dynamic detection result output by the behavior detection model;
and determining the abnormal analysis result based on the static detection result and the dynamic detection result.
7. The method of claim 6, wherein the step of determining the anomaly analysis result based on the static detection result and the dynamic detection result comprises:
obtaining the anomaly analysis result if the static detection result indicates that a preset abnormal object exists in the target area and/or the dynamic detection result indicates that the target object in the target area exhibits a preset abnormal behavior, the anomaly analysis result indicating that an abnormal event occurs in the target area;
and reporting the target area where the congestion event occurs and the abnormal event.
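A minimal sketch of the and/or decision rule of this claim; the field names `abnormal_object` and `abnormal_behavior` are assumptions, not terms from the application.

```python
def combine_results(static_result, dynamic_result):
    """An abnormal event is reported if either analysis branch fires."""
    abnormal = static_result.abnormal_object or dynamic_result.abnormal_behavior
    return {"abnormal_event": abnormal,
            "static": static_result,
            "dynamic": dynamic_result}
```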
8. The method according to claim 5, wherein the step of performing static anomaly analysis and dynamic anomaly analysis on the target area where the congestion event occurs to obtain the anomaly analysis result includes:
carrying out feature decomposition processing on the image features of the target area in each image to be detected acquired within a preset period, so as to obtain branch features of multiple scales;
weighting and fusing the branch features based on the fusion weight corresponding to each branch feature to obtain a fused feature;
and determining a dynamic detection result of the dynamic anomaly analysis based on the fused feature.
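A PyTorch sketch of the branch decomposition and weighted fusion of claim 8, consistent with the three-branch attention feature fusion described earlier; the pooling scales, convolution layers, and softmax weighting are assumptions, not details taken from this application.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchAttentionFusion(nn.Module):
    """Decompose a feature map into multi-scale branches and fuse them
    with learned attention weights (illustrative layer choices)."""

    def __init__(self, channels, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            for _ in scales)
        # one attention logit per branch, predicted from global context
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, len(scales), kernel_size=1))

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = []
        for scale, conv in zip(self.scales, self.branches):
            y = F.avg_pool2d(x, scale) if scale > 1 else x
            y = conv(y)
            if scale > 1:
                y = F.interpolate(y, size=(h, w), mode="bilinear",
                                  align_corners=False)
            feats.append(y)
        weights = torch.softmax(self.attn(x), dim=1)   # (B, num_branches, 1, 1)
        fused = sum(weights[:, i:i + 1] * f for i, f in enumerate(feats))
        return fused
```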
9. An electronic device comprising a memory and a processor for executing program instructions stored in the memory to implement the method of any one of claims 1 to 8.
10. A computer readable storage medium having stored thereon program instructions, which when executed by a processor, implement the method of any of claims 1 to 8.
CN202410062419.3A 2024-01-16 2024-01-16 Anomaly analysis method, device and storage medium based on density detection Active CN117576634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410062419.3A CN117576634B (en) 2024-01-16 2024-01-16 Anomaly analysis method, device and storage medium based on density detection


Publications (2)

Publication Number Publication Date
CN117576634A true CN117576634A (en) 2024-02-20
CN117576634B CN117576634B (en) 2024-05-28

Family

ID=89886582


Country Status (1)

Country Link
CN (1) CN117576634B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106156706A (en) * 2015-04-07 2016-11-23 中国科学院深圳先进技术研究院 Pedestrian's anomaly detection method
CN110866453A (en) * 2019-10-22 2020-03-06 同济大学 Real-time crowd stable state identification method and device based on convolutional neural network
CN114429596A (en) * 2020-10-29 2022-05-03 航天信息股份有限公司 Traffic statistical method and device, electronic equipment and storage medium
WO2022088581A1 (en) * 2020-10-30 2022-05-05 上海商汤智能科技有限公司 Training method for image detection model, related apparatus, device, and storage medium
CN113537172A (en) * 2021-09-16 2021-10-22 长沙海信智能系统研究院有限公司 Crowd density determination method, device, equipment and storage medium
CN114066842A (en) * 2021-11-12 2022-02-18 浙江托普云农科技股份有限公司 Method, system and device for counting number of ears and storage medium
CN114812418A (en) * 2022-04-25 2022-07-29 安徽农业大学 Portable plant density and plant spacing measurement system
CN115082820A (en) * 2022-05-13 2022-09-20 北京无线电计量测试研究所 Method and device for detecting global abnormal behaviors of group in surveillance video
CN115273464A (en) * 2022-07-05 2022-11-01 湖北工业大学 Traffic flow prediction method based on improved space-time Transformer
CN116311219A (en) * 2023-01-29 2023-06-23 广州市玄武无线科技股份有限公司 Ground pile article occupied area calculation method and device based on perspective transformation
CN117011774A (en) * 2023-04-19 2023-11-07 中国人民公安大学 Cluster behavior understanding system based on scene knowledge element
CN116740617A (en) * 2023-08-01 2023-09-12 北京全路通信信号研究设计院集团有限公司 Data processing method, device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANG K: "Real-time visual tracking based on dual attention Siamese network", Journal of Computer Applications, 31 December 2019 (2019-12-31) *
SHEN Wenxiang; QIN Pinle; ZENG Jianchao: "Indoor crowd detection network based on multi-level features and hybrid attention mechanism", Journal of Computer Applications, no. 12, 15 October 2019 (2019-10-15) *

Also Published As

Publication number Publication date
CN117576634B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN111160379B (en) Training method and device of image detection model, and target detection method and device
CN110176027B (en) Video target tracking method, device, equipment and storage medium
CN107358149B (en) Human body posture detection method and device
CN110598558B (en) Crowd density estimation method, device, electronic equipment and medium
CN108764085B (en) Crowd counting method based on generation of confrontation network
CN114022830A (en) Target determination method and target determination device
CN111104925B (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN107563299B (en) Pedestrian detection method using RecNN to fuse context information
CN111612822B (en) Object tracking method, device, computer equipment and storage medium
CN113052147B (en) Behavior recognition method and device
CN111582032A (en) Pedestrian detection method and device, terminal equipment and storage medium
CN112115803B (en) Mask state reminding method and device and mobile terminal
CN111652181B (en) Target tracking method and device and electronic equipment
CN110795975A (en) Face false detection optimization method and device
CN114155278A (en) Target tracking and related model training method, related device, equipment and medium
CN113920585A (en) Behavior recognition method and device, equipment and storage medium
CN113505643A (en) Violation target detection method and related device
CN117576634B (en) Anomaly analysis method, device and storage medium based on density detection
JP2024516642A (en) Behavior detection method, electronic device and computer-readable storage medium
CN111027560B (en) Text detection method and related device
CN111860261A (en) Passenger flow value statistical method, device, equipment and medium
CN112560853A (en) Image processing method, device and storage medium
CN115631477A (en) Target identification method and terminal
CN107563284B (en) Pedestrian tracking method and device
CN117409372A (en) Dense crowd counting method and device based on global and local density fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant