CN117274910A - Personnel retention monitoring method and device, electronic equipment and storage medium

Personnel retention monitoring method and device, electronic equipment and storage medium

Info

Publication number
CN117274910A
Authority
CN
China
Prior art keywords
images
fused
frame
monitoring
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311333256.XA
Other languages
Chinese (zh)
Inventor
田鹏飞 (Tian Pengfei)
石小华 (Shi Xiaohua)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Yuntian Lifei Technology Co ltd
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Qingdao Yuntian Lifei Technology Co ltd
Shenzhen Intellifusion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Yuntian Lifei Technology Co ltd, Shenzhen Intellifusion Technologies Co Ltd filed Critical Qingdao Yuntian Lifei Technology Co ltd
Priority to CN202311333256.XA priority Critical patent/CN117274910A/en
Publication of CN117274910A publication Critical patent/CN117274910A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention provides a personnel retention monitoring method comprising the following steps: acquiring a current monitoring frame of a monitoring area; performing coarse de-duplication on a plurality of captured images based on image similarity to obtain a plurality of images to be fused; calculating the transparency of the overlapping areas between the images to be fused and fusing the images based on that transparency to obtain a fused image; detecting the number of people in the fused image; determining the current monitoring frame as the current effective frame if the number of people in the fused image meets the preset number of people; determining whether the continuous effective frames satisfy a preset condition; and, if they do, determining that personnel retention exists in the monitoring area. By this method, personnel retention in large public areas is monitored, and the number of people in a flowing state can be counted and analyzed more accurately, improving both the accuracy and the efficiency of the personnel retention monitoring method.

Description

Personnel retention monitoring method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of urban management, and in particular, to a method and apparatus for monitoring personnel retention, an electronic device, and a storage medium.
Background
Conventional personnel retention monitoring is performed by deploying a plurality of monitoring devices in a particular area and capturing images of the area with those devices. Because of how the monitoring devices are positioned, or because of the image processing methods used, the captured images may overlap or be blurred, which prevents accurate statistics on the number of people in the area. Existing personnel retention monitoring methods may therefore acquire overlapping and blurred images and thus cannot accurately count the number of people in a specific area.
Disclosure of Invention
The embodiment of the invention provides a personnel retention monitoring method, which aims to solve the problem that existing methods cannot accurately count the number of people in a specific area with the deployed monitoring equipment. The method performs coarse de-duplication on a plurality of images captured in the specific area to obtain a plurality of images to be fused, fuses those images according to the transparency of the overlapping areas between them, and then, after counting the people in the fused images and confirming continuous effective frames, determines whether personnel retention exists in the specific area. Because the retention condition is determined from continuous effective frames, the number of people in a flowing state can be counted and analyzed more accurately, improving both the accuracy and the efficiency of the personnel retention monitoring method.
In a first aspect, an embodiment of the present invention provides a method for monitoring personnel retention, including the steps of:
acquiring a current monitoring frame of a monitoring area, wherein the current monitoring frame comprises a plurality of shooting images shot by a plurality of monitoring devices at the current moment, and the monitoring area is monitored by the plurality of monitoring devices;
performing coarse de-duplication processing on a plurality of shot images based on image similarity to obtain a plurality of images to be fused, wherein the number of the images to be fused is smaller than or equal to that of the shot images;
calculating the transparency of an overlapping area between the images to be fused, and carrying out image fusion on the images to be fused based on the transparency of the overlapping area to obtain a fused image;
detecting the number of people in the fusion image to obtain the number of people in the fusion image;
if the number of people in the fusion image meets the preset number of people, determining the current monitoring frame as a current effective frame;
determining whether a continuous effective frame meets a preset condition, wherein the continuous effective frame comprises a current effective frame and a historical effective frame, and two adjacent effective frames in the continuous effective frame have a preset frame extraction interval;
and if the continuous effective frames meet the preset condition, determining that personnel retention exists in the monitoring area.
Optionally, the acquiring the current monitoring frame of the monitoring area includes:
acquiring the area of the monitoring area and the personnel flow speed;
determining a preset frame extraction interval based on the area of the monitoring area and the personnel flow speed;
and determining the current monitoring frame of the monitoring area based on the preset frame extraction interval.
Optionally, the performing coarse deduplication processing on the plurality of captured images based on the image similarity to obtain a plurality of images to be fused includes:
carrying out hash calculation on a plurality of shooting images to obtain hash codes of each shooting image;
based on the hash codes of the photographed images, obtaining hamming distances between the photographed images;
comparing the Hamming distance between the shooting images with a preset Hamming distance threshold, determining shooting images needing to be subjected to de-duplication from the shooting images with Hamming distances larger than the preset Hamming distance threshold, and eliminating the shooting images needing to be subjected to de-duplication to obtain a plurality of images to be fused.
Optionally, the calculating the transparency of the overlapping area between the images to be fused, and performing image fusion on the images to be fused based on the transparency of the overlapping area, to obtain a fused image, includes:
extracting characteristic values of the overlapping areas between the images to be fused;
calculating the similarity of the overlapping areas between the images to be fused based on the characteristic values of the overlapping areas between the images to be fused;
determining the transparency of the overlapping area between the images to be fused based on the similarity of the overlapping area between the images to be fused;
and fusing the images to be fused based on the transparency of the overlapping area between the images to be fused, so as to obtain a fused image.
Optionally, the determining the transparency of the overlapping area between the images to be fused based on the similarity of the overlapping area between the images to be fused includes:
determining the heat map distribution of the overlapping area between the images to be fused based on the similarity of the overlapping area between the images to be fused;
and determining the transparency of the overlapping area between the images to be fused based on the heat map distribution of the overlapping area between the images to be fused.
Optionally, the determining the transparency of the overlapping area between the images to be fused based on the heat map distribution of the overlapping area between the images to be fused includes:
determining the heat map proportion of the overlapping area between the images to be fused based on the heat map distribution of the overlapping area between the images to be fused;
and determining the transparency of the overlapping area between the images to be fused based on the heat map proportion of the overlapping area between the images to be fused.
Optionally, the determining that personnel retention exists in the monitoring area if the continuous effective frames meet the preset condition includes:
acquiring a historical effective frame before a current effective frame, wherein a preset frame extraction interval is arranged between the last frame of the historical effective frame and the current effective frame;
determining a continuous effective frame based on the historical effective frame and the current effective frame;
calculating the number of frames of the continuous effective frames;
determining frame extraction time length of continuous effective frames based on the frame number;
and when the frame extraction time length of the continuous effective frames is greater than or equal to the preset frame extraction time length, determining that personnel retention exists in the target area.
In a second aspect, an embodiment of the present invention further provides a personnel retention monitoring device, including:
the first acquisition module is used for acquiring a current monitoring frame of a monitoring area, wherein the current monitoring frame comprises a plurality of shooting images shot by a plurality of monitoring devices at the current moment, and the monitoring area is monitored by the plurality of monitoring devices;
the de-duplication module is used for performing rough de-duplication on the plurality of shot images based on the image similarity to obtain a plurality of images to be fused, wherein the number of the images to be fused is smaller than or equal to that of the shot images;
the computing module is used for computing the transparency of the overlapping area between the images to be fused, and carrying out image fusion on the images to be fused based on the transparency of the overlapping area to obtain a fused image;
the detection module is used for detecting the number of people of the fusion image to obtain the number of people of the fusion image;
the first determining module is used for determining the current monitoring frame as a current effective frame if the number of people in the fusion image meets the preset number of people;
a second determining module, configured to determine whether a continuous effective frame meets a preset condition, where the continuous effective frame includes a current effective frame and a historical effective frame, and two adjacent effective frames in the continuous effective frame have a preset frame extraction interval;
and the third determining module is used for determining that personnel retention exists in the monitoring area if the continuous effective frames meet the preset condition.
In a third aspect, an embodiment of the present invention provides an electronic device, including: the system comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the steps in the personnel retention monitoring method provided by the embodiment of the invention when executing the computer program.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium having a computer program stored thereon, which when executed by a processor, implements the steps of the personnel retention monitoring method provided by the embodiments of the present invention.
In the embodiment of the invention, a current monitoring frame of a monitoring area is acquired; coarse de-duplication is performed on a plurality of captured images based on image similarity to obtain a plurality of images to be fused; the transparency of the overlapping areas between the images to be fused is calculated, and the images are fused based on that transparency to obtain a fused image; the number of people in the fused image is detected; if it meets the preset number of people, the current monitoring frame is determined as the current effective frame; whether the continuous effective frames satisfy a preset condition is determined; and if they do, personnel retention in the monitoring area is determined. In other words, the method coarsely de-duplicates the images captured in a specific area to obtain the images to be fused, fuses them according to the transparency of the overlapping areas between them, and determines whether personnel retention exists in the specific area after counting the people in the fused images and confirming continuous effective frames. Because the retention condition is determined from continuous effective frames, the number of people in a flowing state can be counted and analyzed more accurately, improving both the accuracy and the efficiency of the personnel retention monitoring method.
Drawings
In order to illustrate the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the invention; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a personnel retention monitoring method provided by an embodiment of the present invention;
FIG. 2 is a flow chart of another personnel retention monitoring method provided by an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a personnel retention monitoring device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are plainly only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the protection scope of the invention.
As shown in fig. 1, fig. 1 is a flowchart of a personnel retention monitoring method according to an embodiment of the present invention, where the personnel retention monitoring method includes the steps of:
101. Acquiring a current monitoring frame of the monitoring area.
In the embodiment of the invention, the personnel retention monitoring method can be applied to a city management platform, and the city management platform stores the monitoring image information of each public area and has the functions of data acquisition, data analysis, data transmission and data storage. The monitoring image information of the public area can be obtained by shooting by a plurality of monitoring devices deployed in the public area, and is acquired and stored by the city management platform.
The monitoring area can be obtained by dividing an area needing to be subjected to personnel quantity investigation or personnel movement tracking according to urban development planning by urban management personnel. Specifically, the monitoring area generally needs to be monitored by a plurality of monitoring devices together, so that the monitoring range of the monitoring area can be covered, and the scene content is completely presented. For example, personnel retention in a prison public area, or some large public area, and public areas such as subway stations, which may have a large number of personnel flowing.
The current monitoring frame may be the plurality of captured images taken by the plurality of monitoring devices at the current moment. Specifically, in a practical implementation, a city manager may set a frame extraction interval for the monitoring frames and sample the monitoring images acquired by the monitoring devices at that interval. For example, where the personnel flow in the area is fast and the area is wide, the interval may be set to one inspection every 30 minutes; that is, every 30 minutes constitutes one frame interval, the monitoring images are sampled at each 30-minute mark, and those monitoring images are taken together as the current monitoring frame.
In one possible embodiment, a city manager sets a preset frame extraction interval through the city management platform according to the area of the monitoring area and the personnel flow speed, samples the monitoring images acquired by the monitoring devices in the area at that interval, and takes the sampled monitoring images as the current monitoring frame.
102. Performing coarse de-duplication on the plurality of captured images based on image similarity to obtain a plurality of images to be fused.
In the embodiment of the invention, image similarity is a measure of how alike two images are; by computing the similarity between images, the images can be coarsely de-duplicated, that is, suspected duplicates and repeatedly appearing images are roughly removed. The number of images to be fused is less than or equal to the number of captured images, and no duplicate or near-identical images remain among the images to be fused. Specifically, de-duplication can be performed with image processing algorithms such as image hashing, feature extraction, or duplicate-image identification; after de-duplication, at least one copy of each repeated photo is retained and added to the list of images to be fused.
In one possible embodiment, the city management platform performs similarity calculation on the acquired multiple monitoring images, performs de-duplication processing on the multiple monitoring images according to a calculation result, and obtains multiple images to be fused, and stores the multiple images to be fused in a database of the city management platform.
103. Calculating the transparency of the overlapping areas between the images to be fused, and fusing the images to be fused based on that transparency to obtain a fused image.
In the embodiment of the present invention, transparency may refer to the degree to which one image shows through another; in image fusion it manifests as how far an upper-layer image's pixels cover those of the lower-layer image. Specifically, during fusion the upper-layer image covers the lower-layer image: if the transparency of the upper layer is one hundred percent, the details of the lower layer emerge fully; conversely, when the transparency of the upper layer is zero percent, the lower layer is completely covered by the upper layer, so that only the upper-layer image can be seen.
Image fusion may refer to the process of combining the information of a plurality of images into one image through an image fusion algorithm; the result is the fused image. Specifically, fusion may be performed with image processing algorithms such as averaging or weighting methods, pixel gray-level selection, or PCA-based fusion.
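As an illustration of the transparency convention just described, the following minimal sketch alpha-blends an upper-layer image over a lower-layer one with NumPy; the function name and the flat grayscale example are illustrative, not taken from the patent.

```python
import numpy as np

def alpha_blend(upper: np.ndarray, lower: np.ndarray, transparency: float) -> np.ndarray:
    """Blend an upper-layer image over a lower-layer image.

    `transparency` is the upper layer's transparency: 1.0 lets the lower
    image show through fully, 0.0 covers it completely, matching the
    convention described above.
    """
    blended = ((1.0 - transparency) * upper.astype(np.float32)
               + transparency * lower.astype(np.float32))
    return np.clip(blended, 0, 255).astype(np.uint8)

# Two flat 4x4 gray patches blended at 50% transparency give pixel value 125.
a = np.full((4, 4), 200, dtype=np.uint8)
b = np.full((4, 4), 50, dtype=np.uint8)
print(alpha_blend(a, b, transparency=0.5)[0, 0])  # -> 125
```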
In one possible embodiment, the city management platform extracts features from the monitoring images, calculates the corresponding similarities from the extracted features, then maps the degree of similarity to color depth according to a preset color mapping to form a heat map, and weight-fuses the monitoring images according to this heat map to obtain the fused image. Feature extraction can be performed by a convolutional neural network. Specifically, a feature extraction model to be trained is first constructed and trained with supervision: a scene image is input into the model, the feature image of the scene image is output, and the model is trained with the goal that the similarity between the feature image and the scene image approaches 1. When the similarity between the output feature image and the scene image approaches 1, the trained feature extraction model is obtained, and the features of the monitoring images are extracted with it so that similarity can be computed from the features.
104. Detecting the number of people in the fused image to obtain the number of people in the fused image.
In the embodiment of the invention, the number of people may be obtained by running a people-counting algorithm, such as a head-shoulder vision algorithm, a feature clustering algorithm, or a region growing algorithm, on the fused image. Specifically, in this embodiment the head-shoulder positions of human bodies in the fused image are identified and the count is computed with a head-shoulder vision algorithm, giving the number of people in the fused image.
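A minimal sketch of the counting step under the assumption that a pre-trained head-shoulder detector is available; `detector.detect` and its return format are hypothetical placeholders, since no specific model is named here.

```python
def count_people(fused_image, detector) -> int:
    """Count people by counting confident head-shoulder detections.

    `detector.detect` is assumed to return (x, y, w, h, score) tuples; any
    head-shoulder model exposing such an interface would fit here. The 0.5
    score cut-off is illustrative.
    """
    return sum(1 for *_box, score in detector.detect(fused_image) if score >= 0.5)
```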
105. If the number of people in the fused image meets the preset number of people, determining the current monitoring frame as the current effective frame.
In the embodiment of the invention, the preset number of people can be determined from the flow speed of people in the monitored area and the size of the area. Specifically, it can be set by the rule that the larger the area of the monitoring area, the higher the preset number of people, and the slower the flow speed of people in the monitoring area, the higher the preset number of people. The preset number of people can be determined according to the following formula:
P = (A / B) × T_s
wherein P is the preset people-number parameter, A is the area parameter of the monitoring area, B is the personnel flow speed parameter of the monitoring area, and T_s is the number of people reasonably contained in one standard square meter. As the formula shows, when the area parameter of the monitoring area is larger and its personnel flow speed parameter is smaller, the preset people-number parameter of the area is larger, that is, the preset number of people is set higher.
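A worked sketch of this rule, assuming the reconstruction P = (A / B) × T_s above (the original formula is an image in the published text, so the exact form is an assumption drawn from the stated monotonicity):

```python
def preset_people_count(area_m2: float, flow_speed: float, people_per_m2: float) -> int:
    """Preset number of people: grows with area A, shrinks with flow speed B.

    Implements the reconstructed rule P = (A / B) * T_s; the patent's own
    formula is not reproduced in the text, so this exact form is an assumption.
    """
    return int(area_m2 / flow_speed * people_per_m2)

# Illustrative inputs: a 2000 m^2 concourse, flow-speed parameter 4, and a
# standard occupancy of 2 people per square meter.
print(preset_people_count(2000.0, 4.0, 2.0))  # -> 1000
```

With these illustrative inputs the preset count comes out at 1000, matching the example used below.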
The current effective frame may be the monitoring frame whose fused image satisfies the preset number of people. Specifically, when the number of people in the fused image meets the preset number of people, the monitoring frame corresponding to that fused image is determined as the current effective frame. For example, suppose that, given the area of the monitoring area and the flow speed of its personnel, the preset number of people is set to 1000; when the number of people in the fused image calculated by the people-counting algorithm is 1000 or 1001, that number is greater than or equal to the preset number of people, and the monitoring frame corresponding to the current fused image is taken as the current effective frame.
106. Determining whether the continuous effective frames satisfy a preset condition.
In the embodiment of the present invention, the continuous effective frames include, but are not limited to, the current effective frame and the historical effective frames. The historical effective frames may be counted from the previous effective frame back to the most recent invalid frame, and the consecutive effective frames between those two points are taken as the historical effective frames. Specifically, there is a preset frame extraction interval between the historical effective frames and the current effective frame, and two adjacent frames among the continuous effective frames are likewise separated by the preset frame extraction interval.
In general, a run of consecutive current effective frames can be taken as a whole: the total duration of the effective frames is computed from the preset frame extraction interval and the number of effective frames and compared with the frame extraction interval, and if the total duration of the effective frames is greater than or equal to the frame extraction interval, the continuous effective frames are determined to satisfy the preset condition.
107. If the continuous effective frames satisfy the preset condition, determining that personnel retention exists in the monitoring area.
In the embodiment of the present invention, personnel retention may mean that people stay in a certain place longer than originally scheduled or specified and cannot leave on time. Specifically, in this embodiment the flow of people may be slowed by controls imposed on the monitoring area and the like, so that personnel retention can occur. For example, during the subway rush hour a platform may impose personnel flow control measures, so that inbound or outbound passengers can only wait in long queues, and personnel retention may occur.
In a possible embodiment, after the city management platform processes the continuous effective frames, if the processing result indicates that the continuous effective frames satisfy the preset condition, it is determined that personnel retention may exist in the monitoring area corresponding to those frames, and city managers are notified through a terminal or other device to manage the monitoring area.
Optionally, in the step of acquiring the current monitoring frame of the monitoring area, the area of the monitoring area and the personnel flow speed may be acquired, then a preset frame extraction interval is determined based on the area of the monitoring area and the personnel flow speed, and finally the current monitoring frame of the monitoring area is determined based on the preset frame extraction interval.
In the embodiment of the present invention, the area of the monitoring area may be estimated from the monitoring images. Specifically, after the monitoring images of the monitoring area are obtained, they are de-duplicated to obtain an approximate scene image of the area, and the area covered by the image is estimated from a reference object in it; the reference object may be chosen per implementation to reduce error, for example a dustbin or floor tile of standard size. The personnel flow speed may be calculated as the ratio of the distance a person moves between a position at a first time and a position at a second time to the elapsed time. Specifically, the movement distance is obtained by recording the position information at the first time and at the second time, and the ratio of that distance to the time difference between the recordings is taken as the personnel flow speed.
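The flow-speed estimate described here is a plain distance-over-time ratio; a minimal sketch (coordinate units and names are illustrative):

```python
import math

def person_flow_speed(pos1: tuple, pos2: tuple, t1: float, t2: float) -> float:
    """Flow speed as displacement over elapsed time.

    pos1 and pos2 are (x, y) positions (e.g. in meters) recorded at times
    t1 and t2 (in seconds), as described above.
    """
    return math.dist(pos1, pos2) / (t2 - t1)

# A person moving 10 m (a 6-8-10 triangle) in 5 s flows at 2 m/s.
print(person_flow_speed((0.0, 0.0), (6.0, 8.0), 0.0, 5.0))  # -> 2.0
```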
More specifically, the preset frame extraction interval may be set from the monitored area and the personnel flow speed according to the following formula:
Q = Σ(0.7G + 0.3H) × T1
wherein Q is the preset frame extraction interval, generally in frames; G is a parameter for the influence of the monitoring area's size on the frame extraction interval; H is a parameter for the influence of the personnel flow speed on the frame extraction interval, the specific influence being judged from the environmental or behavioral factors of the concrete implementation; and T1 is the number of frames per unit time observable by the normal human eye. As the formula shows, the larger the monitored area and the faster the personnel flow, the larger the preset frame extraction interval, that is, the more distinct information is captured, which favors accurate statistics.
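A direct transcription of Q = Σ(0.7G + 0.3H) × T1; the per-region factor values in the example are purely illustrative:

```python
def frame_extraction_interval(area_factors, speed_factors, t1: float) -> float:
    """Q = sum(0.7 * G + 0.3 * H) * T1 over per-region (G, H) factor pairs.

    G: influence of the monitored area's size; H: influence of the person
    flow speed; T1: frames per unit time observable by the human eye.
    """
    return sum(0.7 * g + 0.3 * h for g, h in zip(area_factors, speed_factors)) * t1

# Two sub-regions with illustrative factors and T1 = 24 frames.
print(frame_extraction_interval([1.0, 0.8], [0.5, 0.5], 24.0))  # ~= 37.44
```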
In a possible embodiment, when the city management platform acquires the monitoring images of the monitoring area, it derives the preset frame extraction interval by analyzing the monitoring images and determines the content of the current monitoring frame according to that interval.
Optionally, in the step of performing coarse de-duplication on the plurality of captured images based on image similarity to obtain the plurality of images to be fused, hash computation may first be performed on the captured images to obtain a hash code for each image; the Hamming distances between the captured images are then obtained from those hash codes; finally, the Hamming distances are compared with a preset Hamming distance threshold, the images whose Hamming distances are greater than the preset threshold are determined to require de-duplication, and those images are removed to obtain the plurality of images to be fused.
In the embodiment of the present invention, the hash calculation may be the process of computing the plurality of captured images with a perceptual hash algorithm or a mean hash algorithm, and the hash code is the series of values obtained by applying the hash algorithm to a captured image. Specifically, different hash algorithms produce different hash codes for the same image, so the same hash algorithm must be used when computing image similarity with hashes. For example, the mean hash algorithm compares each pixel value of an image with the image's average pixel value: a pixel greater than or equal to the average is recorded as 1 and a pixel smaller than the average as 0, converting the image into a series of hash values, and the degree of similarity between images is then judged from the hash values of the images.
The Hamming distance may refer to the number of bit positions that must be changed to turn one hash code into the other. For example, for the hash codes 110000 and 111000, whichever code is taken as the start, only the third bit needs to be changed from 1 to 0 or from 0 to 1, i.e. only one bit differs, so the Hamming distance between the two codes is 1. The preset Hamming distance threshold can be set according to the actual scene and requirements. Specifically, the overlapping areas of the shooting ranges of the devices in the current scene can be counted, all the overlapping areas divided into pairs of devices, and the Hamming distance threshold calculated from the overlap proportions, ordered from large to small, where V is the Hamming distance threshold, W is the proportion of the overlapping area within a device's shooting area, N is the number of areas in which that overlap proportion occurs, and M is the total number of overlapping shooting areas. The larger the proportion W of the overlapping area in the device shooting area and the larger the number N of areas in which the overlap occurs, the larger the Hamming distance threshold is set.
In a possible embodiment, after obtaining the plurality of captured images, the city management platform performs hash computation on them to obtain the hash code corresponding to each image, calculates the Hamming distances between the captured images from those codes, sets the Hamming distance threshold according to the threshold-setting rule above, determines that images whose Hamming distance is greater than the preset threshold are repeated images, de-duplicates the captured images accordingly, and finally obtains the plurality of images to be fused.
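A minimal sketch of this mean-hash deduplication flow using Pillow and NumPy. The 8×8 hash size is a conventional choice, and the removal rule follows the convention stated above, under which a Hamming distance greater than the preset threshold marks an image as repeated:

```python
import numpy as np
from PIL import Image

def average_hash(image: Image.Image, size: int = 8) -> np.ndarray:
    """Mean hash: 1 where a pixel is >= the image's mean gray level, else 0."""
    gray = np.asarray(image.convert("L").resize((size, size)), dtype=np.float32)
    return (gray >= gray.mean()).flatten()

def hamming_distance(h1: np.ndarray, h2: np.ndarray) -> int:
    """Number of bit positions at which two hash codes differ."""
    return int(np.count_nonzero(h1 != h2))

def coarse_deduplicate(images: list, threshold: int) -> list:
    """Keep an image only while its distance to every kept image stays within
    the preset threshold; per the text above, a distance greater than the
    threshold marks the image as repeated, and it is removed."""
    kept, hashes = [], []
    for img in images:
        h = average_hash(img)
        if all(hamming_distance(h, prev) <= threshold for prev in hashes):
            kept.append(img)
            hashes.append(h)
    return kept
```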
Optionally, in the step of calculating the transparency of the overlapping areas between the images to be fused and fusing the images based on that transparency to obtain the fused image, the characteristic values of the overlapping areas between the images to be fused may first be extracted; the similarity of the overlapping areas is then calculated from those characteristic values; the transparency of the overlapping areas is determined from that similarity; and finally the images to be fused are fused based on the transparency of the overlapping areas to obtain the fused image.
In the embodiment of the present invention, the feature value of the overlapping area between images to be fused may refer to an identifiable, highly recognizable environmental feature; for example, a green, thorn-covered ball on yellow sand would be associated with a cactus. The similarity of the overlapping areas between the images to be fused can be calculated from the ratio of such feature values to the area of the overlapping region, and the transparency of the overlapping areas is then determined from that similarity. The transparency of the overlapping area between the images to be fused may refer to the transparency of each layered image within the overlap; for example, with three layers of images in the overlapping area, if the transparency of the upper-layer image is one hundred percent, that of the middle layer is one hundred percent, and that of the lower layer is zero percent, the pixel information of the overlapped images is entirely the image information of the lower-layer image. The transparency of each image can therefore be determined by weighting according to the degree of similarity between the overlapping areas of the images.
Optionally, in the step of determining the transparency of the overlapping areas between the images to be fused based on the similarity of the overlapping areas, the heat map distribution of the overlapping areas between the images to be fused may be determined from that similarity, and the transparency of the overlapping areas may then be determined from the heat map distribution.
In the embodiment of the present invention, the heat map distribution of the overlapping areas between the images to be fused may refer to the following process: the share of each image to be fused within the overlapping areas is obtained from the similarity calculation; the similarity is then mapped to a corresponding color by a preset color mapping; and an image with a color distribution corresponding to the overlapping areas is generated by the rule that the higher the similarity, the redder the color, and the lower the similarity, the bluer the color.
In one possible embodiment, after obtaining the similarities of the overlapping areas between the images to be fused, the city management platform generates a fused image with a red-blue color distribution by the rule that areas with higher similarity are colored redder and areas with lower similarity are colored bluer; according to this distribution, during fusion the transparency of the image corresponding to a redder overlapping area is adjusted lower, and the transparency of the image corresponding to a bluer overlapping area is adjusted higher.
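A sketch of the similarity-to-transparency rule just described; the linear mapping is an assumed simplification, since only the direction (redder, i.e. higher similarity, means lower transparency) is specified:

```python
import numpy as np

def similarity_to_transparency(similarity: np.ndarray) -> np.ndarray:
    """Map per-region similarity in [0, 1] to transparency in [0, 1].

    Higher-similarity (redder) regions get lower transparency and
    lower-similarity (bluer) regions get higher transparency, per the
    rule above; the linear curve itself is an assumption.
    """
    return 1.0 - np.clip(similarity, 0.0, 1.0)

# Three overlap regions with similarities 0.9, 0.5, 0.1.
print(similarity_to_transparency(np.array([0.9, 0.5, 0.1])))  # -> [0.1 0.5 0.9]
```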
Optionally, in the step of determining the transparency of the overlapping areas between the images to be fused based on the heat map distribution of the overlapping areas, the heat map proportion of the overlapping areas between the images to be fused may be determined from the heat map distribution, and the transparency of the overlapping areas may then be determined from that heat map proportion.
In the embodiment of the present invention, the heat proportion may refer to the share an image occupies in the heat map of the fused image within an overlapping area. Specifically, the transparency of an image during fusion can be adjusted according to the share of the overlapping area it occupies. For example, when the share of the overlapping area occupied by an image is high, its similarity to the other images is too large, i.e. its features are less distinctive, and the transparency of the image during fusion is reduced. Conversely, when the share of the overlapping area occupied by an image is low, its similarity to the other images is small, i.e. its features are clearer, and the transparency of the image during fusion is increased.
In one possible embodiment, when the personnel retention monitoring method is invoked, the city management platform performs a weighted calculation from the heat map proportion that a given image contributes to the fused image, and dynamically adjusts transparency from the weights. Specifically, when the images are fused, the transparency of the previously fused result can be re-adjusted according to the number of images already fused. For example, when the first image and the second image are fused, their transparencies are adjusted to obtain a first fused image; when the third image is fused, the transparencies of the first fused image and the third image are adjusted according to the third image's share in the heat map of the fused image, the third image is then fused onto the first fused image to obtain a second fused image, and so on.
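A sketch of this iterative fusion loop: each new image is blended over the running fusion with a per-step transparency standing in for the heat-map-derived weight (the linear blend and the example values are illustrative):

```python
import numpy as np

def fuse_images(images, transparencies):
    """Fold a list of equally sized images into one fused image, one blend per step.

    transparencies[i] is the transparency assigned to images[i + 1] when it is
    blended over the running fusion; re-weighting each round stands in for the
    heat-map-based adjustment described above.
    """
    fused = images[0].astype(np.float32)
    for img, alpha in zip(images[1:], transparencies):
        # alpha = 1.0 keeps the running fusion, alpha = 0.0 replaces it.
        fused = (1.0 - alpha) * img.astype(np.float32) + alpha * fused
    return np.clip(fused, 0, 255).astype(np.uint8)

# Three flat 2x2 patches: 10, then 100 at transparency 0.5, then 200 at 0.8.
imgs = [np.full((2, 2), v, dtype=np.uint8) for v in (10, 100, 200)]
print(fuse_images(imgs, [0.5, 0.8])[0, 0])  # -> 84
```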
Optionally, in the step of determining that personnel retention exists in the monitoring area if the continuous effective frames satisfy the preset condition, the historical effective frames before the current effective frame may first be acquired; the continuous effective frames are then determined from the historical effective frames and the current effective frame; the number of frames of the continuous effective frames is calculated; the frame extraction duration of the continuous effective frames is determined from that frame count; and finally, when the frame extraction duration of the continuous effective frames is greater than or equal to the preset frame extraction duration, it is determined that personnel retention exists in the target area.
In the embodiment of the present invention, there is a preset frame extraction interval between the last frame of the historical effective frames and the current effective frame. Starting from the historical effective frames, the current effective frame is counted into the continuous effective frames and the number of frames is recorded; the frame extraction duration of the continuous effective frames is then computed from the recorded frame count and the preset frame extraction interval and compared with the preset frame extraction duration, and if it is greater than or equal to the preset frame extraction duration, it is determined that personnel retention exists in the target area. For example, with one frame extracted per second and a preset frame extraction duration of 10 s, if the continuous effective frames span 10 s or more, that is, the number of people exceeds the preset number of people in ten consecutive fused images, it is determined that personnel retention exists in the target area.
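The final retention decision thus reduces to comparing the span of consecutive effective frames against the preset frame extraction duration, as in this sketch (names are illustrative):

```python
def retention_detected(consecutive_valid_frames: int,
                       frame_interval_s: float,
                       preset_duration_s: float) -> bool:
    """True when the consecutive effective frames span at least the preset duration."""
    return consecutive_valid_frames * frame_interval_s >= preset_duration_s

# Ten consecutive effective frames at one frame per second reach the 10 s preset.
print(retention_detected(10, 1.0, 10.0))  # -> True
```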
As shown in fig. 2, an embodiment of the present invention further provides a personnel retention monitoring process, which includes:
Firstly, an alarm threshold and a frame extraction interval are set according to the scene factors of the current public area. Video frames are then extracted from all devices in the public area at the frame extraction interval to obtain frame-extracted images, which are coarsely de-duplicated with an image hash algorithm. The similarity of the de-duplicated images is calculated with a convolutional neural network, a corresponding regional heat map is generated from the similarity results, the colors of the heat map are weighted to obtain the transparency of each corresponding region, and the frame-extracted images are fused according to those transparencies.
After the images are fused, the number of people in the fused image is judged. If it does not reach the alarm threshold, the frame-extracted images corresponding to the current fused image and all images between them are discarded, and the process returns to the step of extracting video frames from all devices in the public area at the frame extraction interval. If the number of people in the fused image reaches the alarm threshold, the current frame-extracted image is recorded as the current effective frame, and it is judged whether the total frame extraction duration from the current effective frame back to the previous historical effective frame exceeds the frame extraction interval: if it is smaller than the frame extraction interval, the process returns to the video frame extraction step; if it is greater than or equal to the frame extraction interval, an early-warning notification is generated for the city managers, and the personnel retention monitoring method ends.
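Putting the FIG. 2 flow together, the sketch below wires the stages into one loop; every helper is injected as a callable placeholder for the corresponding step sketched earlier, and the loop bound is only there to keep the example finite:

```python
from typing import Callable, List

def monitor_public_area(extract_frames: Callable[[], List],
                        dedup: Callable[[List], List],
                        fuse: Callable[[List], object],
                        count_people: Callable[[object], int],
                        notify: Callable[[], None],
                        alarm_threshold: int,
                        frame_interval_s: float,
                        preset_duration_s: float,
                        max_rounds: int = 1000) -> None:
    """One run of the FIG. 2 loop: extract, de-duplicate, fuse, count, then
    track consecutive effective frames until the preset duration is reached."""
    consecutive_valid = 0
    for _ in range(max_rounds):
        frames = extract_frames()             # video frame extraction per interval
        fused = fuse(dedup(frames))           # rough dedup, then weighted fusion
        if count_people(fused) < alarm_threshold:
            consecutive_valid = 0             # below threshold: discard and resample
            continue
        consecutive_valid += 1                # record a current effective frame
        if consecutive_valid * frame_interval_s >= preset_duration_s:
            notify()                          # early-warning notification
            return
```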
As shown in fig. 3, an embodiment of the present invention further provides a personnel retention monitoring device, which is characterized by including:
a first obtaining module 301, configured to obtain a current monitoring frame of a monitoring area, where the current monitoring frame includes a plurality of captured images captured by a plurality of monitoring devices at a current moment, and the monitoring area is monitored by the plurality of monitoring devices;
the de-duplication module 302 is configured to perform coarse de-duplication on the multiple captured images based on image similarity, so as to obtain multiple images to be fused, where the number of the images to be fused is less than or equal to the number of the captured images;
the calculating module 303 is configured to calculate transparency of an overlapping area between the images to be fused, and perform image fusion on the images to be fused based on the transparency of the overlapping area, so as to obtain a fused image;
the detection module 304 is configured to detect the number of people in the fused image, so as to obtain the number of people in the fused image;
a first determining module 305, configured to determine the current monitoring frame as a current valid frame if the number of people in the fused image meets a preset number of people;
a second determining module 306, configured to determine whether a continuous effective frame meets a preset condition, where the continuous effective frame includes a current effective frame and a historical effective frame, and two adjacent effective frames in the continuous effective frame have a preset frame extraction interval;
a third determining module 307, configured to determine that personnel retention exists in the monitoring area if the continuous effective frames meet the preset condition.
Optionally, the first obtaining module 301 includes:
the first acquisition submodule is used for acquiring the area of the monitoring area and the personnel flow speed;
the first determining submodule is used for determining a preset frame extraction interval based on the area of the monitoring area and the personnel flow speed;
and the second determining submodule is used for determining the current monitoring frame of the monitoring area based on the preset frame extraction interval.
Optionally, the deduplication module 302 includes:
the first computing sub-module is used for carrying out hash computation on a plurality of shooting images to obtain hash codes of each shooting image;
the second computing sub-module is used for obtaining the Hamming distance between the shooting images based on the hash codes of the shooting images;
and the third determining submodule is used for comparing the Hamming distance between the shot images with a preset Hamming distance threshold value, determining the shot image needing to be subjected to de-duplication from the shot images with the Hamming distance larger than the preset Hamming distance threshold value, and removing the shot image needing to be subjected to de-duplication to obtain a plurality of images to be fused.
Optionally, the calculating module 303 includes:
the first extraction submodule is used for extracting characteristic values of overlapping areas among the images to be fused;
the third calculation sub-module is used for calculating the similarity of the overlapping areas between the images to be fused based on the characteristic values of the overlapping areas between the images to be fused;
a fourth determining submodule, configured to determine transparency of an overlapping region between the images to be fused based on similarity of the overlapping region between the images to be fused;
and the first fusion sub-module is used for fusing the images to be fused based on the transparency of the overlapping area between the images to be fused to obtain a fused image.
Optionally, the apparatus further includes:
the fourth determining module is used for determining the heat map distribution of the overlapping area between the images to be fused based on the similarity of the overlapping area between the images to be fused;
and a fifth determining module, configured to determine the transparency of the overlapping area between the images to be fused based on the heat map distribution of the overlapping area between the images to be fused.
Optionally, the apparatus further includes:
the sixth determining module is used for determining the heat map proportion of the overlapping area between the images to be fused based on the heat map distribution of the overlapping area between the images to be fused;
and a seventh determining module, configured to determine the transparency of the overlapping area between the images to be fused based on the heat map proportion of the overlapping area between the images to be fused.
Optionally, the third determining module 307 includes:
the second acquisition sub-module is used for acquiring a historical effective frame before a current effective frame, and a preset frame extraction interval is arranged between the last frame of the historical effective frame and the current effective frame;
a fifth determining sub-module for determining a continuous active frame based on the historical active frame and the current active frame;
a fourth calculation sub-module for calculating the number of frames of the continuous effective frames;
a sixth determining submodule, configured to determine a frame duration of continuous effective frames based on the frame number;
and a seventh determining submodule, configured to determine that a person retention condition exists in the target area when the frame extraction time length of the continuous effective frame is greater than or equal to a preset frame extraction time length.
As shown in fig. 4, an embodiment of the present invention further provides an electronic device, which is characterized by including a processor, where the processor may execute any one of the above personnel retention monitoring methods.
Specifically, the electronic device includes a processor 401 and a memory 402, and a computer program of the personnel retention monitoring method stored on the memory 402 and runnable on the processor 401, wherein:
the processor 401 runs the computer program of the personnel retention monitoring method stored in the memory 402, performing the steps of:
acquiring a current monitoring frame of a monitoring area, wherein the current monitoring frame comprises a plurality of shooting images shot by a plurality of monitoring devices at the current moment, and the monitoring area is monitored by the plurality of monitoring devices;
performing coarse de-duplication processing on a plurality of shot images based on image similarity to obtain a plurality of images to be fused, wherein the number of the images to be fused is smaller than or equal to that of the shot images;
calculating the transparency of an overlapping area between the images to be fused, and carrying out image fusion on the images to be fused based on the transparency of the overlapping area to obtain a fused image;
performing people-count detection on the fused image to obtain the number of people in the fused image;
if the number of people in the fused image meets the preset number of people, determining the current monitoring frame as the current effective frame;
determining whether continuous effective frames meet a preset condition, wherein the continuous effective frames comprise the current effective frame and historical effective frames, and there is a preset frame extraction interval between two adjacent effective frames in the continuous effective frames;
and if the continuous effective frames meet the preset condition, determining that personnel retention exists in the monitoring area.
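For illustration, a minimal Python sketch of the "effective frame" test above follows; the detector callable, the helper name frame_is_effective, and reading "meets the preset number of people" as "greater than or equal to" are assumptions, since the embodiment does not fix a concrete detector or comparison.

    from typing import Callable, Sequence
    import numpy as np

    def frame_is_effective(fused_image: np.ndarray,
                           detect_persons: Callable[[np.ndarray], Sequence],
                           preset_count: int) -> bool:
        # detect_persons stands in for any person detector (e.g. a CNN)
        # that returns one entry per person found in the fused image.
        return len(detect_persons(fused_image)) >= preset_count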
Optionally, in the personnel retention monitoring method, the step, executed by the processor 401, of acquiring the current monitoring frame of the monitoring area includes:
acquiring the area of the monitoring area and the personnel flow speed;
determining a preset frame extraction interval based on the area of the monitoring area and the personnel flow speed;
and determining the current monitoring frame of the monitoring area based on the preset frame extraction interval.
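As a hedged sketch of this step only: the embodiment gives no concrete formula linking area and flow speed to the interval, so the crossing-time heuristic and all constants below are assumptions for illustration.

    def preset_frame_interval(area_m2: float, flow_speed_mps: float,
                              base_interval_s: float = 1.0) -> float:
        # Assumed rule: the longer a person needs to cross the monitored
        # area, the longer the sampling interval can be without missing
        # a retention event; a near-static scene is sampled slowly.
        if flow_speed_mps <= 0:
            return 10.0 * base_interval_s
        crossing_time_s = (area_m2 ** 0.5) / flow_speed_mps
        return min(10.0 * base_interval_s,
                   max(base_interval_s, 0.5 * crossing_time_s))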
Optionally, in the personnel retention monitoring method, the step, executed by the processor 401, of performing coarse de-duplication processing on the plurality of captured images based on image similarity to obtain a plurality of images to be fused includes:
performing hash calculation on the plurality of captured images to obtain a hash code of each captured image;
obtaining the Hamming distance between the captured images based on the hash codes of the captured images;
comparing the Hamming distances between the captured images with a preset Hamming distance threshold, determining the captured images needing de-duplication from the captured images whose Hamming distance is smaller than or equal to the preset Hamming distance threshold, and eliminating the captured images needing de-duplication to obtain the plurality of images to be fused.
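A minimal sketch of this coarse de-duplication, assuming an average hash as the unspecified "hash calculation" and treating a small Hamming distance as a near-duplicate; the threshold of 10 bits is illustrative only.

    import numpy as np
    from PIL import Image

    def ahash(img: Image.Image, size: int = 8) -> np.ndarray:
        # Average hash: a 64-bit fingerprint of the downscaled grayscale image.
        g = np.asarray(img.convert("L").resize((size, size)), dtype=np.float32)
        return (g > g.mean()).flatten()

    def hamming(h1: np.ndarray, h2: np.ndarray) -> int:
        return int(np.count_nonzero(h1 != h2))

    def coarse_dedup(images: list, threshold: int = 10) -> list:
        # Keep one representative per near-duplicate group: an image whose
        # distance to an already kept image is at most the threshold is
        # eliminated, leaving the images to be fused.
        kept = []
        for img in images:
            h = ahash(img)
            if all(hamming(h, kh) > threshold for _, kh in kept):
                kept.append((img, h))
        return [img for img, _ in kept]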
Optionally, in the personnel retention monitoring method, the step, executed by the processor 401, of calculating the transparency of the overlapping area between the images to be fused and performing image fusion on the images to be fused based on the transparency of the overlapping area to obtain the fused image includes:
extracting characteristic values of overlapping areas among the images to be fused;
calculating the similarity of the overlapping areas between the images to be fused based on the characteristic values of the overlapping areas between the images to be fused;
determining the transparency of the overlapping area between the images to be fused based on the similarity of the overlapping area between the images to be fused;
and fusing the images to be fused based on the transparency of the overlapping area between the images to be fused, so as to obtain a fused image.
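For illustration, one way to realize these four sub-steps is sketched below; the cosine similarity over raw pixels as the "characteristic value" comparison and the similarity-to-alpha mapping are assumptions, as the embodiment leaves both open.

    import numpy as np

    def overlap_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity of the flattened pixel values of two aligned,
        # equally sized overlap regions.
        va = a.astype(np.float32).ravel()
        vb = b.astype(np.float32).ravel()
        denom = float(np.linalg.norm(va) * np.linalg.norm(vb)) or 1.0
        return float(va @ vb) / denom

    def fuse_overlap(a: np.ndarray, b: np.ndarray) -> np.ndarray:
        # Assumed mapping: highly similar overlaps blend evenly (alpha 0.5),
        # dissimilar overlaps favour the first image (alpha toward 1.0).
        s = max(0.0, min(1.0, overlap_similarity(a, b)))
        alpha = 1.0 - 0.5 * s
        fused = alpha * a.astype(np.float32) + (1.0 - alpha) * b.astype(np.float32)
        return fused.astype(a.dtype)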
Optionally, in the personnel retention monitoring method, the step, executed by the processor 401, of determining the transparency of the overlapping area between the images to be fused based on the similarity of the overlapping area includes:
determining the heat map distribution of the overlapping area between the images to be fused based on the similarity of the overlapping area between the images to be fused;
and determining the transparency of the overlapping area between the images to be fused based on the heat map distribution of the overlapping area between the images to be fused.
Optionally, in the personnel retention monitoring method, the step, executed by the processor 401, of determining the transparency of the overlapping area between the images to be fused based on the heat map distribution of the overlapping area includes:
determining the heat map occupancy ratio of the overlapping area between the images to be fused based on the heat map distribution of the overlapping area between the images to be fused;
and determining the transparency of the overlapping area between the images to be fused based on the heat map occupancy ratio of the overlapping area between the images to be fused.
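A short sketch of the heat-map route, under the assumption that the heat map is a per-pixel activity map normalized to [0, 1] and that a higher heat ratio yields a lower transparency; the 0.5 threshold and the linear mapping are illustrative.

    import numpy as np

    def heat_ratio(heat_map: np.ndarray, hot_threshold: float = 0.5) -> float:
        # Fraction of overlap pixels whose heat exceeds the threshold.
        return float(np.count_nonzero(heat_map > hot_threshold)) / heat_map.size

    def transparency_from_heat(heat_map: np.ndarray) -> float:
        # Assumed rule: the hotter the overlap (more person activity),
        # the more opaque that region is kept in the fused image.
        return 1.0 - heat_ratio(heat_map)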
Optionally, in the personnel retention monitoring method, the step, executed by the processor 401, of determining that personnel retention exists in the monitoring area if the continuous effective frames meet the preset condition includes:
acquiring the historical effective frames before the current effective frame, wherein there is a preset frame extraction interval between the last frame of the historical effective frames and the current effective frame;
determining the continuous effective frames based on the historical effective frames and the current effective frame;
calculating the number of frames of the continuous effective frames;
determining the frame extraction duration of the continuous effective frames based on the number of frames;
and when the frame extraction duration of the continuous effective frames is greater than or equal to a preset frame extraction duration, determining that personnel retention exists in the monitoring area.
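The continuous-effective-frame test reduces to a small counter when frames are sampled exactly at the preset frame extraction interval; the class and parameter names below are illustrative.

    class RetentionMonitor:
        # Tracks consecutive effective frames sampled at a fixed interval
        # and flags retention once their span reaches a preset duration.
        def __init__(self, frame_interval_s: float, min_duration_s: float):
            self.frame_interval_s = frame_interval_s
            self.min_duration_s = min_duration_s
            self.consecutive = 0

        def update(self, frame_is_effective: bool) -> bool:
            self.consecutive = self.consecutive + 1 if frame_is_effective else 0
            return self.consecutive * self.frame_interval_s >= self.min_duration_s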
The embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, it realizes each process of the personnel retention monitoring method provided by the embodiment of the invention and can achieve the same technical effect; to avoid repetition, the details are not repeated here.
Those skilled in the art will appreciate that all or part of the processes of the above method embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The foregoing disclosure is illustrative of the present invention and is not to be construed as limiting the scope of the invention, which is defined by the appended claims.

Claims (10)

1. A method of monitoring personnel retention, the method comprising the steps of:
acquiring a current monitoring frame of a monitoring area, wherein the current monitoring frame comprises a plurality of captured images captured by a plurality of monitoring devices at the current moment, and the monitoring area is monitored by the plurality of monitoring devices;
performing coarse de-duplication processing on the plurality of captured images based on image similarity to obtain a plurality of images to be fused, wherein the number of images to be fused is smaller than or equal to the number of captured images;
calculating the transparency of an overlapping area between the images to be fused, and carrying out image fusion on the images to be fused based on the transparency of the overlapping area to obtain a fused image;
performing people-count detection on the fused image to obtain the number of people in the fused image;
if the number of people in the fused image meets the preset number of people, determining the current monitoring frame as the current effective frame;
determining whether continuous effective frames meet a preset condition, wherein the continuous effective frames comprise the current effective frame and historical effective frames, and there is a preset frame extraction interval between two adjacent effective frames in the continuous effective frames;
and if the continuous effective frames meet the preset condition, determining that personnel retention exists in the monitoring area.
2. The personnel retention monitoring method of claim 1, wherein the obtaining a current monitoring frame of a monitoring area comprises:
acquiring the area of the monitoring area and the personnel flow speed;
determining a preset frame extraction interval based on the area of the monitoring area and the personnel flow speed;
and determining the current monitoring frame of the monitoring area based on the preset frame extraction interval.
3. The personnel retention monitoring method according to claim 1 or 2, wherein the performing coarse deduplication processing on the plurality of captured images based on the image similarity to obtain a plurality of images to be fused comprises:
performing hash calculation on the plurality of captured images to obtain a hash code of each captured image;
obtaining the Hamming distance between the captured images based on the hash codes of the captured images;
comparing the Hamming distances between the captured images with a preset Hamming distance threshold, determining the captured images needing de-duplication from the captured images whose Hamming distance is smaller than or equal to the preset Hamming distance threshold, and eliminating the captured images needing de-duplication to obtain the plurality of images to be fused.
4. The personnel retention monitoring method according to claim 1, wherein the calculating the transparency of the overlapping area between the images to be fused and performing image fusion on the images to be fused based on the transparency of the overlapping area to obtain the fused image comprises:
extracting characteristic values of overlapping areas among the images to be fused;
calculating the similarity of the overlapping areas between the images to be fused based on the characteristic values of the overlapping areas between the images to be fused;
determining the transparency of the overlapping area between the images to be fused based on the similarity of the overlapping area between the images to be fused;
and fusing the images to be fused based on the transparency of the overlapping area between the images to be fused, so as to obtain a fused image.
5. The personnel retention monitoring method of claim 4, wherein the determining the transparency of the overlapping area between the images to be fused based on the similarity of the overlapping area between the images to be fused comprises:
determining the heat map distribution of the overlapping area between the images to be fused based on the similarity of the overlapping area between the images to be fused;
and determining the transparency of the overlapping area between the images to be fused based on the heat map distribution of the overlapping area between the images to be fused.
6. The personnel retention monitoring method of claim 5, wherein the determining the transparency of the overlapping area between the images to be fused based on the heat map distribution of the overlapping area between the images to be fused comprises:
determining the heat map occupancy ratio of the overlapping area between the images to be fused based on the heat map distribution of the overlapping area between the images to be fused;
and determining the transparency of the overlapping area between the images to be fused based on the heat map occupancy ratio of the overlapping area between the images to be fused.
7. The personnel retention monitoring method according to any one of claims 1-6, wherein the determining that personnel retention exists in the monitoring area if the continuous effective frames meet the preset condition comprises:
acquiring the historical effective frames before the current effective frame, wherein there is a preset frame extraction interval between the last frame of the historical effective frames and the current effective frame;
determining the continuous effective frames based on the historical effective frames and the current effective frame;
calculating the number of frames of the continuous effective frames;
determining the frame extraction duration of the continuous effective frames based on the number of frames;
and when the frame extraction duration of the continuous effective frames is greater than or equal to a preset frame extraction duration, determining that personnel retention exists in the monitoring area.
8. A personnel retention monitoring device, the personnel retention monitoring device comprising:
the first acquisition module is used for acquiring a current monitoring frame of a monitoring area, wherein the current monitoring frame comprises a plurality of captured images captured by a plurality of monitoring devices at the current moment, and the monitoring area is monitored by the plurality of monitoring devices;
the de-duplication module is used for performing coarse de-duplication on the plurality of captured images based on image similarity to obtain a plurality of images to be fused, wherein the number of images to be fused is smaller than or equal to the number of captured images;
the computing module is used for computing the transparency of the overlapping area between the images to be fused, and carrying out image fusion on the images to be fused based on the transparency of the overlapping area to obtain a fused image;
the detection module is used for performing people-count detection on the fused image to obtain the number of people in the fused image;
the first determining module is used for determining the current monitoring frame as the current effective frame if the number of people in the fused image meets the preset number of people;
a second determining module, configured to determine whether continuous effective frames meet a preset condition, wherein the continuous effective frames comprise the current effective frame and historical effective frames, and there is a preset frame extraction interval between two adjacent effective frames in the continuous effective frames;
and the third determining module is used for determining that personnel retention exists in the monitoring area if the continuous effective frames meet the preset condition.
9. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the personnel retention monitoring method according to any one of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps in the personnel retention monitoring method according to any one of claims 1 to 7.
CN202311333256.XA 2023-10-13 2023-10-13 Personnel retention monitoring method and device, electronic equipment and storage medium Pending CN117274910A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311333256.XA CN117274910A (en) 2023-10-13 2023-10-13 Personnel retention monitoring method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311333256.XA CN117274910A (en) 2023-10-13 2023-10-13 Personnel retention monitoring method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117274910A 2023-12-22

Family

ID=89202470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311333256.XA Pending CN117274910A (en) 2023-10-13 2023-10-13 Personnel retention monitoring method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117274910A (en)

Similar Documents

Publication Publication Date Title
US10070053B2 (en) Method and camera for determining an image adjustment parameter
KR101896406B1 (en) Road crack detection apparatus of pixel unit and method thereof, and computer program for executing the same
US9792505B2 (en) Video monitoring method, video monitoring system and computer program product
EP2959454B1 (en) Method, system and software module for foreground extraction
US20160260306A1 (en) Method and device for automated early detection of forest fires by means of optical detection of smoke clouds
CN110580428A (en) image processing method, image processing device, computer-readable storage medium and electronic equipment
KR101204259B1 (en) A method for detecting fire or smoke
CN105631418A (en) People counting method and device
CN115620212B (en) Behavior identification method and system based on monitoring video
CN110248101B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN111898581A (en) Animal detection method, device, electronic equipment and readable storage medium
JP5271227B2 (en) Crowd monitoring device, method and program
CN112733690A (en) High-altitude parabolic detection method and device and electronic equipment
CN110942456B (en) Tamper image detection method, device, equipment and storage medium
KR102391853B1 (en) System and Method for Processing Image Informaion
CN111325048A (en) Personnel gathering detection method and device
CN110781853A (en) Crowd abnormality detection method and related device
CN111444758A (en) Pedestrian re-identification method and device based on spatio-temporal information
CN111242023A (en) Statistical method and statistical device suitable for complex light passenger flow
CN112769877A (en) Group fog early warning method, cloud server, vehicle and medium
KR102244878B1 (en) Cctv security system and method based on artificial intelligence
CN110599514B (en) Image segmentation method and device, electronic equipment and storage medium
JP2020160840A (en) Road surface defect detecting apparatus, road surface defect detecting method, road surface defect detecting program
CN117132768A (en) License plate and face detection and desensitization method and device, electronic equipment and storage medium
KR102584708B1 (en) System and Method for Crowd Risk Management by Supporting Under and Over Crowded Environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination