CN112183304A - Off-position detection method, system and computer storage medium - Google Patents
- Publication number
- CN112183304A (application CN202011016989.7A)
- Authority
- CN
- China
- Prior art keywords
- track
- human body
- tracking
- detection
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Abstract
The invention provides an off-position detection method, system and computer storage medium, wherein the off-position detection method comprises the following steps: S1, acquiring time-series image data of the human bodies in a given area; S2, processing the image data to obtain a detection frame uniquely corresponding to each human body in the area; S3, tracking each human body in the area according to its detection frame; S4, calculating a track point for each frame from the human body tracking result to generate a human body displacement track; and S5, judging whether a human body has left a position according to whether its displacement track crosses the edge of that position. The method of the embodiment of the invention performs off-position detection from images and judges off-position from the human body track, so it achieves good recognition performance, obtains the movement track of the target person in real time, can judge whether several persons have left their positions at the same time, and reduces hardware cost and installation complexity.
Description
Technical Field
The present invention relates to the field of personnel management, and more particularly, to an off-position detection method and system, and a computer storage medium.
Background
In some special places, the state of each person within the monitoring range must be accurately identified, and it must be determined whether a monitored person has left his or her position. Existing off-position detection methods generally rely on hardware: a detection system built from pressure sensors, infrared sensors and the like reports whether the person is still at the position. The general off-position detection and recognition process is as follows: first, target state data are acquired through a sensor, where the target may be a position or a person; the acquired data are then analyzed by software; finally, it is judged whether off-position behavior has occurred. Although off-position behavior can be inferred indirectly by installing infrared sensors or similar hardware at the position, this approach depends mainly on hardware judgment: installation is difficult, high sensor accuracy is required, and sensors mounted directly on the position age over time with the environment. Moreover, the sensors at one position can only judge whether a single target person is off-position, so judging the off-position behavior of several persons requires correspondingly more hardware, at high cost.
Another existing off-position detection method uses an image sensor. Such image-based methods are rarely applied, because their recognition rate is low and their miss rate is high compared with detection methods that install hardware at the position.
Disclosure of Invention
In order to solve the above technical problems, the invention provides an off-position detection method, an off-position detection system and a computer storage medium, which can acquire the movement track of a target person in real time, judge whether several persons have left their positions at the same time, and reduce hardware cost and installation complexity.
The off-position detection method according to the embodiment of the first aspect of the invention comprises the following steps: S1, acquiring time-series image data of the human bodies in a given area; S2, processing the image data to obtain a detection frame uniquely corresponding to each human body in the area; S3, tracking each human body in the area according to its detection frame; S4, calculating a track point for each frame from the human body tracking result to generate a human body displacement track; and S5, judging whether a human body has left the position according to whether its displacement track crosses the edge of the position.
Therefore, the off-position detection method of the embodiment of the invention combines video images captured in an area with human body tracking to analyze whether the persons in that area have left a given position. This not only simplifies the hardware system needed for off-position detection and reduces hardware development and assembly costs, but also widens the monitoring range: the displacement tracks of several persons can be obtained from a single image stream, so the off-position behavior of multiple persons can be judged simultaneously.
The off-position detection method according to the embodiment of the invention can also have the following additional technical features:
According to some embodiments of the present invention, in step S1, the background acquires the time-series image data of the human bodies in the area through a camera installed in that area.
According to some embodiments of the invention, step S2 includes: S21, inputting the image data into a detection network to obtain human body detection frames; and S22, removing redundant human body detection frames through NMS to obtain a detection frame uniquely corresponding to each human body in the area.
According to some embodiments of the invention, step S3 includes: S31, inputting the detection frames into a feature extraction network to extract the features corresponding to each detection frame; S32, performing feature matching on the human bodies by a pedestrian re-identification method and linking the detection frames of each human body across consecutive frames to obtain a time-ordered human body detection frame sequence; S33, calculating the intersection-over-union (IoU) of a detection frame in the sequence with the tracking track frame of the previous frame: if the IoU is larger than a set threshold, proceeding to cosine matching between the detection frame and the tracking track frame, and if it is smaller, passing the unmatched detection frame to IoU matching against the tracking track frames; S34, performing cosine matching between the detection frame and the tracking track frame of the previous frame: if the matching succeeds, the tracking result is a successfully tracked track, which is sent to track updating, and if it fails, passing the unmatched detection frame to IoU matching; and S35, performing IoU matching between the unmatched detection frames and the tracking track frames: if the matching succeeds, the tracking result is a successfully tracked track, and if it fails, the tracking result is a lost track or an unmatched detection frame, which is sent to track updating.
According to some embodiments of the present invention, in step S35, if the tracking result is an unmatched detection frame, the following steps are performed: S351, treating the unmatched detection frame as a tentative track, which becomes a confirmed track after a fixed number of consecutive successful tracking steps, and becomes a track to be deleted if it is lost during tracking; S352, changing a confirmed track into a track to be deleted if its number of consecutive losses exceeds a fixed value; and S353, subsequently deleting the tracks to be deleted.
According to some embodiments of the invention, in step S4 the track point is the center point of the lower edge of the detection frame, and step S5 includes: S51, obtaining the latest n track points from the tracking result; and S52, if the number of track points n is smaller than a set threshold, considering that the target has left the position and judging it off-position.
According to some embodiments of the invention, step S5 further comprises: S53, if the number of track points n is larger than the set threshold, counting the number of points a inside the position and the number of points b outside it, then judging whether a and b meet the set values a = 0.2n and b = 0.2n; if they do, considering that the target has left the position and judging it off-position.
According to some embodiments of the invention, step S5 further comprises: S54, judging whether the points at pixel distance width/m on both sides of the center point of the lower edge of the latest track point all lie outside the position, where width is the pixel width of the lower edge of the detection frame; if they do, considering that the target has left the position and judging it off-position, with n = 10 and m = 6-8.
An off-position detection system according to an embodiment of the second aspect of the present invention includes: an image acquisition module, which acquires time-series image data of the human bodies in a given area; an image processing module, which processes the image data to obtain a detection frame uniquely corresponding to each human body in the area; a human body tracking module, which tracks the human bodies in the area; a track obtaining module, which calculates a track point for each frame from the human body tracking result to generate a human body displacement track; and a judging module, which judges whether a human body is off-position according to whether its displacement track crosses the position edge.
In a third aspect, an embodiment of the present invention provides a computer storage medium comprising one or more computer instructions which, when executed, implement the method according to the above embodiments.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of a method of off-position detection according to an embodiment of the present invention;
FIG. 2 is a flowchart of human body tracking in the off-position detection method according to the embodiment of the present invention;
FIG. 3 is a flowchart of human tracking track updating in the off-position detection method according to an embodiment of the present invention;
FIG. 4 is a flow chart of the off-position determination of the off-position detection method according to the embodiment of the present invention;
FIG. 5 is a schematic diagram of an electronic device according to an embodiment of the invention.
Reference numerals:
an electronic device 300;
a memory 310; an operating system 311; an application 312;
a processor 320; a network interface 330; an input device 340; a hard disk 350; a display device 360.
Detailed Description
The following detailed description of embodiments of the present invention will be made with reference to the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
First, the off-position detection method according to an embodiment of the present invention will be described in detail with reference to the drawings.
As shown in fig. 1, the off-position detection method according to the embodiment of the present invention includes the following steps:
S1, acquiring time-series image data of the human bodies in a given area;
S2, processing the image data to obtain a detection frame uniquely corresponding to each human body in the area;
S3, tracking each human body in the area according to its detection frame;
S4, calculating a track point for each frame from the human body tracking result to generate a human body displacement track;
and S5, judging whether a human body has left a position according to whether its displacement track crosses the edge of that position.
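Under the stated assumptions, the five steps above can be sketched as the following skeleton. The callables `detect`, `track`, `to_track_point` and `crossed_edge` are hypothetical placeholders for the detection network, the tracker, the track-point rule and the edge-crossing test; they are not part of the patent's implementation:

```python
def off_position_pipeline(frames, position, detect, track,
                          to_track_point, crossed_edge):
    """Sketch of steps S1-S5: frames in, per-person off-position flags out.

    `detect(frame)` returns one detection box per person (S2), `track(boxes)`
    returns {person_id: box} (S3), `to_track_point(box)` maps a box to a
    track point (S4), and `crossed_edge(traj, position)` is the S5 test.
    All four are assumed components standing in for the networks and rules
    described in the text.
    """
    trajectories = {}                     # person id -> list of track points
    for frame in frames:                  # S1: time-series image data
        boxes = detect(frame)             # S2: one detection box per person
        tracked = track(boxes)            # S3: associate boxes with persons
        for pid, box in tracked.items():  # S4: accumulate displacement track
            trajectories.setdefault(pid, []).append(to_track_point(box))
    # S5: a person is off-position if the trajectory crosses the position edge
    return {pid: crossed_edge(traj, position)
            for pid, traj in trajectories.items()}
```

A caller would plug in a real detector and tracker; the skeleton only fixes the data flow between the five steps.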
First of all, it should be noted that the off-position detection method according to the embodiment of the present invention may be used to detect whether a person in a given area has left a given position. For example, the area may be a guard house in which the state of each person within the monitoring range must be accurately identified and it must be determined whether a monitored person has left his or her position, and the position may be a bed or a seat in the guard house.
Specifically, when the off-position detection method according to the embodiment of the present invention is used, image data are first obtained for the area in which off-position detection is required. The image data are time-series data covering the activity of the human bodies in the area, for example a video over a certain period, and there may be one or more human bodies in the area. The image data are then processed and the detection frame uniquely corresponding to each person is screened out; for example, the detection frames can be obtained by performing human body recognition on the image data and filtering the results several times. Each human body in the area is then tracked according to its detection frame, a track point is calculated for each person on every image frame from the tracking result, and the track points are connected in series to generate the human body displacement track; the tracking may follow several target persons through one image stream, or apply pedestrian re-identification across several image streams. Finally, whether each person has left the position is judged according to whether that person's displacement track crosses the edge of the position.
Therefore, the off-position detection method of the embodiment of the invention combines video images captured in an area with human body tracking to analyze whether the persons in that area have left a given position. This not only simplifies the hardware system needed for off-position detection and reduces hardware development and assembly costs, but also widens the monitoring range: the displacement tracks of several persons can be obtained from a single image stream, so the off-position behavior of multiple persons can be judged simultaneously.
According to an embodiment of the present invention, in step S1 the background acquires the time-series image data of the human bodies in the area through a camera installed in that area.
Specifically, several individual cameras may be arranged in the area. Each camera acquires time-series image data of the human bodies within its own range, and each camera can at the same time screen out the human bodies that meet the set requirements.
According to an embodiment of the present invention, step S2 includes:
S21, inputting the image data into a detection network to obtain human body detection frames;
and S22, removing redundant human body detection frames through NMS to obtain a detection frame uniquely corresponding to each human body in the area.
Specifically, the time-series image data acquired for the area are input into a detection network, for example the deep learning network Faster R-CNN, which outputs a number of human body detection frames. Redundant frames are then removed through NMS (non-maximum suppression): the obtained human body detection frames are sorted by the class probability of the classifier, i.e. the probability of belonging to a monitored person, from small to large; the intersection-over-union between the sorted frames is computed; a threshold is set, frames whose overlap exceeds the threshold are eliminated, and frames below the threshold are retained; and the retained frames are compared again. After several rounds of comparison, a detection frame uniquely corresponding to each human body in the area is obtained. Through these steps, data can be acquired in real time and image data of higher precision can be obtained.
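As an illustration of step S22, the following is a minimal sketch of greedy non-maximum suppression over axis-aligned boxes `(x1, y1, x2, y2)`. The 0.5 overlap threshold is an illustrative default, not a value taken from the patent:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring remaining box and
    drop every other box whose IoU with it exceeds `thresh` (S22)."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep
```

The result is one surviving detection frame per person, which is what step S3 consumes.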
According to an embodiment of the present invention, step S3 includes:
S31, inputting the detection frames into a feature extraction network to extract the features corresponding to each detection frame;
S32, performing feature matching on the human bodies by a pedestrian re-identification method and linking the detection frames of each human body across consecutive frames to obtain a time-ordered human body detection frame sequence;
S33, calculating the intersection-over-union (IoU) of a detection frame in the sequence with the tracking track frame of the previous frame: if the IoU is larger than a set threshold, proceeding to cosine matching between the detection frame and the tracking track frame; if it is smaller, passing the unmatched detection frame to IoU matching against the tracking track frames;
S34, performing cosine matching between the detection frame and the tracking track frame of the previous frame: if the matching succeeds, the tracking result is a successfully tracked track, which is sent to track updating; if it fails, passing the unmatched detection frame to IoU matching;
and S35, performing IoU matching between the unmatched detection frames and the tracking track frames: if the matching succeeds, the tracking result is a successfully tracked track; if it fails, the tracking result is a lost track or an unmatched detection frame, which is sent to track updating.
In other words, as shown in fig. 2, the human body tracking of the human body in the area may be performed by the following steps:
first, the detection frames corresponding to the human body obtained in step S22 are input to the feature extraction network as input, and features corresponding to the detection frames are obtained by extraction, for example: the feature extraction network can be a deep convolution network VGG-Net model, and the extracted features corresponding to the detection frames can be features of wearing, posture, hair style and the like.
Then the human body detection frames are matched according to such pedestrian features as clothing, posture and hair style, the pedestrians are identified and retrieved across cameras and across scenes, and the detection frames of each human body in consecutive frames are linked to obtain a time-ordered human body detection frame sequence.
Then the intersection-over-union between a detection frame in the time-ordered sequence and the tracking track frame of the previous frame is used as a proximity measure. If the IoU is larger than the set threshold, the detection frame and the tracking track frame are close to each other, and cosine matching between them continues; if the IoU is smaller than the set threshold, the detection frame and the tracking track frame are far apart, no cosine matching follows, and the unmatched detection frame instead undergoes IoU matching against the tracking track frames.
It should be noted that a tracking track frame is itself a detection frame: it is a detection frame from the previous frame that already belongs to some track, and it is therefore called a tracking track frame, or simply a tracking frame. The IoU between a detection frame and a tracking track frame ranges from 0 to 1, where 0 means the frames are farthest apart (no overlap) and 1 means they coincide.
Cosine matching is then performed between the detection frame and the tracking track frame of the previous frame. If the matching succeeds, the tracking result is a successfully tracked track and is sent to track updating; if it fails, the unmatched detection frame undergoes IoU matching against the tracking track frames.
Finally, IoU matching is performed between the unmatched detection frames and the tracking track frames. If the matching succeeds, the tracking result is a successfully tracked track; if it fails, the tracking result is a lost track or an unmatched detection frame and is sent to track updating.
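For a single detection/track pair, the S33-S35 matching cascade might be sketched as follows. The gate values 0.3, 0.5 and 0.8 are illustrative assumptions, and the real method would run the cascade over all pairs as an assignment problem rather than one pair at a time:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two appearance feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def match_detection(det_box, det_feat, track_box, track_feat,
                    iou_gate=0.3, fallback_gate=0.5, cos_gate=0.8):
    """Sketch of the S33-S35 cascade: an IoU gate, then cosine matching on
    appearance features (S34), then a plain IoU fallback (S35)."""
    def iou(a, b):  # intersection-over-union of (x1, y1, x2, y2) boxes
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    overlap = iou(det_box, track_box)
    # S33 gate + S34: close boxes are confirmed by appearance matching
    if overlap > iou_gate and cosine_similarity(det_feat, track_feat) > cos_gate:
        return "matched_appearance"
    # S35 fallback: a still-unmatched detection can match on IoU alone
    if overlap > fallback_gate:
        return "matched_iou"
    return "unmatched"          # lost track or unmatched detection frame
```

An "unmatched" result is what step S351 then turns into a tentative track.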
Tracking the human body through the above steps yields a human body track that is not limited to the external attributes of the human body but also exploits its high-dimensional features, and the generated track covers all the activity of the human body in the relevant area. The information is therefore more comprehensive than a track obtained with existing infrared sensors, and the recognition rate is greatly improved compared with tracks obtained with existing image-sensor methods.
Further, according to an embodiment of the present invention, in step S35, if the tracking result is an unmatched detection frame, the following steps are performed:
S351, treating the unmatched detection frame as a tentative track, which becomes a confirmed track after a fixed number of consecutive successful tracking steps, and becomes a track to be deleted if it is lost during tracking;
S352, changing a confirmed track into a track to be deleted if its number of consecutive losses exceeds a fixed value;
and S353, subsequently deleting the tracks to be deleted.
Specifically, as shown in fig. 3, the unmatched detection frame is treated as a tentative track and tracking of it continues. After a fixed number of consecutive successful tracking steps the tentative track becomes a confirmed track; if it is lost during tracking, it becomes a track to be deleted. A confirmed track likewise becomes a track to be deleted once its number of consecutive losses exceeds a fixed value.
It should be noted that both confirmed tracks and tentative tracks may change into the to-be-deleted state, while a track to be deleted cannot change into the other two states and is subsequently deleted. Tracking the unmatched detection frames again increases tracking precision and reduces the probability of losing the target.
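The S351-S353 track lifecycle can be sketched as a small state machine. The counts `confirm_hits` and `max_misses` stand in for the "fixed number of times" and "fixed value" that the text leaves unspecified:

```python
class Track:
    """Sketch of the S351-S353 lifecycle: a track starts 'tentative', is
    'confirmed' after `confirm_hits` consecutive matches, and is marked
    'deleted' when lost (immediately while tentative, or after more than
    `max_misses` consecutive losses while confirmed)."""

    def __init__(self, confirm_hits=3, max_misses=3):
        self.state = "tentative"
        self.hits = 0
        self.misses = 0
        self.confirm_hits = confirm_hits
        self.max_misses = max_misses

    def mark_matched(self):
        """Called when the track matched a detection in the current frame."""
        self.hits += 1
        self.misses = 0
        if self.state == "tentative" and self.hits >= self.confirm_hits:
            self.state = "confirmed"          # S351: tentative -> confirmed

    def mark_missed(self):
        """Called when the track matched nothing in the current frame."""
        self.misses += 1
        if self.state == "tentative":
            self.state = "deleted"            # S351: tentative lost -> deleted
        elif self.misses > self.max_misses:
            self.state = "deleted"            # S352: confirmed lost too long
```

A `"deleted"` track is never resurrected, matching the statement that a track to be deleted cannot change into the other two states.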
According to an embodiment of the present invention, in step S4 the track point is the center point of the lower edge of the detection frame, and step S5 includes:
S51, obtaining the latest n track points from the tracking result;
and S52, if the number of track points n is smaller than a set threshold, considering that the target has left the position and judging it off-position.
According to an embodiment of the present invention, step S5 further includes:
and S53, if the number of track points n is larger than the set threshold, counting the number of points a inside the position and the number of points b outside it, then judging whether a and b meet the set values a = 0.2n and b = 0.2n; if they do, considering that the target has left the position and judging it off-position.
According to an embodiment of the present invention, step S5 further includes:
and S54, judging whether the points at pixel distance width/m on both sides of the center point of the lower edge of the latest track point all lie outside the position, where width is the pixel width of the lower edge of the detection frame; if they do, considering that the target has left the position and judging it off-position, with n = 10 and m = 6-8.
Specifically, as shown in fig. 4, the latest n track points of the monitored person are obtained from the tracking result, where a track point is the center point of the lower edge of the human body detection frame. A threshold is set, and if the number of track points n is smaller than the set threshold, the monitored person is considered to have left the position and is judged off-position. If n is larger than the set threshold, the number of points a inside the position and the number of points b outside it are counted, and it is judged whether a and b meet the set values; if they do, the monitored person is determined to have moved from inside the position to outside it and is judged off-position. Meanwhile, when a monitored person sits at the edge of the position, the track points may jitter around the edge and cause a false off-position judgment. A further condition is therefore added: the distance between the track point of the current frame of the candidate off-position track and the position edge must be larger than width/m pixels, i.e. the points at pixel distance width/m on both sides of the center point of the lower edge of the latest track point must all lie outside the position. If this is satisfied, the monitored person is considered to have left the position and is judged off-position. Here width is the pixel width of the lower edge of the detection frame, and m is a value obtained from actual tracking tests, generally 6-8.
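The judgment just described might be sketched as follows, assuming for brevity a one-dimensional position edge so that each track point reduces to the x-coordinate of the lower-edge center; that simplification, and the `inside` predicate, are illustrative, while n = 10, the a = b = 0.2n set values and m in 6-8 follow the text:

```python
def is_off_position(points, inside, box_width, n_min=10, frac=0.2, m=7):
    """Sketch of the S51-S54 off-position judgment on the latest n points.

    `points` holds x-coordinates of the lower-edge center of the detection
    frame; `inside(p)` tests whether a coordinate lies inside the position.
    """
    if len(points) < n_min:
        return True                        # S52: too few points -> off-position
    recent = points[-n_min:]               # S51: latest n track points
    a = sum(1 for p in recent if inside(p))
    b = n_min - a
    if a < frac * n_min or b < frac * n_min:
        return False                       # S53: track did not cross the edge
    # S54: the points width/m away on both sides of the latest track point
    # must lie outside the position, suppressing jitter at the edge
    margin = box_width / m
    return not inside(recent[-1] - margin) and not inside(recent[-1] + margin)
```

The last test is what prevents a person sitting on the edge of the position, whose track point oscillates around the edge, from being misjudged as off-position.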
In summary, the off-position detection method according to the present invention has at least the following advantages:
(1) Off-position analysis requires only video images obtained from a camera, which simplifies the hardware system for off-position detection and widens the monitoring range.
(2) In scenes with many people, the video analysis method combined with human body tracking obtains the movement track of each target person in real time, so the off-position behavior of multiple people can be judged simultaneously.
(3) Adding human body trajectory tracking keeps the off-position judgment algorithm simple and accurate: off-position is judged only when a person moves from inside the position to outside it, which effectively avoids the misjudgment caused by a person outside the position merely approaching it.
The off-position detection system comprises an image acquisition module, an image processing module, a human body tracking module, a track acquisition module and a judgment module.
Specifically, the image acquisition module acquires time-series-based image data of human bodies in a certain region; the image processing module processes the image data to obtain detection frames in one-to-one correspondence with each human body in the region; the human body tracking module tracks the human bodies in the region; the track acquisition module calculates the track point of each frame from the human body tracking result to generate a human body displacement track; and the judgment module judges whether a human body has left its position according to whether its displacement track crosses the position edge.
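As a rough sketch of how the track acquisition and judgment modules could be wired together (the image acquisition, image processing, and human body tracking modules are assumed to run upstream and deliver per-frame `{track_id: box}` dictionaries; the class and method names are hypothetical, and the seat region is simplified to an axis-aligned rectangle):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]    # detection frame: x1, y1, x2, y2
Point = Tuple[int, int]            # track point: bottom-center of a frame


@dataclass
class OffPositionSystem:
    seat: Box                                        # seat region (axis-aligned here)
    tracks: Dict[int, List[Point]] = field(default_factory=dict)

    def update(self, tracked_boxes: Dict[int, Box]) -> Dict[int, bool]:
        """Track acquisition module: append each track's bottom-center
        point, then run the judgment module over every accumulated track."""
        for tid, (x1, y1, x2, y2) in tracked_boxes.items():
            self.tracks.setdefault(tid, []).append(((x1 + x2) // 2, y2))
        return {tid: self.off_position(pts) for tid, pts in self.tracks.items()}

    def off_position(self, pts: List[Point]) -> bool:
        """Judgment module (simplified): off-position once the displacement
        track crosses the seat edge -- some point inside, latest point outside."""
        sx1, sy1, sx2, sy2 = self.seat
        inside = [sx1 <= x <= sx2 and sy1 <= y <= sy2 for (x, y) in pts]
        return any(inside) and not inside[-1]
```

Because every track is judged on each frame, the off-position states of multiple people are obtained simultaneously from one image stream, matching advantage (2) above.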
Therefore, according to the off-position detection system of the embodiment of the present invention, off-position analysis is performed using only video images, which simplifies the hardware system for off-position detection and widens the monitoring range; the displacement tracks of multiple people can be obtained from one stream of image data, so the off-position behavior of multiple people can be judged; and the addition of human body trajectory tracking keeps the off-position judgment algorithm simple and accurate.
The functions of the modules of the off-position detection system according to the embodiment of the present invention have been described in detail in the above method embodiments and are therefore not repeated here.
In addition, the present invention also provides a computer storage medium comprising one or more computer instructions which, when executed, implement any of the above-described off-position detection methods.
That is, the computer storage medium stores a computer program that, when executed by a processor, causes the processor to execute any of the above-described off-position detection methods.
As shown in fig. 5, an embodiment of the present invention provides an electronic device 300, which includes a memory 310 and a processor 320, where the memory 310 is configured to store one or more computer instructions, and the processor 320 is configured to call and execute the one or more computer instructions, so as to implement any one of the methods described above.
That is, the electronic device 300 includes: a processor 320 and a memory 310, in which memory 310 computer program instructions are stored, wherein the computer program instructions, when executed by the processor, cause the processor 320 to perform any of the methods described above.
Further, as shown in fig. 5, the electronic device 300 further includes a network interface 330, an input device 340, a hard disk 350, and a display device 360.
The various interfaces and devices described above may be interconnected by a bus architecture, which may include any number of interconnected buses and bridges. The bus architecture couples together various circuits, in particular one or more Central Processing Units (CPUs) represented by processor 320 and one or more memories represented by memory 310, and may also connect various other circuits such as peripherals, voltage regulators, and power management circuits. It will be appreciated that the bus architecture is used to enable communication among these components. In addition to a data bus, the bus architecture includes a power bus, a control bus, and a status signal bus, all of which are well known in the art and therefore not described in detail herein.
The network interface 330 may be connected to a network (e.g., the internet, a local area network, etc.), and may obtain relevant data from the network and store the relevant data in the hard disk 350.
The input device 340 may receive various commands input by an operator and send the commands to the processor 320 for execution. The input device 340 may include a keyboard or a pointing device (e.g., a mouse, a trackball, a touch pad, a touch screen, or the like).
The display device 360 may display the result of the instructions executed by the processor 320.
The memory 310 is used for storing programs and data necessary for operating the operating system, and data such as intermediate results in the calculation process of the processor 320.
It will be appreciated that memory 310 in embodiments of the invention may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM), which acts as external cache memory. The memory 310 of the apparatus and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 310 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof: an operating system 311 and application programs 312.
The operating system 311 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application programs 312 include various application programs, such as a Browser (Browser), and are used for implementing various application services. A program implementing methods of embodiments of the present invention may be included in application 312.
The method disclosed by the above embodiments of the present invention can be applied to the processor 320, or implemented by the processor 320. The processor 320 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 320. The processor 320 may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in RAM, flash memory, ROM, PROM, EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory 310, and the processor 320 reads the information in the memory 310 and completes the steps of the method in combination with the hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
In particular, the processor 320 is also configured to read the computer program and execute any of the methods described above.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be physically included alone, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) to execute some steps of the methods according to various embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (10)
1. An off-position detection method, characterized by comprising the following steps:
S1, acquiring time-series-based image data of a human body in a certain area;
S2, processing the image data to obtain detection frames corresponding to each human body in the area one by one;
S3, tracking the human body in the area according to the detection frames;
S4, calculating track points of each frame according to the human body tracking result to generate a human body displacement track;
and S5, judging whether the human body leaves the position according to whether the human body displacement track crosses the edge of the position.
2. The method according to claim 1, wherein in step S1, a backend acquires the time-series-based image data of the human body in the area through a camera installed in the area.
3. The method according to claim 1, wherein step S2 includes:
S21, inputting the image data into a detection network to obtain human body detection frames;
and S22, removing redundant human body detection frames through non-maximum suppression (NMS) to obtain the detection frames in one-to-one correspondence with each human body in the area.
4. The method according to claim 1, wherein step S3 includes:
S31, inputting the detection frames into a feature extraction network to obtain the feature corresponding to each detection frame;
S32, performing feature matching on the human bodies by a pedestrian re-identification method, and linking the detection frames of each human body across consecutive frames to obtain a time-ordered human body detection frame sequence;
S33, calculating the intersection-over-union (IoU) between a detection frame in the human body detection frame sequence and the tracking track frame of the previous frame; if the IoU is larger than a set threshold, proceeding to cosine matching between the detection frame and the previous tracking track frame; if it is smaller than the set threshold, performing IoU matching between the unmatched detection frame and the tracking track frames;
S34, performing cosine matching between the detection frame and the previous tracking track frame; if the matching succeeds, the tracking result is a successfully tracked track, which is sent to track updating; if the matching fails, performing IoU matching between the unmatched detection frame and the tracking track frames;
and S35, performing IoU matching between the unmatched detection frames and the tracking track frames; if the matching succeeds, the tracking result is a successfully tracked track; if the matching fails, the tracking result is a lost track or an unmatched detection frame, which is sent to track updating.
5. The method according to claim 4, wherein in step S35, if the tracking result is an unmatched detection frame, the following steps are executed:
S351, treating the unmatched detection frame as a tentative track; if tracking succeeds continuously for a fixed number of frames, changing the tentative track to a confirmed track; if the track is lost during tracking, changing it to a to-be-deleted track;
S352, changing a confirmed track to a to-be-deleted track if the number of consecutive losses exceeds a fixed value;
and S353, subsequently deleting the to-be-deleted track.
6. The method according to claim 1, wherein in step S4, the track point is a center point of a lower frame of the detection frame, and step S5 includes:
S51, obtaining the latest n track points from the tracking result;
and S52, if the number n of track points is smaller than the set threshold, the target is considered to have left the position, and an off-position judgment is made.
7. The method according to claim 6, wherein step S5 further comprises:
and S53, if the number n of track points is larger than the set threshold, counting the number a of track points inside the position and the number b of track points outside the position, and judging whether a and b meet set values; if they do, the target is considered to have left the position and an off-position judgment is made, wherein a is 0.2n and b is 0.2n.
8. The method according to claim 7, wherein step S5 further comprises:
and S54, judging whether the points at a pixel distance of width/m on both sides of the center point of the lower edge of the latest track point's detection frame both lie outside the position, wherein width is the pixel width of the lower edge of the detection frame; if so, the target is considered to have left the position and an off-position judgment is made, wherein n is 10 and m is 6 to 8.
9. An off-position detection system, comprising:
an image acquisition module, which acquires time-series-based image data of a human body in a certain area;
the image processing module is used for processing the image data to obtain detection frames which are respectively in one-to-one correspondence with each human body in the area;
the human body tracking module is used for tracking the human body in the region;
the track obtaining module calculates track points of each frame according to a human body tracking result to generate a human body displacement track;
and the judging module judges whether the human body is out of position according to whether the human body displacement track crosses the position edge.
10. A computer storage medium comprising one or more computer instructions which, when executed, implement the method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011016989.7A CN112183304B (en) | 2020-09-24 | 2020-09-24 | Off-position detection method, off-position detection system and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112183304A true CN112183304A (en) | 2021-01-05 |
CN112183304B CN112183304B (en) | 2024-07-16 |
Family
ID=73956619
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011016989.7A Active CN112183304B (en) | 2020-09-24 | 2020-09-24 | Off-position detection method, off-position detection system and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112183304B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103425967A (en) * | 2013-07-21 | 2013-12-04 | 浙江大学 | Pedestrian flow monitoring method based on pedestrian detection and tracking |
CN104137155A (en) * | 2012-02-29 | 2014-11-05 | 皇家飞利浦有限公司 | Apparatus, method and system for monitoring presence of persons in an area |
CN107423708A (en) * | 2017-07-25 | 2017-12-01 | 成都通甲优博科技有限责任公司 | The method and its device of pedestrian's flow of the people in a kind of determination video |
KR20180068435A (en) * | 2016-12-14 | 2018-06-22 | 케이에스아이 주식회사 | Visual observation system and visual observation method using the same |
CN109522854A (en) * | 2018-11-22 | 2019-03-26 | 广州众聚智能科技有限公司 | A kind of pedestrian traffic statistical method based on deep learning and multiple target tracking |
CN109903312A (en) * | 2019-01-25 | 2019-06-18 | 北京工业大学 | A kind of football sportsman based on video multi-target tracking runs distance statistics method |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113392776A (en) * | 2021-06-17 | 2021-09-14 | 深圳市千隼科技有限公司 | Seat leaving behavior detection method and storage device combining seat information and machine vision |
CN113392776B (en) * | 2021-06-17 | 2022-07-12 | 深圳日海物联技术有限公司 | Seat leaving behavior detection method and storage device combining seat information and machine vision |
WO2023000856A1 (en) * | 2021-07-23 | 2023-01-26 | 上海商汤智能科技有限公司 | Abnormal event detection method and apparatus, electronic device, storage medium, and computer program product |
CN116309692A (en) * | 2022-09-08 | 2023-06-23 | 广东省机场管理集团有限公司工程建设指挥部 | Method, device and medium for binding airport security inspection personal packages based on deep learning |
CN116309692B (en) * | 2022-09-08 | 2023-10-20 | 广东省机场管理集团有限公司工程建设指挥部 | Method, device and medium for binding airport security inspection personal packages based on deep learning |
CN116862980A (en) * | 2023-06-12 | 2023-10-10 | 上海玉贲智能科技有限公司 | Target detection frame position optimization correction method, system, medium and terminal for image edge |
CN116862980B (en) * | 2023-06-12 | 2024-01-23 | 上海玉贲智能科技有限公司 | Target detection frame position optimization correction method, system, medium and terminal for image edge |
Also Published As
Publication number | Publication date |
---|---|
CN112183304B (en) | 2024-07-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||