CN112183304B - Off-position detection method, off-position detection system and computer storage medium - Google Patents

Off-position detection method, off-position detection system and computer storage medium

Info

Publication number
CN112183304B
Authority
CN
China
Prior art keywords
track
human body
frame
tracking
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011016989.7A
Other languages
Chinese (zh)
Other versions
CN112183304A (en)
Inventor
冯家辉
王祥雪
林焕凯
董振江
陈利军
黄仝宇
程庆
谭焕新
刘双广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gosuncn Technology Group Co Ltd
Original Assignee
Gosuncn Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gosuncn Technology Group Co Ltd filed Critical Gosuncn Technology Group Co Ltd
Priority to CN202011016989.7A priority Critical patent/CN112183304B/en
Publication of CN112183304A publication Critical patent/CN112183304A/en
Application granted granted Critical
Publication of CN112183304B publication Critical patent/CN112183304B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an off-position detection method, an off-position detection system and a computer storage medium. The off-position detection method comprises the following steps: S1, acquiring time-series image data of human bodies in a certain area; S2, processing the image data to obtain a detection frame uniquely corresponding to each human body in the area; S3, tracking the human bodies in the area according to the detection frames; S4, calculating a track point for each frame from the tracking result to generate a human body displacement track; S5, judging whether a human body has left a position according to whether its displacement track crosses the edge of that position. The method according to the embodiment of the invention performs off-position detection from images and judges off-position behavior from the human body track. It has a good recognition effect, can obtain the movement track of a target person in real time, can judge whether several persons have left their positions at the same time, and reduces hardware cost and installation complexity.

Description

Off-position detection method, off-position detection system and computer storage medium
Technical Field
The present invention relates to the field of personnel management, and more particularly, to an off-position detection method, an off-position detection system, and a computer storage medium.
Background
In some special places, the state of personnel within the monitoring range must be accurately identified to determine whether a monitored person has left his or her position. Existing off-position detection methods generally rely on hardware: a detection system is built from pressure sensors, infrared sensors and the like to determine whether a person has left the position. The general off-position detection and recognition flow is as follows: first, target state data are collected by a sensor, where the target may be the position or the person; then the collected data are analyzed by software; finally, whether an off-position event has occurred is judged. Although off-position behavior can be inferred indirectly by mounting hardware such as an infrared sensor on the position, this approach depends mainly on the hardware: installation is difficult, the accuracy requirements on the sensor are high, and a sensor mounted directly on the position ages with time and environment. Moreover, the hardware on one position can only judge whether a single target person has left; judging several persons requires several sensors, so the cost is high.
Another existing off-position detection method uses an image sensor. It is rarely applied because, compared with detection methods that use hardware installed on the position, image-based off-position detection has a low recognition rate and a high miss rate.
Disclosure of Invention
To solve the above technical problems, the invention provides an off-position detection method, an off-position detection system and a computer storage medium that can obtain the movement track of a target person in real time, can judge whether several persons have left their positions at the same time, and reduce hardware cost and installation complexity.
According to an embodiment of the first aspect of the present invention, the off-position detection method includes the following steps: S1, acquiring time-series image data of human bodies in a certain area; S2, processing the image data to obtain a detection frame uniquely corresponding to each human body in the area; S3, tracking the human bodies in the area according to the detection frames; S4, calculating a track point for each frame from the human body tracking result to generate a human body displacement track; S5, judging whether a human body has left a position according to whether its displacement track crosses the edge of that position.
Therefore, according to the off-position detection method of the embodiment of the invention, the video images obtained in an area are combined with a human body tracking method to analyze whether personnel in the area have left a position. This not only simplifies the hardware system for off-position detection and reduces hardware development and assembly cost, but also enlarges the monitoring range: several persons can be covered by one image stream and judged at the same time. In addition, because the method tracks the human body track, the off-position judgment is simpler and more accurate.
The off-position detection method according to the embodiment of the invention may also have the following additional technical features:
According to some embodiments of the invention, in step S1, the time-series image data of the human bodies in the area are acquired in the background by a camera installed in the area.
According to some embodiments of the invention, step S2 comprises: S21, inputting the image data into a detection network to obtain human body detection frames; S22, removing redundant human body detection frames through NMS (non-maximum suppression) to obtain the detection frame uniquely corresponding to each human body in the area.
According to some embodiments of the invention, step S3 comprises: S31, inputting the detection frames and extracting the corresponding features of each detection frame through a feature extraction network; S32, performing feature matching on the human bodies by a pedestrian re-identification method and connecting the detection frames of each human body in consecutive frames to obtain a time-ordered human body detection frame image sequence; S33, calculating the intersection-over-union (IoU) of a detection frame in the sequence with the tracking track frame of the previous frame: if the IoU is larger than a set threshold, continuing with cosine matching between the detection frame and the tracking track frame of the previous frame; if the IoU is smaller than the set threshold, performing IoU matching between the unmatched detection frame and the tracking track frames; S34, performing cosine matching between the detection frame and the tracking track frame of the previous frame: if the matching succeeds, sending the tracking result to track updating; if it fails, performing IoU matching between the unmatched detection frame and the tracking track frames; S35, performing IoU matching between the unmatched detection frame and the tracking track frames: if the matching succeeds, the tracking result is a successfully tracked track; if it fails, the tracking result is a lost track or an unmatched detection frame and is sent to track updating.
According to some embodiments of the present invention, in step S35, if the tracking result is an unmatched detection frame, the following steps are performed: S351, taking the unmatched detection frame as a pending-state track; the pending-state track becomes a confirmed-state track after a fixed number of consecutive successful trackings, and becomes a to-be-deleted-state track if tracking is lost during tracking; S352, a confirmed-state track becomes a to-be-deleted-state track if the number of consecutive losses exceeds a fixed value; S353, the to-be-deleted-state track is subsequently deleted.
According to some embodiments of the invention, in step S4, the track point is the center point of the lower edge of the detection frame, and step S5 comprises: S51, obtaining the latest n track points from the tracking results; S52, if the number of track points n is smaller than a set threshold, considering that the target has left the position and judging an off-position event.
According to some embodiments of the invention, step S5 further comprises: S53, if the number of track points n is larger than the set threshold, counting the number of points a inside the position and the number of points b outside the position and judging whether a and b meet the set values; if not, judging that the target has left the position, wherein a = 0.2n and b = 0.2n.
According to some embodiments of the invention, step S5 further comprises: S54, judging whether the points located width/m pixels to either side of the center point of the lower edge of the latest track point are outside the position, wherein width is the pixel width of the lower edge of the detection frame; if this is satisfied, judging that the target has left the position, wherein n = 10 and m is 6 to 8.
An off-position detection system according to an embodiment of the second aspect of the present invention includes: an image acquisition module that acquires time-series image data of human bodies in a certain area; an image processing module that processes the image data to obtain a detection frame uniquely corresponding to each human body in the area; a human body tracking module that tracks the human bodies in the area; a track acquisition module that calculates a track point for each frame from the human body tracking result to generate a human body displacement track; and a judging module that judges whether a human body has left a position according to whether its displacement track crosses the position edge.
In a third aspect, embodiments of the present invention provide a computer storage medium comprising one or more computer instructions which, when executed, implement a method as described in the above embodiments.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flow chart of an off-position detection method according to an embodiment of the invention;
FIG. 2 is a flow chart of tracking a human body in an off-position detection method according to an embodiment of the invention;
FIG. 3 is a flowchart of updating a human body tracking trajectory of an off-position detection method according to an embodiment of the present invention;
FIG. 4 is a flowchart of an off-position determination of an off-position detection method according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of an electronic device according to an embodiment of the invention.
Reference numerals:
an electronic device 300;
a memory 310; an operating system 311; an application 312;
a processor 320; a network interface 330; an input device 340; a hard disk 350; and a display device 360.
Detailed Description
The following describes in further detail the embodiments of the present invention with reference to the drawings and examples. The following examples are illustrative of the invention and are not intended to limit the scope of the invention.
An off-position detection method according to an embodiment of the present invention is described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the off-position detection method according to the embodiment of the invention includes the following steps:
S1, acquiring time-series image data of human bodies in a certain area;
S2, processing the image data to obtain a detection frame uniquely corresponding to each human body in the area;
S3, tracking the human bodies in the area according to the detection frames;
S4, calculating a track point for each frame from the human body tracking result to generate a human body displacement track;
S5, judging whether a human body has left a position according to whether its displacement track crosses the edge of that position.
First, it should be noted that the off-position detection method according to the embodiment of the present invention may be used to detect whether a person in a certain area has left a certain position. For example, the area may be a guardhouse where the state of personnel within the monitoring range must be accurately identified, and the position that a monitored person may leave can be a bed or a seat in the guardhouse. With the off-position detection method according to the embodiment of the invention, whether a person in the guardhouse has left a bed or a seat can be detected in real time.
Specifically, when the off-position detection method according to the embodiment of the present invention is used, image data are first obtained in the area where off-position detection is required. The image data are time-series images containing the activity of the human bodies in the area, for example a video of a certain period, and there may be one or several human bodies in the area. The image data are then processed to screen out a detection frame uniquely corresponding to each person; for example, human bodies are recognized in the image data and, after several rounds of screening, one detection frame is obtained for each person. Next, the human bodies in the area are tracked according to the detection frames; the track point of each human body in every frame is calculated from the tracking result, and the track points are connected to generate a human body displacement track. The tracking may follow several target persons in one image stream, or may perform pedestrian re-identification across several image streams. Finally, whether a human body has left a position can be judged according to whether its displacement track crosses the edge of that position.
Therefore, according to the off-position detection method of the embodiment of the invention, the video images obtained in an area are combined with a human body tracking method to analyze whether personnel in the area have left a position. This not only simplifies the hardware system for off-position detection and reduces hardware development and assembly cost, but also enlarges the monitoring range: several persons can be covered by one image stream and judged at the same time. In addition, because the method tracks the human body track, the off-position judgment is simpler and more accurate.
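As a structural illustration only, the Python sketch below chains the five steps; the names off_position_pipeline, detector, tracker and judge are hypothetical placeholders standing in for the detection network with NMS, the tracking cascade and the track-based judgment, since the patent does not define such programming interfaces.

```python
from typing import Callable, Dict, List, Tuple

Box = Tuple[float, float, float, float]      # (x1, y1, x2, y2) detection frame
Point = Tuple[float, float]                  # track point: center of a frame's lower edge

def lower_edge_center(box: Box) -> Point:
    """S4 track point: the center of the detection frame's lower edge."""
    x1, _, x2, y2 = box
    return ((x1 + x2) / 2.0, y2)

def off_position_pipeline(frames: List,                                          # S1: time-series images
                          detector: Callable[[object], List[Box]],               # S2: detection + NMS
                          tracker: Callable[[object, List[Box]], Dict[int, Box]],  # S3: tracking
                          judge: Callable[[List[Point]], bool]                   # S5: track-based judgment
                          ) -> Dict[int, bool]:
    """Returns, for each tracked person id, whether an off-position event is judged."""
    trajectories: Dict[int, List[Point]] = {}
    for frame in frames:
        boxes = detector(frame)                              # one detection frame per human body
        for track_id, box in tracker(frame, boxes).items():
            trajectories.setdefault(track_id, []).append(lower_edge_center(box))  # S4
    return {tid: judge(points) for tid, points in trajectories.items()}
```

Passing the three stages in as callables keeps the sketch runnable without committing to any particular detector or tracker implementation.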
According to one embodiment of the present invention, in step S1, the time-series image data of the human bodies in the area are acquired in the background by a camera installed in the area.
Specifically, several individual cameras may be arranged in the area; each camera acquires time-series image data of the personnel within its own range, and each camera may also independently screen out the human bodies that meet the set requirements.
According to one embodiment of the invention, step S2 comprises:
S21, inputting the image data into a detection network to obtain human body detection frames;
S22, removing redundant human body detection frames through NMS to obtain a detection frame uniquely corresponding to each human body in the area.
Specifically, the acquired time-series image data of the area are input into a detection network; for example, the detection network may be the deep learning network Faster R-CNN, which outputs several human body detection frames. Redundant frames are then removed by NMS (non-maximum suppression): the detection frames are sorted by the classification probability of the classifier, the intersection-over-union between the sorted frames is compared against a set threshold, frames whose overlap exceeds the threshold are removed, frames below the threshold are kept, and the comparison is repeated on the remaining frames. After several rounds of comparison, a detection frame uniquely corresponding to each human body in the area is obtained. Through these steps, data can be acquired in real time and higher-precision image data can be obtained.
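To make the NMS step concrete, the following sketch shows a minimal greedy non-maximum suppression over human detection boxes. It is only an assumed illustration of how S22 could be realized; the (x1, y1, x2, y2) box format, the score ordering and the iou_threshold default are illustrative choices, not values from the patent.

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_threshold: float = 0.5) -> list:
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it too much.

    boxes:  (N, 4) array of [x1, y1, x2, y2] human detection frames.
    scores: (N,) classifier probabilities for the "person" class.
    Returns the indices of the kept detection frames.
    """
    order = scores.argsort()[::-1]          # sort by classification probability, high to low
    keep = []
    while order.size > 0:
        i = order[0]                        # current highest-scoring frame
        keep.append(int(i))
        # intersection of box i with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter + 1e-9)
        # keep only the frames whose overlap with box i is below the threshold
        order = order[1:][iou < iou_threshold]
    return keep
```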
According to one embodiment of the invention, step S3 comprises:
S31, inputting the detection frames and extracting the corresponding features of each detection frame through a feature extraction network;
S32, performing feature matching on the human bodies by a pedestrian re-identification method and connecting the detection frames of each human body in consecutive frames to obtain a time-ordered human body detection frame image sequence;
S33, calculating the intersection-over-union (IoU) of a detection frame in the human body detection frame image sequence with the tracking track frame of the previous frame: if the IoU is larger than a set threshold, continuing with cosine matching between the detection frame and the tracking track frame of the previous frame; if the IoU is smaller than the set threshold, performing IoU matching between the unmatched detection frame and the tracking track frames;
S34, performing cosine matching between the detection frame and the tracking track frame of the previous frame: if the matching succeeds, the tracking result is a successfully tracked track and is sent to track updating; if it fails, performing IoU matching between the unmatched detection frame and the tracking track frames;
S35, performing IoU matching between the unmatched detection frames and the tracking track frames: if the matching succeeds, the tracking result is a successfully tracked track; if it fails, the tracking result is a lost track or an unmatched detection frame and is sent to track updating.
In other words, as shown in fig. 2, the human bodies in the area may be tracked as follows:
First, the detection frames obtained in step S22, each uniquely corresponding to a human body, are input into the feature extraction network to extract the features of each detection frame. For example, the feature extraction network may be the deep convolutional network VGG-Net, and the features of each detection frame may describe clothing, body shape, hairstyle and the like.
Then, the human body detection frames are matched by features such as the pedestrians' clothing, posture and hairstyle, pedestrians are re-identified and retrieved across cameras and scenes, and the detection frames of each human body in consecutive frames are connected to obtain a time-ordered human body detection frame image sequence.
Next, the intersection-over-union (IoU) between a detection frame in the time-ordered sequence and the tracking track frame of the previous frame is used as a proximity measure. If the IoU is larger than the set threshold, the detection frame and the tracking track frame are close, and cosine matching between them continues; if the IoU is smaller than the set threshold, they are far apart, cosine matching is skipped, and IoU matching is later performed between the unmatched detection frame and the tracking track frames.
It should be noted that the tracking track frame is itself a detection frame: it is the detection frame of the previous frame and already belongs to a track, which is why it is called a tracking track frame. The IoU between a detection frame and a tracking track frame ranges from 0 to 1, where 0 means the farthest (no overlap) and 1 the nearest (complete overlap).
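The intersection-over-union used as this 0-to-1 proximity measure can be computed with the standard formula sketched below; it is shown only as an assumed illustration of the matching metric, and the iou name and box format are not taken from the patent.

```python
from typing import Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes: 0 means no overlap (farthest), 1 means identical (nearest)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```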
Then, cosine matching is performed between the detection frame and the tracking track frame of the previous frame. If the matching succeeds, the tracking result is a successfully tracked track and is sent to track updating; if it fails, IoU matching is performed between the unmatched detection frame and the tracking track frames.
Finally, IoU matching is performed between the unmatched detection frames and the tracking track frames. If the matching succeeds, the tracking result is a successfully tracked track; if it fails, the tracking result is a lost track or an unmatched detection frame and is sent to track updating.
Tracking the human body through these steps to obtain its track does not rely on external attributes of the human body but focuses on its high-dimensional features, and the generated track covers all of the person's activity in the relevant area. Compared with tracks obtained by existing infrared sensors, the information is more comprehensive; compared with tracks obtained by existing image-sensor methods, the recognition rate is greatly improved.
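Putting the stages together, the sketch below shows one possible organization of the described cascade: an IoU gate, cosine matching of appearance features, then fallback IoU matching for the leftovers. The greedy one-to-one assignment, the threshold defaults and the Track fields are assumptions made for illustration, and the code reuses the iou helper from the previous sketch.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple
import numpy as np

Box = Tuple[float, float, float, float]

@dataclass
class Track:
    box: Box                  # tracking track frame (detection frame of the previous frame)
    feature: np.ndarray       # appearance feature of the tracked person

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def match_detections(tracks: Dict[int, Track],
                     det_boxes: List[Box],
                     det_feats: List[np.ndarray],
                     iou_gate: float = 0.3,
                     cos_thresh: float = 0.5,
                     iou_thresh: float = 0.3):
    """Greedy cascade: IoU gate -> cosine matching -> fallback IoU matching for leftovers.
    Assumes the iou() helper from the earlier sketch is in scope.
    Returns (matches {track_id: det_index}, lost_track_ids, unmatched_det_indices)."""
    matches: Dict[int, int] = {}
    used: set = set()
    # Stage 1 + 2: only detections passing the IoU gate are cosine-matched to a track.
    for tid, trk in tracks.items():
        best, best_sim = None, cos_thresh
        for d, box in enumerate(det_boxes):
            if d in used or iou(trk.box, box) <= iou_gate:
                continue                      # too far apart: left for the fallback stage
            sim = cosine_similarity(trk.feature, det_feats[d])
            if sim > best_sim:
                best, best_sim = d, sim
        if best is not None:
            matches[tid] = best
            used.add(best)
    # Stage 3: fallback IoU matching between still-unmatched tracks and detections.
    for tid, trk in tracks.items():
        if tid in matches:
            continue
        best, best_iou = None, iou_thresh
        for d, box in enumerate(det_boxes):
            if d in used:
                continue
            overlap = iou(trk.box, box)
            if overlap > best_iou:
                best, best_iou = d, overlap
        if best is not None:
            matches[tid] = best
            used.add(best)
    lost = [tid for tid in tracks if tid not in matches]            # lost tracks -> track update
    new_dets = [d for d in range(len(det_boxes)) if d not in used]  # unmatched detections -> pending tracks
    return matches, lost, new_dets
```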
Further, according to an embodiment of the present invention, in step S35, if the tracking result is an unmatched detection frame, the following steps are performed:
S351, taking the unmatched detection frame as a pending-state track; the pending-state track becomes a confirmed-state track after a fixed number of consecutive successful trackings, and becomes a to-be-deleted-state track if tracking is lost during tracking;
S352, a confirmed-state track becomes a to-be-deleted-state track if the number of consecutive losses exceeds a fixed value;
S353, the to-be-deleted-state track is subsequently deleted.
Specifically, as shown in fig. 3, an unmatched detection frame is taken as a pending-state track and is tracked continuously. After a fixed number of consecutive successful trackings, the pending-state track becomes a confirmed-state track; if tracking is lost during continuous tracking, the pending-state track becomes a to-be-deleted-state track; and a confirmed-state track also becomes a to-be-deleted-state track when the number of consecutive losses exceeds a fixed value.
It should be noted that both the confirmed-state track and the pending-state track can change into the to-be-deleted state, while a to-be-deleted-state track cannot change back into the other two states and is subsequently deleted. Re-tracking the unmatched detection frames increases tracking accuracy and reduces the probability of losing a target.
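A minimal sketch of the pending / confirmed / to-be-deleted lifecycle described above might look as follows; the state names and the n_init and max_lost defaults are illustrative assumptions, since the patent only speaks of "a fixed number of times" and "a fixed value".

```python
from enum import Enum, auto

class TrackState(Enum):
    PENDING = auto()        # newly created from an unmatched detection frame
    CONFIRMED = auto()      # tracked successfully enough consecutive times
    TO_DELETE = auto()      # marked for deletion; cannot return to the other states

class TrackLifecycle:
    def __init__(self, n_init: int = 3, max_lost: int = 30):
        self.state = TrackState.PENDING
        self.hits = 0             # consecutive successful trackings
        self.lost = 0             # consecutive losses
        self.n_init = n_init      # "fixed number of times" for pending -> confirmed
        self.max_lost = max_lost  # "fixed value" of consecutive losses for confirmed -> to-delete

    def on_match(self) -> None:
        """Successful tracking in the current frame."""
        self.hits += 1
        self.lost = 0
        if self.state is TrackState.PENDING and self.hits >= self.n_init:
            self.state = TrackState.CONFIRMED

    def on_miss(self) -> None:
        """Tracking lost in the current frame."""
        self.lost += 1
        self.hits = 0
        if self.state is TrackState.PENDING:
            self.state = TrackState.TO_DELETE      # any loss while pending deletes the track
        elif self.state is TrackState.CONFIRMED and self.lost > self.max_lost:
            self.state = TrackState.TO_DELETE
```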
According to one embodiment of the present invention, in step S4, the track point is the center point of the lower edge of the detection frame, and step S5 includes:
S51, obtaining the latest n track points from the tracking results;
S52, if the number of track points n is smaller than a set threshold, considering that the target has left the position and judging an off-position event.
According to one embodiment of the invention, step S5 further comprises:
S53, if the number of track points n is larger than the set threshold, counting the number of points a inside the position and the number of points b outside the position and judging whether a and b meet the set values; if not, judging that the target has left the position, wherein a = 0.2n and b = 0.2n.
According to one embodiment of the invention, step S5 further comprises:
S54, judging whether the points located width/m pixels to either side of the center point of the lower edge of the latest track point are outside the position, wherein width is the pixel width of the lower edge of the detection frame; if this is satisfied, judging that the target has left the position, wherein n = 10 and m is 6 to 8.
Specifically, as shown in fig. 4, the latest n track points of the monitored person are obtained from the tracking result, where each track point is the center point of the lower edge of the human body detection frame, and a threshold is set. If the number of track points n is smaller than the set threshold, the monitored person is considered to have left the position and an off-position event is judged. If n is larger than the set threshold, the number of points a inside the position and the number of points b outside the position are counted and checked against the set values; if they are met, the monitored person is judged to have left the position. Meanwhile, to avoid misjudging an off-position event when a monitored person sitting at the edge of the position merely sways, an additional condition is used: the distance from the position edge to the track point of the current frame of the leaving track must be greater than width/m pixels, i.e. the points on both sides of the center point of the lower edge of the latest track point must be outside the position. If this is satisfied, the monitored person is considered to have left the position and an off-position event is judged. Here width is the pixel width of the lower edge of the detection frame, and m is a value obtained from tracking tests, generally 6 to 8.
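The judgment of fig. 4 could be sketched as below. Modeling the position as an axis-aligned rectangle, using n itself as the S52 threshold, and reading "a and b meet the set values" as "at most 0.2n of the latest points remain inside and at least 0.2n lie outside" are interpretations made only for illustration; the width/m edge margin follows the description, and the function and parameter names are hypothetical.

```python
from typing import List, Tuple

Point = Tuple[float, float]                 # center of the detection frame's lower edge
Rect = Tuple[float, float, float, float]    # position modeled as (x1, y1, x2, y2) -- an assumption

def inside(p: Point, pos: Rect) -> bool:
    x, y = p
    return pos[0] <= x <= pos[2] and pos[1] <= y <= pos[3]

def left_position(points: List[Point], pos: Rect, box_width: float,
                  n: int = 10, m: int = 7) -> bool:
    """Judge off-position from the latest n track points (n = 10, m in 6-8 per the patent)."""
    latest = points[-n:]
    if len(latest) < n:                      # S52: too few recent points -> target considered gone
        return True
    a = sum(inside(p, pos) for p in latest)  # points still inside the position
    b = len(latest) - a                      # points outside the position
    # S53 (interpreted): few points remain inside and enough lie outside
    if not (a <= 0.2 * n and b >= 0.2 * n):
        return False
    # S54: points width/m pixels to both sides of the latest lower-edge center must be outside
    # the position, to avoid flagging a person merely swaying at the position edge.
    cx, cy = latest[-1]
    margin = box_width / m
    return (not inside((cx - margin, cy), pos)) and (not inside((cx + margin, cy), pos))
```

A call such as left_position(track_points, seat_rect, box_width=80.0) would then flag a track only once it has clearly crossed the position edge, with box_width taken from the person's latest detection frame.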
In summary, the off-position detection method according to the present invention has at least the following advantages:
(1) Off-position analysis is performed only on video images obtained by a camera, which simplifies the hardware system for off-position detection and enlarges the monitoring range.
(2) When there are many people, video analysis combined with human body tracking obtains the movement track of each target person in real time, so whether several persons have left their positions can be judged at the same time.
(3) Adding the human body track makes the off-position judgment algorithm simple and accurate, and effectively distinguishes a person moving from inside the position to outside from a person outside the position merely approaching it, avoiding misjudgment.
The off-position detection system according to the embodiment of the invention includes an image acquisition module, an image processing module, a human body tracking module, a track acquisition module and a judging module.
Specifically, the image acquisition module acquires time-series image data of the human bodies in a certain area; the image processing module processes the image data to obtain a detection frame uniquely corresponding to each human body in the area; the human body tracking module tracks the human bodies in the area; the track acquisition module calculates a track point for each frame from the human body tracking result to generate a human body displacement track; and the judging module judges whether a human body has left a position according to whether its displacement track crosses the position edge.
Therefore, according to the off-position detection system of the embodiment of the invention, off-position analysis is performed only on the obtained video images, which simplifies the hardware system for off-position detection and enlarges the monitoring range; the displacement tracks of several persons can be obtained from one image stream and their off-position behavior judged; and adding the human body track makes the off-position judgment algorithm simple and accurate.
The function of each module of the off-position detection system according to the embodiment of the present invention has been described in detail in the above embodiments and is not repeated here.
In addition, the invention also provides a computer storage medium comprising one or more computer instructions which, when executed, implement any one of the off-position detection methods described above.
That is, the computer storage medium stores a computer program which, when executed by a processor, causes the processor to perform any one of the off-position detection methods described above.
As shown in fig. 5, an embodiment of the present invention provides an electronic device 300, including a memory 310 and a processor 320, where the memory 310 is configured to store one or more computer instructions, and the processor 320 is configured to invoke and execute the one or more computer instructions, thereby implementing any of the methods described above.
That is, the electronic device 300 includes: a processor 320 and a memory 310, in which memory 310 computer program instructions are stored which, when executed by the processor, cause the processor 320 to perform any of the methods described above.
Further, as shown in fig. 5, the electronic device 300 also includes a network interface 330, an input device 340, a hard disk 350, and a display device 360.
The interfaces and devices described above may be interconnected by a bus architecture. The bus architecture may include any number of interconnected buses and bridges, connecting together one or more central processing units (CPUs), represented by the processor 320, and the various circuits of one or more memories, represented by the memory 310. The bus architecture may also connect various other circuits, such as peripheral devices, voltage regulators, and power management circuits. It is understood that the bus architecture enables communication between these components. In addition to a data bus, the bus architecture includes a power bus, a control bus, and a status signal bus, all of which are well known in the art and therefore will not be described in detail herein.
The network interface 330 may be connected to a network (e.g., the internet, a local area network, etc.), and may obtain relevant data from the network and store the relevant data in the hard disk 350.
The input device 340 may receive various instructions from an operator and transmit the instructions to the processor 320 for execution. The input device 340 may include a keyboard or pointing device (e.g., a mouse, a trackball, a touch pad, or a touch screen, among others).
The display device 360 may display results obtained by the processor 320 executing instructions.
The memory 310 is used for storing programs and data necessary for the operation of the operating system, and data such as intermediate results in the calculation process of the processor 320.
It will be appreciated that memory 310 in embodiments of the invention may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be Read Only Memory (ROM), programmable Read Only Memory (PROM), erasable Programmable Read Only Memory (EPROM), electrically Erasable Programmable Read Only Memory (EEPROM), or flash memory, among others. Volatile memory can be Random Access Memory (RAM), which acts as external cache memory. The memory 310 of the apparatus and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some implementations, the memory 310 stores the following elements, executable modules or data structures, or a subset thereof, or an extended set thereof: an operating system 311 and applications 312.
The operating system 311 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application programs 312 include various application programs such as a Browser (Browser) and the like for implementing various application services. A program implementing the method of the embodiment of the present invention may be included in the application program 312.
The method disclosed in the above embodiment of the present invention may be applied to the processor 320 or implemented by the processor 320. The processor 320 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuitry in hardware or by software instructions in the processor 320. The processor 320 may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be performed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, electrically erasable programmable memory, registers, or another storage medium well known in the art. The storage medium is located in the memory 310, and the processor 320 reads the information in the memory 310 and performs the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
In particular, the processor 320 is further configured to read the computer program and execute any of the methods described above.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may be physically included separately, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the present invention.

Claims (9)

1. An off-position detection method, characterized by comprising the following steps:
S1, acquiring time-series image data of human bodies in a certain area;
S2, processing the image data to obtain a detection frame uniquely corresponding to each human body in the area;
S3, tracking the human bodies in the area according to the detection frames;
S4, calculating a track point for each frame from the human body tracking result to generate a human body displacement track;
S5, judging whether a human body has left a certain position according to whether its displacement track crosses the edge of the position;
wherein step S3 comprises:
S31, inputting the detection frames and extracting the corresponding features of each detection frame through a feature extraction network;
S32, performing feature matching on the human bodies by a pedestrian re-identification method and connecting the detection frames of each human body in consecutive frames to obtain a time-ordered human body detection frame image sequence;
S33, calculating the intersection-over-union (IoU) of a detection frame in the human body detection frame image sequence with the tracking track frame of the previous frame: if the IoU is larger than a set threshold, continuing with cosine matching between the detection frame and the tracking track frame of the previous frame; if the IoU is smaller than the set threshold, performing IoU matching between the unmatched detection frame and the tracking track frames;
S34, performing cosine matching between the detection frame and the tracking track frame of the previous frame: if the matching succeeds, sending the tracking result to track updating; if the matching fails, performing IoU matching between the unmatched detection frame and the tracking track frames;
S35, performing IoU matching between the unmatched detection frame and the tracking track frames: if the matching succeeds, the tracking result is a successfully tracked track; if the matching fails, the tracking result is a lost track or an unmatched detection frame and is sent to track updating.
2. The method according to claim 1, wherein in step S1, the time-series image data of the human bodies in the area are acquired in the background by a camera installed in the area.
3. The method according to claim 1, wherein step S2 comprises:
S21, inputting the image data into a detection network to obtain human body detection frames;
S22, removing redundant human body detection frames through NMS to obtain the detection frame uniquely corresponding to each human body in the area.
4. The method according to claim 1, wherein in step S35, if the tracking result is an unmatched detection frame, the following steps are performed:
S351, taking the unmatched detection frame as a pending-state track; the pending-state track becomes a confirmed-state track after a fixed number of consecutive successful trackings, and becomes a to-be-deleted-state track if tracking is lost during tracking;
S352, a confirmed-state track becomes a to-be-deleted-state track if the number of consecutive losses exceeds a fixed value;
S353, the to-be-deleted-state track is subsequently deleted.
5. The method according to claim 1, wherein in step S4, the track point is the center point of the lower edge of the detection frame, and step S5 comprises:
S51, obtaining the latest n track points from the tracking results;
S52, if the number of track points n is smaller than a set threshold, considering that the target has left the position and judging an off-position event.
6. The method of claim 5, wherein step S5 further comprises:
S53, if the number of track points n is larger than the set threshold, counting the number of points a inside the position and the number of points b outside the position and judging whether a and b meet the set values; if not, judging that the target has left the position, wherein a = 0.2n and b = 0.2n.
7. The method of claim 6, wherein step S5 further comprises:
S54, judging whether the points located width/m pixels to either side of the center point of the lower edge of the latest track point are outside the position, wherein width is the pixel width of the lower edge of the detection frame; if this is satisfied, judging that the target has left the position, wherein n = 10 and m is 6 to 8.
8. An off-position detection system, comprising:
an image acquisition module that acquires time-series image data of human bodies in a certain area;
an image processing module that processes the image data to obtain a detection frame uniquely corresponding to each human body in the area;
a human body tracking module that tracks the human bodies in the area;
a track acquisition module that calculates a track point for each frame from the human body tracking result to generate a human body displacement track; and
a judging module that judges whether a human body has left a position according to whether its displacement track crosses the position edge;
wherein the human body tracking performed by the human body tracking module on the human bodies in the area comprises the following steps:
S31, inputting the detection frames and extracting the corresponding features of each detection frame through a feature extraction network;
S32, performing feature matching on the human bodies by a pedestrian re-identification method and connecting the detection frames of each human body in consecutive frames to obtain a time-ordered human body detection frame image sequence;
S33, calculating the intersection-over-union (IoU) of a detection frame in the human body detection frame image sequence with the tracking track frame of the previous frame: if the IoU is larger than a set threshold, continuing with cosine matching between the detection frame and the tracking track frame of the previous frame; if the IoU is smaller than the set threshold, performing IoU matching between the unmatched detection frame and the tracking track frames;
S34, performing cosine matching between the detection frame and the tracking track frame of the previous frame: if the matching succeeds, sending the tracking result to track updating; if the matching fails, performing IoU matching between the unmatched detection frame and the tracking track frames;
S35, performing IoU matching between the unmatched detection frame and the tracking track frames: if the matching succeeds, the tracking result is a successfully tracked track; if the matching fails, the tracking result is a lost track or an unmatched detection frame and is sent to track updating.
9. A computer storage medium comprising one or more computer instructions which, when executed, implement the method of any of claims 1-7.
CN202011016989.7A 2020-09-24 2020-09-24 Off-position detection method, off-position detection system and computer storage medium Active CN112183304B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011016989.7A CN112183304B (en) 2020-09-24 2020-09-24 Off-position detection method, off-position detection system and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011016989.7A CN112183304B (en) 2020-09-24 2020-09-24 Off-position detection method, off-position detection system and computer storage medium

Publications (2)

Publication Number Publication Date
CN112183304A CN112183304A (en) 2021-01-05
CN112183304B 2024-07-16

Family

ID=73956619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011016989.7A Active CN112183304B (en) 2020-09-24 2020-09-24 Off-position detection method, off-position detection system and computer storage medium

Country Status (1)

Country Link
CN (1) CN112183304B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113392776B (en) * 2021-06-17 2022-07-12 深圳日海物联技术有限公司 Seat leaving behavior detection method and storage device combining seat information and machine vision
CN113553950A (en) * 2021-07-23 2021-10-26 上海商汤智能科技有限公司 Abnormal event detection method and device, electronic equipment and storage medium
CN116309692B (en) * 2022-09-08 2023-10-20 广东省机场管理集团有限公司工程建设指挥部 Method, device and medium for binding airport security inspection personal packages based on deep learning
CN116862980B (en) * 2023-06-12 2024-01-23 上海玉贲智能科技有限公司 Target detection frame position optimization correction method, system, medium and terminal for image edge

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103425967A (en) * 2013-07-21 2013-12-04 浙江大学 Pedestrian flow monitoring method based on pedestrian detection and tracking

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101903615B1 (en) * 2016-12-14 2018-10-02 케이에스아이 주식회사 Visual observation system and visual observation method using the same
CN107423708A (en) * 2017-07-25 2017-12-01 成都通甲优博科技有限责任公司 The method and its device of pedestrian's flow of the people in a kind of determination video
CN109522854B (en) * 2018-11-22 2021-05-11 广州众聚智能科技有限公司 Pedestrian traffic statistical method based on deep learning and multi-target tracking
CN109903312B (en) * 2019-01-25 2021-04-30 北京工业大学 Football player running distance statistical method based on video multi-target tracking

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103425967A (en) * 2013-07-21 2013-12-04 浙江大学 Pedestrian flow monitoring method based on pedestrian detection and tracking

Also Published As

Publication number Publication date
CN112183304A (en) 2021-01-05

Similar Documents

Publication Publication Date Title
CN112183304B (en) Off-position detection method, off-position detection system and computer storage medium
CN108470332B (en) Multi-target tracking method and device
CN108446669B (en) Motion recognition method, motion recognition device and storage medium
CN110268440B (en) Image analysis device, image analysis method, and storage medium
CN112639873A (en) Multi-object pose tracking device and method based on single-object pose estimator
CN109740590B (en) ROI accurate extraction method and system based on target tracking assistance
CN108009466B (en) Pedestrian detection method and device
KR102261880B1 (en) Method, appratus and system for providing deep learning based facial recognition service
US11170512B2 (en) Image processing apparatus and method, and image processing system
CN111079536B (en) Behavior analysis method, storage medium and device based on human body key point time sequence
CN114332157B (en) Long-time tracking method for double-threshold control
CN114093022A (en) Activity detection device, activity detection system, and activity detection method
CN113780145A (en) Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium
JP2018124801A (en) Gesture recognition device and gesture recognition program
CN113837006A (en) Face recognition method and device, storage medium and electronic equipment
CN117133050A (en) Method, device, equipment and storage medium for detecting abnormal behavior of car
CN107886060A (en) Pedestrian's automatic detection and tracking based on video
CN111695404A (en) Pedestrian falling detection method and device, electronic equipment and storage medium
CN107392100B (en) Detection method for automatically detecting local abnormality in monitoring video
CN115661521A (en) Fire hydrant water leakage detection method and system, electronic equipment and storage medium
JP2015049702A (en) Object recognition device, object recognition method, and program
CN114067390A (en) Old people falling detection method, system, device and medium based on video image
CN111274899B (en) Face matching method, device, electronic equipment and storage medium
CN114399721A (en) People flow analysis method, equipment and non-volatile computer readable medium
CN113657169A (en) Gait recognition method, device, system and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant