CN111738240A - Region monitoring method, device, equipment and storage medium - Google Patents

Region monitoring method, device, equipment and storage medium

Info

Publication number
CN111738240A
CN111738240A (application CN202010841202.4A)
Authority
CN
China
Prior art keywords
target object
target
area
image
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010841202.4A
Other languages
Chinese (zh)
Inventor
许义军
张浩
张财
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Shencai Technology Co ltd
Original Assignee
Jiangsu Shencai Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Shencai Technology Co ltd filed Critical Jiangsu Shencai Technology Co ltd
Priority to CN202010841202.4A priority Critical patent/CN111738240A/en
Publication of CN111738240A publication Critical patent/CN111738240A/en
Legal status: Pending (current)

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/50 — Context or environment of the image
    • G06V20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 — Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 — Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application disclose a region monitoring method, apparatus, device, and storage medium. The method is executed by an edge device and comprises the following steps: identifying a target object in an image sent by an image collector based on a multilayer convolutional neural network model, and determining the positional relationship between the target object and a target area in the image, where the target object is an object that moves under its own control; if the target object is detected to be located in the target area, tracking it across consecutive video frame images by its target object identifier and determining its motion trajectory; and if the motion trajectory is detected to remain within the target area, determining that the target object has intruded into the target area. In this scheme, an autonomously moving target object that enters the target area is tracked, and only if it stays in the target area for a sustained period is it judged to have intruded, which avoids misjudging intrusion events and enables accurate monitoring of intrusions into the target area.

Description

Region monitoring method, device, equipment and storage medium
Technical Field
The embodiments of the present application relate to the technical field of target detection, and in particular to a region monitoring method, apparatus, device, and storage medium.
Background
Intrusion refers to the act of illegally entering a warning area or triggering a warning object. For an environmentally protected area, the safety of the area needs to be monitored to prevent unauthorized persons from entering and threatening its security.
In current approaches to monitoring a protected area, an intrusion event is declared as soon as an intruding object appears inside the protected area in an image. In reality, the target object may only briefly touch or cross the boundary line of the protected area rather than genuinely intrude, so the misjudgment rate is high. In addition, target objects are typically recognized either by a smart camera, which places high demands on the image acquisition equipment, or by sending the camera's footage to the cloud for processing, which increases transmission pressure, demands high transmission bandwidth, and makes real-time operation difficult.
Disclosure of Invention
Embodiments of the present invention provide a region monitoring method, apparatus, device, and storage medium that determine whether a target object has actually intruded into a target area by tracking the target object, improving the accuracy of target area monitoring and enabling real-time monitoring.
In one embodiment, an area monitoring method is provided in an embodiment of the present application, where the method is performed by an edge device and includes:
according to an image sent by an image collector, identifying a target object in the image based on a multilayer convolutional neural network model, and determining the positional relationship between the target object and a target area in the image, where the target object is an object that moves under its own control;
if the target object is detected to be located in the target area, tracking the target object across consecutive video frame images by a target object identifier, and determining a motion trajectory of the target object;
and if the motion trajectory of the target object is detected to remain within the target area, determining that the target object has intruded into the target area.
In another embodiment, an area monitoring apparatus configured on an edge device is further provided in an embodiment of the present application, and the apparatus includes:
a position relation determining module, configured to identify a target object in an image sent by an image collector based on a multilayer convolutional neural network model, and to determine the positional relationship between the target object in the image and a target area, where the target object is an object that moves under its own control;
a motion trajectory determining module, configured to track the target object across consecutive video frame images by a target object identifier and determine its motion trajectory if the target object is detected to be located in the target area;
and an intrusion determining module, configured to determine that the target object has intruded into the target area if its motion trajectory is detected to remain within the target area.
In another embodiment, an embodiment of the present application further provides an area monitoring device, including: one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the area monitoring method according to any one of the embodiments of the present application.
In yet another embodiment, the present application further provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the area monitoring method according to any one of the embodiments of the present application.
In the embodiments of the present application, when a target object that moves under its own control is detected beginning to enter the target area, it is tracked; only if it remains in the target area for a sustained period is it determined to have intruded. This avoids misjudging intrusion events and enables accurate monitoring of intrusions into the target area. In addition, because the method is executed by an edge device, real-time monitoring can be achieved.
Drawings
Fig. 1 is a flowchart of a region monitoring method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a cloud and terminal cooperation structure according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating intrusion of a target object into a target area according to an embodiment of the present invention;
fig. 4 is a flowchart of a region monitoring method according to another embodiment of the present invention;
fig. 5 is a schematic structural diagram of a region monitoring apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an area monitoring device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Fig. 1 is a flowchart of a region monitoring method according to an embodiment of the present invention. The area monitoring method provided by this embodiment is applicable to monitoring a target area; typically, it is applicable to monitoring intrusion events in target areas such as environmentally protected areas. The method may be performed by a region monitoring apparatus, which may be implemented in software and/or hardware and may be integrated into a region monitoring device. Referring to fig. 1, the method of this embodiment is executed by an edge device and specifically includes:
s110, identifying a target object in the image based on a multilayer convolutional neural network model according to the image sent by the image collector, and determining the position relation between the target object in the image and a target area; wherein the target object is an object which autonomously controls motion.
The image collector may be arranged around the target area to capture images or video of it. The target area may be an environmentally protected area, an important sea area, or the like, and the target object may be a vehicle, a ship, or the like. The image may be a still image of the target area captured by the image collector or the current frame of a video of the target area. The positional relationship between the target object and the target area may be the relative position between the center of the target object and the boundary line of the target area.
Illustratively, because the embodiments of the present application are executed by an edge device, dependence on terminal equipment is greatly reduced and video analysis can be performed in real time. The requirements on the image collector are also reduced: it need not be a smart camera. Even an ordinary camera with no data-processing capability suffices, since the data processing is carried out by the edge device. The edge device receives images sent by the image collector, detects the target object in each image, and analyzes the positional relationship between the target object and the target area, thereby monitoring in real time whether the target object intrudes into the target area. The edge device performs edge computing: it is an open platform close to the network edge, near the object or data source, that integrates core network, computing, storage, and application capabilities. It can cooperate with cloud equipment to provide intelligent interconnection services, meeting the industry's key needs for real-time business, business intelligence, data aggregation and interoperability, security, and privacy protection in the course of digital transformation. In the embodiments of the present application, as shown in fig. 2, the edge device may receive training images from the image collector and forward them to the cloud; the cloud performs model training on the training data to obtain a target detection model, an intrusion recognition model, and the like, and issues the models to the edge device, which then processes subsequent images from the image collector based on these models to implement area monitoring.
This arrangement has the advantage that images need not be transmitted to the cloud for processing, reducing the required transmission bandwidth and enabling real-time image detection. The requirements on the image collector are reduced: even an ordinary camera can simply send its images to the edge device, which performs the detection. By exploiting the cooperation between cloud and edge, the recognition algorithm's accuracy is continuously refined in the cloud and the refined algorithm is issued to the edge for execution, improving actual recognition accuracy.
In an embodiment of the present application, an intrusion detection model may be constructed using a multilayer convolutional neural network comprising convolutional layers, downsampling layers, and fully connected layers, trained with the back-propagation algorithm. When constructing the intrusion recognition model, the image labels must be designed to correspond to the salient characteristics of the images to be detected; the selected label images should show a high degree of contrast between the main object and the background, and each label represents the overall expected recognition result for a detected image. After the labels are designed, the image data is prepared accordingly. The number of training images per label can be set according to actual conditions; the training images should cover a variety of area scenes with a balanced number per scene, and if some image labels are similar to one another, the number of training images should be increased appropriately. The RGB value of each pixel position of an image is fed into the neural network for computation, the network weights are adjusted on each pass over the data, and a gradient descent algorithm is then used for model construction and recognition training.
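The forward pass of such a network (convolution, downsampling, fully connected layer) can be sketched as follows. This is a minimal illustration with toy dimensions and random weights; the patent does not specify the actual architecture, layer sizes, or training details, so all of those are assumptions here.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution over a single-channel image (illustrative, unoptimized)."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """s x s max pooling -- the 'downsampling layer'."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def forward(image, kernel, w_fc, b_fc):
    """Conv -> ReLU -> pool -> fully connected -> softmax class probabilities."""
    feat = np.maximum(conv2d(image, kernel), 0.0)  # convolutional layer + ReLU
    feat = max_pool(feat)                          # downsampling layer
    flat = feat.reshape(-1)
    logits = flat @ w_fc + b_fc                    # fully connected layer
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy example: an 8x8 "image", a 3x3 kernel, two output classes.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
k = rng.random((3, 3))
# After conv: 6x6; after 2x2 pooling: 3x3 -> 9 features feeding the dense layer.
probs = forward(img, k, rng.random((9, 2)), np.zeros(2))
```

Training (weight adjustment via back-propagation and gradient descent, as the text describes) is omitted; only the layer structure the paragraph names is shown.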
The intrusion detection model obtained by training the multilayer convolutional neural network is used to recognize target objects: if an object in the image is recognized as one that can move under its own control, such as a ship, vehicle, person, or airplane, it is determined to be a target object. If the object is recognized as a passively moving object such as leaves, wood, or floating debris, it is ignored and not processed further. This has the advantage that only objects capable of intrusive behavior are recognized and monitored, while foreign matter that passively drifts into the target area is not analyzed, reducing unnecessary monitoring work.
S120, if the target object is detected to be located in the target area, tracking the target object through continuous video frame images and a target object identifier, and determining a motion track of the target object.
Specifically, if the system were to declare an intrusion and raise an alarm as soon as the edge of the target object touched the boundary line of the target area, misjudgments could occur: the target object might merely be momentarily out of control, briefly enter the target area, and then leave, rather than actually intrude. Alarming immediately in such cases would generate false alarms and disrupt normal safety monitoring and management. In the embodiments of the present application, when the target object is detected inside the target area, no alarm is raised immediately; instead, the target object is tracked continuously and its motion trajectory is determined. Whether its behavior constitutes an intrusion into the target area is then decided from the motion trajectory within a preset time period, i.e. from how the object moves over an interval of time, improving the accuracy of intrusion monitoring and avoiding false alarms. After target objects are recognized, each is assigned a unique identifier. To track a target object, only objects bearing the same identifier across consecutive video frame images need be monitored.
In the embodiments of the present application, as shown in fig. 3, the target object may be judged to be located in the target area when its center point lies within the boundary line of the target area, which accurately establishes the object's relative position within the area. By computing the difference between the object's positions and the target area at different points in time, the result for an object's entire passage through the target area is output only once, improving the efficiency of tracking and analysis. The scheme also handles recognition and tracking of multiple target objects entering and leaving the area: for each object, a single full-process recognition and tracking output is produced, helping operating personnel quickly lock onto the intruding object and improving efficiency.
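The centre-point test above can be implemented as a standard point-in-polygon check, assuming (purely for illustration) that the target area's boundary line is given as a polygon and detections as axis-aligned boxes:

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is `point` inside the closed `polygon`?

    `polygon` is a list of (x, y) vertices of the target area's boundary line;
    the centre point of the target object's detection box is what gets tested.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def box_center(box):
    """Centre of a detection box given as (x_min, y_min, x_max, y_max)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

area = [(0, 0), (100, 0), (100, 100), (0, 100)]  # square target area (assumed shape)
# A box whose centre lies inside counts as "located in the target area",
# even if its edge merely touching the boundary would not.
assert point_in_polygon(box_center((40, 40, 60, 60)), area)
assert not point_in_polygon(box_center((90, 90, 130, 130)), area)
```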
S130, if the motion track of the target object is detected to be in the target area, determining that the target object invades the target area.
Illustratively, the behavior of the target object is tracked as follows. A preset time period may be set; if the motion trajectory formed by the target object's movement lies entirely within the target area throughout that period, the target object is determined to have intruded. Alternatively, a number of consecutive video frame images may be set: starting from the image in which the target object is detected entering the target area, a preset number of consecutive frames are selected from the video stream sent by the image collector and examined. If these frames show that the object's motion trajectory remains within the target area, the object is determined to have intruded into the target area. Deciding intrusion from the tracked motion trajectory thus analyzes the object's actual behavior accurately and avoids misjudgment.
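The frame-count variant of this decision rule can be sketched as follows; the frame threshold is illustrative, since the patent leaves the preset number to be configured according to actual conditions.

```python
def is_intrusion(in_area_flags, required_frames=5):
    """Decide intrusion from a trajectory.

    `in_area_flags[i]` is True if the target object's centre lay inside the
    target area in the i-th consecutive video frame. The object is judged to
    have intruded only if it stays inside for `required_frames` consecutive
    frames (an assumed, configurable threshold); briefly touching or crossing
    the boundary and leaving again is not an intrusion.
    """
    streak = 0
    for inside in in_area_flags:
        streak = streak + 1 if inside else 0
        if streak >= required_frames:
            return True
    return False

# Briefly entering and leaving the area again does not raise an intrusion:
assert not is_intrusion([True, True, False, True], required_frames=3)
# Remaining inside for the whole window does:
assert is_intrusion([True, True, True, True], required_frames=3)
```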
In this embodiment, the target object is tracked from the moment it is detected beginning to enter the target area, and only if it remains in the target area for a sustained period is it determined to have intruded, avoiding misjudgment of intrusion events and accurately monitoring intrusions into the target area. In addition, because the method is executed by an edge device, real-time monitoring can be achieved.
Fig. 4 is a flowchart of a region monitoring method according to another embodiment of the present invention. This embodiment is an optimization of the previous embodiment and elaborates details not described there. Referring to fig. 4, the area monitoring method provided in this embodiment may include:
s210, carrying out target detection on the image and determining the current target object area.
Illustratively, the edge device performs target detection on a video frame image sent by the image collector and determines the current target object region of each target object, which may be displayed as a target detection box. When there are multiple target objects, each can be labeled to facilitate subsequent tracking of their motion trajectories.
In the present embodiment, at a resolution of 1920 × 1080, the minimum target size is limited as follows: neither the length nor the width may be less than 30 pixels, with the scaling relation:

video minimum target length (height) = Figure 599102DEST_PATH_IMAGE001 (formula rendered only as an image in the source)
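The scaling formula itself survives only as an image placeholder in the source, so its exact form is unknown. One plausible reading, offered purely as an assumption, scales the 30-pixel lower bound proportionally with frame height:

```python
def min_target_pixels(frame_height, base_height=1080, base_min=30):
    """Hypothetical minimum target size scaling.

    The patent states a 30-pixel lower bound on target length and width at
    1920x1080, but the exact scaling formula is an unresolved image
    placeholder; simple proportional scaling by frame height is assumed here.
    """
    return base_min * frame_height / base_height

assert min_target_pixels(1080) == 30.0   # at the stated reference resolution
assert min_target_pixels(2160) == 60.0   # assumed behaviour at 4K height
```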
S220, matching the current target object area with a historical target object area in a historical image, determining an actual target object area, and identifying a target object in the target object area.
For example, the image capture device may shake, causing the current target object region to drift. To correct the target object's region, the current target object region is therefore matched against the target object regions in past historical images. Specifically, the intersection-over-union of the current and historical target object regions can be computed and matched using the Hungarian maximum matching algorithm; if the match succeeds, the current and historical regions are determined to belong to the same target object, from which the actual target object region is derived. If the match fails, the current region is determined to be a new target object and is taken as that object's actual target object region.
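The matching step can be sketched as below. The IoU computation follows the text; for brevity a greedy pairing stands in for the Hungarian maximum matching algorithm the patent names, and the IoU threshold is an assumed parameter.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_regions(current, history, threshold=0.3):
    """Pair current detections with historical regions by IoU.

    Greedy pairing is used here in place of Hungarian maximum matching for
    brevity. Returns (matched index pairs, indices of unmatched current
    regions); unmatched current regions are treated as new target objects.
    """
    pairs, new, used = [], [], set()
    for ci, cbox in enumerate(current):
        best_j, best_iou = None, threshold
        for hj, hbox in enumerate(history):
            if hj in used:
                continue
            score = iou(cbox, hbox)
            if score > best_iou:
                best_j, best_iou = hj, score
        if best_j is None:
            new.append(ci)          # no historical region overlaps: new target
        else:
            used.add(best_j)
            pairs.append((ci, best_j))
    return pairs, new

current = [(10, 10, 50, 50), (200, 200, 240, 240)]
history = [(12, 12, 52, 52)]   # slightly shifted, e.g. by camera shake
pairs, new = match_regions(current, history)
```

Here the first current box matches the shifted historical box (same target despite camera shake), while the second has no counterpart and is registered as a new target object.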
And S230, if the target object is detected to be located in the target area, acquiring continuous video frame images in a preset time period, or acquiring a preset number of continuous video frame images.
For example, a preset time period may be set: once the target object is detected beginning to enter the target area, the consecutive video frames within the preset time period after the current moment are acquired, and the object's motion trajectory over that period is determined. Alternatively, a preset number may be set: starting from the detection of the target object entering the target area, that number of consecutive video frame images after the current moment are acquired, and the motion trajectory is determined from them.
S240, determining the position relation between the target object and the target area in the continuous video frame images.
And S250, if the target objects are all located in the target area in the continuous video frame images, determining that the motion trail of the target object is located in the target area.
And S260, if the mark of the target object is in the non-alarm state, alarming aiming at the event that the target object invades the target area.
For example, to avoid raising repeated alarms for a target object intruding into the target area, the object's flag state is checked before alarming: if the object is flagged as not yet alarmed, an alarm is raised for the intrusion event; if it is flagged as already alarmed, the intrusion has already been reported and no repeated alarm is needed.
And S270, marking the target object as an alarm state.
For example, to avoid repeated alarms, a target object that has been alarmed is flagged as being in the alarmed state; when an alarm would otherwise be raised for it, detecting the alarmed state means no repeat alarm is necessary.
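Steps S260 and S270 together amount to alarm-state bookkeeping per target object, which can be sketched as follows (the class and method names are illustrative, not from the patent):

```python
class AlarmManager:
    """Alarm once per intruding target object.

    A target flagged as already alarmed is skipped on later detections of
    the same intrusion, so one intrusion never produces repeated alarms.
    """
    def __init__(self):
        self._alarmed = set()  # identifiers of objects in the alarmed state

    def report(self, object_id):
        """Return True (and flag the object) only on its first alarm."""
        if object_id in self._alarmed:
            return False            # already in the alarmed state: suppress
        self._alarmed.add(object_id)  # S270: mark as alarmed
        return True                 # S260: alarm fires for this intrusion

alarms = AlarmManager()
assert alarms.report("ship-1") is True    # first intrusion alarm fires
assert alarms.report("ship-1") is False   # repeated alarm suppressed
assert alarms.report("ship-2") is True    # a different target still alarms
```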
According to the technical scheme of this embodiment, the motion trajectory of the target object is accurately determined by acquiring the consecutive video frame images within a preset time period, or a preset number of consecutive video frame images, so that the intrusion event can be accurately judged from the trajectory. In addition, alarms are raised only for target objects flagged as not yet alarmed, and alarmed objects are flagged accordingly, so the same intrusion into the target area is not reported repeatedly and monitoring efficiency is not impaired.
Fig. 5 is a schematic structural diagram of an area monitoring apparatus according to an embodiment of the present invention. The apparatus is applicable to monitoring a target area; typically, it is applicable to monitoring intrusion events in target areas such as environmentally protected areas. The apparatus may be implemented in software and/or hardware and may be integrated into the area monitoring device. Referring to fig. 5, the apparatus specifically includes:
the position relation determining module 310 is configured to identify a target object in an image based on a multilayer convolutional neural network model according to the image sent by the image collector, and determine a position relation between the target object in the image and a target area; wherein the target object is an object of autonomous control motion;
a motion trajectory determining module 320, configured to, if it is detected that the target object is located in the target area, track the target object through continuous video frame images and a target object identifier, and determine a motion trajectory of the target object;
an intrusion determining module 330, configured to determine that the target object intrudes into the target area if it is detected that the motion trajectory of the target object is within the target area.
In this embodiment of the application, the position relationship determining module 310 includes:
a current region determining unit, configured to perform target detection on the image and determine a current target object region;
the actual region determining unit is used for matching the current target object region with a historical target object region in a historical image to determine an actual target object region;
and the target object identification unit is used for identifying a target object in the target object area.
In this embodiment of the application, the motion trajectory determining module 320 includes:
the device comprises a continuous image acquisition unit, a video acquisition unit and a video processing unit, wherein the continuous image acquisition unit is used for acquiring continuous video frame images in a preset time period or acquiring a preset number of continuous video frame images;
and the position tracking unit is used for determining the position relation between the target object and the target area in the continuous video frame images.
In this embodiment of the present application, the intrusion determining module 330 is specifically configured to:
and if the target objects are all located in the target area in the continuous video frame images, determining that the motion trail of the target object is located in the target area.
In an embodiment of the present application, the apparatus further includes:
the alarm module is used for alarming aiming at the event that the target object invades the target area if the mark of the target object is in the non-alarm state;
and the state marking module is used for marking the target object as an alarm state.
In an embodiment of the present application, the apparatus further includes:
the intrusion record image determining module is used for rendering an actual target object area and the target area according to the configuration of a user to obtain an intrusion record image;
and the intrusion time frequency band determining module is used for determining the intrusion video band according to the intrusion time and duration of the target object.
The area monitoring device provided by the embodiment of the application can execute the area monitoring method provided by any embodiment of the application, and has corresponding functional modules and beneficial effects of the execution method.
Fig. 6 is a schematic structural diagram of an area monitoring device according to an embodiment of the present invention. FIG. 6 illustrates a block diagram of an exemplary area monitoring device 412 suitable for use in implementing embodiments of the present application. The area monitoring device 412 shown in fig. 6 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in fig. 6, the area monitoring device 412 may include: one or more processors 416; the memory 428 is configured to store one or more programs, and when the one or more programs are executed by the one or more processors 416, the one or more processors 416 are enabled to implement the area monitoring method provided in the embodiment of the present application, including:
according to the image sent by the image collector, identifying a target object in the image based on a multilayer convolutional neural network model, and determining the position relation between the target object and a target area in the image; wherein the target object is an object of autonomous control motion;
if the target object is detected to be located in the target area, tracking the target object through continuous video frame images and a target object identifier, and determining a motion track of the target object;
and if the motion track of the target object is detected to be in the target area, determining that the target object invades the target area.
The components of the area monitoring device 412 may include, but are not limited to: one or more processors 416, a memory 428, and a bus 418 that couples the various device components, including the memory 428, to the processors 416.
Bus 418 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Area monitoring device 412 typically includes a variety of computer device readable storage media. These storage media may be any available storage media that can be accessed by the area monitoring device 412, including volatile and non-volatile storage media, removable and non-removable storage media.
Memory 428 can include computer-readable storage media in the form of volatile memory, such as Random Access Memory (RAM) 430 and/or cache memory 432. The area monitoring device 412 may further include other removable/non-removable, volatile/non-volatile computer storage media. By way of example only, storage system 434 may be used to read from and write to a non-removable, non-volatile magnetic storage medium (not shown in FIG. 6, commonly referred to as a "hard drive"). Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical storage medium) may be provided. In these cases, each drive may be connected to bus 418 by one or more data media interfaces. Memory 428 can include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the invention.
A program/utility 440 having a set (at least one) of program modules 442 may be stored, for instance, in memory 428. Such program modules 442 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may comprise an implementation of a network environment. The program modules 442 generally perform the functions and/or methodologies of the described embodiments of the invention.
The area monitoring device 412 may also communicate with one or more external devices 414 (e.g., a keyboard, a pointing device, a display 424, etc.), with one or more devices that enable a user to interact with the area monitoring device 412, and/or with any device (e.g., a network card, a modem, etc.) that enables the area monitoring device 412 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 422. The area monitoring device 412 may further communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 420. As shown in FIG. 6, the network adapter 420 communicates with the other modules of the area monitoring device 412 via the bus 418. It should be appreciated that although not shown in FIG. 6, other hardware and/or software modules may be used in conjunction with the area monitoring device 412, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage devices, among others.
The processor 416 executes various functional applications and data processing by running programs stored in the memory 428, for example implementing the area monitoring method provided by the embodiments of the present application.
An embodiment of the present application provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform an area monitoring method, comprising:
identifying, based on a multilayer convolutional neural network model, a target object in an image sent by an image collector, and determining the positional relationship between the target object and a target area in the image, wherein the target object is an object that moves under autonomous control;
if the target object is detected to be located in the target area, tracking the target object through consecutive video frame images and a target object identifier, and determining a motion trajectory of the target object;
and if the motion trajectory of the target object is detected to be within the target area, determining that the target object has intruded into the target area.
The computer storage media of the embodiments of the present application may take the form of any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the embodiments of the present application, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or device. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A method of area monitoring, performed by an edge-end device, the method comprising:
identifying, based on a multilayer convolutional neural network model, a target object in an image sent by an image collector, and determining the positional relationship between the target object and a target area in the image, wherein the target object is an object that moves under autonomous control;
if the target object is detected to be located in the target area, tracking the target object through consecutive video frame images and a target object identifier, and determining a motion trajectory of the target object;
and if the motion trajectory of the target object is detected to be within the target area, determining that the target object has intruded into the target area.
2. The method according to claim 1, wherein identifying, based on a multilayer convolutional neural network model, the target object in the image sent by the image collector, and determining the positional relationship between the target object and the target area in the image, wherein the target object is an object that moves under autonomous control, comprises:
performing target detection on the image and determining a current target object area;
matching the current target object area with a historical target object area in a historical image to determine an actual target object area;
and identifying the target object in the actual target object area.
3. The method of claim 1, wherein tracking the target object comprises:
acquiring consecutive video frame images within a preset time period, or acquiring a preset number of consecutive video frame images;
and determining the positional relationship between the target object and the target area in the consecutive video frame images.
4. The method of claim 3, wherein determining that the target object has intruded into the target area if the motion trajectory of the target object is detected to be within the target area comprises:
if the target object is located within the target area in each of the consecutive video frame images, determining that the motion trajectory of the target object is within the target area.
5. The method of claim 1, wherein after determining that the target object has intruded into the target area, the method further comprises:
if the target object is marked as being in a non-alarm state, raising an alarm for the event that the target object has intruded into the target area;
and marking the target object as being in an alarm state.
6. The method of claim 1, wherein after determining that the target object has intruded into the target area, the method further comprises:
rendering the actual target object area and the target area according to a user's configuration to obtain an intrusion record image;
and determining an intrusion video segment according to the intrusion time and duration of the target object.
7. An area monitoring apparatus, configured to be disposed at an edge-end device, the apparatus comprising:
the positional relationship determining module is used for identifying, based on a multilayer convolutional neural network model, a target object in an image sent by an image collector, and determining the positional relationship between the target object and a target area in the image, wherein the target object is an object that moves under autonomous control;
the motion trajectory determining module is used for, if the target object is detected to be located in the target area, tracking the target object through consecutive video frame images and a target object identifier, and determining a motion trajectory of the target object;
and the intrusion determination module is used for determining that the target object has intruded into the target area if the motion trajectory of the target object is detected to be within the target area.
8. The apparatus of claim 7, wherein the positional relationship determining module comprises:
a current region determining unit, configured to perform target detection on the image and determine a current target object region;
the actual region determining unit is used for matching the current target object region with a historical target object region in a historical image to determine an actual target object region;
and the target object identification unit is used for identifying a target object in the target object area.
9. An area monitoring device, the device comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the area monitoring method of any one of claims 1-6.
10. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the area monitoring method according to any one of claims 1 to 6.
CN202010841202.4A 2020-08-20 2020-08-20 Region monitoring method, device, equipment and storage medium Pending CN111738240A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010841202.4A CN111738240A (en) 2020-08-20 2020-08-20 Region monitoring method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010841202.4A CN111738240A (en) 2020-08-20 2020-08-20 Region monitoring method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111738240A true CN111738240A (en) 2020-10-02

Family

ID=72658611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010841202.4A Pending CN111738240A (en) 2020-08-20 2020-08-20 Region monitoring method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111738240A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560664A (en) * 2020-12-11 2021-03-26 清华大学苏州汽车研究院(吴江) Method, device, medium and electronic equipment for detecting intrusion of forbidden region
CN112734699A (en) * 2020-12-24 2021-04-30 浙江大华技术股份有限公司 Article state warning method and device, storage medium and electronic device
CN112784738A (en) * 2021-01-21 2021-05-11 上海云从汇临人工智能科技有限公司 Moving object detection alarm method, device and computer readable storage medium
CN113936405A (en) * 2021-10-08 2022-01-14 国能榆林能源有限责任公司 Alarm method, alarm system and storage medium
CN114005068A (en) * 2021-11-08 2022-02-01 支付宝(杭州)信息技术有限公司 Method and device for monitoring movement of goods
CN114419859A (en) * 2021-12-27 2022-04-29 湖南中联重科应急装备有限公司 Safety detection method, processor and device for supporting leg and fire fighting truck
CN114495011A (en) * 2022-02-15 2022-05-13 辽宁奥普泰通信股份有限公司 Non-motor vehicle and pedestrian illegal intrusion identification method based on target detection, storage medium and computer equipment
CN114494358A (en) * 2022-04-07 2022-05-13 中航信移动科技有限公司 Data processing method, electronic equipment and storage medium
CN114973573A (en) * 2022-06-14 2022-08-30 浙江大华技术股份有限公司 Target intrusion determination method and device, storage medium and electronic device
WO2023245833A1 (en) * 2022-06-22 2023-12-28 清华大学 Scene monitoring method and apparatus based on edge computing, device, and storage medium
CN118411503A (en) * 2024-06-26 2024-07-30 杭州海康威视系统技术有限公司 Target object behavior detection method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106981163A (en) * 2017-03-26 2017-07-25 天津普达软件技术有限公司 A kind of personnel invade abnormal event alarming method
CN107133564A (en) * 2017-03-26 2017-09-05 天津普达软件技术有限公司 A kind of frock work hat detection method
CN107645652A (en) * 2017-10-27 2018-01-30 深圳极视角科技有限公司 A kind of illegal geofence system based on video monitoring
CN110675586A (en) * 2019-09-25 2020-01-10 捻果科技(深圳)有限公司 Airport enclosure intrusion monitoring method based on video analysis and deep learning


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560664B (en) * 2020-12-11 2023-08-01 清华大学苏州汽车研究院(吴江) Method, device, medium and electronic equipment for intrusion detection in forbidden area
CN112560664A (en) * 2020-12-11 2021-03-26 清华大学苏州汽车研究院(吴江) Method, device, medium and electronic equipment for detecting intrusion of forbidden region
CN112734699A (en) * 2020-12-24 2021-04-30 浙江大华技术股份有限公司 Article state warning method and device, storage medium and electronic device
CN112784738A (en) * 2021-01-21 2021-05-11 上海云从汇临人工智能科技有限公司 Moving object detection alarm method, device and computer readable storage medium
CN112784738B (en) * 2021-01-21 2023-09-19 上海云从汇临人工智能科技有限公司 Moving object detection alarm method, moving object detection alarm device and computer readable storage medium
CN113936405A (en) * 2021-10-08 2022-01-14 国能榆林能源有限责任公司 Alarm method, alarm system and storage medium
CN114005068A (en) * 2021-11-08 2022-02-01 支付宝(杭州)信息技术有限公司 Method and device for monitoring movement of goods
CN114419859A (en) * 2021-12-27 2022-04-29 湖南中联重科应急装备有限公司 Safety detection method, processor and device for supporting leg and fire fighting truck
CN114495011A (en) * 2022-02-15 2022-05-13 辽宁奥普泰通信股份有限公司 Non-motor vehicle and pedestrian illegal intrusion identification method based on target detection, storage medium and computer equipment
CN114494358B (en) * 2022-04-07 2022-06-21 中航信移动科技有限公司 Data processing method, electronic equipment and storage medium
CN114494358A (en) * 2022-04-07 2022-05-13 中航信移动科技有限公司 Data processing method, electronic equipment and storage medium
CN114973573A (en) * 2022-06-14 2022-08-30 浙江大华技术股份有限公司 Target intrusion determination method and device, storage medium and electronic device
WO2023245833A1 (en) * 2022-06-22 2023-12-28 清华大学 Scene monitoring method and apparatus based on edge computing, device, and storage medium
CN118411503A (en) * 2024-06-26 2024-07-30 杭州海康威视系统技术有限公司 Target object behavior detection method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111738240A (en) Region monitoring method, device, equipment and storage medium
US10242282B2 (en) Video redaction method and system
WO2019179024A1 (en) Method for intelligent monitoring of airport runway, application server and computer storage medium
US11367330B2 (en) Information processing system, method and computer readable medium for determining whether moving bodies appearing in first and second videos are the same or not using histogram
US20150310297A1 (en) Systems and methods for computer vision background estimation using foreground-aware statistical models
US9292939B2 (en) Information processing system, information processing method and program
Zabłocki et al. Intelligent video surveillance systems for public spaces–a survey
Lee et al. Real-time illegal parking detection in outdoor environments using 1-D transformation
CN109766867B (en) Vehicle running state determination method and device, computer equipment and storage medium
US20200005613A1 (en) Video Surveillance Method Based On Object Detection and System Thereof
US10210392B2 (en) System and method for detecting potential drive-up drug deal activity via trajectory-based analysis
CN109544870B (en) Alarm judgment method for intelligent monitoring system and intelligent monitoring system
CN111079621B (en) Method, device, electronic equipment and storage medium for detecting object
CN112776856A (en) Track foreign matter intrusion monitoring method, device and system and monitoring host equipment
CN112836683B (en) License plate recognition method, device, equipment and medium for portable camera equipment
CN112733598A (en) Vehicle law violation determination method and device, computer equipment and storage medium
CN113673311A (en) Traffic abnormal event detection method, equipment and computer storage medium
KR101454644B1 (en) Loitering Detection Using a Pedestrian Tracker
CN111460917B (en) Airport abnormal behavior detection system and method based on multi-mode information fusion
CN113538513A (en) Method, device and equipment for controlling access of monitored object and storage medium
CN102789645A (en) Multi-objective fast tracking method for perimeter precaution
CN116993265A (en) Intelligent warehouse safety management system based on Internet of things
US20230360402A1 (en) Video-based public safety incident prediction system and method therefor
CN116012360A (en) Method, apparatus, device, medium, and program product for detecting legacy items
CN113869163B (en) Target tracking method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201002