CN110874953A - Area alarm method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN110874953A
CN110874953A (application CN201810994283.4A)
Authority
CN
China
Prior art keywords
image
area
monitored object
target
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810994283.4A
Other languages
Chinese (zh)
Other versions
CN110874953B (en)
Inventor
陈晓
童俊艳
任烨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201810994283.4A
Publication of CN110874953A
Application granted
Publication of CN110874953B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 3/00 - Traffic control systems for marine craft
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/54 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 10/00 - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
    • Y02A 10/40 - Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Software Systems (AREA)
  • Ocean & Marine Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Alarm Systems (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Burglar Alarm Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an area alarm method and device, an electronic device and a readable storage medium. The method comprises the following steps: inputting an acquired current image into a trained neural network model, so that the neural network model identifies a monitored object in the current image and outputs position information of the monitored object; determining, according to the position information, a target area where the monitored object is located in the current image; and determining whether the target area intersects a pre-designated alarm area, and if so, determining the monitored object to be a target monitored object and outputting an alarm signal. Because the invention uses a neural network model to detect the monitored object, detection accuracy is higher, which in turn improves the accuracy of the area alarm method.

Description

Area alarm method and device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of monitoring technologies, and in particular, to a method and an apparatus for area alarm, an electronic device, and a readable storage medium.
Background
To strengthen maritime traffic management, maintain the maritime traffic environment and order, and ensure the safety of ships, facilities and personnel, ships are not allowed to pass through certain restricted navigation areas on the sea surface, such as areas used for engineering construction or military exercises. These no-navigation areas therefore need to be monitored, so that passing ships are detected and an alarm is raised to alert the relevant personnel.
In the prior art, an image containing the designated area is acquired, and ships in the image are detected by a pixel-value segmentation method according to the shape of the ship. However, the accuracy of this kind of ship detection is low, and missed detections or false detections occur easily.
Disclosure of Invention
To solve the problems in the related art, the invention provides an area alarm method and device, an electronic device and a readable storage medium that overcome the deficiencies of the prior art.
According to a first aspect of embodiments of the present invention, there is provided an area alarm method, the method including:
inputting the acquired current image into a trained neural network model so as to identify a monitored object from the current image by the neural network model and output position information of the monitored object;
determining a target area of the monitored object in the current image according to the position information of the monitored object;
and determining whether an intersection exists between the target area and a pre-designated alarm area, and if so, determining the monitored object to be a target monitored object and outputting an alarm signal.
In one embodiment of the present invention, the identifying a monitoring object from the current image by the neural network model and outputting the position information of the monitoring object includes:
carrying out convolution processing on the input current image by the convolution layer of the neural network model to obtain high-dimensional characteristic data and outputting the high-dimensional characteristic data to the pooling layer of the neural network model;
performing data dimensionality reduction on the input high-dimensional feature data by the pooling layer of the neural network model to obtain low-dimensional feature data and outputting the low-dimensional feature data to the full-connection layer of the neural network model;
and identifying a target object and position information of the target object in the current image by a full connection layer of the neural network model according to the input low-dimensional feature data, and outputting the position information of the target object.
In an embodiment of the present invention, before the determining whether there is an intersection between the target area and a pre-specified alarm area, the method further includes:
correcting the size of the target area.
In an embodiment of the present invention, the correcting the size of the target area includes:
determining a reference monitoring object matched with the monitoring object in the current image in the previous N images of the current image, wherein N is an integer greater than or equal to 2;
acquiring at least one reference area where the reference monitoring object is located in the previous N images;
averaging the sizes of the at least one reference area to obtain an average size;
scaling the target region to the average size.
In one embodiment of the invention, the method further comprises:
and counting the number of the target monitoring objects in the set period.
In an embodiment of the present invention, the counting the number of target monitoring objects in the set period includes:
counting the number T1 of target monitoring objects in the first image acquired in a set period;
taking the next image of the first image as an image to be processed;
removing target monitoring objects matched with the feature information in the previous image from the image to be processed, and counting the number T2 of the remaining target monitoring objects in the image to be processed;
and determining whether the image to be processed is the last image acquired in the set period; if so, determining the sum of T1 and T2 as the number of target monitored objects appearing in the alarm area in the set period; if not, taking the next image acquired in the set period as the image to be processed and returning to the operation of excluding, from the image to be processed, the target monitored objects whose feature information matches the previous image.
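The counting procedure in the steps above might be sketched as follows. This is a simplified sketch under assumptions not stated in the patent: each image is reduced to a list of per-object feature tuples, and "matching the feature information of the previous image" is simplified to exact equality of those tuples (the actual method matches appearance and motion features).

```python
# Simplified sketch of the period-counting scheme: T1 objects in the first
# image, plus, for every later image, the count of objects whose feature
# information does NOT match the previous image (the "remaining" T2 objects).
# Feature tuples and equality-based matching are illustrative assumptions.

def count_targets_in_period(images):
    """images: one list of per-object feature tuples per image in the period."""
    if not images:
        return 0
    total = len(images[0])                     # T1: targets in the first image
    for prev, curr in zip(images, images[1:]):
        seen = set(prev)
        new_objects = [obj for obj in curr if obj not in seen]
        total += len(new_objects)              # T2: targets new in this image
    return total

period = [
    [("red", 30), ("blue", 25)],               # first image: T1 = 2
    [("red", 30), ("green", 40)],              # one object not seen before
    [("green", 40)],                           # no new objects
]
print(count_targets_in_period(period))         # 3
```

Excluding objects already matched in the previous image prevents a ship that stays in the alarm area across many frames from being counted more than once.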
According to a second aspect of embodiments of the present invention, there is provided an area alarm apparatus, the apparatus comprising:
the position information acquisition module is used for inputting the acquired current image into a trained neural network model so as to identify a monitored object from the current image by the neural network model and output the position information of the monitored object;
the target area determining module is used for determining a target area of the monitored object in the current image according to the position information of the monitored object;
and the alarm module is used for judging whether an intersection exists between the target area and a pre-designated alarm area, if so, determining that the monitored object is a target monitored object, and outputting an alarm signal.
In an embodiment of the present invention, the location information acquiring module is specifically configured to:
carrying out convolution processing on the input current image by the convolution layer of the neural network model to obtain high-dimensional characteristic data and outputting the high-dimensional characteristic data to the pooling layer of the neural network model;
performing data dimensionality reduction on the input high-dimensional feature data by the pooling layer of the neural network model to obtain low-dimensional feature data and outputting the low-dimensional feature data to the full-connection layer of the neural network model;
and identifying a target object and position information of the target object in the current image by a full connection layer of the neural network model according to the input low-dimensional feature data, and outputting the position information of the target object.
In an embodiment of the present invention, the apparatus further includes a correction module, configured to correct a size of the target area before the determining whether there is an intersection between the target area and a pre-specified alarm area.
In one embodiment of the invention, the correction module comprises:
a reference monitoring object determining module, configured to determine a reference monitoring object that matches the monitoring object in the current image in the first N images of the current image, where N is an integer greater than or equal to 2;
a reference region obtaining module, configured to obtain at least one reference region where the reference monitored object is located in the first N images;
the average size acquisition module is used for carrying out average value operation on the size of at least one reference area to obtain an average size;
a scaling module to scale the target area to the average size.
In an embodiment of the present invention, the apparatus further includes a quantity counting module, configured to count the quantity of the target monitoring objects in the set period.
In one embodiment of the present invention, the quantity statistics module comprises:
the first quantity counting unit is used for counting the quantity T1 of the target monitoring objects in the first image acquired in the set period;
the image switching unit is used for taking the next image of the first image as an image to be processed;
a second quantity counting unit, configured to exclude, from the to-be-processed image, a target monitoring object that matches the feature information in the previous image, and count the number T2 of remaining target monitoring objects in the to-be-processed image;
and the judging unit is used for judging whether the image to be processed is the last image acquired in a set period, if so, determining the sum of the T1 and the T2 as the number of target monitoring objects appearing in the alarm area in the set period, and if not, taking the next image of the image to be processed acquired in the set period as the image to be processed and returning to the operation of eliminating the target monitoring objects with the same characteristic information as the previous image from the image to be processed.
According to a third aspect of embodiments of the present invention, there is provided an electronic device, comprising a processor and a memory; the memory stores a program that can be called by the processor; wherein the processor, when executing the program, implements the area alarm method as in any one of the preceding embodiments.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the area alarm method as in any one of the preceding embodiments.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects:
according to the area alarm method, the area alarm device, the electronic equipment and the readable storage medium, the trained neural network model is used for detecting the monitored object in the current image, and the alarm signal is output when the intersection exists between the target area where the monitored object is located and the alarm area. Compared with the prior art, the method has the advantages that the probability of false detection or missed detection is reduced, the detection rate and the detection accuracy are higher, and the reliability of the regional alarm method can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic view of an application scenario of an area alarm method according to an exemplary embodiment of the present invention;
FIG. 2A is a flow chart illustrating a method of area alerting in accordance with an exemplary embodiment of the present invention;
FIG. 2B is a schematic diagram of a trained neural network model according to an exemplary embodiment of the present invention;
FIG. 2C is a flow chart illustrating another area alarm method in accordance with an exemplary embodiment of the present invention;
FIG. 2D is a flow chart illustrating yet another area alarm method in accordance with an exemplary embodiment of the present invention;
FIG. 3 is a flow chart illustrating yet another area alarm method in accordance with an exemplary embodiment of the present invention;
FIG. 4 is a flow chart illustrating yet another area alarm method in accordance with an exemplary embodiment of the present invention;
FIG. 5 is a block diagram illustrating an area alarm apparatus in accordance with an exemplary embodiment of the present invention;
fig. 6 is a schematic structural diagram of a computer device according to an exemplary embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly second information may be referred to as first information, without departing from the scope of the present invention. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon" or "in response to a determination".
In order to make the description of the present invention clearer and more concise, some technical terms in the present invention are explained below:
a neural network: a technique abstracted from the structure of the brain, in which a large number of simple functions are connected to form a network system capable of fitting very complex functional relations; typical operations include convolution/deconvolution, activation, pooling, addition, subtraction, multiplication, division, channel merging and element rearrangement. Training the network with specific input and output data and adjusting the connections within it allows the neural network to learn a mapping that fits the relation between the inputs and the outputs.
The area alarm method and the area alarm device can be applied to electronic equipment such as camera equipment and can also be applied to a server.
Fig. 1 is a schematic view of an exemplary application scenario of the area alarm method provided in the present application. Referring to fig. 1, in the application scenario, the area alarm method provided by the present application is applied to a server, and the server may perform analysis based on images acquired by a camera to realize monitoring.
In this embodiment, the camera may be a thermal imaging camera. A thermal imaging camera forms images from the infrared radiation emitted by objects, and can therefore monitor a target object even at night, in rainy or foggy weather, or in other low-light environments. Images collected by the thermal imaging camera can be shown on a display screen and, according to user settings, can be displayed as pseudo-color images.
Fig. 2A is a flowchart illustrating a zone alarm method according to an exemplary embodiment, which may include the following steps 210 to 230, as shown in fig. 2A:
in step 210, the acquired current image is input to the trained neural network model, so that the neural network model identifies the monitoring object from the current image and outputs the position information of the monitoring object.
The current image contains a monitoring object, and the monitoring object is an object expected to be monitored so as to give an alarm according to the position of the object. In one exemplary embodiment, the monitored object may be a ship traveling above the sea surface.
In one embodiment, the current image is in RGB format, and each pixel point of the current image includes sub-pixel points of three different colors (R, G, B).
In one embodiment, the current image may be a video frame image obtained by splitting a video captured by a camera, or may be a still image captured by a camera.
In one embodiment, as shown in FIG. 2B, the trained neural network model may include a convolutional layer, a pooling layer, and a fully-connected layer. Correspondingly, as shown in fig. 2C, identifying a monitoring object from the current image by the neural network model and outputting the position information of the monitoring object may include steps 211 to 213 as follows.
In step 211, the convolution layer of the neural network model performs convolution processing on the input current image to obtain high-dimensional feature data, and outputs the high-dimensional feature data to the pooling layer of the neural network model.
In one embodiment, the convolution layer of the neural network model performs a convolution operation on the input current image to obtain high-dimensional feature data. Because of its high dimensionality, this feature data describes the image in more detail, but it may also include feature data that is redundant or irrelevant to the target object. In one exemplary embodiment, the high-dimensional feature data is depth feature data of the monitored object.
In step 212, the pooling layer of the neural network model performs data dimension reduction on the input high-dimensional feature data to obtain low-dimensional feature data, and outputs the low-dimensional feature data to the full connection layer of the neural network model.
The pooling layer of the neural network model performs dimensionality reduction on the received high-dimensional feature data, effectively removing irrelevant and redundant feature data, and outputs the reduced low-dimensional feature data, thereby improving the efficiency with which the neural network model identifies the monitored object. The dimension of the low-dimensional feature data is smaller than that of the high-dimensional feature data.
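As a minimal illustration of this dimensionality-reduction step, the sketch below applies 2x2 max pooling to a small single-channel feature map in pure Python. This is an illustration only: real pooling layers operate on multi-channel tensors produced by the convolution layer, and the numbers here are invented.

```python
# Illustrative sketch: 2x2 max pooling halves each spatial dimension of a
# feature map, keeping the strongest response in each 2x2 block and
# discarding the rest - the "data dimensionality reduction" of the pooling
# layer described above. Single-channel, pure-Python toy version.

def max_pool_2x2(feature_map):
    """Downsample a 2-D feature map by taking the max of each 2x2 block."""
    h, w = len(feature_map), len(feature_map[0])
    pooled = []
    for i in range(0, h - 1, 2):
        row = []
        for j in range(0, w - 1, 2):
            block = (feature_map[i][j], feature_map[i][j + 1],
                     feature_map[i + 1][j], feature_map[i + 1][j + 1])
            row.append(max(block))
        pooled.append(row)
    return pooled

features = [
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 6],
    [2, 2, 7, 8],
]
print(max_pool_2x2(features))  # [[4, 2], [2, 8]]
```

A 4x4 map becomes 2x2: the feature data passed on to the fully connected layer is a quarter of the original size, which is why pooling improves recognition efficiency.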
In one embodiment, the low-dimensional feature data reflects local features of the monitored object. For example, it may include data that characterizes the target object, such as at least one of the color, position and contour of the monitored object.
In step 213, the fully connected layer of the neural network model identifies the monitored object and the position information of the monitored object in the current image according to the input low-dimensional feature data, and outputs the position information of the monitored object.
The fully connected layer identifies each object in the current image from the low-dimensional feature data and outputs category information and position information for each object. The category information may be represented numerically; for example, 1 represents a monitored object and 0 a non-monitored object. In an exemplary embodiment, the current image is an acquired image of the sea surface, and the category information output by the neural network includes 1 and 0, where 1 represents a vessel on the sea surface and 0 a non-vessel object on the sea surface. The position information of the monitored object can then be obtained from the category information and position information output by the neural network model.
In one embodiment, the fully connected layer may also output a confidence level of the monitored object. In an exemplary embodiment, the fully connected layer combines the position information, the category information and the confidence level and outputs the combined result.
In one embodiment, when the neural network model outputs confidence levels, the category information and the confidence level may be combined to determine the monitored objects in the current image. For example, an object whose confidence is greater than a preset value and whose category information indicates a monitored object may be selected as a finally determined monitored object. The preset value may be set by the user and may be, for example, 70% or 80%.
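The selection rule just described can be sketched as follows. The function name, the layout of each detection tuple and the default threshold are assumptions for illustration, not details taken from the patent.

```python
# Illustrative sketch: combine category information and confidence output by
# the fully connected layer to select the finally determined monitored
# objects. Category 1 = monitored object (e.g. a vessel), 0 = non-monitored;
# the confidence threshold is the user-set preset value. Tuple layout is an
# illustrative assumption.

def select_monitored_objects(detections, threshold=0.8):
    """Keep detections whose category is 1 and whose confidence exceeds
    the preset value. Each detection is (category, confidence, position)."""
    return [d for d in detections
            if d[0] == 1 and d[1] > threshold]

detections = [
    (1, 0.92, (10, 10, 50, 40)),   # vessel, high confidence -> kept
    (1, 0.55, (80, 20, 120, 60)),  # vessel, low confidence  -> dropped
    (0, 0.97, (30, 70, 60, 90)),   # non-vessel              -> dropped
]
print(select_monitored_objects(detections))  # [(1, 0.92, (10, 10, 50, 40))]
```

Filtering on both category and confidence discards low-quality detections before the alarm-area check, which is what reduces false alarms.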
In one embodiment, the detection sensitivity of the neural network model can be set by a user according to actual needs during training.
Of course, the neural network model may also include other types of computational layers such as an activation computational layer, and the like. The specific number and interconnection relationship of the computing units of each layer in the neural network model are not limited.
In the embodiment of the invention, a neural network model is used to detect the monitored object in the current image; owing to the robustness and generalization ability of the neural network model, monitored objects of various shapes can be detected.
In one embodiment, the position information of the monitoring object output by the neural network model is the position information of the smallest rectangular region including the monitoring object, and may be, for example, coordinates of four vertices of the rectangular region.
In step 220, a target area of the monitored object in the current image is determined according to the position information of the monitored object.
Since the position information of the monitored object is detected from the current image, the image area corresponding to that position information is the target area where the monitored object is located. Locating the area to which the position information points in the current image therefore yields the target area where the monitored object is located in the current video frame.
In one embodiment, the position information of the monitored object is coordinates of four vertices of a rectangular area, and the rectangular area where the monitored object is located is obtained in the current image according to the coordinates of the four vertices.
In one embodiment, after determining the target area where the monitoring object is located in the current image, a target frame of the target area may be displayed in the display screen, so that the user may more intuitively observe the area location where the monitoring object is located.
In step 230, it is determined whether an intersection exists between the target area and a pre-designated alarm area, and if so, it is determined that the monitored object is a target monitored object, and an alarm signal is output.
In one embodiment, before determining whether an intersection exists between the target area and the pre-designated alarm area, the method further includes: correcting the size of the target area.
In the embodiment of the present invention, if the camera noise is large or the monitored object is partially occluded when the image is captured, the detected target area where the monitored object is located in the current image may be too small. Correcting the size of the target area before judging whether the monitored object is a target monitored object improves the accuracy of the area alarm method.
In one embodiment, as shown in fig. 2D, the correcting the size of the target region may include the following steps 241 to 244.
In step 241, a reference monitoring object matching the monitoring object in the current image is determined in the first N images of the current image, where N is an integer greater than or equal to 2.
A multi-target tracking algorithm may be employed to determine, in the previous N images, a reference monitored object that matches the monitored object of the current image. Specifically, the monitored objects in the previous N images are used as tracking objects; each monitored object in the current image is matched against the feature information of these tracking objects, and a monitored object in the current image that matches the feature information of a tracking object is taken as the tracking result, so that the tracked object can be determined to be the reference monitored object for that result. The feature information includes appearance features and motion features: the appearance features may include the color, aspect ratio, height and contour of the target monitored object, and the motion features include its direction and speed of motion.
In step 242, at least one reference region where the reference monitoring object is located is obtained in the first N images.
And for each monitored object, respectively acquiring a reference area where a reference monitored object of the monitored object is located in the previous N images.
Specifically, after the reference monitoring object is determined in the previous N images, the position information of the reference monitoring object is obtained, and then the reference area where the reference monitoring object is located is obtained according to the position information of the reference monitoring object.
In step 243, the size of at least one of the reference regions is averaged to obtain an average size.
In one embodiment, the reference area is a rectangular area, and the size of the reference area includes at least the length and width of the rectangle. The length of the rectangular area can be calculated from the coordinates of two vertices along the length direction, and the width from the coordinates of two vertices along the width direction. For each monitored object, the lengths and widths of the at least one corresponding reference area are averaged respectively to obtain an average length and an average width, i.e. the average size.
In step 244, the target region is scaled to the average size.
When the target area is scaled, its center position is kept unchanged while its length and width are adjusted so that the length equals the average length and the width equals the average width.
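Keeping the center fixed while resizing can be sketched as below, again assuming the `(x1, y1, x2, y2)` corner representation used for illustration:

```python
def scale_to_average(region, avg_len, avg_wid):
    """Scale rectangle (x1, y1, x2, y2) to the average size while
    keeping its center position unchanged."""
    x1, y1, x2, y2 = region
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2  # center stays fixed
    return (cx - avg_len / 2, cy - avg_wid / 2,
            cx + avg_len / 2, cy + avg_wid / 2)
```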
After the target area is scaled, it is determined whether an intersection exists between the scaled target area and the pre-designated alarm area in the current image. If so, the monitored object can be determined to be a target monitored object, and an alarm signal is output. Correcting the size of the target area in this way avoids the situation where noise yields an inaccurate target-area size and thereby degrades the accuracy of identifying the target monitored object.
In one embodiment, the presence of an intersection between the target area and the pre-designated alarm area means that the target area is partially or fully within the alarm area.
The alarm area is an area to be monitored; it may be, for example, an area delimited on the sea surface that no vessel is allowed to enter.
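For axis-aligned rectangles, the intersection test in step 360 reduces to a few comparisons. The sketch below assumes both the target area and the alarm area are given as `(x1, y1, x2, y2)` rectangles; the patent does not restrict the alarm area to a rectangle, so this is the simplest illustrative case:

```python
def has_intersection(target, alarm):
    """True if the two rectangles (x1, y1, x2, y2) overlap, i.e. the
    target area is partially or fully within the alarm area."""
    tx1, ty1, tx2, ty2 = target
    ax1, ay1, ax2, ay2 = alarm
    # Overlap exists unless one rectangle lies entirely to one side
    # of the other on either axis.
    return tx1 < ax2 and ax1 < tx2 and ty1 < ay2 and ay1 < ty2
```

For a polygonal alarm area, the same decision would instead use a polygon-intersection routine from a geometry library.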
In one embodiment, the output alarm signal may be a beep or a flashing light indicating the presence of a target monitored object. Alternatively, it may be a prompt message sent to a worker's terminal device, such as a mobile phone, to alert the relevant personnel.
According to the area alarm method provided by this embodiment of the invention, the trained neural network model detects the monitored object in the current image, and an alarm signal is output when the target area where the monitored object is located intersects the alarm area. Compared with the prior art, the probability of false detection or missed detection is reduced, the detection rate and detection accuracy are higher, and the reliability of the area alarm method is improved.
An example combining the embodiments above is described below.
As shown in fig. 3, fig. 3 is a flowchart illustrating still another area alarm method according to an exemplary embodiment of the present invention, which may include the following steps 310 to 360:
in step 310, the acquired current image is input into the trained neural network model, and the convolution layer of the neural network model performs convolution processing on the input current image to obtain high-dimensional feature data, which is output to the pooling layer of the neural network model.
In step 320, the pooling layer of the neural network model performs data dimension reduction on the input high-dimensional feature data to obtain low-dimensional feature data, which is output to the fully connected layer of the neural network model.
In step 330, the fully connected layer of the neural network model identifies the monitored object and its position information in the current image according to the input low-dimensional feature data, and outputs the position information of the monitored object.
In step 340, a target area where the monitored object is located in the current image is determined according to the position information of the monitored object.
In step 350, the size of the target area is corrected.
In step 360, it is determined whether an intersection exists between the target area and a pre-designated alarm area, and if so, it is determined that the monitored object is a target monitored object, and an alarm signal is output.
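The convolution, pooling, and fully connected stages of steps 310 to 330 can be sketched in plain NumPy as follows. This is a toy forward pass, not the patented model: the kernel count, pooling size, random weights, and five-element output (read here as a confidence score plus bounding-box values) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernels):
    """Convolution layer (valid padding): extracts high-dimensional
    feature maps from the input image, followed by ReLU."""
    kh, kw = kernels.shape[1:]
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((len(kernels), h, w))
    for k, ker in enumerate(kernels):
        for i in range(h):
            for j in range(w):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return np.maximum(out, 0)

def max_pool(maps, size=2):
    """Pooling layer: reduces the dimensionality of the feature data."""
    c, h, w = maps.shape
    return maps[:, :h // size * size, :w // size * size] \
        .reshape(c, h // size, size, w // size, size).max(axis=(2, 4))

def fully_connected(features, weights, bias):
    """Fully connected layer: maps the low-dimensional features to the
    outputs (here, an object score plus position information)."""
    return weights @ features.ravel() + bias

image = rng.random((8, 8))                 # stand-in for the current image
kernels = rng.random((4, 3, 3))
feat = max_pool(conv2d(image, kernels))    # high-dim -> low-dim features
w = rng.random((5, feat.size))
b = rng.random(5)
out = fully_connected(feat, w, b)          # e.g. [score, x, y, width, height]
```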
According to the area alarm method provided by this embodiment of the invention, the trained neural network model detects the monitored object in the current image, and an alarm signal is output when the target area where the monitored object is located intersects the alarm area. Compared with the prior art, the probability of false detection or missed detection is reduced, the detection rate and detection accuracy are higher, and the reliability of the area alarm method is improved.
Fig. 4 is a flowchart illustrating yet another area alarm method according to an exemplary embodiment of the present invention, which can be used to count the number of target monitoring objects in a set period. As shown in fig. 4, the method may include the following steps 410 to 450:
in step 410, the number T1 of target monitoring objects in the first image collected in the set period is counted.
Before step 410, the target monitoring objects in the images collected in the set period are detected. The detection method may be the one provided in the embodiments above and is not repeated here.
The set period may be configured by the user, for example, as one day or one hour.
In step 420, the next image of the first image is taken as the image to be processed.
In step 430, target monitoring objects whose feature information matches that of objects in the previous image are excluded from the image to be processed, and the number T2 of the remaining target monitoring objects in the image to be processed is counted.
In step 440, it is determined whether the image to be processed is the last image acquired in a set period.
In step 450, if so, the sum of T1 and T2 is determined as the number of target monitoring objects appearing in the alarm area in the set period; if not, the next image collected in the set period is taken as the image to be processed, and the process returns to step 430.
In this step, if three or more images are acquired in the set period, multiple values of T2 are obtained, and the number of target monitoring objects appearing in the alarm area in the set period is the sum of T1 and all the T2 values.
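The counting loop of steps 410 to 450 can be sketched as below. The per-image target lists and the `same` predicate (which decides whether two feature records describe one object) are assumptions for illustration; the patent leaves the matching criterion to the feature information described earlier.

```python
def count_targets(images_targets, same):
    """images_targets: per-image lists of target-object feature records
    collected in the set period, in acquisition order.
    `same(a, b)` returns True if two records describe the same object.
    Returns T1 + the sum of all T2 values."""
    if not images_targets:
        return 0
    total = len(images_targets[0])  # T1: targets in the first image
    for prev, cur in zip(images_targets, images_targets[1:]):
        # T2: targets in this image that match nothing in the previous image
        total += sum(not any(same(c, p) for p in prev) for c in cur)
    return total
```

Note that an object absent from one image but reappearing later is counted again; the scheme only deduplicates against the immediately preceding image, as in steps 430 to 450.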
According to the area alarm method provided by the embodiment of the invention, the number of the target monitoring objects appearing in the alarm area in the set period can be counted by collecting and analyzing the images in the designated area in the set period.
Corresponding to the embodiment of the area alarm method, the application also provides an embodiment of an area alarm device.
Fig. 5 is a block diagram of an area alarm device according to an embodiment of the present invention. The device provided in this embodiment may be implemented by software, hardware, or a combination of both. As shown in fig. 5, the area alarm device provided in this embodiment may include:
a position information obtaining module 510, configured to input the acquired current image into a trained neural network model, so that the neural network model identifies a monitored object from the current image and outputs position information of the monitored object;
a target area determining module 520, configured to determine, according to the position information of the monitored object, a target area where the monitored object is located in the current image;
and the alarm module 530 is configured to determine whether an intersection exists between the target area and a pre-designated alarm area, determine that the monitored object is a target monitored object if the intersection exists, and output an alarm signal.
In an embodiment of the present invention, the location information obtaining module 510 is specifically configured to:
carrying out convolution processing on the input current image by the convolution layer of the neural network model to obtain high-dimensional characteristic data and outputting the high-dimensional characteristic data to the pooling layer of the neural network model;
performing data dimensionality reduction on the input high-dimensional feature data by the pooling layer of the neural network model to obtain low-dimensional feature data and outputting the low-dimensional feature data to the full-connection layer of the neural network model;
and identifying the monitored object and the position information of the monitored object in the current image by the full connection layer of the neural network model according to the input low-dimensional feature data, and outputting the position information of the monitored object.
In an embodiment of the present invention, the apparatus further includes a correction module, configured to correct a size of the target area before the determining whether there is an intersection between the target area and a pre-specified alarm area.
In one embodiment of the invention, the correction module comprises:
a reference monitoring object determining module, configured to determine a reference monitoring object that matches the monitoring object in the current image in the first N images of the current image, where N is an integer greater than or equal to 2;
a reference region obtaining module, configured to obtain at least one reference region where the reference monitored object is located in the first N images;
the average size acquisition module is used for carrying out average value operation on the size of at least one reference area to obtain an average size;
a scaling module to scale the target area to the average size.
In an embodiment of the present invention, the apparatus further includes a quantity counting module, configured to count the quantity of the target monitoring objects in the set period.
In one embodiment of the present invention, the quantity statistics module comprises:
the first quantity counting unit is used for counting the quantity T1 of the target monitoring objects in the first image acquired in the set period;
the image switching unit is used for taking the next image of the first image as an image to be processed;
a second quantity counting unit, configured to exclude, from the to-be-processed image, a target monitoring object that matches the feature information in the previous image, and count the number T2 of remaining target monitoring objects in the to-be-processed image;
and the judging unit is used for judging whether the image to be processed is the last image acquired in a set period, if so, determining the sum of the T1 and the T2 as the number of target monitoring objects appearing in the alarm area in the set period, and if not, taking the next image of the image to be processed acquired in the set period as the image to be processed and returning to the operation of eliminating the target monitoring objects with the same characteristic information as the previous image from the image to be processed.
The area alarm device provided by the embodiment of the invention detects the monitored object in the current image through the trained neural network model, and outputs an alarm signal when the target area where the monitored object is located and the alarm area have intersection. Compared with the prior art, the method has the advantages that the probability of false detection or missed detection is reduced, the detection rate and the detection accuracy are higher, and the reliability of the regional alarm method can be improved.
It should be noted that: the area alarm device provided in the above embodiment is only illustrated by dividing the functional modules, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the area alarm device and the area alarm method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments and are not described herein again.
Embodiments of an electronic device and a computer-readable storage medium corresponding to the area alarm device are also provided. Taking a software implementation as an example, the device, as a logical device, is formed by the processor of the electronic device on which it is located reading the corresponding computer program instructions from nonvolatile memory into memory and running them. In terms of hardware, as shown in fig. 6, fig. 6 is a hardware structure diagram of an electronic device on which an area alarm device is located according to an exemplary embodiment of the present invention; in addition to the processor 610, memory 630, interface 620, and nonvolatile memory 640 shown in fig. 6, the electronic device may also include other hardware according to its actual function, which is not described again.
The present invention also provides a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the area alarm method as described in any one of the preceding embodiments.
The present invention may take the form of a computer program product embodied on one or more storage media including, but not limited to, disk storage, CD-ROM, optical storage, and the like, having program code embodied therein. Machine-readable storage media include both permanent and non-permanent, removable and non-removable media, and the storage of information may be accomplished by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of machine-readable storage media include, but are not limited to: phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technologies, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium, may be used to store information that may be accessed by a computing device.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (14)

1. An area alarm method, characterized in that the method comprises:
inputting the acquired current image into a trained neural network model so as to identify a monitored object from the current image by the neural network model and output position information of the monitored object;
determining a target area of the monitored object in the current image according to the position information of the monitored object;
and judging whether intersection exists between the target area and a pre-designated alarm area, if so, determining the monitored object as a target monitored object, and outputting an alarm signal.
2. The area alarm method of claim 1, wherein the identifying and outputting location information of the monitored object from the current image by the neural network model comprises:
carrying out convolution processing on the input current image by the convolution layer of the neural network model to obtain high-dimensional characteristic data and outputting the high-dimensional characteristic data to the pooling layer of the neural network model;
performing data dimensionality reduction on the input high-dimensional feature data by the pooling layer of the neural network model to obtain low-dimensional feature data and outputting the low-dimensional feature data to the full-connection layer of the neural network model;
and identifying the monitored object and the position information of the monitored object in the current image by the full connection layer of the neural network model according to the input low-dimensional feature data, and outputting the position information of the monitored object.
3. The area alarm method of claim 1, wherein prior to determining whether there is an intersection between the target area and a pre-designated alarm area, the method further comprises:
correcting the size of the target area.
4. The area alarm method of claim 3, wherein said correcting the size of the target area comprises:
determining a reference monitoring object matched with the monitoring object in the current image in the previous N images of the current image, wherein N is an integer greater than or equal to 2;
acquiring at least one reference area where the reference monitoring object is located in the previous N images;
carrying out average value operation on the size of at least one reference area to obtain an average size;
scaling the target region to the average size.
5. The area alarm method of claim 1, further comprising:
and counting the number of the target monitoring objects in the set period.
6. The area alarm method of claim 5, wherein the counting the number of target monitoring objects in a set period comprises:
counting the number T1 of target monitoring objects in the first image acquired in a set period;
taking the next image of the first image as an image to be processed;
removing target monitoring objects matched with the feature information in the previous image from the image to be processed, and counting the number T2 of the remaining target monitoring objects in the image to be processed;
and judging whether the image to be processed is the last image acquired in a set period, if so, determining the sum of the T1 and the T2 as the number of target monitoring objects appearing in the alarm area in the set period, and if not, taking the next image of the image to be processed acquired in the set period as the image to be processed and returning to the operation of excluding the target monitoring objects with the same characteristic information as the previous image from the image to be processed.
7. An area alarm device, the device comprising:
the position information acquisition module is used for inputting the acquired current image into a trained neural network model so as to identify a monitored object from the current image by the neural network model and output the position information of the monitored object;
the target area determining module is used for determining a target area of the monitored object in the current image according to the position information of the monitored object;
and the alarm module is used for judging whether an intersection exists between the target area and a pre-designated alarm area, if so, determining that the monitored object is a target monitored object, and outputting an alarm signal.
8. The area alarm device of claim 7, wherein the location information acquisition module is specifically configured to:
carrying out convolution processing on the input current image by the convolution layer of the neural network model to obtain high-dimensional characteristic data and outputting the high-dimensional characteristic data to the pooling layer of the neural network model;
performing data dimensionality reduction on the input high-dimensional feature data by the pooling layer of the neural network model to obtain low-dimensional feature data and outputting the low-dimensional feature data to the full-connection layer of the neural network model;
and identifying the monitored object and the position information of the monitored object in the current image by the full connection layer of the neural network model according to the input low-dimensional feature data, and outputting the position information of the monitored object.
9. The area alarm apparatus of claim 7, further comprising a correction module configured to correct the size of the target area prior to said determining whether there is an intersection between the target area and a pre-designated alarm area.
10. The area alarm apparatus of claim 9, wherein the correction module comprises:
a reference monitoring object determining module, configured to determine a reference monitoring object that matches the monitoring object in the current image in the first N images of the current image, where N is an integer greater than or equal to 2;
a reference region obtaining module, configured to obtain at least one reference region where the reference monitored object is located in the first N images;
the average size acquisition module is used for carrying out average value operation on the size of at least one reference area to obtain an average size;
a scaling module to scale the target area to the average size.
11. The area alarm device of claim 7, further comprising a number counting module for counting the number of target monitoring objects in a set period.
12. The area alarm apparatus of claim 11, wherein the quantity statistics module comprises:
the first quantity counting unit is used for counting the quantity T1 of the target monitoring objects in the first image acquired in the set period;
the image switching unit is used for taking the next image of the first image as an image to be processed;
a second quantity counting unit, configured to exclude, from the to-be-processed image, a target monitoring object that matches the feature information in the previous image, and count the number T2 of remaining target monitoring objects in the to-be-processed image;
and the judging unit is used for judging whether the image to be processed is the last image acquired in a set period, if so, determining the sum of the T1 and the T2 as the number of target monitoring objects appearing in the alarm area in the set period, and if not, taking the next image of the image to be processed acquired in the set period as the image to be processed and returning to the operation of eliminating the target monitoring objects with the same characteristic information as the previous image from the image to be processed.
13. An electronic device comprising a processor and a memory; the memory stores a program that can be called by the processor; wherein the processor, when executing the program, implements the area alarm method of any one of claims 1 to 6.
14. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the area alarm method of any one of claims 1 to 6.
CN201810994283.4A 2018-08-29 2018-08-29 Area alarm method and device, electronic equipment and readable storage medium Active CN110874953B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810994283.4A CN110874953B (en) 2018-08-29 2018-08-29 Area alarm method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN110874953A true CN110874953A (en) 2020-03-10
CN110874953B CN110874953B (en) 2022-09-06

Family

ID=69714541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810994283.4A Active CN110874953B (en) 2018-08-29 2018-08-29 Area alarm method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN110874953B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111736140A (en) * 2020-06-15 2020-10-02 杭州海康微影传感科技有限公司 Object detection method and camera equipment
CN113473076A (en) * 2020-07-21 2021-10-01 青岛海信电子产业控股股份有限公司 Community alarm method and server
CN113489945A (en) * 2020-12-18 2021-10-08 深圳市卫飞科技有限公司 Target positioning method, device and system and computer readable storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001016916A1 (en) * 1999-09-01 2001-03-08 Reinhard Mueller Method for resolving traffic conflicts by using master-slave structures in locally limited areas in shipping
CN102426804A (en) * 2011-11-17 2012-04-25 浣石 Early warning system for protecting bridge from ship collision based on far-infrared cross thermal imaging
CN102685525A (en) * 2011-03-15 2012-09-19 富士胶片株式会社 Image processing apparatus and image processing method as well as image processing system
KR101241638B1 (en) * 2013-01-14 2013-03-11 (주)안세기술 The positioning information verification system for operated vessels and aid to navigation by mobile application platform
CN106228812A (en) * 2016-07-29 2016-12-14 浙江宇视科技有限公司 Illegal vehicle image-pickup method and system
CN106710224A (en) * 2015-07-16 2017-05-24 杭州海康威视系统技术有限公司 Evidence taking method and device for vehicle illegal driving
CN107221133A (en) * 2016-03-22 2017-09-29 杭州海康威视数字技术股份有限公司 A kind of area monitoring warning system and alarm method
CN107609601A (en) * 2017-09-28 2018-01-19 北京计算机技术及应用研究所 A kind of ship seakeeping method based on multilayer convolutional neural networks
CN107731011A (en) * 2017-10-27 2018-02-23 中国科学院深圳先进技术研究院 A kind of harbour is moored a boat monitoring method, system and electronic equipment
CN107729866A (en) * 2017-10-31 2018-02-23 武汉理工大学 Ship based on timing diagram picture touches mark automatic detection device and method
CN107818651A (en) * 2017-10-27 2018-03-20 华润电力技术研究院有限公司 A kind of illegal cross-border warning method and device based on video monitoring
CN108171752A (en) * 2017-12-28 2018-06-15 成都阿普奇科技股份有限公司 A kind of sea ship video detection and tracking based on deep learning



Also Published As

Publication number Publication date
CN110874953B (en) 2022-09-06

Similar Documents

Publication Publication Date Title
US10261574B2 (en) Real-time detection system for parked vehicles
CN107016367B (en) Tracking control method and tracking control system
CN110874953B (en) Area alarm method and device, electronic equipment and readable storage medium
US20120328161A1 (en) Method and multi-scale attention system for spatiotemporal change determination and object detection
CN109815863B (en) Smoke and fire detection method and system based on deep learning and image recognition
CN108875531B (en) Face detection method, device and system and computer storage medium
CN110067274B (en) Equipment control method and excavator
CN108647587B (en) People counting method, device, terminal and storage medium
CN112766137B (en) Dynamic scene foreign matter intrusion detection method based on deep learning
CN112784725B (en) Pedestrian anti-collision early warning method, device, storage medium and stacker
CN110874910B (en) Road surface alarm method, device, electronic equipment and readable storage medium
CN116311084B (en) Crowd gathering detection method and video monitoring equipment
CN112084826A (en) Image processing method, image processing apparatus, and monitoring system
CN112883768B (en) Object counting method and device, equipment and storage medium
CN112417955A (en) Patrol video stream processing method and device
CN111753587B (en) Ground falling detection method and device
CN116052026A (en) Unmanned aerial vehicle aerial image target detection method, system and storage medium
CN109841022B (en) Target moving track detecting and alarming method, system and storage medium
CN113486775A (en) Target tracking method, system, electronic equipment and storage medium
CN112633228A (en) Parking detection method, device, equipment and storage medium
CN112926426A (en) Ship identification method, system, equipment and storage medium based on monitoring video
CN116721516A (en) Early warning method, device and storage medium based on video monitoring
Arandjelović et al. CCTV scene perspective distortion estimation from low-level motion features
WO2018110377A1 (en) Video monitoring device
JP2016103246A (en) Image monitoring device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant