CN113473076B - Community alarm method and server - Google Patents

Community alarm method and server

Info

Publication number
CN113473076B
CN113473076B (application CN202010703527.6A)
Authority
CN
China
Prior art keywords
area
monitoring target
monitoring
specified
segmentation model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010703527.6A
Other languages
Chinese (zh)
Other versions
CN113473076A (en)
Inventor
张玉 (Zhang Yu)
李蕾 (Li Lei)
高雪松 (Gao Xuesong)
陈维强 (Chen Weiqiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Group Holding Co Ltd
Original Assignee
Qingdao Hisense Electronic Industry Holdings Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Electronic Industry Holdings Co Ltd filed Critical Qingdao Hisense Electronic Industry Holdings Co Ltd
Priority to CN202010703527.6A priority Critical patent/CN113473076B/en
Publication of CN113473076A publication Critical patent/CN113473076A/en
Application granted granted Critical
Publication of CN113473076B publication Critical patent/CN113473076B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K17/0022Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06K17/0029Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device the arrangement being specially adapted for wireless interrogation of grouped or bundled articles tagged with wireless record carriers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20152Watershed segmentation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A10/00TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
    • Y02A10/40Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping

Abstract

The application provides a community alarm method and a server. The method comprises the following steps: inputting a community image into an instance segmentation model and a watershed segmentation model respectively, to obtain first region ranges and second region ranges of a plurality of monitoring targets; for any designated monitoring target, searching the second region ranges for one whose intersection-over-union (IoU) with the first region range of the designated monitoring target satisfies a first specified condition, and using it as a seed region; continuously performing region growing within the second region ranges, starting from the seed region, until a grown region is obtained whose IoU with the first region range satisfies a second specified condition, and using that grown region as the final region of the designated monitoring target; and sending alarm information when the final region of at least one movable monitoring target among the designated monitoring targets and the final region of any fixed monitoring target among the designated monitoring targets satisfy a specified relative position relationship. In this way, the present disclosure does not need to rely on dedicated hardware devices and is not subject to distance limitations.

Description

Community alarm method and server
Technical Field
The application relates to the technical field of intelligent terminals, in particular to a community alarm method and a server.
Background
There are many potential safety hazards in a community: for example, residents swimming in landscape lakes, or children playing near transformer boxes. Safety accidents occur because residents are not stopped in time when they approach a dangerous area. With the popularization and deepening development of modern urban residential communities in China, community informatization is advancing continuously, and communities, especially large and medium-sized urban communities, are accelerating the construction of information network platforms. Community systems are gradually moving to a new stage of centrally managing information using networks and computers. A community server can monitor automatically around the clock and provide alarm linkage, so that alarm conditions are handled in a timely manner, improving the working efficiency of security personnel and the response speed for handling various emergencies.
In the prior art, as shown in fig. 1 (a schematic structural diagram of a prior-art area alarm device), the area alarm device comprises an area alarm device body and an RFID (Radio Frequency Identification) base station terminal. The area alarm device body comprises a perimeter alarm controller, an output module, a communication module, an alarm module, a storage module, a positioning module, a displacement sensing module, a temperature sensing module and an infrared sensing module. The RFID base station terminal is connected to the area alarm device body, so that the RFID base station terminal monitors any monitored source entering the monitoring range set by the area alarm device body. However, in this approach the RFID base station terminal must rely on hardware devices such as the area alarm device body, and the distance between the RFID base station terminal and each sensing module in the area alarm device body is limited. Therefore, a new method for solving the above problems is urgently needed.
Disclosure of Invention
The application provides a community alarm method and a server, which are used for solving the problems in the related art that an RFID base station terminal must rely on hardware devices such as an area alarm device body, and that the distance between the RFID base station terminal and each sensing module in the area alarm device body is limited.
In a first aspect, the present application provides a server comprising a memory and a processor, wherein:
the memory for storing a computer program executable by the processor;
the processor is coupled to the memory and configured to:
inputting a community image into an instance segmentation model and a watershed segmentation model respectively, to obtain first region ranges of a plurality of monitoring targets output by the instance segmentation model and second region ranges of the plurality of monitoring targets output by the watershed segmentation model; wherein the plurality of monitoring targets comprise fixed-position monitoring targets and movable monitoring targets;
for any designated monitoring target among the plurality of monitoring targets, searching the second region ranges of the plurality of monitoring targets for a second region range whose intersection-over-union (IoU) with the first region range of the designated monitoring target satisfies a first specified condition, and using it as a seed region;
continuously performing region growing within the second region ranges of the plurality of monitoring targets, starting from the seed region, until a grown region is obtained whose IoU with the first region range satisfies a second specified condition, and using that grown region as the final region of the designated monitoring target;
and sending alarm information when the final region of at least one movable monitoring target among the designated monitoring targets and the final region of any fixed monitoring target among the designated monitoring targets satisfy a specified relative position relationship.
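The seed-region search above hinges on the intersection ratio (intersection-over-union, IoU) between a watershed region and an instance-segmentation region. A minimal NumPy sketch of that comparison follows; the function names are illustrative and not taken from the patent, and picking the highest-IoU region is only one plausible reading of the "first specified condition":

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks of equal shape."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 0.0

def pick_seed(instance_mask: np.ndarray, watershed_regions: list) -> np.ndarray:
    """Choose as seed the watershed region with the highest IoU against the
    instance-segmentation region (the patent does not fix the exact criterion)."""
    return max(watershed_regions, key=lambda r: iou(instance_mask, r))
```

Here each region range is represented as a boolean mask over the (scaled) community image; a deployment could instead use a fixed IoU threshold as the first specified condition.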
In one embodiment, when performing the region growing within the second region ranges of the plurality of monitoring targets, starting from the seed region, until a grown region whose IoU with the first region range satisfies the second specified condition is obtained, the processor is configured to execute:
traversing each second region range adjacent to the seed region as a neighborhood range;
if merging the neighborhood range into the seed region increases the IoU between the seed region and the first region range, merging the neighborhood range into the seed region;
and taking the seed region obtained when the traversal is finished as the final region of the designated monitoring target.
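This step can be sketched as a greedy merge loop in NumPy, under the simplifying assumption that every remaining watershed region is offered as a candidate neighborhood (real adjacency checks are omitted for brevity, and all names are illustrative):

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum()) / float(union) if union else 0.0

def grow_region(seed: np.ndarray, candidates: list, instance_mask: np.ndarray) -> np.ndarray:
    """Greedily merge candidate watershed regions into the seed whenever the
    merge increases the IoU with the first region range (instance mask)."""
    final = seed.copy()
    for region in candidates:
        merged = np.logical_or(final, region)
        if iou(merged, instance_mask) > iou(final, instance_mask):
            final = merged
    return final
```

The greedy "merge only if IoU increases" rule makes the grown region track the instance-segmentation output while keeping the finer watershed boundaries.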
In one embodiment, when sending the alarm information when the final region of at least one movable monitoring target among the designated monitoring targets and the final region of any fixed monitoring target among the designated monitoring targets satisfy the specified relative position relationship, the processor is configured to execute:
determining the IoU between the final region of any fixed monitoring target among the designated monitoring targets and the final region of at least one movable monitoring target among the designated monitoring targets;
and for any movable monitoring target, sending the alarm information when the IoU between the movable monitoring target and the fixed monitoring target is greater than a specified preset value.
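The alarm decision then reduces to pairwise overlap checks between final regions. A hedged sketch — the threshold value and all names are illustrative, since the patent only speaks of "a specified preset value":

```python
import numpy as np

ALARM_IOU_THRESHOLD = 0.1  # illustrative; not specified by the patent

def iou(a: np.ndarray, b: np.ndarray) -> float:
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum()) / float(union) if union else 0.0

def check_alarms(movable_areas: dict, fixed_areas: dict,
                 threshold: float = ALARM_IOU_THRESHOLD) -> list:
    """Return (movable, fixed) name pairs whose final regions overlap beyond
    the threshold; each pair would trigger one alarm message."""
    alarms = []
    for m_name, m_mask in movable_areas.items():
        for f_name, f_mask in fixed_areas.items():
            if iou(m_mask, f_mask) > threshold:
                alarms.append((m_name, f_name))
    return alarms
```

For example, a person mask overlapping an artificial-lake mask would yield the pair ("person", "lake"), from which the description of the dangerous event can be generated.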
In one embodiment, the processor is further configured to:
after the community image is input into the instance segmentation model, obtain the categories of the plurality of monitoring targets output by the instance segmentation model; and generate description information of a dangerous event according to the category of the movable monitoring target and the category of the fixed monitoring target whose IoU is greater than the specified preset value.
In one embodiment, the alarm information further includes an identifier of a camera for collecting the community image;
the description information of the dangerous event also comprises the identifier of the camera.
In one embodiment, the processor is further configured to:
before inputting the community image into the instance segmentation model, train the instance segmentation model according to the following method:
acquiring a training sample, wherein the training sample comprises a plurality of monitoring targets in the community, labeled region ranges of the plurality of monitoring targets, and categories of the plurality of monitoring targets;
and training the instance segmentation model according to the training sample.
In one embodiment, the processor is further configured to:
searching a monitoring target risk attribute database for at least one monitoring target with a risk attribute among the plurality of monitoring targets, and using the at least one monitoring target with the risk attribute as the designated monitoring target.
In one embodiment, the processor is further configured to:
before the community image is input into the instance segmentation model and the watershed segmentation model respectively, scale the community image to a specified size, and use the scaled community image as the community image input into the instance segmentation model and the watershed segmentation model respectively.
In a second aspect, the present disclosure provides a community alarm method, the method comprising:
inputting a community image into an instance segmentation model and a watershed segmentation model respectively, to obtain first region ranges of a plurality of monitoring targets output by the instance segmentation model and second region ranges of the plurality of monitoring targets output by the watershed segmentation model; wherein the plurality of monitoring targets comprise fixed-position monitoring targets and movable monitoring targets;
for any designated monitoring target among the plurality of monitoring targets, searching the second region ranges of the plurality of monitoring targets for a second region range whose IoU with the first region range of the designated monitoring target satisfies a first specified condition, and using it as a seed region;
continuously performing region growing within the second region ranges of the plurality of monitoring targets, starting from the seed region, until a grown region is obtained whose IoU with the first region range satisfies a second specified condition, and using that grown region as the final region of the designated monitoring target;
and sending alarm information when the final region of at least one movable monitoring target among the designated monitoring targets and the final region of any fixed monitoring target among the designated monitoring targets satisfy a specified relative position relationship.
In one embodiment, the continuously performing region growing within the second region ranges of the plurality of monitoring targets, starting from the seed region, until a grown region whose IoU with the first region range satisfies the second specified condition is obtained, comprises:
traversing each second region range adjacent to the seed region as a neighborhood range;
if merging the neighborhood range into the seed region increases the IoU between the seed region and the first region range, merging the neighborhood range into the seed region;
and taking the seed region obtained when the traversal is finished as the final region of the designated monitoring target.
In one embodiment, sending alarm information when the final region of at least one movable monitoring target among the designated monitoring targets and the final region of any fixed monitoring target among the designated monitoring targets satisfy the specified relative position relationship comprises:
determining the IoU between the final region of any fixed monitoring target among the designated monitoring targets and the final region of at least one movable monitoring target among the designated monitoring targets;
and for any movable monitoring target, sending the alarm information when the IoU between the movable monitoring target and the fixed monitoring target is greater than a specified preset value.
In one embodiment, after the community image is input into the instance segmentation model, the categories of the plurality of monitoring targets output by the instance segmentation model are also obtained;
the alarm information includes a dangerous event, and determining the dangerous event comprises:
generating the description information of the dangerous event according to the category of the movable monitoring target and the category of the fixed monitoring target whose IoU is greater than the specified preset value.
In one embodiment, the alarm information further includes an identifier of a camera for collecting the community image;
the description information of the dangerous event also comprises the identifier of the camera.
In one embodiment, before inputting the community image into the instance segmentation model, the method further comprises:
training the instance segmentation model according to the following method:
acquiring a training sample, wherein the training sample comprises a plurality of monitoring targets in the community, labeled region ranges of the plurality of monitoring targets, and categories of the plurality of monitoring targets;
and training the instance segmentation model according to the training sample.
In one embodiment, determining the designated monitoring target comprises:
searching a monitoring target risk attribute database for at least one monitoring target with a risk attribute among the plurality of monitoring targets, and using the at least one monitoring target with the risk attribute as the designated monitoring target.
In one embodiment, before the community image is input into the instance segmentation model and the watershed segmentation model respectively, the method further comprises:
scaling the community image to a specified size, and using the scaled community image as the community image input into the instance segmentation model and the watershed segmentation model respectively.
The technical scheme provided by the embodiments of the application has at least the following beneficial effects:
The application provides a community alarm method and a server. In this method, after the first region ranges of the plurality of monitoring targets output by the instance segmentation model and the second region ranges output by the watershed segmentation model are obtained, the first region range and the second region range corresponding to any designated monitoring target are used to determine the final region of that designated monitoring target, so that alarm information is sent when the final region of a movable monitoring target and the final region of any fixed monitoring target among the designated monitoring targets satisfy the specified relative position relationship. The method therefore neither relies on hardware devices such as an area alarm device body, nor is it subject to the distance limitation between an RFID base station terminal and the sensing modules in such a device body, thereby solving the problems of the prior art. Moreover, the final region of each monitoring target obtained in the present disclosure is more accurate.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a schematic structural diagram of an area alarm device in the related art;
FIG. 2 is a schematic diagram of a community alarm system in accordance with one embodiment of the present application;
FIG. 3 is a schematic diagram of a server architecture according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating one embodiment of a community alarm method according to the present application;
FIG. 5 is a diagram illustrating an example segmentation model structure in a community alarm method according to an embodiment of the present application;
FIGS. 6A-6D are schematic interface diagrams of a community alarm method according to an embodiment of the present application;
FIG. 7 is a second flowchart of a community alarm method according to an embodiment of the present application;
FIG. 8 is a third flowchart illustrating a community alarm method according to an embodiment of the present application;
FIG. 9 is a diagram of a community alarm device according to an embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort shall fall within the protection scope of the present application.
The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless otherwise specified.
As described above, in the prior art the RFID base station terminal monitors a monitored source entering the monitoring range set by the area alarm device body, so that alarm linkage is realized when a dangerous event occurs. However, the inventors found through research that in this method the RFID base station terminal must rely on hardware devices such as the area alarm device body, and the distance between the RFID base station terminal and each sensing module in the area alarm device body is limited. Therefore, the embodiments of the application provide a community alarm method and a server. The present application is described in detail below with reference to the accompanying drawings.
As shown in fig. 2, fig. 2 is a schematic structural diagram of a community alarm system provided in an embodiment of the present application. The community alarm system includes at least one monitoring device 203, a network 201, a server 202, a terminal device 204, and a monitoring target risk attribute database 205. In a specific implementation, the numbers of monitoring devices and terminal devices are not limited.
The inventive concept of the present disclosure is to determine the first region range and the second region range of each designated monitoring target in a community by using an instance segmentation model and a watershed segmentation model respectively, and to determine the final region of each designated monitoring target according to the first region range and the second region range. By identifying the relative position relationship between the final region of a movable monitoring target and the final region of a fixed monitoring target among the designated monitoring targets, it can be determined whether a dangerous event has occurred, so that community staff are reminded to perform a rescue in time.
In one possible application scenario, the server 202 is connected to the monitoring device 203, the terminal device 204 and the monitoring target risk attribute database 205 via the network 201. The monitoring device 203 acquires a community image and sends it to the server 202. After receiving the community image, the server 202 inputs it into the instance segmentation model and the watershed segmentation model respectively, to obtain the first region ranges of the plurality of monitoring targets in the community image output by the instance segmentation model and the second region ranges of the plurality of monitoring targets output by the watershed segmentation model; the plurality of monitoring targets comprise fixed-position monitoring targets and movable monitoring targets. The server 202 then searches the monitoring target risk attribute database 205 for the monitoring targets with risk attributes among the plurality of monitoring targets, and uses the monitoring targets with risk attributes as the designated monitoring targets.
For any designated monitoring target among the plurality of monitoring targets, the server searches the second region ranges of the plurality of monitoring targets for a second region range whose IoU with the first region range of the designated monitoring target satisfies a first specified condition, and uses it as a seed region; continuously performs region growing within the second region ranges of the plurality of monitoring targets, starting from the seed region, until a grown region is obtained whose IoU with the first region range satisfies a second specified condition, and uses that grown region as the final region of the designated monitoring target; and sends alarm information to the terminal device 204 when the final region of at least one movable monitoring target among the designated monitoring targets and the final region of any fixed monitoring target among the designated monitoring targets satisfy a specified relative position relationship.
Thus, in this method, after the first region ranges of the monitoring targets output by the instance segmentation model and the second region ranges output by the watershed segmentation model are determined, the first region range and the second region range corresponding to any designated monitoring target are used to determine the final region of that designated monitoring target, so that alarm information is sent when the final region of a movable monitoring target and the final region of any fixed monitoring target among the designated monitoring targets satisfy the specified relative position relationship. The present disclosure therefore does not need to rely on dedicated hardware devices and is not subject to distance limitations, solving the problems in the related art that an RFID base station terminal must rely on hardware devices and that the distance between the RFID base station terminal and the sensors in the alarm device body is limited. Moreover, the obtained final region of each monitoring target is more accurate.
After introducing one possible structure of the community alarm system, the structure of the server in the present disclosure will be described in detail. As shown in fig. 3, fig. 3 is a schematic structural diagram of a server in the present disclosure. The server shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The server 300 is represented in the form of a general-purpose server. The components of server 300 may include, but are not limited to: at least one processor 301, at least one computer storage medium 302, and a bus 303 that connects the various system components (including the computer storage medium 302 and the processor 301).
Bus 303 represents one or more of any of several types of bus structures, including a computer storage media bus or computer storage media controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The computer storage medium 302 may include readable media in the form of volatile computer storage media, such as random access computer storage media (RAM) 321 and/or cache storage media 322, and may further include read-only computer storage media (ROM) 323.
The computer storage medium 302 may also include a program/utility 325 having a set (at least one) of program modules 324, such program modules 324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The server 300 may also communicate with one or more external devices 305 (e.g., a keyboard, a pointing device, etc.), with one or more devices that enable a user to interact with the server 300, and/or with any device (e.g., a router, a modem, etc.) that enables the server 300 to communicate with one or more other electronic devices. Such communication may occur via input/output (I/O) interfaces 307. Further, the server 300 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via a network adapter 306. As shown, the network adapter 306 communicates with the other modules of the server 300 over the bus 303. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the server 300, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
Next, the community alarm method in the present application is described in detail. As shown in fig. 4, fig. 4 is a schematic flow diagram of the community alarm method in the present application, which may include the following steps:
step 401: respectively inputting the community images into an example segmentation model and a watershed segmentation model to obtain first region ranges of a plurality of monitoring targets output by the example segmentation model and second region ranges of the plurality of monitoring targets output by the watershed segmentation model; the monitoring targets comprise fixed monitoring targets and movable monitoring targets with fixed positions;
The fixed monitoring targets may be, for example, transformer boxes, artificial lakes, residential buildings, lawns, trees, and sky in the community, while the movable monitoring targets may be people, pets, vehicles, and the like.
In order to make the community image meet the input requirements of the example segmentation model, in one embodiment, before the community image is respectively input into the example segmentation model and the watershed segmentation model, the community image is scaled to a specified size, and the scaled community image is used as the community image input into both models. In the embodiment of the application, adjusting the size of the community image allows it to be input into the example segmentation model to obtain the first area range of each monitoring target, and allows the resized image to be input into the watershed segmentation model so that the final area of each monitoring target can be better determined and errors can be further reduced.
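As a sketch of this preprocessing step, the resize below maps a community image of any resolution onto a fixed model input size by nearest-neighbor index selection. The 512 × 512 target is an assumed value, since the description only speaks of "a specified size", and a production system would likely use a library resampler instead:

```python
import numpy as np

def scale_to_specified_size(image, target_hw=(512, 512)):
    """Nearest-neighbor resize of an H x W (x C) image to the model input size.

    target_hw is a hypothetical value; the description only mentions
    'a specified size'.
    """
    th, tw = target_hw
    h, w = image.shape[:2]
    # For each output row/column, pick the nearest source row/column index.
    rows = np.arange(th) * h // th
    cols = np.arange(tw) * w // tw
    return image[rows][:, cols]
```

Both the example segmentation model and the watershed segmentation model would then receive the same scaled array, keeping their region ranges spatially comparable.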
The example segmentation model and watershed segmentation model used in this disclosure are introduced below, respectively:
(I) an example segmentation model:
(1) Training an example segmentation model:
The example segmentation model needs to be trained according to the following method: a training sample is acquired, where the training sample includes a plurality of monitoring targets in the community, the labeled area ranges of the plurality of monitoring targets, and the categories of the plurality of monitoring targets; the example segmentation model is then trained according to the training samples.
For example, the categories of the plurality of monitoring targets in the training sample may include artificial lakes, transformer boxes, lawns, cars, people, and the like. The training sample also comprises area ranges respectively corresponding to artificial lakes, transformer boxes, lawns, vehicles, people and the like.
Therefore, training can be performed according to different monitoring targets in different communities, and the first area range of each monitoring target can be obtained.
(2) Use of an instance segmentation model:
the purpose of instance segmentation is to identify multiple monitoring targets and multiple categories of monitoring targets in the community image. As shown in fig. 5, fig. 5 is a frame diagram of an example segmentation model. The steps of the example segmentation model are described in detail below with reference to FIG. 5:
step 501: performing feature extraction on an input community picture with the size of P x Q to obtain a feature map;
Step 501 is described in detail below: in the embodiment of the application, the community picture is input into the deep residual network ResNet-101 for feature extraction. The specific steps of feature extraction are as follows: first, the picture is cropped, the size of the cropped community picture being M × N. The picture is then passed through 13 convolutional layers to obtain the feature values of the community picture, after which 13 nonlinear activation function layers set feature values smaller than 0 to zero. Finally, downsampling is performed through 4 pooling layers to obtain the feature map of the community picture.
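The arithmetic of this downsampling can be checked with a small sketch. It assumes each of the 4 pooling layers is a stride-2 pool (the usual ResNet configuration, which the description does not state explicitly), so the M × N picture shrinks by a factor of 16 per side:

```python
def feature_map_size(m, n, num_pool_layers=4, pool_stride=2):
    """Spatial size of the feature map after the pooling stages.

    Assumes stride-2 pooling at every stage, so an M x N cropped picture
    yields a roughly (M/16) x (N/16) feature map.
    """
    for _ in range(num_pool_layers):
        m, n = m // pool_stride, n // pool_stride
    return m, n
```

For example, a 224 × 224 crop would produce a 14 × 14 feature map under these assumptions.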
Step 502: inputting the feature map into a Region generation Network (RPN) to obtain an interested Region;
Specifically, the feature map is first convolved by a 3 × 3 convolutional layer to obtain 9 candidate regions, and two full connection operations are then performed for each candidate region, namely the full connection operation in step 5021 and the full connection operation in step 5022.
After the full connection operation in step 5021, each candidate region obtains two scores, namely a foreground score and a background score. Region adjustment is then performed on the 9 candidate regions according to a certain proportion, that is, the candidate regions are mapped back to the original community image. The result is then input into a classifier to classify foreground and background.
After the full connection operation in step 5022, each candidate region obtains 4 coordinates, that is, the frame coordinates of the candidate region, for a total of 36 coordinates. The frame of each candidate region is then regressed: the deviation between the frame of each candidate region and the actual frame of the real monitoring target is calculated, and non-maximum suppression is performed according to the calculated deviation values to obtain the adjusted frame of each candidate region.
Step 5023: the obtained foreground regions and the adjusted frames of the candidate regions are fused and then sorted by foreground score from large to small to obtain the specified number of regions of interest.
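A minimal sketch of the proposal selection in steps 5022–5023 — greedy non-maximum suppression over the candidate frames, followed by keeping the top-scoring regions of interest. The `(x1, y1, x2, y2)` box format and the 0.5 IoU threshold are illustrative assumptions, not values from the description:

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms_top_k(boxes, fg_scores, k, iou_thresh=0.5):
    """Greedy NMS by descending foreground score, keeping at most k boxes."""
    order = sorted(range(len(boxes)), key=lambda i: fg_scores[i], reverse=True)
    keep = []
    for i in order:
        # Suppress a box that overlaps an already-kept box too strongly.
        if all(box_iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
        if len(keep) == k:
            break
    return [boxes[i] for i in keep]
```

The surviving boxes correspond to the "specified number of regions of interest" handed to step 503.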
Step 503: a region-of-interest alignment operation (RoIAlign) is performed on the specified number of regions of interest; that is, the original community image is first mapped to the pixels of the feature map obtained in step 501, and the feature map is then mapped to the specified number of regions of interest.
Step 504: the regions of interest are classified through convolutional layers, and frame regression is performed again, that is, the deviation between the frame of each region of interest and the frame of the real monitoring target is finely adjusted. The regions of interest are then input into a fully convolutional network to obtain the final region of each region of interest, yielding the first region ranges of the monitoring targets.
As shown in fig. 6A, fig. 6A is a community image. Fig. 6B shows first region ranges of a plurality of monitoring targets and categories of the plurality of monitoring targets obtained by inputting the community image into the example segmentation model. Different gray values can represent different monitoring targets, and different colors can be adopted for representing in actual output.
(II) a watershed segmentation model:
The watershed segmentation model is a mathematical-morphology segmentation method based on topological theory. Its basic idea is to regard the image as a topographic surface in geodesy: the gray value of each pixel in the image represents the altitude at that point, each local minimum and its zone of influence are called a catchment basin, and the boundaries of the catchment basins form the watershed lines. The watershed computation is an iterative labeling process. The watershed segmentation model comprises two steps: a sorting process and a flooding process.
The watershed segmentation model is used because it produces a boundary for every monitored object, and that boundary is continuous and closed. Because the watershed segmentation model responds strongly to weak edges, it tends to over-segment in the presence of noise in the input image and slight gray-level changes on object surfaces. This high sensitivity to noise and fine texture is usually regarded as the biggest drawback of watershed segmentation and often causes users to abandon the model. In the present disclosure, however, this very property serves as a supplement to the example segmentation model: it compensates for the uncertainty and incompleteness of the edge information of the region ranges and improves the accuracy of identifying the edges of each monitoring target's region range.
The following describes a specific implementation of inputting the community image into the watershed segmentation model to identify the second region range of each monitoring target, and as shown in fig. 7, the method may include the following steps:
(I) a sequencing process:
step 701: acquiring a gray value of the community image, and obtaining a gradient value of each pixel point in the community image according to the gray value through a Sobel operator;
step 702: sorting all the pixel points by gradient value from small to large, and assigning pixel points with the same gradient value to the same gradient level; the gradient level with the smallest gradient value is the first gradient level, and the remaining gradient levels are ordered by increasing gradient value.
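The sorting process can be sketched as follows: a pure-NumPy Sobel gradient for step 701, followed by grouping pixels into gradient levels in ascending order for step 702. A real implementation would use a vectorized or library Sobel; the explicit loops are kept here for clarity:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def gradient_magnitude(gray):
    """Per-pixel gradient magnitude via 3x3 Sobel kernels (step 701)."""
    h, w = gray.shape
    padded = np.pad(gray.astype(float), 1, mode='edge')
    grad = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 3, x:x + 3]
            grad[y, x] = np.hypot((win * SOBEL_X).sum(), (win * SOBEL_Y).sum())
    return grad

def gradient_levels(grad):
    """Group pixel coordinates by gradient value, ascending (step 702)."""
    levels = {}
    for y in range(grad.shape[0]):
        for x in range(grad.shape[1]):
            levels.setdefault(grad[y, x], []).append((y, x))
    return [levels[g] for g in sorted(levels)]
```

The first list returned by `gradient_levels` is the first gradient level, from which the flooding process starts.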
(II) flooding process:
step 703: carrying out flooding operation to obtain the region mark of each pixel point in each gradient level;
It should be noted that, before the flooding operation is performed, a region mark is assigned to the pixel point at the specified position in each gradient level, and that pixel point is taken as the current pixel point of the gradient level where it is located.
Step 7031: traversing from the current pixel point according to eight neighborhoods from the first gradient level to obtain a neighborhood pixel point of the pixel point;
step 7032: judging whether each obtained neighborhood pixel point has been marked; for any unmarked neighborhood pixel point whose gradient level is the same as that of the current pixel point, refreshing its region mark with the region mark of the current pixel point.
Step 7033: taking each refreshed neighborhood pixel point as the current pixel point and continuing to perform step 7032 until no neighborhood pixel point meeting the specified conditions exists. The specified conditions are that the neighborhood pixel point has the same gradient level as the current pixel point and has not yet been marked.
Step 7034: scanning each pixel point in the current gradient level again; if a pixel point is found to have no region mark, assigning a new region mark to it. Then continuing to perform step 7032 with that pixel point as the current pixel point until no unmarked pixel point remains in the current gradient level.
Step 704: after all gradient levels have been traversed, fusing the pixel points that share the same region mark to obtain the second region ranges of the multiple monitoring targets in the community image.
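The fusion in step 704 is just a grouping of pixel coordinates by their final region mark; a minimal sketch:

```python
from collections import defaultdict

def fuse_region_marks(labels):
    """Group pixel coordinates by region mark (step 704).

    labels: 2D grid (list of rows) of region marks, one per pixel.
    Returns {mark: [(y, x), ...]}, each entry being one second region range.
    """
    regions = defaultdict(list)
    for y, row in enumerate(labels):
        for x, mark in enumerate(row):
            regions[mark].append((y, x))
    return dict(regions)
```

Each resulting pixel set is one second region range, ready for the intersection-ratio comparison in step 402.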
For example, fig. 6A shows the community image input to the watershed segmentation model, and fig. 6C shows the second region range of each monitoring target obtained by the watershed segmentation model, where different gray values may represent different second region ranges. It should be noted that the watershed segmentation model cannot output the category of each monitoring target. In a particular implementation, different colors may represent different second region ranges, but second region ranges of the same color do not necessarily belong to the same category.
Step 402: aiming at any specified monitoring target among the multiple monitoring targets, searching, from the second region ranges of the multiple monitoring targets, for a second region range whose intersection ratio with the first region range of the specified monitoring target meets a first specified condition, as a seed region; the first specified condition may be that the intersection ratio is the largest, or that it is greater than a specified intersection-ratio threshold.
In one embodiment, at least one monitoring target with a danger attribute among the multiple monitoring targets is looked up in a monitoring target danger attribute database, and the at least one monitoring target with a danger attribute is used as the specified monitoring target. Maintaining the monitoring target danger attribute database means recording, for each monitoring target object category in the example segmentation, an attribute indicating whether it is dangerous. The database may contain objects common in the community, such as lawns, artificial lakes, people, vehicles, slates, trees, flower gardens, transformer boxes, and residential buildings. At initialization, artificial lakes, transformer boxes, and the like default to the danger attribute.
However, because conditions may differ between communities, a manual judgment interface is added and opened to community managers, who may modify the danger attribute of each monitored object in the database.
For example, the monitoring targets include a transformer tank, an artificial lake, and a lawn. And querying the danger attributes of the three monitoring targets in the monitoring target danger attribute database. And determining that the transformer box and the artificial lake both have dangerous attributes, and determining that the designated monitoring target is the transformer box and the artificial lake.
In this way, the community areas with danger attributes can be determined through the preset monitoring target danger attribute database, so that when a person, animal, or the like approaches an area with a danger attribute, alarm information can be obtained in time.
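A plain dict can stand in for the danger-attribute database described above; the lookup that selects the specified monitoring targets, and the manager-facing modification interface, then reduce to a few lines. The category names are the examples from the text, while the dict store itself is an assumption about the implementation:

```python
def find_specified_targets(detected_targets, danger_db):
    """Return the detected targets whose category carries the danger attribute.

    Categories absent from the database default to non-dangerous here
    (an assumption; the real policy is not stated).
    """
    return [t for t in detected_targets if danger_db.get(t, False)]

def set_danger_attribute(danger_db, target, dangerous):
    """The manual judgment interface: a community manager overrides a default."""
    danger_db[target] = dangerous
```

With the initialization defaults from the text, a transformer box and an artificial lake would be selected as specified monitoring targets while a lawn would not, unless a manager flips its attribute.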
In the following, the procedure of determining the seed region is described by taking the artificial lake as the specified monitoring target:
as shown in fig. 6B, the O region in fig. 6B is a first region range of the artificial lake. Then, a second area range with the largest intersection ratio with the O area needs to be searched in each second area range in fig. 6C, and if it is determined that the a area in fig. 6C is the second area range whose intersection ratio with the O area satisfies the first specified condition, it is determined that the a area is the seed area.
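Treating each region range as a set of pixel coordinates, the seed-region search of step 402 under the "maximum intersection ratio" form of the first specified condition can be sketched as:

```python
def mask_iou(a, b):
    """Intersection ratio (IoU) of two pixel-coordinate sets."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def pick_seed_region(first_range, second_ranges):
    """Index of the second region range with the largest intersection
    ratio against the first region range of the specified target."""
    ious = [mask_iou(first_range, s) for s in second_ranges]
    return max(range(len(second_ranges)), key=ious.__getitem__)
```

Applied to the example, the O region's pixel set would be compared against every watershed region in fig. 6C, and region A would win the comparison.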
Step 403: continuously performing area growth in a second area range of the multiple monitoring targets by taking the seed area as a reference until a growth area meeting a second specified condition in the intersection ratio with the first area range is obtained, and taking the growth area meeting the second specified condition in the intersection ratio as a final area of the specified monitoring target;
In one embodiment, region growing may be implemented as follows: traverse the second region ranges adjacent to the seed region as neighborhood ranges; if merging a neighborhood range into the seed region increases the intersection ratio with the first region range, merge that neighborhood range into the seed region; the seed region obtained after the traversal ends is used as the final area of the specified monitoring target.
As described above, the seed region is determined to be region A in fig. 6C, and the second region ranges connected to region A are traversed as neighborhood ranges, including: region B, region C, region D, region E, region F, region G, region H, region I, and region J. If the intersection ratio increases as region C, region D, region E, region F, region G, region H, region I, and region J are merged into region A, the finally obtained region is the final area of the artificial lake, namely region Q in fig. 6D.
Therefore, the final area of the designated monitoring target can be determined by determining the intersection ratio, and the division result of the final area can be improved.
Step 404: and when the final area of at least one movable monitoring target in the appointed monitoring targets and the final area of any fixed monitoring target in the appointed monitoring targets meet the appointed relative position relationship, sending alarm information.
In one embodiment, determining the intersection ratio of the final area of any fixed monitoring target in the specified monitoring targets and the final area of at least one movable monitoring target in the specified monitoring targets; and aiming at any movable monitoring target, when the intersection ratio of the movable monitoring target and the fixed monitoring target is greater than a specified preset value, the alarm information is sent.
For example, the movable monitoring target is a person and the fixed monitoring target is an artificial lake. If the intersection ratio of the final area of the person and the final area of the artificial lake is determined to be 100%, and the specified preset value is 85%, then the intersection ratio is greater than the specified preset value, and alarm information is sent.
Different ranges of the intersection ratio above the specified preset value may correspond to different alarm information. For example, if the intersection ratio is between 85% and 90%, the alarm information may be that a person has crossed the safe distance to the artificial lake; if it is between 90% and 98%, the alarm information may be that there is a danger of falling into the lake; and if it exceeds 98%, the alarm information may be that a person has fallen into the artificial lake.
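The graded alarm texts above translate directly into a threshold ladder; the 85% preset and the band edges are the example values from this paragraph, and real deployments would presumably tune them:

```python
def alarm_message(iou, preset=0.85):
    """Graded alarm text from the person/artificial-lake intersection ratio.

    The preset value and band edges are the example figures from the
    description, not fixed constants of the method.
    """
    if iou <= preset:
        return None  # not above the specified preset value: no alarm
    if iou <= 0.90:
        return "a person has crossed the safe distance to the artificial lake"
    if iou <= 0.98:
        return "danger of falling into the lake"
    return "a person has fallen into the artificial lake"
```

The `None` return models the case where the intersection ratio does not exceed the preset value and no alarm information is sent.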
In this way, whether a dangerous event has occurred can be accurately determined through the intersection ratio of the movable monitoring target and the fixed monitoring target, so that alarm information can be sent in time for community staff to carry out a timely rescue.
In order to enable community staff to accurately know the specific location and specific dangerous events of danger in the community:
in one embodiment, after the community image is input into the instance segmentation model, the categories of the multiple monitoring targets output by the instance segmentation model are also obtained; wherein the alarm information comprises a dangerous event; determining the dangerous event may be implemented as: and generating the description information of the dangerous event according to the type of the movable monitoring target and the type of the fixed monitoring target with the intersection ratio larger than the specified preset value. The alarm information further comprises an identifier of a camera for collecting the community image; the description information of the dangerous event also comprises the identification of the camera.
For example, if the category of the movable monitoring target whose intersection ratio is greater than the preset value is determined to be a person, the category of the fixed monitoring target is an artificial lake, and the identifier of the camera is 3, the generated description information may be: camera No. 3 has captured a person falling into the artificial lake.
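String composition of the description information is then straightforward; the exact wording below is a hypothetical template, not a format required by the description:

```python
def describe_danger_event(movable_category, fixed_category, camera_id):
    """Compose the danger-event description from the two categories and the
    identifier of the camera that collected the community image.

    The sentence template is an assumption for illustration.
    """
    return f"Camera No. {camera_id} captured a {movable_category} in the {fixed_category}"
```

The camera identifier lets community staff locate the event; the two categories tell them which rescue measures apply.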
Therefore, the description information of the dangerous events in the alarm information is determined by determining the category of the movable monitoring target and the category of the fixed monitoring target, so that community staff can have corresponding rescue measures according to the dangerous events.
Referring to fig. 8, an embodiment of the present application is described in detail below and may include the following steps:
step 801: acquiring a training sample, wherein the training sample comprises a plurality of monitoring targets in the community, labeled area ranges of the plurality of monitoring targets and categories of the plurality of monitoring targets;
step 802: training the example segmentation model according to the training sample;
step 803: the community image is scaled to a specified size, and the scaled community image is used as the community image respectively input into the example segmentation model and the watershed segmentation model;
step 804: respectively inputting the community images into an example segmentation model and a watershed segmentation model to obtain first region ranges of a plurality of monitoring targets output by the example segmentation model and second region ranges of the plurality of monitoring targets output by the watershed segmentation model; the monitoring targets comprise fixed monitoring targets, whose positions do not change, and movable monitoring targets;
step 805: aiming at any specified monitoring target among the multiple monitoring targets, searching, from the second region ranges of all the monitoring targets, for a second region range whose intersection ratio with the first region range of the specified monitoring target meets a first specified condition, as a seed region;
step 806: traversing a second region range adjacent to the seed region as a neighborhood range;
step 807: if the intersection ratio of the neighborhood range and the first region range is increased after the neighborhood range is merged into the seed region, merging the neighborhood range into the seed region;
step 808: after traversing, obtaining a seed area as a final area of the specified monitoring target;
step 809: determining the intersection ratio of the final area of any fixed monitoring target in the specified monitoring targets and the final area of at least one movable monitoring target in the specified monitoring targets;
step 810: and aiming at any movable monitoring target, when the intersection ratio of the movable monitoring target and the fixed monitoring target is greater than a specified preset value, the alarm information is sent.
Based on the same technical concept, fig. 9 exemplarily illustrates a community alarm apparatus 900 provided by the embodiment of the present application, which may execute a flow of a community alarm method. The method comprises the following steps:
a region range determining module 901, configured to input the community images into an example segmentation model and a watershed segmentation model respectively, to obtain first region ranges of multiple monitoring targets output by the example segmentation model and second region ranges of the multiple monitoring targets output by the watershed segmentation model; the monitoring targets comprise fixed monitoring targets, whose positions do not change, and movable monitoring targets;
a seed region determining module 902, configured to, for any specified monitoring target in the multiple monitoring targets, search, as a seed region, a second region range that meets a first specified condition with an intersection ratio of a first region range of the specified monitoring target from second region ranges of the multiple monitoring targets;
a final area determining module 903, configured to continuously perform area growth in a second area range of the multiple monitoring targets by using the seed area as a reference until a growth area whose intersection ratio with the first area range meets a second specified condition is obtained, and use the growth area whose intersection ratio meets the second specified condition as a final area of the specified monitoring target;
and an alarm information sending module 904, configured to send alarm information when a final area of at least one movable monitoring target of the designated monitoring targets and a final area of any fixed monitoring target of the designated monitoring targets satisfy a designated relative position relationship.
In an embodiment, the final area determining module 903 is specifically configured to:
traversing a second region range adjacent to the seed region as a neighborhood range;
if the intersection ratio of the neighborhood range and the first region range is increased after the neighborhood range is merged into the seed region, merging the neighborhood range into the seed region;
and taking the seed area obtained after the traversal is finished as a final area of the specified monitoring target.
In an embodiment, the alarm information sending module 904 is specifically configured to:
determining the intersection ratio of the final area of any fixed monitoring target in the specified monitoring targets and the final area of at least one movable monitoring target in the specified monitoring targets;
and aiming at any movable monitoring target, when the intersection ratio of the movable monitoring target and the fixed monitoring target is greater than a specified preset value, the alarm information is sent.
In one embodiment, the apparatus further comprises:
and the dangerous event determining module 905 is configured to, after the community image is input into the example segmentation model, further obtain the categories of the multiple monitoring targets output by the example segmentation model, and generate description information of the dangerous event according to the category of the movable monitoring target and the category of the fixed monitoring target, where an intersection ratio of the categories of the movable monitoring target and the fixed monitoring target is greater than a specified preset value.
In one embodiment, the alarm information further includes an identifier of a camera for collecting the community image;
the description information of the dangerous event also comprises the identification of the camera.
In one embodiment, the apparatus further comprises:
a training sample acquisition module 906 for training the example segmentation model according to the following method: acquiring a training sample, wherein the training sample comprises a plurality of monitoring targets in the community, the marked area ranges of the plurality of monitoring targets and the categories of the plurality of monitoring targets;
a training module 907 for training the example segmentation model according to the training samples.
In one embodiment, the apparatus further comprises:
the designated monitoring target determining module 908 is configured to search the monitoring target risk attribute database for at least one monitoring target with a risk attribute in the multiple monitoring targets, and use the at least one monitoring target with the risk attribute as the designated monitoring target.
In one embodiment, the apparatus further comprises:
an image size scaling module 909, which is configured to scale the community image by a predetermined size before the community image is input into the instance segmentation model and the watershed segmentation model, and to use the scaled community image as the community image input into the instance segmentation model and the watershed segmentation model.
Having described a community alarm device, a community alarm method and apparatus according to exemplary embodiments of the present application, an electronic device according to another exemplary embodiment of the present application will be described next.
In some possible embodiments, aspects of a community alarm method provided by the present application may also be implemented in the form of a program product, which includes program code for causing a computer device to perform the steps of a community alarm method according to various exemplary embodiments of the present application described above in this specification, when the program product is run on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, terminal device, or apparatus, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for data processing of an embodiment of the present application may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on an electronic device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, terminal device, or apparatus.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, terminal device, or apparatus.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of remote electronic devices, the remote electronic devices may be connected to the consumer electronic device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external electronic device (e.g., through the internet using an internet service provider).
It should be noted that although in the above detailed description several units or sub-units of the terminal device are mentioned, such a division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (14)

1. A server, comprising a memory and a processor, wherein:
the memory for storing a computer program executable by the processor;
the processor is coupled to the memory and configured to:
respectively inputting a community image into an instance segmentation model and a watershed segmentation model, to obtain first region ranges of a plurality of monitoring targets and categories of the plurality of monitoring targets output by the instance segmentation model, and second region ranges of the plurality of monitoring targets output by the watershed segmentation model; wherein the plurality of monitoring targets comprise fixed monitoring targets with fixed positions and movable monitoring targets;
for any specified monitoring target among the plurality of monitoring targets, searching the second region ranges of the plurality of monitoring targets for a second region range whose intersection ratio with the first region range of the specified monitoring target satisfies a first specified condition, and taking that second region range as a seed region, wherein the first specified condition may be that the intersection ratio is the largest or is greater than a specified intersection-ratio threshold;
continuing region growing within the second region ranges of the plurality of monitoring targets on the basis of the seed region until a growing region whose intersection ratio with the first region range satisfies a second specified condition is obtained, wherein the region growing comprises:
traversing the second region ranges adjacent to the seed region as neighborhood ranges; if merging a neighborhood range into the seed region would increase the intersection ratio with the first region range, merging the neighborhood range into the seed region; and taking the seed region obtained when the traversal ends as the final area of the specified monitoring target;
taking the growing region whose intersection ratio satisfies the second specified condition as the final area of the specified monitoring target;
and sending alarm information when the final area of at least one movable monitoring target among the specified monitoring targets and the final area of any fixed monitoring target among the specified monitoring targets satisfy a specified relative positional relationship.
2. The server according to claim 1, wherein, in sending the alarm information when the final area of at least one movable monitoring target among the specified monitoring targets and the final area of any fixed monitoring target among the specified monitoring targets satisfy the specified relative positional relationship, the processor is configured to:
determine the intersection ratio of the final area of any fixed monitoring target among the specified monitoring targets and the final area of at least one movable monitoring target among the specified monitoring targets;
and, for any movable monitoring target, send the alarm information when the intersection ratio of the movable monitoring target and the fixed monitoring target is greater than a specified preset value.
3. The server of claim 2, wherein the processor is further configured to:
after the community image is input into the instance segmentation model, obtain the categories of the plurality of monitoring targets output by the instance segmentation model; and generate description information of a dangerous event according to the category of the movable monitoring target and the category of the fixed monitoring target whose intersection ratio is greater than the specified preset value.
4. The server according to claim 3, wherein the alarm information further includes an identifier of a camera that collects the community image;
the description information of the dangerous event further includes the identifier of the camera.
5. The server of claim 1, wherein the processor is further configured to:
before inputting the community image into the instance segmentation model, train the instance segmentation model according to the following method:
acquiring training samples, wherein the training samples comprise a plurality of monitoring targets in the community, labeled region ranges of the plurality of monitoring targets, and categories of the plurality of monitoring targets;
and training the instance segmentation model according to the training samples.
6. The server of claim 1, wherein the processor is further configured to:
search, in a monitoring target danger attribute database, for at least one monitoring target having a danger attribute among the plurality of monitoring targets, and take the at least one monitoring target having the danger attribute as the specified monitoring target.
7. The server of claim 1, wherein the processor is further configured to:
before the community image is respectively input into the instance segmentation model and the watershed segmentation model, scale the community image to a specified size, and take the scaled community image as the community image input into the instance segmentation model and the watershed segmentation model.
8. A community alarm method, the method comprising:
respectively inputting a community image into an instance segmentation model and a watershed segmentation model, to obtain first region ranges of a plurality of monitoring targets and categories of the plurality of monitoring targets output by the instance segmentation model, and second region ranges of the plurality of monitoring targets output by the watershed segmentation model; wherein the plurality of monitoring targets comprise fixed monitoring targets with fixed positions and movable monitoring targets;
for any specified monitoring target among the plurality of monitoring targets, searching the second region ranges of the plurality of monitoring targets for a second region range whose intersection ratio with the first region range of the specified monitoring target satisfies a first specified condition, and taking that second region range as a seed region, wherein the first specified condition may be that the intersection ratio is the largest or is greater than a specified intersection-ratio threshold;
continuing region growing within the second region ranges of the plurality of monitoring targets on the basis of the seed region until a growing region whose intersection ratio with the first region range satisfies a second specified condition is obtained, wherein the region growing comprises:
traversing the second region ranges adjacent to the seed region as neighborhood ranges; if merging a neighborhood range into the seed region would increase the intersection ratio with the first region range, merging the neighborhood range into the seed region; and taking the seed region obtained when the traversal ends as the final area of the specified monitoring target;
taking the growing region whose intersection ratio satisfies the second specified condition as the final area of the specified monitoring target;
and sending alarm information when the final area of at least one movable monitoring target among the specified monitoring targets and the final area of any fixed monitoring target among the specified monitoring targets satisfy a specified relative positional relationship.
9. The method according to claim 8, wherein sending the alarm information when the final area of at least one movable monitoring target among the specified monitoring targets and the final area of any fixed monitoring target among the specified monitoring targets satisfy the specified relative positional relationship comprises:
determining the intersection ratio of the final area of any fixed monitoring target among the specified monitoring targets and the final area of at least one movable monitoring target among the specified monitoring targets;
and, for any movable monitoring target, sending the alarm information when the intersection ratio of the movable monitoring target and the fixed monitoring target is greater than a specified preset value.
10. The method according to claim 9, wherein after the community image is input into the instance segmentation model, the categories of the plurality of monitoring targets output by the instance segmentation model are also obtained;
the alarm information includes a dangerous event, and determining the dangerous event comprises:
generating description information of the dangerous event according to the category of the movable monitoring target and the category of the fixed monitoring target whose intersection ratio is greater than the specified preset value.
11. The method according to claim 10, wherein the alarm information further includes an identifier of a camera that captures the community image;
the description information of the dangerous event further includes the identifier of the camera.
12. The method of claim 8, wherein before inputting the community image into the instance segmentation model, the method further comprises:
training the instance segmentation model according to the following method:
acquiring training samples, wherein the training samples comprise a plurality of monitoring targets in the community, labeled region ranges of the plurality of monitoring targets, and categories of the plurality of monitoring targets;
and training the instance segmentation model according to the training samples.
13. The method of claim 8, wherein determining the specified monitoring target comprises:
searching, in a monitoring target danger attribute database, for at least one monitoring target having a danger attribute among the plurality of monitoring targets, and taking the at least one monitoring target having the danger attribute as the specified monitoring target.
14. The method of claim 8, wherein before inputting the community images into the instance segmentation model and the watershed segmentation model, respectively, the method further comprises:
scaling the community image to a specified size, and taking the scaled community image as the community image input into the instance segmentation model and the watershed segmentation model.
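The patent publishes no reference code, but the seed-selection and region-growing steps of claims 1 and 8, and the intersection-ratio alarm check of claims 2 and 9, can be sketched as follows. This is a minimal illustration only: region ranges are assumed to be sets of pixel coordinates, all function names and thresholds are illustrative, and adjacency is simplified (any fragment whose merge improves the intersection ratio is accepted, rather than strict neighbors of the seed region).

```python
def iou(a, b):
    """Intersection-over-union ("intersection ratio") of two pixel-coordinate sets."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0


def grow_region(first_range, fragments, second_condition=0.9):
    """Claims 1/8 (simplified): take the watershed fragment with the largest
    intersection ratio against the instance-segmentation mask as the seed
    (first specified condition), then greedily merge fragments whenever the
    merge increases the intersection ratio, stopping once the second
    specified condition (ratio >= second_condition) is met or no merge helps."""
    seed = max(fragments, key=lambda f: iou(f, first_range))
    region = set(seed)
    current = iou(region, first_range)
    changed = True
    while changed and current < second_condition:
        changed = False
        for frag in fragments:
            if frag <= region:            # fragment already merged
                continue
            candidate = region | frag
            new_iou = iou(candidate, first_range)
            if new_iou > current:         # merge only if the ratio increases
                region, current = candidate, new_iou
                changed = True
    return region                          # final area of the specified target


def should_alarm(movable_area, fixed_area, preset=0.05):
    """Claims 2/9: alarm when the intersection ratio of a movable target's
    final area and a fixed target's final area exceeds a preset value."""
    return iou(movable_area, fixed_area) > preset
```

The stopping rule in `grow_region` plays the role of the "second specified condition": growing ends either when the grown region overlaps the instance mask closely enough or when no remaining fragment can improve the overlap.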
CN202010703527.6A 2020-07-21 2020-07-21 Community alarm method and server Active CN113473076B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010703527.6A CN113473076B (en) 2020-07-21 2020-07-21 Community alarm method and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010703527.6A CN113473076B (en) 2020-07-21 2020-07-21 Community alarm method and server

Publications (2)

Publication Number Publication Date
CN113473076A CN113473076A (en) 2021-10-01
CN113473076B true CN113473076B (en) 2023-03-14

Family

ID=77868244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010703527.6A Active CN113473076B (en) 2020-07-21 2020-07-21 Community alarm method and server

Country Status (1)

Country Link
CN (1) CN113473076B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114329058B (en) * 2021-12-29 2023-05-16 重庆紫光华山智安科技有限公司 Image file gathering method and device and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105575049A (en) * 2015-06-26 2016-05-11 宇龙计算机通信科技(深圳)有限公司 Early warning method, device and terminal
CN110874953A (en) * 2018-08-29 2020-03-10 杭州海康威视数字技术股份有限公司 Area alarm method and device, electronic equipment and readable storage medium
CN111161275A (en) * 2018-11-08 2020-05-15 腾讯科技(深圳)有限公司 Method and device for segmenting target object in medical image and electronic equipment
CN111191486A (en) * 2018-11-14 2020-05-22 杭州海康威视数字技术股份有限公司 Drowning behavior recognition method, monitoring camera and monitoring system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105279898A (en) * 2015-10-28 2016-01-27 小米科技有限责任公司 Alarm method and device
CN107818326B (en) * 2017-12-11 2018-07-20 珠海大横琴科技发展有限公司 A kind of ship detection method and system based on scene multidimensional characteristic


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and application of an automatic aerial-triangulation partitioning method for water areas in aerial imagery; Li Nengneng et al.; Journal of Shaanxi University of Technology (Natural Science Edition); 2016-04-20 (No. 02); full text *

Also Published As

Publication number Publication date
CN113473076A (en) 2021-10-01

Similar Documents

Publication Publication Date Title
US11367265B2 (en) Method and system for automated debris detection
US10719641B2 (en) Methods and apparatus for automatically defining computer-aided design files using machine learning, image analytics, and/or computer vision
Mirzaei et al. 3D point cloud data processing with machine learning for construction and infrastructure applications: A comprehensive review
CN106127204B (en) A kind of multi-direction meter reading Region detection algorithms of full convolutional neural networks
CN107735794A (en) Use the condition detection of image procossing
Li et al. Street tree segmentation from mobile laser scanning data
CN107835997A (en) Use the vegetation management for being used for power line corridor and monitoring of computer vision
CN109583345A (en) Roads recognition method, device, computer installation and computer readable storage medium
Ziaei et al. A rule-based parameter aided with object-based classification approach for extraction of building and roads from WorldView-2 images
CN114037966A (en) High-precision map feature extraction method, device, medium and electronic equipment
Conley et al. Using a deep learning model to quantify trash accumulation for cleaner urban stormwater
CN111125290B (en) Intelligent river patrol method and device based on river growth system and storage medium
CN113936210A (en) Anti-collision method for tower crane
CN113473076B (en) Community alarm method and server
Varun et al. A road traffic signal recognition system based on template matching employing tree classifier
Forlani et al. Adaptive filtering of aerial laser scanning data
Posner et al. Describing composite urban workspaces
Stark Using deep convolutional neural networks for the identification of informal settlements to improve a sustainable development in urban environments
CN113569801B (en) Distribution construction site live equipment and live area identification method and device thereof
CN111242010A (en) Method for judging and identifying identity of litter worker based on edge AI
CN115171214A (en) Construction site abnormal behavior detection method and system based on FCOS target detection
Yajima et al. AI-Driven 3D point cloud-based highway infrastructure monitoring system using UAV
Franceschi et al. Identifying treetops from aerial laser scanning data with particle swarming optimization
CN113569954A (en) Intelligent wild animal classification and identification method
Xu et al. Identification of street trees’ main nonphotosynthetic components from mobile laser scanning data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 266555, No. 218, Bay Road, Qingdao economic and Technological Development Zone, Shandong

Patentee after: Hisense Group Holding Co.,Ltd.

Address before: 266555, No. 218, Bay Road, Qingdao economic and Technological Development Zone, Shandong

Patentee before: QINGDAO HISENSE ELECTRONIC INDUSTRY HOLDING Co.,Ltd.