AU2020102906A4 - A drowning detection method based on deep learning - Google Patents

A drowning detection method based on deep learning Download PDF

Info

Publication number
AU2020102906A4
Authority
AU
Australia
Prior art keywords
stage
area
water
target
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2020102906A
Inventor
Zhufeng Fan
Wei Jiang
Lei Tian
Jinyu Zhan
Qiaoyu Zhou
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhan Jinyu Miss
Original Assignee
Zhan Jinyu Miss
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhan Jinyu Miss filed Critical Zhan Jinyu Miss
Priority to AU2020102906A priority Critical patent/AU2020102906A4/en
Application granted granted Critical
Publication of AU2020102906A4 publication Critical patent/AU2020102906A4/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/35Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/38Outdoor scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/44Event detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

Due to insufficient water-surface protection measures in scenic areas and the limitations of manual monitoring, it is difficult for a scenic area to respond in time when a drowning incident occurs, which poses serious safety risks. In this invention, a method based on deep-learning object detection is proposed for judging whether a person has fallen into the water, realizing the identification of and early warning for drowning incidents in scenic waters. The method is a two-stage scheme. In the first stage, an object detector is applied to determine whether there is a human body in the image. In the second stage, the proportion of the overlapping area between the human-body region and the water region is calculated from the recognized human-body coordinates to judge whether the person has fallen into the water. This invention can effectively identify falling-into-water behavior and provide early warning.

Description

Editorial Note 2020102906 There are 4 pages of Description only.
Description
The description consists of four parts: notations and symbols, object detection, drowning judgement, and an appendix.
Notations and Symbols
I_i(x, y): The image of frame i
minA: The frame differential
\circ: Open operator
\oplus: Expansion operator
\ominus: Corrosion operator
T: Differential threshold
L_i(x, y): Moving target
M: The smallest adjacency matrix
C: Confidence
obj: The symbol of a target in the bounding box
noobj: The symbol of no target in the bounding box
IOU_{pred}^{truth}: IOU of the predicted bounding box and the true bounding box
\lambda_{coord}: Correction value of coordError
\lambda_{noobj}: Correction value of IOUError
x_i, y_i, w_i, C_i, p_i: Predicted values
\hat{x}_i, \hat{y}_i, \hat{w}_i, \hat{C}_i, \hat{p}_i: Label values
Q_1, Q_2, P_1, P_2: Vectors of polygon sides
x_1, x_2, x_3, ..., x_n: Vertex x-coordinates of polygon edges
y_1, y_2, y_3, ..., y_n: Vertex y-coordinates of polygon edges
x_{i1}, x_{i2}, x_{i3}, ..., x_{in}: Vertex x-coordinates of the overlapping area
y_{i1}, y_{i2}, y_{i3}, ..., y_{in}: Vertex y-coordinates of the overlapping area
Poly_1: The marked polygon (water area)
S: Area of the overlapping area
S_i: Area of the target box
b_x, b_y, b_w, b_h: Predicted coordinates
t_x, t_y, t_w, t_h: Offset values
c_x, c_y: Preset values
A. Object detection
Due to the real-time requirements and the complex water background, background modeling is very difficult, so the frame differential method is used to extract the moving targets in the water. The frame differential method is as follows:
minA = \sum_{x=1}^{X} \sum_{y=1}^{Y} \left| I_i(x, y) - I_{i-1}(x, y) \right| \quad (1)
To avoid the influence of water reflection, we change the detection parameters of the operator from m_1 to m_2 and apply the differential method between these frames:
minA_{(m_1, m_2)} = \frac{1}{N} \sum_{x=1}^{X} \sum_{y=1}^{Y} \left| f_{m_1}(x, y) - f_{m_2}(x, y) \right| \quad (2)
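As an illustration only, the frame-differential step of Eqs. (1)-(2) could be realized along the following lines with OpenCV; the function name, the grayscale conversion, and the mean normalization are assumptions of this sketch rather than the patented implementation.

```python
import cv2
import numpy as np

def frame_differential(frame_a, frame_b):
    """Per-pixel absolute difference between two frames and its mean (cf. Eq. (2))."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray_a, gray_b).astype(np.float32)
    return diff, float(diff.mean())  # difference map and its normalized sum (minA)
```

Here frame_a and frame_b play the role of frames m_1 and m_2, chosen far enough apart to suppress water-reflection flicker.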
Then we use a morphological method to obtain the connected domain corresponding to the complete moving target. The connected domain is calculated as follows:
N_i(x, y) = \left( (minA > T) \circ S \right) \oplus S \quad (3)
The smallest adjacency matrix M can then be found from the connected domain; it encloses the moving target:
L_i(x, y) = I_i(x, y), \quad (x, y) \in M \quad (4)
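The morphological clean-up and connected-domain extraction of Eqs. (3)-(4) might be sketched as below, assuming OpenCV opening and dilation with a rectangular structuring element; the threshold T, the kernel size, and the minimum blob area are illustrative values, not values prescribed by the patent.

```python
import cv2
import numpy as np

def extract_moving_targets(diff_map, T=25, kernel_size=5, min_area=50):
    """Threshold the frame differential, clean it morphologically, and
    return the bounding rectangles of the remaining connected domains."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    mask = (diff_map > T).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # open operator
    mask = cv2.dilate(mask, kernel)                        # expansion operator
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, num):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area > min_area:  # discard tiny noise blobs
            boxes.append((x, y, w, h))
    return boxes
```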
After that, a detector is applied to each extracted moving target to perform object detection and obtain the object coordinates. The confidence is calculated by
C = \Pr(obj) \cdot IOU_{pred}^{truth} \quad (5)
And the coordinates are calculated as follows:
b_x = \sigma(t_x) + c_x
b_y = \sigma(t_y) + c_y
b_w = p_w e^{t_w}
b_h = p_h e^{t_h} \quad (6)
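Eq. (6) corresponds to the usual YOLO-style decoding of network offsets. A minimal sketch, assuming t_x, t_y, t_w, t_h are raw network outputs, (c_x, c_y) is the grid-cell offset, and (p_w, p_h) is an anchor prior:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Decode network offsets into box centre and size as in Eq. (6)."""
    bx = sigmoid(tx) + cx      # centre x, in grid-cell units
    by = sigmoid(ty) + cy      # centre y, in grid-cell units
    bw = pw * math.exp(tw)     # width scaled from the anchor prior
    bh = ph * math.exp(th)     # height scaled from the anchor prior
    return bx, by, bw, bh
```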
The confidence prediction satisfies \Pr(object) \cdot IOU(b, object) = \sigma(t_o). The loss is obtained by
Loss = \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} 1_{ij}^{obj} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right]
+ \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} 1_{ij}^{obj} \left[ (\sqrt{w_i} - \sqrt{\hat{w}_i})^2 + (\sqrt{h_i} - \sqrt{\hat{h}_i})^2 \right]
+ \sum_{i=0}^{S^2} \sum_{j=0}^{B} 1_{ij}^{obj} (C_i - \hat{C}_i)^2
+ \lambda_{noobj} \sum_{i=0}^{S^2} \sum_{j=0}^{B} 1_{ij}^{noobj} (C_i - \hat{C}_i)^2
+ \sum_{i=0}^{S^2} 1_{i}^{obj} \sum_{c \in classes} (p_i(c) - \hat{p}_i(c))^2 \quad (7)
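For reference, the sum-of-squares loss of Eq. (7) could be evaluated roughly as follows. This numpy sketch assumes the predictions and labels are already arranged per grid cell and box with an obj indicator mask; it is a simplification for illustration, not the exact training objective used by the authors.

```python
import numpy as np

def detection_loss(pred, label, obj_mask, lambda_coord=5.0, lambda_noobj=0.5):
    """Eq. (7)-style loss. pred/label: dicts of arrays with keys
    'x', 'y', 'w', 'h', 'conf' (shape: cells x boxes) and 'cls'
    (shape: cells x boxes x classes); obj_mask is 1 where a target is present."""
    noobj_mask = 1.0 - obj_mask
    coord = lambda_coord * np.sum(
        obj_mask * ((pred['x'] - label['x']) ** 2 + (pred['y'] - label['y']) ** 2))
    size = lambda_coord * np.sum(
        obj_mask * ((np.sqrt(pred['w']) - np.sqrt(label['w'])) ** 2
                    + (np.sqrt(pred['h']) - np.sqrt(label['h'])) ** 2))
    conf_obj = np.sum(obj_mask * (pred['conf'] - label['conf']) ** 2)
    conf_noobj = lambda_noobj * np.sum(noobj_mask * (pred['conf'] - label['conf']) ** 2)
    cls = np.sum(obj_mask[..., None] * (pred['cls'] - label['cls']) ** 2)
    return coord + size + conf_obj + conf_noobj + cls
```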
Then non-maximal suppression is used to extract the most likely objects and locations.
Score = \Pr(C_i \mid Object) \cdot Confidence \quad (8)
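Non-maximum suppression over the scored boxes of Eq. (8) can be sketched as follows; boxes are assumed to be in (x1, y1, x2, y2) form, and the 0.5 IOU threshold is an illustrative default.

```python
def iou(a, b):
    """IOU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Return indices of the boxes kept after NMS, highest score first."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```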
After obtaining the coordinates of the bounding boxes from the image, the whole detection algorithm can be summarized as Stage 1.
Stage 1. Object detection
Input: Video v
(1) Given the arguments: C_0
(2) Initialize coordinates = list
(3) Do in each frame in turn
(4) for each video frame v_i in v do
(5)     calculate the bounding box (c_x, c_y) and confidence C
(6)     if C > C_0 then
(7)         put (c_x, c_y) into coordinates
(8)     end if
(9) end for
(10) end Do
Output: The coordinates
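A rough Python rendering of Stage 1 is given below; run_detector is a hypothetical stand-in for the trained detector of the previous subsection, assumed to return (c_x, c_y, w, h, confidence) tuples per frame, and C_0 = 0.50 follows Table 1 in the appendix.

```python
import cv2

def stage1_collect_coordinates(video_path, run_detector, C0=0.50):
    """Stage 1: run the detector on every frame and keep boxes whose
    confidence exceeds C0."""
    coordinates = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for (cx, cy, w, h, conf) in run_detector(frame):
            if conf > C0:
                coordinates.append((cx, cy, w, h))
    cap.release()
    return coordinates
```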
B. Drowning judgement
The coordinates are compared with the marked water area to judge whether there is overlap. The water area is treated as a marked polygon. The two polygons are considered as combinations of line segments, and whether the polygons overlap is determined by checking whether their line segments intersect. For a pair of line segments P_1 P_2 and Q_1 Q_2, whether they intersect can be determined by
\left[ (Q_1 - P_1) \times (P_2 - P_1) \right] \cdot \left[ (Q_2 - P_1) \times (P_2 - P_1) \right] < 0 \quad (9)
and
\left[ (P_1 - Q_1) \times (Q_2 - Q_1) \right] \cdot \left[ (P_2 - Q_1) \times (Q_2 - Q_1) \right] < 0 \quad (10)
If there is no intersection between this pair of line segments, the next pair is compared until an intersection is found. When all the line segments have been compared and no intersection is found, the two polygons are considered to have no intersection. If an intersection is found, its coordinates can be calculated from the end-point coordinates of the two line segments (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4) as follows:
b_1 = (y_2 - y_1) x_1 + (x_1 - x_2) y_1
b_2 = (y_4 - y_3) x_3 + (x_3 - x_4) y_3
|D| = (x_2 - x_1)(y_4 - y_3) - (x_4 - x_3)(y_2 - y_1)
|D_1| = b_2 (x_2 - x_1) - b_1 (x_4 - x_3)
|D_2| = b_2 (y_2 - y_1) - b_1 (y_4 - y_3)
x_0 = |D_1| / |D|, \quad y_0 = |D_2| / |D| \quad (11)
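The intersection test of Eqs. (9)-(10) and the intersection point of Eq. (11) translate directly into 2-D cross-product arithmetic; a minimal sketch with illustrative function names:

```python
def cross(o, a, b):
    """2-D cross product of the vectors (a - o) and (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, q1, q2):
    """Intersection test of Eqs. (9)-(10): the endpoints of each segment
    must lie strictly on opposite sides of the other segment's line."""
    return (cross(p1, p2, q1) * cross(p1, p2, q2) < 0
            and cross(q1, q2, p1) * cross(q1, q2, p2) < 0)

def intersection_point(p1, p2, q1, q2):
    """Intersection of the two supporting lines, as in Eq. (11)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, q1, q2
    b1 = (y2 - y1) * x1 + (x1 - x2) * y1
    b2 = (y4 - y3) * x3 + (x3 - x4) * y3
    d = (x2 - x1) * (y4 - y3) - (x4 - x3) * (y2 - y1)
    d1 = b2 * (x2 - x1) - b1 * (x4 - x3)
    d2 = b2 * (y2 - y1) - b1 * (y4 - y3)
    return d1 / d, d2 / d
```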
After finding all intersections, we can obtain all vertex coordinates of the overlapping area. We then calculate its area to compare with the area of the bounding box. The area S is calculated by
A_1 = x_{i1} y_{i2} - y_{i1} x_{i2}
A_2 = x_{i2} y_{i3} - y_{i2} x_{i3}
A_3 = x_{i3} y_{i4} - y_{i3} x_{i4}
...
A_n = x_{in} y_{i1} - y_{in} x_{i1} \quad (12)
S = 0.5 \times \left| \sum_{i=1}^{n} A_i \right|
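Eq. (12) is the shoelace-style cyclic sum; a short sketch, assuming the overlap vertices are supplied in boundary order:

```python
def polygon_area(vertices):
    """Area of a polygon from ordered vertices [(x1, y1), ..., (xn, yn)],
    via the cyclic sum A_k = x_k * y_{k+1} - y_k * x_{k+1} of Eq. (12)."""
    n = len(vertices)
    total = 0.0
    for k in range(n):
        x_k, y_k = vertices[k]
        x_next, y_next = vertices[(k + 1) % n]  # wrap around to the first vertex
        total += x_k * y_next - y_k * x_next
    return 0.5 * abs(total)
```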
S is compared with the area of the bounding box S_i to judge whether there is drowning. The whole algorithm is summarized as Stage 2.
Stage 2. Drowning judgement
Input: coordinates
(1) Given the arguments: Poly_1, W_0
(2) Initialize count = 0, intersections = list
(3) for each coordinate c_i in coordinates do
(4)     calculate the intersection inter with Poly_1
(5)     if inter != null
(6)         put inter into intersections
(7)         calculate S, S_i with intersections
(8)         if S / S_i > W_0 then
(9)             print Drowning
(10)            count++
(11)        end if
(12)    end if
(13) end for
Output: count
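Stage 2 could be assembled roughly as follows, reusing polygon_area from the sketch above; overlap_polygon_fn is a hypothetical helper that clips a bounding box against the marked water polygon Poly_1 and returns the ordered overlap vertices (or None when there is no intersection), and W_0 = 0.95 follows Table 1.

```python
def stage2_count_drownings(boxes, water_polygon, overlap_polygon_fn, W0=0.95):
    """Stage 2: for each detected box, compute its polygon of overlap with the
    marked water area and flag drowning when overlap / box area exceeds W0."""
    count = 0
    for (cx, cy, w, h) in boxes:
        box_area = w * h                                    # S_i, area of the target box
        overlap = overlap_polygon_fn((cx, cy, w, h), water_polygon)
        if overlap is None:
            continue
        S = polygon_area(overlap)                           # area of the overlapping region
        if box_area > 0 and S / box_area > W0:
            print("Drowning")
            count += 1
    return count
```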
Appendix
There are two parameters to be set in the proposed method. One parameter, C_0, is used in Stage 1, and the other, W_0, is used in Stage 2. The values recommended by the author are given in Table 1.
Table 1. The parameter values recommended by the author
Para.    Value    Description                        Used in
C_0      0.50     Threshold of object detection      Stage 1
W_0      0.95     Threshold of drowning judgement    Stage 2
Editorial Note 2020102906 There is 1 page of Claims only.

Claims (1)

  1. Claim: In this claim, a method based on deep-learning target detection is proposed to judge whether a person falls into the water in scenic waters. This method consists of a two-stage scheme. In the first stage, the frame difference method is used to extract the moving target, and an object detector is designed to identify and judge the target. In the second stage, the coordinates of each target identified by the detector are used to judge whether the target overlaps with the water area in the image, and the overlapping area is calculated to judge whether a drowning incident occurs. The process is briefly described as follows: Stage 1: The inter-frame difference method is used to extract the moving target, and a target detector is used to identify the target. Stage 2: Each identified bounding box is checked for overlap with the water area in the image, and the overlapping area is calculated from the coordinates. Repeat Stage 2 until all bounding boxes are processed.
AU2020102906A 2020-10-20 2020-10-20 A drowning detection method based on deep learning Ceased AU2020102906A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2020102906A AU2020102906A4 (en) 2020-10-20 2020-10-20 A drowning detection method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2020102906A AU2020102906A4 (en) 2020-10-20 2020-10-20 A drowning detection method based on deep learning

Publications (1)

Publication Number Publication Date
AU2020102906A4 true AU2020102906A4 (en) 2020-12-17

Family

ID=73746656

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2020102906A Ceased AU2020102906A4 (en) 2020-10-20 2020-10-20 A drowning detection method based on deep learning

Country Status (1)

Country Link
AU (1) AU2020102906A4 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699776A (en) * 2020-12-28 2021-04-23 南京星环智能科技有限公司 Training sample optimization method, target detection model generation method, device and medium
CN112699776B (en) * 2020-12-28 2022-06-21 南京星环智能科技有限公司 Training sample optimization method, target detection model generation method, device and medium
CN113361364A (en) * 2021-05-31 2021-09-07 北京市商汤科技开发有限公司 Target behavior detection method, device, equipment and storage medium
CN113361364B (en) * 2021-05-31 2022-11-01 北京市商汤科技开发有限公司 Target behavior detection method, device, equipment and storage medium
CN114267082A (en) * 2021-09-16 2022-04-01 南京邮电大学 Bridge side falling behavior identification method based on deep understanding
CN114267082B (en) * 2021-09-16 2023-08-11 南京邮电大学 Bridge side falling behavior identification method based on depth understanding

Similar Documents

Publication Publication Date Title
AU2020102906A4 (en) A drowning detection method based on deep learning
German et al. Rapid entropy-based detection and properties measurement of concrete spalling with machine vision for post-earthquake safety assessments
CN110232320B (en) Method and system for detecting danger of workers approaching construction machinery on construction site in real time
KR100773393B1 (en) Real-time Monitoring System and Method for DAM
Blanchet et al. Interference detection for cable-driven parallel robots (CDPRs)
KR100773344B1 (en) Station positioning system using landmark
CN103559703A (en) Crane barrier monitoring and prewarning method and system based on binocular vision
CN110259514B (en) Dangerous area personnel early warning method, storage medium, electronic equipment and early warning system
Bloisi et al. Camera based target recognition for maritime awareness
EP2546807A2 (en) Traffic monitoring device
CN109345787A (en) A kind of anti-outer damage monitoring and alarming system of the transmission line of electricity based on intelligent image identification technology
Vishwakarma et al. Analysis of lane detection techniques using opencv
CN102496030B (en) Identification method and identification device for dangerous targets in power monitoring system
JP7348575B2 (en) Deterioration detection device, deterioration detection system, deterioration detection method, and program
CN117124332A (en) Mechanical arm control method and system based on AI vision grabbing
CN111650863A (en) Engineering safety monitoring instrument and monitoring method
Onita et al. Quality control in porcelain industry based on computer vision techniques
JP7174601B2 (en) Crest surface level difference extraction system and crown surface level difference extraction method
Vignali et al. Performance evaluation and cost analysis of a 2D laser scanner to enhance the operator’s safety
Sun et al. Detection and tracking of safety helmet in factory environment
CN111401276A (en) Method and system for identifying wearing of safety helmet
CN205240617U (en) Belt is vertically torn and is detected and alarm device based on image discontinuity point is surveyed
Liu et al. Corner detection on hexagonal pixel based images
KR20190088745A (en) A method of displaying the position of a foreign object in the runway with GPS coordinates in an airport grid map
Shrigandhi et al. Systematic Literature Review on Object Detection Methods at Construction Sites

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry