KR20160015728A - Method and Apparatus for Detecting Fall Down in Video - Google Patents

Info

Publication number
KR20160015728A
Authority
KR
South Korea
Prior art keywords
image frame
box shape
surveillance
image
detected
Prior art date
Application number
KR1020140098359A
Other languages
Korean (ko)
Other versions
KR101614412B1 (en)
Inventor
Lee Dong-hoon
Lim Hee-jin
Yoo Su-in
Lee Seung-jun
Kim Jung-min
Original Assignee
Samsung SDS Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung SDS Co., Ltd.
Priority to KR1020140098359A
Priority to PCT/KR2015/007741 (published as WO2016018006A1)
Publication of KR20160015728A
Application granted
Publication of KR101614412B1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

According to an embodiment of the present invention, a method for detecting a fall in a video comprises the following steps: detecting a surveillance object in each of N image frames beginning with a first image frame; displaying the surveillance object detected in each image frame in box form; calculating, relative to the long-axis length of the box representing the surveillance object detected in the first image frame, the change in long-axis length of each box representing the surveillance object detected in the remaining image frames; calculating the distance between the top-side center point of the box representing the surveillance object detected in the first image frame and the top-side center point of each box representing the surveillance object detected in the remaining image frames; and detecting a fall of the surveillance object using the calculated change in long-axis length and the calculated distance between the top-side center points.

Description

TECHNICAL FIELD [0001] The present invention relates to a method and apparatus for detecting a fall in video.

The present invention relates to a method and apparatus for detecting a fall in an image, and a system using the apparatus. More particularly, the present invention relates to a method and apparatus capable of detecting the fall of an object, such as a person, in a video image, and a system using the method.

There are various methods of detecting the movement of an object in image data collected by a camera or other device. For example, the movement of an object in an image can be detected through color analysis or feature-point extraction.

There are also various ways of determining whether a specific event has occurred, beyond simply detecting the movement of an object.

One example of such an event is the fall of an object such as a person.

Two known methods of judging whether a person has fallen in an image are color analysis and tracking the change in the width-to-height ratio of a bounding box.

However, color analysis often fails to recognize the object because colors change with light and illumination conditions.

In addition, the method of judging a person's fall through changes in the width-to-height ratio of the box can detect falls to the left or right relatively well, but has great difficulty detecting falls toward the front or the back.

SUMMARY OF THE INVENTION An object of the present invention is to provide a method and apparatus for detecting a fall in an image that can detect the fall of an object such as a person regardless of the direction of the fall, and a system using the method.

The technical problems of the present invention are not limited to the above-mentioned technical problems, and other technical problems which are not mentioned can be clearly understood by those skilled in the art from the following description.

According to an aspect of the present invention, there is provided a method of detecting a fall in an image, comprising: detecting a surveillance object in each of N image frames from a first image frame; displaying the surveillance object detected in each image frame in box form; calculating, relative to the long-axis length of the box representing the surveillance object detected in the first image frame, the change in long-axis length of each box representing the surveillance object detected in the remaining image frames; calculating the distance between the top-side center point of the box representing the surveillance object detected in the first image frame and the top-side center point of each box representing the surveillance object detected in the remaining image frames; and detecting a fall of the surveillance object using the calculated change in long-axis length and the calculated distance between the top-side center points.

According to an embodiment, detecting the fall of the surveillance object may include determining that the surveillance object has fallen in a specific image frame when the product of the change in long-axis length calculated using the first image frame and that specific image frame and the distance between the corresponding top-side center points is equal to or greater than a preset value.

According to an embodiment, calculating the change in long-axis length may include calculating, for each of the remaining image frames, the difference between the long-axis length of the box representing the surveillance object detected in the first image frame and that of the box representing the surveillance object detected in the remaining frame, and correcting that difference.

According to an embodiment, correcting the change in long-axis length may include taking the difference between the long-axis length of the box representing the surveillance object detected in the first image frame and the long-axis length of the box representing the surveillance object detected in each remaining image frame, dividing that difference by the long-axis length of the box in the first image frame, and multiplying the result by 100.
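Written out, and denoting by L1 the long-axis length of the box in the first image frame and by Lm that in the m-th remaining frame, the correction described above amounts to a percentage change (the claim does not fix a sign convention, so the absolute difference may equally be intended):

```latex
\Delta L_m = \frac{L_1 - L_m}{L_1} \times 100
```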

According to an embodiment, the box has two sides in the major-axis direction, an upper side and a lower side, and of the two end-side center points of the box representing the surveillance object detected in the Nth image frame, the one closer to the lower-side center point of the box representing the surveillance object detected in the first image frame may be determined to be the lower-side center point in the Nth image frame.

According to an embodiment, calculating the distance between the top-side center points may include moving the box representing the surveillance object detected in each remaining image frame so that its lower-side center point coincides with the lower-side center point of the box representing the surveillance object detected in the first image frame, and then calculating the distance between the top-side center points.
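As a rough illustration of this alignment step (not the patented implementation: the box is reduced to its two end-side center points, and all coordinates are invented for the example):

```python
# Aligning boxes at the lower-side center point before measuring how far
# the top-side center point has moved. Each box is simplified to a dict
# holding its "top" and "bottom" side center points.
import math

def aligned_top_center_distance(box1, boxm):
    """Translate boxm so its bottom center coincides with box1's bottom
    center, then return the distance between the two top centers."""
    dx = box1["bottom"][0] - boxm["bottom"][0]
    dy = box1["bottom"][1] - boxm["bottom"][1]
    top_m = (boxm["top"][0] + dx, boxm["top"][1] + dy)
    return math.dist(box1["top"], top_m)

standing = {"top": (100.0, 0.0), "bottom": (100.0, 180.0)}
fallen = {"top": (250.0, 170.0), "bottom": (110.0, 175.0)}
# After aligning the lower-side centers, the top center has swung far:
print(aligned_top_center_distance(standing, fallen))
```

Aligning at the lower-side center first makes the measured distance reflect the swing of the object's upper end rather than any translation of the whole object across the scene.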

According to an embodiment, the surveillance object is a specific person, the box has two sides in the major-axis direction, an upper side and a lower side, the person's head is located at the upper side, the person's feet at the lower side, and the long-axis length may be proportional to the person's height.

According to an embodiment, the method may further include, after detecting the fall of the surveillance object, determining that the surveillance object remains in a fallen state when the box has not changed by more than a predetermined level for a predetermined time.

According to an embodiment, determining that the surveillance object remains in a fallen state may include determining that the surveillance object has remained fallen when, over a predetermined number of frames, the box representing the detected surveillance object coincides, over at least a predetermined range, with the box representing the surveillance object detected in the frame in which the fall was detected.

According to an embodiment, displaying the surveillance object in box form may include displaying the surveillance object in box form by applying principal component analysis (PCA) to the contour pixel information of the surveillance object.

According to a second aspect of the present invention, there is provided an apparatus for detecting a fall in an image, the apparatus comprising: an object detection unit for detecting a surveillance object in each of N image frames from a first image frame; a boxing unit for displaying the surveillance object detected in each image frame in box form; a long-axis length variation calculating unit for calculating, relative to the long-axis length of the box representing the surveillance object detected in the first image frame, the change in long-axis length of each box representing the surveillance object detected in the remaining image frames; a top-center distance calculating unit for calculating the distance between the top-side center point of the box representing the surveillance object detected in the first image frame and the top-side center point of each box representing the surveillance object detected in the remaining image frames; and a fall detection unit for detecting a fall of the surveillance object using the calculated change in long-axis length and the calculated distance between the top-side center points.

According to a third aspect of the present invention, there is provided a system for detecting a fall in an image, the system comprising: an image capturing device for generating image frames; a fall detection device for detecting a surveillance object in each of N image frames from a first image frame, displaying the surveillance object detected in each frame in box form, calculating, relative to the long-axis length of the box representing the surveillance object detected in the first image frame, the change in long-axis length of each box representing the surveillance object detected in the remaining image frames, calculating the distance between the top-side center point of the box in the first image frame and the top-side center point of each box in the remaining image frames, and detecting the fall of the surveillance object using the calculated change in long-axis length and the calculated distance between the top-side center points; and a notification device for providing a visual or auditory notification when the fall of the surveillance object is detected.

According to a fourth aspect of the present invention, there is provided a computer-readable recording medium having recorded thereon a computer program for performing a method of detecting a fall in an image.

According to an embodiment of the present invention, it is possible to detect the fall of a surveillance object regardless of the direction in which it falls.

In addition, according to an embodiment of the present invention, it is possible to detect the fall not only of a person but also of other objects.

In addition, according to an embodiment of the present invention, it is possible to detect the fall of an object without special equipment such as a 3D camera, so the cost is low and the method is applicable anywhere existing infrastructure with ordinary cameras is available.

According to an embodiment of the present invention, it is possible to respond quickly to safety accidents by detecting a person's fall, and to prevent, or respond quickly to, accidents caused by the collapse of objects such as chemicals or dangerous materials.

The effects of the present invention are not limited to the effects mentioned above, and other effects not mentioned can be clearly understood by those of ordinary skill in the art from the following description.

1 is a block diagram of an apparatus for detecting a fall in an image according to an embodiment of the present invention.
FIG. 2 is a view for explaining an example in which the boxing unit displays a surveillance object detected in an image frame in box form.
FIG. 3 is a view for explaining another example in which the boxing unit displays a detected surveillance object in box form using principal component analysis.
4 is a view showing an example of how the box representing a surveillance object changes from frame to frame when the object falls toward the image capturing apparatus.
5 is a view showing an example of how the box representing a surveillance object changes from frame to frame when the object falls to the left or right relative to the image capturing apparatus.
6 is a view showing an example of how the box representing a surveillance object changes from frame to frame when the object falls in the direction in which the image capturing apparatus captures images.
FIG. 7 is a view for explaining an example in which the fall-maintenance determining unit determines that the surveillance object remains in a fallen state.
8 is a flowchart illustrating a method of detecting a fall in an image according to another embodiment of the present invention.
9 is a flowchart showing an example of a method of calculating the change in long-axis length.
10 is a view for explaining an example of correcting each long-axis length and calculating the change in long-axis length.
11 is a flowchart showing an example of a method of calculating the distance between the top-side center points.
12 is a view for explaining an example of calculating the distance between the top-side center points.
13 is a view illustrating an example in which a method of detecting a fall in an image according to another embodiment of the present invention is applied.
FIG. 14 is another configuration diagram of an apparatus for detecting a fall in an image according to an embodiment of the present invention.
FIG. 15 is a block diagram illustrating a system for detecting a fall in an image according to another embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. The advantages and features of the present invention, and the manner of achieving them, will become apparent from the embodiments described below together with the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art, and the invention is defined only by the scope of the claims. Like reference numerals refer to like elements throughout the specification.

Unless otherwise defined, all terms (including technical and scientific terms) used herein have the meanings commonly understood by one of ordinary skill in the art to which this invention belongs. Commonly used, predefined terms are not to be interpreted ideally or excessively unless explicitly defined otherwise.

In this specification, singular forms include plural forms unless the context clearly indicates otherwise. The terms "comprises" and/or "comprising" used in this specification specify the presence of the stated components, steps, and/or operations, but do not preclude the presence or addition of one or more other components, steps, or operations.

1 is a block diagram of an apparatus for detecting a fall in an image according to an embodiment of the present invention.

Referring to FIG. 1, an apparatus 100 for detecting a fall in an image according to an embodiment of the present invention includes an object detection unit 110, a boxing unit 120, a long-axis length variation calculating unit 130, a top-center distance calculating unit 140, and a fall detection unit 150, and may further include a fall-maintenance determining unit 160.

The object detection unit 110 detects a surveillance object in image frames collected from an image capturing device such as a camera.

For example, when first to fifth image frames exist, the object detection unit 110 detects a surveillance object in the first image frame and then detects the same surveillance object in each of the second, third, fourth, and fifth image frames. "The same surveillance object" means that the object itself is the same; it does not mean that the object's position or shape is the same. Even for the same object, the position and shape can change as the object moves.

When first to Nth image frames are present, the surveillance object may be detected by the object detection unit 110 in all of the frames or only in some of them.

Known techniques may be used by the object detection unit 110 to detect a surveillance object in a frame.

The object monitored for a fall may be a person, but the present invention is not limited thereto; it may also be a non-human object.

The boxing unit 120 may display the detected surveillance object in each frame in the form of a box.

For example, the boxing unit 120 may display the surveillance object detected in the first image frame in box form, the surveillance object detected in the second image frame in box form, and the surveillance object detected in the third image frame in box form.

The boxing unit 120 can display the surveillance objects detected in each image frame by the object detection unit 110 in box form using principal component analysis (PCA).

FIG. 2 is a view for explaining an example in which a boxing unit displays a surveillance object detected in an image frame in a box form.

Referring to FIG. 2(a), the object detection unit 110 detects a person standing with arms extended in the image frame, and the detected surveillance object is displayed in box form.

The box shown in FIG. 2(a) is created to enclose the outermost contour of the detected surveillance object.

When the box is formed to enclose the outermost contour of the detected surveillance object as in FIG. 2(a), fall detection can yield somewhat inaccurate results.

That is, in FIG. 2(a), the extended arms contribute components unrelated to the principal components used to analyze the object's fall, which can lead to somewhat inaccurate fall detection results.

Accordingly, the boxing unit 120 of the apparatus 100 for detecting a fall in an image according to an embodiment of the present invention can display the detected surveillance object in box form using principal component analysis.

Known techniques can be used for the principal component analysis itself. Referring to FIG. 2(b), for example, the boxing unit 120 finds the principal components at the contour points of the detected surveillance object using principal component analysis and displays the object as a box aligned with those components.
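A minimal sketch of such a PCA-based box fit, assuming the contour is already available as 2-D points (the synthetic point cloud and the NumPy-based routine are illustrative assumptions, not the patent's implementation):

```python
# Fit an oriented box to contour pixels with PCA: the principal axes of
# the point cloud give the box orientation, and the extent of the points
# along each axis gives the side lengths.
import numpy as np

def pca_box(contour):
    """Return (long_axis_length, short_axis_length, axes) of a
    PCA-oriented box enclosing the contour points (N x 2 array)."""
    pts = np.asarray(contour, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = np.cov(centered.T)
    # eigh returns eigenvalues in ascending order, so the last column
    # of eigvecs is the major (long) axis direction.
    eigvals, eigvecs = np.linalg.eigh(cov)
    proj = centered @ eigvecs
    lengths = proj.max(axis=0) - proj.min(axis=0)
    return lengths[1], lengths[0], eigvecs

# A roughly vertical, person-like cloud of contour points:
rng = np.random.default_rng(0)
contour = np.column_stack([rng.uniform(-10, 10, 200),
                           rng.uniform(-90, 90, 200)])
long_len, short_len, _ = pca_box(contour)
print(long_len > short_len)  # the vertical extent dominates
```

Because the box follows the principal axes rather than the outermost pixels, an extended arm contributes little to the fitted long axis, which is the motivation given above.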

FIG. 3 is a view for explaining another example in which a boxing unit displays a monitoring object detected in a box form using a principal component analysis method.

In FIG. 3, (a) shows a box enclosing the outermost contour of the detected surveillance object, while (b) shows a box following the principal components of the detected surveillance object.

To improve detection accuracy, it is preferable that the boxing unit 120 use principal component analysis and display the detected surveillance object in box form as in FIG. 2(b) and FIG. 3(b), rather than as in FIG. 2(a) and FIG. 3(a).

However, the present invention is not limited to the boxing unit 120 using principal component analysis; the boxing unit 120 can display the detected surveillance objects in box form using other methods. It is preferable, though, that the surveillance objects detected in the image frames be displayed in box form using the same method for every frame.

Before describing the long-axis length variation calculating unit 130, the top-center distance calculating unit 140, and the fall detection unit 150 of FIG. 1, the principle by which the apparatus 100 for detecting a fall in an image according to an embodiment of the present invention detects the fall of a detected surveillance object will be described with reference to FIGS. 4 to 6.

4 is a view showing an example of a box shape in which a surveillance object is changed for each image frame when the surveillance object is collapsed toward the image capturing apparatus.

Reference numerals 41a and 41b show, in box form, the surveillance objects detected in the first image frame. Reference numerals 42a and 42b show, in box form, the surveillance objects detected in an image frame (the Kth frame) captured after the first image frame. Reference numerals 43a and 43b show, in box form, the surveillance objects detected in an image frame captured after the Kth image frame.

When the flowerpot or person being monitored falls toward the image capturing device, it can be seen that the long-axis length of the box becomes longer or remains the same, while the top-side center point moves a long distance.

In box form, the long-axis length is proportional to the person's height when the monitored object is a person.

Specifically, the box has two sides along the major axis: an upper side and a lower side. The upper side is at the head of the person being monitored, and the lower side is at the person's feet. If the surveillance object is an object similar to a person, its top end will be at the upper side and its bottom end at the lower side while the object is standing normally.

The top-side center point is the point at the center of the upper side, and the lower-side center point is the point at the center of the lower side.

The person's head may not lie exactly on the upper side, nor the feet exactly on the lower side; some differences may occur depending on how the boxing unit 120 displays the surveillance object in box form. Also, as the surveillance object falls, the positions of the upper and lower sides may be reversed. In this specification, when the surveillance object is a person, the side of the box near the person's head is defined as the upper side and the side near the feet as the lower side.

More specifically, in the present invention, if the surveillance object detected in the first image frame is assumed to be standing, the feet will be positioned near the lower side of the box representing the surveillance object detected in the first image frame. Of the two end-side center points of the box representing the surveillance object detected in an Mth image frame (hereinafter the "Mth lower-side center point" and the "Mth top-side center point"), the one closer to the lower-side center point of the box in the first image frame can be determined to be the Mth lower-side center point.
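This disambiguation rule can be sketched as follows, assuming the box fitter returns the two end-side center points in arbitrary order (all coordinates are invented for the example):

```python
# Of the two long-axis end-side centers of frame M's box, treat the one
# nearer the first frame's lower-side center (the feet) as frame M's
# lower-side center, on the assumption that the feet move less than the
# head during a fall.
import math

def assign_ends(first_bottom, end_a, end_b):
    """Return (bottom, top) for the two end-side centers of frame M."""
    if math.dist(first_bottom, end_a) <= math.dist(first_bottom, end_b):
        return end_a, end_b
    return end_b, end_a

# Frame 1: feet at (100, 180). In frame M the box has rotated and the
# end ordering from the box fitter is arbitrary:
bottom, top = assign_ends((100.0, 180.0), (240.0, 160.0), (105.0, 175.0))
print(bottom)  # the end nearer the original feet: (105.0, 175.0)
```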

5 is a view showing an example of a box shape in which a surveillance object is changed for each image frame when the surveillance object is collapsed in the left direction or the right direction with respect to the image capturing apparatus.

In FIG. 5, (a) shows the monitored flowerpot falling to the left, and (b) shows the monitored person falling to the right.

Reference numerals 51a and 51b show, in box form, the surveillance objects detected in the first image frame. Reference numerals 52a and 52b show, in box form, the surveillance objects detected in an image frame (the Lth frame) captured after the first image frame. Reference numerals 53a and 53b show, in box form, the surveillance objects detected in an image frame captured after the Lth image frame.

Comparing 52a and 53a against 51a, and 52b and 53b against 51b, it can be seen that when the surveillance object falls to the left or right, the long-axis length of the box stays almost the same while the top-side center point moves a long distance.

6 is a diagram showing an example of a box shape in which a surveillance object is changed for each image frame when the surveillance object is collapsed toward a direction in which the image capturing apparatus captures an image.

Reference numerals 61a and 61b show, in box form, the surveillance objects detected in the first image frame. Reference numerals 62a and 62b show, in box form, the surveillance objects detected in an image frame (the Mth frame) captured after the first image frame. Reference numerals 63a and 63b show, in box form, the surveillance objects detected in an image frame captured after the Mth image frame.

When the monitored object (flowerpot and/or person) falls in the direction in which the image capturing device captures images, comparing 62a and 63a against 61a, and 62b and 63b against 61b, it can be seen that both the long-axis length of the box and the distance moved by the top-side center point change to a certain degree.

As FIGS. 4 to 6 show, when the surveillance object falls, a change occurs in the long-axis length of the box representing the object, in the distance between the top-side center points, or in both.

The present invention detects the fall of the surveillance object by using the change in long-axis length and the distance between the top-side center points in combination.

In FIGS. 4 to 6, the monitored object is illustrated by two cases, a flowerpot and a person, showing that the present invention can detect the fall not only of a person but also of other objects. The present invention may also detect two or more surveillance objects in one video frame and detect the fall of each of them.

Returning to FIG. 1, the long-axis length variation calculating unit 130, the top-center distance calculating unit 140, and the fall detection unit 150 will now be described.

The long-axis length variation calculating unit 130 calculates, relative to the long-axis length of the box representing the surveillance object detected in the first image frame, the change in long-axis length of each box representing the surveillance object detected in the remaining image frames.

The top-center distance calculating unit 140 calculates the distance between the top-side center point of the box representing the surveillance object detected in the first image frame and the top-side center point of each box representing the surveillance object detected in the remaining image frames.

The first image frame may be the very first frame captured by the image capturing device, but more generally it means the first frame among the frames used for detecting the fall of the surveillance object. The surveillance object in the first image frame is generally in a standing state, before falling.

In addition, the present invention may use every image frame generated by the image capturing device to detect the surveillance object, or only image frames at a predetermined interval.

For example, if the image capturing device generates 30 image frames per second, the present invention may use only every fifth frame to detect the surveillance object. That is, if the image capturing device generates 18,000 image frames over 10 minutes, about 3,600 frames would be used over those 10 minutes.
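The arithmetic of this example can be checked in a few lines (the frame rate and duration are those from the example above):

```python
# Subsampling frames: with a 30 fps source, keeping only every 5th
# frame leaves 6 analysed frames per second, i.e. about 3600 frames
# over 10 minutes instead of 18000.
FPS = 30
STEP = 5
total_frames = FPS * 60 * 10            # 18000 frames in 10 minutes
sampled = list(range(0, total_frames, STEP))
print(len(sampled))  # -> 3600
```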

The frames used by the present invention to detect the surveillance object can be chosen in consideration of the capturing capability of the image capturing device, the available computing performance, and the required speed of fall detection.

The fall detection unit 150 can detect the fall of the monitored object using the calculated change in long-axis length and the calculated distance between the top-side center points.

Specifically, the fall detection unit 150 multiplies the calculated change in long-axis length by the calculated distance between the top-side center points, and can determine that the monitored object has fallen when the product is greater than a predetermined value.

That is, when the product of the change in long-axis length calculated using the first image frame and a specific frame and the corresponding distance between the top-side center points is equal to or greater than a preset value, the fall detection unit 150 can determine that the monitored object has fallen in that frame.
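Putting the two measurements together, the detection rule described above can be sketched as follows. This is a simplified illustration, not the patented implementation: box extraction is assumed to have already happened, and the threshold and coordinates are arbitrary placeholders.

```python
# Sketch of the fall-detection rule: the product of the percentage
# change in long-axis length and the top-side center-point distance is
# compared against a preset threshold.

def long_axis_change(l1, lm):
    # Percentage change of the long axis relative to the first frame.
    return (l1 - lm) / l1 * 100.0

def top_center_distance(p1, pm):
    # Euclidean distance between top-side center points.
    return ((p1[0] - pm[0]) ** 2 + (p1[1] - pm[1]) ** 2) ** 0.5

def detect_fall(boxes, threshold):
    """boxes: list of (long_axis_length, top_center_xy), one per frame;
    boxes[0] is the first (standing) frame."""
    l1, p1 = boxes[0]
    for lm, pm in boxes[1:]:
        score = long_axis_change(l1, lm) * top_center_distance(p1, pm)
        if score >= threshold:
            return True
    return False

# A person standing (tall box), then lying (short box, top center moved):
frames = [(180.0, (100.0, 20.0)),   # frame 1: standing
          (170.0, (102.0, 30.0)),   # mid-fall: small change
          (60.0, (140.0, 160.0))]   # fallen: axis shrank, top moved far
print(detect_fall(frames, threshold=1000.0))  # -> True
```

Multiplying the two quantities means a fall is flagged when both change together, which is what FIGS. 4 to 6 illustrate for the different fall directions; a fall in any single direction still drives the product up through one factor or the other.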

When the fall detection unit 150 detects a fall of the surveillance object, the fall-maintenance determining unit 160 can determine that the surveillance object remains fallen if the box representing the object does not change by more than a predetermined level for a preset time.

That is, the fall-maintenance determining unit 160 can determine whether the surveillance object remains down, or has gotten up again, after the fall detection unit 150 determines that it has fallen.

FIG. 7 is a diagram for explaining an example in which the fall maintenance determining unit determines that the surveillance object has remained in a fallen state.

Specifically, the fall maintenance determining unit 160 can determine that the surveillance object has remained in a fallen state if, over a certain range, the box shapes representing the surveillance object detected in a predetermined number of frames after the frame in which the fall was detected match the box shape representing the surveillance object detected in the frame in which the fall was detected.

The predetermined number of frames can be set in consideration of the time after which the surveillance object can be regarded as remaining fallen. For example, if the object is considered to remain fallen when there is no significant change in the box shape for 4 seconds, the number of frames corresponding to 4 seconds can be set as the predetermined number of frames.

For example, the case where the predetermined number of frames is two will be described with reference to FIG. 7.

In FIG. 7, FN denotes the box shape representing the surveillance object detected in the Nth image frame, and FN-1 denotes the box shape representing the surveillance object detected in the (N-1)th image frame. It is also assumed that the fall detection unit 150 determined that the surveillance object fell in the (N-1)th image frame. Since most of the box-shaped regions of FN-1 and FN match, the surveillance object can be determined to have remained in a fallen state.
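One way to test whether two box shapes "match over a certain range" is an intersection-over-union (IoU) comparison. The patent does not prescribe this metric, so the following is a hedged sketch under that assumption; the function names and the 0.8 threshold are hypothetical.

```python
# Hypothetical sketch: boxes are (x1, y1, x2, y2) with x1 < x2, y1 < y2.
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def fall_maintained(fall_box, later_boxes, threshold=0.8):
    """True if every box in the following frames still matches the
    box from the frame in which the fall was detected."""
    return all(iou(fall_box, b) >= threshold for b in later_boxes)

f_fall = (10, 50, 90, 80)  # box in the frame where the fall was detected
followups = [(11, 50, 91, 80), (10, 51, 90, 81)]  # boxes in the next two frames
print(fall_maintained(f_fall, followups))  # True
```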

A detailed description of the apparatus 100 for detecting a fall in an image according to an embodiment of the present invention is given in the method for detecting a fall in an image according to another embodiment of the present invention, described with reference to FIGS. 8 to 12.

In other words, the contents described for the method of detecting a fall in an image according to another embodiment of the present invention and the contents described for the fall detection apparatus 100 according to an embodiment of the present invention apply to each other.

Specifically, the contents described with reference to FIGS. 8 to 10 can be applied to the major-axis length variation calculating unit 130.

The contents described with reference to FIGS. 8, 11, and 12 may be applied to the upper-side center point distance calculating unit 140.

FIG. 8 is a flowchart illustrating a method of detecting a fall in an image according to another embodiment of the present invention.

The fall detection apparatus 100 detects a surveillance object in each of a plurality of image frames (S810).

The fall detection apparatus 100 displays each of the surveillance objects detected in the plurality of image frames in a box shape (S820).

The fall detection apparatus 100 calculates the major-axis length variation (S830).

A specific example in which the fall detection apparatus 100 calculates the major-axis length variation will be described with reference to FIGS. 9 and 10.

The fall detection apparatus 100 calculates the distance between the upper-side center points (S840).

A specific example in which the fall detection apparatus 100 calculates the distance between the upper-side center points will be described later with reference to FIGS. 11 and 12.

In operation S850, the fall detection apparatus 100 determines whether the surveillance object has fallen by using the calculated major-axis length variation and the calculated distance between the upper-side center points.

If the fall detection apparatus 100 determines that the surveillance object has fallen, it may determine whether the surveillance object remains in a fallen state (S860).

The specific method by which the fall detection apparatus 100 determines whether the surveillance object remains in a fallen state is the same as that described for the fall maintenance determining unit 160 with reference to FIGS. 1 and 7.

If the surveillance object is determined to remain in a fallen state, a visual and/or auditory notification may be provided to the administrator (S870).

FIG. 9 is a flowchart showing an example of a method of calculating the major-axis length variation.

Referring to FIG. 9, the fall detection apparatus 100 calculates the major-axis length (hereinafter, the "first major-axis length") of the box shape representing the surveillance object detected in the first image frame (S831).

The fall detection apparatus 100 calculates the major-axis length (hereinafter, the "remaining major-axis lengths") of each box shape representing the surveillance objects detected in the remaining image frames except the first image frame (S833).

The fall detection apparatus 100 corrects each of the remaining major-axis lengths using the first major-axis length (S835).

An example of correcting each of the remaining major-axis lengths using the first major-axis length (S835) will be described with reference to FIG. 10.

FIG. 10 is a diagram for explaining an example of correcting each major-axis length and calculating the major-axis length variation.

Referring to FIG. 10, F1 is the box shape representing the surveillance object detected in the first image frame, and FN is the box shape representing the surveillance object detected in the Nth image frame.

The major-axis length of the box shape F1 representing the surveillance object detected in the first image frame is 60, and the major-axis length of the box shape FN representing the surveillance object detected in the Nth image frame is 55.

Depending on how close the surveillance object is to the camera, the size of the box shape may differ; that is, the influence of perspective on the box size needs to be minimized. To minimize this influence, the apparatus 100 may correct each major-axis length by multiplying it by 100 divided by the major-axis length of F1. The major-axis length of F1 itself is likewise multiplied by 100 divided by the major-axis length of F1, so that its corrected value becomes 100.

Applying this to the case of FIG. 10, the fall detection apparatus 100 multiplies the major-axis length of F1 (60) by 100/60, and multiplies the major-axis length of FN (55) by 100/60. The corrected major-axis length of F1 becomes 100, and the corrected major-axis length of FN becomes 91.67.

Referring again to FIG. 9, the fall detection apparatus 100 can calculate each variation as the difference between the corrected first major-axis length and each corrected remaining major-axis length.

That is, in the example of FIG. 10, the difference between the corrected first major-axis length (100) and the corrected Nth major-axis length (91.67) is 8.33, which is the variation between the first major-axis length and the Nth major-axis length.
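The perspective correction and variation calculation of FIG. 10 can be sketched as follows. This is an illustrative sketch, not code from the patent; the function names are hypothetical.

```python
# Hypothetical sketch of the normalization in FIG. 10: every major-axis
# length is scaled by 100 / (major-axis length of F1), so F1 maps to 100
# and the variation is 100 minus the corrected length of the comparison frame.
def corrected_length(length, reference_length):
    return length * 100.0 / reference_length

def major_axis_variation(reference_length, comparison_length):
    return 100.0 - corrected_length(comparison_length, reference_length)

print(round(corrected_length(55, 60), 2))      # 91.67
print(round(major_axis_variation(60, 55), 2))  # 8.33
```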

Another example will be described with reference to Table 1.

Comparison frame | Major-axis length of the comparison frame | Corrected major-axis length | Major-axis length variation
2 | 59.4 | 99 | 1
3 | 59 | 98.3 | 1.7
4 | 58 | 96.7 | 3.3
5 | 57 | 95 | 5
6 | 56 | 93.3 | 6.7
7 | 55 | 91.7 | 8.3
8 | 54 | 90 | 10

In Table 1, the reference is the box shape representing the surveillance object detected in the first image frame, whose major-axis length is 60 and whose corrected major-axis length is therefore 100. The unit of the major-axis length can be set in consideration of the pixels of the image frame; for example, if the major axis spans 55 pixels, the major-axis length can be 55 pixels.

Comparison frame 2 means the second image frame, and comparison frame 8 means the eighth image frame. That is, the comparison frame number indicates the order of the frames, and the higher the number, the larger the time difference from the first image frame.

The major-axis length of the comparison frame is the major-axis length of the box shape representing the surveillance object detected in that comparison frame.

The corrected major-axis length is the value obtained by correcting the major-axis length as described above with reference to FIG. 10.

The major-axis length variation is the difference between the corrected first major-axis length (100) and the corrected major-axis length of the comparison frame.

It can be seen that the major-axis length variation increases in the later frames.

The method of calculating the major-axis length variation described with reference to FIG. 9 may be performed by the major-axis length variation calculating unit 130 of the apparatus 100 for detecting a fall in an image according to an embodiment of the present invention.

A detailed description of determining a fall of the surveillance object by using the calculated major-axis length variation is given with reference to Table 3 below.

FIG. 11 is a flowchart showing an example of a method of calculating the distance between the upper-side center points.

Referring to FIG. 11, the fall detection apparatus 100 detects the upper-side center point (hereinafter, the "first upper-side center point") and the lower-side center point (hereinafter, the "first lower-side center point") of the box shape representing the surveillance object detected in the first image frame (S841).

The fall detection apparatus 100 detects the lower-side center point of each box shape (hereinafter, the "remaining box shapes") representing the surveillance objects detected in the remaining image frames except the first image frame (S843).

The fall detection apparatus 100 translates each of the remaining box shapes so that its detected lower-side center point coincides with the position of the first lower-side center point (S845).

The fall detection apparatus 100 detects the upper-side center point (hereinafter, the "remaining upper-side center points") of each of the translated remaining box shapes (S847).

The fall detection apparatus 100 calculates the distance between each of the remaining upper-side center points and the first upper-side center point (S849).

A specific example of FIG. 11 will be described with reference to FIG. 12.

FIG. 12 is a diagram for explaining an example of calculating the distance between the upper-side center points.

F1 and FN are the same as those shown in FIG. 10. That is, F1 is the box shape representing the surveillance object detected in the first image frame, and FN is the box shape representing the surveillance object detected in the Nth image frame.

The reason for calculating the distance between the upper-side center points is that the position of the head changes when a person falls. The same applies to non-human objects.

To calculate the pure movement distance of the person's head, the fall detection apparatus 100 matches the foot position of the person detected in each of the remaining image frames to the foot position of the person, which is the surveillance object, detected in the first image frame, and then calculates the movement distance of the head.

That is, FN can be translated so that the lower-side center point PDN of FN coincides with the lower-side center point PD1 of F1 in FIG. 12.

When FN is translated in this way, the position of PUN, the upper-side center point of FN, changes to PUN'.

The fall detection apparatus 100 can calculate the distance between PU1, the upper-side center point of F1, and PUN' to obtain the distance between the upper-side center points of F1 and FN.

The distance between the upper-side center points can be expressed in units such as pixels.
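The translation-and-distance calculation of FIGS. 11 and 12 can be sketched as follows. The point names follow the figures; using the Euclidean distance is an assumption (the patent only says the distance can be expressed in units such as pixels), and the function name is hypothetical.

```python
import math

# Hypothetical sketch: translate FN so its lower-side center point PDN
# lands on PD1, then measure how far the upper-side center point moved
# from PU1. All points are (x, y) tuples.
def top_center_distance(p_u1, p_d1, p_un, p_dn):
    # Translation that maps PDN onto PD1.
    dx, dy = p_d1[0] - p_dn[0], p_d1[1] - p_dn[1]
    p_un_shifted = (p_un[0] + dx, p_un[1] + dy)  # this is PUN'
    return math.hypot(p_u1[0] - p_un_shifted[0], p_u1[1] - p_un_shifted[1])

# Example: the head moved sideways and downward while the feet stayed put.
d = top_center_distance((50, 10), (50, 90), (80, 35), (50, 90))
print(round(d, 2))  # 39.05
```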

For example, it can be as shown in Table 2.

Comparison frame | Distance between upper-side center points (unit: pixels)
2 | 3
3 | 10
4 | 35
5 | 87
6 | 174
7 | 330
8 | 370

In Table 2, the reference image frame is the first image frame, as in Table 1, and the comparison frame has the same meaning as in Table 1. In other words, the distance of 3 pixels for comparison frame 2 means that the distance between the upper-side center point of the box shape representing the surveillance object detected in the first image frame and the translated upper-side center point of the box shape representing the surveillance object detected in the second image frame is 3 pixels.

From the increase in the distance between the upper-side center points in the later frames, it can be inferred that the surveillance object is falling.

The method of calculating the distance between the upper-side center points described with reference to FIG. 11 can be performed by the upper-side center point distance calculating unit 140 of the apparatus 100 for detecting a fall in an image according to an embodiment of the present invention.

Referring back to FIG. 8, in operation S850, the fall detection apparatus 100 can determine whether the surveillance object has fallen by multiplying the calculated major-axis length variation by the calculated distance between the upper-side center points.

Specifically, when the value obtained by multiplying the major-axis length variation calculated for a comparison frame by the distance between the upper-side center points calculated for that frame is equal to or greater than a preset value, the fall detection apparatus 100 can determine that the surveillance object has fallen in that comparison frame.

This will be described with reference to Tables 1 and 2 above and Table 3 below.

Comparison frame | Major-axis length variation | Distance between upper-side center points (unit: pixels) | F-Value
2 | 1 | 3 | 3
3 | 1.7 | 10 | 17
4 | 3.3 | 35 | 115.5
5 | 5 | 87 | 435
6 | 6.7 | 174 | 1165.8
7 | 8.3 | 330 | 2739
8 | 10 | 370 | 3700

Table 3 shows the values obtained by multiplying the major-axis length variations of Table 1 by the corresponding upper-side center point distances of Table 2.

The F-Value is the value obtained by multiplying, for the same comparison frame, the major-axis length variation calculated using the first (reference) image frame by the distance between the upper-side center points.

When the F-Value is equal to or larger than a predetermined value, the fall detection apparatus 100 can determine that the surveillance object has fallen in that comparison frame.

For example, if the predetermined value is 2500, the fall detection apparatus 100 determines that the surveillance object has fallen in the seventh image frame. If no image frame yields a value equal to or greater than the preset value, fall detection can continue with subsequent frames.
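The F-Value decision of Table 3 can be sketched as follows. This is an illustrative sketch using the values of Tables 1 and 2; the function name is hypothetical.

```python
# Hypothetical sketch: multiply the major-axis length variation by the
# upper-side center point distance for each comparison frame and return
# the first frame whose product reaches the preset value (2500 here).
def first_fall_frame(variations, distances, frames, preset=2500):
    for frame, v, d in zip(frames, variations, distances):
        if v * d >= preset:
            return frame
    return None  # no fall detected yet; keep checking later frames

frames = [2, 3, 4, 5, 6, 7, 8]
variations = [1, 1.7, 3.3, 5, 6.7, 8.3, 10]   # Table 1
distances = [3, 10, 35, 87, 174, 330, 370]    # Table 2
print(first_fall_frame(variations, distances, frames))  # 7
```

With these values the product first reaches 2500 at comparison frame 7 (8.3 × 330 = 2739), matching the example in the text.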

By determining a fall using the F-Value, the method of detecting a fall in an image according to another embodiment of the present invention can accurately detect a fall of the surveillance object regardless of the direction in which the object falls.

FIG. 13 is a diagram illustrating an example in which the method of detecting a fall in an image according to another embodiment of the present invention is applied.

Referring to FIG. 13, a method for detecting a fall in an image according to another embodiment of the present invention may be applied to a case where a plurality of objects exist in one image frame.

When there are a plurality of objects in one image frame, the method of detecting a fall in an image according to another exemplary embodiment of the present invention may be applied to all of the objects to detect which object has fallen. Therefore, the method can detect falls of a plurality of objects simultaneously.

As described above, the method of detecting a fall in an image may be implemented as a computer-readable code on a computer-readable recording medium.

That is, a recording medium implementing the method for detecting a fall in an image according to the present invention stores a program for performing: a process of detecting a surveillance object in each of N image frames from a first image frame; a process of displaying the surveillance object detected in each image frame in a box shape; a process of calculating the variation of the major-axis length of each box shape representing the surveillance object detected in the remaining image frames, relative to the major-axis length of the box shape representing the surveillance object detected in the first image frame; a process of calculating the distance between the upper-side center point of the box shape representing the surveillance object detected in the first image frame and the upper-side center point of each box shape representing the surveillance object detected in the remaining image frames; and a process of detecting a fall of the surveillance object using the calculated major-axis length variation and the calculated distance between the upper-side center points.

A computer-readable recording medium includes all kinds of recording media in which data readable by a computer system are stored. Examples of the computer-readable recording medium include RAM, ROM, CD-ROM, magnetic tape, optical data storage devices, and floppy disks.

In addition, the computer-readable recording medium may be distributed over network-connected computer systems so that the computer-readable code can be stored and executed in a distributed manner. Functional programs, code, and code segments for implementing the method can be easily deduced by programmers in the technical field to which the present invention belongs.

FIG. 14 is another configuration diagram of an apparatus for detecting a fallout in an image according to an embodiment of the present invention.

The fall detection apparatus 100 may have the configuration shown in FIG. 14.

The fall detection apparatus 100 includes a processor 20 for executing instructions, a storage 40 for storing the fall detection program data, a memory 30 such as a RAM, a network interface 50, and a data bus 10 connected to the processor 20, the memory 30, the storage 40, and the network interface 50 to serve as a data movement path. It may also include a database 60 in which image frames and the like are stored.

FIG. 15 is a block diagram illustrating a system for detecting a fallout in an image according to another embodiment of the present invention.

Referring to FIG. 15, a fall detection system 1000 includes an image capturing device 200 such as a camera, a fall detection apparatus 100, and a notification device 300.

The image capturing device 200 may capture images to generate image frames and may transmit the generated image frames to the fall detection apparatus 100.

An overlapping description of the fall detection apparatus 100 is omitted, as it has been given above with reference to FIGS. 1 to 13.

The notification device 300 can provide a visual and/or auditory notification to the administrator when the fall detection apparatus 100 determines that the surveillance object has remained in a fallen state.

Each component in FIG. 1 may refer to software or to hardware such as a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). However, the components are not limited to software or hardware; each may be configured to reside in an addressable storage medium and to execute on one or more processors. The functions provided by the components may be implemented by more detailed subcomponents, or a plurality of components may be combined into a single component that performs a specific function.

While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments and that various modifications may be made without departing from its spirit and scope. The above-described embodiments are therefore to be understood as illustrative in all aspects and not restrictive.

Claims (18)

A method for detecting a fall in an image, comprising:
detecting a surveillance object in N image frames from a first image frame;
displaying the surveillance object detected in each image frame in a box shape;
calculating, relative to a major-axis length of a box shape representing the surveillance object detected in the first image frame, a variation of a major-axis length of each box shape representing the surveillance object detected in the remaining image frames excluding the first image frame;
calculating a distance between an upper-side center point of the box shape representing the surveillance object detected in the first image frame and an upper-side center point of each box shape representing the surveillance object detected in the remaining image frames excluding the first image frame; and
detecting a fall of the surveillance object using the calculated variation of the major-axis length and the calculated distance between the upper-side center points.
The method according to claim 1,
wherein the step of detecting the fall of the surveillance object comprises:
determining that the surveillance object has fallen in a specific image frame when the value obtained by multiplying the variation of the major-axis length calculated using the first image frame and the specific image frame by the distance between the upper-side center points is equal to or greater than a preset value.
The method according to claim 1,
wherein the step of calculating the variation of the major-axis length comprises:
correcting the major-axis length of each box shape representing the surveillance object detected in the first image frame and the remaining image frames, in consideration of the size of the object according to perspective.
The method of claim 3,
wherein the step of correcting the major-axis length comprises:
dividing the major-axis length of each box shape representing the surveillance object detected in the remaining image frames by the major-axis length of the box shape representing the surveillance object detected in the first image frame, and multiplying the result by 100.
The method according to claim 1,
wherein the box shape consists of two sides, an upper side and a lower side, in the major-axis direction, and
the point which, among the upper-side center point and the lower-side center point of the box shape representing the surveillance object detected in the Nth image frame, is adjacent to the lower-side center point of the box shape representing the surveillance object detected in the (N-1)th image frame is determined to be the lower-side center point of the box shape representing the surveillance object detected in the Nth image frame.
The method according to claim 1,
wherein the step of calculating the distance between the upper-side center points comprises:
matching the lower-side center point of each box shape representing the surveillance object detected in the remaining image frames to the lower-side center point of the box shape representing the surveillance object detected in the first image frame, and calculating the distance between the upper-side center points of the matched box shapes.
The method according to claim 1,
wherein the surveillance object is a specific person, and
the box shape consists of two sides, an upper side and a lower side, in the major-axis direction, the head end of the specific person being located at the upper side and the feet of the specific person being located at the lower side.
The method according to claim 1,
wherein the method for detecting a fall in an image further comprises:
determining that the surveillance object has remained in a fallen state when the box shape has not changed by more than a predetermined level for a preset time after the fall of the surveillance object is detected.
9. The method of claim 8,
wherein the step of determining that the surveillance object remains in a fallen state comprises:
determining that the surveillance object has remained in a fallen state when the box shape representing the surveillance object detected in a predetermined number of frames after the frame in which the fall of the surveillance object was detected coincides with the box shape representing the surveillance object detected in the frame in which the fall was detected.
The method according to claim 1,
wherein the step of displaying the surveillance object in a box shape comprises:
applying Principal Component Analysis (PCA) using contour pixel information of the surveillance object.
An apparatus for detecting a fall in an image, comprising:
an object detection unit for detecting a surveillance object in N image frames from a first image frame;
a boxing unit for displaying the surveillance object detected in each image frame in a box shape;
a major-axis length variation calculating unit for calculating, relative to a major-axis length of a box shape representing the surveillance object detected in the first image frame, a variation of a major-axis length of each box shape representing the surveillance object detected in the remaining image frames excluding the first image frame;
an upper-side center point distance calculating unit for calculating a distance between an upper-side center point of the box shape representing the surveillance object detected in the first image frame and an upper-side center point of each box shape representing the surveillance object detected in the remaining image frames excluding the first image frame; and
a fall detection unit for detecting a fall of the surveillance object using the calculated variation of the major-axis length and the calculated distance between the upper-side center points.
The apparatus of claim 11,
wherein the fall detection unit determines that the surveillance object has fallen in a specific image frame when the value obtained by multiplying the variation of the major-axis length calculated using the first image frame and the specific image frame by the distance between the upper-side center points is equal to or greater than a preset value.
The apparatus of claim 11,
wherein the major-axis length variation calculating unit corrects the major-axis length of each box shape representing the surveillance object detected in the first image frame and the remaining image frames, in consideration of the size of the object according to perspective.
The apparatus of claim 11,
wherein the upper-side center point distance calculating unit matches the lower-side center point of each box shape representing the surveillance object detected in the remaining image frames to the lower-side center point of the box shape representing the surveillance object detected in the first image frame, and calculates the distance between the upper-side center points of the matched box shapes.
The apparatus of claim 11,
further comprising a fall maintenance determining unit for determining that the surveillance object has remained in a fallen state when the box shape has not changed by more than a predetermined level for a preset time after the fall of the surveillance object is detected.
A system for detecting a fall in an image, comprising:
an image capturing device for generating image frames;
a fall detection apparatus for detecting a surveillance object in N image frames from a first image frame,
displaying the surveillance object detected in each image frame in a box shape,
calculating, relative to a major-axis length of a box shape representing the surveillance object detected in the first image frame, a variation of a major-axis length of each box shape representing the surveillance object detected in the remaining image frames excluding the first image frame,
calculating a distance between an upper-side center point of the box shape representing the surveillance object detected in the first image frame and an upper-side center point of each box shape representing the surveillance object detected in the remaining image frames excluding the first image frame, and
detecting a fall of the surveillance object using the calculated variation of the major-axis length and the calculated distance between the upper-side center points; and
a notification device for providing a visual or auditory notification when the fall of the surveillance object is detected.
The system of claim 16,
wherein the fall detection apparatus further comprises a fall maintenance determining unit for determining that the surveillance object has remained in a fallen state when the box shape has not changed by more than a predetermined level for a preset time after the fall of the surveillance object is detected, and
the notification device provides a visual or auditory notification when the surveillance object is determined to have remained in a fallen state.
A computer-readable recording medium on which a computer program for performing the method of any one of claims 1 to 9 is recorded.
KR1020140098359A 2014-07-31 2014-07-31 Method and Apparatus for Detecting Fall Down in Video KR101614412B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020140098359A KR101614412B1 (en) 2014-07-31 2014-07-31 Method and Apparatus for Detecting Fall Down in Video
PCT/KR2015/007741 WO2016018006A1 (en) 2014-07-31 2015-07-24 Method and device for sensing collapse in image, and system using same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020140098359A KR101614412B1 (en) 2014-07-31 2014-07-31 Method and Apparatus for Detecting Fall Down in Video

Publications (2)

Publication Number Publication Date
KR20160015728A true KR20160015728A (en) 2016-02-15
KR101614412B1 KR101614412B1 (en) 2016-04-29

Family

Family ID: 55217820

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020140098359A KR101614412B1 (en) 2014-07-31 2014-07-31 Method and Apparatus for Detecting Fall Down in Video

Country Status (2)

Country Link
KR (1) KR101614412B1 (en)
WO (1) WO2016018006A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102155724B1 (en) 2020-04-21 2020-09-14 호서대학교 산학협력단 Method and system for risk detection of objects in ships using deep neural networks

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080026326A (en) * 2006-09-20 2008-03-25 연세대학교 산학협력단 Apparatus and method for image based-monitoring elderly people with principal component analysis
KR20140056992A (en) * 2012-11-02 2014-05-12 삼성전자주식회사 Method of tracking motion using depth image and device thereof

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008152717A (en) * 2006-12-20 2008-07-03 Sanyo Electric Co Ltd Apparatus for detecting fall-down state
KR100927776B1 (en) * 2008-02-25 2009-11-20 동서대학교산학협력단 Hazardous Behavior Monitoring System
JP4753320B2 (en) * 2008-09-09 2011-08-24 東芝エレベータ株式会社 Escalator monitoring system
KR101180887B1 (en) * 2010-09-08 2012-09-07 중앙대학교 산학협력단 Apparatus and method for detecting abnormal behavior
KR101309366B1 (en) * 2012-02-16 2013-09-17 부경대학교 산학협력단 System and Method for Monitoring Emergency Motion based Image

Also Published As

Publication number Publication date
KR101614412B1 (en) 2016-04-29
WO2016018006A1 (en) 2016-02-04

Similar Documents

Publication Publication Date Title
CN104902246B (en) Video monitoring method and device
CN105283129B (en) Information processor, information processing method
US10893251B2 (en) Three-dimensional model generating device and three-dimensional model generating method
US9818026B2 (en) People counter using TOF camera and counting method thereof
US7627199B2 (en) Image surveillance/retrieval system
JP6786929B2 (en) Face monitoring system
US9875408B2 (en) Setting apparatus, output method, and non-transitory computer-readable storage medium
JP5853141B2 (en) People counting device, people counting system, and people counting method
US9781336B2 (en) Optimum camera setting device and optimum camera setting method
JP2016201756A5 (en)
US9476703B2 (en) Size measuring and comparing system and method
US20170289526A1 (en) Calibration device
US20150104067A1 (en) Method and apparatus for tracking object, and method for selecting tracking feature
US20180095549A1 (en) Detection method and detection apparatus for detecting three-dimensional position of object
US20150227806A1 (en) Object information extraction apparatus, object information extraction program, and object information extraction method
US20180211098A1 (en) Facial authentication device
US11715236B2 (en) Method and system for re-projecting and combining sensor data for visualization
KR101469099B1 (en) Auto-Camera Calibration Method Based on Human Object Tracking
JP7224832B2 (en) Information processing device, information processing method, and program
US20190385318A1 (en) Superimposing position correction device and superimposing position correction method
JP7188240B2 (en) Human detection device and human detection method
US9836655B2 (en) Information processing apparatus, information processing method, and computer-readable medium
KR101614412B1 (en) Method and Apparatus for Detecting Fall Down in Video
JP4707019B2 (en) Video surveillance apparatus and method
US10792817B2 (en) System, method, and program for adjusting altitude of omnidirectional camera robot

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
FPAY Annual fee payment

Payment date: 20190401

Year of fee payment: 4