WO2016018006A1 - Method and device for detecting a fall in an image, and system using same - Google Patents

Method and device for detecting a fall in an image, and system using same

Info

Publication number
WO2016018006A1
Authority
WO
WIPO (PCT)
Prior art keywords
image frame
image
surveillance object
box
surveillance
Prior art date
Application number
PCT/KR2015/007741
Other languages
English (en)
Korean (ko)
Inventor
이동훈
임희진
유수인
이승준
김정민
Original Assignee
Samsung SDS Co., Ltd. (삼성에스디에스 주식회사)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung SDS Co., Ltd. (삼성에스디에스 주식회사)
Publication of WO2016018006A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to a method and apparatus for detecting a fall in an image and a system using the same. More specifically, it relates to a method and apparatus capable of detecting the collapsed state of an object, such as a person, in an image, and a system using the same.
  • Known methods of detecting an object's movement in an image include color analysis and feature-point extraction.
  • One example of an event to be determined from such movement is the fall of an object such as a person.
  • An object of the present invention is to provide a method and apparatus for detecting a fall in an image that can detect the fall of an object, such as a person, regardless of the direction in which it falls, and a system using the same.
  • According to a first aspect of the present invention, a method of detecting a fall in an image includes: detecting a surveillance object in each of N image frames, starting from a first image frame; displaying the surveillance object detected in each image frame in the form of a box; calculating, based on the long-axis length of the box representing the surveillance object detected in the first image frame, the amount of change in the long-axis length of each box representing the surveillance object detected in the remaining image frames; calculating the distance between the upper-edge center point of the box representing the surveillance object detected in the first image frame and the upper-edge center point of each box representing the surveillance object detected in the remaining image frames; and detecting a fall of the surveillance object using the calculated change in long-axis length and the calculated distance between the upper-edge center points.
  • The detecting of the fall of the surveillance object may include detecting that the surveillance object has fallen in a specific image frame when the product of the change in long-axis length and the distance between the upper-edge center points, both calculated using the first image frame and that specific image frame, is equal to or greater than a preset value.
  • The calculating of the change in long-axis length may further include correcting the change in long-axis length between the box representing the surveillance object detected in the first image frame and each box representing the surveillance object detected in the remaining image frames, in consideration of the apparent size of the object according to perspective.
  • The correcting of the change in long-axis length may include dividing the change in long-axis length (the difference between the long-axis length of the box representing the surveillance object detected in the first image frame and that of each box representing the surveillance object detected in the remaining image frames) by the long-axis length of the box representing the surveillance object detected in the first image frame, and multiplying the result by 100.
  • The box consists of two sides in the long-axis direction together with an upper side and a lower side. In the box representing the surveillance object detected in the N-th image frame, the candidate point closer to the lower-edge center point of the box representing the surveillance object detected in the (N-1)-th image frame may be determined to be the lower-edge center point of the box in the N-th image frame.
  • The calculating of the distance between the upper-edge center points may include moving each box representing the surveillance object detected in the remaining image frames so that its lower-edge center point coincides with the lower-edge center point of the box representing the surveillance object detected in the first image frame, and then calculating the distance between the upper-edge center points.
  • When the surveillance object is a specific person, the box consists of two sides in the long-axis direction together with an upper side and a lower side; the top of the person's head lies on the upper side, the person's feet lie on the lower side, and the long-axis length may be proportional to the person's height.
  • The method may further include, after detecting the fall of the surveillance object, determining that the surveillance object remains in a collapsed state when the box does not change by more than a predetermined level for a preset time.
  • The determining that the surveillance object remains in a collapsed state may include determining that the collapsed state is maintained when, in a preset number of frames after the frame in which the fall was detected, the box representing the detected surveillance object matches, within a predetermined range or more, the box representing the surveillance object in the frame in which the fall was detected.
  • The displaying of the surveillance object in the form of a box may further include applying principal component analysis (PCA) to the contour pixel information of the surveillance object.
  • According to a second aspect of the present invention, an apparatus for detecting a fall in an image includes: an object detector configured to detect a surveillance object in each of N image frames, starting from a first image frame; a boxing unit configured to display the surveillance object detected in each image frame in the form of a box; a long-axis length change calculator configured to calculate, based on the long-axis length of the box representing the surveillance object detected in the first image frame, the change in long-axis length of each box representing the surveillance object detected in the remaining image frames; an upper-edge center point distance calculator configured to calculate the distance between the upper-edge center point of the box representing the surveillance object detected in the first image frame and the upper-edge center point of each box representing the surveillance object detected in the remaining image frames; and a fall detector configured to detect a fall of the surveillance object using the calculated change in long-axis length and the calculated distance between the upper-edge center points.
  • According to a third aspect of the present invention, a fall detection system includes an image capturing device that generates image frames, and an apparatus that detects a surveillance object in each of N image frames starting from a first image frame, displays the surveillance object detected in each frame in the form of a box, calculates, based on the long-axis length of the box representing the surveillance object detected in the first image frame, the change in long-axis length of each box representing the surveillance object detected in the remaining image frames, calculates the distance between the upper-edge center points, and detects a fall of the surveillance object from these values.
  • A computer-readable recording medium according to a fourth aspect of the present invention, for achieving the above technical objective, may store a computer program for performing the fall detection method in an image described above.
  • FIG. 1 is a block diagram of a device for detecting a fall in an image according to an exemplary embodiment.
  • FIG. 2 is a diagram illustrating an example in which the boxing unit displays a surveillance object detected in an image frame in the form of a box.
  • FIG. 3 is a diagram illustrating another example in which the boxing unit displays, in the form of a box, a surveillance object detected using principal component analysis.
  • FIG. 4 is a diagram illustrating how the box changes from image frame to image frame when the surveillance object falls toward the image capturing apparatus.
  • FIG. 5 is a diagram illustrating how the box changes from image frame to image frame when the surveillance object falls to the left or right as seen from the image capturing apparatus.
  • FIG. 6 is a diagram illustrating how the box changes from image frame to image frame when the surveillance object falls away from the image capturing apparatus, in the direction in which it captures images.
  • FIG. 7 is a diagram illustrating an example in which the fall maintaining determiner determines that the surveillance object maintains the collapsed state.
  • FIG. 8 is a flowchart illustrating a method of detecting a fall in an image according to another exemplary embodiment of the present invention.
  • FIG. 9 is a flowchart illustrating an example of a method of calculating the change in long-axis length.
  • FIG. 10 is a diagram illustrating an example of correcting each long-axis length and calculating the change in long-axis length.
  • FIG. 11 is a flowchart illustrating an example of a method of calculating the distance between upper-edge center points.
  • FIG. 12 is a diagram illustrating an example of calculating the distance between upper-edge center points.
  • FIG. 13 is a diagram illustrating an example of applying the fall detection method according to another exemplary embodiment.
  • FIG. 14 is another configuration diagram of the apparatus for detecting a fall in an image according to an exemplary embodiment.
  • FIG. 15 is a configuration diagram illustrating a system for detecting a fall in an image according to another exemplary embodiment of the present invention.
  • FIG. 1 is a block diagram of a device for detecting a fall in an image according to an exemplary embodiment.
  • The apparatus 100 for detecting a fall in an image may include an object detector 110, a boxing unit 120, a long-axis length change calculator 130, an upper-edge center point distance calculator 140, and a fall detector 150, and may further include a fall maintaining determiner 160.
  • the object detector 110 detects a surveillance object from image frames collected from an image capturing apparatus such as a camera.
  • For example, the object detector 110 detects a surveillance object in the first image frame, and may then detect the same surveillance object in the second, third, fourth, and fifth image frames.
  • Here, the same surveillance object means that the object itself is the same, not that its position or shape is the same. Even for the same object, the position or shape may change from image frame to image frame as the object moves.
  • The object detector 110 may detect the surveillance object in every frame or only in some frames.
  • the object detecting unit 110 may detect a surveillance object in a frame using a known technique.
  • The surveillance object whose fall is to be detected may be a person, but is not limited thereto and may be a non-human object.
  • the boxing unit 120 may represent the surveillance object detected in each frame in the form of a box.
  • the boxing unit 120 displays the surveillance object detected in the first image frame in the form of a box, represents the surveillance object detected in the second image frame in the form of a box, and displays the surveillance object detected in the third image frame. It can be represented as a box.
  • The boxing unit 120 may represent the surveillance object detected in each image frame by the object detector 110 in the form of a box by using principal component analysis (PCA).
  • FIG. 2 is a diagram illustrating an example in which the boxing unit displays a surveillance object detected in an image frame in the form of a box.
  • In FIG. 2, the object detector 110 detects a person sitting with an arm extended in an image frame, and the detected surveillance object is displayed in the form of a box.
  • The box shown in FIG. 2(a) is generated to enclose the outermost points of the detected surveillance object.
  • The box of FIG. 2(a) may lead to a somewhat inaccurate fall detection result, because the extended arm contributes a component that has little correlation with the principal component used to analyze the object's fall.
  • Therefore, the boxing unit 120 of the apparatus 100 for detecting a fall in an image may display the detected surveillance object in the form of a box using principal component analysis.
  • Any well-known principal component analysis technique may be used.
  • Specifically, the boxing unit 120 analyzes the principal component of the contour points of the detected surveillance object using principal component analysis, and represents the object as a box rotated along the analyzed principal component, as sketched below.
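  • As an illustration only (the patent does not prescribe an implementation), a rotated box can be derived from the contour pixels with standard PCA; the sketch below uses Python/NumPy, and the function name and array layout are assumptions:

```python
import numpy as np

def pca_box(contour: np.ndarray) -> np.ndarray:
    """Fit a rotated box to an object's contour pixels using PCA.

    contour: (P, 2) array of (x, y) contour pixel coordinates.
    Returns the four corners of the box in image coordinates.
    """
    mean = contour.mean(axis=0)
    centered = contour - mean
    # Eigenvectors of the covariance matrix are the principal axes.
    _, eigvecs = np.linalg.eigh(np.cov(centered.T))
    axes = eigvecs[:, ::-1]            # columns: long axis, then short axis
    proj = centered @ axes             # contour expressed in the PCA frame
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    corners = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                        [hi[0], hi[1]], [lo[0], hi[1]]])
    return corners @ axes.T + mean     # rotate back to image coordinates
```

  • Under this sketch, the long-axis length used below would be hi[0] - lo[0], and the two short-side center points (the upper- and lower-edge center points) are the midpoints of the corner pairs at lo[0] and hi[0].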
  • FIG. 3 is a diagram illustrating another example in which the boxing unit displays, in the form of a box, a surveillance object detected using principal component analysis.
  • FIG. 3(a) shows a box based on the outermost points of the detected surveillance object, while FIG. 3(b) shows a box oriented along the principal component of the detected surveillance object.
  • To improve the accuracy of fall detection, it is preferable that the boxing unit 120 use principal component analysis and display the detected surveillance object as in FIGS. 2(b) and 3(b), rather than as in FIGS. 2(a) and 3(a).
  • Using principal component analysis is only one example of how the boxing unit 120 may represent the detected surveillance object in the form of a box; the invention is not limited thereto.
  • The boxing unit 120 may also display the detected surveillance object in the form of a box using a method other than principal component analysis.
  • However, when the boxing unit 120 displays the surveillance object detected in each image frame in the form of a box, it is preferable to use the same method for every image frame.
  • FIG. 4 is a diagram illustrating an example of a box shape that is changed for each image frame when the surveillance object falls toward the image capturing apparatus.
  • 41a and 41b show surveillance objects detected in the first image frame in the form of a box.
  • 42a and 42b show surveillance objects detected in an image frame (K-th frame) photographed after the first image frame in the form of a box.
  • 43a and 43b show surveillance objects detected in an image frame photographed after the K-th image frame in the form of a box.
  • As shown in FIG. 4, when the surveillance object (the flowerpot or the person) falls toward the image capturing device, the long-axis length of the box stays the same or grows longer, and the upper-edge center point can be seen to move a long distance.
  • If the surveillance object is a person, the long-axis length is proportional to the person's height.
  • The box consists of two sides in the long-axis direction together with an upper side and a lower side.
  • The top of the head of the person, the surveillance object, lies on the upper side, and the person's feet lie on the lower side. Similarly, when the surveillance object is a thing, its top end lies on the upper side and its bottom end on the lower side while the object stands in its normal state.
  • The upper-edge center point is the center of the upper side, and the lower-edge center point is the center of the lower side.
  • Strictly speaking, the top of the person's head does not necessarily lie on the upper side, nor the person's feet on the lower side; some deviation may occur depending on how the boxing unit 120 draws the box. In addition, as the surveillance object falls, the positions of the upper and lower sides may be reversed. For this reason, when the surveillance object is a person, the side of the box near the person's feet is defined as the lower side and the opposite side as the upper side.
  • The feet will thus be positioned near the lower side of the box representing the surveillance object detected in the first image frame.
  • For the box representing the surveillance object detected in the M-th image frame (1 < M ≤ N), of its two short-side center points, the one closer to the lower-edge center point of the box representing the surveillance object detected in the (M-1)-th image frame may be determined to be the M-th lower-edge center point; the other becomes the M-th upper-edge center point.
  • FIG. 5 is a diagram illustrating how the box changes from image frame to image frame when the surveillance object falls to the left or right as seen from the image capturing apparatus.
  • In FIG. 5, (a) shows the flowerpot, a surveillance object, falling to the left, and (b) shows the person, a surveillance object, falling to the right.
  • 51a and 51b show surveillance objects detected in the first image frame in the form of a box.
  • 52a and 52b show surveillance objects detected in an image frame (L-th frame) photographed after the first image frame in the form of a box.
  • 53a and 53b show surveillance objects detected in an image frame captured after the L-th image frame in the form of a box.
  • FIG. 6 is a diagram illustrating how the box changes from image frame to image frame when the surveillance object falls away from the image capturing apparatus, in the direction in which it captures images.
  • 61a and 61b show surveillance objects detected in the first image frame in the form of a box.
  • 62a and 62b show surveillance objects detected in an image frame (M-th frame) photographed after the first image frame in the form of a box.
  • 63a and 63b show surveillance objects detected in an image frame captured after the M-th image frame in the form of a box.
  • In this way, the present invention detects the fall of the surveillance object by combining the change in long-axis length with the distance between the upper-edge center points.
  • Accordingly, the present invention can detect the fall not only of a person but also of a thing.
  • In addition, by detecting two or more surveillance objects in one image frame, the present invention can detect the fall of each of them.
  • The long-axis length change calculator 130 may calculate, based on the long-axis length of the box representing the surveillance object detected in the first image frame, the change in long-axis length of each box representing the surveillance object detected in the remaining image frames.
  • The upper-edge center point distance calculator 140 may calculate the distance between the upper-edge center point of the box representing the surveillance object detected in the first image frame and the upper-edge center point of each box representing the surveillance object detected in the remaining image frames.
  • The first image frame may be the very first frame of the image captured by the image capturing apparatus, but more precisely it means the first of the frames used to detect the fall of the surveillance object. The surveillance object in the first image frame is generally standing, the state before falling.
  • The frames used to detect the surveillance object may be all the image frames generated by the image capturing apparatus, or only the image frames corresponding to a predetermined period.
  • For example, when the image capturing apparatus generates 30 image frames per second, the present invention may use only the frames corresponding to multiples of 5, that is, every fifth frame. If the image capturing apparatus generates 18,000 image frames in 10 minutes, the present invention then uses about 3,600 frames in those 10 minutes to detect the surveillance object.
  • The frames used to detect the surveillance object may be chosen in consideration of the capture rate of the image capturing apparatus, the computational performance of the computer, the required speed of fall detection, and the like, as in the sketch below.
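  • A minimal sketch of such frame sampling (every fifth frame of a 30 fps stream, as in the example above; the generator name is an assumption):

```python
def sampled_frames(frames, step: int = 5):
    """Yield every step-th image frame, e.g. 3,600 of the 18,000 frames
    an image capturing device produces in 10 minutes at 30 fps."""
    for index, frame in enumerate(frames):
        if index % step == 0:
            yield frame
```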
  • The fall detector 150 may detect the fall of the surveillance object using the calculated change in long-axis length and the calculated distance between the upper-edge center points.
  • Specifically, the fall detector 150 may multiply the calculated change in long-axis length by the calculated distance between the upper-edge center points, and detect that the surveillance object has fallen when the product is equal to or greater than a preset value.
  • That is, the fall detector 150 may detect that the surveillance object has fallen in a specific frame when the product of the change in long-axis length and the distance between the upper-edge center points, both calculated using the first image frame and that specific frame, is equal to or greater than the preset value.
  • When the fall detector 150 has detected the fall of the surveillance object and the box representing the surveillance object then does not change by more than a predetermined level for a preset time, the fall maintaining determiner 160 may determine that the surveillance object remains in a collapsed state.
  • That is, even when the fall detector 150 has determined that the surveillance object fell, the fall maintaining determiner 160 may check whether the object got up again soon afterwards.
  • FIG. 7 is a diagram illustrating an example in which the fall maintaining determiner determines that the surveillance object maintains the collapsed state.
  • The fall maintaining determiner 160 may determine that the surveillance object remains collapsed when, in a preset number of frames after the frame in which the fall was detected, the box representing the detected surveillance object matches, within a predetermined range or more, the box representing the surveillance object in the frame in which the fall was detected.
  • The preset number of frames may be set in consideration of how long the surveillance object must remain still to be considered in a collapsed state. For example, if the object is considered collapsed when its box shows no significant change for 4 seconds, the number of frames corresponding to 4 seconds is the preset number of frames.
  • In FIG. 7, F N denotes the box representing the surveillance object detected in the N-th image frame, and F N-1 denotes the box representing the surveillance object detected in the (N-1)-th image frame.
  • Assume the fall detector 150 has determined that the surveillance object fell in the N-th image frame.
  • The box regions of F N-1 and F N can be seen to coincide for the most part. In such a case, it can be determined that the surveillance object remains in a collapsed state.
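  • The patent does not fix a particular measure for "coincide within a predetermined range"; one plausible sketch uses intersection-over-union of the boxes' axis-aligned extents (a simplification) as the matching criterion:

```python
def boxes_match(a, b, threshold: float = 0.8) -> bool:
    """True if boxes a and b overlap enough to count as 'unchanged'.

    a, b: (x_min, y_min, x_max, y_max); the 0.8 IoU threshold is assumed.
    """
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union >= threshold

def fall_maintained(fall_box, later_boxes) -> bool:
    """True if the box stays in place for the preset number of frames
    (e.g. the frames spanning about 4 seconds) after the detected fall."""
    return all(boxes_match(fall_box, box) for box in later_boxes)
```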
  • The foregoing descriptions of the components, including the long-axis length change calculator 130, apply equally to the corresponding steps of the method described below.
  • FIG. 8 is a flowchart illustrating a fall detection method for an image according to another exemplary embodiment of the present invention.
  • The apparatus 100 for detecting a fall in an image detects a surveillance object in each of a plurality of image frames (S810).
  • The apparatus 100 displays each surveillance object detected in the plurality of image frames in the form of a box (S820).
  • The apparatus 100 calculates the change in long-axis length (S830).
  • The apparatus 100 calculates the distance between the upper-edge center points (S840).
  • The apparatus 100 determines whether the surveillance object has fallen, using the calculated change in long-axis length and the calculated distance between the upper-edge center points (S850).
  • The apparatus 100 may then determine whether the surveillance object maintains the collapsed state (S860).
  • For this step, the description of the fall maintaining determiner 160 given with reference to FIGS. 1 and 7 may be applied.
  • If the surveillance object maintains the collapsed state, an alarm device may provide a visual and/or audible notification to the manager (S870).
  • FIG. 9 is a flowchart illustrating an example of a method of calculating a long axis length change amount.
  • The apparatus 100 for detecting a fall in an image calculates the long-axis length of the box representing the surveillance object detected in the first image frame (hereinafter, the "first long-axis length") (S831).
  • The apparatus 100 calculates the long-axis length of each box representing the surveillance object detected in the remaining image frames (hereinafter, the "remaining long-axis lengths") (S833).
  • The apparatus 100 corrects each of the remaining long-axis lengths using the first long-axis length (S835).
  • FIG. 10 is a diagram illustrating an example of correcting each long-axis length and calculating the change in long-axis length.
  • F 1 represents a surveillance object detected in a first image frame in the form of a box.
  • F N represents the surveillance object detected in the Nth image frame in the form of a box.
  • the long axis length of the box shape F 1 representing the surveillance object detected in the first image frame is 60.
  • the long axis length of the box form F N representing the surveillance object detected in the Nth image frame is 55.
  • The size of the box varies with how close the surveillance object is to an image capturing device such as a camera, so the effect of perspective on the box size must be minimized.
  • To remove this effect, the apparatus 100 for detecting a fall in an image may correct the long-axis length of F N by multiplying it by 100 / (the long-axis length of F 1).
  • Likewise, the apparatus 100 may multiply the long-axis length of F 1 by 100 / (the long-axis length of F 1), making it 100.
  • In the example of FIG. 10, the apparatus 100 multiplies the long-axis length of F 1 (60) by 100/60, and multiplies the long-axis length of F N (55) by 100/60.
  • As a result, the corrected long-axis length of F 1 is 100, and the corrected long-axis length of F N is 91.67.
  • The apparatus 100 may then calculate the change amount as the difference between the corrected first long-axis length and each of the corrected remaining long-axis lengths.
  • In the example of FIG. 10, the difference between the corrected first long-axis length (100) and the corrected N-th long-axis length (91.67) is 8.33.
  • Thus, 8.33 is the amount of change between the first long-axis length and the N-th long-axis length.
  • [Table 1]

    Comparison frame | Long-axis length of the comparison frame | Corrected long-axis length | Long-axis length change
    2 | 59.4 | 99 | 1
    3 | 59 | 98.3 | 1.7
    4 | 58 | 96.7 | 3.3
    5 | 57 | 95 | 5
    6 | 56 | 93.3 | 6.7
    7 | 55 | 91.7 | 8.3
    8 | 54 | 90 | 10
  • In Table 1, the long-axis length of the box representing the surveillance object detected in the first image frame is 60, and its corrected long-axis length is 100.
  • The unit of the long-axis length may be set in consideration of the pixels of the image frame. For example, if the long axis spans 55 pixels, the long-axis length is 55 pixels.
  • Comparison frame 2 refers to the second image frame, and comparison frame 8 to the eighth image frame. That is, the comparison-frame number indicates the order of the frames; the higher the number, the larger the time difference from the first image frame.
  • The long-axis length of a comparison frame is the long-axis length of the box representing the surveillance object detected in that comparison frame.
  • The corrected long-axis length is the value obtained by correcting the long-axis length as described above with reference to FIG. 10.
  • The long-axis length change is the difference between the corrected first long-axis length (100) and the corrected long-axis length of the comparison frame.
  • The method of calculating the change in long-axis length described with reference to FIG. 9 may be performed by the long-axis length change calculator 130 of the apparatus 100 for detecting a fall in an image according to an exemplary embodiment.
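  • The correction and change calculation above reduce to a single formula, change = 100 - (comparison length × 100 / first length). A minimal sketch reproducing rows of Table 1 (the function name is an assumption, not part of the patent):

```python
def long_axis_change(first_len: float, comparison_len: float) -> float:
    """Normalize lengths so the first frame's long axis becomes 100,
    then return the change amount relative to that baseline."""
    return 100.0 - comparison_len * 100.0 / first_len

# Rows of Table 1, where the first frame's long-axis length is 60:
assert round(long_axis_change(60, 55), 1) == 8.3   # comparison frame 7
assert round(long_axis_change(60, 54), 1) == 10.0  # comparison frame 8
```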
  • FIG. 11 is a flowchart illustrating an example of a method of calculating the distance between upper-edge center points.
  • The apparatus 100 for detecting a fall in an image detects the upper-edge center point (hereinafter, the "first upper-edge center point") and the lower-edge center point (hereinafter, the "first lower-edge center point") of the box representing the surveillance object detected in the first image frame (S841).
  • The apparatus 100 detects the lower-edge center point of each box representing the surveillance object detected in the remaining image frames (hereinafter, the "remaining boxes") (S843).
  • The apparatus 100 translates the remaining boxes so that each of their lower-edge center points coincides with the position of the first lower-edge center point (S845).
  • The apparatus 100 detects the upper-edge center point of each of the remaining boxes (hereinafter, the "remaining upper-edge center points") (S847).
  • The apparatus 100 calculates the distance between each of the remaining upper-edge center points and the first upper-edge center point (S849).
  • A specific example of FIG. 11 is described with reference to FIG. 12.
  • FIG. 12 is a diagram for describing an example of calculating a distance between upper edge center points.
  • F 1 and F N are the same as described with reference to FIG. 10: F 1 represents the surveillance object detected in the first image frame in the form of a box, and F N represents the surveillance object detected in the N-th image frame in the form of a box.
  • The reason for calculating the distance between the upper-edge center points is that a person's head position changes when the person falls; the same is true for things other than humans.
  • To calculate the pure movement of the head, the apparatus 100 aligns the foot position of the person detected in each remaining image frame with the foot position of the person, the surveillance object detected in the first image frame, and then calculates the distance the head has moved.
  • In FIG. 12, the apparatus 100 calculates the distance between P U1, the upper-edge center point of F 1, and P UN′, the upper-edge center point of the translated F N, to obtain the distance between the upper-edge center points of F 1 and F N.
  • The distance between the upper-edge center points may be expressed in units such as pixels.
  • Table 2 shows an example of the calculated distances between the upper-edge center points. As in Table 1, the reference image frame is the first image frame.

    [Table 2]

    Comparison frame | Distance between upper-edge center points (pixels)
    2 | 3
    3 | 10
    4 | 35
    5 | 87
    6 | 174
    7 | 330
    8 | 370

  • The comparison frame has the same meaning as in Table 1. For example, the distance of 3 pixels for comparison frame 2 means that, after translating the box representing the surveillance object detected in the second image frame so that its lower-edge center point coincides with that of the box representing the surveillance object detected in the first image frame, the distance between the two upper-edge center points is 3 pixels.
  • The method of calculating the distance between the upper-edge center points described with reference to FIG. 11 may be performed by the upper-edge center point distance calculator 140 of the apparatus 100 for detecting a fall in an image according to an exemplary embodiment.
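  • A minimal sketch of steps S841 through S849 (the function and argument names are assumptions; each argument is an (x, y) point taken from the boxes described above):

```python
import numpy as np

def top_center_distance(top_1, bottom_1, top_n, bottom_n) -> float:
    """Distance between upper-edge center points after translating the
    comparison box so that the lower-edge center points coincide,
    isolating the movement of the head alone."""
    top_1, bottom_1 = np.asarray(top_1, float), np.asarray(bottom_1, float)
    top_n, bottom_n = np.asarray(top_n, float), np.asarray(bottom_n, float)
    shifted_top_n = top_n + (bottom_1 - bottom_n)  # align lower-edge centers
    return float(np.linalg.norm(shifted_top_n - top_1))
```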
  • The apparatus 100 may determine whether the surveillance object has fallen using the product of the calculated change in long-axis length and the calculated distance between the upper-edge center points.
  • Specifically, the apparatus 100 for detecting a fall in an image may detect that the surveillance object fell in a given comparison frame when the product of that frame's change in long-axis length and its distance between the upper-edge center points is equal to or greater than a preset value.
  • [Table 3]

    Comparison frame | Long-axis length change | Distance between upper-edge center points (pixels) | F-Value
    2 | 1 | 3 | 3
    3 | 1.7 | 10 | 17
    4 | 3.3 | 35 | 115.5
    5 | 5 | 87 | 435
    6 | 6.7 | 174 | 1165.8
    7 | 8.3 | 330 | 2739
    8 | 10 | 370 | 3700
  • Table 3 shows, for each comparison frame, the product of the long-axis length change of Table 1 and the distance between upper-edge center points of Table 2.
  • The F-Value is the product of the change in long-axis length and the distance between the upper-edge center points, both calculated using the first (reference) image frame and the same comparison frame.
  • The apparatus 100 for detecting a fall in an image may determine that the surveillance object fell in a given frame when the frame's F-Value is equal to or greater than a preset value.
  • For example, with the values of Table 3, if the preset value lies between the F-Values of the sixth and seventh comparison frames, the apparatus 100 determines that the surveillance object fell in the seventh image frame. If no image frame has an F-Value at or above the preset value, the apparatus may continue detecting the fall of the surveillance object using later frames.
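  • As an illustration of this decision rule (the preset value of 2000 is an assumption chosen so that, with the values of Table 3, the seventh comparison frame is flagged, as in the example above):

```python
def detect_fall_frame(changes, distances, preset_value: float = 2000.0):
    """Return the first comparison-frame number whose F-Value (long-axis
    length change times upper-edge center distance) reaches preset_value,
    or None if no frame qualifies."""
    for frame_no, (change, dist) in enumerate(zip(changes, distances), start=2):
        if change * dist >= preset_value:
            return frame_no
    return None

# Columns of Table 3 for comparison frames 2 through 8:
changes = [1, 1.7, 3.3, 5, 6.7, 8.3, 10]
distances = [3, 10, 35, 87, 174, 330, 370]
assert detect_fall_frame(changes, distances) == 7
```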
  • By determining the fall of the surveillance object using the F-Value, the fall detection method according to another embodiment of the present invention can accurately detect the fall regardless of the direction in which the surveillance object falls.
  • FIG. 13 is a diagram illustrating an example of applying a fall detection method to an image, according to another exemplary embodiment.
  • the fall detection method of an image according to another exemplary embodiment of the present invention may be applied to a case where a plurality of objects exist in one image frame.
  • When a plurality of objects exist in one image frame, a falling object can be found by applying the fall detection method to every object. The fall detection method according to another embodiment of the present invention can therefore detect the falls of a plurality of objects at the same time.
  • the fall detection method of an image as described above may be embodied as computer readable codes on a computer readable recording medium.
  • The recording medium implementing the fall detection method in an image according to the present invention records a program for performing the processes of: detecting a surveillance object in each of N image frames, starting from a first image frame; displaying the surveillance object detected in each frame in the form of a box; calculating, based on the long-axis length of the box representing the surveillance object detected in the first image frame, the change in long-axis length of each box representing the surveillance object detected in the remaining image frames; calculating the distance between the upper-edge center point of the box representing the surveillance object detected in the first image frame and the upper-edge center point of each box representing the surveillance object detected in the remaining image frames; and detecting a fall of the surveillance object using the calculated change in long-axis length and the calculated distance between the upper-edge center points.
  • Computer-readable recording media include all kinds of recording media on which data readable by a computer system is stored. Examples include RAM, ROM, CD-ROM, magnetic tape, optical data storage, and floppy disks; the medium may also be implemented in the form of carrier waves (for example, transmission over the Internet).
  • the computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
  • Functional programs, codes, and code segments for implementing the method can be easily inferred by programmers in the art to which the present invention belongs.
  • FIG. 14 is another configuration diagram of the apparatus for detecting a fall in an image according to an exemplary embodiment.
  • the apparatus 100 for detecting falling in an image may have a configuration illustrated in FIG. 14.
  • The apparatus 100 for detecting a fall in an image may include a processor 20 for executing commands, a storage 40 in which the fall detection program data is stored, a memory 30 such as a RAM, a network interface 50 for exchanging data with external devices, and a data bus 10 connected to the network interface 50, the processor 20, and the memory 30 to serve as a data movement path. It may also include a database 60 in which image frames and the like are stored.
  • FIG. 15 is a configuration diagram illustrating a system for detecting a fall in an image according to another exemplary embodiment of the present invention.
  • The fall detection system 1000 includes an image capturing device 200 such as a camera, the apparatus 100 for detecting a fall in an image, and a notification device 300.
  • The image capturing device 200 may generate image frames by capturing images and transmit the generated image frames to the apparatus 100 for detecting a fall in an image.
  • The apparatus 100 for detecting a fall in an image has already been described with reference to FIGS. 1 to 14 and is not described again here.
  • The notification device 300 may provide a visual and/or audible notification to the manager when the apparatus 100 for detecting a fall in an image determines that the surveillance object maintains the collapsed state.
  • Each component of FIG. 1 may refer to software or hardware such as a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC).
  • However, the components are not limited to software or hardware; each component may be configured to reside in an addressable storage medium or to execute on one or more processors.
  • The functions provided by the components may be implemented by further subdivided components, or a plurality of components may be combined into one component that performs a specific function.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

A method for detecting a fall in an image according to one embodiment of the present invention may comprise the steps of: detecting a surveillance object in each of N image frames, starting from a first image frame; displaying, in the form of a box, the surveillance object detected in the respective image frames; calculating a change in the long-axis length of each box representing the surveillance object detected in the remaining image frames other than the first image frame, based on the long-axis length of the box representing the surveillance object detected in the first image frame; calculating a distance between the upper-edge center point of the box representing the surveillance object detected in the first image frame and the upper-edge center point of each box representing the surveillance object detected in the remaining image frames; and detecting a fall of the surveillance object by means of the calculated change in long-axis length and the calculated distance between the upper-edge center points.
PCT/KR2015/007741 2014-07-31 2015-07-24 Method and device for detecting a fall in an image, and system using same WO2016018006A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020140098359A KR101614412B1 (ko) 2014-07-31 2014-07-31 Method and apparatus for detecting a fall in an image, and system using the same
KR10-2014-0098359 2014-07-31

Publications (1)

Publication Number Publication Date
WO2016018006A1 true WO2016018006A1 (fr) 2016-02-04

Family

ID=55217820

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2015/007741 WO2016018006A1 (fr) 2014-07-31 2015-07-24 Method and device for detecting a fall in an image, and system using same

Country Status (2)

Country Link
KR (1) KR101614412B1 (fr)
WO (1) WO2016018006A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102155724B1 (ko) 2020-04-21 2020-09-14 호서대학교 산학협력단 Method and system for detecting danger of objects in a ship using a deep neural network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008152717A (ja) * 2006-12-20 2008-07-03 Sanyo Electric Co Ltd Fall state detection device
KR20090091539A (ko) * 2008-02-25 2009-08-28 동서대학교산학협력단 Dangerous motion monitoring system
JP2010064821A (ja) * 2008-09-09 2010-03-25 Toshiba Elevator Co Ltd Escalator monitoring system
KR20120025718A (ko) * 2010-09-08 2012-03-16 중앙대학교 산학협력단 Apparatus and method for detecting abnormal behavior
KR101309366B1 (ko) * 2012-02-16 2013-09-17 부경대학교 산학협력단 Image-based abnormal motion monitoring system and method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100882509B1 (ko) * 2006-09-20 2009-02-06 연세대학교 산학협력단 Apparatus and method for monitoring elderly movement in video using principal component analysis
KR101925879B1 (ko) * 2012-11-02 2019-02-26 삼성전자주식회사 Method and apparatus for motion estimation using depth images


Also Published As

Publication number Publication date
KR101614412B1 (ko) 2016-04-29
KR20160015728A (ko) 2016-02-15


Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 15826700; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 15826700; Country of ref document: EP; Kind code of ref document: A1)