WO2010024281A1 - Système de contrôle - Google Patents

Système de contrôle

Info

Publication number
WO2010024281A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
storage means
reference information
monitoring system
information
Prior art date
Application number
PCT/JP2009/064844
Other languages
English (en)
Japanese (ja)
Inventor
俊和 赤間
Original Assignee
有限会社ラムロック映像技術研究所
株式会社修成工業
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 有限会社ラムロック映像技術研究所, 株式会社修成工業 filed Critical 有限会社ラムロック映像技術研究所
Priority to JP2010526736A priority Critical patent/JP5047361B2/ja
Publication of WO2010024281A1 publication Critical patent/WO2010024281A1/fr

Links

Images

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19613 Recognition of a predetermined image pattern or behaviour pattern indicating theft or intrusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/04 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0438 Sensor means for detecting
    • G08B21/0476 Cameras to detect unsafe condition, e.g. video cameras

Definitions

  • The present invention relates to a monitoring system. More specifically, it relates to a monitoring system that monitors the actions of suspicious persons, or of persons who need to be watched over, for the purposes of crime prevention or watching over.
  • Conventional monitoring systems generally do no more than record video, by connecting the video signal from a camera to a monitor and to a recording device such as a VTR. For this reason, in order to prevent crimes in advance, an observer must constantly watch the displayed video, which creates problems in terms of the labor cost and effort required of that observer.
  • Network cameras, that is, cameras whose video can be viewed from a remote location over an Internet line or the like, are becoming widespread, and using them reduces the time and effort needed to check the video on site. Even with such a network camera, however, someone must still watch the camera video at all times, so a network camera alone cannot solve the problems of the observer's labor cost and effort.
  • There is also a growing social need for monitoring systems not only for the crime-prevention purpose described above but also for "watching over" people. For example, to prevent an elderly person living alone from dying unnoticed, it is currently necessary either for someone to be resident on site, or to install a network camera and have someone at a remote location constantly watch the video it transmits. "Watching over" with a conventional monitoring system therefore also poses problems of labor cost and effort.
  • As one countermeasure, a method is conceivable in which a heat sensor that reacts to people or the like is installed in addition to the camera and, when the heat sensor detects a person, a remote observer is notified of the detection, for example by e-mail over an Internet line. However, this method has the problem that a suspicious person cannot be detected at a location where a heat sensor cannot be installed (for example, on the public road in front of the house).
  • Another countermeasure is a moving-object detection sensor, a technique that analyzes the video signal from the camera and detects changes in the picture; in this case no heat sensor is needed.
  • A typical moving-object detection method stores a plurality of video frames captured at different times, for example the current frame and the immediately preceding frame. The absolute value of the difference between the current frame and the preceding frame is taken as the amount of change, and when this amount of change exceeds a certain threshold it is determined that there is movement. (A minimal sketch of this frame-difference approach is shown below.)
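  • The following is a minimal, illustrative sketch of the frame-difference detection just described, written in Python with NumPy. The function name, threshold values, and the assumption of 8-bit grayscale frames are choices made for illustration and are not specified in the document.

        import numpy as np

        def motion_detected(prev_frame: np.ndarray, curr_frame: np.ndarray,
                            pixel_threshold: int = 30, count_threshold: int = 500) -> bool:
            """Return True when the frame-to-frame change exceeds a threshold.

            prev_frame / curr_frame: 8-bit grayscale images of equal shape.
            pixel_threshold: per-pixel absolute-difference level treated as 'changed'.
            count_threshold: number of changed pixels needed to report movement.
            """
            # Absolute value of the inter-frame difference (the "amount of change").
            diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
            changed_pixels = int(np.count_nonzero(diff > pixel_threshold))
            return changed_pixels > count_threshold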
  • The first problem with conventional monitoring systems is that a heat sensor or moving-object detection sensor also reacts to small animals such as dogs and cats and judges that there is an abnormality. For example, when the system is used for crime prevention outdoors, it reacts not only to intruders but also to small animals such as dogs and cats. If the sensor reacts every time a cat passes through the site and notifies the observer over the Internet line that there is an abnormality, even though there is no possibility of a crime, the observer is kept busy with confirmation work such as checking the network-camera video or dispatching someone to the scene to confirm that the site is safe; such a system cannot be called a good monitoring system.
  • The second problem with conventional monitoring systems is that the moving-object detection sensor reacts to moving objects other than people and judges that there is an abnormality. For example, when the system is used for crime prevention outdoors, it reacts not only to intruders but also to cars traveling on the road in front of the site. If the sensor reacts every time a car passes and notifies the observer over the Internet that there is an abnormality, the observer is again kept busy with confirmation work such as checking the network-camera video or dispatching someone to the site, so such a system cannot be called a good monitoring system.
  • The third problem with conventional monitoring systems is that, even if a heat sensor or moving-object detection sensor is installed in the living room of an elderly person living alone in order to watch over that person, the sensor cannot distinguish the normal state from an abnormal situation in which the person's condition has deteriorated and he or she is crouching or has fallen down. Someone must therefore stay on site, visit the site frequently, or keep watching the network-camera video, so the problems of the observer's labor cost and effort remain.
  • The present invention has been made in view of the above points, and its object is to provide a monitoring system capable of solving the first to third problems.
  • To achieve this object, the monitoring system of the present invention comprises object reference information storage means for storing, as reference information of an object to be detected from the video, information on the upper and lower limits of the shape of that object, and video analysis means for determining whether or not the shape of an object in the input video falls within the upper and lower limits of the reference information stored in the object reference information storage means.
  • That is, information on the upper limit (for example, a large rectangle) and the lower limit (for example, a small rectangle) of the shape of the object to be detected is stored in the object reference information storage means, and the video analysis means determines whether or not the shape of an object in the input video falls within those upper and lower limits; in this way the first and second problems described above can be solved.
  • For example, if the large rectangle is made slightly larger than a person and the small rectangle is made larger than a small animal, then small animals such as dogs and cats, as well as larger objects such as automobiles, can be excluded from detection, so the first and second problems can be solved as described above.
  • The monitoring system of the present invention may further comprise effective area information storage means for storing, as effective area information, information specifying a predetermined area within the imaged area, the video analysis means then determining whether an object in the input video is located within the effective area stored in the effective area information storage means and whether the shape of that object falls within the upper and lower limits of the reference information stored in the object reference information storage means.
  • A distant person and a nearby bird, although completely different in actual size, may appear the same size on the screen and could therefore be confused. By limiting detection to the effective area, the problem of mistakenly detecting a nearby bird can be avoided, so the first problem (distinguishing small animals from people) can be solved with higher accuracy.
  • By further providing video display means for displaying the video input to the video analysis means and object reference information input means for inputting reference information into the object reference information storage means, the first, second, and third problems can be solved without being affected by the conditions of the site where the imaging device such as a camera is installed.
  • For example, a television monitor can be used as the video display means, and a computer mouse can be used as the object reference information input means for entering the reference information of the object to be detected into the object reference information storage means. Because the reference information can be set while viewing the actual video, the object (person) can be detected even if its apparent shape or size changes, and the first, second, and third problems can be solved.
  • Likewise, by providing video display means for displaying the video input to the video analysis means and effective area information input means for inputting effective area information into the effective area information storage means, the system can be set up without being affected by the installation site of the imaging device such as a camera. For example, a television monitor can be used as the video display means, and a computer mouse can be used as the effective area information input means for entering the effective area information into the effective area information storage means.
  • By further providing analysis result notification means for reporting the judgment result of the video analysis means, the observer can be informed of the situation. For example, if communication means using the Internet is used as the analysis result notification means, the situation can be reported to a remote observer; if acoustic means such as a speaker is used, the situation can be reported by sound; and if light-emitting means such as a lamp is used, the situation can be reported by lighting the lamp. (An illustrative sketch of such a notification step is shown below.)
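  • The sketch below illustrates, in Python, how a judgment result might be routed to several analysis result notification means of the kinds mentioned above. The JudgmentResult class, the notify() interface, and the stand-in channels are assumptions made for illustration; they are not part of the described system.

        from dataclasses import dataclass

        @dataclass
        class JudgmentResult:
            intruder_detected: bool
            description: str

        def notify(result: JudgmentResult, channels: list) -> None:
            # Report the situation through every configured notification means.
            if not result.intruder_detected:
                return
            for channel in channels:
                channel(result.description)

        # Stand-in notification channels (e-mail, speaker, lamp).
        def email_channel(message: str) -> None:
            print(f"[e-mail to remote observer] {message}")

        def speaker_channel(message: str) -> None:
            print(f"[speaker warning sound] {message}")

        def lamp_channel(message: str) -> None:
            print("[lamp switched on]")

        notify(JudgmentResult(True, "Intruder (person) detected"),
               [email_channel, speaker_channel, lamp_channel])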
  • The monitoring system may further comprise video transmission means configured to transmit the video input to the video analysis means to a terminal connected via a communication line, and privacy can be protected by controlling how this video is transmitted.
  • The "communication line" here means a path over which signals are transmitted and received, and includes both wired and wireless cases. "Terminals connected via a communication line" include a terminal connected by a wired cable such as a LAN cable, a terminal connected by a wireless LAN, a mobile phone terminal, and the like.
  • When the video of the monitoring system is transmitted to a remote terminal, the privacy of the person being monitored can be protected by not transmitting the video at all times but transmitting it only in the event of an emergency. Alternatively, even if the video is always transmitted, privacy can be protected by also transmitting a signal indicating whether or not an emergency state exists and configuring the terminal to display the received video only when that signal indicates an emergency. (A sketch of this emergency-flag arrangement is shown below.)
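  • The following is a minimal sketch of the terminal-side privacy rule described above: each received frame arrives together with an emergency flag and is displayed only when the flag is set. The message format and function name are assumptions made for illustration.

        from typing import Optional

        def handle_message(emergency: bool, frame: bytes) -> Optional[bytes]:
            """Return the frame to display, or None to keep it hidden."""
            if emergency:
                return frame   # emergency state: show the received video
            return None        # normal state: hide the video to protect privacy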
  • The monitoring system of the present invention may also comprise object reference information storage means for storing, as reference information of a first object to be detected from the video, upper and lower limit information of the shape of the first object and, as reference information of a second object to be detected from the video, upper and lower limit information of the shape of the second object; first video analysis means for determining whether the shape of an object in the input video falls within the upper and lower limits of the reference information of the first object stored in the object reference information storage means; second video analysis means for determining whether the shape of an object in the input video falls within the upper and lower limits of the reference information of the second object stored in the object reference information storage means; and logic determination means for performing a logical determination based at least on the determination result of the first video analysis means and the determination result of the second video analysis means.
  • The third problem described above can be solved by having the logic determination means perform a logical determination (for example, AND or NAND) on the determination result of the first video analysis means and the determination result of the second video analysis means.
  • For example, suppose the first camera is used to detect a fallen person and the second camera is used to detect a pedestrian other than the fallen person. When the first camera detects a fallen person, a notification calling for help is normally required; but when the second camera also detects another pedestrian (that is, a person who can assist), there is no need to call for help. By NAND-ing (negating) the fall detection of the first camera with the pedestrian detection of the second camera, unnecessary calls for help are suppressed and the observer's labor is reduced, so the third problem can be solved. (A minimal sketch of this logic is shown below.)
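  • The sketch below expresses the NAND-style suppression described above as a single Python function. The function name and boolean inputs are assumptions made for illustration.

        def needs_help_notification(fall_detected_cam1: bool,
                                    pedestrian_detected_cam2: bool) -> bool:
            """Notify only when a fall is detected AND no helper (pedestrian) is detected.

            The second camera's pedestrian detection negates the first camera's
            fall detection, as described in the text.
            """
            return fall_detected_cam1 and not pedestrian_detected_cam2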
  • Here, "based at least on the determination result of the first video analysis means and the determination result of the second video analysis means" covers not only the case where the logical determination is performed on the determination results of the first and second video analysis means alone, but also the case where the determination result of a further video analysis means is taken into account in addition.
  • The present invention also provides a monitoring set comprising a first monitoring system and a second monitoring system, each having object reference information storage means for storing, as reference information of an object to be detected from the video, upper and lower limit information of the shape of that object, video analysis means for determining whether the shape of an object in the input video falls within the upper and lower limits of the reference information stored in the object reference information storage means, and video transmission means configured to transmit the video input to the video analysis means to a terminal connected via a communication line; and a terminal comprising storage means for readably storing, together with the video transmitted from at least the video transmission means of the first monitoring system and the video transmission means of the second monitoring system, reception history information for that video, and display means for displaying the reception history information stored in the storage means, the terminal being configured so that, when an item of reception history information displayed on the display means is selected, the video corresponding to that reception history information is displayed on the display means.
  • Likewise, a monitoring set may be provided in which the first and second monitoring systems each further comprise effective area information storage means for storing, as effective area information, information specifying a predetermined area within the imaged area, the video analysis means determining whether an object in the input video is located within the effective area and whether its shape falls within the upper and lower limits of the reference information stored in the object reference information storage means, with the terminal configured as described above.
  • Because the terminal is configured to display the video corresponding to a selected item of reception history information, even when videos are received from a plurality of monitoring cameras simultaneously or one after another, the observer can later check any desired video on the basis of the reception history information. (A small sketch of such a reception history is shown below.)
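  • The following is an illustrative sketch of the terminal-side reception history described above. The class and field names are assumptions; the description only requires that each received video be stored together with readable reception history information and be redisplayable by selecting its history entry.

        from dataclasses import dataclass, field
        from datetime import datetime
        from typing import List

        @dataclass
        class ReceivedVideo:
            system_id: str          # which monitoring system sent the video
            received_at: datetime   # reception history information
            video_path: str         # where the stored video can be read back

        @dataclass
        class Terminal:
            history: List[ReceivedVideo] = field(default_factory=list)

            def store(self, system_id: str, video_path: str) -> None:
                self.history.append(ReceivedVideo(system_id, datetime.now(), video_path))

            def list_history(self) -> List[str]:
                # Entries shown on the display means for the observer to select from.
                return [f"{v.received_at:%Y-%m-%d %H:%M:%S}  {v.system_id}" for v in self.history]

            def select(self, index: int) -> str:
                # Selecting a history entry yields the corresponding stored video.
                return self.history[index].video_path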
  • The "communication line" and "terminals connected via a communication line" here have the same meanings as described above.
  • "The video transmitted from at least the video transmission means of the first monitoring system and the video transmission means of the second monitoring system" covers not only the case where video is transmitted from the video transmission means of the first and second monitoring systems alone, but also the case where video transmitted from the video transmission means of a third or further monitoring system is included as well.
  • The monitoring system of the present invention can thus solve the first to third problems described above.
  • The drawings include a diagram showing the configuration of Embodiment 1 of the monitoring system of the present invention, together with diagrams showing camera images in various situations.
  • A first embodiment of the present invention, which solves the problems of the conventional monitoring system described above, will now be described.
  • FIG. 1 shows the configuration of a monitoring system according to the first embodiment of the present invention.
  • In FIG. 1, reference numeral 1 denotes video storage means for storing a video signal; as an example, it is assumed that a plurality of frames captured at different times are stored. Reference numeral 2 denotes object reference information storage means for storing reference information of the object to be detected among the subjects in the video, and reference numeral 3 denotes video analysis means for analyzing the video signal stored in the video storage means 1 and determining whether a subject matches the object reference information stored in the object reference information storage means 2. Reference numeral 4 denotes analysis result information storage means for storing the analysis result information of the video analysis means 3, and reference numeral 5 denotes a computer; the video storage means 1, the object reference information storage means 2, the video analysis means 3, and the analysis result information storage means 4 are implemented as software programs on the computer 5.
  • Reference numeral 6 denotes a camera, and reference numeral 7 denotes a monitor (video display means) for displaying video. A mouse 8 is used as the object reference information input means. It is assumed that the camera 6 can convert an analog video signal into a digital signal and input it to the computer 5.
  • In operation, the video signal of the camera 6 is stored in the video storage means 1, and the stored video is analyzed by the video analysis means 3 to determine the shape and size of the subject.
  • Information on the shape and size of the intruder (person) to be detected is stored in advance in the object reference information storage means 2 as object reference information; the object reference information is compared with the shape and size of the subject to determine whether they match, that is, whether the subject has the shape and size of an intruder (person), and the result is stored in the analysis result information storage means 4.
  • The camera is not always installed so as to view the subject horizontally as shown in FIG. 3(a); depending on the situation there are countless combinations of distance and angle between the camera and the subject. FIG. 2(b) schematically shows the camera image when the camera photographs a person from above as shown in FIG. 2(a), and FIG. 3(b) schematically shows the camera image when the camera photographs a person horizontally from the side as shown in FIG. 3(a).
  • First, information on the shape and size of the object to be detected (for example, a person) is stored in the object reference information storage means 2. As one way of entering this object reference information, a small rectangle B (hereinafter "small rectangle") is set as the lower limit and a large rectangle A (hereinafter "large rectangle") as the upper limit, as shown in FIG. 4. The small rectangle in FIG. 4 is set slightly larger than the small animal (cat) b, and the large rectangle in FIG. 4 is set slightly larger than the person a. The vertical and horizontal lengths of the small rectangle and of the large rectangle are stored in the object reference information storage means 2 of FIG. 1 as reference information of the object to be detected.
  • FIG. 6 is a diagram showing the vertical length D and the horizontal length E of an object (person) C. If the vertical length D is not less than the vertical length of the small rectangle in FIG. 5 and not more than the vertical length of the large rectangle, and the horizontal length E in FIG. 6 is not less than the horizontal length of the small rectangle in FIG. 5 and not more than the horizontal length of the large rectangle, such an object (person) is detected. (A minimal sketch of this check is shown below.)
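  • The following is a minimal sketch of the size check just described. The variable names D and E follow the description; the function name and the way the rectangle lengths are passed in are assumptions made for illustration.

        def within_reference(D: float, E: float,
                             small_h: float, small_w: float,
                             large_h: float, large_w: float) -> bool:
            """True when the object's size lies between the small and large rectangles."""
            vertical_ok = small_h <= D <= large_h
            horizontal_ok = small_w <= E <= large_w
            return vertical_ok and horizontal_ok

        # Example with made-up pixel sizes: a person-sized box passes,
        # a cat-sized box does not.
        # within_reference(170, 60, 50, 40, 200, 100)  -> True
        # within_reference(30, 45, 50, 40, 200, 100)   -> False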
  • Next, the video signal from the camera 6 in FIG. 1 is stored in the video storage means 1, and the video analysis means 3 analyzes the shape and size of the subject. The analysis method is not limited; one example is to take the absolute value of the inter-frame difference between a past frame from several seconds earlier, in which no subject (moving object) appears, and the current frame, in which the subject is present.
  • FIGS. 8(a) to 8(c) show, respectively, the past frame, the current frame, and the absolute value of the difference between them, and FIG. 9 also shows the absolute value of the difference. The past frame contains only the background, so when a person f appears in the current frame the absolute value of the difference between the past frame and the current frame becomes large in the portion where the person is shown. The shape and size of the subject can therefore be obtained by extracting the portion where the absolute value of the difference is large. (A minimal sketch of this extraction is shown below.)
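  • The sketch below illustrates extracting the subject's vertical and horizontal lengths from the absolute inter-frame difference, as described above. The use of NumPy, the threshold value, and the assumption of grayscale frames are choices made for illustration; the result can be fed to a size check such as the one sketched earlier.

        import numpy as np

        def subject_size(past_frame: np.ndarray, current_frame: np.ndarray,
                         pixel_threshold: int = 30):
            """Return (D, E) of the region where the difference is large, or None."""
            diff = np.abs(current_frame.astype(np.int16) - past_frame.astype(np.int16))
            mask = diff > pixel_threshold      # portion where the difference is large
            ys, xs = np.nonzero(mask)
            if ys.size == 0:
                return None                    # no subject detected
            D = int(ys.max() - ys.min() + 1)   # vertical length of the subject
            E = int(xs.max() - xs.min() + 1)   # horizontal length of the subject
            return D, E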
  • The shape and size information (object reference information) of the "person" to be detected, stored in the object reference information storage means 2 of FIG. 1, is then compared with the shape and size of the subject, and the result is stored in the analysis result information storage means 4.
  • By linking the monitoring system of this embodiment to an external system according to the purpose (for example, by connecting it to an Internet line), various applications become possible: when the result stored in the analysis result information storage means 4 is "there is an intruder (person)", a remote person can be notified by e-mail, a warning sound can be emitted through an external speaker, or an external light can be turned on.
  • The video of the monitoring system can be transmitted to a remote terminal (for example, a desktop personal computer or a portable small personal computer) and checked on that terminal. In consideration of privacy, however, it is preferable that the video be hidden in the normal state (the state in which no abnormality has occurred) and that the video (moving image or still image) be displayed only when an emergency, such as a person falling down, has occurred.
  • When a plurality of (for example, two) monitoring systems of this embodiment are used to watch over a plurality of places, a plurality of cameras may transmit videos of abnormalities simultaneously or one after another, and the observer may overlook an earlier video, for example when the display switches from the first video to a later one. It is therefore preferable that a reception history be displayed on the observer's terminal and that a video selected from that history can be viewed.
  • In the description above, the rectangle set (meaning the combination of a small rectangle and a large rectangle) stored in the object reference information storage means 2 of FIG. 1 is chosen so as to exclude small animals such as dogs and cats, but the monitoring system of this embodiment can serve various uses simply by changing the rectangle set. For example, it can be used to distinguish children from adults: a small rectangle slightly larger than an infant or child and a large rectangle slightly larger than an adult are set. In a kindergarten or elementary school, the system can then be applied so as to notify the staff room automatically when an adult (a possible suspicious person) enters, without reacting to infants or children.
  • The monitoring system of this embodiment can also cope flexibly with the problem that a subject looks different depending on where the camera is installed. For example, as shown in FIG. 3(a), when the camera captures a person horizontally from the side, the person (subject) in the video looks like a tall rectangle; on the other hand, as shown in FIG. 2(a), when the camera captures a person from above, the image looks circular or nearly square. Thus even the same person (subject) looks different depending on the relative position of the camera. Because the shape of the object to be detected can be specified on site, while viewing the screen with the camera in place, the monitoring system of this embodiment is largely independent of the installation location.
  • In the monitoring system of this embodiment it is also possible to detect "a state in which a person has fallen" by setting a plurality of rectangle sets. For example, an elderly person living alone might suddenly fall ill and collapse, and being left unattended for several days could lead to a serious accident, even death; such an accident can be prevented by detecting "a state in which a person has fallen".
  • The "state in which a person has fallen" appears as different shapes and sizes in the camera image, depending on the relative positions of the camera and of the fallen person. For example, when the camera photographs a person lying down from the side as shown in FIG. 10, the camera image looks as shown in FIG. 11; when the camera photographs a person lying down as shown in FIG. 12 from the side, the camera image looks as shown in FIG. 13. As a comparison of FIG. 11 and FIG. 13 shows, even when the camera photographs a fallen person from the same position, the image differs greatly depending on the direction of the fall.
  • The monitoring system of this embodiment can cope with this as well. A plurality of rectangle sets are designated and stored in the object reference information storage means 2 shown in FIG. 1 as reference information of the object to be detected, so that both FIG. 11 and FIG. 13 can be detected as a fallen state, and when the video analysis means 3 detects a subject that matches any one of the plurality of rectangle sets, it determines that "a person has fallen"; in this way the fallen state can be detected reliably.
  • Specifically, a rectangle set as shown in FIG. 14 is designated to detect the fallen state of FIG. 11, and a rectangle set as shown in FIG. 15 is designated to detect the fallen state of FIG. 13. (A minimal sketch of matching against several rectangle sets is shown below.)
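  • The following is a minimal sketch of matching a subject against several registered rectangle sets, as described above for fall detection. The RectangleSet fields and function names are assumptions made for illustration.

        from dataclasses import dataclass
        from typing import Iterable

        @dataclass
        class RectangleSet:
            small_h: float   # lower limit (small rectangle) vertical length
            small_w: float   # lower limit (small rectangle) horizontal length
            large_h: float   # upper limit (large rectangle) vertical length
            large_w: float   # upper limit (large rectangle) horizontal length

            def matches(self, D: float, E: float) -> bool:
                return (self.small_h <= D <= self.large_h
                        and self.small_w <= E <= self.large_w)

        def person_has_fallen(D: float, E: float, fall_sets: Iterable[RectangleSet]) -> bool:
            """True when the subject's size matches ANY of the registered rectangle sets."""
            return any(rect_set.matches(D, E) for rect_set in fall_sets)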
  • In FIG. 14 and FIG. 15, symbol A indicates the large rectangle and symbol B indicates the small rectangle.
  • When the camera 6 is installed so as to photograph a person from above as shown in FIG. 16, a standing person appears in the camera image as shown in FIG. 17, whereas a person lying down, photographed from above as shown in FIG. 18, appears in the camera image as shown in FIG. 19. The image of FIG. 19, taken from above, differs from the images taken from the side (FIG. 11 and FIG. 13), so a rectangle set as shown in FIG. 20 is required to deal with a fallen-person image such as that of FIG. 19. In FIG. 20, too, symbol A indicates the large rectangle and symbol B indicates the small rectangle.
  • In the above description the object reference information is represented by rectangles, but it is not limited to rectangles. For example, as shown in the corresponding figure, a model of the desired shape (such as a human figure) may be selected and used, and the shape and size of the selected model may be adjusted (enlarged, reduced, or deformed). Nor is the model limited to the human form: by preparing various object models such as animals and cars and using them as object reference information, the system can also be used, for example, to count the number of cars or the number of people.
  • The video stored in the video storage means 1 of FIG. 1 is not limited to frames and may consist of fields. The video may also be converted into luminance and color-difference signals, with only the luminance or only the color-difference signal being used, and processing such as enlargement or reduction, partial extraction, frequency conversion, filtering such as differentiation in the time-axis or spatial direction, change of the number of colors, quantization, or tone change of each signal may be applied. (A small illustration of the luminance-only option is shown below.)
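  • The following is a small illustration of one of the preprocessing options mentioned above: converting each stored frame to luminance only before analysis. The ITU-R BT.601 weighting coefficients are standard; the function name and array layout are assumptions made for illustration.

        import numpy as np

        def to_luminance(rgb_frame: np.ndarray) -> np.ndarray:
            """Convert an (H, W, 3) RGB frame to a single-channel luminance image."""
            r, g, b = rgb_frame[..., 0], rgb_frame[..., 1], rgb_frame[..., 2]
            return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)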
  • In FIG. 1 the camera 6 is assumed to convert an analog video signal into a digital signal and input it to the computer 5, but a camera 6 that outputs an analog signal may also be used. Likewise, the video signal need not be input directly from the camera 6 to the computer 5; the video of the camera 6 may equally be transmitted to a remote computer 5 over an Internet line or the like.
  • The method of extracting the shape and size of the subject from the video of the camera 6 is not limited to the method shown in this embodiment.
  • In this embodiment the main part of the monitoring system is software operating on a computer: the monitoring system takes the form of a program running on the computer's processor, the various controls are performed by the CPU, and the various storage means consist of the computer's memory and hard disk. These functions may instead be configured as a system LSI or other hardware, as software operating on a computer, as hardware incorporated in a computer, or as a combination of software and hardware.
  • FIG. 26 shows the configuration of the monitoring system according to the second embodiment of the present invention.
  • In FIG. 26, reference numeral 1 denotes video storage means for storing a video signal; as an example, it is assumed that a plurality of frames captured at different times are stored. Reference numeral 2 denotes object reference information storage means for storing reference information of the object to be detected among the subjects in the video, and reference numeral 3 denotes video analysis means for analyzing the video signal stored in the video storage means 1 and determining whether a subject matches the object reference information stored in the object reference information storage means 2. Reference numeral 4 denotes analysis result information storage means for storing the analysis result information of the video analysis means 3, and reference numeral 5 denotes a computer; the video storage means 1, the object reference information storage means 2, the video analysis means 3, and the analysis result information storage means 4 are implemented as software programs on the computer 5.
  • Reference numeral 6 denotes a camera, and reference numeral 7 denotes a monitor (video display means) for displaying video. A mouse 8 is used as the object reference information input means, and the camera 6 can convert an analog video signal into a digital signal and input it to the computer 5. Up to this point the configuration is the same as that of FIG. 1.
  • Reference numeral 9 denotes effective area information storage means for storing, as effective area information, the coordinates of the effective area within the screen to which the object reference information is applied. The mouse 8 is also used as the effective area information input means. The difference from the first embodiment is that this effective area information storage means 9 is provided.
  • In operation, the video signal of the camera 6 is stored in the video storage means 1, and the stored video is analyzed by the video analysis means 3 to determine the shape and size of the subject. Information on the shape and size of the intruder (person) to be detected is stored in advance in the object reference information storage means 2 as object reference information; the object reference information is compared with the shape and size of the subject to determine whether they match, that is, whether the subject has the shape and size of an intruder (person), and the result is stored in the analysis result information storage means 4. Up to this point the operation is the same as in the first embodiment.
  • This embodiment differs from the first embodiment in that the coordinates of the effective area within the screen to which the object reference information is applied are stored in the effective area information storage means 9; by limiting detection to this effective area, the accuracy of distinguishing small animals such as birds from people is improved.
  • FIG. 22 shows a situation in which a person g and a bird h are both present in the distance, at the same distance from the camera. In this case their sizes on the screen are completely different, so by setting as object reference information a small rectangle sufficiently larger than the bird, the bird is not mistaken for a person.
  • FIG. 23, on the other hand, shows a situation in which the person g is in the distance while the bird h is near the camera. Here the bird may appear as large as the person on the screen, and because the difference in apparent size and shape is small, the bird could be mistakenly recognized as a person.
  • In such a case, by setting the coordinates of the area to which the object reference information is applied as an effective area rectangle X, as shown in FIG. 24, misrecognition of a nearby bird as a person can be avoided.
  • As in the first embodiment, information (object reference information) on the shape and size of the object to be detected (for example, a person) is first stored in the object reference information storage means 2. As one way of entering this object reference information, a small rectangle B is set as the lower limit and a large rectangle A as the upper limit, as shown in FIG. 4; the small rectangle is set slightly larger than the small animal (cat) b, and the large rectangle slightly larger than the person a. The vertical and horizontal lengths of the small rectangle and of the large rectangle are stored in the object reference information storage means 2 of FIG. 26 as reference information of the object to be detected.
  • FIG. 6 shows the vertical length D and the horizontal length E of an object (person) C. If the vertical length D is not less than the vertical length of the small rectangle in FIG. 5 and not more than the vertical length of the large rectangle, and the horizontal length E is not less than the horizontal length of the small rectangle and not more than the horizontal length of the large rectangle, such an object (person) is detected.
  • Next, the video signal from the camera 6 of FIG. 26 is stored in the video storage means 1, and the video analysis means 3 analyzes the shape and size of the subject. The analysis method is not limited; one example is to take the absolute value of the inter-frame difference between a past frame from several seconds earlier, in which no subject (moving object) appears, and the current frame, in which the subject is present.
  • In addition, an effective area rectangle is set as shown in FIG. 24, for example by using the mouse 8 while viewing the video on the monitor 7 of FIG. 26, and the coordinate information of the upper-left and lower-right corners of this effective area is stored as effective area information in the effective area information storage means 9 of FIG. 26.
  • The shape and size information (object reference information) of the "person" to be detected, stored in the object reference information storage means 2 of FIG. 26, is then compared with the shape and size of the subject, it is determined whether the subject lies within the effective area stored in the effective area information storage means 9, and the result is stored in the analysis result information storage means 4.
  • By setting an effective area to which the object reference information is applied in this way, only a distant person inside the effective area is detected, as shown in FIG. 24, and misrecognition of a nearby bird outside the effective area as a person is suppressed. (A minimal sketch of this combined check is shown below.)
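  • The following is a minimal sketch of the combined check described above, in which the size of the subject and its position relative to the effective area are both tested. The coordinate convention (upper-left and lower-right corners of the effective area) follows the description; the function names and argument layout are assumptions made for illustration.

        def inside_effective_area(cx: float, cy: float,
                                  area_x1: float, area_y1: float,
                                  area_x2: float, area_y2: float) -> bool:
            """True when the subject's position (cx, cy) lies inside the effective area."""
            return area_x1 <= cx <= area_x2 and area_y1 <= cy <= area_y2

        def person_detected(cx: float, cy: float, D: float, E: float,
                            area, small_rect, large_rect) -> bool:
            # area = (x1, y1, x2, y2); small_rect / large_rect = (height, width)
            in_area = inside_effective_area(cx, cy, *area)
            size_ok = (small_rect[0] <= D <= large_rect[0]
                       and small_rect[1] <= E <= large_rect[1])
            return in_area and size_ok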
  • As in the first embodiment, by linking the monitoring system to an external system according to the purpose (for example, by connecting it to an Internet line), various applications become possible: when the result stored in the analysis result information storage means 4 is "there is an intruder (person)", a remote person can be notified by e-mail, a warning sound can be emitted through an external speaker, or an external light can be turned on.
  • Furthermore, by setting a plurality of different rectangle sets, a monitoring system can be constructed that, for example, detects a person in one effective area of the screen and a car in another effective area.
  • This embodiment thus retains the advantages of the configuration of the first embodiment while, as shown in FIG. 24, detecting only a distant person inside the effective area.
  • FIG. 27 shows the configuration of the monitoring system according to the third embodiment of the present invention.
  • In the first and second embodiments, the case of a single camera was shown as an example; in this embodiment two cameras are provided, together with the various means corresponding to each of the two cameras.
  • In FIG. 27, reference numeral 1 denotes video storage means for storing a video signal (as an example, a plurality of frames captured at different times are stored), 2 denotes object reference information storage means for storing reference information of the object to be detected among the subjects in the video, 3 denotes video analysis means for analyzing the video signal stored in the video storage means 1 and determining whether a subject matches the object reference information stored in the object reference information storage means 2, and 4 denotes analysis result information storage means for storing the analysis result information of the video analysis means 3; effective area information storage means is also provided for storing, as effective area information, the coordinates of the effective area within the screen. Reference numeral 6 denotes a camera, and the above are the means for the camera 6.
  • Similarly, reference numeral 11 denotes video storage means for storing a video signal (again, a plurality of frames captured at different times are stored), 12 denotes object reference information storage means for storing reference information of the object to be detected among the subjects in the video, 13 denotes video analysis means for analyzing the video signal stored in the video storage means 11 and determining whether a subject matches the object reference information stored in the object reference information storage means 12, 14 denotes analysis result information storage means for storing the analysis result information of the video analysis means 13, and 15 denotes effective area information storage means for storing, as effective area information, the coordinates of the effective area within the screen. Reference numeral 10 denotes a camera, and the above are the means for the camera 10.
  • Reference numeral 16 denotes logic determination means for performing a logical determination, such as a NAND, on the analysis result corresponding to the camera 6 and the analysis result corresponding to the camera 10. Its purpose is to judge the importance of a detection by considering the analysis results of a plurality of cameras together, and AND and NAND can be used as appropriate for the purpose.
  • Reference numeral 5 denotes a computer; for example, the video storage means 1 and 11, the object reference information storage means 2 and 12, the video analysis means 3 and 13, the analysis result information storage means 4 and 14, and the logic determination means 16 are implemented as software programs on the computer 5. Reference numeral 7 denotes a monitor (video display means) for displaying video, and a mouse 8 is used as the object reference information input means. The cameras 6 and 10 can convert analog video signals into digital signals and input them to the computer 5.
  • FIG. 25 shows an installation example of the two cameras; the role of each camera is as follows. The camera 6 photographs from above and is used to detect a fallen person. The camera 10 photographs from the side and is used to detect a pedestrian other than the fallen person. Normally, when the camera 6 detects a fallen person, a notification calling for help is required; but when the camera 10 detects another pedestrian, a person who can assist is already present, so there is no need to call for further help. Therefore, when the camera 10 detects a pedestrian, the logic determination means 16 NANDs (negates) the fall detection of the camera 6 so that no call for help is issued. By not sending notifications in cases where help is not required, the effort the observer spends checking the site and dispatching people to the site can be reduced.
  • Information (object reference information) on the shape and size of the object to be detected by the camera 6 (here, a "fallen person") is stored in the object reference information storage means 2, and information (object reference information) on the shape and size of the object to be detected by the camera 10 (here, "a pedestrian other than the fallen person") is stored in the object reference information storage means 12. In each case a small rectangle is set as the lower limit and a large rectangle as the upper limit of the object reference information, using the mouse 8 while viewing the video on the monitor 7 of FIG. 27, and effective area information is stored in the effective area information storage means 15 as necessary. The flow up to the point where the analysis result is stored in the analysis result information storage means 14 is the same as in the second embodiment for the means having the same names.
  • Finally, when the camera 10 detects the pedestrian j, the logic determination means 16 NANDs (negates) the detection of the fallen person k by the camera 6. That is, when a pedestrian who can help the fallen person is present, it is determined that the situation is not an important one requiring notification.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Gerontology & Geriatric Medicine (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Alarm Systems (AREA)
  • Burglar Alarm Systems (AREA)
  • Emergency Alarm Devices (AREA)

Abstract

The present invention provides a system in which information on the upper limit and the lower limit of the shape of an object to be detected from images is stored in object reference information storage means (1) as reference information of the object, and an image analyzer (3) determines whether the shape of an object in the input images falls within the upper and lower limits of the reference information stored in the object reference information storage means (1). In this way only the desired shape becomes a detection target, which saves the labor cost and effort of an observer.
PCT/JP2009/064844 2008-08-28 2009-08-26 Système de contrôle WO2010024281A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2010526736A JP5047361B2 (ja) 2008-08-28 2009-08-26 Monitoring system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2008-219635 2008-08-28
JP2008219635 2008-08-28
JP2009-146442 2009-06-19
JP2009146442 2009-06-19

Publications (1)

Publication Number Publication Date
WO2010024281A1 true WO2010024281A1 (fr) 2010-03-04

Family

ID=41721448

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2009/064844 WO2010024281A1 (fr) 2008-08-28 2009-08-26 Système de contrôle

Country Status (2)

Country Link
JP (1) JP5047361B2 (fr)
WO (1) WO2010024281A1 (fr)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3956484B2 (ja) * 1998-05-26 2007-08-08 株式会社ノーリツ Bathing monitoring device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6286990A (ja) * 1985-10-11 1987-04-21 Matsushita Electric Works Ltd Abnormality monitoring device
JP2001160146A (ja) * 1999-12-01 2001-06-12 Matsushita Electric Ind Co Ltd Image recognition method and image recognition device
JP2007157005A (ja) * 2005-12-07 2007-06-21 Matsushita Electric Ind Co Ltd Object behavior detection and notification system, center device, controller device, object behavior detection and notification method, and object behavior detection and notification program
JP2008059487A (ja) * 2006-09-01 2008-03-13 Basic:Kk Watching device and watching method

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011227614A (ja) * 2010-04-16 2011-11-10 Secom Co Ltd Image monitoring device
JP2013134544A (ja) * 2011-12-26 2013-07-08 Asahi Kasei Corp Fall detection device, fall detection method, information processing device, and program
EP2953349A4 (fr) * 2013-01-29 2017-03-08 Ramrock Video Technology Laboratory Co., Ltd. Monitoring system
US9905009B2 (en) 2013-01-29 2018-02-27 Ramrock Video Technology Laboratory Co., Ltd. Monitor system
JP2014149584A (ja) * 2013-01-31 2014-08-21 Ramrock Co Ltd Notification system
JP2017036945A (ja) * 2015-08-07 2017-02-16 株式会社Ihiエアロスペース Moving body and obstacle detection method therefor
EP3163543B1 (fr) * 2015-10-28 2018-12-26 Xiaomi Inc. Alarm method and device
US10147288B2 (en) 2015-10-28 2018-12-04 Xiaomi Inc. Alarm method and device
JP2018093347A (ja) * 2016-12-01 2018-06-14 キヤノン株式会社 Information processing apparatus, information processing method, and program
JP2020024669A (ja) * 2018-08-07 2020-02-13 キヤノン株式会社 Detection apparatus and control method therefor
JP7378223B2 (ja) 2018-08-07 2023-11-13 キヤノン株式会社 Detection apparatus and control method therefor
JP2020052826A (ja) * 2018-09-27 2020-04-02 株式会社リコー Information providing device, information providing system, information providing method, and program
JP7172376B2 (ja) 2018-09-27 2022-11-16 株式会社リコー Information providing device, information providing system, information providing method, and program

Also Published As

Publication number Publication date
JPWO2010024281A1 (ja) 2012-01-26
JP5047361B2 (ja) 2012-10-10

Similar Documents

Publication Publication Date Title
JP5047361B2 (ja) Monitoring system
JP4617269B2 (ja) Monitoring system
US9311794B2 (en) System and method for infrared intruder detection
KR101544019B1 (ko) Fire detection system and method using composite images
KR20070029760A (ko) Monitoring device
US9053621B2 (en) Image surveillance system and image surveillance method
KR101467352B1 (ko) Location-based integrated control system
KR101381924B1 (ko) Security surveillance system and method using a camera monitoring device
KR102230552B1 (ko) Device for calculating the position of a detected object using motion detection and a radar sensor
KR20120140518A (ko) Smartphone-based remote monitoring system and control method
US20040216165A1 (en) Surveillance system and surveillance method with cooperative surveillance terminals
JP2009015536A (ja) Suspicious person reporting device, suspicious person monitoring device, and remote monitoring system using the same
KR101046819B1 (ko) Intrusion monitoring method and intrusion monitoring system using a software fence
JP2001069268A (ja) Communication device
JP4702184B2 (ja) Surveillance camera device
JP2005309965A (ja) Home security device
JP2008148138A (ja) Monitoring system
JP2006003941A (ja) Emergency notification system
JP6754451B2 (ja) Monitoring system, monitoring method, and program
JP4096953B2 (ja) Human body detector
JP2004110234A (ja) Emergency alarm device and emergency alarm system
JP2008186283A (ja) Human body detection device
KR100368448B1 (ko) Multi-purpose alarm system
JP4540456B2 (ja) Suspicious person detection device
JP4650346B2 (ja) Surveillance camera device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09809932

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2010526736

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09809932

Country of ref document: EP

Kind code of ref document: A1