WO2010024281A1 - Monitoring system - Google Patents

Monitoring system

Info

Publication number
WO2010024281A1
WO2010024281A1 (application PCT/JP2009/064844)
Authority
WO
WIPO (PCT)
Prior art keywords
video
storage means
reference information
monitoring system
information
Prior art date
Application number
PCT/JP2009/064844
Other languages
French (fr)
Japanese (ja)
Inventor
俊和 赤間
Original Assignee
有限会社ラムロック映像技術研究所
株式会社修成工業
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 有限会社ラムロック映像技術研究所 and 株式会社修成工業
Priority to JP2010526736A (granted as JP5047361B2)
Publication of WO2010024281A1

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19613 Recognition of a predetermined image pattern or behaviour pattern indicating theft or intrusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/04 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0438 Sensor means for detecting
    • G08B21/0476 Cameras to detect unsafe condition, e.g. video cameras

Definitions

  • The present invention relates to a monitoring system, and more specifically to a monitoring system that monitors the actions of suspicious persons, or of persons to be watched over, for the purposes of crime prevention or watching over.
  • Conventional monitoring systems generally connect the video signal from a camera to a monitor and a recording device such as a VTR and simply record the video. For this reason, preventing crime in advance requires that someone always be watching the persons shown in the video, which poses problems in terms of the labor cost and effort of the person doing the monitoring (the observer).
  • In recent years, cameras whose video can be viewed from a remote location over an Internet line or the like (network cameras) have become widespread, and using them reduces the time and effort needed to go to the site to check the video. However, even with such a network camera, the camera video must still be watched at all times, so a network camera alone cannot improve the observer's labor cost and effort.
  • Monitoring systems also face growing social demand not only for the crime-prevention purpose described above but also for "watching over" people. For example, elderly people living alone may die unattended; to grasp their daily safety and prevent accidents in advance, someone must either be constantly resident at the site, or a network camera must be installed there and its transmitted video watched continuously by someone at a remote location. In other words, "watching over" with a conventional monitoring system likewise poses problems of labor cost and effort.
  • As one way to mitigate these problems, a heat sensor that reacts to people can be installed in addition to the camera, together with communication means such as an Internet line; when the heat sensor detects a person, the remote observer is notified, for example by e-mail over the Internet line, that a person has been detected. However, this method has the problem that a suspicious person cannot be detected at a location where a heat sensor cannot be installed (for example, a public road in front of the house).
  • A moving-object detection sensor may be used instead of the heat sensor. It is a technique that analyzes the video signal from the camera and detects changes in the image, so no heat sensor is needed. A typical method stores a plurality of video frames from different times, for example the current frame and the immediately preceding frame; the absolute value of the difference between the two frames is obtained as the amount of change, and when the amount of change exceeds a certain threshold it is judged that there is motion.
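  • As an illustration only (not part of the patent text), the frame-difference judgment described above could be sketched in Python as follows. NumPy, the grayscale frame format, and the threshold value are assumptions of the sketch, and aggregating the change as the mean absolute difference is one of several plausible choices.

```python
import numpy as np

def motion_detected(prev_frame: np.ndarray, curr_frame: np.ndarray,
                    threshold: float = 10.0) -> bool:
    """Report motion when the mean absolute inter-frame difference
    exceeds a threshold (hypothetical value; tune per installation)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(diff.mean()) > threshold

# Usage sketch: two grayscale frames of the same size.
prev = np.zeros((240, 320), dtype=np.uint8)   # empty background
curr = prev.copy()
curr[100:180, 150:190] = 200                  # a bright "subject" appears
print(motion_detected(prev, curr))            # True
```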
  • The first problem with conventional monitoring systems is that a heat sensor or moving-object detection sensor reacts to anything that moves, including small animals such as dogs and cats, and judges that "there is an abnormality". For example, when used outdoors for crime prevention, it reacts not only to intruders but also to dogs and cats. If the sensor reacts every time a cat crosses the premises and notifies the observer over the Internet line that "there is an abnormality", the observer is forced to check the network-camera video or dispatch someone to the site to confirm safety even though there is no possibility of a crime, so this cannot be called a good monitoring system.
  • The second problem with conventional monitoring systems is that a moving-object detection sensor also reacts to moving objects other than people and judges that "there is an abnormality". For example, when used outdoors for crime prevention, it reacts not only to intruders but also to cars travelling on the road in front of the premises. If the sensor reacts every time a car passes and notifies the observer over the Internet, the observer is again burdened with checking the network-camera video or dispatching someone to the site despite there being no possibility of a crime, so this cannot be called a good monitoring system.
  • The third problem with conventional monitoring systems is that, even if a heat sensor or moving-object detection sensor is installed indoors to watch over the life of an elderly person living alone, it cannot distinguish the normal state from a state in which the person has become unwell and is crouching or has collapsed. Someone must therefore be resident at the site, visit it frequently, or keep watching the network-camera video, so problems of the observer's labor cost and effort remain.
  • the present invention has been made in view of the above points, and an object thereof is to provide a monitoring system capable of solving the first to third problems.
  • To achieve this object, the monitoring system of the present invention comprises object reference information storage means for storing, as reference information of an object to be detected from the video, information on the upper and lower limits of the object's shape, and video analysis means for determining whether the shape of an object in the input video is within the upper and lower limits of the reference information stored in the object reference information storage means.
  • By storing in the object reference information storage means information on the upper limit (for example, a large rectangle) and the lower limit (for example, a small rectangle) of the shape of the object to be detected, and by having the video analysis means determine whether the shape of an object in the input video falls within those upper and lower limits, the first and second problems described above can be solved. Specifically, when only people such as intruders are to be detected, the large rectangle is made slightly larger than a person and the small rectangle is made larger than a small animal; small animals smaller than the small rectangle and cars or the like larger than the large rectangle are then excluded from detection.
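  • A minimal sketch of this upper/lower-limit ("rectangle set") test follows; the class name and the pixel values are hypothetical and are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class RectangleSet:
    """A rectangle set: the small rectangle is the lower limit and the
    large rectangle is the upper limit of the detectable object's size."""
    small_w: int
    small_h: int
    large_w: int
    large_h: int

    def matches(self, width: int, height: int) -> bool:
        # The object is detected only if both dimensions fall between
        # the small and large rectangles (inclusive).
        return (self.small_w <= width <= self.large_w and
                self.small_h <= height <= self.large_h)

# Hypothetical sizes in pixels: larger than a cat, slightly larger than a person.
person_filter = RectangleSet(small_w=20, small_h=40, large_w=80, large_h=200)
print(person_filter.matches(50, 170))   # person-sized subject -> True
print(person_filter.matches(15, 20))    # cat-sized subject    -> False
print(person_filter.matches(300, 150))  # car-sized subject    -> False
```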
  • In another aspect, the monitoring system of the present invention comprises object reference information storage means for storing upper and lower limit information of the shape of an object as reference information of the object to be detected from the video; effective area information storage means for storing, as effective area information, information specifying a predetermined area within the imaged region; and video analysis means for determining whether an object in the input video is located within the effective area stored in the effective area information storage means and whether its shape is within the upper and lower limits of the reference information stored in the object reference information storage means.
  • By storing information specifying a predetermined area of the imaged region as effective area information and having the video analysis means check whether an object in the input video lies within that effective area, the first problem can be solved with higher accuracy. For example, a distant person and a nearby bird, although completely different in actual size, may appear to be about the same size on the screen, so the bird could become a detection target just like the person. By restricting the effective area to the distant part of the scene, the problem of detecting the nearby bird is avoided, and the first problem, that of distinguishing small animals from people, is solved more accurately.
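  • A sketch of the effective-area restriction follows; treating the test as containment of the subject's bounding box inside the effective area rectangle is one possible reading, and the coordinates are hypothetical.

```python
def in_effective_area(box, area):
    """box and area are (left, top, right, bottom) in screen coordinates.
    The subject counts only if its bounding box lies inside the effective
    area (containment is one rule; an overlap test is equally plausible)."""
    l, t, r, b = box
    al, at, ar, ab = area
    return al <= l and at <= t and r <= ar and b <= ab

effective_area = (0, 0, 320, 120)   # hypothetical: the distant part of the scene
print(in_effective_area((40, 30, 60, 90), effective_area))      # distant person -> True
print(in_effective_area((100, 150, 180, 230), effective_area))  # nearby bird    -> False
```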
  • In addition, by providing video display means for displaying the video input to the video analysis means and object reference information input means for inputting the reference information into the object reference information storage means, the reference information can be set on site while viewing the actual scene, so the first, second, and third problems can be solved regardless of the conditions of the installation site of the imaging device such as a camera. For example, a television monitor can be used as the video display means and a computer mouse as the object reference information input means. Because the reference information can be adjusted in this way, the object (for example, a person) can still be detected even if its apparent shape or size changes, and the first, second, and third problems can be solved.
  • Similarly, by providing video display means for displaying the video input to the video analysis means and effective area information input means for inputting the effective area information into the effective area information storage means, the effective area can be set on site regardless of the installation conditions of the imaging device such as a camera. For example, a television monitor can serve as the video display means and a computer mouse as the effective area information input means.
  • By further providing analysis result notification means for notifying the judgment result of the video analysis means, the situation can be communicated to the observer. For example, using communication means over the Internet as the analysis result notification means makes it possible to notify a remote observer; using acoustic means such as a speaker, the situation can be announced by sound; and using light-emitting means such as a lamp, it can be signalled by lighting the lamp.
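  • As a purely illustrative sketch, the notification means could be modelled as interchangeable objects; the classes below are hypothetical stand-ins, and a real system would wire them to an e-mail gateway, a speaker, or a lamp rather than to console output.

```python
class NotificationMeans:
    """Abstract analysis-result notification means."""
    def notify(self, message: str) -> None:
        raise NotImplementedError

class LogNotifier(NotificationMeans):
    # Stand-in for e-mail, speaker, or lamp output in this sketch.
    def notify(self, message: str) -> None:
        print(f"[ALERT] {message}")

def report_analysis_result(result: bool, notifiers) -> None:
    if result:  # the video analysis judged "intruder (person) present"
        for n in notifiers:
            n.notify("Intruder (person) detected")

report_analysis_result(True, [LogNotifier()])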
  • The system may further comprise video transmission means configured to transmit the video input to the video analysis means to a terminal connected via a communication line, and privacy can be protected by controlling when this video is transmitted.
  • the “communication line” here means a path for signal transmission / reception, and includes both a wired case and a wireless case.
  • "Terminals connected via a communication line" include terminals connected by a wired cable such as a LAN cable, terminals connected by wireless LAN, mobile phone terminals, and the like.
  • Specifically, when the video of the monitoring system is transmitted to a remote terminal, the privacy of the person being monitored can be protected by not transmitting the video at all times but transmitting it only when the monitoring system judges that an emergency has occurred. Alternatively, even if the video is transmitted constantly, privacy can be protected by transmitting along with it a signal indicating whether an emergency state exists and configuring the terminal to display the received video only when that signal indicates an emergency.
  • In another aspect, the monitoring system of the present invention comprises object reference information storage means for storing, as reference information, upper and lower limit information of the shape of a first object to be detected from the video and upper and lower limit information of the shape of a second object to be detected from the video; first video analysis means for determining whether the shape of an object in the input video is within the upper and lower limits of the reference information of the first object; second video analysis means for determining whether the shape of an object in the input video is within the upper and lower limits of the reference information of the second object; and logic determination means for making a logical determination (for example, AND or NAND) based on at least the determination result of the first video analysis means and the determination result of the second video analysis means. This logic determination means makes it possible to solve the third problem described above.
  • For example, suppose the first camera detects a person who has fallen and the second camera detects a pedestrian other than the fallen person. Normally, when a fallen person is detected, a notification is required in order to call for help. However, when the second camera detects another pedestrian, a person who can assist is already present, so no new call for help is needed. By NANDing (negating) the fallen-person detection of the first camera with the pedestrian detection of the second camera, unnecessary notifications are suppressed, the observer's labor is reduced, and the third problem can be solved.
  • "Based on at least the determination result of the first video analysis means and the determination result of the second video analysis means" covers not only the case where the logical determination is made from those two results alone, but also the case where it is made from those two results together with the determination result of a further, third video analysis means.
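  • A sketch of the logic determination follows. Note that the patent's "NAND" is used in the sense of one detection negating the other, so the sketch implements it as "first AND NOT second", which reproduces the described behaviour of suppressing a fallen-person notification when a helper is seen; the function name and mode strings are assumptions.

```python
def logic_determination(first_result: bool, second_result: bool,
                        mode: str = "NAND") -> bool:
    """Combine the judgments of the first and second video analysis means.
    Returns True when a notification should be issued."""
    if mode == "AND":
        return first_result and second_result
    if mode == "NAND":
        # The second detection negates (suppresses) the first one,
        # e.g. a fallen person is detected but a helper is already present.
        return first_result and not second_result
    raise ValueError(f"unknown mode: {mode}")

# Fallen person detected, no other pedestrian in view -> notify.
print(logic_determination(True, False))   # True
# Fallen person detected, but another pedestrian (a helper) is present -> no notification.
print(logic_determination(True, True))    # False
```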
  • In yet another aspect, the present invention provides a first monitoring system and a second monitoring system, each comprising object reference information storage means for storing upper and lower limit information of the shape of an object as reference information of the object to be detected from the video, video analysis means for determining whether the shape of an object in the input video is within the upper and lower limits of that reference information, and video transmission means configured to transmit the video input to the video analysis means to a terminal connected via a communication line; and a terminal comprising storage means for readably storing the video transmitted from at least the video transmission means of the first monitoring system and the video transmission means of the second monitoring system together with reception history information for that video, and display means for displaying the reception history information stored in the storage means, the terminal being configured such that, when an item of reception history information displayed on the display means is selected, the video corresponding to that item is displayed on the display means.
  • Similarly, the invention provides first and second monitoring systems that each further comprise effective area information storage means for storing, as effective area information, information specifying a predetermined area within the imaged region, with video analysis means that determines both whether an object in the input video lies within that effective area and whether its shape is within the upper and lower limits of the reference information, together with a terminal that readably stores the transmitted video with its reception history information and displays the video corresponding to a selected item of reception history information.
  • Because the video corresponding to the reception history information is displayed on the display means, even when videos are received from a plurality of surveillance cameras at the same time or in quick succession, the desired video can be confirmed afterwards on the basis of the reception history information.
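  • The terminal-side behaviour could be sketched as below; the class, its method names, and the string video payloads are illustrative assumptions, not the patent's implementation.

```python
from datetime import datetime

class MonitoringTerminal:
    """Sketch of the terminal: received videos are stored together with
    reception history entries, and a selected entry recalls its video."""
    def __init__(self):
        self._history = []   # list of (entry_id, system_name, received_at)
        self._videos = {}    # entry_id -> video payload (bytes, file path, ...)

    def receive(self, system_name: str, video) -> None:
        entry_id = len(self._history)
        self._history.append((entry_id, system_name, datetime.now()))
        self._videos[entry_id] = video

    def list_history(self):
        return list(self._history)

    def display(self, entry_id: int):
        # In a real terminal this would render the video on the display means.
        return self._videos[entry_id]

terminal = MonitoringTerminal()
terminal.receive("monitoring system 1", "video-A")
terminal.receive("monitoring system 2", "video-B")
print(terminal.list_history())
print(terminal.display(0))   # the first received video can still be confirmed later
```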
  • the “communication line” here means a path for signal transmission / reception, and includes both a wired case and a wireless case.
  • "Terminals connected via a communication line" include terminals connected by a wired cable such as a LAN cable, terminals connected by wireless LAN, mobile phone terminals, and the like.
  • "The video transmitted from at least the video transmission means of the first monitoring system and the video transmission means of the second monitoring system" covers not only the case where video is received from the first and second monitoring systems alone, but also the case where video is additionally received from a further, third monitoring system.
  • the monitoring system of the present invention can solve the first to third problems described above.
  • Brief description of the drawings: figures showing the configuration of Embodiment 1 of the monitoring system of the present invention, and figures schematically showing the camera images obtained under the conditions described in the embodiments.
  • a first embodiment of the present invention for solving the problems of the conventional monitoring system will be described below.
  • FIG. 1 shows the configuration of a monitoring system according to the first embodiment of the present invention.
  • reference numeral 1 denotes a video storage means for storing a video signal, and as an example, it is assumed that videos of a plurality of frames at different times are stored.
  • Reference numeral 2 denotes object reference information storage means for storing reference information of the object to be detected among the subjects in the video; 3 denotes video analysis means that analyzes the video signal stored in the video storage means 1 and determines whether the subject matches the object reference information stored in the object reference information storage means 2; 4 denotes analysis result information storage means for storing the analysis results of the video analysis means 3; and 5 denotes a computer on which the video storage means 1, the object reference information storage means 2, the video analysis means 3, and the analysis result information storage means 4 are implemented as software programs.
  • Reference numeral 6 denotes a camera, and reference numeral 7 denotes a monitor (video display means) for displaying video.
  • a mouse 8 is used as an object reference information input unit. Assume that the camera 6 can convert an analog video signal into a digital signal and input it to the computer 5.
  • the video signal of the camera 6 is stored in the video storage means 1 and the stored video is analyzed by the video analysis means 3 to determine the shape and size of the subject.
  • Information on the shape and size of the intruder (person) to be detected is stored in advance in the object reference information storage means 2 as object reference information; the object reference information is compared with the shape and size of the subject to determine whether they match, that is, whether the subject has the shape and size of an intruder (person), and the result is stored in the analysis result information storage means 4.
  • The camera is not always installed so as to photograph the subject horizontally from the side as in FIG. 3(a); depending on the situation there are countless combinations of distance and angle between the camera and the subject. FIG. 2(b) schematically shows the camera image when the camera photographs a person from above, as in FIG. 2(a), while FIG. 3(b) schematically shows the camera image when the camera photographs a person horizontally from the side, as in FIG. 3(a).
  • First, information on the shape and size of the object to be detected (for example, a person) is stored in the object reference information storage means 2. As one example of the input method, a small rectangle (hereinafter "small rectangle") B is set as the lower limit of the object reference information and a large rectangle (hereinafter "large rectangle") A is set as the upper limit (see FIGS. 4 and 5).
  • the small rectangle in FIG. 4 is set to be slightly larger than the small animal (cat) b.
  • the large rectangle shown in FIG. 4 is set to be slightly larger than the size of the person a.
  • the vertical length and the horizontal length of each of the small rectangle and the large rectangle are stored in the object reference information storage unit 2 in FIG. 1 as reference information of the object to be detected.
  • FIG. 6 is a diagram showing the vertical length D and the horizontal length E of the object (person) C.
  • If the vertical length D is not less than the vertical length of the small rectangle and not more than that of the large rectangle, and the horizontal length E is likewise not less than the horizontal length of the small rectangle and not more than that of the large rectangle, such an object (person) is detected.
  • the video signal from the camera 6 in FIG. 1 is stored in the video storage means 1 and the video analysis means 3 analyzes the shape and size of the subject.
  • The analysis method is not limited; one example is to obtain the absolute value of the inter-frame difference between a past frame from several seconds earlier, in which no subject (moving object) appears, and the current frame in which the subject is present.
  • FIG. 8 shows the past frame
  • FIG. 8B shows the current frame
  • FIG. 8C shows the absolute value of the difference.
  • FIG. 9 shows the absolute value of the difference.
  • In the past frame there is only the background, but when the person f appears in the current frame, the absolute value of the difference between the past frame and the current frame becomes large in the portion where the person appears. Therefore, the shape and size of the subject can be obtained by extracting the portion where the absolute value of the difference is large.
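  • The extraction step could be sketched as follows; NumPy, the threshold, and the use of a single bounding box over all changed pixels are assumptions of the sketch (a real implementation might use connected-component labelling instead).

```python
import numpy as np

def subject_bounding_box(past: np.ndarray, current: np.ndarray,
                         threshold: int = 30):
    """Return (left, top, right, bottom) of the region where the absolute
    inter-frame difference is large, i.e. the subject's apparent extent.
    Returns None when nothing exceeds the threshold."""
    diff = np.abs(current.astype(np.int16) - past.astype(np.int16))
    ys, xs = np.nonzero(diff > threshold)
    if len(xs) == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

past = np.zeros((240, 320), dtype=np.uint8)
current = past.copy()
current[60:200, 140:190] = 220          # person f appears in the current frame
box = subject_bounding_box(past, current)
print(box)                              # (140, 60, 189, 199)
width, height = box[2] - box[0] + 1, box[3] - box[1] + 1
print(width, height)                    # dimensions to compare against the rectangle set
```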
  • The shape and size information (object reference information) of the "person" to be detected, stored in the object reference information storage means 2 of FIG. 1, is then compared with the shape and size of the subject, and the result is stored in the analysis result information storage means 4. By linking the monitoring system of this embodiment with an external system according to the purpose, for example by connecting it to an Internet line, various applications become possible when the stored result is "there is an intruder (person)": notifying a remote person by e-mail, emitting a warning sound through an external speaker, lighting an external lamp, and so on.
  • It is also possible to transmit the video of the monitoring system to a remote terminal (for example, a stationary personal computer or a portable small personal computer) and check the video on the terminal. In consideration of privacy, however, it is preferable that the video be hidden in the normal state (when no abnormality has occurred) and that video (moving images or still images) be displayed only when an emergency such as a person falling has occurred.
  • When a plurality of (for example, two) monitoring systems of this embodiment are used to watch over a plurality of places, several cameras may transmit abnormality videos simultaneously or one after another, and the observer may overlook an earlier video when the display switches to a later one. It is therefore preferable to keep a history of received videos, display the history on the observer's terminal, and allow a video selected from the history to be viewed.
  • In the above description, the combination of a small rectangle and a large rectangle stored in the object reference information storage means 2 of FIG. 1 (hereinafter a "rectangle set") was chosen so as to exclude small animals such as dogs and cats, but the monitoring system of this embodiment can serve various uses by changing the rectangle set. For example, it can be used to distinguish children from adults: a small rectangle slightly larger than an infant or child and a large rectangle slightly larger than an adult are set. The system can then be applied, for instance, in a kindergarten or elementary school as a system that does not react to infants or children but automatically notifies the staff room when an adult (a possible suspicious person) enters.
  • The monitoring system of this embodiment can also cope flexibly with the fact that the subject looks different depending on the camera's installation position. For example, as shown in FIG. 3(a), when the camera photographs a person horizontally from the side, the person (subject) in the video appears as a tall rectangle, whereas, as shown in FIG. 2(a), when the camera photographs a person from above, the person appears circular or nearly square. The same person thus looks different depending on the relative position of the camera. Because the shape of the object to be detected can be specified on site, while viewing the screen, after the camera has been installed, the monitoring system of this embodiment is far less dependent on the installation location.
  • By setting a plurality of rectangle sets, the monitoring system of this embodiment can also detect "a state in which a person has fallen". For example, an elderly person living alone may suddenly become ill and collapse, and being left unattended for several days could lead to a fatal accident. Detecting "a state in which a person has fallen" makes it possible to prevent such serious accidents.
  • The "state in which a person has fallen" appears with different shapes and sizes in the camera image, depending on the relative positions of the camera and the fallen person. For example, when the camera photographs a person lying down as in FIG. 10 from the side, the camera image looks like FIG. 11; when it photographs a person lying down as in FIG. 12 from the side, the image looks like FIG. 13. As a comparison of FIG. 11 and FIG. 13 shows, even when the camera photographs a fallen person from the same position, the image differs greatly depending on the direction in which the person has fallen.
  • Even in such cases the monitoring system of this embodiment can cope. A plurality of rectangle sets are registered in the object reference information storage means 2 of FIG. 1 as reference information so that both FIG. 11 and FIG. 13 can be detected as a fallen state, and when the video analysis means 3 detects a subject matching any of the rectangle sets it judges that "a person has fallen". In this way the fallen state can be detected reliably.
  • Specifically, a rectangle set as shown in FIG. 14 is designated to detect the fallen state of FIG. 11, and a rectangle set as shown in FIG. 15 is designated to detect the fallen state of FIG. 13.
  • In FIGS. 14 and 15, symbol A indicates the large rectangle and symbol B indicates the small rectangle.
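  • The "match any registered rectangle set" rule could be sketched as follows; the pixel values are hypothetical, and the comments only loosely associate them with the fall directions of FIGS. 11 and 13.

```python
def matches_rectangle_set(width, height, rect_set):
    small_w, small_h, large_w, large_h = rect_set
    return small_w <= width <= large_w and small_h <= height <= large_h

def person_has_fallen(width, height, fallen_rect_sets) -> bool:
    """A subject matching ANY registered rectangle set is judged as
    'a person has fallen' (each set covers a different fall direction)."""
    return any(matches_rectangle_set(width, height, rs) for rs in fallen_rect_sets)

# Hypothetical rectangle sets (small_w, small_h, large_w, large_h) in pixels:
fallen_sets = [
    (80, 20, 220, 70),   # lying sideways, seen from the side (cf. FIGS. 11 and 14)
    (20, 20, 90, 90),    # lying along the viewing axis, seen from the side (cf. FIGS. 13 and 15)
]
print(person_has_fallen(150, 40, fallen_sets))  # True: wide, low silhouette
print(person_has_fallen(45, 170, fallen_sets))  # False: a standing person
```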
  • On the other hand, when the camera 6 is installed so as to photograph a person from above as in FIG. 16, a standing person appears in the camera image as in FIG. 17, whereas a person lying down, photographed from above as in FIG. 18, appears as in FIG. 19. The image of a fallen person taken from above (FIG. 19) differs from the images taken from the side (FIGS. 11 and 13), so a rectangle set such as that shown in FIG. 20 is required to handle a fallen image such as FIG. 19.
  • In FIG. 20, symbol A indicates the large rectangle and symbol B indicates the small rectangle.
  • In the above description the object reference information is represented by rectangles, but it is not limited to rectangles. For example, as shown in the drawings, models of the objects to be detected may be prepared in advance; the desired model is then selected and used, and its shape and size may be adjusted (enlarged, reduced, or deformed). The models are not limited to human figures: by preparing models of various objects such as animals and cars and using them as object reference information, the system can also be used, for example, to count the number of cars or the number of people.
  • the video stored in the video storage unit 1 of FIG. 1 is not limited to a frame, and may be a field.
  • The video may also be converted into luminance and color-difference signals and only the luminance or the color difference used; it may further be enlarged or reduced, partially extracted, frequency-converted, filtered (for example by differentiation in the time-axis or spatial direction), subjected to a change in the number of colors, quantized, or have the tone of each signal changed.
  • In FIG. 1 it is assumed that the camera 6 converts its analog video signal to digital and inputs it to the computer 5, but a camera 6 that outputs an analog signal may also be used. Likewise, the video signal need not be input directly from the camera 6 to the computer 5; transmitting the video of the camera 6 to a remote computer 5 over an Internet line or the like has the same meaning.
  • the method of extracting the shape and size of the subject of the camera 6 is not limited to the method shown in this embodiment.
  • In this embodiment the main configuration of the monitoring system is software running on a computer: the system takes the form of a program executed by the computer's processor, with the various controls performed by the CPU and the various storage means implemented by the computer's memory and hard disk. These functions may instead be configured as a system LSI or other hardware, as software running on a computer, as hardware incorporated in a computer, or as a combination of software and hardware.
  • FIG. 26 shows the configuration of the monitoring system according to the second embodiment of the present invention.
  • reference numeral 1 denotes video storage means for storing a video signal, and it is assumed that videos of a plurality of frames having different times are stored as an example.
  • Reference numeral 2 denotes object reference information storage means for storing reference information of the object to be detected among the subjects in the video; 3 denotes video analysis means that analyzes the video signal stored in the video storage means 1 and determines whether the subject matches the object reference information stored in the object reference information storage means 2; 4 denotes analysis result information storage means for storing the analysis results of the video analysis means 3; and 5 denotes a computer on which the video storage means 1, the object reference information storage means 2, the video analysis means 3, and the analysis result information storage means 4 are implemented as software programs.
  • Reference numeral 6 denotes a camera, and reference numeral 7 denotes a monitor (video display means) for displaying video.
  • A mouse 8 is used as the object reference information input means. The camera 6 can convert its analog video signal to digital and input it to the computer 5; up to this point the configuration is the same as that of FIG. 1.
  • Reference numeral 9 denotes effective area information storage means that stores, as effective area information, the coordinates of the area within the screen to which the object reference information is applied. The mouse 8 is also used as the effective area information input means. The difference from Embodiment 1 is that this effective area information storage means 9 is provided.
  • the video signal of the camera 6 is stored in the video storage means 1 and the stored video is analyzed by the video analysis means 3 to determine the shape and size of the subject.
  • Information on the shape and size of the intruder (person) to be detected is stored in advance in the object reference information storage means 2 as object reference information; the object reference information is compared with the shape and size of the subject to determine whether they match, that is, whether the subject has the shape and size of an intruder (person), and the result is stored in the analysis result information storage means 4. Up to this point the operation is the same as in Embodiment 1.
  • This embodiment differs from Embodiment 1 in that the coordinates of the screen area to which the object reference information is applied are stored in the effective area information storage means 9; by limiting the effective area, the accuracy of discriminating between small animals such as birds and humans is improved.
  • FIG. 22 shows a situation in which a person g and a bird h both exist in the distance, at roughly the same distance from the camera. In this case their sizes are completely different, so by setting the small rectangle sufficiently larger than the bird as object reference information, the bird is not mistaken for a person.
  • FIG. 23 shows a situation in which a person g exists in the distance while a bird h exists close to the camera. The problem arises when the bird appears as large as the person on the screen: because the difference in apparent size and shape is small, the bird may be mistakenly recognized as a person. In such a case, by setting the coordinates of the area to which the object reference information is applied as an effective area rectangle X, as shown in FIG. 24, the nearby bird can be prevented from being misrecognized as a person.
  • First, information (object reference information) on the shape and size of the object to be detected (for example, a person) is stored in the object reference information storage means 2. As one example of the input method, a small rectangle B is set as the lower limit and a large rectangle A as the upper limit of the object reference information, as in FIG. 4.
  • the small rectangle in FIG. 4 is set to be slightly larger than the small animal (cat) b.
  • the large rectangle shown in FIG. 4 is set to be slightly larger than the size of the person a.
  • the vertical and horizontal lengths of the small rectangle and the large rectangle are stored in the object reference information storage unit 2 in FIG. 26 as reference information of the object to be detected.
  • FIG. 6 is a diagram showing the vertical length D and the horizontal length E of the object (person) C.
  • If the vertical length D is not less than the vertical length of the small rectangle and not more than that of the large rectangle, and the horizontal length E is likewise not less than the horizontal length of the small rectangle and not more than that of the large rectangle, such an object (person) is detected.
  • the video signal from the camera 6 of FIG. 26 is stored in the video storage means 1 and the video analysis means 3 analyzes the shape and size of the subject.
  • The analysis method is not limited; one example is to obtain the absolute value of the inter-frame difference between a past frame from several seconds earlier, in which no subject (moving object) appears, and the current frame in which the subject is present.
  • Next, an effective area rectangle is set as shown in FIG. 24, for example using the mouse 8 while watching the image on the monitor 7 of FIG. 26.
  • The coordinate information of the upper-left and lower-right corners of this effective area is stored as effective area information in the effective area information storage means 9 of FIG. 26.
  • The shape and size information (object reference information) of the "person" to be detected, stored in the object reference information storage means 2 of FIG. 26, is compared with the shape and size of the subject; it is also determined whether the subject exists within the effective area stored in the effective area information storage means 9, and the result is stored in the analysis result information storage means 4. By setting an effective area to which the object reference information is applied in this way, only the distant person within the effective area is detected, as shown in FIG. 24, and erroneous recognition of the nearby bird as a person is suppressed.
  • By linking the monitoring system of the present invention with an external system according to the purpose, for example by connecting it to an Internet line, various applications become possible when the result stored in the analysis result information storage means 4 is "there is an intruder (person)": notifying a remote person by e-mail, emitting a warning sound through an external speaker, lighting an external lamp, and so on.
  • Furthermore, by setting a plurality of different rectangle sets, a monitoring system can be constructed in which, for example, a person is detected in one effective area of the screen and a car is detected in another effective area, as sketched below.
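  • A sketch of pairing each effective area with its own rectangle set follows; the area and size values are hypothetical, and containment of the bounding box in the area is again only one possible rule.

```python
# Sketch: associate each effective area with its own rectangle set so that,
# e.g., people are detected in one part of the screen and cars in another.
detection_rules = [
    {"label": "person", "area": (0, 0, 160, 240),   "rect_set": (20, 40, 80, 200)},
    {"label": "car",    "area": (160, 0, 320, 240), "rect_set": (100, 60, 300, 180)},
]

def classify(box):
    l, t, r, b = box
    w, h = r - l + 1, b - t + 1
    for rule in detection_rules:
        al, at, ar, ab = rule["area"]
        sw, sh, lw, lh = rule["rect_set"]
        inside = al <= l and at <= t and r <= ar and b <= ab
        sized = sw <= w <= lw and sh <= h <= lh
        if inside and sized:
            return rule["label"]
    return None

print(classify((30, 100, 70, 220)))    # person-sized box in the left area  -> "person"
print(classify((180, 120, 300, 200)))  # car-sized box in the right area    -> "car"
```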
  • This embodiment thus retains the advantages of the configuration of Embodiment 1 while, as shown in FIG. 24, detecting only the distant person within the effective area.
  • FIG. 27 shows the configuration of the monitoring system according to the third embodiment of the present invention.
  • In the preceding embodiments the case of a single camera was shown as an example; in this embodiment two cameras are provided, together with the corresponding means for each camera.
  • reference numeral 1 denotes a video storage means for storing a video signal, and as an example, it is assumed that videos of a plurality of frames having different times are stored.
  • Reference numeral 2 denotes object reference information storage means for storing reference information of the object to be detected among the subjects in the video; 3 denotes video analysis means that analyzes the video signal stored in the video storage means 1 and determines whether the subject matches the object reference information stored in the object reference information storage means 2; 4 denotes analysis result information storage means for storing the analysis results of the video analysis means 3; 9 denotes effective area information storage means that stores, as effective area information, the coordinates of the effective area within the screen; and 6 is the camera. The above are the means associated with the camera 6.
  • reference numeral 11 denotes video storage means for storing a video signal, and it is assumed that, for example, videos of a plurality of frames having different times are stored.
  • Reference numeral 12 denotes an object reference information storage means for storing reference information of an object to be detected from the subjects in the video.
  • Reference numeral 13 denotes video analysis means that analyzes the video signal stored in the video storage means 11 and determines whether the subject matches the object reference information stored in the object reference information storage means 12; 14 denotes analysis result information storage means for storing the analysis results of the video analysis means 13; 15 denotes effective area information storage means that stores, as effective area information, the coordinates of the effective area within the screen; and 10 is the camera. The above are the means associated with the camera 10.
  • Reference numeral 16 denotes logical determination means for performing logical determination such as NAND of the analysis result corresponding to the camera 6 and the analysis result corresponding to the camera 10. This is to determine the importance of the detection state by comprehensively considering the analysis results of a plurality of cameras. It is possible to use AND and NAND properly according to the purpose.
  • Reference numeral 5 denotes a computer; for example, the video storage means 1 and 11, the object reference information storage means 2 and 12, the video analysis means 3 and 13, the analysis result information storage means 4 and 14, and the logic determination means 16 are implemented as software programs on the computer 5.
  • Reference numeral 7 denotes a monitor (video display means) for displaying video.
  • a mouse 8 is used as an object reference information input unit.
  • the camera 6 and the camera 10 can convert analog video signals into digital signals and input them to the computer 5.
  • FIG. 25 shows an installation example of each camera, and the role of each camera will be described.
  • the camera 6 photographs from above and is used for detecting a fallen person.
  • The camera 10 photographs from the side and is used to detect a pedestrian other than the fallen person. Normally, when the camera 6 detects a fallen person, a notification is required in order to call for help; however, when the camera 10 detects another pedestrian, a person who can assist is already present and no new call for help is needed. Therefore, in the logic determination means 16, when the camera 10 detects a pedestrian, the fallen-person detection of the camera 6 is NANDed (negated). By not issuing notifications in cases where help is not required, the observer's effort in checking the site and dispatching people can be reduced.
  • information (object reference information) of the shape and size of an object (in this case, “falling person”) to be detected by the camera 6 is stored in the object reference information storage unit 2.
  • information (object reference information) of the shape and size of the object (in this case, “a pedestrian different from the fallen person”) to be detected by the camera 10 is stored in the object reference information storage unit 12.
  • A small rectangle is set as the lower limit and a large rectangle as the upper limit of the object reference information, using the mouse 8 while viewing the video on the monitor 7 of FIG. 27.
  • effective area information is stored in the effective area information storage unit 15 as necessary.
  • the flow until the analysis result is stored in the analysis result information storage unit 14 is the same as that of the second embodiment with respect to the unit having the same name.
  • When the logic determination means 16 finds that the camera 10 has detected the pedestrian j, the detection of the fallen person k by the camera 6 is NANDed (negated); that is, when a pedestrian who can help the fallen person is present, the situation is judged not to be an important situation requiring notification.
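  • As a short usage sketch of this two-camera decision (function name and values are illustrative, and the patent's "NAND" is again implemented as the second detection suppressing the first):

```python
def should_notify(camera6_detects_fallen: bool,
                  camera10_detects_pedestrian: bool) -> bool:
    """Logic determination of Embodiment 3: notify about a fallen person
    only when no other pedestrian (potential helper) is in view."""
    return camera6_detects_fallen and not camera10_detects_pedestrian

print(should_notify(True, False))  # fallen person k alone          -> notify
print(should_notify(True, True))   # pedestrian j can already help  -> no notification
print(should_notify(False, True))  # nobody has fallen              -> no notification
```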

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Gerontology & Geriatric Medicine (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Alarm Systems (AREA)
  • Burglar Alarm Systems (AREA)
  • Emergency Alarm Devices (AREA)

Abstract

Information on the upper and lower limits of the shape of an object to be detected from the images is stored in object reference information storage means (1) as reference information of the object, and image analyzing means (3) determines whether the shape of an object in the input images is within the upper and lower limits of the reference information stored in the object reference information storage means (1). Thus, only objects of the desired shape are targeted, and the labor cost and effort of the observer can be saved.

Description

Monitoring system
The present invention relates to a monitoring system, and more specifically to a monitoring system that monitors the actions of suspicious persons, or of persons to be watched over, for the purposes of crime prevention or watching over.
Conventional monitoring systems generally connect the video signal from a camera to a monitor and a recording device such as a VTR and simply record the video.
For this reason, preventing crime in advance requires that someone always be watching the persons shown in the video, which poses problems in terms of the labor cost and effort of the person doing the monitoring (the observer).
In recent years, cameras whose video can be viewed from a remote location over an Internet line or the like (hereinafter referred to as "network cameras") have become widespread, and using them reduces the time and effort needed to go to the site to check the video.
However, even with such a network camera, the camera video must still be watched at all times, so using a network camera alone cannot improve the observer's labor cost and effort.
In addition, monitoring systems face increasing social demand not only for the crime-prevention purpose described above but also for "watching over" people.
For example, elderly people living alone (hereinafter "elderly living alone") may die unattended. To grasp their daily safety and prevent accidents in advance, someone must either be constantly resident at the site, or a network camera must be installed there and its transmitted video watched continuously by someone at a remote location.
In other words, "watching over" with a conventional monitoring system likewise poses problems of labor cost and effort.
As one technique for reducing the above problems, in order to detect the presence of a suspicious person or criminal, a heat sensor that reacts to people can be installed in addition to the camera, together with communication means such as an Internet line; when the heat sensor detects a person, the remote observer is notified, for example by e-mail over the Internet line, that a person has been detected.
However, this method has the problem that a suspicious person cannot be detected at a location where a heat sensor cannot be installed (for example, a public road in front of the house).
It is also conceivable, for example, to watch over the life of an elderly person living alone by installing a network camera and a heat sensor inside the person's home and providing an Internet line.
However, the heat sensor reacts not only to an intruding suspicious person but also to the elderly resident, so the remote observer is notified every time and must frequently check the site video visually, and the observer's labor cannot be reduced.
A moving-object detection sensor may be used instead of the heat sensor. A moving-object detection sensor is a technique that analyzes the video signal from the camera and detects changes in the image; in this case no heat sensor is needed.
As a concrete method, the moving-object detection sensor stores a plurality of video frames from different times, for example the current frame and the immediately preceding frame. The absolute value of the difference between the current frame and the immediately preceding frame is obtained as the amount of change, and when the amount of change exceeds a certain threshold it is judged that there is motion.
However, these conventional monitoring systems have the various problems described below.
First, the first problem with conventional monitoring systems is that a heat sensor or moving-object detection sensor reacts to anything that moves, including small animals such as dogs and cats, and judges that "there is an abnormality".
For example, when used outdoors for crime prevention, it reacts not only to intruders but also to small animals such as dogs and cats. If the heat sensor or moving-object detection sensor reacts every time a cat crosses the premises and notifies the observer over the Internet line that "there is an abnormality", the observer must check the network-camera video or dispatch someone to the site to confirm safety even though there is no possibility of a crime, so this cannot be called a good monitoring system.
The second problem with conventional monitoring systems is that a moving-object detection sensor reacts to moving objects other than people and judges that "there is an abnormality".
For example, when used outdoors for crime prevention, it reacts not only to intruders but also to cars travelling on the road in front of the premises. If the moving-object detection sensor reacts every time a car passes and notifies the observer over the Internet that "there is an abnormality", the observer is again forced to check the network-camera video or dispatch someone to the site to confirm safety despite there being no possibility of a crime, so this cannot be called a good monitoring system.
The third problem with conventional monitoring systems is that, even if a heat sensor or moving-object detection sensor is installed indoors to watch over the life of an elderly person living alone, it cannot distinguish the normal state from a state in which the person has become unwell and is crouching or has collapsed; someone must therefore be resident at the site, visit it frequently, or keep watching the network-camera video, so problems of the observer's labor cost and effort remain.
The present invention has been made in view of the above points, and its object is to provide a monitoring system capable of solving the first to third problems.
In order to achieve the above object, the monitoring system of the present invention comprises object reference information storage means for storing, as reference information of an object to be detected from the video, information on the upper and lower limits of the object's shape, and video analysis means for determining whether the shape of an object in the input video is within the upper and lower limits of the reference information stored in the object reference information storage means.
Here, by storing information on the upper limit (for example, a large rectangle) and the lower limit (for example, a small rectangle) of the object's shape in the object reference information storage means as reference information of the object to be detected in the video, and by using the video analysis means to judge whether the shape of an object in the input video is within the upper and lower limits of the reference information stored in the object reference information storage means, the first and second problems described above can be solved.
Specifically, when only people such as intruders are to be detected, for example, the large rectangle is made slightly larger than a person and the small rectangle is made larger than a small animal. Small animals smaller than the small rectangle and automobiles larger than the large rectangle can then be excluded from the detection targets, so the first and second problems can be solved as described above.
Also, by setting a large rectangle and a small rectangle corresponding to the shape and size of a person lying on the ground, for example, a state in which an elderly person has become ill and collapsed can be detected, so the third problem described above can also be solved.
To achieve the above object, the monitoring system of the present invention also comprises object reference information storage means for storing, as reference information of an object to be detected in a video, information on the upper and lower limits of the object's shape; effective area information storage means for storing, as effective area information, information specifying a predetermined area within the imaged area; and video analysis means for judging whether an object in an input video is located within the effective area information stored in the effective area information storage means and whether the shape of that object is within the upper and lower limits of the reference information stored in the object reference information storage means.
Here, by storing information specifying a predetermined area within the imaged area in the effective area information storage means as effective area information, and by using the video analysis means to judge whether an object in the input video is located within that effective area information, the first problem described above can be solved with higher accuracy.
Specifically, a distant person and a nearby bird may appear about the same size on the screen even though their actual sizes are completely different, so the bird may become a detection target in the same way as the person. In such a case, limiting the effective area information to the distant part of the scene, for example, avoids the problem of detecting the nearby bird, so the first problem, whose aim is to distinguish small animals from people, can be solved with higher accuracy.
By further providing video display means for displaying the video input to the video analysis means and object reference information input means for inputting reference information to the object reference information storage means, the first, second and third problems can be solved regardless of the conditions at the site where an imaging device such as a camera is installed.
Specifically, a television monitor may be used as the video display means for displaying the video, and a computer mouse may be used as the object reference information input means for inputting the reference information of the object to be detected, which is stored in the object reference information storage means. While visually checking the shape and size of a person shown on the screen of the television monitor, the operator uses the mouse to specify, for example, a rectangle close to that person's shape and size. Even if the shape or size of an object (for example, a person) changes depending on the installation position of the camera, the object (person) can still be detected, so the first, second and third problems can be solved regardless of the conditions at the camera installation site.
Similarly, by providing video display means for displaying the video input to the video analysis means and effective area information input means for inputting effective area information to the effective area information storage means, the first, second and third problems can be solved regardless of the conditions at the site where an imaging device such as a camera is installed.
Specifically, a television monitor may be used as the video display means for displaying the video, and a computer mouse may be used as the effective area information input means for inputting the effective area information stored in the effective area information storage means. By specifying the effective area information with the mouse while visually checking the shape and size of a person shown on the screen of the television monitor, the first, second and third problems can be solved regardless of the conditions at the camera installation site.
In addition, analysis result notification means for notifying the judgment result of the video analysis means makes it possible to inform the observer of the situation.
For example, using communication means via the Internet as the analysis result notification means makes it possible to notify an observer at a remote location of the situation. Using acoustic means such as a speaker as the analysis result notification means allows the observer to be notified of the situation by sound. Furthermore, using light emitting means such as a lamp as the analysis result notification means allows the observer to be notified of the situation by the lighting of the lamp.
When video transmission means is provided that can transmit the video input to the video analysis means to a terminal connected via a communication line, privacy can be protected by having the video transmission means transmit the video in accordance with the judgment result of the video analysis means.
Note that the term "communication line" here means a path for transmitting and receiving signals and includes both wired and wireless cases, and "a terminal connected via a communication line" includes terminals connected by a wired cable such as a LAN cable, terminals connected by wireless LAN, mobile phone terminals, and the like.
For example, when transmitting the video of the monitoring system to a remote terminal, the privacy of the person being monitored can be protected by transmitting the video from the monitoring system only when an emergency occurs, rather than transmitting it at all times.
Even if the video from the monitoring system is transmitted at all times, privacy protection can also be achieved by transmitting a signal indicating whether or not an emergency state exists together with the video, and configuring the terminal so that the received video is displayed only when the emergency signal is received.
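A minimal Python sketch of the second variant described above; the send() and show() callbacks and the packet layout are hypothetical names introduced for illustration, not part of the patent.

```python
def transmit_with_flag(frame, is_emergency, send):
    """Always transmit, but attach the emergency flag so the receiving
    terminal can decide whether the video should be displayed."""
    send({"emergency": is_emergency, "frame": frame})


def display_if_emergency(packet, show):
    """Terminal side: the received video is shown only when the packet
    carries the emergency flag, so privacy is preserved in the normal state."""
    if packet["emergency"]:
        show(packet["frame"])
```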
To achieve the above object, the monitoring system of the present invention also comprises object reference information storage means for storing information on the upper and lower limits of the shape of a first object as reference information of the first object to be detected in a video, and information on the upper and lower limits of the shape of a second object as reference information of the second object to be detected in the video; first video analysis means for judging whether the shape of an object in an input video is within the upper and lower limits of the reference information of the first object stored in the object reference information storage means; second video analysis means for judging whether the shape of an object in an input video is within the upper and lower limits of the reference information of the second object stored in the object reference information storage means; and logic judgment means for making a logical judgment based on at least the judgment result of the first video analysis means and the judgment result of the second video analysis means.
Here, the logic judgment means, which makes a logical judgment based on at least the judgment results of the first and second video analysis means (for example, an AND or NAND of the judgment result of the first video analysis means and the judgment result of the second video analysis means), makes it possible to solve the third problem described above.
Specifically, suppose, for example, that from the video signals of two cameras, the first camera detects a fallen person and the second camera detects a pedestrian other than the fallen person. Normally, when the first camera detects a fallen person, a notification is required in order to call for help; however, when the second camera detects another pedestrian, a pedestrian (that is, someone who can assist) is near the fallen person, so there is no need to call for additional help. Therefore, by having the logic judgment means negate (NAND) the first camera's detection of the fallen person when the second camera detects a pedestrian, the system can be used in such a way that help is not called, the observer's labor can be reduced, and the third problem described above can be solved.
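A minimal Python sketch of this logical judgment; the boolean inputs standing in for the two cameras' analysis results are hypothetical names introduced for illustration.

```python
def call_for_help(camera1_detects_fall, camera2_detects_pedestrian):
    """Help is requested only when camera 1 sees a fallen person AND camera 2
    does not see another pedestrian; a detected pedestrian negates (NANDs)
    the fall detection because someone is already on site who can assist."""
    return camera1_detects_fall and not camera2_detects_pedestrian


print(call_for_help(True, False))  # True  -> notify the observer
print(call_for_help(True, True))   # False -> a bystander can assist, no call
```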
Note that "based on at least the judgment result of the first video analysis means and the judgment result of the second video analysis means" covers not only the case where the logical judgment is made based on the judgment results of the first and second video analysis means alone, but also the case where it is made based on other elements (for example, the judgment result of a third video analysis means) in addition to those two judgment results.
To achieve the above object, the monitoring system of the present invention also comprises a first monitoring system and a second monitoring system, each having object reference information storage means for storing, as reference information of an object to be detected in a video, information on the upper and lower limits of the object's shape, video analysis means for judging whether the shape of an object in an input video is within the upper and lower limits of the reference information stored in the object reference information storage means, and video transmission means configured to be able to transmit the video input to the video analysis means to a terminal connected via a communication line; and a terminal having storage means for readably storing the video transmitted from at least the video transmission means of the first monitoring system and the video transmission means of the second monitoring system together with reception history information of that video, and display means for displaying the reception history information stored in the storage means, the terminal being configured so that, by selecting reception history information displayed on the display means, the video corresponding to that reception history information can be displayed on the display means.
To achieve the above object, the monitoring system of the present invention also comprises a first monitoring system and a second monitoring system, each having object reference information storage means for storing, as reference information of an object to be detected in a video, information on the upper and lower limits of the object's shape, effective area information storage means for storing, as effective area information, information specifying a predetermined area within the imaged area, video analysis means for judging whether an object in an input video is located within the effective area information stored in the effective area information storage means and whether the shape of that object is within the upper and lower limits of the reference information stored in the object reference information storage means, and video transmission means configured to be able to transmit the video input to the video analysis means to a terminal connected via a communication line; and a terminal having storage means for readably storing the video transmitted from at least the video transmission means of the first monitoring system and the video transmission means of the second monitoring system together with reception history information of that video, and display means for displaying the reception history information stored in the storage means, the terminal being configured so that, by selecting reception history information displayed on the display means, the video corresponding to that reception history information can be displayed on the display means.
Here, because the terminal is configured so that selecting reception history information displayed on the display means causes the video corresponding to that reception history information to be displayed on the display means, a desired video can be checked on the basis of the reception history information even when videos are received from a plurality of monitoring cameras at the same time.
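A minimal Python sketch of such a terminal-side history store, assuming each monitoring system is identified by a camera ID; the class and method names are illustrative, not taken from the patent.

```python
from datetime import datetime


class ReceptionHistory:
    """Keeps received videos together with reception-history entries and
    returns the video that corresponds to a selected entry."""

    def __init__(self):
        self._entries = []   # ordered reception history shown to the observer
        self._videos = {}    # entry -> received video data

    def receive(self, camera_id, video):
        entry = (datetime.now().isoformat(timespec="seconds"), camera_id)
        self._entries.append(entry)
        self._videos[entry] = video
        return entry

    def entries(self):
        return list(self._entries)    # what the display means lists

    def select(self, entry):
        return self._videos[entry]    # video shown when an entry is chosen
```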
Note that the term "communication line" here means a path for transmitting and receiving signals and includes both wired and wireless cases, and "a terminal connected via a communication line" includes terminals connected by a wired cable such as a LAN cable, terminals connected by wireless LAN, mobile phone terminals, and the like.
Also, "the video transmitted from at least the video transmission means of the first monitoring system and the video transmission means of the second monitoring system" covers not only the case of the video transmitted from the video transmission means of the first and second monitoring systems alone, but also the case where video from another monitoring system (for example, a third monitoring system) is included in addition to these.
The monitoring system of the present invention can solve the first to third problems described above.
FIG. 1 is a diagram showing the configuration of Embodiment 1 of the monitoring system of the present invention.
FIG. 2 is a diagram showing the state in which a camera photographs a person from above.
FIG. 3 is a diagram showing the state in which a camera photographs a person horizontally from the side.
FIG. 4 is a diagram (1) showing the small rectangle and the large rectangle.
FIG. 5 is a diagram (2) showing the small rectangle and the large rectangle.
FIG. 6 is a diagram showing the vertical and horizontal lengths of an object (person).
FIG. 7 is a diagram showing the shapes and sizes of a person and an automobile.
FIG. 8 is a diagram showing the absolute value of the difference when no person is present in either the past frame or the current frame.
FIG. 9 is a diagram showing the absolute value of the difference when a person is present in the current frame.
FIG. 10 is a diagram showing a camera photographing a fallen person from the side.
FIG. 11 is a diagram showing the fallen person as seen in the camera image.
FIG. 12 is a diagram showing a camera photographing a fallen person from the side.
FIG. 13 is a diagram showing the fallen person as seen in the camera image.
FIG. 14 is a diagram showing an example of a rectangle set for detecting a fallen person.
FIG. 15 is a diagram showing an example of a rectangle set for detecting a fallen person.
FIG. 16 is a diagram showing a camera photographing a standing person from above.
FIG. 17 is a diagram showing the camera image when a standing person is photographed from above.
FIG. 18 is a diagram showing a camera photographing a fallen person from above.
FIG. 19 is a diagram showing the camera image when a fallen person is photographed from above.
FIG. 20 is a diagram showing an example of a rectangle set for detecting a fallen person.
FIG. 21 is a diagram showing an example of object reference information other than rectangles.
FIG. 22 is a diagram showing a person far from the camera and a bird far from the camera.
FIG. 23 is a diagram showing a person far from the camera and a bird near the camera.
FIG. 24 is a diagram showing an example of the effective area rectangle to which the object reference information is applied.
FIG. 25 is a diagram showing an example in which camera 1 photographs a fallen person and camera 2 photographs another pedestrian.
FIG. 26 is a diagram showing the configuration of Embodiment 2 of the monitoring system of the present invention.
FIG. 27 is a diagram showing the configuration of Embodiment 3 of the monitoring system of the present invention.
Hereinafter, modes for carrying out the present invention (hereinafter referred to as "embodiments") will be described with reference to the drawings.
Embodiment 1 of the present invention, which solves the problems of the conventional monitoring system described above, will be described below.
FIG. 1 shows the configuration of the monitoring system of Embodiment 1 of the present invention.
In FIG. 1, reference numeral 1 denotes video storage means for storing a video signal; as an example, it stores the video of a plurality of frames captured at different times. Reference numeral 2 denotes object reference information storage means for storing reference information of an object to be detected among the subjects in the video. Reference numeral 3 denotes video analysis means that analyzes the video signal stored in the video storage means 1 and judges whether a subject matches the object reference information stored in the object reference information storage means 2. Reference numeral 4 denotes analysis result information storage means for storing the analysis result information of the video analysis means 3, and 5 denotes a computer; as an example, the video storage means 1, the object reference information storage means 2, the video analysis means 3 and the analysis result information storage means 4 are implemented as a software program on the computer 5. Reference numeral 6 denotes a camera, 7 denotes a monitor (video display means) for displaying video, and 8 denotes a mouse used as the object reference information input means. The camera 6 is assumed to convert an analog video signal into a digital signal and input it to the computer 5.
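As a rough illustration of how these components fit together, the following Python sketch wires frame storage, reference information, analysis and result storage into a single processing step; the class, attribute and parameter names are hypothetical, and the analysis itself is left as a pluggable callable because the patent does not prescribe any particular implementation.

```python
from collections import deque


class MonitoringSystemSketch:
    """Illustrative only: frames flow from the camera into the video storage
    means (1), the video analysis means (3) compares the subject against the
    object reference information (2), and the judgement is kept in the
    analysis result information storage means (4)."""

    def __init__(self, object_reference, analyze):
        self.video_storage = deque(maxlen=300)    # frames at different times
        self.object_reference = object_reference  # e.g. (small_rect, large_rect)
        self.analysis_results = []                # judgement history
        self._analyze = analyze                   # callable(frames, reference) -> bool

    def on_frame(self, frame):
        self.video_storage.append(frame)
        matched = self._analyze(list(self.video_storage), self.object_reference)
        self.analysis_results.append(matched)
        return matched
```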
First, an outline of the operation of this monitoring system will be given.
For example, when the system is used outdoors for crime prevention, one possible way of using it is to detect an intruder (person) and, for example, sound an alarm or notify an observer at a remote location.
Outdoors, however, there are not only people but also small animals such as dogs and cats. If an alarm sounds or a notification is sent to a remote location every time a dog or cat passes, the observer's work of checking the video of the site increases, so a system is desired in which notification takes place only for important events such as the appearance of an intruder (person).
To realize this, in this embodiment the video signal of the camera 6 is stored in the video storage means 1, and the stored video is analyzed by the video analysis means 3 to determine the shape and size of the subject. Information on the shape and size of the intruder (person) to be detected is stored in advance in the object reference information storage means 2 as object reference information; this object reference information is compared with the shape and size of the subject to judge whether they match, that is, whether the subject satisfies the shape and size of an intruder (person), and the result is stored in the analysis result information storage means 4.
Next, the operation of this monitoring system will be described in detail.
In order to remain unresponsive to small animals such as dogs and cats (issuing no alarm and sending no notification to a remote location) and to detect and react only to an intruder (person) (issuing an alarm and notifying a remote location), small animals and people must be distinguished automatically. To do so, it is necessary to determine the shape and size of the subject in the video of the camera 6 in FIG. 1.
However, the shape and size of a subject in the video change depending on the distance and angle between the camera and the subject. If, for example, a "person" is the object to be detected and the information on its shape and size (the object reference information) is fixed, the problem arises that the system cannot cope with these changes.
Specifically, when the camera 6 photographs a person below it from above as in FIG. 2(a) and when the camera 6 photographs a person horizontally from the side as in FIG. 3(a), the shapes look very different even though the same person is being photographed. Moreover, the camera cannot always be installed horizontally as in FIG. 3(a), and there are infinitely many combinations of distance and angle between the camera and the subject depending on the situation.
FIG. 2(b) schematically shows the camera image when the camera photographs a person below it from above as shown in FIG. 2(a), and FIG. 3(b) schematically shows the camera image when the camera photographs a person horizontally from the side as shown in FIG. 3(a).
One of the important features of the present invention is that it has a function for inputting the object reference information (shape and size) of the object to be detected, so that it can flexibly cope with the problem that the shape and size of the subject change depending on the installation position of the camera.
In the monitoring system of the present invention, information on the shape and size of the object to be detected (for example, a person), that is, the object reference information, is first stored in the object reference information storage means 2.
As an example of a method of inputting the object reference information, while watching the video on the monitor 7 of FIG. 1, the operator uses the mouse 8 to set, as shown in FIG. 4, a small rectangle (hereinafter referred to as the "small rectangle") B as the lower limit of the object reference information and a large rectangle (hereinafter referred to as the "large rectangle") A as the upper limit.
Here, the small rectangle of FIG. 4 is set slightly larger than the small animal (cat) b, and the large rectangle of FIG. 4 is set slightly larger than the person a. For example, the vertical and horizontal lengths of the small rectangle and of the large rectangle are stored in the object reference information storage means 2 of FIG. 1 as the reference information of the object to be detected.
When the small rectangle B and the large rectangle A are set as described above, objects larger than the small rectangle B and smaller than the large rectangle A (objects whose size and shape fall within the hatched area in FIG. 5) are detected, as shown in FIG. 5.
Specifically, FIG. 6 shows the vertical length D and the horizontal length E of an object (person) C. Such an object (person) is detected when the vertical length D is at least the vertical length of the small rectangle in FIG. 5 and at most the vertical length of the large rectangle, and the horizontal length E of FIG. 6 is at least the horizontal length of the small rectangle in FIG. 5 and at most the horizontal length of the large rectangle.
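A minimal Python sketch of this within-bounds test, assuming the small and large rectangles are each given as a (width, height) pair in pixels; the function name and the numeric values in the example are illustrative, not taken from the patent.

```python
def within_bounds(obj_width, obj_height, small_rect, large_rect):
    """True if the subject's size is at least that of the small rectangle B
    (lower limit) and at most that of the large rectangle A (upper limit)
    in both the horizontal and the vertical direction."""
    small_w, small_h = small_rect
    large_w, large_h = large_rect
    return (small_w <= obj_width <= large_w and
            small_h <= obj_height <= large_h)


# Example: a person of roughly 60x170 px is detected, a 30x25 px cat is not,
# and a 300x180 px car is not (all values made up for illustration).
print(within_bounds(60, 170, (40, 80), (120, 200)))   # True
print(within_bounds(30, 25, (40, 80), (120, 200)))    # False
print(within_bounds(300, 180, (40, 80), (120, 200)))  # False
```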
By defining the small rectangle B and the large rectangle A in this way, a small animal such as a cat that is smaller than the small rectangle, as in FIG. 4, is excluded from the detection targets. A "person" smaller than the large rectangle becomes the detection target, and an excellent monitoring system that does not react to small animals such as dogs and cats can be constructed.
Here, setting not only the small rectangle but also the large rectangle is also significant.
For example, assume a situation in which, as in FIG. 7, an automobile e that is much larger than the person d appears in the video; specifically, a case where an automobile passes along the road in front of the premises outdoors. In that case the automobile merely passes along the road and poses no particular crime-prevention problem, but if the system reacts every time such an automobile passes (issuing an alarm or notifying a remote location), the observer's burden cannot be reduced. By setting the large rectangle so that objects larger than it, such as automobiles, are ignored and a "person" smaller than the large rectangle can be detected, the system reacts only to a "person" attempting to intrude into, for example, a building or the premises (threatening the intruder with an alarm or notifying an observer at a remote location).
This makes it possible to construct an efficient monitoring system that imposes little labor on the observer, which is why, as described above, it is important to set not only the small rectangle but also the large rectangle.
Next, in the monitoring system of the present invention, the video signal from the camera 6 of FIG. 1 is stored in the video storage means 1, and the video analysis means 3 analyzes the shape and size of the subject.
The analysis method is not limited; one example is to obtain the absolute value of the inter-frame difference between a past frame from several seconds earlier, in which no subject (moving object) appears, and the current frame, in which the subject is present.
Specifically, as in FIG. 8 (FIG. 8(a) shows the past frame, FIG. 8(b) the current frame, and FIG. 8(c) the absolute value of the difference), when only the background appears in both the past frame and the current frame, the absolute value of the difference between the two frames is small. However, as in FIG. 9 (FIG. 9(a) shows the past frame, FIG. 9(b) the current frame, and FIG. 9(c) the absolute value of the difference), when the past frame contains only the background but a person f appears in the current frame, the absolute value of the difference between the past frame and the current frame becomes large; that is, the absolute value of the difference becomes large in the part where the person appears.
Therefore, the shape and size of the subject can be determined by extracting the part where the absolute value of the difference is large.
In this embodiment, the shape and size information (object reference information) of the "person" to be detected, stored in the object reference information storage means 2 of FIG. 1, is compared with the shape and size of the subject to judge whether they match, and the result is stored in the analysis result information storage means 4.
By linking the monitoring system of this embodiment with external systems according to the purpose, for example by connecting it to an Internet line, various applications become possible when the result stored in the analysis result information storage means 4 is the judgment that "an intruder (person) is present": for example, notifying a person at a remote location by e-mail, issuing a warning sound through an external speaker, or lighting an external lamp.
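A minimal Python sketch of such a link to external systems; the notifier objects and their method names are placeholders introduced for illustration, since the patent only states that e-mail, a speaker or a light may be used.

```python
def dispatch_result(intruder_detected, email=None, speaker=None, lamp=None):
    """Fan the judgement out to whichever external systems happen to be connected."""
    if not intruder_detected:
        return
    if email is not None:
        email.send("Intruder (person) detected")   # e-mail to a remote observer
    if speaker is not None:
        speaker.play_warning()                     # warning sound on site
    if lamp is not None:
        lamp.turn_on()                             # external light
```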
By linking the monitoring system of this embodiment with an external system, the video of the monitoring system can be transmitted to a remote terminal (for example, a desktop personal computer or a small portable personal computer) and checked on that terminal. In consideration of privacy protection, however, it is preferable to hide the video in the normal state (when no abnormality has occurred) and to display the video (moving images or still images) only when an emergency such as a person collapsing has occurred.
Also, in a situation where a plurality of (for example, two) monitoring systems of this embodiment are used to watch over a plurality of places, if videos of abnormalities are transmitted from a plurality of cameras simultaneously or one after another, the observer may overlook something, for example because the earlier video is replaced by the later one. It is therefore preferable to display a history on the observer's terminal and to allow the observer to view a video selected from that history.
In the monitoring system of this embodiment, specifying a rectangle set (meaning a combination of the small rectangle and the large rectangle stored in the object reference information storage means 2 of FIG. 1) makes it possible to ignore small animals such as dogs and cats and passing automobiles. The observer therefore no longer has to rush to check the site every time a small animal or automobile passes, and needs to respond only when an intruder (person) is present, so an efficient monitoring system can be constructed.
The monitoring system of this embodiment can also handle various applications by changing the settings of the rectangle set.
For example, it can be used to distinguish children from adults. In that case, the small rectangle is set slightly larger than an infant or child, and the large rectangle slightly larger than an adult. With these settings, the system can be applied, for example in a kindergarten or elementary school, as a system that does not react to infants or children but automatically notifies the staff room when an adult (suspicious person) enters.
With a conventional thermal sensor or motion detection sensor that lacks such a function, notifications are generated not only for adults but also for children (infants and schoolchildren), so notifications that are unimportant for crime prevention occur frequently. Every notification must be checked by a person, so either the labor of the observer cannot be reduced, or the observer comes to assume that "a child (infant or schoolchild) has probably passed again" and may overlook a case in which a suspicious person has actually intruded. In contrast, the monitoring system of this embodiment can greatly reduce such problems.
The monitoring system of this embodiment can flexibly cope with the problem that the subject looks different depending on the installation position of the camera.
For example, when the camera photographs a person horizontally from the side as in FIG. 3(a), the person (subject) in the video looks close to a vertically long rectangle, whereas when the camera photographs a person from above as in FIG. 2(a), the camera image looks close to a circle or a square. Thus even the same person (subject) looks different depending on the position of the camera relative to the subject. In view of this, the monitoring system of this embodiment allows the shape of the object to be detected to be specified according to the site while viewing the screen with the camera already installed, so a system that is hardly affected by the installation location can be constructed.
In the monitoring system of this embodiment, a "state in which a person has fallen" can also be detected by setting a plurality of rectangle sets.
For example, an elderly person living alone may become ill and collapse, and if left unattended for several days may suffer a serious, possibly fatal, accident. Such serious accidents involving elderly people living alone can be prevented by detecting the "state in which a person has fallen".
However, the "state in which a person has fallen" appears in the camera image with different shapes and sizes depending on the relative relationship between the installation position of the camera and the position of the fallen person. For example, when the camera photographs the fallen person of FIG. 10 horizontally from the side, the camera image looks like FIG. 11, and when the camera photographs the fallen person of FIG. 12 horizontally from the side, the camera image looks like FIG. 13. As a comparison of FIG. 11 and FIG. 13 shows, even when the camera photographs a fallen person from the same position, the appearance differs greatly depending on the direction in which the person has fallen.
Even in such cases, the monitoring system of this embodiment can cope adequately. That is, a plurality of rectangle sets are specified so that both FIG. 11 and FIG. 13 can be detected as a state in which a person has fallen, and these are stored in the object reference information storage means 2 of FIG. 1 as the reference information of the object to be detected. When the video analysis means 3 detects a subject that matches any one of the plurality of rectangle sets, it judges that "a person has fallen", so the "state in which a person has fallen" can be detected satisfactorily.
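A minimal Python sketch of this any-of-several-sets judgment, with each rectangle set given as a ((small_w, small_h), (large_w, large_h)) pair; the pixel values in the example are made up for illustration.

```python
def fallen_person_detected(obj_width, obj_height, rect_sets):
    """True if the subject fits any one of the rectangle sets prepared for
    the different fall orientations (e.g. the sets of FIG. 14, 15 and 20)."""
    for (small_w, small_h), (large_w, large_h) in rect_sets:
        if small_w <= obj_width <= large_w and small_h <= obj_height <= large_h:
            return True
    return False


# Two hypothetical sets: lying sideways (wide and low) and lying toward
# the camera (nearly square).
rect_sets = [((100, 30), (220, 90)), ((50, 50), (140, 140))]
print(fallen_person_detected(180, 60, rect_sets))  # True (matches the first set)
```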
Specifically, a rectangle set such as that in FIG. 14 is specified to detect the fallen state of FIG. 11, and a rectangle set such as that in FIG. 15 is specified to detect the fallen state of FIG. 13. By additionally specifying further rectangle sets corresponding to other ways of falling, the accuracy of detecting a fallen person can be increased. In FIG. 14 and FIG. 15, reference symbol A denotes the large rectangle and reference symbol B the small rectangle.
As another example, when the camera 6 is installed so as to photograph a person from above as in FIG. 16, a standing person looks like FIG. 17 in the camera image, but when a fallen person is photographed from above as in FIG. 18, the camera image looks like FIG. 19. The camera image of FIG. 19, taken from above, differs from the camera images taken from the side (FIG. 11 and FIG. 13), so a rectangle set such as that in FIG. 20 is needed to handle a fallen image such as FIG. 19. Being able to specify a plurality of rectangle sets in this way realizes a system that can detect the "state in which a person has fallen" while flexibly responding to changes in the installation position of the camera. In FIG. 20, reference symbol A denotes the large rectangle and reference symbol B the small rectangle.
In the above, the object reference information was expressed by rectangles as an example, but it need not be limited to rectangles. For example, as in FIG. 21, models of a person in various states may be prepared, and the desired model may be selected and used, or the shape and size of the selected model may be adjusted (enlarged, reduced, deformed) before use.
Since what is to be detected also changes with the application, the models are not limited to human shapes; by preparing models of various objects such as animals and automobiles and using them as object reference information, the system can also be used, for example, to count the number of automobiles and the number of people separately.
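A minimal Python sketch of such separate counting, assuming each model is reduced to a (small, large) size range and each detection to a bounding-box size; the labels and numbers are illustrative only.

```python
def count_by_model(detections, models):
    """detections: list of (width, height) boxes found in the video.
    models: label -> ((small_w, small_h), (large_w, large_h)) reference range.
    Returns how many detections fall within each model's range."""
    counts = {label: 0 for label in models}
    for width, height in detections:
        for label, ((sw, sh), (lw, lh)) in models.items():
            if sw <= width <= lw and sh <= height <= lh:
                counts[label] += 1
                break
    return counts


models = {"person": ((40, 80), (120, 200)), "car": ((150, 80), (400, 250))}
print(count_by_model([(60, 170), (300, 180), (55, 150)], models))
# {'person': 2, 'car': 1}
```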
The video stored in the video storage means 1 of FIG. 1 need not be limited to frames and may be fields. For example, when the video has been converted into luminance and color difference, only the luminance or only the color difference may be stored; the video may also be enlarged or reduced, partially extracted, frequency-converted, subjected to filtering such as differentiation in the time-axis or spatial direction, changed in the number of colors, or changed in the gradation of each signal by quantization.
In this embodiment, the camera 6 of FIG. 1 is assumed to convert an analog video signal into a digital signal and input it to the computer 5. However, if the camera 6 outputs an analog signal, a function for converting it into a digital signal may be provided between the computer 5 and the camera 6 or inside the computer 5, and if the video signal of the camera 6 is compressed in some way, means for decompressing the compressed signal may be provided between the computer 5 and the camera 6 or inside the computer 5.
In this embodiment, the video signal is used while being input directly from the camera 6 to the computer 5, but a configuration in which the video of the camera 6 is transmitted to a remote computer 5 via an Internet line or the like has the same meaning.
The method of extracting the shape and size of the subject of the camera 6 is not limited to the method shown in this embodiment.
In this embodiment, the main part of the monitoring system is implemented as software running on a computer; in that case it takes the form of a program running on the computer's processor, various kinds of control are performed by the CPU, and the various storage means are constituted by the computer's memory, hard disk and the like. These functions may also be configured as a system LSI or other hardware, as software running on a computer, as hardware incorporated in a computer, or as a combination of software and hardware. They may furthermore take the form of a recording medium on which a program that runs on a computer is recorded.
Although this embodiment shows the case of a single camera, a plurality of cameras may be used.
Moreover, the camera is not limited to one that receives visible light, such as a color camera; anything capable of receiving some kind of optical signal, such as an infrared receiver, can substantially acquire video, and such devices are also included in the cameras described in this embodiment.
Embodiment 2 of the present invention, which solves the problems of the conventional monitoring system described above, will be described below.
FIG. 26 shows the configuration of the monitoring system of Embodiment 2 of the present invention.
In FIG. 26, reference numeral 1 denotes video storage means for storing a video signal; as an example, it stores the video of a plurality of frames captured at different times. Reference numeral 2 denotes object reference information storage means for storing reference information of an object to be detected among the subjects in the video. Reference numeral 3 denotes video analysis means that analyzes the video signal stored in the video storage means 1 and judges whether a subject matches the object reference information stored in the object reference information storage means 2. Reference numeral 4 denotes analysis result information storage means for storing the analysis result information of the video analysis means 3, and 5 denotes a computer; as an example, the video storage means 1, the object reference information storage means 2, the video analysis means 3 and the analysis result information storage means 4 are implemented as a software program on the computer 5. Reference numeral 6 denotes a camera, 7 denotes a monitor (video display means) for displaying video, and 8 denotes a mouse used as the object reference information input means. The camera 6 converts an analog video signal into a digital signal and inputs it to the computer 5. The above is the same as the configuration of FIG. 1 in Embodiment 1. Reference numeral 9 denotes effective area information storage means that stores, as effective area information, the coordinates of the effective area within the screen to which the object reference information is applied. The mouse 8 is also used as the effective area information input means.
The difference from Embodiment 1 is that this effective area information storage means 9 is provided.
First, an outline of the operation of this monitoring system will be given.
For example, when the system is used outdoors for crime prevention, one possible way of using it is to detect an intruder (person) and, for example, sound an alarm or notify an observer at a remote location.
Outdoors, however, there are not only people but also small animals such as dogs and cats. If an alarm sounds or a notification is sent to a remote location every time a dog or cat passes, the observer's work of checking the video of the site increases, so a system is desired in which notification takes place only for important events such as the appearance of an intruder (person).
To realize this, in this embodiment the video signal of the camera 6 is stored in the video storage means 1, and the stored video is analyzed by the video analysis means 3 to determine the shape and size of the subject. Information on the shape and size of the intruder (person) to be detected is stored in advance in the object reference information storage means 2 as object reference information; this object reference information is compared with the shape and size of the subject to judge whether they match, that is, whether the subject satisfies the shape and size of an intruder (person), and the result is stored in the analysis result information storage means 4.
Up to this point, the operation is the same as in Embodiment 1.
This embodiment differs from Embodiment 1 in that the coordinates of the effective area within the screen to which the object reference information is applied are stored in the effective area information storage means 9; by limiting the effective area, the accuracy of distinguishing people from small animals such as birds is improved.
FIG. 22 shows a person g and a bird h both located far away. The person and the bird are at roughly the same distance from the camera, and in this case their sizes are completely different, so by setting a small rectangle sufficiently larger than the bird as the object reference information, the bird is not misrecognized as a person.
However, a case such as that in FIG. 23 is problematic. FIG. 23 shows a state in which the person g is far away while the bird h is near the camera; even though the bird is inherently much smaller than a person, by approaching the camera it can appear about as large as the person on the screen. In this case, because the differences in size and shape are small, the bird may be misrecognized as a person.
To suppress this problem, in this embodiment the coordinates of the area to which the object reference information is applied are set as an effective area rectangle X as shown in FIG. 24, so that the object reference information is applied only to distant people and misrecognizing a nearby bird as a person can be avoided.
Next, the details of the operation of this monitoring system will be described.
If the system is to ignore small animals such as dogs and cats (no alarm and no remote notification) and to react only to an intruder (person) (alarm and remote notification), it must automatically distinguish small animals from people. To do so, the shape and size of the subject in the video from the camera 6 in FIG. 26 must be determined.
However, the shape and size of a subject in the video change with the distance and angle between the camera and the subject. If, for example, a "person" is the object to be detected, fixed shape and size information (object reference information) cannot cope with this variation.
Specifically, when the camera 6 photographs a person from above as in FIG. 2(a) and when it photographs a person horizontally from the side as in FIG. 3(a), the same person appears with a very different shape. Moreover, the camera cannot always be installed horizontally as in FIG. 3(a); depending on the situation, there are countless combinations of distance and angle between the camera and the subject.
One of the important features of the present invention is therefore a function for inputting the object reference information (shape and size) of the object to be detected, so that the system can flexibly cope with the fact that the shape and size of the subject change with the installation position of the camera.
In the monitoring system of the present invention, information on the shape and size of the object to be detected (for example, a person) is first stored in the object reference information storage means 2 as object reference information.
As an example of how to input the object reference information, the operator uses the mouse 8 while viewing the video on the monitor 7 of FIG. 26 to set, as shown in FIG. 4, a small rectangle B as the lower limit and a large rectangle A as the upper limit of the object reference information.
Here, the small rectangle in FIG. 4 is set slightly larger than the small animal (cat) b, and the large rectangle in FIG. 4 is set slightly larger than the person a. For example, the vertical and horizontal lengths of the small rectangle and the large rectangle are stored in the object reference information storage means 2 of FIG. 26 as the reference information of the object to be detected.
When the small rectangle B and the large rectangle A are set as described above, an object larger than the small rectangle B and smaller than the large rectangle A (an object whose size and shape fall within the hatched region in FIG. 5) is detected.
Specifically, FIG. 6 shows the vertical length D and the horizontal length E of an object (person) C. When the vertical length D is not less than the vertical length of the small rectangle and not more than the vertical length of the large rectangle in FIG. 5, and the horizontal length E is not less than the horizontal length of the small rectangle and not more than the horizontal length of the large rectangle in FIG. 5, the object (person) is detected.
By defining the small rectangle B and the large rectangle A in this way, a small animal such as a cat that is smaller than the small rectangle, as in FIG. 4, is excluded from detection. A "person" smaller than the large rectangle becomes the detection target, and an excellent monitoring system that does not react to small animals such as dogs and cats can be constructed.
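Purely as an illustrative sketch (the patent does not prescribe an implementation), the comparison against the small rectangle B and the large rectangle A could be expressed as follows in Python; the pixel values used for the limits and the test objects are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class SizeLimits:
    """Object reference information: lower limit (small rectangle B)
    and upper limit (large rectangle A), in pixels."""
    min_h: int   # vertical length of small rectangle B
    min_w: int   # horizontal length of small rectangle B
    max_h: int   # vertical length of large rectangle A
    max_w: int   # horizontal length of large rectangle A

def matches_reference(height: int, width: int, limits: SizeLimits) -> bool:
    """True when the subject's vertical length D and horizontal length E
    both fall between the small and large rectangles."""
    return (limits.min_h <= height <= limits.max_h and
            limits.min_w <= width <= limits.max_w)

# Example: a cat-sized object is ignored, a person-sized object is
# detected, and a car-sized object is ignored (all values assumed).
limits = SizeLimits(min_h=60, min_w=40, max_h=220, max_w=120)
print(matches_reference(40, 30, limits))    # False: smaller than B
print(matches_reference(180, 60, limits))   # True:  between B and A
print(matches_reference(200, 450, limits))  # False: wider than A
```

The last case corresponds to the passing automobile discussed next, which is excluded because it exceeds the large rectangle A.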
Here, setting not only the small rectangle but also the large rectangle has an important meaning.
For example, as in FIG. 7, assume a situation in which an automobile e sufficiently larger than the person d appears in the video, for instance a car passing along the road in front of the premises. The car merely passes through on the road and poses no particular crime-prevention problem, but if the system reacted (with an alarm or a remote notification) every time such a car passed, the burden on the supervisor could not be reduced. By setting the large rectangle, automobiles and other objects larger than the large rectangle are ignored, and a "person" smaller than the large rectangle can be detected; the system then reacts only to a "person" attempting to enter, for example, a building or the premises (threatening with an alarm or notifying a remote supervisor).
This makes it possible to construct an efficient monitoring system that places little burden on the supervisor, which is why it is important to set not only the small rectangle but also the large rectangle, as described above.
Next, in the monitoring system of the present invention, the video signal from the camera 6 of FIG. 26 is stored in the video storage means 1, and the video analysis means 3 analyzes the shape and size of the subject.
The analysis method is not limited; one example is to take the absolute value of the inter-frame difference between a past frame from a few seconds earlier, in which no subject (moving object) appears, and the current frame, in which the subject is present.
Specifically, as shown in FIG. 8, when only the background appears in both the past frame and the current frame, the absolute value of the difference between the two frames is small. However, as shown in FIG. 9, when the past frame contains only the background but a person f appears in the current frame, the absolute value of the difference becomes large; that is, the absolute difference is large in the region where the person appears.
Therefore, the shape and size of the subject can be obtained by extracting the region where the absolute value of the difference is large.
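A minimal sketch of this inter-frame difference, using NumPy and assuming grayscale frames of identical size; the threshold value is an assumption chosen for illustration, not a value given in the patent.

```python
import numpy as np

def subject_bounding_box(past_frame: np.ndarray,
                         current_frame: np.ndarray,
                         threshold: int = 30):
    """Return (top, left, height, width) of the region where the absolute
    inter-frame difference exceeds the threshold, or None when only the
    background is present.  Frames are 2-D uint8 grayscale arrays."""
    diff = np.abs(current_frame.astype(np.int16) -
                  past_frame.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return None                      # only background: difference stays small
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    top, bottom = rows[0], rows[-1]
    left, right = cols[0], cols[-1]
    return top, left, bottom - top + 1, right - left + 1
```

The returned height and width correspond to the vertical length D and horizontal length E that are compared against the small and large rectangles above.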
In this embodiment, an effective area rectangle is set, for example as shown in FIG. 24, using the mouse 8 while viewing the video on the monitor 7 of FIG. 26. The coordinate information of, for example, the upper-left and lower-right corners of this effective area is stored as effective area information in the effective area information storage means 9 of FIG. 26.
In the monitoring system of the present invention, the shape and size information (object reference information) of the "person" to be detected, stored in the object reference information storage means 2 of FIG. 26, is compared with the shape and size of the subject to judge whether they match, and it is also judged whether the subject is located within the effective area stored in the effective area information storage means 9; the result is stored in the analysis result information storage means 4. By providing an effective area to which the object reference information is applied in this way, only a distant person within the effective area is detected, as shown in FIG. 24, and misrecognition of a nearby bird outside the effective area as a person is suppressed.
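As an illustrative sketch of this combined judgment, reusing matches_reference and SizeLimits from the earlier fragment and assuming the effective area is given by its upper-left and lower-right corner coordinates in (row, column) form:

```python
def in_effective_area(top, left, height, width,
                      area_top_left, area_bottom_right) -> bool:
    """True when the subject's bounding box lies inside the effective
    area defined by its upper-left and lower-right corners (row, col)."""
    a_top, a_left = area_top_left
    a_bottom, a_right = area_bottom_right
    return (a_top <= top and a_left <= left and
            top + height - 1 <= a_bottom and
            left + width - 1 <= a_right)

def analyze(box, limits, area_top_left, area_bottom_right) -> bool:
    """Analysis result: True only when the subject is inside the effective
    area and its size matches the object reference information."""
    if box is None:
        return False
    top, left, height, width = box
    return (in_effective_area(top, left, height, width,
                              area_top_left, area_bottom_right) and
            matches_reference(height, width, limits))
```

A result of True here corresponds to the judgment "an intruder (person) is present" that is stored in the analysis result information storage means 4.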
By linking with an external system according to the purpose, for example by connecting the monitoring system of the present invention to an Internet line, various applications become possible when the result stored in the analysis result information storage means 4 indicates that an intruder (person) is present: notifying a person at a remote location by e-mail, emitting a warning sound through an external speaker, turning on an external light, and so on.
In this embodiment, a plurality of different rectangle sets can also be defined, so that, for example, a person is detected within one effective area of the screen and a car is detected within another; a monitoring system that can be configured flexibly according to the purpose can thus be constructed, as sketched below.
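One possible way to hold such per-area settings is sketched below, reusing SizeLimits from the earlier fragment; the labels, coordinates, and sizes are illustrative assumptions, not values from the patent.

```python
# Each entry pairs an effective area with its own object reference
# information, so different areas of the screen detect different targets.
detection_rules = [
    {"label": "person",
     "area_top_left": (0, 100), "area_bottom_right": (120, 400),
     "limits": SizeLimits(min_h=20, min_w=10, max_h=60, max_w=40)},
    {"label": "car",
     "area_top_left": (200, 0), "area_bottom_right": (480, 640),
     "limits": SizeLimits(min_h=80, min_w=150, max_h=250, max_w=500)},
]
```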
Furthermore, in this embodiment, by providing an effective area to which the object reference information is applied, the advantages of the configuration of the first embodiment are retained while, as shown in FIG. 24, only a distant person within the effective area is detected and misrecognition of a nearby bird outside the effective area as a person is suppressed; a monitoring system with even higher accuracy and fewer false alarms than the first embodiment can therefore be realized.
A third embodiment of the present invention, which solves the problems of the conventional monitoring system described above, will now be described.
FIG. 27 shows the configuration of the monitoring system according to the third embodiment of the present invention.
In the second embodiment, the case of a single camera was taken as the example; in this embodiment, two cameras are used and the corresponding means are provided for each. In FIG. 27, reference numeral 1 denotes video storage means for storing a video signal, here assumed to store video of a plurality of frames at different times. Reference numeral 2 denotes object reference information storage means for storing the reference information of the object to be detected among the subjects in the video; 3 denotes video analysis means for analyzing the video signal stored in the video storage means 1 and judging whether the subject matches the object reference information stored in the object reference information storage means 2; 4 denotes analysis result information storage means for storing the analysis result information of the video analysis means 3; 9 denotes effective area information storage means for storing, as effective area information, the coordinates of the effective area within the screen to which the object reference information is applied; and 6 denotes a camera. The above are the means for the camera 6.
In FIG. 27, reference numeral 11 denotes video storage means for storing a video signal, here assumed to store video of a plurality of frames at different times. Reference numeral 12 denotes object reference information storage means for storing the reference information of the object to be detected among the subjects in the video; 13 denotes video analysis means for analyzing the video signal stored in the video storage means 11 and judging whether the subject matches the object reference information stored in the object reference information storage means 12; 14 denotes analysis result information storage means for storing the analysis result information of the video analysis means 13; 15 denotes effective area information storage means for storing, as effective area information, the coordinates of the effective area within the screen to which the object reference information is applied; and 10 denotes a camera. The above are the means for the camera 10. Reference numeral 16 denotes logic determination means that performs a logical judgment, such as NAND, on the analysis result corresponding to the camera 6 and the analysis result corresponding to the camera 10. The importance of a detection state is thereby judged by comprehensively considering the analysis results of the plurality of cameras, and AND or NAND can be used as appropriate for the purpose.
Reference numeral 5 denotes a computer. As an example, the video storage means 1 and 11, the object reference information storage means 2 and 12, the video analysis means 3 and 13, the analysis result information storage means 4 and 14, and the logic determination means 16 are implemented as software programs on the computer 5. Reference numeral 7 denotes a monitor (video display means) for displaying the video, and 8 denotes a mouse, which is used as the object reference information input means. The camera 6 and the camera 10 convert analog video signals into digital form for input to the computer 5.
First, an outline of the operation of this monitoring system will be described.
FIG. 25 shows an installation example of each camera; the role of each camera is explained below.
The camera 6 photographs from above and is used to detect a fallen person. The camera 10 photographs from the side and is used to detect a pedestrian other than the fallen person. Normally, when the camera 6 detects a fallen person, a notification is required in order to call for help. However, when the camera 10 detects another pedestrian, a pedestrian (that is, a person who can assist) is near the fallen person, and there is no need to call for additional help. Therefore, when the camera 10 detects a pedestrian, the logic determination means 16 negates (NANDs) the detection of the fallen person by the camera 6, so that the system does not call for help unnecessarily. Not issuing a notification in cases where help is not needed reduces the labor of the supervisor in checking the site and of dispatching a person to the site.
Next, the details of the operation of this monitoring system will be described.
First, information on the shape and size of the object to be detected by the camera 6 (in this case, a "fallen person") is stored in the object reference information storage means 2 as object reference information.
Subsequently, information on the shape and size of the object to be detected by the camera 10 (in this case, "a pedestrian other than the fallen person") is stored in the object reference information storage means 12 as object reference information.
As an example of how to input the object reference information, the operator uses the mouse 8 while viewing the video on the monitor 7 of FIG. 27 to set a small rectangle as the lower limit and a large rectangle as the upper limit of the object reference information. Likewise, effective area information is stored in the effective area information storage means 15 as necessary. Thereafter, the flow until the analysis result is stored in the analysis result information storage means 14 is the same as in the second embodiment for the means having the same names.
In this embodiment, when the camera 10 detects the pedestrian j, the logic determination means 16 negates (NANDs) the detection of the fallen person k by the camera 6. That is, when there is a pedestrian who can help the fallen person, the situation is judged not to be an important one requiring notification.
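A minimal sketch of this logical judgment, assuming each camera's analysis result is available as a boolean read from its analysis result information storage means; the function name is illustrative, and the behavior follows the description above (notify only when a fall is detected and no pedestrian is detected).

```python
def needs_notification(fall_detected: bool, helper_detected: bool) -> bool:
    """Camera 6 result (fallen person) gated by the negation of the
    camera 10 result (nearby pedestrian), as performed by the logic
    determination means 16: notify only when someone has fallen and
    no pedestrian is present to assist."""
    return fall_detected and not helper_detected

print(needs_notification(True, False))   # True:  fallen person, nobody nearby
print(needs_notification(True, True))    # False: a pedestrian can assist
print(needs_notification(False, False))  # False: nobody has fallen
```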
By linking with an external system, for example by connecting the monitoring system of this embodiment to an Internet line, a person at a remote location can be notified by e-mail when there is a fallen person and no pedestrian who can help, and no notification is sent when such a pedestrian is present; the burden on the supervisor can thereby be greatly reduced.
DESCRIPTION OF SYMBOLS
1  Video storage means
2  Object reference information storage means
3  Video analysis means
4  Analysis result information storage means
5  Computer
6  Camera
7  Monitor
8  Mouse
9  Effective area information storage means
10  Camera
11  Video storage means
12  Object reference information storage means
13  Video analysis means
14  Analysis result information storage means
15  Effective area information storage means
16  Logic determination means

Claims (12)

1.  A monitoring system comprising:
     object reference information storage means for storing, as reference information of an object to be detected from a video, information on an upper limit and a lower limit of the shape of the object; and
     video analysis means for judging whether the shape of an object in an input video is within the upper and lower limits of the reference information stored in the object reference information storage means.
2.  A monitoring system comprising:
     object reference information storage means for storing, as reference information of an object to be detected from a video, information on an upper limit and a lower limit of the shape of the object;
     effective area information storage means for storing, as effective area information, information specifying a predetermined area within the imaged area; and
     video analysis means for judging whether an object in an input video is located within the effective area specified by the effective area information stored in the effective area information storage means and judging whether the shape of the object is within the upper and lower limits of the reference information stored in the object reference information storage means.
3.  The monitoring system according to claim 1, further comprising video storage means for storing the input video,
     wherein the video analysis means judges whether the shape of an object in the video stored in the video storage means is within the upper and lower limits of the reference information stored in the object reference information storage means.
4.  The monitoring system according to claim 2, further comprising video storage means for storing the input video,
     wherein the video analysis means judges whether an object in the video stored in the video storage means is located within the effective area specified by the effective area information stored in the effective area information storage means and judges whether the shape of the object is within the upper and lower limits of the reference information stored in the object reference information storage means.
5.  The monitoring system according to claim 1, 2, 3 or 4, further comprising analysis result information storage means for storing the judgment result of the video analysis means.
6.  The monitoring system according to claim 1, 2, 3 or 4, further comprising notification means for announcing the judgment result of the video analysis means.
7.  The monitoring system according to claim 1, 2, 3 or 4, further comprising:
     video display means for displaying the video input to the video analysis means; and
     object reference information input means for inputting the reference information to the object reference information storage means.
8.  The monitoring system according to claim 2 or 4, further comprising:
     video display means for displaying the video input to the video analysis means; and
     effective area information input means for inputting the effective area information to the effective area information storage means.
9.  The monitoring system according to claim 1, 2, 3 or 4, further comprising video transmission means configured to be capable of transmitting the video input to the video analysis means to a terminal connected via a communication line,
     wherein the video transmission means transmits the video input to the video analysis means in accordance with the judgment result of the video analysis means.
10.  A monitoring system comprising:
     object reference information storage means for storing, as reference information of a first object to be detected from a video, information on an upper limit and a lower limit of the shape of the first object, and for storing, as reference information of a second object to be detected from a video, information on an upper limit and a lower limit of the shape of the second object;
     first video analysis means for judging whether the shape of an object in an input video is within the upper and lower limits of the reference information of the first object stored in the object reference information storage means;
     second video analysis means for judging whether the shape of an object in an input video is within the upper and lower limits of the reference information of the second object stored in the object reference information storage means; and
     logic determination means for performing a logical judgment based on at least the judgment result of the first video analysis means and the judgment result of the second video analysis means.
11.  A monitoring system comprising:
     a first monitoring system and a second monitoring system, each having object reference information storage means for storing, as reference information of an object to be detected from a video, information on an upper limit and a lower limit of the shape of the object, video analysis means for judging whether the shape of an object in an input video is within the upper and lower limits of the reference information stored in the object reference information storage means, and video transmission means configured to be capable of transmitting the video input to the video analysis means to a terminal connected via a communication line; and
     a terminal having storage means for readably storing the video transmitted from at least the video transmission means of the first monitoring system and the video transmission means of the second monitoring system together with reception history information of the video, and display means for displaying the reception history information stored in the storage means, the terminal being configured such that, by selecting the reception history information displayed on the display means, the video corresponding to the reception history information can be displayed on the display means.
12.  A monitoring system comprising:
     a first monitoring system and a second monitoring system, each having object reference information storage means for storing, as reference information of an object to be detected from a video, information on an upper limit and a lower limit of the shape of the object, effective area information storage means for storing, as effective area information, information specifying a predetermined area within the imaged area, video analysis means for judging whether an object in an input video is located within the effective area specified by the effective area information stored in the effective area information storage means and judging whether the shape of the object is within the upper and lower limits of the reference information stored in the object reference information storage means, and video transmission means configured to be capable of transmitting the video input to the video analysis means to a terminal connected via a communication line; and
     a terminal having storage means for readably storing the video transmitted from at least the video transmission means of the first monitoring system and the video transmission means of the second monitoring system together with reception history information of the video, and display means for displaying the reception history information stored in the storage means, the terminal being configured such that, by selecting the reception history information displayed on the display means, the video corresponding to the reception history information can be displayed on the display means.
PCT/JP2009/064844 2008-08-28 2009-08-26 Monitoring system WO2010024281A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2010526736A JP5047361B2 (en) 2008-08-28 2009-08-26 Monitoring system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2008-219635 2008-08-28
JP2008219635 2008-08-28
JP2009-146442 2009-06-19
JP2009146442 2009-06-19

Publications (1)

Publication Number Publication Date
WO2010024281A1 true WO2010024281A1 (en) 2010-03-04

Family

ID=41721448

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2009/064844 WO2010024281A1 (en) 2008-08-28 2009-08-26 Monitoring system

Country Status (2)

Country Link
JP (1) JP5047361B2 (en)
WO (1) WO2010024281A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011227614A (en) * 2010-04-16 2011-11-10 Secom Co Ltd Image monitoring device
JP2013134544A (en) * 2011-12-26 2013-07-08 Asahi Kasei Corp Fall detection device, fall detection method, information processor, and program
JP2014149584A (en) * 2013-01-31 2014-08-21 Ramrock Co Ltd Notification system
JP2017036945A (en) * 2015-08-07 2017-02-16 株式会社Ihiエアロスペース Moving body and obstacle detection method of the same
EP2953349A4 (en) * 2013-01-29 2017-03-08 Ramrock Video Technology Laboratory Co., Ltd. Monitor system
JP2018093347A (en) * 2016-12-01 2018-06-14 キヤノン株式会社 Information processing apparatus, information processing method, and program
US10147288B2 (en) 2015-10-28 2018-12-04 Xiaomi Inc. Alarm method and device
JP2020024669A (en) * 2018-08-07 2020-02-13 キヤノン株式会社 Detection device and control method thereof
JP2020052826A (en) * 2018-09-27 2020-04-02 株式会社リコー Information provision device, information provision system, information provision method, and program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3956484B2 (en) * 1998-05-26 2007-08-08 株式会社ノーリツ Bathing monitoring device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6286990A (en) * 1985-10-11 1987-04-21 Matsushita Electric Works Ltd Abnormality supervisory equipment
JP2001160146A (en) * 1999-12-01 2001-06-12 Matsushita Electric Ind Co Ltd Method and device for recognizing image
JP2007157005A (en) * 2005-12-07 2007-06-21 Matsushita Electric Ind Co Ltd Object action detection/report system, center device, controller device, object action detection/report method, and object action detection/report program
JP2008059487A (en) * 2006-09-01 2008-03-13 Basic:Kk Supervising device and method

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011227614A (en) * 2010-04-16 2011-11-10 Secom Co Ltd Image monitoring device
JP2013134544A (en) * 2011-12-26 2013-07-08 Asahi Kasei Corp Fall detection device, fall detection method, information processor, and program
EP2953349A4 (en) * 2013-01-29 2017-03-08 Ramrock Video Technology Laboratory Co., Ltd. Monitor system
US9905009B2 (en) 2013-01-29 2018-02-27 Ramrock Video Technology Laboratory Co., Ltd. Monitor system
JP2014149584A (en) * 2013-01-31 2014-08-21 Ramrock Co Ltd Notification system
JP2017036945A (en) * 2015-08-07 2017-02-16 株式会社Ihiエアロスペース Moving body and obstacle detection method of the same
EP3163543B1 (en) * 2015-10-28 2018-12-26 Xiaomi Inc. Alarming method and device
US10147288B2 (en) 2015-10-28 2018-12-04 Xiaomi Inc. Alarm method and device
JP2018093347A (en) * 2016-12-01 2018-06-14 キヤノン株式会社 Information processing apparatus, information processing method, and program
JP2020024669A (en) * 2018-08-07 2020-02-13 キヤノン株式会社 Detection device and control method thereof
JP7378223B2 (en) 2018-08-07 2023-11-13 キヤノン株式会社 Detection device and its control method
JP2020052826A (en) * 2018-09-27 2020-04-02 株式会社リコー Information provision device, information provision system, information provision method, and program
JP7172376B2 (en) 2018-09-27 2022-11-16 株式会社リコー Information providing device, information providing system, information providing method, and program

Also Published As

Publication number Publication date
JPWO2010024281A1 (en) 2012-01-26
JP5047361B2 (en) 2012-10-10

Similar Documents

Publication Publication Date Title
JP5047361B2 (en) Monitoring system
JP4617269B2 (en) Monitoring system
US9311794B2 (en) System and method for infrared intruder detection
KR101544019B1 (en) Fire detection system using composited video and method thereof
KR20070029760A (en) Monitoring devices
US9053621B2 (en) Image surveillance system and image surveillance method
KR101467352B1 (en) location based integrated control system
KR101381924B1 (en) System and method for monitoring security using camera monitoring apparatus
KR102230552B1 (en) Device For Computing Position of Detected Object Using Motion Detect and Radar Sensor
KR20120140518A (en) Remote monitoring system and control method of smart phone base
US20040216165A1 (en) Surveillance system and surveillance method with cooperative surveillance terminals
JP2009015536A (en) Suspicious person report device, suspicious person monitoring device and remote monitoring system using the same
KR101046819B1 (en) Method and system for watching an intrusion by software fence
JP2001069268A (en) Communication equipment
JP4702184B2 (en) Surveillance camera device
JP2005309965A (en) Home security device
JP2008148138A (en) Monitoring system
JP2006003941A (en) Emergency report system
JP6754451B2 (en) Monitoring system, monitoring method and program
JP4096953B2 (en) Human body detector
JP2004110234A (en) Emergency alarm and emergency-alarming system
JP2008186283A (en) Human body detector
KR100368448B1 (en) A Multipurpose Alarm System
JP4540456B2 (en) Suspicious person detection device
JP4650346B2 (en) Surveillance camera device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09809932

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2010526736

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09809932

Country of ref document: EP

Kind code of ref document: A1