WO2020256152A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2020256152A1
Authority
WO
WIPO (PCT)
Prior art keywords
information processing
predetermined space
predetermined
data
dangerous
Prior art date
Application number
PCT/JP2020/024372
Other languages
French (fr)
Japanese (ja)
Inventor
トニー シュウ
Original Assignee
トニー シュウ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by トニー シュウ
Publication of WO2020256152A1 publication Critical patent/WO2020256152A1/en

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to an information processing device, an information processing method, and a program.
  • Conventionally, there are surveillance camera systems in which a surveillance camera is installed inside or around a house or the like, and the data of images captured by the surveillance camera is transmitted to an observer or the like and used for crime prevention (see, for example, Patent Document 1).
  • With the prior art alone, however, the surveillance camera merely captures images; the image data is transmitted over the Internet or the like to an external device such as an external observer, and abnormalities are detected by that external device.
  • As a result, for example, when a failure occurs on the Internet or the like, detection of abnormalities by the surveillance camera may stop functioning.
  • In addition, even if an abnormality is detected by a remote external device, countermeasures against it may be delayed and insufficient. For this reason, there has been a demand for a monitoring device (stand-alone monitoring device) that functions independently within the space to be monitored for abnormalities, but such a demand has not been fully met.
  • The present invention has been made in view of such a situation, and an object of the present invention is to realize a monitoring device (stand-alone monitoring device) that functions independently within the space to be monitored for abnormalities and the like.
  • An information processing device according to one aspect of the present invention is an information processing device arranged in a predetermined space, and includes: an image acquisition means for acquiring data of an image obtained as a result of imaging at least a part of the predetermined space; a detection means for detecting, based on the acquired image data, the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space; and a notification control means for controlling notification of a predetermined warning when the detection result of the detection means satisfies a predetermined condition.
  • An information processing method and a program according to one aspect of the present invention are a method and a program corresponding to the above-described information processing device of one aspect of the present invention.
  • According to the present invention, it is possible to realize a monitoring device (a stand-alone monitoring device) that functions independently within the space to be monitored for abnormalities and the like.
  • FIG. 1 is a diagram explaining the principle of a security camera according to an embodiment of information processing of the present invention. FIG. 2 is a diagram showing an example of processing that can be realized by operating two or more of the security cameras of FIG. 1 in cooperation with each other. FIG. 3 is a perspective view showing a configuration example of the appearance of the security camera of FIG. 1. FIG. 4 is a block diagram showing an example of the internal hardware configuration of the security camera of FIG. 3. FIG. 5 is a functional block diagram showing an example of the functional configuration of the security camera of FIG. 4. FIG. 6 is a diagram explaining, as examples of dangerous or abnormal objects or phenomena detected by the security camera of FIG. 5, a method of detecting a flame caused by a fire and a method of detecting a dangerous act with a knife.
  • FIG. 1 is a diagram illustrating the principle of a security camera according to an embodiment of information processing of the present invention.
  • One or more security cameras (in the example of FIG. 1, two security cameras 1-1 and 1-2) are arranged in a predetermined space S, such as the inside of a store. Each is a stand-alone (independently functioning) AI-equipped security camera that constantly (24 hours a day) monitors the space in real time for dangerous or abnormal objects or phenomena.
  • Here, being AI (Artificial Intelligence)-equipped means the following: machine learning is performed in advance based on the data of one or more images determined to actually contain a dangerous or abnormal object or phenomenon, and a predetermined algorithm generated or updated by that machine learning is installed in the camera.
  • What constitutes a dangerous or abnormal object or phenomenon can be set arbitrarily by the designer, the user, or the like. In this example, the following are set.
  • Solids such as blades and weapons, liquids such as powerful drugs, and gases such as flames generated in the event of a fire are set as dangerous or abnormal objects or phenomena.
  • Dangerous or violent acts such as brandishing a knife or fighting, water-related accidents such as a person drowning in a pool, fires, and the like are also set as dangerous or abnormal objects or phenomena.
  • The predetermined space S does not have to be a closed space such as an indoor room; it may be an open space such as the outdoors. Accordingly, water-related accidents such as a person drowning at the coast, forest fires, traffic accidents, and the like are also set as dangerous or abnormal objects or phenomena.
  • The security cameras 1-1 and 1-2 have a built-in battery so that they can be installed even when the predetermined space S is outdoors or in another place where power supply is difficult.
  • Such security cameras 1-1 and 1-2 can execute the following series of processes. That is, the security cameras 1-1 and 1-2 image at least a part of the predetermined space S and apply predetermined image processing to the resulting captured-image data with a built-in GPU (Graphics Processing Unit). Here, "image" is a broad concept that includes both still images and moving images. The security cameras 1-1 and 1-2 also record the sound of the predetermined space S and apply predetermined audio processing to the resulting audio data with the built-in GPU.
  • Based on the captured-image data processed in this way (for example, its feature amounts), the security cameras 1-1 and 1-2 determine the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space S, using the predetermined algorithm (AI) described above.
  • The form of this detection result is not particularly limited; it may be, for example, a percentage indicating the possibility (a value within a certain range), or a binary value such as "present" (100%) or "absent" (0%).
  • When the detection result satisfies a predetermined condition, the security cameras 1-1 and 1-2 issue a predetermined warning.
  • The predetermined condition is not particularly limited. For example, when the detection result is expressed as a percentage indicating the possibility, a condition such as exceeding a threshold value (e.g. 80%) can be adopted; when the detection result is expressed as present/absent, a condition such as "present" being output can be adopted. Further, when the detection result is expressed as a percentage, the security cameras 1-1 and 1-2 can use a plurality of threshold values to issue warnings sequentially in multiple stages (for example, "Caution" at 60% or more and "Danger" at 80% or more).
  • For example, in FIG. 1, the security camera 1-1 images the predetermined space S, which includes a scene in which person H-1 is brandishing a knife K toward person H-2.
  • The GPU of the security camera 1-1 applies predetermined image processing to the captured-image data.
  • Based on the processed captured-image data (for example, its feature amounts), the security camera 1-1 detects the knife itself, or a dangerous act with the knife, as abnormality IR. As a notification of the warning for abnormality IR, the security camera 1-1 can then sound an alarm A within the predetermined space S or transmit an e-mail M-1 indicating that abnormality IR has occurred to an external device such as the police.
  • The notification method is not limited to the example of FIG. 1 and may be arbitrary; for example, an instruction (signal) that activates a sprinkler may be adopted as a fire notification.
  • In the example of FIG. 1, two security cameras 1-1 and 1-2 are shown, but the number of security cameras is not limited to this and may be arbitrary; a single camera is of course also possible.
  • However, by distributing two or more security cameras (for example, the security cameras 1-1 and 1-2 in the example of FIG. 1) across various locations and letting them function independently, an expensive conventional server center for managing huge amounts of data becomes unnecessary.
  • FIG. 2 is a diagram showing an example of processing that can be realized by operating two or more security cameras of FIG. 1 in cooperation with each other.
  • For example, as shown in A of FIG. 2, when the predetermined space S is wide, part of the space falls outside the imaging range (angle of view) of a single security camera 1-1, creating blind spots in which a dangerous or abnormal object or phenomenon may go unnoticed.
  • This can cause a delay in detection (the estimated possibility that such an object or phenomenon exists or occurs in the predetermined space S may be lower than it actually is); conversely, a single camera may also produce erroneous detections (an estimate higher than reality).
  • To prevent this, the security cameras 1-1, 1-2, and so on, which are arranged at different positions (and therefore have different imaging ranges and imaging directions), each detect the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space S from the captured-image data obtained from their respective viewpoints, using the predetermined algorithm (AI) described above.
  • As a result, a warning is issued based on at least one of abnormality IR-1 detected by the security camera 1-1 and abnormality IR-2 detected by the security camera 1-2.
  • In this way, the probability that the scene of person H-1 brandishing the knife K appears in the image data output by at least one of the security cameras 1-1 and 1-2 is higher than the probability that it appears in the image data of the security camera 1-1 alone, so delays in detection and erroneous detections are less likely than with a single camera.
  • For example, as shown in B of FIG. 2, person H-1 may destroy the security camera 1-2 before performing a dangerous act with the knife.
  • In that case, if only the security camera 1-2 were installed, the dangerous act with the knife by person H-1 would not be detected and no warning would be issued.
  • To prevent this, that is, to reduce the risk posed by the destruction of one camera, the two security cameras 1-1, 1-2, and so on each capture images independently, and the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space S is detected from each camera's own captured-image data using the predetermined algorithm (AI) described above. As a result, even if the security camera 1-2 is destroyed, a warning based on the abnormality detected by the security camera 1-1 is still issued.
  • Also, for example, a person H-1 who has already performed a dangerous act may realize that the act was captured by the security camera 1-2 and destroy that camera.
  • The security camera 1-2 that detected the dangerous act by person H-1 therefore controls the copying of the image data in which the dangerous act was captured to the storage unit of the security camera 1-1.
  • As a result, even if the security camera 1-2 is destroyed by person H-1, the image data capturing the dangerous act of person H-1 is retained.
  • Further, as shown in C of FIG. 2, two or more dangerous or abnormal objects or phenomena may need to be detected at the same time (for example, a dangerous act with a knife by person H-1 and a flame F caused by a fire); with a single camera, the limits of its processing capacity could make simultaneous detection difficult and delay the warnings. To avoid this, the two security cameras 1-1 and 1-2 can divide the detection and warning tasks between them.
  • Specifically, for example, the security camera 1-1 is in charge only of the abnormality IR-5 relating to the dangerous act with the knife by person H-1.
  • The security camera 1-2, meanwhile, is not in charge of that abnormality and instead is in charge only of detecting the abnormality IR-5 relating to the flame F caused by the fire.
  • As a result, the detection processing load on each of the security cameras 1-1 and 1-2 is reduced, so two or more dangerous or abnormal objects or phenomena can each be detected easily, and delays in issuing the corresponding warnings can be prevented.
  • Hereinafter, when it is not necessary to distinguish the security cameras 1-1, 1-2, and so on individually, they are collectively referred to as the "security camera 1".
  • FIG. 3 is a perspective view showing a configuration example of the appearance of the security camera of FIG. 1.
  • As shown in FIG. 3, the security camera 1 has a housing 2 and a camera 3.
  • By installing the security camera 1 with the housing 2 side attached to an upper part of the predetermined space S (such as the ceiling), the camera 3 can image the predetermined space S from a viewpoint looking down from above, which reduces blind spots and improves the accuracy of detecting dangerous or abnormal objects or phenomena.
  • FIG. 4 is a block diagram showing an example of the internal hardware configuration of the security camera of FIG. 3.
  • The security camera 1 includes, inside the housing 2 of FIG. 3, a CPU (Central Processing Unit) 11, a GPU 12, a ROM (Read Only Memory) 13, a RAM (Random Access Memory) 14, a bus 15, an input/output interface 16, an output unit 17, an input unit 18, a storage unit 19, a communication unit 20, a drive 21, and a battery 22.
  • the CPU 11 executes various processes according to the program recorded in the ROM 13 or the program loaded from the storage unit 19 into the RAM 14.
  • the GPU 12 executes various image processing according to the program recorded in the ROM 13 or the program loaded from the storage unit 19 into the RAM 14. Data and the like necessary for the CPU 11 to execute various processes and the GPU 12 to execute various image processes are also appropriately stored in the RAM 14.
  • the CPU 11, GPU 12, ROM 13 and RAM 14 are connected to each other via the bus 15.
  • An input / output interface 16 is also connected to the bus 15.
  • An output unit 17, an input unit 18, a storage unit 19, a communication unit 20, and a drive 21 are connected to the input / output interface 16.
  • the output unit 17 is composed of a speaker 17 and the like.
  • the speaker 17 outputs various sounds for warning notification.
  • the input unit 18 includes a camera 3, a microphone 41, and the like.
  • the camera 3 captures at least a part of the predetermined space S and outputs the data of the captured image obtained as a result.
  • The microphone 41 picks up sound emitted in the predetermined space S and outputs it as audio data.
  • the storage unit 19 is composed of a DRAM (Dynamic Random Access Memory) or the like, and stores various data.
  • The communication unit 20 communicates with another security camera 1 installed nearby (for example, in the case of FIG. 1, when the security camera 1 shown in FIG. 4 is the security camera 1-1, the other camera is the security camera 1-2).
  • The communication unit 20 also transmits information for issuing a warning to another device (for example, a device managed by the police, not shown) via a network N including the Internet, as necessary.
  • A removable medium 31, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 21 as appropriate.
  • A program read from the removable medium 31 by the drive 21 is installed in the storage unit 19 as needed.
  • The removable medium 31 can also store the various data stored in the storage unit 19, in the same manner as the storage unit 19.
  • The battery 22 is a power source that supplies sufficient power for the security camera 1 to function. That is, since the security camera 1 does not require a commercial power supply, it can be installed even when the predetermined space S is outdoors or in another place where power supply is difficult, as described above.
  • FIG. 5 is a functional block diagram showing an example of the functional configuration of the security camera of FIG. 4.
  • the storage unit 19 is provided with an image information DB (database) 200, an audio information DB 300, and a learning result DB 400.
  • the camera 3 of the security camera 1 images at least a part of the predetermined space S, and outputs the data of the captured image obtained as a result.
  • The captured image acquisition unit 101 acquires the captured-image data output from the camera 3, outputs it to the image processing unit 102, and, as log data for a certain period, stores it in the image information DB 200 after appropriately applying compression processing or the like.
  • The image processing unit 102 applies various image processing (for example, extraction of image feature amounts) to the captured-image data from the captured image acquisition unit 101 and outputs the result to the danger abnormality detection unit 105.
  • The microphone 41 of the security camera 1 picks up sound emitted in the predetermined space S and outputs it as audio data.
  • The voice acquisition unit 103 acquires the audio data output from the microphone 41, outputs it to the voice processing unit 104, and, as log data for a certain period, stores it in the audio information DB 300 after appropriately applying compression processing or the like.
  • The voice processing unit 104 applies various processing (for example, voice extraction processing) to the audio data from the voice acquisition unit 103 and outputs the result to the danger abnormality detection unit 105.
  • The detection by the danger abnormality detection unit 105 of the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space S is performed based on the captured-image data from the image processing unit 102. However, where necessary, the audio data from the voice processing unit 104 is additionally taken into account in detecting that possibility. The reason the audio data is taken into account is explained below.
  • That is, a conventional surveillance camera system or the like detects an abnormality (for example, the intrusion of a suspicious person) based only on image data.
  • However, for a predetermined type of abnormality (for example, a fight), the sound in the space contains information that contributes to its detection (for example, angry shouting of the kind often uttered during a fight).
  • Therefore, the security camera 1 of the present embodiment detects a predetermined type of object or phenomenon (for example, a fight) based on audio data (for example, angry shouting) in addition to image data (for example, an image of the fight).
  • For such detection, a predetermined algorithm is used that has been generated or updated by machine learning performed in advance on the correlation between the image information and the audio information, so the accuracy of detecting the predetermined type of object or phenomenon (for example, a fight) is further improved.
  • In this way, the danger abnormality detection unit 105 detects the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space S based on the captured-image data from the image processing unit 102 and, as needed, the audio data from the voice processing unit 104.
  • The learning result DB 400 stores the data of a predetermined algorithm generated or updated by machine learning performed in advance on the data of one or more images determined to actually contain a dangerous or abnormal object or phenomenon. The danger abnormality detection unit 105 detects the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space S by using this predetermined algorithm.
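  • As a rough sketch of how such a detection step might look, the following Python example applies a previously learned model (conceptually, the contents of the learning result DB 400) to image features and, where useful, to audio features. The model objects, their predict_proba() interface, and the weighted fusion are illustrative assumptions only, not details taken from the patent.

```python
# Minimal sketch only: applies a previously learned model (conceptually, the
# contents of the learning result DB 400) to image features and, where useful,
# also to audio features. The model objects, their predict_proba() interface,
# and the simple weighted fusion are illustrative assumptions.

def detect_possibility(image_features, image_model,
                       audio_features=None, audio_model=None,
                       audio_weight=0.3):
    """Return a possibility in [0.0, 1.0] that a dangerous/abnormal event is present."""
    p_image = image_model.predict_proba(image_features)   # learned from labeled images
    if audio_features is None or audio_model is None:
        return p_image
    p_audio = audio_model.predict_proba(audio_features)   # e.g. angry-shouting cues
    # The audio is considered only as supplementary evidence, as described above.
    return (1.0 - audio_weight) * p_image + audio_weight * p_audio
```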
  • FIG. 6 is a diagram explaining, as examples of dangerous or abnormal objects or phenomena detected by the security camera of FIG. 5, an example of a method of detecting a flame caused by a fire and an example of a method of detecting a dangerous act with a knife.
  • FIG. 6A is a diagram illustrating an example of a method for detecting a flame caused by a fire.
  • A flame F1 caused by an artificially created fire (hereinafter referred to as the "artificial flame F1") and a flame F2 caused by spontaneous combustion (hereinafter referred to as the "natural flame F2") are distinguished as follows.
  • Machine learning is performed in advance based on the data of one or more images determined to contain the artificial flame F1 and the data of one or more images determined to contain the natural flame F2.
  • By this machine learning, the data of a predetermined algorithm for distinguishing and recognizing the artificial flame F1 and the natural flame F2 from the (appropriately image-processed) captured-image data is generated or updated and stored in advance in the learning result DB 400 of FIG. 5.
  • For example, when captured-image data including an artificial flame F1 (different from the one used at the time of learning) is acquired by the captured image acquisition unit 101 and processed by the image processing unit 102, the danger abnormality detection unit 105 uses the above-mentioned predetermined algorithm to detect, based on that data, that a flame due to a fire is very unlikely to exist or occur in the predetermined space S. As a result, notification of the predetermined warning is suppressed (that is, no warning is issued) under the control of the warning notification control unit 106 described later.
  • In contrast, when captured-image data including a natural flame F2 (different from the one used at the time of learning) is acquired and processed, the danger abnormality detection unit 105 uses the above-mentioned predetermined algorithm to detect, based on that data, that a flame due to a fire is very likely to exist or occur in the predetermined space S. As a result, the predetermined warning is issued under the control of the warning notification control unit 106 described later.
  • The predetermined space S is not necessarily a closed indoor space as shown in FIG. 1, and may be an open space such as a forest, although this is not shown.
  • In such a case, the actual natural flame F2 (different from the one used at the time of learning) may occur far from the place where the security camera 1 is installed, making it difficult to detect the natural flame F2 itself from the captured-image data.
  • Therefore, in such a case, data of one or more images determined to contain smoke (not shown) is used in addition to the flame images, and machine learning is performed in advance.
  • By this machine learning, the data of a predetermined algorithm that distinguishes and recognizes smoke in addition to the artificial flame F1 and the natural flame F2 from the (appropriately image-processed) captured-image data is generated or updated and stored in advance in the learning result DB 400 of FIG. 5.
  • Suppose that spontaneous ignition causes a fire in the forest or other such predetermined space S. First, captured-image data including smoke (different from that used at the time of learning) is acquired by the captured image acquisition unit 101 and processed by the image processing unit 102.
  • In this case, the danger abnormality detection unit 105 uses the predetermined algorithm to detect that there is a moderate possibility (one that may increase in the future) that a flame due to a fire exists or will occur in the predetermined space S. As a result, under the control of the warning notification control unit 106 described later, a warning such as "there is a risk of fire" is issued. After that, when the fire progresses further, captured-image data including the natural flame F2 is acquired by the captured image acquisition unit 101 and processed by the image processing unit 102.
  • The danger abnormality detection unit 105 then uses the predetermined algorithm to detect, based on that data, that there is a very high possibility that a flame due to a fire exists or occurs in the predetermined space S. As a result, under the control of the warning notification control unit 106 described later, a warning one step higher, such as "fire", is issued as the predetermined warning.
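  • A minimal sketch of the staged fire detection described above might look as follows in Python; the class labels ("artificial_flame", "natural_flame", "smoke"), the thresholds, and the classifier interface are assumptions made for illustration, not details taken from the patent.

```python
# Minimal sketch only: a three-way classifier (artificial flame / natural flame
# / smoke) driving the staged fire warnings described above. The class labels,
# thresholds, and classifier interface (a dict of per-class probabilities) are
# illustrative assumptions.

def fire_warning(frame_features, classifier):
    """Return None, "there is a risk of fire", or "fire"."""
    probs = classifier.predict_proba(frame_features)  # e.g. {"smoke": 0.7, ...}
    if probs.get("natural_flame", 0.0) >= 0.8:
        return "fire"                      # very high possibility: higher-stage warning
    if probs.get("smoke", 0.0) >= 0.6:
        return "there is a risk of fire"   # moderate possibility: lower-stage warning
    # An artificial flame alone does not trigger a warning.
    return None
```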
  • FIG. 6B is a diagram explaining an example of a method of detecting a dangerous act with a knife.
  • A knife K1 that is being purchased at the store (hereinafter referred to as the "purchase knife K1") is packaged, whereas a knife K2 that is brandished for robbery (hereinafter referred to as the "robbery knife K2") is not packaged.
  • In addition, the way the person H-3 holds the purchase knife K1 when purchasing it differs from the way the person H-1 holds the robbery knife K2 during a robbery (to threaten another person).
  • Likewise, the behavior of a clerk handing over the purchase knife K1 differs from the behavior of a person being threatened with the robbery knife K2 (for example, raising their hands).
  • Accordingly, machine learning is performed in advance based on the data of one or more images determined to include a normal purchase scene and the data of one or more images determined to include a scene of someone being threatened with a knife. By this machine learning, the data of a predetermined algorithm that distinguishes and recognizes, from the (appropriately image-processed) captured-image data, the normal purchase state (a state with no dangerous act involving a knife) and the abnormal state due to robbery or the like (a state with a dangerous act involving a knife) is generated or updated and stored in advance in the learning result DB 400 of FIG. 5.
  • Suppose, for example, that a person H-3 (different from the one at the time of learning) brings a purchase knife K1 (different from the one at the time of learning) to a person H-2 serving as a clerk (different from the one at the time of learning) in order to buy it.
  • In this case, captured-image data including the person H-3 purchasing the purchase knife K1 and the person H-2 serving as a clerk is acquired by the captured image acquisition unit 101 and processed by the image processing unit 102. The danger abnormality detection unit 105 therefore uses the above-mentioned predetermined algorithm to detect, based on that data, that the possibility of a dangerous act involving a knife is very low. As a result, notification of the predetermined warning is suppressed (that is, no warning is issued) under the control of the warning notification control unit 106 described later.
  • Next, suppose that a person H-1 (different from the one at the time of learning) threatens a person H-3 serving as a clerk (different from the one at the time of learning) with a robbery knife K2 (different from the one at the time of learning).
  • In this case, captured-image data including the scene of the person H-3 serving as a clerk being threatened with the robbery knife K2 is acquired by the captured image acquisition unit 101 and processed by the image processing unit 102. The danger abnormality detection unit 105 therefore uses the above-mentioned predetermined algorithm to detect, based on that data, that the possibility of a dangerous act involving a knife is very high. As a result, the predetermined warning is issued under the control of the warning notification control unit 106 described later.
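  • The purchase-versus-robbery distinction above can be pictured as knife detection combined with context cues. The sketch below is only an illustration under assumed cue names and weights; the patent itself leaves the concrete decision to the learned algorithm.

```python
# Minimal sketch only: knife detection combined with context cues (packaging,
# how the knife is held, the other person's posture) so that a normal purchase
# does not trigger a warning while a robbery does. The cue names and weights
# are illustrative assumptions.

def knife_threat_score(cues):
    """Combine context cues (each in [0, 1]) into a dangerous-act possibility."""
    if cues.get("knife_detected", 0.0) < 0.5:
        return 0.0                               # no knife detected: nothing to score
    threat = cues.get("threatening_grip", 0.0)   # robbery-style way of holding the knife
    victim = cues.get("hands_raised", 0.0)       # behavior of the threatened person
    packaged = cues.get("packaged", 0.0)         # a purchase knife is packaged
    return max(0.0, min(1.0, 0.5 * threat + 0.4 * victim - 0.4 * packaged))
```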
  • the warning notification control unit 106 executes control to notify a predetermined warning when the detection result of the danger abnormality detection unit 105 satisfies the predetermined condition.
  • Specifically, for example, the warning notification control unit 106 can execute control to sound an alarm (for example, the alarm A in FIG. 1) in the predetermined space S as the predetermined warning, either from the speaker 17 included in the security camera 1 or, via the communication unit 20, from another speaker independent of the security camera 1.
  • Also, for example, the warning notification control unit 106 can execute control to transmit, via the communication unit 20, an e-mail (for example, the e-mail M-1 in FIG. 1) indicating that an abnormality (for example, the abnormality IR in FIG. 1) has occurred to an external device such as the police (not shown), as the predetermined warning.
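  • The two notification channels just described (an audible alarm and an e-mail to an external party) could be driven by something like the following sketch; the SMTP host, e-mail addresses, and the play_alarm callback are placeholders, not values from the patent.

```python
# Minimal sketch only: the two notification channels described above (an
# audible alarm in the space and an e-mail to an external party). The SMTP
# host, addresses, and the play_alarm callback are placeholders.
import smtplib
from email.message import EmailMessage

def notify_warning(play_alarm, abnormality_label):
    # Channel 1: sound the alarm (e.g. alarm A) from the camera's speaker.
    play_alarm(abnormality_label)

    # Channel 2: send an e-mail (e.g. e-mail M-1) to an external device/party.
    msg = EmailMessage()
    msg["Subject"] = f"Warning: abnormality detected ({abnormality_label})"
    msg["From"] = "camera@example.com"                 # placeholder sender
    msg["To"] = "security@example.com"                 # placeholder recipient
    msg.set_content("A dangerous or abnormal object or phenomenon was detected.")
    with smtplib.SMTP("smtp.example.com") as server:   # placeholder SMTP host
        server.send_message(msg)
```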
  • The other device cooperative control unit 107 executes control for cooperating with another security camera 1 (for example, at least one of the other security cameras 1, not shown, in the example of FIG. 1) in at least a part of the above-mentioned series of processes (for example, the processes of the danger abnormality detection unit 105 and the warning notification control unit 106).
  • The method of cooperation is not particularly limited; for example, the methods described in A to C of FIG. 2 above can be adopted.
  • In the above embodiment, the power source of the security camera 1 is a built-in battery, but the power source is not limited to this; for example, a solar panel or the like may be used.
  • The communication unit 20 communicates with another device (for example, a device managed by the police, not shown) via the network N including the Internet, or communicates with another security camera via a predetermined cable.
  • However, the communication method of the communication unit 20 is not particularly limited to this.
  • As the communication method of the security camera 1, a wired method, a wireless method, or both may be adopted.
  • Further, a method of communicating with a plurality of devices (other security cameras 1 or external devices) via the network N may be adopted, or a method of communicating one-to-one with another device without going through the network N may be adopted.
  • For example, as the communication method of the security camera 1, a wireless method that does not go through the network N can be adopted.
  • In that case, as the communication method of the communication unit 20, a short-range communication method using Bluetooth (registered trademark), an ad hoc communication method using Wi-Fi (registered trademark), or the like can be adopted.
  • In addition, the warning notification control unit 106 can issue the predetermined warning as follows.
  • That is, the warning notification control unit 106 can transfer the predetermined warning to surrounding cameras. For example, the warning notification control unit 106 of a security camera 1 can transfer to the other security cameras 1 around it, as the predetermined warning, the fact that an abnormality (for example, the abnormality IR in FIG. 1) has occurred. This transfer is repeated until the fact that the abnormality has occurred reaches a security camera 1 that is capable of controlling transmission to an external device such as the police. As a result, even when a given security camera 1 cannot connect to the network N including the Internet, the fact that the abnormality has occurred can still be transmitted to an external device such as the police.
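  • The hop-by-hop transfer described above can be sketched as a simple relay, assuming each camera object exposes has_uplink(), send_external(), and neighbors() methods; these names are illustrative assumptions, not the patent's interface.

```python
# Minimal sketch only: hop-by-hop transfer of a warning among nearby cameras
# until it reaches a camera that can transmit to an external device (e.g. the
# police). The camera interface (camera_id, has_uplink(), send_external(),
# neighbors()) is an illustrative assumption.

def relay_warning(camera, warning, visited=None):
    """Forward the warning until some camera with an uplink delivers it."""
    if visited is None:
        visited = set()
    if camera.camera_id in visited:
        return False                       # avoid forwarding loops
    visited.add(camera.camera_id)

    if camera.has_uplink():                # this camera can reach the network N
        camera.send_external(warning)      # e.g. notify the police
        return True

    # Otherwise keep transferring the warning to surrounding cameras.
    return any(relay_warning(peer, warning, visited) for peer in camera.neighbors())
```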
  • the appearance configuration shown in FIG. 3 and the hardware configuration shown in FIG. 4 are merely examples for achieving the object of the present invention, and are not particularly limited.
  • In other words, the information processing device to which the present invention is applied does not need to be configured as the security camera 1; as long as it can execute the above-mentioned series of processes, its appearance and internal hardware configuration are not particularly limited.
  • Similarly, the functional block diagram shown in FIG. 5 is merely an example and is not particularly limiting. That is, it suffices that an information processing system (not shown) including one or more security cameras 1 has, as a whole, functions capable of executing the above-mentioned series of processes; what kinds of functional blocks are used to realize those functions is not limited to the example of FIG. 5.
  • The locations of the functional blocks are not limited to those in FIG. 5 and may be arbitrary.
  • For example, the danger abnormality detection unit 105, the warning notification control unit 106, the other device cooperative control unit 107, and the like may be provided in a control device.
  • one functional block may be configured by a single piece of hardware, a single piece of software, or a combination thereof.
  • each of the functions shown by the functional blocks of FIG. 5 may function in either or both of the CPU 11 and the GPU 12.
  • the programs constituting the software are installed on a computer or the like from a network or a recording medium.
  • the computer may be a computer embedded in dedicated hardware. Further, the computer may be a computer capable of executing various functions by installing various programs, for example, a general-purpose smartphone or a personal computer in addition to a server.
  • The recording medium containing such a program is composed not only of removable media distributed separately from the device main body in order to provide the program to each user, but also of recording media and the like that are provided to each user in a state of being incorporated in the device main body in advance.
  • In this specification, the steps describing a program recorded on a recording medium include not only processing performed chronologically in the described order, but also processing executed in parallel or individually rather than necessarily chronologically.
  • In this specification, the term "system" means an overall apparatus composed of a plurality of devices, a plurality of means, and the like.
  • In summary, the information processing device to which the present invention is applied need only have the following configuration, and can take various embodiments.
  • That is, the information processing device to which the present invention is applied is an information processing device (for example, the security camera 1 in FIGS. 3 and 4) arranged in a predetermined space (for example, the predetermined space S in FIG. 1), and includes:
  • an image acquisition means (for example, the captured image acquisition unit 101 in FIG. 5) for acquiring data of an image obtained as a result of imaging at least a part of the predetermined space;
  • a detection means (for example, the danger abnormality detection unit 105 in FIG. 5) for detecting, based on the acquired image data, the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space; and
  • a notification control means (for example, the warning notification control unit 106 in FIG. 5) for controlling notification of a predetermined warning when the detection result of the detection means satisfies a predetermined condition.
  • Thereby, the information processing device can function as a monitoring device (stand-alone monitoring device) that operates independently within the space to be monitored for abnormalities and the like.
  • The detection means can detect the above possibility by using a predetermined algorithm generated or updated by machine learning performed based on the data of one or more images determined to actually contain a dangerous or abnormal object or phenomenon (for example, the predetermined algorithm stored as data in the learning result DB 400 of FIG. 5).
  • The information processing device can further include a voice acquisition means (for example, the voice acquisition unit 103 in FIG. 5) for acquiring audio data of sound emitted in the predetermined space, and the detection means can detect the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space based on the acquired audio data in addition to the acquired image data.
  • A cooperative control means (for example, the other device cooperative control unit 107 in FIG. 5) that executes control for cooperating with another information processing device in at least a part of the processing of the detection means and the notification control means can further be provided.

Landscapes

  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Alarm Systems (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Emergency Alarm Devices (AREA)

Abstract

The present invention addresses the problem of achieving a surveillance device (stand-alone surveillance device) that functions independently in a prescribed space in which an abnormality or the like is to be detected. A captured image acquiring unit 101 acquires data relating to an image obtained as a result of imaging of at least a portion of a prescribed space in which an information processing device is disposed. On the basis of the acquired data relating to the image, a hazard/abnormality detecting unit 105 detects the possibility of a hazardous or abnormal object or phenomenon being present or occurring in the prescribed space. If the detection result obtained by the hazard/abnormality detecting unit 105 satisfies a predetermined condition, a warning notification control unit 106 controls notification of a prescribed warning. The problem described hereinabove is thus resolved.

Description

Information processing device, information processing method, and program
The present invention relates to an information processing device, an information processing method, and a program.
Conventionally, there are surveillance camera systems in which a surveillance camera is installed inside or around a house or the like, and the data of images captured by the surveillance camera is transmitted to an observer or the like and used for crime prevention (see, for example, Patent Document 1).
Patent Document 1: JP-A-2019-068228
However, with the prior art alone, including the technique described in Patent Document 1, the surveillance camera merely captures images; the image data is transmitted over the Internet or the like to an external device such as an external observer, and abnormalities are detected by that external device.
As a result, for example, when a failure occurs on the Internet or the like, detection of abnormalities by the surveillance camera may stop functioning. In addition, even if an abnormality is detected by a remote external device, countermeasures against it may be delayed and insufficient.
For this reason, there has been a demand for a monitoring device (stand-alone monitoring device) that functions independently within the space to be monitored for abnormalities, but such a demand has not been fully met.
The present invention has been made in view of such a situation, and an object of the present invention is to realize a monitoring device (stand-alone monitoring device) that functions independently within the space to be monitored for abnormalities and the like.
An information processing device according to one aspect of the present invention is
an information processing device arranged in a predetermined space, including:
an image acquisition means for acquiring data of an image obtained as a result of imaging at least a part of the predetermined space;
a detection means for detecting, based on the acquired image data, the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space; and
a notification control means for controlling notification of a predetermined warning when the detection result of the detection means satisfies a predetermined condition.
An information processing method and a program according to one aspect of the present invention are a method and a program corresponding to the above-described information processing device of one aspect of the present invention.
According to the present invention, it is possible to realize a monitoring device (stand-alone monitoring device) that functions independently within the space to be monitored for abnormalities and the like.
FIG. 1 is a diagram explaining the principle of a security camera according to an embodiment of information processing of the present invention. FIG. 2 is a diagram showing an example of processing that can be realized by operating two or more of the security cameras of FIG. 1 in cooperation with each other. FIG. 3 is a perspective view showing a configuration example of the appearance of the security camera of FIG. 1. FIG. 4 is a block diagram showing an example of the internal hardware configuration of the security camera of FIG. 3. FIG. 5 is a functional block diagram showing an example of the functional configuration of the security camera of FIG. 4. FIG. 6 is a diagram explaining, as examples of dangerous or abnormal objects or phenomena detected by the security camera of FIG. 5, a method of detecting a flame caused by a fire and a method of detecting a dangerous act with a knife.
Hereinafter, an embodiment of the present invention will be described with reference to the drawings.
FIG. 1 is a diagram explaining the principle of a security camera according to an embodiment of information processing of the present invention.
One or more security cameras (in the example of FIG. 1, two security cameras 1-1 and 1-2) are arranged in a predetermined space S, such as the inside of a store. Each is a stand-alone (independently functioning) AI-equipped security camera that constantly (24 hours a day) monitors the space in real time for dangerous or abnormal objects or phenomena.
Here, being AI (Artificial Intelligence)-equipped means the following. That is, machine learning is performed in advance based on the data of one or more images determined to actually contain a dangerous or abnormal object or phenomenon, and a predetermined algorithm generated or updated by that machine learning is installed in the camera.
Here, what constitutes a dangerous or abnormal object or phenomenon can be set arbitrarily by the designer, the user, or the like; in this example, the following are set. That is, solids such as blades and weapons, liquids such as powerful drugs, and gases such as flames generated in the event of a fire are set as dangerous or abnormal objects or phenomena. Dangerous or violent acts such as brandishing a knife or fighting, water-related accidents such as a person drowning in a pool, fires, and the like are also set as dangerous or abnormal objects or phenomena. The predetermined space S does not have to be a closed space such as an indoor room and may be an open space such as the outdoors; accordingly, water-related accidents such as a person drowning at the coast, forest fires, traffic accidents, and the like are also set as dangerous or abnormal objects or phenomena.
The security cameras 1-1 and 1-2 have a built-in battery so that they can be installed even when the predetermined space S is outdoors or in another place where power supply is difficult.
Such security cameras 1-1 and 1-2 can execute the following series of processes.
That is, the security cameras 1-1 and 1-2 image at least a part of the predetermined space S and apply predetermined image processing to the resulting captured-image data with a built-in GPU (Graphics Processing Unit). Here, "image" is a broad concept that includes both still images and moving images. The security cameras 1-1 and 1-2 also record the sound of the predetermined space S and apply predetermined audio processing to the resulting audio data with the built-in GPU.
Based on the captured-image data processed in this way (for example, its feature amounts), the security cameras 1-1 and 1-2 determine the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space S, using the predetermined algorithm (AI) described above. The form of this detection result is not particularly limited; it may be, for example, a percentage indicating the possibility (a value within a certain range), or a binary value such as "present" (100%) or "absent" (0%).
When the detection result satisfies a predetermined condition, the security cameras 1-1 and 1-2 issue a predetermined warning. The predetermined condition is not particularly limited; for example, when the detection result is expressed as a percentage indicating the possibility, a condition such as exceeding a threshold value (e.g. 80%) can be adopted, and when the detection result is expressed as present/absent, a condition such as "present" being output can be adopted.
Further, when the detection result is expressed as a percentage indicating the possibility, the security cameras 1-1 and 1-2 can use a plurality of threshold values to issue warnings sequentially in multiple stages (for example, "Caution" at 60% or more and "Danger" at 80% or more).
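As a rough illustration of the threshold-based condition and the multi-stage warnings described above, the following Python sketch maps a detection score to "Caution" or "Danger"; the function names, the 60%/80% thresholds, and the callbacks are assumptions made for illustration only.

```python
# Minimal sketch only: maps a detection score in [0.0, 1.0] from the AI
# algorithm to the staged warnings described above ("Caution" at 60% or more,
# "Danger" at 80% or more). Thresholds and names are illustrative assumptions.
from typing import Optional

WARNING_STAGES = [
    (0.80, "Danger"),   # check the highest stage first
    (0.60, "Caution"),
]

def select_warning(score: float) -> Optional[str]:
    """Return the warning label for a detection score, or None if no warning."""
    for threshold, label in WARNING_STAGES:
        if score >= threshold:
            return label
    return None

def notify(score: float, send_alarm, send_mail) -> None:
    """Issue the predetermined warning when the predetermined condition is met."""
    label = select_warning(score)
    if label is None:
        return                  # condition not satisfied: no notification
    send_alarm(label)           # e.g. sound alarm A in the space S
    if label == "Danger":
        send_mail(f"Abnormality detected (possibility {score:.0%})")  # e.g. e-mail M-1
```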
For example, in the example of FIG. 1, the security camera 1-1 images the predetermined space S, which includes a scene in which person H-1 is brandishing a knife K toward person H-2. The GPU of the security camera 1-1 applies predetermined image processing to the captured-image data.
Based on the processed captured-image data (for example, its feature amounts), the security camera 1-1 detects the knife itself, or a dangerous act with the knife, as abnormality IR.
As a notification of the warning for abnormality IR, the security camera 1-1 can then sound an alarm A within the predetermined space S or transmit an e-mail M-1 indicating that abnormality IR has occurred to an external device such as the police.
The notification method is not limited to the example of FIG. 1 and may be arbitrary; for example, an instruction (signal) that activates a sprinkler may be adopted as a fire notification.
Here, in the example of FIG. 1, two security cameras 1-1 and 1-2 are shown, but the number of security cameras is not limited to that of FIG. 1 and may be arbitrary; a single camera is of course also possible. However, by distributing two or more security cameras (for example, the security cameras 1-1 and 1-2 in the example of FIG. 1) across various locations and letting them function independently, an expensive conventional server center for managing huge amounts of data becomes unnecessary.
Further, by operating two or more security cameras in cooperation with each other, various kinds of processing can be executed as shown in FIG. 2. For example, in the description of FIGS. 1 and 2, the security camera 1-1 is connected to the security camera 1-2 by a predetermined cable (the cable connecting the security cameras 1-1 and 1-2 shown in FIGS. 1 and 2), and the security cameras 1-1 and 1-2 can exchange image data, information related to the detection of danger or abnormality, and the like with each other.
FIG. 2 is a diagram showing an example of processing that can be realized by operating two or more of the security cameras of FIG. 1 in cooperation with each other.
For example, as shown in A of FIG. 2, when the predetermined space S is wide, part of the space falls outside the imaging range (angle of view) of a single security camera 1-1, creating blind spots, and a delay in detecting a dangerous or abnormal object or phenomenon (an estimated possibility of its existence or occurrence in the predetermined space S that is lower than reality) may occur. Conversely, with only one security camera 1-1, erroneous detection (an estimated possibility higher than reality) may also occur.
To prevent this, the security cameras 1-1, 1-2, and so on, which are arranged at different positions (and therefore have different imaging ranges and imaging directions), each detect the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space S, using the predetermined algorithm (AI) described above, based on the captured-image data obtained from their respective viewpoints.
As a result, a warning is issued based on at least one of abnormality IR-1 detected by the security camera 1-1 and abnormality IR-2 detected by the security camera 1-2.
In this way, the probability that the scene of person H-1 brandishing the knife K appears in the image data output by at least one of the security cameras 1-1 and 1-2 is higher than the probability that it appears in the image data of the security camera 1-1 alone. Consequently, when the security cameras 1-1 and 1-2, which image the space from different viewpoints, are both arranged, delays in detection and erroneous detections of dangerous or abnormal objects or phenomena can be prevented better than when only the security camera 1-1 is arranged.
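A minimal sketch of this kind of multi-camera detection, in which a warning is issued if at least one camera's result satisfies the condition, might look as follows; the camera objects and their capture()/detect() methods are illustrative assumptions.

```python
# Minimal sketch only: each camera runs its own detector and a warning is
# issued if at least one camera's result satisfies the condition. The camera
# objects and their capture()/detect() methods are illustrative assumptions.

def fused_detection(cameras, threshold=0.8):
    """Return (warn, per_camera_scores) for one monitoring cycle."""
    scores = {}
    for cam in cameras:
        frame = cam.capture()                 # image of part of space S from this viewpoint
        scores[cam.name] = cam.detect(frame)  # possibility estimated by this camera's AI
    # A warning is issued when any single camera's detection meets the condition.
    warn = any(score >= threshold for score in scores.values())
    return warn, scores
```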
For example, as shown in B of FIG. 2, person H-1 may destroy the security camera 1-2 before performing a dangerous act with the knife. In this case, if only the security camera 1-2 were arranged, the dangerous act with the knife by person H-1 would not be detected and no warning would be issued.
To prevent this, that is, to reduce the risk posed by the destruction of the security camera 1-2, the two security cameras 1-1, 1-2, and so on each capture images independently, and the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space S is detected from each camera's own captured-image data using the predetermined algorithm (AI) described above.
As a result, even if the security camera 1-2 is destroyed, a warning based on the abnormality IR-2 detected by the security camera 1-1 is issued.
Also, for example, a person H-1 who has already performed a dangerous act may realize that the act was captured by the security camera 1-2 and destroy that camera. The security camera 1-2 that detected the dangerous act by person H-1 controls the copying of the image data in which the dangerous act was captured to the storage unit of the security camera 1-1. As a result, even if the security camera 1-2 is destroyed by person H-1, the image data capturing the dangerous act of person H-1 is retained.
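The copying of evidence data to the other camera's storage could be sketched as follows, assuming a simple inter-camera link object with send() and receive() methods; this interface is an assumption for illustration, not the mechanism defined in the patent.

```python
# Minimal sketch only: when a dangerous act is detected, the detecting camera
# copies the relevant image data to a peer camera's storage over the
# inter-camera link, so the evidence survives even if this camera is destroyed.
# The peer_link object and its send()/receive() methods are assumptions.
import hashlib

def replicate_evidence(image_bytes: bytes, peer_link) -> bool:
    """Copy captured evidence to the peer camera; return True if acknowledged."""
    digest = hashlib.sha256(image_bytes).hexdigest()   # lets the peer verify integrity
    peer_link.send({"type": "evidence_copy", "sha256": digest, "data": image_bytes})
    ack = peer_link.receive(timeout=2.0)               # wait briefly for confirmation
    return bool(ack) and ack.get("sha256") == digest
```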
For example, as shown in C of FIG. 2, it may become necessary to detect two or more dangerous or abnormal objects or phenomena. Specifically, for example, in the predetermined space S, it may become necessary to detect a flame F caused by a fire in addition to detecting the dangerous act with a knife by the person H-1.
In such a case, if only the single security camera 1-2 had been arranged, the limits of its processing capacity would make it difficult to detect two or more dangerous or abnormal objects or phenomena at the same time, and the notification of at least one of the respective warnings (in the example of C of FIG. 2, the warnings based on the two abnormalities IR-4 and IR-5) might be delayed.
To prevent such a situation, the two security cameras 1-1 and 1-2 can divide between themselves the detection and warning tasks each is responsible for.
Specifically, for example, the security camera 1-1 takes charge only of the abnormality IR-5 concerning the dangerous act with the knife by the person H-1. Meanwhile, the security camera 1-2 can take charge only of detecting the abnormality IR-5 concerning the flame F caused by the fire, without taking charge of the abnormality IR-5 concerning the dangerous act with the knife by the person H-1.
As a result, the processing load for detection on each of the security cameras 1-1 and 1-2 is reduced, so that each of the two or more dangerous or abnormal objects or phenomena can easily be detected, and consequently delays in the notification of the respective warnings can be prevented.
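As a purely illustrative sketch of the division of duties just described (the publication does not disclose how the assignment is configured), the fragment below shows a static role table in which each camera runs only its own detectors; the detector names and camera identifiers are assumptions.

# Minimal sketch of a static division of detection duties between two
# cameras, assuming each camera can enable or disable individual detectors.

ROLE_TABLE = {
    "camera_1_1": ["knife_danger"],  # in charge of the knife-related abnormality only
    "camera_1_2": ["fire_flame"],    # in charge of the fire-related abnormality only
}

def detectors_for(camera_id):
    """Return the detectors this camera should run, so that neither camera
    has to run every detector on every frame."""
    return ROLE_TABLE.get(camera_id, [])

if __name__ == "__main__":
    print(detectors_for("camera_1_1"))  # ['knife_danger']
    print(detectors_for("camera_1_2"))  # ['fire_flame']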
In the following, when it is not necessary to distinguish the security cameras 1-1, 1-2, and so on individually, they are collectively referred to as the "security camera 1".
FIG. 3 is a perspective view showing an example of the external configuration of the security camera of FIG. 1.
As shown in FIG. 3, the security camera 1 has a housing 2 and a camera 3.
By installing the security camera 1 with its housing 2 side against the upper part (ceiling or the like) of the predetermined space S, the camera 3 can image the predetermined space S from a viewpoint looking downward from above; blind spots are therefore reduced and the accuracy of detecting dangerous or abnormal objects or phenomena improves.
FIG. 4 is a block diagram showing an example of the internal hardware configuration of the security camera of FIG. 3.
The security camera 1 includes, inside the housing 2 of FIG. 3, a CPU (Central Processing Unit) 11, a GPU 12, a ROM (Read Only Memory) 13, a RAM (Random Access Memory) 14, a bus 15, an input/output interface 16, an output unit 17, an input unit 18, a storage unit 19, a communication unit 20, a drive 21, and a battery 22.
The CPU 11 executes various kinds of processing in accordance with a program recorded in the ROM 13 or a program loaded from the storage unit 19 into the RAM 14.
The GPU 12 executes various kinds of image processing in accordance with a program recorded in the ROM 13 or a program loaded from the storage unit 19 into the RAM 14.
The RAM 14 also stores, as appropriate, data and the like necessary for the CPU 11 to execute the various kinds of processing and for the GPU 12 to execute the various kinds of image processing.
The CPU 11, the GPU 12, the ROM 13, and the RAM 14 are connected to one another via the bus 15. The input/output interface 16 is also connected to the bus 15. The output unit 17, the input unit 18, the storage unit 19, the communication unit 20, and the drive 21 are connected to the input/output interface 16.
The output unit 17 is composed of a speaker 17 and the like. The speaker 17 outputs various sounds for notifying warnings.
The input unit 18 is composed of the camera 3, a microphone 41, and the like. The camera 3 images at least a part of the predetermined space S and outputs the data of the captured image obtained as a result. The microphone 41 picks up sound emitted within the predetermined space S and outputs it as audio data.
The storage unit 19 is composed of a DRAM (Dynamic Random Access Memory) or the like and stores various kinds of data.
The communication unit 20 communicates with another security camera 1 installed nearby (for example, in the example of FIG. 1, when the security camera 1 shown in FIG. 4 is the security camera 1-1, the other security camera 1-2). The communication unit 20 also transmits, as necessary, information for notifying a warning to another device (for example, a device managed by the police, not shown) via a network N including the Internet.
A removable medium 31 made of a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 21 as appropriate. A program read from the removable medium 31 by the drive 21 is installed in the storage unit 19 as necessary.
The removable medium 31 can also store the various kinds of data stored in the storage unit 19, in the same manner as the storage unit 19.
The battery 22 is a power source that supplies sufficient power for the security camera 1 to function. That is, since the security camera 1 does not require a commercial power supply, it can be installed, as described above, even when the predetermined space S is outdoors or in another place where power supply is difficult.
Through the cooperation of the various kinds of hardware and software of the security camera 1 of FIG. 4 described above, it becomes possible to execute the various kinds of processing for detecting dangerous or abnormal objects or phenomena and for notifying warnings based on those detections.
FIG. 5 is a functional block diagram showing an example of the functional configuration of the security camera of FIG. 4.
As shown in FIG. 5, in the CPU 11 or the GPU 12 of the security camera 1, a captured image acquisition unit 101, an image processing unit 102, an audio acquisition unit 103, an audio processing unit 104, a danger/abnormality detection unit 105, a warning notification control unit 106, and an other-device cooperation control unit 107 function.
The storage unit 19 is provided with an image information DB (Database) 200, an audio information DB 300, and a learning result DB 400.
As described above, the camera 3 of the security camera 1 images at least a part of the predetermined space S and outputs the data of the captured image obtained as a result.
The captured image acquisition unit 101 acquires the captured image data output from the camera 3 and outputs it to the image processing unit 102, and also applies compression processing and the like as appropriate and stores the data in the image information DB 200 as log data for a fixed period of time.
The image processing unit 102 applies various kinds of image processing (for example, processing for extracting image feature quantities) to the captured image data from the captured image acquisition unit 101 and outputs the result to the danger/abnormality detection unit 105.
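As a hedged sketch of this acquisition-side flow only (camera 3, captured image acquisition unit 101, image processing unit 102, image information DB 200), the fragment below assumes a frame is a simple two-dimensional list of grey levels and that the log compression is plain zlib; the real device, formats, and feature quantities are not specified in the publication.

# Minimal sketch of the acquisition pipeline; all names are illustrative.

import zlib, json, time

def acquire_frame():
    # Stand-in for camera 3; a real implementation would read the sensor.
    return [[0] * 8 for _ in range(8)]

def extract_features(frame):
    # Stand-in for the image processing unit (e.g. feature-quantity extraction).
    flat = [p for row in frame for p in row]
    return {"mean": sum(flat) / len(flat), "max": max(flat)}

def log_frame(frame, log):
    # Stand-in for storing compressed log data in the image information DB.
    payload = zlib.compress(json.dumps(frame).encode())
    log.append({"t": time.time(), "blob": payload})

if __name__ == "__main__":
    image_db = []
    frame = acquire_frame()
    features = extract_features(frame)
    log_frame(frame, image_db)
    print(features, len(image_db))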
As described above, the microphone 41 of the security camera 1 picks up sound emitted within the predetermined space S and outputs it as audio data.
The audio acquisition unit 103 acquires the audio data output from the microphone 41 and outputs it to the audio processing unit 104, and also applies compression processing and the like as appropriate and stores the data in the audio information DB 300 as log data for a fixed period of time.
The audio processing unit 104 applies various kinds of processing (for example, sound extraction processing) to the audio data from the audio acquisition unit 103 and outputs the result to the danger/abnormality detection unit 105.
Here, as will be described later, the detection by the danger/abnormality detection unit 105 of the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space S is performed primarily on the basis of the captured image data from the image processing unit 102. However, as necessary, the audio data from the audio processing unit 104 is additionally taken into consideration when detecting that possibility.
The reason why audio data is taken into consideration in detecting the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space S is explained below.
That is, conventional surveillance camera systems and the like detected abnormalities and the like (for example, the intrusion of a suspicious person) on the basis of image data alone. However, for certain kinds of abnormality (for example, a fight), not only the images but also the sound may contain information that contributes to detecting that kind of abnormality (for example, the angry shouts that are often uttered during a fight).
Therefore, the surveillance camera 1 of the present embodiment performs detection concerning a predetermined kind of object or phenomenon (for example, a fight) on the basis of audio data (for example, angry shouts) in addition to image data (for example, images of a fight). That is, by taking the sound into consideration as well as the images, more accurate detection becomes possible.
Furthermore, for a predetermined kind of abnormality (for example, a fight), a correlation peculiar to that kind of abnormality can be expected between the image information (for example, images of a fight) and the audio information (for example, angry shouts). Conventionally, however, abnormalities and the like were not detected on the basis of such a correlation.
In contrast, in the present embodiment, machine learning is also performed in advance on the correlation between the image information and the audio information, and the predetermined algorithm generated or updated as a result is used. For this reason, the accuracy of detection concerning a predetermined kind of object or phenomenon (for example, a fight) is improved still further.
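The publication does not disclose the form of the learned model. Purely as a hedged sketch of the general idea of combining image and audio information for one abnormality class, the fragment below represents the "predetermined algorithm" by a tiny logistic model over a few assumed numeric features; the feature names, weights, and bias are made up and are not the learned algorithm stored in the learning result DB 400.

# Minimal sketch of image-audio fusion for a single class ("fight").

import math

WEIGHTS = {"motion_energy": 2.0, "person_count": 0.8,
           "loudness": 1.5, "shout_likeness": 2.5}
BIAS = -6.0

def fight_possibility(image_features, audio_features):
    """Combine image and audio features into one possibility in 0.0-1.0."""
    features = {**image_features, **audio_features}
    z = BIAS + sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing

if __name__ == "__main__":
    img = {"motion_energy": 1.8, "person_count": 2}
    aud = {"loudness": 1.2, "shout_likeness": 1.4}
    print(round(fight_possibility(img, aud), 3))  # roughly 0.99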
The danger/abnormality detection unit 105 detects the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space S on the basis of the captured image data from the image processing unit 102 and, as necessary, the audio data from the audio processing unit 104.
Here, the learning result DB 400 stores the data of a predetermined algorithm generated or updated by machine learning performed in advance on the basis of the data of one or more images determined to actually contain a dangerous or abnormal object or phenomenon.
The danger/abnormality detection unit 105 therefore detects, using that predetermined algorithm, the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space S.
Here, an example of the specific detection of dangerous or abnormal objects or phenomena will be described with reference to FIG. 6.
FIG. 6 is a diagram explaining, as examples of dangerous or abnormal objects or phenomena detected by the security camera of FIG. 5, an example of a technique for detecting a flame caused by a fire and an example of a technique for detecting a dangerous act with a knife.
A of FIG. 6 is a diagram explaining an example of a technique for detecting a flame caused by a fire.
As shown in A of FIG. 6, a flame F1 produced by an artificially started fire (hereinafter referred to as the "artificial flame F1") and a flame F2 produced by spontaneous ignition (hereinafter referred to as the "natural flame F2") differ from each other.
Therefore, first, machine learning is performed in advance on the basis of the data of one or more images determined to contain the artificial flame F1 and the data of one or more images determined to contain the natural flame F2. Through that machine learning, the data of a predetermined algorithm that distinguishes and recognizes the artificial flame F1 and the natural flame F2 from the (appropriately image-processed) captured image data is generated or updated and stored in advance in the learning result DB 400 of FIG. 5.
In this case, when a fire is started with a match, a lighter, or the like in the predetermined space S, captured image data containing an artificial flame F1 (different from the one used at the time of learning) is acquired by the captured image acquisition unit 101 and subjected to image processing by the image processing unit 102.
The danger/abnormality detection unit 105 then detects, using the above-described predetermined algorithm, that the possibility that a flame caused by a fire exists or occurs in the predetermined space S is very low on the basis of that captured image data. As a result, under the control of the warning notification control unit 106 described later, the notification of the predetermined warning is suppressed (that is, no warning is notified).
In contrast, when spontaneous ignition due to a fire occurs in the predetermined space S, captured image data containing a natural flame F2 (different from the one used at the time of learning) is acquired by the captured image acquisition unit 101 and subjected to image processing by the image processing unit 102.
The danger/abnormality detection unit 105 then detects, using the above-described predetermined algorithm, that the possibility that a flame caused by a fire exists or occurs in the predetermined space S is very high on the basis of that captured image data. As a result, a predetermined warning is notified under the control of the warning notification control unit 106 described later.
Here, even when a fire breaks out because flames spread from a match, a lighter, or the like, a natural flame F2 (different from the one used at the time of learning) is produced. Accordingly, in this case as well, the danger/abnormality detection unit 105 detects, using the above-described predetermined algorithm, that the possibility that a flame caused by a fire exists or occurs in the predetermined space S is very high. As a result, a predetermined warning is notified under the control of the warning notification control unit 106 described later.
Note that, as described above, the predetermined space S is not limited to an indoor closed space as in FIG. 1 and may be an open space such as a forest (not shown). In that case, even if a fire breaks out in the forest or the like, it is in many cases difficult to visually recognize the actual natural flame F2 (different from the one used at the time of learning) from a distance. In other words, when the security camera 1 is installed at a location that would be judged to be distant, it is difficult to detect the natural flame F2 (different from the one used at the time of learning) on the basis of the captured image data.
Therefore, in such a case, in addition to the images of the natural flame F2, the data of one or more images determined to contain smoke (not shown) is further used, and machine learning is performed in advance. Through that machine learning, the data of a predetermined algorithm that distinguishes and recognizes smoke in addition to the artificial flame F1 and the natural flame F2 from the (appropriately image-processed) captured image data is generated or updated and stored in advance in the learning result DB 400 of FIG. 5.
In this case, when spontaneous ignition due to a fire occurs in the forest or the like (the predetermined space S), in the initial stage captured image data containing smoke (different from the smoke used at the time of learning, and containing no natural flame F2, or at most an unrecognizable one) is acquired by the captured image acquisition unit 101 and subjected to image processing by the image processing unit 102.
The danger/abnormality detection unit 105 then detects, using the above-described predetermined algorithm, that the possibility that a flame caused by a fire exists or occurs in the predetermined space S is moderate (and will rise from now on) on the basis of that captured image data. As a result, under the control of the warning notification control unit 106 described later, a notification such as "there is a risk of fire" is issued as the predetermined warning.
Thereafter, when the fire progresses further, captured image data containing the natural flame F2 is acquired by the captured image acquisition unit 101 and subjected to image processing by the image processing unit 102.
The danger/abnormality detection unit 105 then detects, using the above-described predetermined algorithm, that the possibility that a flame caused by a fire exists or occurs in the predetermined space S is very high on the basis of that captured image data. As a result, under the control of the warning notification control unit 106 described later, a notification such as "fire" (a warning one level above the above-mentioned risk warning) is issued as the predetermined warning.
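As a hedged sketch of the two-stage fire warning just described, the fragment below assumes the learned algorithm returns per-class possibilities for "artificial_flame", "natural_flame", and "smoke" between 0.0 and 1.0; the thresholds and the message texts are illustrative, not taken from the publication.

# Minimal sketch of staged fire warnings from assumed per-class possibilities.

def fire_warning(possibilities, smoke_thr=0.5, flame_thr=0.8):
    """Return the warning to notify, or None when only an artificial flame
    (match, lighter, etc.) is likely and notification should be suppressed."""
    if possibilities.get("natural_flame", 0.0) >= flame_thr:
        return "fire"                     # one level above the risk warning
    if possibilities.get("smoke", 0.0) >= smoke_thr:
        return "there is a risk of fire"
    return None                           # e.g. only an artificial flame seen

if __name__ == "__main__":
    print(fire_warning({"artificial_flame": 0.9}))              # None
    print(fire_warning({"smoke": 0.7}))                         # risk warning
    print(fire_warning({"smoke": 0.9, "natural_flame": 0.85}))  # "fire"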
In contrast to A of FIG. 6, which explains an example of a technique for detecting a flame caused by a fire in this way, B of FIG. 6 is a diagram explaining an example of a technique for detecting a dangerous act with a knife.
As shown in B of FIG. 6, a knife K1 that can be purchased at a store (hereinafter referred to as the "purchase knife K1") is packaged, whereas a knife K2 brought out for a robbery (hereinafter referred to as the "robbery knife K2") is not packaged.
Furthermore, the way the person H-3 holds the purchase knife K1 when purchasing it differs from the way the person H-1 holds the robbery knife K2 for a robbery (in order to threaten someone).
Moreover, for the person H-2 as a store clerk as well, the actions taken to sell the purchase knife K1 differ from the actions taken when the robbery knife K2 is thrust at them (for example, raising their hands).
Therefore, first, machine learning is performed in advance on the basis of the data of one or more images determined to include the person H-3 who brings the purchase knife K1 to buy it and the person H-2 as a store clerk, and the data of one or more images determined to include a scene in which the person H-2 as a store clerk is being threatened with the robbery knife K2. Through that machine learning, the data of a predetermined algorithm that distinguishes and recognizes, from the (appropriately image-processed) captured image data, a normal state for purchase (a state with no dangerous act with a knife) and an abnormal state due to a robbery or the like (a state of a dangerous act with a knife) is generated or updated and stored in advance in the learning result DB 400 of FIG. 5.
Here, suppose that in the predetermined space S the person H-3 (different from the person at the time of learning) brings the purchase knife K1 (different from the one at the time of learning) and hands it to the person H-2 as a store clerk (different from the person at the time of learning).
In this case, for example when the knife is handed over, captured image data including the person H-3 who brings the purchase knife K1 to buy it and the person H-2 as a store clerk is acquired by the captured image acquisition unit 101 and subjected to image processing by the image processing unit 102.
The danger/abnormality detection unit 105 then detects, using the above-described predetermined algorithm, that the possibility of a dangerous act with a knife is very low on the basis of that captured image data. As a result, under the control of the warning notification control unit 106 described later, the notification of the predetermined warning is suppressed (that is, no warning is notified).
In contrast, suppose that in the predetermined space S the person H-1 (different from the person at the time of learning) is threatening the person H-2 as a store clerk (different from the person at the time of learning) with the robbery knife K2 (different from the one at the time of learning).
In this case, captured image data including the scene in which the person H-2 as a store clerk is being threatened with the robbery knife K2 is acquired by the captured image acquisition unit 101 and subjected to image processing by the image processing unit 102.
The danger/abnormality detection unit 105 then detects, using the above-described predetermined algorithm, that the possibility of a dangerous act with a knife is very high on the basis of that captured image data. As a result, a predetermined warning is notified under the control of the warning notification control unit 106 described later.
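In the publication this purchase-versus-robbery distinction is made by the learned predetermined algorithm. Purely as an illustrative sketch of the kind of cues described above (packaging, grip, the clerk's reaction), the fragment below scores a scene with hand-written rules over assumed scene attributes; the attribute names and weights are inventions for illustration and are not the learned model.

# Minimal sketch of scoring a knife scene as "purchase" vs "robbery".

def knife_danger_possibility(scene):
    """scene: dict with assumed keys such as 'knife_packaged',
    'blade_pointed_at_person', 'hands_raised'. Returns 0.0-1.0."""
    score = 0.0
    if not scene.get("knife_packaged", True):
        score += 0.4   # the robbery knife K2 is typically unpackaged
    if scene.get("blade_pointed_at_person", False):
        score += 0.4   # grip and posture used to threaten someone
    if scene.get("hands_raised", False):
        score += 0.2   # the clerk's reaction when threatened
    return min(score, 1.0)

if __name__ == "__main__":
    purchase = {"knife_packaged": True}
    robbery = {"knife_packaged": False, "blade_pointed_at_person": True,
               "hands_raised": True}
    print(knife_danger_possibility(purchase))  # 0.0
    print(knife_danger_possibility(robbery))   # 1.0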
Returning to FIG. 5, the warning notification control unit 106 executes control for notifying a predetermined warning when the detection result of the danger/abnormality detection unit 105 satisfies a predetermined condition.
For example, the warning notification control unit 106 can execute control for sounding an alarm (for example, the alarm A of FIG. 1) within the predetermined space S as the predetermined warning, from the speaker 17 provided in the security camera 1 or from another speaker independent of the security camera 1 via the communication unit 20. As another example, the warning notification control unit 106 can execute control for transmitting, as the predetermined warning, an e-mail stating that an abnormality (for example, the abnormality IR of FIG. 1) has occurred (for example, the e-mail M-1 of FIG. 1) to an external device such as one belonging to the police (not shown) via the communication unit 20.
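As a hedged sketch of this notification step, the fragment below assumes the detection result is a single possibility value and that the local alarm and the outgoing message are abstracted behind two callables; the threshold, the message text, and the channel details are assumptions, not part of the disclosure.

# Minimal sketch of the warning notification control step.

def notify_warning(possibility, sound_alarm, send_mail, threshold=0.8):
    """If the detection result satisfies the predetermined condition, sound an
    alarm in the predetermined space and send a notice to an external device;
    otherwise do nothing."""
    if possibility < threshold:
        return False
    sound_alarm("warning: danger or abnormality detected")
    send_mail(to="external-device", body="An abnormality has occurred in space S.")
    return True

if __name__ == "__main__":
    notified = notify_warning(
        0.93,
        sound_alarm=lambda msg: print("[speaker]", msg),
        send_mail=lambda to, body: print(f"[mail to {to}]", body),
    )
    print(notified)  # True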
The other-device cooperation control unit 107 executes control for performing at least a part of the above-described series of processes (for example, the processes of the danger/abnormality detection unit 105 and the warning notification control unit 106) in cooperation with another security camera 1 (for example, at least some of the other security cameras, not shown, that cooperate with the security camera 1 of the example of FIG. 5).
The method of cooperation is not particularly limited; for example, the cooperation methods described above for each of A to C of FIG. 2 can be adopted.
Note that the present invention is not limited to the above-described embodiment, and modifications, improvements, and the like within a range in which the object of the present invention can be achieved are included in the present invention.
For example, in the above-described embodiment, the power source of the security camera 1 is a built-in battery, but the power source is not particularly limited to this and may be, for example, a solar panel power source or the like.
As another example, in the above description, the communication unit 20 communicates with another device (for example, a device managed by the police, not shown) via the network N including the Internet, or is connected to and communicates with another security camera via a predetermined cable. However, the communication scheme of the communication unit 20 is not particularly limited to these. For example, the communication scheme of the security camera 1 may be wired, wireless, or both. Also, for example, the communication scheme of the security camera 1 may be one capable of communicating with a plurality of devices (other security cameras 1 or external devices) via the network N, or may be one that communicates one-to-one with another device without going through the network N.
For example, as the communication scheme of the security camera 1, a wireless scheme that does not go through the network N can be adopted. Specifically, for example, short-range communication using Bluetooth (registered trademark) or ad hoc communication using Wi-Fi (registered trademark) can be adopted as the communication scheme of the communication unit 20. This enables the warning notification control unit 106 to notify a predetermined warning as follows.
For example, the warning notification control unit 106 can transfer a predetermined warning to nearby cameras. That is, for example, the warning notification control unit 106 of a security camera 1 can transfer, as the predetermined warning, a notice that an abnormality (for example, the abnormality IR of FIG. 1) has occurred to other security cameras 1 around that security camera 1. This transfer is repeated until the notice that the abnormality (for example, the abnormality IR of FIG. 1) has occurred reaches a security camera 1 that is able to control transmission to an external device such as one belonging to the police. As a result, even when a security camera 1 can no longer connect to the network N including the Internet or the like, that security camera 1 can still have the notice that the abnormality (for example, the abnormality IR of FIG. 1) has occurred sent to an external device such as one belonging to the police.
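As a hedged sketch of this hop-by-hop relay, the fragment below assumes each camera knows its neighbours and whether it currently has a route to the external device; the data structures are illustrative, and details such as flooding or loop avoidance are not part of the publication.

# Minimal sketch of relaying a warning camera-to-camera until one camera
# with network access can send it to the external device (e.g. the police).

class CameraNode:
    def __init__(self, name, has_internet, neighbors=None):
        self.name = name
        self.has_internet = has_internet
        self.neighbors = neighbors or []

    def relay_warning(self, warning, visited=None):
        """Forward the warning over the local link until some camera with
        network access can send it onward to the external device."""
        visited = visited or set()
        if self.name in visited:
            return False
        visited.add(self.name)
        if self.has_internet:
            print(f"{self.name}: sending '{warning}' to external device")
            return True
        return any(n.relay_warning(warning, visited) for n in self.neighbors)

if __name__ == "__main__":
    c3 = CameraNode("camera-3", has_internet=True)
    c2 = CameraNode("camera-2", has_internet=False, neighbors=[c3])
    c1 = CameraNode("camera-1", has_internet=False, neighbors=[c2])
    print(c1.relay_warning("abnormality IR occurred"))  # True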
Also, for example, with respect to the security camera 1, the external configuration shown in FIG. 3 and the hardware configuration shown in FIG. 4 are merely examples for achieving the object of the present invention and are not particularly limited. Furthermore, the information processing device to which the present invention is applied does not particularly need to be configured as the security camera 1; its external configuration and internal hardware configuration are not particularly limited as long as it can execute the above-described series of processes.
Also, for example, the functional block diagram shown in FIG. 5 is merely an example and is not particularly limited. That is, it suffices that an information processing system (not shown) including one or more security cameras 1 is provided with functions capable of executing the above-described series of processes as a whole, and the functional blocks used to realize those functions are not particularly limited to the example of FIG. 5.
The locations of the functional blocks are also not limited to those in FIG. 5 and may be arbitrary. For example, although not shown, when a control device for controlling each of one or more surveillance cameras 1 is arranged in or near the predetermined space S, the danger/abnormality detection unit 105, the warning notification control unit 106, the other-device cooperation control unit 107, and the like may be provided in that control device.
In addition, one functional block may be configured by hardware alone, by software alone, or by a combination of the two.
Each of the functions shown by the functional blocks of FIG. 5 may also be performed by either or both of the CPU 11 and the GPU 12.
When the processing of each functional block is executed by software, the programs constituting that software are installed on a computer or the like from a network or a recording medium.
The computer may be a computer incorporated in dedicated hardware. The computer may also be a computer capable of executing various functions by installing various programs, for example a server, a general-purpose smartphone, or a personal computer.
The recording medium containing such programs is composed not only of removable media distributed separately from the device main body in order to provide the programs to each user, but also of recording media and the like provided to each user in a state of being incorporated in the device main body in advance.
In this specification, the steps describing a program recorded on a recording medium include not only processing performed in chronological order along that sequence, but also processing that is not necessarily performed in chronological order and is executed in parallel or individually.
Also, in this specification, the term "system" means an overall apparatus composed of a plurality of devices, a plurality of means, and the like.
To summarize the above, the information processing device to which the present invention is applied need only have the following configuration, and can take various different embodiments.
That is, the information processing device to which the present invention is applied is
an information processing device (for example, the surveillance camera 1 of FIGS. 3 and 4) arranged in a predetermined space (for example, the predetermined space S of FIG. 1), and includes:
an image acquisition means (for example, the captured image acquisition unit 101 of FIG. 5) that acquires the data of an image obtained as a result of imaging at least a part of the predetermined space;
a detection means (for example, the danger/abnormality detection unit 105 of FIG. 5) that detects, on the basis of the acquired data of the image, the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space; and
a notification control means (for example, the warning notification control unit 106 of FIG. 5) that controls the notification of a predetermined warning when the detection result by the detection means satisfies a predetermined condition.
This allows the information processing device to function as a monitoring device (stand-alone monitoring device) that operates independently within the predetermined space that is the target of detection of abnormalities and the like.
Further, the detection means can detect the possibility using a predetermined algorithm (for example, the predetermined algorithm stored as data in the learning result DB 400 of FIG. 5) generated or updated by machine learning performed on the basis of the data of one or more images determined to actually contain a dangerous or abnormal object or phenomenon.
The information processing device can further include an audio acquisition means (for example, the audio acquisition unit 103 of FIG. 5) that acquires the data of sound emitted within the predetermined space, and the detection means can detect the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space on the basis of the acquired data of the sound in addition to the acquired data of the image.
The information processing device can further include a cooperation control means (for example, the other-device cooperation control unit 107 of FIG. 5) that executes control for performing at least a part of the processing of the detection means and the notification control means in cooperation with another information processing device.
1, 1-1, 1-2: security camera, 3: camera, 11: CPU, 12: GPU, 19: storage unit, 20: communication unit, 101: captured image acquisition unit, 102: image processing unit, 103: audio acquisition unit, 104: audio processing unit, 105: danger/abnormality detection unit, 106: warning notification control unit, 107: other-device cooperation control unit, 200: image information DB, 300: audio information DB, 400: learning result DB

Claims (6)

  1.  An information processing device arranged in a predetermined space, comprising:
      an image acquisition means that acquires the data of an image obtained as a result of imaging at least a part of the predetermined space;
      a detection means that detects, on the basis of the acquired data of the image, the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space; and
      a notification control means that controls the notification of a predetermined warning when the detection result by the detection means satisfies a predetermined condition.
  2.  The information processing device according to claim 1, wherein the detection means detects the possibility using a predetermined algorithm generated or updated by machine learning performed on the basis of the data of one or more images determined to actually contain a dangerous or abnormal object or phenomenon.
  3.  The information processing device according to claim 1 or 2, further comprising an audio acquisition means that acquires the data of sound emitted within the predetermined space,
      wherein the detection means detects the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space on the basis of the acquired data of the sound in addition to the acquired data of the image.
  4.  The information processing device according to any one of claims 1 to 3, further comprising a cooperation control means that executes control for performing at least a part of the processing of the detection means and the notification control means in cooperation with another information processing device.
  5.  An information processing method executed by an information processing device arranged in a predetermined space, the method comprising:
      an image acquisition step of acquiring the data of an image obtained as a result of imaging at least a part of the predetermined space;
      a detection step of detecting, on the basis of the acquired data of the image, the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space; and
      a notification control step of controlling the notification of a predetermined warning when the detection result obtained by the processing of the detection step satisfies a predetermined condition.
  6.  A program that causes a computer controlling an information processing device arranged in a predetermined space to execute control processing comprising:
      an image acquisition step of acquiring the data of an image obtained as a result of imaging at least a part of the predetermined space;
      a detection step of detecting, on the basis of the acquired data of the image, the possibility that a dangerous or abnormal object or phenomenon exists or occurs in the predetermined space; and
      a notification control step of controlling the notification of a predetermined warning when the detection result obtained by the processing of the detection step satisfies a predetermined condition.
PCT/JP2020/024372 2019-06-21 2020-06-22 Information processing device, information processing method, and program WO2020256152A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-115759 2019-06-21
JP2019115759A JP2021002229A (en) 2019-06-21 2019-06-21 Information processor, information processing method, and program

Publications (1)

Publication Number Publication Date
WO2020256152A1 true WO2020256152A1 (en) 2020-12-24

Family

ID=73994053

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/024372 WO2020256152A1 (en) 2019-06-21 2020-06-22 Information processing device, information processing method, and program

Country Status (2)

Country Link
JP (1) JP2021002229A (en)
WO (1) WO2020256152A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7452832B2 (en) * 2019-09-10 2024-03-19 i-PRO株式会社 Surveillance camera and detection method
JP7271769B1 (en) 2022-05-19 2023-05-11 大陽日酸株式会社 High pressure gas facility monitoring system and high pressure gas facility monitoring method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013131153A (en) * 2011-12-22 2013-07-04 Welsoc Co Ltd Autonomous crime prevention warning system and autonomous crime prevention warning method
WO2014174760A1 (en) * 2013-04-26 2014-10-30 日本電気株式会社 Action analysis device, action analysis method, and action analysis program
JP2015064800A (en) * 2013-09-25 2015-04-09 株式会社 シリコンプラス Self-contained monitoring camera
JP2018093266A (en) * 2016-11-30 2018-06-14 キヤノン株式会社 Information processing device, information processing method, and program
JP2018101317A (en) * 2016-12-21 2018-06-28 ホーチキ株式会社 Abnormality monitoring system

Also Published As

Publication number Publication date
JP2021002229A (en) 2021-01-07

Similar Documents

Publication Publication Date Title
CN103150856B (en) Fire flame video monitoring and early warning system
WO2020256152A1 (en) Information processing device, information processing method, and program
CN102682565B (en) Be suitable for fire-fighting and the security protection integral intelligent video frequency monitoring system of open space
KR101544019B1 (en) Fire detection system using composited video and method thereof
US10930129B2 (en) Self-propelled monitoring device
JP2021508189A (en) Internet of Things (IoT) -based integrated device for monitoring and controlling events in the environment
KR102501053B1 (en) Complex fire detector and fire monitoring system comprising the same
CN110519560B (en) Intelligent early warning method, device and system
KR20230004421A (en) System for detecting abnormal behavior based on artificial intelligence
KR102097286B1 (en) CCTV camera device with alarm signal transmission function using sensor
CN103218892A (en) Fire detecting system capable of monitoring and recording fire by video and monitoring public security
JP2008507054A (en) Smoke alarm system
KR20200098453A (en) System and method for fire defense management
CN105336089A (en) Intelligent all-weather networking warning system
KR102369351B1 (en) Operating method of smart drone for crime prevention
KR101524922B1 (en) Apparatus, method, and recording medium for emergency alert
CN108255119A (en) Safety disaster prevention management system
JP2007026126A (en) Monitoring and reporting system
Podržaj et al. Intelligent space as a framework for fire detection and evacuation
CN203204762U (en) A fire flame video early warning system
CN110796397A (en) Alarm system and method
KR102300101B1 (en) Apparatus and system for fire detection and ventilation of road tunnels
CN111882800A (en) Fire-fighting early warning method and system based on multi-dimensional data linkage
US20230368628A1 (en) Cigarette smoke alarm device for non-smoking space
TWM593046U (en) Fire notification system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20827430

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20827430

Country of ref document: EP

Kind code of ref document: A1