WO2019225147A1 - Airborne object detection device, airborne object detection method, and airborne object detection system - Google Patents

Info

Publication number
WO2019225147A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
microphone array
camera
monitoring area
drone
Prior art date
Application number
PCT/JP2019/013456
Other languages
French (fr)
Japanese (ja)
Inventor
Shintaro Yoshikuni (信太郎 吉國)
Original Assignee
Panasonic IP Management Co., Ltd. (パナソニックIPマネジメント株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic IP Management Co., Ltd.
Publication of WO2019225147A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00: Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/80: Direction-finders of the above kind using ultrasonic, sonic or infrasonic waves
    • G01S3/802: Systems for determining direction or deviation from predetermined direction
    • G01S3/808: Systems using transducers spaced apart and measuring phase or time difference between signals therefrom, i.e. path-difference systems
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00: Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B25/01: Alarm systems characterised by the transmission medium
    • G08B25/04: Alarm systems using a single signalling line, e.g. in a closed loop
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48: Speech or voice analysis techniques specially adapted for particular use
    • G10L25/51: Speech or voice analysis techniques specially adapted for comparison or discrimination
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present disclosure relates to a flying object detection device, a flying object detection method, and a flying object detection system that detect a flying object in flight for monitoring purposes.
  • Patent Document 1 discloses a flying object monitoring apparatus capable of detecting the presence of an object and the direction in which it is flying, using a plurality of sound detection units that detect, direction by direction, the sound generated in the monitoring area.
  • When the processing device of this flying object monitoring apparatus detects a flying object and its flight direction from the sound picked up by an omnidirectional microphone, it points the monitoring camera in the flight direction of the object and displays the video captured by the monitoring camera on the display device.
  • However, with Patent Document 1, depending on the surrounding sound environment, a flying object that does not actually exist may be erroneously detected, or a flying object that has already been detected may be lost, so the recognition accuracy of the flying object deteriorates.
  • The present disclosure has been devised in view of the above-described conventional situation, and an object of the present disclosure is to provide a flying object detection device, a flying object detection method, and a flying object detection system that reduce false detection of an unmanned air vehicle in accordance with the environment of the monitoring area and suppress deterioration in the recognition accuracy of the unmanned air vehicle.
  • The present disclosure provides a flying object detection device comprising: a communication unit that communicates with a first camera whose optical axis direction is adjustable and which is arranged so as to be able to capture an image of a monitoring area, and with a microphone array arranged so as to be able to collect sound in the monitoring area; and a processor that, when the noise level included in the sound of the monitoring area collected by the microphone array is equal to or higher than a certain value, suppresses a sound parameter indicating the volume of the sound by a predetermined value, and detects the presence or absence of an unmanned air vehicle in the monitoring area based on the sound parameter indicating the volume of the sound collected by the microphone array or on the suppressed sound parameter. While the first camera is adjusting its optical axis direction toward the detection direction in which the unmanned air vehicle was detected, the processor omits execution of the sound parameter suppression process.
  • The present disclosure also provides a flying object detection method for a flying object detection apparatus communicably connected to a first camera whose optical axis direction is adjustable and to a microphone array. The method comprises: when the noise level included in the sound of the monitoring area collected by the microphone array is equal to or higher than a certain value, suppressing a sound parameter indicating the volume of the sound by a predetermined value; and detecting the presence or absence of an unmanned aerial vehicle in the monitoring area based on the sound parameter indicating the volume of the sound collected by the microphone array or on the suppressed sound parameter. In the detecting step, while the first camera is adjusting its optical axis direction toward the detection direction in which the unmanned aerial vehicle was detected, execution of the sound parameter suppression process is omitted, and the presence or absence of the unmanned aerial vehicle is detected based on the unsuppressed sound parameter indicating the volume of the sound picked up by the microphone array.
  • The present disclosure further provides a flying object detection system comprising: a first camera whose optical axis direction is adjustable and which is arranged so as to be able to capture an image of the monitoring area; a microphone array arranged so as to be able to pick up sound in the monitoring area; and a flying object detection device that communicates with the first camera and the microphone array. When the noise level included in the sound of the monitoring area collected by the microphone array is equal to or higher than a certain value, the flying object detection device suppresses a sound parameter indicating the volume of the sound by a predetermined value, and detects the presence or absence of an unmanned air vehicle in the monitoring area based on the sound parameter indicating the volume of the sound collected by the microphone array or on the suppressed sound parameter.
  • According to the present disclosure, erroneous detection of an unmanned air vehicle can be reduced in accordance with the environment of the monitoring area, and deterioration in the recognition accuracy of the unmanned air vehicle can be suppressed.
  • FIG. 1 is a diagram showing a system configuration example of the flying object detection system according to Embodiment 1.
  • FIG. 2 is a diagram showing an example of the appearance of the sound source detection unit.
  • FIG. 3 is a block diagram showing an example of the internal configuration of the microphone array.
  • FIG. 4 is a block diagram showing an example of the internal configuration of the omnidirectional camera.
  • FIG. 5 is a block diagram showing an example of the internal configuration of the PTZ camera.
  • FIG. 6 is a block diagram showing an example of the internal configuration of the monitoring device.
  • FIG. 7 is a timing chart showing an example of the detection sound signal patterns for drones registered in the memory.
  • FIG. 8 is a timing chart showing an example of the frequency change of a detected sound signal obtained as a result of frequency analysis processing.
  • FIG. 9 is a sequence diagram showing an example of the overall operation procedure of the flying object detection system according to Embodiment 1.
  • FIG. 10 is a flowchart showing an example of an operation procedure.
  • In the embodiment described below, the detection target is an unmanned air vehicle (e.g., a UAV (Unmanned Aerial Vehicle) such as a drone).
  • The present disclosure can also be defined as the flying object detection device constituting the flying object detection system, the flying object detection method executed in the flying object detection device, or the flying object detection method executed in the flying object detection system.
  • Hereinafter, a user of the flying object detection system (for example, a supervisor who patrols and monitors the monitoring area) is simply referred to as a “user”.
  • The flying object detection device (for example, the monitoring device 10) communicates at least with a PTZ camera CZ, whose optical axis direction is adjustable and which is arranged so as to be able to image the monitoring area, and with a microphone array MA arranged so as to be able to collect the sound of the monitoring area.
  • When the noise level included in the sound collected by the microphone array MA is equal to or higher than a certain value, the flying object detection device suppresses the sound parameter indicating the loudness by a predetermined value, and detects the presence or absence of an unmanned air vehicle (for example, a drone) in the monitoring area based on the sound parameter indicating the volume of the sound collected by the microphone array MA or on the suppressed sound parameter.
  • While the PTZ camera CZ is adjusting its optical axis direction toward the detection direction in which the unmanned air vehicle was detected, the flying object detection device omits execution of the sound parameter suppression process.
  • FIG. 1 is a diagram showing a system configuration example of a flying object detection system 5 according to the first embodiment.
  • the flying object detection system 5 detects the presence or absence of an unmanned flying object (see, for example, the drone DN shown in FIG. 12) that is a detection target in the monitoring area (see above).
  • The unmanned aerial vehicle flies autonomously using, for example, a GPS (Global Positioning System) function, by ascending, descending, turning left, moving left, turning right, moving right, or a combination thereof.
  • the unmanned air vehicle can be used for multiple purposes such as aerial photography of a destination or target object, monitoring, chemical spraying, transportation of goods, and the like.
  • a multi-copter type drone DN equipped with a plurality of rotors (in other words, rotor blades) will be described as an example of an unmanned air vehicle.
  • When the number of rotary blades on a rotor is two, a harmonic at twice a specific fundamental frequency, and further harmonics at integer multiples of that frequency, are generated.
  • When the number of rotary blades is three, a harmonic at three times the fundamental frequency, and further harmonics at integer multiples of it, are generated.
  • The same applies when the number of rotary blades is four or more.
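  • To make the harmonic structure concrete: if a rotor with b blades turns at r revolutions per second, its blade-pass frequency is b*r, and harmonics appear at integer multiples of it. The following minimal Python sketch illustrates this; the blade counts and rotor speed are illustrative assumptions, not values from the disclosure.

```python
# Sketch: expected harmonic frequencies of a multicopter rotor.
# Assumed numbers for illustration only; the disclosure gives none.
def rotor_harmonics(blade_count: int, revs_per_sec: float, n_harmonics: int = 4):
    """Blade-pass frequency b*r and its integer multiples, in Hz."""
    fundamental = blade_count * revs_per_sec
    return [fundamental * k for k in range(1, n_harmonics + 1)]

# Two-blade rotor at 100 rev/s -> 200, 400, 600, 800 Hz.
print(rotor_harmonics(blade_count=2, revs_per_sec=100.0))
# Three-blade rotor at 100 rev/s -> 300, 600, 900, 1200 Hz.
print(rotor_harmonics(blade_count=3, revs_per_sec=100.0))
```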
  • the flying object detection system 5 includes a plurality of sound source detection units UD, a monitoring device 10, a monitor MN, and a recorder RC.
  • In FIG. 1, only one sound source detection unit UD is shown to simplify the drawing.
  • the plurality of sound source detection units UD have the same configuration and are connected to the monitoring apparatus 10 via the network NW.
  • the sound source detection unit UD includes a microphone array MA, an omnidirectional camera CA (an example of a second camera), and a PTZ camera CZ (an example of a first camera).
  • the microphone array MA picks up sound in all directions (that is, 360 degrees) in a non-directional state in the monitoring area (see above) where the device is installed.
  • the microphone array MA has a housing 15 (see FIG. 2) in which a circular opening having a predetermined width is formed at the center.
  • The sound collected by the microphone array MA includes, for example, mechanical operating sound such as that of the drone DN, sound produced by people, and mechanical sound produced by the PTZ camera CZ (for example, the drive sound of pan rotation, tilt rotation, or zoom-lens movement). The collected sound may include not only sound in the audible frequency range (that is, 20 Hz to 20 kHz) but also low-frequency sound below the audible range and ultrasonic sound above it.
  • the microphone array MA includes a plurality of omnidirectional microphones M1 to Mq (see FIG. 3). q is a natural number of 2 or more.
  • The microphones M1 to Mq are arranged concentrically around the circular opening provided in the housing 15, at predetermined intervals (for example, uniform intervals) along the circumferential direction.
  • For example, electret condenser microphones (ECM) may be used as the microphones M1 to Mq.
  • the microphone array MA transmits sound data signals obtained by collecting the sounds of the microphones M1 to Mq to the monitoring device 10 via the network NW.
  • The arrangement of the microphones M1 to Mq described above is an example, and other arrangements (for example, a square or rectangular arrangement) may be used, but the microphones M1 to Mq are preferably arranged at equal intervals.
  • the number of microphones in the microphone array MA is not limited to 32, but may be other numbers (for example, 16, 64, 128).
  • An omnidirectional camera CA that substantially matches the volume of the circular opening is accommodated inside the circular opening formed in the center of the casing 15 (see FIG. 2) of the microphone array MA. That is, the microphone array MA and the omnidirectional camera CA are integrated with each other so that the center of each case is in the coaxial direction (see FIG. 2).
  • the omnidirectional camera CA is a camera equipped with a fisheye lens 45a (see FIG. 4) capable of capturing images in all directions (that is, 360 degrees) in a monitoring area (see above) as an imaging area of the omnidirectional camera CA.
  • The sound collection area of the microphone array MA and the imaging area of the omnidirectional camera CA are both described as the common monitoring area, but the spatial sizes (for example, volumes) of the sound collection area and the imaging area need not be the same.
  • the volume of the sound collection area may be larger or smaller than the volume of the imaging area.
  • the sound collection area and the imaging area only need to have a common space.
  • the omnidirectional camera CA functions as a monitoring camera capable of imaging an imaging area where the sound source detection unit UD is installed, for example. That is, the omnidirectional camera CA has an angle of view of, for example, vertical direction: 180 ° and horizontal direction: 360 °, and images the monitoring area 8 (see FIG. 14), which is a hemisphere, for example, as an imaging area.
  • the omnidirectional camera CA and the microphone array MA are coaxially arranged by fitting the omnidirectional camera CA inside the circular opening of the housing 15.
  • the optical axis of the omnidirectional camera CA and the central axis of the housing of the microphone array MA coincide with each other, so that the imaging area and the sound collection area in the axial circumferential direction (that is, the horizontal direction) are substantially the same.
  • the position of the subject in the omnidirectional image captured by the omnidirectional camera CA (in other words, the direction indicating the position of the subject viewed from the omnidirectional camera CA) and the position of the sound source to be collected by the microphone array MA (in other words, The direction of the sound source as viewed from the microphone array MA can be expressed in the same coordinate system (for example, coordinates indicated by (horizontal angle, vertical angle)).
  • the sound source detection unit UD is attached so that, for example, the upward direction in the vertical direction becomes the sound collection surface and the imaging surface in order to detect the drone DN flying in the sky (see FIG. 2).
  • the monitoring device 10 as an example of a flying object detection device is configured using a computer such as a PC (Personal Computer) or a server.
  • Based on a user operation, the monitoring apparatus 10 can form directivity (that is, perform beamforming) with an arbitrary direction as the main beam direction in the omnidirectional sound collected by the microphone array MA, thereby emphasizing the sound in that direction.
  • the details of the process of forming the directivity in the sound data by beam forming the sound collected by the microphone array MA are known techniques as shown in, for example, Reference Patent Documents 1 and 2.
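  • For reference, the most common form of such beamforming is delay-and-sum: each microphone signal is time-aligned for a chosen look direction and the channels are averaged, reinforcing sound from that direction. The sketch below is a minimal frequency-domain implementation under assumed geometry and names; it is not the method of Reference Patent Documents 1 and 2, which are not reproduced here.

```python
import numpy as np

def delay_and_sum(signals: np.ndarray, mic_xyz: np.ndarray,
                  direction: np.ndarray, fs: float, c: float = 343.0) -> np.ndarray:
    """Emphasize sound arriving from `direction` (unit vector toward the source).

    signals: shape (q, n_samples), one row per microphone.
    mic_xyz: shape (q, 3), microphone positions in meters.
    """
    # A plane wave from `direction` reaches mic i earlier by (r_i . d) / c.
    lead = mic_xyz @ direction / c                    # seconds, shape (q,)
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    # Delay each channel by its lead time so all channels line up, then average.
    aligned = spectra * np.exp(-2j * np.pi * freqs * lead[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=n)

# Usage with synthetic data: 8 microphones spaced 3 cm apart on a line.
fs = 16000.0
mics = np.column_stack([np.arange(8) * 0.03, np.zeros(8), np.zeros(8)])
sig = np.random.default_rng(0).standard_normal((8, 1024))
out = delay_and_sum(sig, mics, np.array([0.0, 1.0, 0.0]), fs)
```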
  • Using the image captured by the omnidirectional camera CA, the monitoring apparatus 10 generates an omnidirectional captured image with an angle of view of 360 degrees, or cuts out a specific range (direction) of it to generate a two-dimensional panoramic captured image (hereinafter, the captured image with an angle of view of 360 degrees and the panoramic captured image are collectively referred to as the “omnidirectional captured image”). Note that the omnidirectional captured image may be generated not by the monitoring device 10 but by the omnidirectional camera CA.
  • Based on the calculated values of a sound parameter (for example, sound pressure level) indicating the volume of the sound collected by the microphone array MA, the monitoring device 10 generates a sound pressure heat map for the omnidirectional image captured by the omnidirectional camera CA, superimposes the sound pressure heat map on the omnidirectional image, and displays the result on the monitor MN.
  • The monitoring apparatus 10 may display a first visual image (for example, an identification mark, not shown) at the position (that is, the coordinates) of the detected drone DN in the omnidirectional captured image IMG1 so that the user can easily identify the detected drone DN visually.
  • The first visual image explicitly indicates the position of the drone DN in the omnidirectional captured image IMG1 to such an extent that, when the user views the omnidirectional captured image IMG1, the drone DN can be clearly distinguished from other subjects. The same applies hereinafter.
  • the monitor MN displays the omnidirectional captured image IMG1 output from the monitoring device 10 or a composite image in which an identification mark (see above) is superimposed on the omnidirectional captured image IMG1.
  • the monitor MN may be configured as a device integrated with the monitoring device 10.
  • The recorder RC is configured using, for example, a hard disk drive (HDD) or a semiconductor memory such as a solid state drive (SSD) or flash memory, and records the various captured image data generated by the monitoring device 10 as well as the omnidirectional captured image data and sound data sent from each sound source detection unit UD.
  • the recorder RC may be configured as an apparatus integrated with the monitoring apparatus 10 or may be omitted from the configuration of the flying object detection system 5.
  • a plurality of sound source detection units UD and a monitoring device 10 each have a communication interface and are connected to each other via a network NW so that data communication is possible.
  • The network NW may be a wired network (for example, an intranet, the Internet, or a wired LAN (Local Area Network)) or a wireless network (for example, a wireless LAN). All or some of the sound source detection units UD may instead be directly connected to the monitoring device 10 without going through the network NW. The monitoring device 10, the monitor MN, and the recorder RC are all installed, for example, in a monitoring room RM where the user resides during monitoring.
  • FIG. 2 is a view showing an example of the appearance of the sound source detection unit UD.
  • the sound source detection unit UD includes a microphone array MA, an omnidirectional camera CA, a PTZ camera CZ, and a support base 70 that mechanically supports them.
  • The support base 70 has a combined structure including a tripod 71, two rails 72 fixed to the top plate 71a of the tripod 71, and a first attachment plate 73 and a second attachment plate 74 attached to both ends of the two rails 72, respectively.
  • The first mounting plate 73 and the second mounting plate 74 are mounted so as to straddle the two rails 72 and lie substantially in the same plane.
  • The first mounting plate 73 and the second mounting plate 74 can each slide along the two rails 72 and be fixed at positions moved toward or away from each other.
  • the first mounting plate 73 is a disk-shaped plate material.
  • a circular opening 73 a is formed at the center of the first mounting plate 73.
  • the casing 15 of the microphone array MA is accommodated and fixed in the circular opening 73a.
  • the second mounting plate 74 is a substantially rectangular plate material.
  • a circular opening 74 a is formed in a portion near the outside of the second mounting plate 74.
  • the PTZ camera CZ is accommodated and fixed in the circular opening 74a.
  • The optical axis L1 of the omnidirectional camera CA housed in the casing 15 of the microphone array MA and the optical axis L2 of the PTZ camera CZ attached to the second mounting plate 74 are set to be parallel to each other in the initial installation state.
  • The tripod 71 stands on the ground surface on three legs 71b; by manual operation, the position of the top plate 71a can be moved in the vertical direction with respect to the ground surface, and the orientation of the top plate 71a can be adjusted in the pan and tilt directions.
  • This allows the sound collection area of the microphone array MA (in other words, the imaging area of the omnidirectional camera CA, or the monitoring area of the flying object detection system 5) to be aimed in a desired direction.
  • FIG. 3 is a block diagram showing an example of the internal configuration of the microphone array MA.
  • The compression processor 25 generates packets of sound data signals based on the digital sound data signals output from the A/D converters A1 to Aq.
  • the transmission unit 26 transmits the packet of the sound data signal generated by the compression processing unit 25 (hereinafter simply referred to as “sound data”) to the monitoring device 10 via the network NW.
  • When the microphone array MA receives a sound distribution request (see FIG. 9) from the monitoring device 10, it amplifies the sound data signals output from the microphones M1 to Mq with the corresponding amplifiers (Amp) PA1 to PAq and converts them into digital sound data signals with the corresponding A/D converters A1 to Aq. The microphone array MA then generates sound data signal packets in the compression processing unit 25 and continuously transmits them to the monitoring device 10 via the network NW.
  • FIG. 4 is a block diagram showing an example of the internal configuration of the omnidirectional camera CA.
  • the omnidirectional camera CA shown in FIG. 4 includes a CPU 41, a communication unit 42, a power management unit 44, an image sensor 45, a fisheye lens 45a, a memory 46, and a network connector 47.
  • the CPU 41 performs signal processing for overall control of operations of each unit of the omnidirectional camera CA, data input / output processing with other units, data calculation processing, and data storage processing.
  • Instead of the CPU 41, a processor such as an MPU (Micro Processing Unit) or DSP (Digital Signal Processor) may be provided.
  • According to a designation by the user operating the monitoring device 10, the CPU 41 generates two-dimensional panoramic image data (that is, image data obtained by two-dimensional panoramic conversion) by cutting out an image of a specific range (direction) from the captured image data with an angle of view of 360 degrees generated by the image sensor 45, and stores it in the memory 46.
  • The image sensor 45 is configured using, for example, a CMOS (Complementary Metal Oxide Semiconductor) sensor or a CCD (Charge Coupled Device) sensor, and photoelectrically converts the optical image of incident light from the monitoring area collected by the fisheye lens 45a to generate captured image data.
  • the fisheye lens 45a receives and collects incident light from all directions of the imaging area (that is, the monitoring area), and forms an optical image of the incident light on the imaging surface of the image sensor 45.
  • The memory 46 includes a ROM 46z that stores a program defining the operation of the omnidirectional camera CA and set-value data, a RAM 46y that stores the captured image data with an angle of view of 360 degrees or the panoramic captured image data cut out from a part of that range as well as work data, and a memory card 46x that is detachably connected to the omnidirectional camera CA and stores various data.
  • the communication unit 42 is a communication interface that controls data communication with the network NW connected via the network connector 47.
  • the power management unit 44 supplies DC power to each unit of the omnidirectional camera CA. Further, the power management unit 44 may supply DC power to devices connected to the network NW via the network connector 47.
  • The network connector 47 is a connector that transmits the captured image data with an angle of view of 360 degrees or the panoramic captured image data (that is, the above-described omnidirectional captured image data) to the monitoring apparatus 10 via the network NW, and through which power can also be supplied via a network cable.
  • FIG. 5 is a block diagram showing an example of the internal configuration of the PTZ camera CZ.
  • the PTZ camera CZ is a camera that can adjust the optical axis direction (that is, the imaging direction of the PTZ camera CZ) and the zoom magnification according to the PTZ control request (see FIG. 9) from the monitoring device 10.
  • The PTZ camera CZ includes a CPU 51, a communication unit 52, a power management unit 54, an image sensor 55, an imaging lens 55a, a memory 56, a network connector 57, an imaging direction control unit 58, and a lens drive motor 59.
  • When the CPU 51 receives a PTZ control request (see FIG. 9) from the monitoring device 10, it notifies the imaging direction control unit 58 of an angle-of-view change instruction.
  • The imaging direction control unit 58 controls at least one of the pan direction and the tilt direction of the PTZ camera CZ in accordance with the angle-of-view change instruction notified from the CPU 51, changes the zoom magnification as necessary, and sends a control signal to the lens drive motor 59.
  • The lens drive motor 59 drives the imaging lens 55a according to this control signal, changing its imaging direction (the direction of the optical axis L2 shown in FIG. 2) and adjusting the focal length of the imaging lens 55a to change the zoom magnification.
  • The imaging lens 55a is configured using one or more lenses. In the imaging lens 55a, the optical axis direction (through pan rotation and tilt rotation) or the zoom magnification is changed by driving the lens drive motor 59 according to the control signal from the imaging direction control unit 58.
  • FIG. 6 is a block diagram showing an example of the internal configuration of the monitoring apparatus 10.
  • the monitoring apparatus 10 illustrated in FIG. 6 includes a communication unit 31, an operation unit 32, a signal processing unit 33, an SPK 37, a memory 38, and a setting management unit 39.
  • SPK is an abbreviation for speaker.
  • the communication unit 31 receives the omnidirectional captured image data transmitted from the omnidirectional camera CA and the sound data transmitted from the microphone array MA, and sends them to the signal processing unit 33.
  • the operation unit 32 is a user interface (UI) for notifying the signal processing unit 33 of the contents of the user's input operation, and includes an input device such as a mouse or a keyboard.
  • the operation unit 32 may be configured using, for example, a touch panel or a touch pad that is arranged correspondingly on the display screen of the monitor MN and can be directly input by a user's finger or stylus pen.
  • When the user designates the red region RD1 of the sound pressure heat map (see FIG. 13) superimposed on the omnidirectional captured image IMG1 of the omnidirectional camera CA on the monitor MN, the operation unit 32 acquires the coordinates indicating the designated position and sends them to the signal processing unit 33.
  • The signal processing unit 33 reads the sound data collected by the microphone array MA from the memory 38, forms directivity in the direction from the microphone array MA toward the actual sound source position corresponding to the designated position, and outputs the resulting sound from the SPK 37.
  • In this way, the user can listen, in an emphasized state, not only to the sound of the drone DN but also to the sound at any position the user designates on the omnidirectional captured image IMG1.
  • the signal processing unit 33 is configured using a processor such as a CPU (Central Processing Unit), an MPU (Micro Processing Unit), a DSP (Digital Signal Processor), or an FPGA (Field Programmable Gate Array).
  • The signal processing unit 33 (that is, the processor) performs signal processing for overall control of the operation of each unit of the monitoring apparatus 10, data input/output processing with other units, data calculation processing, and data storage processing.
  • the signal processing unit 33 includes a sound source direction detection unit 34, an output control unit 35, a directivity processing unit 63, a frequency analysis unit 64, an object detection unit 65, a detection result determination unit 66, a scanning control unit 67, and a detection direction control unit 68. including.
  • the monitoring device 10 is connected to the monitor MN.
  • The sound source direction detection unit 34 estimates the sound source position in the monitoring area from the sound collected by the microphone array MA, for example, in accordance with the known whitened cross-correlation method (CSP (Cross-power Spectrum Phase) method).
  • The sound source direction detection unit 34 divides the monitoring area 8 shown in FIG. 14 into a plurality of blocks (described later) and, when sound is picked up by the microphone array MA, determines for each block whether sound has been generated, so that the sound source position in the monitoring area 8 can be roughly estimated.
  • Based on the omnidirectional captured image data captured by the omnidirectional camera CA and the sound data collected by the microphone array MA, the sound source direction detection unit 34 calculates, for each block constituting the omnidirectional captured image data (for example, each pixel group consisting of a predetermined number of pixels, such as “2*2” or “4*4”), a sound parameter indicating the loudness at the position corresponding to that block (for example, the normalized output value of the cross-correlation value described above, or the sound pressure level).
  • the sound source direction detection unit 34 sends the calculation result of the sound parameter for each block constituting the omnidirectional captured image data to the output control unit 35.
  • the sound parameter calculation process described above is a known technique, and a detailed description of the process is omitted.
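  • For orientation, the whitened cross-correlation (CSP) of a microphone pair is usually computed as the inverse FFT of the phase-only cross-spectrum; its peak gives the time difference of arrival, and combining pairs yields a direction. A minimal two-microphone sketch under assumed names and test data follows; the unit's actual multi-channel procedure is not reproduced in this text.

```python
import numpy as np

def csp_tdoa(x1: np.ndarray, x2: np.ndarray, fs: float) -> float:
    """Time difference of arrival via whitened cross-correlation (CSP).

    Returns a positive lag (seconds) when the sound reaches x2 after x1.
    """
    n = len(x1) + len(x2)
    X1 = np.fft.rfft(x1, n=n)
    X2 = np.fft.rfft(x2, n=n)
    cross = np.conj(X1) * X2
    cross /= np.abs(cross) + 1e-12          # keep phase only (whitening)
    corr = np.fft.irfft(cross, n=n)
    corr = np.roll(corr, n // 2)            # move zero lag to the center
    return (int(np.argmax(corr)) - n // 2) / fs

# Synthetic check: the second channel is the first delayed by 5 samples.
fs = 16000.0
s = np.random.default_rng(1).standard_normal(4096)
delayed = np.concatenate([np.zeros(5), s[:-5]])
print(csp_tdoa(s, delayed) * fs)            # approx 5.0
```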
  • The sound source direction detection unit 34 constantly monitors the noise level included in the sound data collected by the microphone array MA, and when the noise level is equal to or higher than a certain value, suppresses the sound parameter indicating the loudness of the sound data (for example, the normalized output value of the cross-correlation value, or the sound pressure level) by a predetermined value QT (see FIG. 11).
  • This is because, otherwise, there is a high possibility that the monitoring device 10 erroneously detects that a drone has appeared in the monitoring area.
  • Here, a “ghost drone” is a virtual drone that is erroneously detected as existing when the noise level is equal to or higher than the above-described certain value, even though no actual drone DN is present.
  • FIG. 11 is an explanatory diagram schematically showing an example of the suppression of the sound pressure level in step St5 of FIG. 10.
  • FIG. 11 is not limited to the example of suppressing the sound pressure level; it can also be applied to the example of suppressing the normalized output value of the cross-correlation value described above.
  • The details of FIG. 10 will be described later; here, the sound pressure level suppression process is briefly explained with reference to FIG. 11.
  • In FIG. 11, the horizontal axis indicates frequency and the vertical axis indicates sound pressure level.
  • The characteristic Cv1 is an example of the sound pressure level characteristic before the suppression process is performed (that is, the frequency characteristic of the sound data collected by the microphone array MA).
  • The characteristic Cv2 is the sound pressure level characteristic after the suppression process is performed (that is, the frequency characteristic of the sound data collected by the microphone array MA, suppressed by the predetermined value QT).
  • The frequency fp indicates the frequency at which the peak value of the sound pressure level in the characteristics Cv1 and Cv2 is obtained.
  • the first threshold value is a threshold value for determining whether or not the drone DN exists, and is specifically referred to in the object detection unit 65 described later.
  • When the noise level is equal to or higher than the certain value, the sound source direction detection unit 34 suppresses the frequency characteristic of the sound pressure level indicated by the characteristic Cv1 by the predetermined value QT (see FIG. 11).
  • the monitoring apparatus 10 suppresses (that is, lowers) the frequency characteristic of the sound pressure level by the predetermined value QT in a state where a noise level higher than the above-described constant value is generated.
  • As a result, a frequency characteristic in which the peak value of the sound pressure level is less than the first threshold is obtained (see characteristic Cv2), and erroneous detection of the presence of the above-described ghost drone can be effectively suppressed.
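  • Numerically, this suppression amounts to lowering the measured spectrum by the fixed amount QT before comparing its peak with the first threshold th1, so that a noise-induced peak no longer clears the threshold. A small sketch with made-up dB values (QT, th1, and the noise limit here are illustrative assumptions):

```python
import numpy as np

def peak_clears_threshold(spectrum_db: np.ndarray, th1_db: float,
                          noise_db: float, noise_limit_db: float,
                          qt_db: float) -> bool:
    """True if the peak sound pressure level reaches the first threshold th1,
    applying the QT suppression only when the noise level is high."""
    if noise_db >= noise_limit_db:           # characteristic Cv1 -> Cv2
        spectrum_db = spectrum_db - qt_db
    return float(spectrum_db.max()) >= th1_db

spectrum = np.array([52.0, 61.0, 58.0, 49.0])    # dB per frequency bin
# Quiet environment: peak 61 dB >= th1 60 dB -> detection candidate.
print(peak_clears_threshold(spectrum, 60.0, noise_db=40.0,
                            noise_limit_db=50.0, qt_db=6.0))   # True
# Noisy environment: 61 - 6 = 55 dB < 60 dB -> ghost peak rejected.
print(peak_clears_threshold(spectrum, 60.0, noise_db=55.0,
                            noise_limit_db=50.0, qt_db=6.0))   # False
```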
  • the PTZ camera CZ performs PTZ control when receiving a PTZ control request from the monitoring device 10 in response to detection of the drone DN.
  • At this time, a loud mechanical sound (see above) is generated when the PTZ camera CZ performs pan rotation, tilt rotation, or movement of the imaging lens 55a.
  • If the sound source direction detection unit 34 regards this loud sound as a noise level equal to or higher than the above-described certain value and suppresses the frequency characteristic of the sound pressure level by the predetermined value QT, the peak value of the sound pressure level falls below the first threshold, and it is erroneously determined that the drone DN, which actually exists, does not exist (in other words, the drone DN is lost).
  • Therefore, it is conceivable for the sound source direction detection unit 34 to perform control so as to omit execution of the suppression of the frequency characteristic of the sound pressure level by the predetermined value QT while the PTZ camera CZ is operating (that is, during pan rotation, tilt rotation, or movement of the imaging lens 55a).
  • Specifically, when either of the following first and second conditions is satisfied, the sound source direction detection unit 34 omits execution of the suppression of the frequency characteristic of the sound pressure level by the predetermined value QT (step St5 shown in FIG. 10).
  • The first condition is that the noise level included in the sound data collected by the microphone array MA is less than the certain value.
  • In this case, since the noise level is lower than the certain value, it is considered that the noise does not produce the above-described ghost drone and that the monitoring device 10 does not erroneously detect the presence of a ghost drone.
  • The second condition is that the presence of the drone DN has been detected by the monitoring device 10, the PTZ camera CZ is operating, and the noise level included in the sound data collected by the microphone array MA is recognized to be equal to or higher than the certain value.
  • This is because, if the suppression process were performed while the PTZ camera CZ operates (for example, pan rotation, tilt rotation, or zoom processing) upon detection of the drone DN, the sound pressure level of the actually existing drone DN would be lowered, so that the substance of the drone DN could no longer be detected and would be missed, deteriorating the drone detection accuracy of the monitoring device 10. Whether or not the sound source direction detection unit 34 suppresses the frequency characteristic of the sound pressure level by the predetermined value QT will be described in detail with reference to FIG. 10.
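  • Putting the two conditions together, the decision of whether to run the suppression step can be written as a small predicate. This is a paraphrase of the flow around step St5 of FIG. 10, with names chosen for illustration rather than taken from the disclosure:

```python
def should_suppress(noise_db: float, noise_limit_db: float,
                    drone_detected: bool, ptz_operating: bool) -> bool:
    """Whether to suppress the sound parameter by the predetermined value QT.

    First condition  (skip): the noise level is below the certain value.
    Second condition (skip): a drone was detected and the PTZ camera is
    operating, so its own mechanical noise must not hide the drone.
    """
    if noise_db < noise_limit_db:            # first condition
        return False
    if drone_detected and ptz_operating:     # second condition
        return False
    return True                              # high noise, PTZ camera idle

assert should_suppress(55.0, 50.0, drone_detected=False, ptz_operating=False)
assert not should_suppress(40.0, 50.0, drone_detected=False, ptz_operating=False)
assert not should_suppress(55.0, 50.0, drone_detected=True, ptz_operating=True)
```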
  • the setting management unit 39 holds a coordinate conversion formula relating to the coordinate conversion of the position specified by the user on the display screen of the monitor MN on which the omnidirectional captured image data captured by the omnidirectional camera CA is displayed.
  • This coordinate conversion formula converts the coordinates of the designated position (that is, (horizontal angle, vertical angle)) into coordinates in the direction viewed from the PTZ camera CZ, based on, for example, the physical distance between the installation position of the omnidirectional camera CA (see FIG. 2) and that of the PTZ camera CZ (see FIG. 2).
  • Using the coordinate conversion formula held by the setting management unit 39, the signal processing unit 33 calculates the coordinates (θMAh, θMAv) indicating the directivity direction toward the actual sound source position corresponding to the position designated by the user, with reference to the installation position of the PTZ camera CZ (see FIG. 2).
  • θMAh is the horizontal angle of the direction toward the actual sound source position corresponding to the position designated by the user, as viewed from the installation position of the PTZ camera CZ.
  • θMAv is the vertical angle of the direction toward the actual sound source position corresponding to the position designated by the user, as viewed from the installation position of the PTZ camera CZ.
  • The sound source position is the actual sound source position corresponding to the position designated from the operation unit 32, by the operation of the user's finger or stylus pen, on the omnidirectional captured image data displayed on the monitor MN.
  • the omnidirectional camera CA and the microphone array MA are arranged so that the optical axis direction of the omnidirectional camera CA and the central axis of the casing of the microphone array MA are coaxial.
  • Therefore, the coordinates calculated by the omnidirectional camera CA according to the user's designation on the monitor MN, on which the omnidirectional captured image data is displayed, can be used as the sound emphasis direction (also referred to as the directivity direction) as viewed from the microphone array MA.
  • the monitoring apparatus 10 transmits the coordinates of the designated position on the omnidirectional captured image data to the omnidirectional camera CA.
  • Using the coordinates of the designated position transmitted from the monitoring device 10, the omnidirectional camera CA calculates the coordinates (horizontal angle, vertical angle) indicating the direction of the sound source position corresponding to the user-designated position as viewed from the omnidirectional camera CA. Since this calculation process in the omnidirectional camera CA is a known technique, a description thereof is omitted.
  • the omnidirectional camera CA transmits a calculation result of coordinates indicating the direction of the sound source position to the monitoring device 10.
  • the monitoring device 10 can use the coordinates (horizontal angle, vertical angle) calculated by the omnidirectional camera CA as coordinates (horizontal angle, vertical angle) indicating the direction of the sound source position as viewed from the microphone array MA.
  • Note that when the omnidirectional camera CA and the microphone array MA are not arranged coaxially, the setting management unit 39 needs to convert the coordinates calculated by the omnidirectional camera CA into coordinates in the direction viewed from the microphone array MA, for example, according to the method described in Japanese Patent Application Laid-Open No. 2015-029241.
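  • As a rough picture of what such a conversion has to do: both devices describe a target by (horizontal angle, vertical angle), but their installation positions differ, so a direction seen from one device must be re-expressed from the other's position. The sketch below assumes parallel optical axes (the initial installation state of FIG. 2), a known baseline offset, and an assumed target distance; the patent's actual formula is not reproduced in this text.

```python
import math

def convert_direction(h_deg: float, v_deg: float, target_range_m: float,
                      offset_m: tuple) -> tuple:
    """Re-express a (horizontal, vertical) viewing angle from one device
    as angles from a second device displaced by offset_m.

    offset_m: second device's position relative to the first, in the
    first device's frame (x: right, y: forward, z: up). The target range
    is an assumption; angles alone do not fix a 3-D point.
    """
    h, v = math.radians(h_deg), math.radians(v_deg)
    # Target point as seen from the first device.
    x = target_range_m * math.cos(v) * math.sin(h)
    y = target_range_m * math.cos(v) * math.cos(h)
    z = target_range_m * math.sin(v)
    # Shift the origin to the second device (axes assumed parallel).
    dx, dy, dz = offset_m
    x, y, z = x - dx, y - dy, z - dz
    theta_h = math.degrees(math.atan2(x, y))
    theta_v = math.degrees(math.atan2(z, math.hypot(x, y)))
    return theta_h, theta_v

# PTZ camera mounted 0.3 m to the right of the omnidirectional camera.
print(convert_direction(10.0, 45.0, target_range_m=20.0, offset_m=(0.3, 0.0, 0.0)))
```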
  • The setting management unit 39 compares the sound pressure level, calculated by the sound source direction detection unit 34 for each block constituting the omnidirectional captured image data, with a first threshold th1, a second threshold th2, and a third threshold th3 (see, for example, FIG. 9).
  • Here, the sound pressure level is used as an example of a sound parameter indicating the volume of the sound generated in the monitoring area 8 and collected by the microphone array MA; it is a different concept from the output volume of the sound from the SPK 37.
  • The first threshold th1, the second threshold th2, and the third threshold th3 are thresholds compared with the sound pressure level of the sound generated in the monitoring area 8, and may be set, for example, as thresholds for determining the presence or absence of the sound generated by the drone DN.
  • Their magnitude relation is: first threshold th1 > second threshold th2 > third threshold th3.
  • The red region RD1 (see FIG. 16) of pixels for which a sound pressure level greater than the first threshold th1 is obtained is drawn, for example, in red on the monitor MN on which the omnidirectional captured image data is displayed.
  • The pink region PD1 of pixels for which a sound pressure level greater than the second threshold th2 and equal to or less than the first threshold th1 is obtained is drawn, for example, in pink on the monitor MN on which the omnidirectional captured image data is displayed.
  • The blue region BD1 of pixels for which a sound pressure level greater than the third threshold th3 and equal to or less than the second threshold th2 is obtained is drawn, for example, in blue on the monitor MN on which the omnidirectional captured image data is displayed. Further, the region N1 (see FIG. 16) of pixels whose sound pressure level is equal to or less than the third threshold th3 is drawn colorless on the monitor MN on which the omnidirectional captured image data is displayed; its display color is not changed.
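  • The three thresholds thus partition each block's sound pressure level into four display classes. A direct transcription of that mapping (the threshold values in the usage line are illustrative):

```python
def heatmap_color(level_db: float, th1: float, th2: float, th3: float):
    """Map a block's sound pressure level to the heat map display color
    (None means the pixel is left unchanged / colorless)."""
    assert th1 > th2 > th3
    if level_db > th1:
        return "red"        # region RD1: strongest sound, drone candidate
    if level_db > th2:
        return "pink"       # region PD1
    if level_db > th3:
        return "blue"       # region BD1
    return None             # region N1: display color unchanged

print([heatmap_color(x, th1=60.0, th2=50.0, th3=40.0)
       for x in (65.0, 55.0, 45.0, 30.0)])   # ['red', 'pink', 'blue', None]
```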
  • The SPK 37 is a speaker that outputs the sound data of the monitoring area 8 collected by the microphone array MA, the sound data after its sound pressure level has been suppressed by the predetermined value QT, or the sound data collected by the microphone array MA to which directivity has been applied by the signal processing unit 33.
  • the SPK 37 may be configured as a separate device from the monitoring device 10.
  • the memory 38 is configured using, for example, a ROM or a RAM, and holds various data including sound data of a certain section, setting information, a program, and the like.
  • the memory 38 also has a pattern memory (see FIG. 7) in which a sound pattern unique to each drone DN is registered. Further, the memory 38 stores sound pressure heat map data generated by the output control unit 35.
  • data of an identification mark (not shown) that schematically represents the position of the drone DN is registered in the memory 38.
  • the identification mark used here is, for example, a star symbol.
  • The identification mark is not limited to a star shape, and may be a circle, a triangle, a square, or another symbol or character that is reminiscent of a drone DN.
  • the display mode of the identification mark may be changed between daytime and nighttime.
  • it may be a star shape during the daytime and a square that is not mistaken for a star at nighttime.
  • the identification mark may be dynamically changed.
  • a star symbol may be blinked or rotated, and the user can be further alerted.
  • FIG. 7 is a timing chart showing an example of the detection sound pattern of the drone DN registered in the memory 38.
  • the detected sound pattern shown in FIG. 7 is a combination of frequency patterns, and includes sounds of four frequencies f1, f2, f3, and f4 generated by the rotation of the four rotors mounted on the multicopter type drone DN.
  • the signals of the respective frequencies are signals having different sound frequencies that are generated, for example, with the rotation of a plurality of wings pivotally supported by each rotor.
  • the frequency region indicated by the diagonal lines is a region where the sound pressure level is high.
  • The detected sound pattern may include not only the number of frequencies and their sound pressure levels but also other sound information.
  • For example, a sound pressure level ratio representing the ratio of the sound pressure levels at the respective frequencies can be used.
  • the detection of the drone DN is determined by whether or not the sound pressure level of each frequency included in the detection sound pattern exceeds a threshold value.
  • The directivity processing unit 63 uses the sound data signals collected by the omnidirectional microphones M1 to Mq to perform the above-described directivity forming process (beamforming), extracting a sound data signal whose directivity direction is a direction in the monitoring area 8.
  • The directivity processing unit 63 can also perform an extraction process in which a range of directions in the monitoring area 8 is treated as the directivity range.
  • The directivity range is a range including a plurality of adjacent directivity directions, and has a certain breadth compared with a single directivity direction.
  • The frequency analysis unit 64 performs frequency analysis processing on the sound data signal extracted in the directivity direction by the directivity processing unit 63. In this frequency analysis process, the frequencies contained in the sound data signal in the directivity direction and their sound pressure levels are detected.
  • FIG. 8 is a timing chart showing an example of the frequency change of the detected sound signal obtained as a result of the frequency analysis process.
  • four frequencies f11, f12, f13, f14 and sound pressure levels of the respective frequencies are obtained as detected sound signals (that is, detected sound data signals).
  • the fluctuation of each frequency that changes irregularly occurs due to, for example, a rotational fluctuation of the rotor (rotary blade) that slightly changes when the drone DN controls the attitude of the drone DN itself.
  • the object detection unit 65 performs the detection process of the drone DN using the frequency analysis processing result of the frequency analysis unit 64. Specifically, in the detection process of the drone DN, the object detection unit 65 detects the pattern of the detection sound obtained as a result of the frequency analysis process (see the frequencies f11 to f14 shown in FIG. 8) in the monitoring area 8. The detected sound patterns (see frequencies f1 to f4 shown in FIG. 7) registered in advance in the pattern memory of the memory 38 are compared. The object detection unit 65 determines whether or not the patterns of both detection sounds are approximate.
  • Whether the two patterns approximate each other is determined, for example, as follows: when the sound pressure level of each frequency included in the detected sound pattern exceeds a threshold (for example, the first threshold th1 shown in FIG. 9 or FIG. 11), the object detection unit 65 determines that the patterns approximate each other and detects the drone DN. Note that the drone DN may instead be detected when other conditions are satisfied.
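  • One way to realize this comparison is to check that every registered rotor frequency has, within a tolerance that absorbs the rotational fluctuation of FIG. 8, a measured sound pressure level above the threshold. The tolerance and data layout below are assumptions for illustration:

```python
def matches_pattern(measured: dict, registered_freqs: list,
                    th1_db: float, tol_hz: float = 20.0) -> bool:
    """measured: {frequency_hz: sound_pressure_level_db} from the frequency
    analysis; registered_freqs: the pattern memory entries, e.g. [f1..f4]."""
    for f_reg in registered_freqs:
        hit = any(abs(f - f_reg) <= tol_hz and level >= th1_db
                  for f, level in measured.items())
        if not hit:
            return False     # a registered component is missing: no match
    return True

# Measured peaks drift slightly (cf. f11..f14 in FIG. 8) but still match.
measured = {212.0: 62.0, 428.0: 61.0, 615.0: 63.0, 808.0: 60.5}
print(matches_pattern(measured, [200.0, 420.0, 600.0, 800.0], th1_db=60.0))  # True
```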
  • If the drone DN is not detected in the current directivity direction, the detection result determination unit 66 instructs the detection direction control unit 68 to shift detection of the drone DN to the next directivity direction.
  • the detection result determination unit 66 notifies the output control unit 35 of the detection result of the drone DN when it is determined that the drone DN exists as a result of the scanning in the directivity direction.
  • the detection result includes information on the detected drone DN.
  • the information of the drone DN includes, for example, identification information of the drone DN and position information (for example, direction information) of the drone DN in the monitoring area 8.
  • the detection direction control unit 68 controls the direction for detecting the unmanned air vehicle dn in the sound collection space based on an instruction from the detection result determination unit 66. For example, the detection direction control unit 68 detects an arbitrary direction of the directivity range BF1 (see FIG. 11) including the sound source position estimated by the sound source direction detection unit 34 in the entire sound collection space (that is, the monitoring area 8). Set as direction.
  • the scanning control unit 67 instructs the directivity processing unit 63 to perform beam forming using the detection direction set by the detection direction control unit 68 as the directivity direction.
  • the directivity processing unit 63 performs beam forming in the directivity direction instructed from the scanning control unit 67.
  • the directivity processing unit 63 sets the initial position in the directivity range BF1 (see FIG. 14) including the sound source position estimated by the sound source direction detection unit 34 as the directivity direction BF2.
  • the directivity direction BF2 is sequentially set by the detection direction control unit 68 within the directivity range BF1.
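  • The interplay of the detection direction control unit 68, the scanning control unit 67, and the directivity processing unit 63 amounts to sweeping the directivity direction BF2 across the directivity range BF1 and stopping when a registered pattern is matched. A schematic loop (the grid, step size, and function names are illustrative):

```python
def scan_directivity_range(bf1_directions, beamform, analyze, matches):
    """Sweep candidate directivity directions BF2 within the range BF1.

    bf1_directions: iterable of (horizontal, vertical) angle pairs.
    beamform(d):    emphasized sound signal for direction d.
    analyze(x):     {frequency: level} spectrum of signal x.
    matches(spec):  True if spec approximates a registered drone pattern.
    Returns the first direction in which a drone is detected, else None.
    """
    for direction in bf1_directions:      # set by the detection direction control
        spectrum = analyze(beamform(direction))
        if matches(spectrum):
            return direction              # drone DN detected in this direction
    return None                           # move on to the next directivity range

# Example grid over a directivity range, in 5-degree steps.
grid = [(h, v) for h in range(0, 360, 5) for v in range(30, 90, 5)]
```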
  • the output control unit 35 controls each operation of the monitor MN and the SPK 37, displays the omnidirectional captured image data sent from the omnidirectional camera CA on the monitor MN, and further outputs the sound data sent from the microphone array MA. Audio is output to the SPK 37.
  • the output control unit 35 superimposes an identification mark (not shown) indicating the drone DN on a corresponding position (that is, a coordinate indicating the position of the drone DN) of the omnidirectional captured image data. And display it on the monitor MN.
  • Using the sound data collected by the microphone array MA and the coordinates indicating the direction of the sound source position derived by the omnidirectional camera CA, the output control unit 35 performs the directivity forming process on the sound data collected by the microphone array MA, thereby emphasizing the sound data in the directivity direction.
  • the sound data directivity forming process is a known technique described in, for example, Japanese Patent Application Laid-Open No. 2015-029241.
  • the output control unit 35 uses the sound pressure level for each block constituting the omnidirectional captured image data calculated by the sound source direction detection unit 34, and positions the corresponding block for each block constituting the omnidirectional captured image data. A sound pressure map to which the calculated value of the sound pressure level is assigned is generated. Further, the output control unit 35 performs a color conversion process on the sound pressure level for each block of the generated sound pressure map to a visual image (for example, a colored image) so as to be visually and easily discriminable for the user. Thus, the sound pressure heat map shown in FIG. 16 is generated.
  • Although the output control unit 35 has been described as generating a sound pressure map or a sound pressure heat map in which the sound pressure level calculated in units of blocks is assigned to the position of the corresponding block, the sound pressure level may instead be calculated in units of pixels, and a sound pressure map or a sound pressure heat map in which the per-pixel sound pressure level is assigned to the position of the corresponding pixel may be generated.
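  • Producing the per-block map from a per-pixel array of levels is a simple aggregation. The sketch below averages pixel levels over non-overlapping blocks; the block size and the use of averaging are assumptions, since the text only says the level is calculated per block:

```python
import numpy as np

def block_sound_pressure_map(pixel_levels_db: np.ndarray, block: int = 4) -> np.ndarray:
    """Aggregate an (H, W) array of per-pixel sound pressure levels into a
    per-block map, e.g. block=4 for the '4*4' pixel groups mentioned above."""
    h, w = pixel_levels_db.shape
    h, w = h - h % block, w - w % block      # crop to whole blocks
    trimmed = pixel_levels_db[:h, :w]
    return trimmed.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

levels = np.random.default_rng(2).uniform(30.0, 70.0, size=(8, 12))
print(block_sound_pressure_map(levels, block=4).shape)   # (2, 3)
```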
  • FIG. 9 is a sequence diagram showing an example of the overall operation procedure of the flying object detection system 5 according to the first embodiment.
  • The flying object detection system 5 starts its operation as follows.
  • the signal processing unit 33 of the monitoring apparatus 10 sends an image distribution request to the PTZ camera CZ via the communication unit 31 (T1).
  • the PTZ camera CZ starts imaging processing according to power-on in accordance with the image distribution request sent from the monitoring device 10 (T2).
  • the signal processing unit 33 of the monitoring apparatus 10 sends an image distribution request to the omnidirectional camera CA via the communication unit 31 (T3).
  • the omnidirectional camera CA starts imaging processing according to power-on in accordance with the image distribution request sent from the monitoring device 10 (T4).
  • the signal processing unit 33 of the monitoring apparatus 10 sends a sound distribution request to the microphone array MA via the communication unit 31 (T5).
  • the microphone array MA starts sound collection processing in response to power-on in accordance with the sound distribution request sent from the monitoring device 10 (T6).
  • the PTZ camera CZ transmits PTZ captured image data (for example, a still image and a moving image) obtained by imaging to the monitoring apparatus 10 via the network NW (T7).
  • The output control unit 35 of the monitoring apparatus 10 converts the PTZ captured image data sent from the PTZ camera CZ into display data such as NTSC and sends it to the monitor MN together with an instruction to display the PTZ captured image data.
  • the monitor MN displays the data (see FIG. 16) of the PTZ image IMG2 captured by the PTZ camera CZ on the display screen according to the instruction sent from the monitoring device 10 (T9).
  • the omnidirectional camera CA transmits omnidirectional captured image data (for example, a still image and a moving image) obtained by imaging to the monitoring apparatus 10 via the network NW (T10).
  • the output control unit 35 of the monitoring device 10 converts the omnidirectional captured image data sent from the omnidirectional camera CA into display data such as NTSC, outputs it to the monitor MN, and sends an instruction to display the omnidirectional captured image data together with the display data (T11).
  • the monitor MN displays data (see FIG. 16) of the omnidirectional image IMG1 taken by the omnidirectional camera CA on the display screen according to the instruction sent from the monitoring device 10 (T9).
  • the microphone array MA encodes the sound data of the monitoring area 8 obtained by sound collection and transmits the encoded sound data to the monitoring device 10 through the network NW (T12).
  • the sound source direction detection unit 34 of the monitoring apparatus 10 executes various sound control processes (see FIG. 10) including the sound source detection process in the monitoring area 8 using the sound data sent in step T12 (T13). Details of the sound control process in step T13 will be described later with reference to FIG.
  • the output control unit 35 of the monitoring device 10 uses the sound pressure level calculated by the sound source direction detection unit 34 in step T13 for each block constituting the omnidirectional captured image data, and generates a sound pressure map in which the calculated sound pressure level value is assigned to the position of the corresponding block. The output control unit 35 then applies a color conversion process to the per-block sound pressure levels of the generated sound pressure map, converting them into a visual image (for example, a colored image) that the user can easily discriminate visually, thereby generating the sound pressure heat map shown in FIG. 16 (T14).
  • the output control unit 35 of the monitoring device 10 converts the sound pressure heat map data generated in step T14 into display data such as NTSC, outputs it to the monitor MN, and sends an instruction to superimpose the sound pressure heat map on the omnidirectional captured image data together with the display data (T15).
  • the monitor MN displays the sound pressure heat map superimposed on the omnidirectional captured image data on the display screen according to the instruction sent from the monitoring device 10 (see FIG. 16, T9).
  • the signal processing unit 33 of the monitoring device 10 uses the sound data sent from the microphone array MA in step T12 to form directivity while sequentially scanning the directivity direction within the monitoring area 8, and performs detection determination of the drone DN for each directivity direction (T16). Details of the detection determination process for the drone DN will be described later with reference to FIGS. 14 and 15.
  • if the drone DN is not detected in step T16, the process of the flying object detection system 5 according to the first embodiment returns to step T7, and the series of processes from step T7 to step T16 is repeated. On the other hand, when the drone DN is detected in step T16, the process of the flying object detection system 5 according to the first embodiment proceeds to step T17 and the subsequent steps.
  • the output control unit 35 of the monitoring device 10 instructs the monitor MN to superimpose an identification mark (not shown) indicating the drone DN detected in step T16 on the omnidirectional captured image IMG1 displayed on the display screen of the monitor MN (T17).
  • the monitor MN synthesizes (superimposes) the identification mark indicating the drone DN on the omnidirectional captured image IMG1 and displays the result (T18).
  • the output control unit 35 of the monitoring device 10 sends to the PTZ camera CZ a PTZ control request in which information regarding the detection direction of the drone DN detected in step T16 (in other words, the directivity direction set when the drone DN was detected) is associated with a request to change the optical axis direction (in other words, the imaging direction) and the zoom magnification of the PTZ camera CZ (T19).
  • in response, the imaging direction control unit 58 of the PTZ camera CZ drives the lens driving motor 59 based on, for example, the information regarding the directivity direction, changing the optical axis L2 of the imaging lens of the PTZ camera CZ so that the imaging direction matches the directivity direction (T20).
  • the imaging direction control unit 58 changes the zoom magnification of the imaging lens 55a of the PTZ camera CZ to a preset value, a value corresponding to the proportion of the drone DN within the captured image, or the like.
  • in this way, the PTZ camera CZ can adjust (that is, direct) its optical axis toward the detection direction (that is, the directivity direction) of the drone DN detected in step T16, and can change the zoom magnification so that a PTZ captured image IMG2 showing the details of the drone DN can be captured.
  • accordingly, the monitoring device 10 can display side by side on the monitor MN the omnidirectional captured image IMG1, which shows the wide area of the monitoring area 8, and the PTZ captured image IMG2, which shows the details of the drone DN detected in the monitoring area 8 (see FIG. 16).
  • FIG. 16 is a diagram illustrating a display screen example in which the omnidirectional captured image IMG1 and the PTZ captured image IMG2 are displayed side by side when the drone DN is detected.
  • when the sound pressure level for each block calculated by the sound source direction detection unit 34 is equal to or less than the third threshold th3, the block is displayed colorless; when it is greater than the third threshold th3 and equal to or less than the second threshold th2, it is displayed in blue; when it is greater than the second threshold th2 and equal to or less than the first threshold th1, it is displayed in pink; and when it is greater than the first threshold th1, it is displayed in red.
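The color rule above can be sketched as follows; the concrete threshold values are hypothetical placeholders, since the patent does not fix numeric values for th1, th2, and th3.

```python
# Illustrative color conversion for the sound pressure heat map.
TH1, TH2, TH3 = 80.0, 65.0, 50.0  # assumed thresholds in dB, th1 > th2 > th3

def heatmap_color(level_db: float):
    """Map a block's sound pressure level to an RGBA color (None = colorless)."""
    if level_db > TH1:
        return (255, 0, 0, 128)      # red: highest sound pressure
    if level_db > TH2:
        return (255, 105, 180, 128)  # pink: next highest
    if level_db > TH3:
        return (0, 0, 255, 128)      # blue
    return None                      # at or below th3: left transparent

print(heatmap_color(82.0))  # (255, 0, 0, 128)
```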
  • the omnidirectional captured image IMG1 shows a sound pressure heat map in which the presence or absence of a sound source can be determined in a wide area together with the appearance of the monitoring area 8.
  • red regions RD1, RD2, RD3, and RD4 are drawn as an example of the first visual image indicating the position of the drone DN.
  • around the red regions, pink regions PD1, PD2, PD3, and PD4, whose sound pressure values are second only to those of the red regions, are drawn as an example of the first visual image indicating the position of the drone DN.
  • around the pink regions, blue regions BD1, BD2, BD3, and BD4, whose sound pressure values are second only to those of the pink regions, are drawn as an example of the first visual image indicating the position of the drone DN.
  • the PTZ captured image IMG2 clarifies the details of the appearance of the drone DN detected in the monitoring area 8. Therefore, by browsing the omnidirectional captured image IMG1 and the PTZ captured image IMG2 of FIG. 16 side by side, the user can grasp in detail where and with what appearance the drone DN has been detected in the monitoring area 8.
  • thereafter, the process of the flying object detection system 5 according to the first embodiment returns to step T7, and each process after step T7 is repeated until a predetermined event such as a power-off operation is detected.
  • FIG. 10 is a flowchart showing an example of the operation procedure of the sound control process in step T13 of FIG.
  • FIG. 12 is a diagram illustrating a display screen example in which a sound pressure heat map generated based on the sound pressure level that is not suppressed is superimposed on the omnidirectional captured image data in the monitoring area 8.
  • FIG. 13 is a diagram illustrating a display screen example in which a sound pressure heat map, generated based on the sound pressure level suppressed as a result of the drone DN being detected and the PTZ camera CZ operating, is superimposed on the omnidirectional captured image data of the monitoring area 8.
  • based on the omnidirectional captured image data captured by the omnidirectional camera CA and the sound data collected by the microphone array MA, the sound source direction detection unit 34 calculates a sound parameter (for example, sound pressure level) for each block of the omnidirectional captured image data of the monitoring area 8, and detects the sound source in the monitoring area 8 using the calculated sound parameter (S1).
  • the detected position of the sound source may be used as a reference position of the directivity range BF1 necessary for setting the initial directivity direction when the monitoring apparatus 10 detects the drone DN in step T16.
  • the sound source direction detection unit 34 constantly monitors and measures the noise level included in the sound data collected by the microphone array MA (S2). When it determines that the noise level is less than the predetermined value (S3, NO), the sound control process of the sound source direction detection unit 34 ends. In this case, the sound source direction detection unit 34 does not execute the above-described suppression process of the frequency characteristic of the sound pressure level (see FIG. 11 and the first condition described above), and the process of step T13 ends.
  • when the sound source direction detection unit 34 determines that the noise level measured in step S2 is equal to or higher than the predetermined value (S3, YES), it refers to the memory 38 and determines whether or not the PTZ camera CZ is in operation (S4).
  • the memory 38 temporarily holds, for example, a PTZ control request sent to the PTZ camera CZ.
  • the sound source direction detection unit 34 can determine whether or not the current time is during the operation of the PTZ camera CZ based on, for example, whether or not the PTZ control request is held in the memory 38.
  • on the other hand, when it is determined that the PTZ camera CZ is not in operation (S4, NO), the suppression process of step S5 is executed, after which the sound control process of the sound source direction detection unit 34 ends, and the process of step T13 ends.
  • when the sound source direction detection unit 34 determines that the PTZ camera CZ is currently in operation (S4, YES), it refers to the memory 38 and determines whether or not the drone DN is being detected (S6). For example, in the loop (LOOP) process of FIG. 9, when it is determined that the drone DN was detected at the time of executing step T16 of the previous cycle, information regarding the detection direction of the drone DN determined at that time (that is, the scanned directivity direction) is held in the memory 38. The sound source direction detection unit 34 can determine whether or not the drone DN is currently being detected based on, for example, whether or not the information regarding the detection direction is held in the memory 38.
  • when the sound source direction detection unit 34 determines that the drone DN is currently being detected (S6, YES), it ends the sound control process. Accordingly, the sound source direction detection unit 34 does not execute the above-described suppression process of the frequency characteristic of the sound pressure level (see FIG. 11 and the second condition described above), and the process of step T13 ends.
  • on the other hand, when it is determined that the drone DN is not being detected (S6, NO), the suppression process of step S5 is executed, after which the sound control process of the sound source direction detection unit 34 ends, and the process of step T13 ends.
  • an example is shown in which, even though no actual drone DN exists in the monitoring area 8, a noise level equal to or higher than the predetermined value is generated and a ghost drone GDN1 is detected under the influence of that noise. The ghost drone GDN1 is, for example, a virtual drone erroneously detected by the monitoring device 10 based on two noise sources (specifically, sound sources located in the blue regions BD1 and BD4) because the suppression process has not been performed.
  • step S5 shown in FIG. 10 is the suppression process of the frequency characteristic of the sound pressure level.
  • when the drone DN is detected and the PTZ camera CZ operates while the surrounding noise level exceeds the predetermined value, the sound source direction detection unit 34 determines that the second condition is satisfied and does not execute the suppression process of the sound pressure level.
  • as a result, the sound source direction detection unit 34 avoids overlooking the actual drone DN because of the suppression process, and can therefore detect the actual drone DN accurately.
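The suppression decision described in steps S2 to S6 can be summarized in a short sketch; the flag handling and the numeric values below are assumptions for illustration only.

```python
# Illustrative sound control decision (FIG. 10, S2-S6): suppression is skipped
# when the noise is low (first condition) or when the drone is detected and
# the PTZ camera operates (second condition); otherwise S5 suppresses by QT.
NOISE_THRESHOLD_DB = 60.0  # assumed "predetermined value"
SUPPRESSION_QT_DB = 15.0   # assumed suppression amount QT

def sound_control(noise_db: float, ptz_operating: bool,
                  drone_detected: bool, spectrum_db: list) -> list:
    """Return the (possibly suppressed) sound pressure frequency characteristic."""
    if noise_db < NOISE_THRESHOLD_DB:
        return spectrum_db                 # S3, NO: no suppression needed
    if ptz_operating and drone_detected:
        return spectrum_db                 # S4, YES then S6, YES: skip suppression
    # Otherwise, step S5: lower the frequency characteristic by QT.
    return [level - SUPPRESSION_QT_DB for level in spectrum_db]

print(sound_control(70.0, True, True, [85.0, 82.0]))  # unchanged: [85.0, 82.0]
```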
  • FIG. 14 is a diagram illustrating an example of sequential scanning in the directivity direction performed in the monitoring area 8 when the drone DN is detected.
  • FIG. 15 is a flowchart illustrating an example of the operation procedure of the drone DN detection determination process in step T16 of FIG. 9.
  • the directivity processing unit 63 sets, for example, the directivity range BF1 based on the sound source position estimated by the sound source direction detection unit 34 as the initial position of the directivity direction BF2 (S21).
  • the initial position need not be limited to the directivity range BF1 based on the sound source position of the monitoring area 8 estimated by the sound source direction detection unit 34. That is, an arbitrary position designated by the user may be set as the initial position, and the monitoring area 8 may be scanned sequentially from there. Since the initial position is not limited, even when the sound source included in the directivity range BF1 based on the estimated sound source position is not the drone DN, a drone flying in another directivity direction can be detected at an early stage.
  • the directivity processing unit 63 determines whether or not the sound data signals collected by the microphone array MA and converted into digital values by the A/D converters A1 to Aq are temporarily stored in the memory 38 (S22). When the sound data signals are not stored (S22, NO), the processing of the directivity processing unit 63 returns to step S21.
  • when the sound data signals are stored (S22, YES), the directivity processing unit 63 performs beamforming toward an arbitrary directivity direction BF2 within the directivity range BF1 of the monitoring area 8 and extracts the sound data signal in the directivity direction BF2 (S23).
  • the frequency analysis unit 64 detects the frequency of the extracted sound data signal and its sound pressure level (S24).
  • the object detection unit 65 compares the detection sound pattern registered in the pattern memory of the memory 38 with the detection sound pattern obtained as a result of the frequency analysis process, and detects the drone DN (S25).
  • the detection result determination unit 66 notifies the output control unit 35 of the result of the comparison, and notifies the detection direction control unit 68 about the detection direction shift (S26).
  • the object detection unit 65 compares the pattern of the detection sound obtained as a result of the frequency analysis process with the four frequencies f1, f2, f3, and f4 registered in the pattern memory of the memory 38. As a result of the comparison, when at least two frequencies are common to both detection sound patterns and the sound pressure levels at those frequencies are greater than the first threshold th1, the object detection unit 65 determines that the two detection sound patterns approximate each other and that the drone DN exists.
  • alternatively, the object detection unit 65 may determine that the patterns approximate each other when even one frequency matches and the sound pressure level at that frequency is greater than the first threshold th1.
  • the object detection unit 65 may also set an allowable frequency error for each frequency and determine the presence or absence of approximation by treating frequencies within the error range as the same frequency.
  • the object detection unit 65 may additionally require, as a determination condition, that the ratios of the sound pressure levels of the sounds at the respective frequencies substantially match. In this case, since the determination condition becomes stricter, the sound source detection unit UD can more easily identify the detected drone DN as a pre-registered object, and the detection accuracy of the drone DN can be improved.
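As a sketch of the pattern comparison in step S25 (the registered frequencies, threshold, and tolerance below are hypothetical; the patent only requires at least two matching frequencies whose levels exceed th1):

```python
# Illustrative detection-sound pattern match against registered frequencies.
REGISTERED_HZ = [400.0, 800.0, 1200.0, 1600.0]  # hypothetical f1..f4
TH1_DB = 80.0                                   # assumed first threshold th1
FREQ_TOLERANCE_HZ = 20.0                        # assumed allowable frequency error

def drone_present(detected: dict) -> bool:
    """detected maps a peak frequency (Hz) to its sound pressure level (dB)."""
    matches = 0
    for f_reg in REGISTERED_HZ:
        for f_det, level in detected.items():
            # Frequencies within the tolerance are treated as the same frequency.
            if abs(f_det - f_reg) <= FREQ_TOLERANCE_HZ and level > TH1_DB:
                matches += 1
                break
    # At least two matching frequencies above th1 -> the patterns approximate.
    return matches >= 2

print(drone_present({405.0: 85.0, 1195.0: 82.0, 3000.0: 70.0}))  # True
```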
  • the detection result determination unit 66 determines whether or not the drone DN is present as a result of step S26 (S27).
  • the detection result determination unit 66 notifies the output control unit 35 that the drone DN exists (that is, the detection result of the drone DN) (S28).
  • the detection result determination unit 66 instructs the scanning control unit 67 to move the scanning direction BF2 in the monitoring area 8 to the next different direction.
  • the scanning control unit 67 moves the pointing direction BF2 to be scanned in the monitoring area 8 in the next different direction (S29).
  • the notification of the detection result of the drone DN may be collectively performed after the omnidirectional scanning is completed, not at the timing when the detection processing of one directivity direction is completed.
  • the order in which the directivity direction BF2 is sequentially moved in the monitoring area 8 may be, for example, a spiral order within the directivity range BF1 or over the entire range of the monitoring area 8, going from the outer circumference toward the inner circumference, or from the inner circumference toward the outer circumference.
  • alternatively, rather than scanning the directivity direction continuously as in a single stroke, the detection direction control unit 68 may set positions in the monitoring area 8 in advance and move the directivity direction BF2 among those positions in an arbitrary order. In this way, the monitoring device 10 can start the detection process from a position where the drone DN is likely to enter, for example, and can improve the efficiency of the detection process.
  • the scanning control unit 67 determines whether or not scanning in all directions in the monitoring area 8 has been completed (S30). When the omnidirectional scanning is not completed (S30, NO), the processing of the signal processing unit 33 returns to step S23, and the processing from steps S23 to S30 is repeated. That is, the directivity processing unit 63 performs beamforming in the directivity direction BF2 at the position moved in step S29, and extracts sound data in the directivity direction BF2. As a result, the sound source detection unit UD continues to detect other drones that may exist even if one drone DN is detected, so that a plurality of drones can be detected.
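One way to realize the spiral scan order mentioned above is sketched below; discretizing the directivity directions into a grid is an assumption made here for illustration.

```python
# Illustrative spiral ordering over a grid of candidate directivity directions,
# going from the outer circumference toward the inner circumference.
def spiral_order(rows: int, cols: int):
    """Yield (row, col) grid cells from the outer ring inward."""
    top, bottom, left, right = 0, rows - 1, 0, cols - 1
    while top <= bottom and left <= right:
        for c in range(left, right + 1):
            yield (top, c)
        for r in range(top + 1, bottom + 1):
            yield (r, right)
        if top < bottom:
            for c in range(right - 1, left - 1, -1):
                yield (bottom, c)
        if left < right:
            for r in range(bottom - 1, top, -1):
                yield (r, left)
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1

# Each cell would correspond to one directivity direction BF2 to be scanned.
print(list(spiral_order(3, 3)))  # outer ring first, center cell last
```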
  • when the omnidirectional scanning is completed (S30, YES), the directivity processing unit 63 erases the sound data collected by the microphone array MA that is temporarily stored in the memory 38 (S31). The directivity processing unit 63 may delete the sound data collected by the microphone array MA from the memory 38 and store it in the recorder RC.
  • the signal processing unit 33 determines whether or not to end the drone DN detection process (S32).
  • the detection process for the drone DN is terminated in response to a predetermined event. For example, the number of times the drone DN has not been detected in step S26 is held in the memory 38, and when this number exceeds a predetermined number, the drone DN detection process may be terminated. Further, the signal processing unit 33 may end the detection process of the drone DN based on the expiration of a timer or a user operation on a UI (User Interface, not shown) of the operation unit 32.
  • the frequency analysis unit 64 analyzes the frequency and also measures the sound pressure at that frequency.
  • the detection result determination unit 66 may determine that the drone DN is approaching the sound source detection unit UD when the sound pressure level measured by the frequency analysis unit 64 gradually increases with time.
  • for example, when the sound pressure level of a predetermined frequency measured at a certain time is smaller than the sound pressure level of the same frequency measured at a later time, it may be determined that the sound pressure is increasing with time and that the drone DN is approaching. Further, the sound pressure level may be measured three or more times, and the approach of the drone DN may be determined based on the transition of statistical values (for example, variance, average, maximum, or minimum values).
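A minimal sketch of this approach determination, assuming the sound pressure level of the same frequency is sampled repeatedly over time:

```python
# Illustrative approach check: the drone DN is judged to be approaching when
# the level at a tracked frequency rises monotonically over successive samples.
def is_approaching(levels_db: list) -> bool:
    """levels_db: sound pressure levels of the same frequency, oldest first."""
    if len(levels_db) < 2:
        return False
    return all(earlier < later
               for earlier, later in zip(levels_db, levels_db[1:]))

print(is_approaching([62.0, 66.5, 71.0]))  # True: the level grows with time
```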
  • when the measured sound pressure level is greater than a warning threshold, the detection result determination unit 66 may determine that the drone DN has entered the warning area.
  • the warning threshold is a value larger than the first threshold th1 described above, for example.
  • the warning area is, for example, the same area as the monitoring area 8 or an area included in the monitoring area 8 and narrower than the monitoring area 8.
  • the alert area is an area where entry of the drone DN is restricted, for example.
  • the approach determination and the intrusion determination of the drone DN may be executed by the detection result determination unit 66.
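The intrusion determination can likewise be sketched with an assumed warning threshold (the patent states only that it is larger than th1; the numbers here are placeholders):

```python
# Illustrative intrusion check into the warning area.
TH1_DB = 80.0       # assumed first threshold th1
WARNING_DB = 90.0   # hypothetical warning threshold, larger than th1

def entered_warning_area(level_db: float) -> bool:
    """True when the measured level suggests the drone entered the warning area."""
    return level_db > WARNING_DB

print(entered_warning_area(92.0))  # True: treated as an intrusion
```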
  • as described above, the monitoring device 10 according to the first embodiment includes the communication unit 31, which communicates with the PTZ camera CZ, whose optical axis direction is adjustable and which is arranged so as to be able to image the monitoring area 8, or with the microphone array MA, which is arranged so as to be able to collect the sound of the monitoring area 8. The monitoring device 10 also includes the signal processing unit 33, which, when the noise level included in the sound of the monitoring area 8 collected by the microphone array MA is equal to or higher than a predetermined value, suppresses the sound parameter indicating the sound volume by a predetermined value QT, and which detects the presence or absence of the drone DN in the monitoring area 8 based on the sound parameter or the suppressed sound parameter.
  • when the PTZ camera CZ adjusts its optical axis toward the detection direction in which the drone DN was detected, the monitoring device 10 omits execution of the sound parameter suppression process.
  • by suppressing (that is, lowering) the frequency characteristic of the sound pressure level by the predetermined value QT in a state where a noise level higher than the predetermined value is generated, the monitoring device 10 obtains a frequency characteristic whose sound pressure level peak is less than the first threshold (see characteristic Cv2), and erroneous detection of the ghost drone described above can be effectively suppressed.
  • the monitoring device 10 detects the presence or absence of the drone DN using sound data collected under a situation where the suppression process is not necessary or sound data collected under a situation where the suppression process is necessary. As a result, it is possible to accurately detect the presence or absence of the drone DN.
  • the monitoring device 10 determines the predetermined value QT for the frequency characteristic of the sound pressure level; therefore, the drone DN can be detected accurately without missing the actual drone DN. According to the monitoring device 10, erroneous detection of the drone DN can thus be accurately reduced according to the environment of the monitoring area 8, and deterioration in the recognition accuracy of the drone DN can be suppressed.
  • the monitoring device 10 executes the suppression process of the sound parameter (for example, sound pressure level) in the signal processing unit 33. By the suppression process, the monitoring device 10 can obtain a frequency characteristic (see characteristic Cv2) whose sound pressure level peak is less than the first threshold, so that even when the noise level is equal to or higher than the predetermined value, erroneous detection of a ghost drone can be suppressed.
  • the monitoring device 10 communicates, via the communication unit 31, with the omnidirectional camera CA arranged so as to be able to image the monitoring area 8.
  • the monitoring device 10, in the signal processing unit 33, superimposes a first visual image indicating the position of the drone DN (see, for example, FIG. 12) on the captured image of the monitoring area 8 captured by the omnidirectional camera CA (for example, the omnidirectional captured image IMG1) and displays the result on the monitor MN.
  • thereby, the user can visually grasp, on the omnidirectional captured image IMG1 displayed on the monitor MN, the position of the drone DN detected by the monitoring device 10, and can easily and specifically identify the position of the drone DN.
  • the present disclosure is useful as a flying object detection device, a flying object detection method, and a flying object detection system that accurately reduce erroneous detection of an unmanned air vehicle according to the environment of a monitoring area and suppress deterioration in the recognition accuracy of the unmanned air vehicle.
  • Reference signs: 5 flying object detection system; 10 monitoring device; 31 communication unit; 32 operation unit; 33 signal processing unit; 34 sound source direction detection unit; 35 output control unit; 37 SPK; 38 memory; 39 setting management unit; 63 directivity processing unit; 64 frequency analysis unit; 65 object detection unit; 66 detection result determination unit; 67 scanning control unit; 68 detection direction control unit; CA omnidirectional camera; CZ PTZ camera; MA microphone array; MN monitor; NW network; RC recorder; UD sound source detection unit

Abstract

This airborne object detection device is provided with: a communication unit capable of communicating with a first camera whose optical axis can be adjusted and which is disposed so as to be able to capture an image of a monitoring area, or with a microphone array disposed so as to be able to pick up sounds in the monitoring area; and a processor which, when the noise level of a sound picked up in the monitoring area is equal to or greater than a predetermined value, performs suppression processing to lower, by a given value, the sound parameter representing sound magnitude, and which detects the presence or absence of an unmanned air vehicle in the monitoring area on the basis of the sound parameter representing the magnitude of the picked-up sound or the sound parameter that has undergone suppression processing. When the optical axis direction of the first camera is to be adjusted toward the direction in which an unmanned air vehicle has been detected, the processor skips the execution of suppression processing for the sound parameter.

Description

Flying object detection device, flying object detection method, and flying object detection system
 The present disclosure relates to a flying object detection device, a flying object detection method, and a flying object detection system that detect a flying object in flight in a monitoring area.
 Conventionally, there is known a flying object monitoring apparatus capable of detecting the presence of an object and the direction from which it approaches, using a plurality of sound detection units that detect, for each direction, the sounds generated in a monitoring area (see, for example, Patent Document 1). When the processing device of this flying object monitoring apparatus detects an incoming flying object and its approach direction through sound detection with omnidirectional microphones, it points a monitoring camera in the direction from which the flying object approached. The processing device then displays the video captured by the monitoring camera on a display device.
(Patent Document 1) Japanese Unexamined Patent Publication No. 2006-168421
 However, in the related art including Patent Document 1, depending on the surrounding sound environment, a flying object that does not actually exist may be erroneously detected under the influence of that sound environment, or an already-detected flying object that does exist may be lost, so that the recognition accuracy of the flying object deteriorates.
 The present disclosure has been devised in view of the above-described conventional circumstances, and aims to provide a flying object detection device, a flying object detection method, and a flying object detection system that accurately reduce erroneous detection of an unmanned air vehicle according to the environment of a monitoring area and suppress deterioration in the recognition accuracy of the unmanned air vehicle.
 The present disclosure provides a flying object detection device including: a communication unit that communicates with a first camera, whose optical axis direction is adjustable and which is arranged so as to be able to image a monitoring area, or with a microphone array arranged so as to be able to collect the sound of the monitoring area; and a processor that, when the noise level included in the sound of the monitoring area collected by the microphone array is equal to or higher than a predetermined value, suppresses a sound parameter indicating the sound volume by a predetermined value, and that detects the presence or absence of an unmanned air vehicle in the monitoring area based on the sound parameter indicating the volume of the collected sound or the suppressed sound parameter. The processor omits execution of the sound parameter suppression process when the first camera adjusts the optical axis direction toward the detection direction in which the unmanned air vehicle was detected.
 The present disclosure also provides a flying object detection method for a flying object detection device communicably connected to a first camera whose optical axis direction is adjustable, or to a microphone array, the method including: a step of suppressing, by a predetermined value, a sound parameter indicating the sound volume when the noise level included in the sound of the monitoring area collected by the microphone array is equal to or higher than a predetermined value; and a step of detecting the presence or absence of an unmanned air vehicle in the monitoring area based on the sound parameter indicating the volume of the collected sound or the suppressed sound parameter. In the detecting step, when the first camera adjusts the optical axis direction toward the detection direction in which the unmanned air vehicle was detected, execution of the sound parameter suppression process is omitted, and the presence or absence of the unmanned air vehicle is detected based on the sound parameter indicating the volume of the sound collected by the microphone array.
 The present disclosure further provides a flying object detection system including: a first camera whose optical axis direction is adjustable and which is arranged so as to be able to image a monitoring area; a microphone array arranged so as to be able to collect the sound of the monitoring area; and a flying object detection device that communicates with the first camera or the microphone array. The flying object detection device suppresses a sound parameter indicating the sound volume by a predetermined value when the noise level included in the sound of the monitoring area collected by the microphone array is equal to or higher than a predetermined value, detects the presence or absence of an unmanned air vehicle in the monitoring area based on the sound parameter indicating the volume of the collected sound or the suppressed sound parameter, and omits execution of the sound parameter suppression process when the first camera adjusts the optical axis direction toward the detection direction in which the unmanned air vehicle was detected.
 According to the present disclosure, erroneous detection of an unmanned air vehicle can be accurately reduced according to the environment of the monitoring area, and deterioration in the recognition accuracy of the unmanned air vehicle can be suppressed.
Brief Description of Drawings
FIG. 1 is a diagram showing a system configuration example of the flying object detection system according to the first embodiment.
FIG. 2 is a diagram showing an external appearance example of the sound source detection unit.
FIG. 3 is a block diagram showing an internal configuration example of the microphone array.
FIG. 4 is a block diagram showing an internal configuration example of the omnidirectional camera.
FIG. 5 is a block diagram showing an internal configuration example of the PTZ camera.
FIG. 6 is a block diagram showing an internal configuration example of the monitoring device.
FIG. 7 is a timing chart showing examples of drone detection sound signal patterns registered in the memory.
FIG. 8 is a timing chart showing an example of the frequency change of a detection sound signal obtained as a result of the frequency analysis process.
FIG. 9 is a sequence diagram showing an example of the overall operation procedure of the flying object detection system according to the first embodiment.
FIG. 10 is a flowchart showing an example of the operation procedure of the sound control process in step T13 of FIG. 9.
FIG. 11 is an explanatory diagram schematically showing an example of the suppression of the sound pressure level in step S5 of FIG. 10.
FIG. 12 is a diagram showing a display screen example in which a sound pressure heat map generated based on the unsuppressed sound pressure level is superimposed on the omnidirectional captured image data of the monitoring area.
FIG. 13 is a diagram showing a display screen example in which a sound pressure heat map generated based on the sound pressure level suppressed as a result of the drone being detected and the PTZ camera operating is superimposed on the omnidirectional captured image data of the monitoring area.
FIG. 14 is a diagram showing an example of sequential scanning of the directivity direction performed in the monitoring area when the drone is detected.
FIG. 15 is a flowchart showing an example of the operation procedure of the drone detection determination process in step T16 of FIG. 9.
FIG. 16 is a diagram showing a display screen example in which the omnidirectional captured image and the PTZ captured image are displayed side by side when the drone is detected.
 Hereinafter, an embodiment that specifically discloses a flying object detection device, a flying object detection method, and a flying object detection system according to the present disclosure will be described in detail with reference to the drawings as appropriate. However, more detailed description than necessary may be omitted. For example, detailed descriptions of already well-known matters and redundant descriptions of substantially the same configurations may be omitted. This is to avoid making the following description unnecessarily redundant and to facilitate understanding by those skilled in the art. The accompanying drawings and the following description are provided so that those skilled in the art can fully understand the present disclosure, and are not intended to limit the subject matter described in the claims.
 As the flying object detected by the flying object detection system, an unmanned air vehicle (for example, a UAV (Unmanned Aerial Vehicle) such as a drone) flying in a monitoring area to be monitored (for example, an outdoor area such as a residential district of urban residents) is described as an example. The present disclosure can also be defined as a flying object detection device constituting the flying object detection system, a flying object detection method executed by the flying object detection device, or a flying object detection method executed by the flying object detection system.
 Hereinafter, a user of the flying object detection system (for example, a guard who patrols and secures the monitoring area) is simply referred to as a "user".
 In the first embodiment, the flying object detection device (for example, the monitoring device 10) communicates at least with the PTZ camera CZ, whose optical axis direction is adjustable and which is arranged so as to be able to image the monitoring area, and with the microphone array MA, which is arranged so as to be able to collect the sound of the monitoring area. When the noise level included in the sound of the monitoring area collected by the microphone array MA is equal to or higher than a predetermined value, the flying object detection device suppresses the sound parameter indicating the sound volume by a predetermined value, and detects the presence or absence of an unmanned air vehicle (for example, a drone) in the monitoring area based on the sound parameter indicating the volume of the collected sound or the suppressed sound parameter. When the PTZ camera CZ adjusts its optical axis direction toward the detection direction in which the unmanned air vehicle was detected, the flying object detection device omits execution of the sound parameter suppression process.
 FIG. 1 is a diagram showing a system configuration example of the flying object detection system 5 according to the first embodiment. The flying object detection system 5 detects the presence or absence of an unmanned air vehicle (see, for example, the drone DN shown in FIG. 12), which is the detection target in the monitoring area (see above). The unmanned air vehicle is, for example, a drone that flies autonomously using a GPS (Global Positioning System) function with multiple degrees of freedom (ascending, descending, turning left, moving left, turning right, moving right, or combinations thereof), or a radio-controlled helicopter flown by a third party. The unmanned air vehicle can be used for multiple purposes such as aerial photography of a destination or target object, surveillance, chemical spraying, and transportation of goods.
 Hereinafter, a multicopter-type drone DN equipped with a plurality of rotors (in other words, rotary wings) will be described as an example of the unmanned air vehicle. In a multicopter-type drone DN, when a rotor has two blades, harmonics at twice a specific frequency, and further harmonics at multiples of that frequency, are generally generated. Similarly, when a rotor has three blades, harmonics at three times the specific frequency, and further harmonics at multiples of that frequency, are generated. The same applies when the number of rotor blades is four or more.
 The flying object detection system 5 includes a plurality of sound source detection units UD, the monitoring device 10, the monitor MN, and the recorder RC. In FIG. 1, only one sound source detection unit UD is shown to simplify the drawing. The sound source detection units UD each have the same configuration and are connected to the monitoring device 10 via the network NW. Each sound source detection unit UD includes the microphone array MA, the omnidirectional camera CA (an example of a second camera), and the PTZ camera CZ (an example of a first camera).
 In the sound source detection unit UD, the microphone array MA picks up sound omnidirectionally (that is, over 360 degrees) in a non-directional state in the monitoring area (see above) where the unit is installed. The microphone array MA has a housing 15 (see FIG. 2) in which a circular opening of a predetermined width is formed at the center. The sounds picked up by the microphone array MA include, for example, mechanical operating sounds such as those of the drone DN, sounds produced by people, mechanical sounds generated by the PTZ camera CZ (for example, driving sounds of the zoom lens during pan rotation, tilt rotation, or zoom processing), and other sounds; they are not limited to the audible frequency range (that is, 20 Hz to 20 kHz) and may include low-frequency sounds below the audible range and ultrasonic sounds above it.
 The microphone array MA includes a plurality of omnidirectional microphones M1 to Mq (see FIG. 3), where q is a natural number of 2 or more. The microphones M1 to Mq are arranged concentrically at predetermined intervals (for example, uniform intervals) along the circumferential direction around the circular opening provided in the housing 15. For the microphones M1 to Mq, for example, electret condenser microphones (ECM) are used. The microphone array MA transmits the sound data signals obtained by the sound collection of the microphones M1 to Mq to the monitoring device 10 via the network NW. The arrangement of the microphones M1 to Mq is an example, and other arrangements (for example, square or rectangular arrangements) may be used; however, the microphones M1 to Mq are preferably arranged at equal intervals.
 The microphone array MA includes the plurality of microphones M1 to Mq (for example, q = 32) and a plurality of amplifiers PA1 to PAq (see FIG. 3) that respectively amplify the output signals of the microphones M1 to Mq. The analog signals output from the amplifiers are converted into digital signals by the A/D converters A1 to Aq (see FIG. 3), respectively. The number of microphones in the microphone array MA is not limited to 32 and may be another number (for example, 16, 64, or 128).
 Inside the circular opening formed at the center of the housing 15 (see FIG. 2) of the microphone array MA, the omnidirectional camera CA, whose volume substantially matches that of the circular opening, is accommodated. That is, the microphone array MA and the omnidirectional camera CA are arranged integrally, with the centers of their housings on the same axis (see FIG. 2). The omnidirectional camera CA is a camera equipped with a fisheye lens 45a (see FIG. 4) capable of capturing omnidirectional (that is, 360-degree) images of the monitoring area (see above) serving as its imaging area. In the first embodiment, the sound collection area of the microphone array MA and the imaging area of the omnidirectional camera CA are both described as the common monitoring area, but the spatial sizes (for example, volumes) of the sound collection area and the imaging area need not be the same; for example, the volume of the sound collection area may be larger or smaller than that of the imaging area. It suffices that the sound collection area and the imaging area share a common spatial portion. The omnidirectional camera CA functions, for example, as a monitoring camera capable of imaging the imaging area where the sound source detection unit UD is installed. That is, the omnidirectional camera CA has an angle of view of, for example, 180 degrees vertically and 360 degrees horizontally, and images the monitoring area 8 (see FIG. 14), which is, for example, a hemisphere, as its imaging area.
 In the sound source detection unit UD, the omnidirectional camera CA is fitted inside the circular opening of the housing 15, so that the omnidirectional camera CA and the microphone array MA are arranged coaxially. Since the optical axis of the omnidirectional camera CA coincides with the central axis of the housing of the microphone array MA, the imaging area and the sound collection area in the axial circumferential direction (that is, the horizontal direction) are substantially the same, and the position of a subject in the omnidirectional image captured by the omnidirectional camera CA (in other words, the direction indicating the position of the subject viewed from the omnidirectional camera CA) and the position of a sound source to be collected by the microphone array MA (in other words, the direction indicating the position of the sound source viewed from the microphone array MA) can be expressed in the same coordinate system (for example, coordinates expressed as (horizontal angle, vertical angle)). In order to detect the drone DN flying overhead, the sound source detection unit UD is attached so that, for example, the upward vertical direction becomes the sound collection surface and the imaging surface (see FIG. 2).
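As a purely illustrative sketch of this shared coordinate system (the patent does not specify the camera's projection model, so the equidistant fisheye assumption, function name, and parameters here are hypothetical), a pixel in the omnidirectional image can be mapped to a (horizontal angle, vertical angle) pair as follows:

```python
# Illustrative pixel-to-angle mapping under an assumed equidistant fisheye model.
import math

def pixel_to_angles(x: float, y: float, cx: float, cy: float,
                    radius: float) -> tuple:
    """Map an image pixel to (horizontal_deg, vertical_deg).

    cx, cy: image center (optical axis); radius: pixel radius of the 180-degree
    field of view. The same angles can address a beamforming direction.
    """
    dx, dy = x - cx, y - cy
    horizontal = math.degrees(math.atan2(dy, dx))    # azimuth, -180..180 degrees
    r = math.hypot(dx, dy)
    vertical = 90.0 * (1.0 - min(r / radius, 1.0))   # 90 at zenith, 0 at the rim
    return horizontal, vertical

print(pixel_to_angles(960, 540, 960, 540, 540))  # image center -> (0.0, 90.0)
```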
 The monitoring device 10, as an example of a flying object detection device, is configured using a computer such as a PC (Personal Computer) or a server. In the omnidirectional sound collected by the microphone array MA, the monitoring device 10 can form directivity with an arbitrary direction as the main beam direction based on a user operation (that is, beamforming), thereby emphasizing the sound in that directivity direction. The details of the process of beamforming the sound collected by the microphone array MA to form directivity in the sound data are a known technique, as disclosed in, for example, Reference Patent Documents 1 and 2.
 (Reference Patent Document 1) Japanese Laid-Open Patent Publication No. 2014-143678
 (Reference Patent Document 2) Japanese Laid-Open Patent Publication No. 2015-029241
 Using the captured image captured by the omnidirectional camera CA, the monitoring device 10 generates an omnidirectional captured image having a 360-degree angle of view, or a panoramic captured image obtained by cutting out a specific range (direction) of the omnidirectional captured image and converting it into two dimensions (hereinafter, the "captured image having a 360-degree angle of view" and the "panoramic captured image" are collectively referred to as the "omnidirectional captured image"). The omnidirectional captured image may be generated by the omnidirectional camera CA instead of the monitoring device 10.
 The monitoring device 10 superimposes, on the omnidirectional captured image captured by the omnidirectional camera CA, an image of a sound pressure heat map (see FIG. 13) generated based on calculated values of a sound parameter (for example, sound pressure level) that specifies the volume of the sound collected by the microphone array MA, and displays the result on the monitor MN.
 The monitoring device 10 may display a first visual image (for example, an identification mark, not shown) that makes the detected drone DN easy for the user to identify visually, at the position (that is, the coordinates) of the corresponding drone DN in the omnidirectional captured image IMG1. The first visual image is, for example, an image in which the position of the drone DN is explicitly indicated in the omnidirectional captured image IMG1 to such an extent that, when the user views the image, the drone DN can be clearly distinguished from other subjects; the same applies hereinafter.
 The monitor MN displays the omnidirectional captured image IMG1 output from the monitoring device 10, or a composite image in which the identification mark (see above) is superimposed on the omnidirectional captured image IMG1. The monitor MN may be configured as a device integrated with the monitoring device 10.
 The recorder RC is configured using, for example, a hard disk drive, or a semiconductor memory such as a solid state drive or flash memory, and records the data of various captured images generated by the monitoring device 10 as well as the omnidirectional captured image data and various sound data sent from each sound source detection unit UD. The recorder RC may be configured as a device integrated with the monitoring device 10, or may be omitted from the configuration of the flying object detection system 5.
 In FIG. 1, the plurality of sound source detection units UD and the monitoring device 10 each have a communication interface and are connected to each other via the network NW so that data communication is possible. The network NW may be a wired network (for example, an intranet, the Internet, or a wired LAN (Local Area Network)) or a wireless network (for example, a wireless LAN). Each sound source detection unit UD, or some of them, may be connected directly to the monitoring device 10 without going through the network NW. The monitoring device 10, the monitor MN, and the recorder RC are all installed in the monitoring room RM where the user resides during monitoring.
FIG. 2 shows an example of the appearance of the sound source detection unit UD. The sound source detection unit UD includes the microphone array MA, the omnidirectional camera CA, and the PTZ camera CZ, as well as a support base 70 that mechanically supports them. The support base 70 combines a tripod 71, two rails 72 fixed to a top plate 71a of the tripod 71, and a first mounting plate 73 and a second mounting plate 74 attached to the two ends of the two rails 72, respectively.

The first mounting plate 73 and the second mounting plate 74 are mounted so as to straddle the two rails 72 and lie in substantially the same plane. Each plate can slide along the two rails 72, and the plates are adjusted and fixed at positions closer to or farther from each other.

The first mounting plate 73 is a disk-shaped plate. A circular opening 73a is formed at its center, and the casing 15 of the microphone array MA is housed and fixed in the circular opening 73a. The second mounting plate 74, in contrast, is a substantially rectangular plate. A circular opening 74a is formed in a portion near its outer edge, and the PTZ camera CZ is housed and fixed in the circular opening 74a.

As shown in FIG. 2, the optical axis L1 of the omnidirectional camera CA housed in the casing 15 of the microphone array MA and the optical axis L2 of the PTZ camera CZ attached to the second mounting plate 74 are set to be parallel to each other in the initial installation state.

The tripod 71 is supported on the ground by three legs 71b. By manual operation, the position of the top plate 71a can be moved vertically with respect to the ground surface, and the orientation of the top plate 71a can be adjusted in the pan and tilt directions. This allows the sound pickup area of the microphone array MA (in other words, the imaging area of the omnidirectional camera CA, or the monitoring area of the flying object detection system 5) to be set in an arbitrary direction.
FIG. 3 is a block diagram showing an example of the internal configuration of the microphone array MA. The microphone array MA shown in FIG. 3 includes a plurality of microphones M1 to Mq (for example, q = 32), a plurality of amplifiers PA1 to PAq that respectively amplify the sound data signals output from the microphones M1 to Mq, a plurality of A/D converters A1 to Aq that respectively convert the analog sound data signals output from the amplifiers PA1 to PAq into digital sound data signals, a compression processing unit 25, and a transmission unit 26.

The compression processing unit 25 generates packets of sound data signals based on the digital sound data signals output from the A/D converters A1 to Aq. The transmission unit 26 transmits the sound data signal packets generated by the compression processing unit 25 (hereinafter simply referred to as "sound data") to the monitoring device 10 via the network NW.

In this way, when the microphone array MA receives a sound distribution request (see FIG. 9) from the monitoring device 10, it amplifies the sound data signals output from the microphones M1 to Mq with the corresponding amplifiers PA1 to PAq and converts them into digital sound data signals with the corresponding A/D converters A1 to Aq. The microphone array MA then generates sound data signal packets in the compression processing unit 25 and continues transmitting these packets to the monitoring device 10 via the network NW.
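The capture chain just described (amplify per channel, digitize, packetize, transmit) can be sketched as follows. This is a minimal illustration only, assuming q = 32 channels, a 48 kHz sampling rate, an arbitrary gain, and a hypothetical packet header; the compression performed by the compression processing unit 25 is omitted.

```python
import numpy as np

Q_MICS = 32          # number of microphones M1..Mq (q = 32 in the example above)
GAIN = 20.0          # per-channel amplifier gain; illustrative value only
FRAME_SAMPLES = 480  # one 10 ms frame at an assumed 48 kHz sampling rate

def amplify_and_digitize(analog: np.ndarray) -> np.ndarray:
    """Model the amplifier stage PA1..PAq followed by the A/D stage A1..Aq:
    scale each channel, clip to full scale, quantize to 16-bit integers."""
    amplified = np.clip(analog * GAIN, -1.0, 1.0)
    return (amplified * 32767).astype(np.int16)

def packetize(frame: np.ndarray, seq: int) -> bytes:
    """Bundle one multichannel frame into a sound-data packet (the actual
    compression in the compression processing unit 25 is omitted here)."""
    header = seq.to_bytes(4, "big") + frame.shape[1].to_bytes(2, "big")
    return header + frame.tobytes()

# Example: one frame of low-level synthetic input across all channels
analog_frame = np.random.uniform(-0.01, 0.01, size=(FRAME_SAMPLES, Q_MICS))
packet = packetize(amplify_and_digitize(analog_frame), seq=0)
print(len(packet), "bytes")   # header + 480 * 32 * 2 bytes
```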
FIG. 4 is a block diagram showing an example of the internal configuration of the omnidirectional camera CA. The omnidirectional camera CA shown in FIG. 4 includes a CPU 41, a communication unit 42, a power management unit 44, an image sensor 45, a fisheye lens 45a, a memory 46, and a network connector 47.

The CPU 41 performs signal processing for the overall control of the operation of each unit of the omnidirectional camera CA, data input/output processing with the other units, data computation processing, and data storage processing. A processor such as an MPU (Micro Processing Unit) or a DSP (Digital Signal Processor) may be provided instead of the CPU 41.

For example, in accordance with a designation made by the user operating the monitoring device 10, the CPU 41 generates two-dimensional panoramic image data (that is, image data subjected to two-dimensional panoramic conversion) by cutting out an image of a specific range (direction) from the captured image data having a 360-degree angle of view generated by the image sensor 45, and stores it in the memory 46.

The image sensor 45 is configured using, for example, a CMOS (Complementary Metal Oxide Semiconductor) sensor or a CCD (Charge Coupled Device) sensor. By photoelectrically converting the optical image of the incident light from the monitoring area focused by the fisheye lens 45a, it generates captured image data of the monitoring area with a 360-degree angle of view and sends it to the CPU 41.

The fisheye lens 45a receives and focuses incident light from all directions of the imaging area (that is, the monitoring area), and forms an optical image of the incident light on the imaging surface of the image sensor 45.
The memory 46 includes a ROM 46z that stores programs defining the operation of the omnidirectional camera CA and setting value data, a RAM 46y that stores captured image data with a 360-degree angle of view or panoramic captured image data cut out from part of that range, as well as work data, and a memory card 46x that is removably connected to the omnidirectional camera CA and stores various data.

The communication unit 42 is a communication interface that controls data communication with the network NW connected via the network connector 47.

The power management unit 44 supplies DC power to each unit of the omnidirectional camera CA. The power management unit 44 may also supply DC power to devices connected to the network NW via the network connector 47.

The network connector 47 is a connector that transmits captured image data with a 360-degree angle of view or panoramic captured image data (that is, the omnidirectional captured image data described above) to the monitoring device 10 via the network NW, and that can also be powered via the network cable.
FIG. 5 is a block diagram showing an example of the internal configuration of the PTZ camera CZ. Units similar to those of the omnidirectional camera CA are given reference numerals corresponding to those in FIG. 4, and their description is omitted. The PTZ camera CZ is a camera whose optical axis direction (that is, the imaging direction of the PTZ camera CZ) and zoom magnification can be adjusted in response to a PTZ control request (see FIG. 9) from the monitoring device 10.

Like the omnidirectional camera CA, the PTZ camera CZ has a CPU 51, a communication unit 52, a power management unit 54, an image sensor 55, an imaging lens 55a, a memory 56, and a network connector 57, and additionally has an imaging direction control unit 58 and a lens driving motor 59. When the CPU 51 receives a PTZ control request (see FIG. 9) from the monitoring device 10, it notifies the imaging direction control unit 58 of an angle-of-view change instruction.

In accordance with the angle-of-view change instruction notified by the CPU 51, the imaging direction control unit 58 controls the imaging direction of the PTZ camera CZ in at least one of the pan direction and the tilt direction and, as necessary, sends a control signal for changing the zoom magnification to the lens driving motor 59. In accordance with this control signal, the lens driving motor 59 drives the imaging lens 55a to change its imaging direction (the direction of the optical axis L2 shown in FIG. 2) and adjusts the focal length of the imaging lens 55a to change the zoom magnification.

The imaging lens 55a is configured using one or more lenses. In the imaging lens 55a, the optical axis direction for pan and tilt rotation, or the zoom magnification, is changed by driving the lens driving motor 59 in accordance with the control signal from the imaging direction control unit 58.
FIG. 6 is a block diagram showing an example of the internal configuration of the monitoring device 10. The monitoring device 10 shown in FIG. 6 includes at least a communication unit 31, an operation unit 32, a signal processing unit 33, an SPK 37, a memory 38, and a setting management unit 39. SPK is an abbreviation for speaker.

The communication unit 31 receives the omnidirectional captured image data transmitted by the omnidirectional camera CA and the sound data transmitted by the microphone array MA, and sends them to the signal processing unit 33.

The operation unit 32 is a user interface (UI) for notifying the signal processing unit 33 of the content of the user's input operations, and is configured with input devices such as a mouse and a keyboard. The operation unit 32 may also be configured using, for example, a touch panel or touch pad arranged to correspond to the display screen of the monitor MN, allowing direct input operation with the user's finger or a stylus pen.

When the user designates the red region RD1 of the sound pressure heat map (see FIG. 13) superimposed on the omnidirectional captured image IMG1 of the omnidirectional camera CA on the monitor MN, the operation unit 32 acquires the coordinates indicating the designated position and sends them to the signal processing unit 33. The signal processing unit 33 reads the sound data picked up by the microphone array MA from the memory 38, forms directivity from the microphone array MA toward the actual sound source position corresponding to the designated position, and outputs the sound from the SPK 37. This allows the user to listen, in an emphasized state, to the sound at any position the user designates on the omnidirectional captured image IMG1, not only that of the drone DN.

The signal processing unit 33 is configured using a processor such as a CPU (Central Processing Unit), an MPU (Micro Processing Unit), a DSP (Digital Signal Processor), or an FPGA (Field Programmable Gate Array). The signal processing unit 33 (that is, the processor) performs control processing for the overall control of the operation of each unit of the monitoring device 10, data input/output processing with the other units, data computation processing, and data storage processing. The signal processing unit 33 includes a sound source direction detection unit 34, an output control unit 35, a directivity processing unit 63, a frequency analysis unit 64, an object detection unit 65, a detection result determination unit 66, a scanning control unit 67, and a detection direction control unit 68. The monitoring device 10 is connected to the monitor MN.
The sound source direction detection unit 34 estimates the sound source position in the monitoring area using the sound picked up by the microphone array MA, for example in accordance with the known whitened cross-correlation method (CSP (Cross-power Spectrum Phase analysis) method). In the CSP method, the sound source direction detection unit 34 divides the monitoring area 8 shown in FIG. 14 into a plurality of blocks (described later) and, when sound is picked up by the microphone array MA, determines for each block whether a sound parameter indicating the loudness of the sound (for example, the normalized output value of the cross-correlation between the sound data picked up by multiple pairs of two microphones among the microphones M1 to Mq constituting the microphone array MA) exceeds a threshold (a predetermined value). In this way, the sound source position within the monitoring area 8 can be roughly estimated.

Based on the omnidirectional captured image data captured by the omnidirectional camera CA and the sound data picked up by the microphone array MA, the sound source direction detection unit 34 calculates, for each block constituting the omnidirectional captured image data (a block denotes a set of a predetermined number of pixels, for example 2×2 or 4×4 pixels; the same applies hereinafter), a sound parameter indicating the loudness of the sound at the position corresponding to that block (for example, the normalized output value of the cross-correlation described above, or the sound pressure level). The sound source direction detection unit 34 sends the calculation result of the sound parameter for each block constituting the omnidirectional captured image data to the output control unit 35. Since this sound parameter calculation is a known technique, a detailed description is omitted.
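The CSP quantity referred to above can be illustrated with a short sketch. This is a generic whitened (phase-transform) cross-correlation between one pair of microphones, with an assumed threshold value; it is not the embodiment's exact computation.

```python
import numpy as np

def csp_coefficient(x: np.ndarray, y: np.ndarray) -> float:
    """Peak of the whitened (phase-transform) cross-correlation between two
    microphone signals, normalized so a coherent source approaches 1.0;
    this is the kind of value the text compares against a threshold."""
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    cross = X * np.conj(Y)
    cross /= np.abs(cross) + 1e-12       # whitening: keep phase, drop magnitude
    return float(np.max(np.fft.irfft(cross, len(x))))

def block_has_source(x: np.ndarray, y: np.ndarray, threshold: float = 0.5) -> bool:
    """Per-block presence test: does the sound parameter exceed the threshold?"""
    return csp_coefficient(x, y) > threshold

# Example: the same tone arriving at the second microphone 5 samples later
t = np.arange(1024) / 16000.0
tone = np.sin(2 * np.pi * 900.0 * t)
print(block_has_source(tone, np.roll(tone, 5)))   # True
```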
The sound source direction detection unit 34 constantly monitors the noise level contained in the sound data picked up by the microphone array MA and, when the noise level reaches or exceeds a predetermined value, suppresses the sound parameter indicating the loudness of the sound data (for example, the normalized output value of the cross-correlation described above, or the sound pressure level) by a predetermined amount QT (see FIG. 11).

The reason for this suppression processing is as follows.

Specifically, if the noise level is high enough to exceed a certain value (a predetermined value) even though no drone DN exists in the monitoring area 8, there is a high possibility that the monitoring device 10 will erroneously detect that a "ghost drone" has appeared in the monitoring area. Here, a ghost drone is a virtual drone: no actual drone DN has been detected, but the noise level above the aforementioned certain value causes the system to erroneously detect a drone DN as if one were present there.

FIG. 11 is an explanatory diagram schematically showing an example of the sound pressure level suppression in step St5 of FIG. 10. FIG. 11 is not limited to suppression of the sound pressure level; it is also applicable as an example of suppressing the normalized output value of the cross-correlation described above. A detailed description of FIG. 10 is given later; here, the sound pressure level suppression processing is briefly explained with reference to FIG. 11. In the two graphs shown in FIG. 11, the horizontal axes indicate frequency and the vertical axes indicate sound pressure level. The characteristic Cv1 is an example of the sound pressure level characteristic before the suppression processing is performed (that is, the frequency characteristic of the sound data picked up by the microphone array MA). Similarly, the characteristic Cv2 is an example of the sound pressure level characteristic after the suppression processing (that is, the frequency characteristic of the sound data picked up by the microphone array MA, suppressed by the predetermined amount QT). The frequency fp is the frequency at which the peak value of the sound pressure level in the characteristics Cv1 and Cv2 is obtained. The first threshold is a threshold for determining whether a drone DN is present, and is referred to specifically in the object detection unit 65 described later.

When the noise level (not shown) contained in the sound data picked up by the microphone array MA exceeds the aforementioned certain value, the sound source direction detection unit 34 suppresses the frequency characteristic of the sound pressure level shown by the characteristic Cv1 by the predetermined amount QT (see FIG. 11). In this way, while a noise level higher than the certain value is occurring, the monitoring device 10 suppresses (that is, lowers) the frequency characteristic of the sound pressure level by QT, so that, as shown in FIG. 11, a frequency characteristic whose sound pressure level peak is below the first threshold (see characteristic Cv2) is obtained, and erroneous detection of the ghost drone described above can be effectively suppressed.
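A minimal sketch of this suppression step follows, with assumed values standing in for QT and for the noise limit (the text does not give concrete numbers).

```python
import numpy as np

QT_DB = 10.0           # fixed suppression amount standing in for QT (assumed)
NOISE_LIMIT_DB = 70.0  # the "certain value" for the noise level (assumed)

def suppress_if_noisy(spectrum_db: np.ndarray, noise_level_db: float) -> np.ndarray:
    """Lower the whole sound-pressure frequency characteristic by QT when the
    monitored noise level exceeds the limit (characteristic Cv1 -> Cv2);
    otherwise return the spectrum unchanged."""
    if noise_level_db > NOISE_LIMIT_DB:
        return spectrum_db - QT_DB
    return spectrum_db

cv1 = np.array([55.0, 62.0, 83.0, 58.0])          # per-frequency levels in dB
cv2 = suppress_if_noisy(cv1, noise_level_db=75.0)
print(cv2)   # the 83 dB peak drops to 73 dB, possibly below the first threshold
```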
However, as will be described in detail with reference to FIG. 9, when the PTZ camera CZ receives a PTZ control request from the monitoring device 10 in response to the detection of a drone DN, it performs PTZ control. At this time, loud operating sounds (see above) are produced during the pan rotation, tilt rotation, and movement of the imaging lens 55a of the PTZ camera CZ. If the sound source direction detection unit 34 treats these loud sounds as a noise level higher than the aforementioned certain value and suppresses the frequency characteristic of the sound pressure level by QT, the peak value of the sound pressure level may fall below the first threshold, giving rise to the problem that a drone DN that is actually present appears not to exist (in other words, the drone DN is lost).

One conceivable countermeasure is for the sound source direction detection unit 34 to skip the suppression of the frequency characteristic of the sound pressure level by QT while the PTZ camera CZ is operating (that is, during pan rotation, tilt rotation, or movement of the imaging lens 55a).

However, if the suppression by QT is skipped while no drone DN is present, the ghost drone described above may appear in the monitoring area 8 and be erroneously detected as a drone DN.

Therefore, in the first embodiment, the sound source direction detection unit 34 skips the suppression of the frequency characteristic of the sound pressure level by QT (step St5 shown in FIG. 10) when either of the following first and second conditions is satisfied (see FIG. 10).

The first condition is that the noise level contained in the sound data picked up by the microphone array MA is not higher than the certain value. When the first condition is satisfied, the noise level is below the certain value, so the noise cannot give rise to the ghost drone described above, and the monitoring device 10 is not expected to erroneously detect a ghost drone.

The second condition is that the noise level contained in the sound data picked up by the microphone array MA is recognized as being higher than the certain value because the monitoring device 10 has detected the presence of a drone DN and the PTZ camera CZ is operating. When the second condition is satisfied, if suppression were performed while the PTZ camera CZ operates in response to the drone detection (for example, pan rotation, tilt rotation, or zoom processing), the drop in sound pressure level would prevent the actually present drone DN from being detected, the drone would be missed, and the drone detection accuracy of the monitoring device 10 would be degraded. Details of whether the sound source direction detection unit 34 performs the suppression by QT are described below with reference to FIG. 10.
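The two skip conditions reduce to a small decision function, sketched here; the function name and the way PTZ operation is signalled are illustrative assumptions.

```python
def suppression_enabled(noise_db: float, noise_limit_db: float,
                        drone_detected: bool, ptz_operating: bool) -> bool:
    """Decide whether to apply the QT suppression, skipping it under either
    condition described above (step St5 of FIG. 10 is omitted when False)."""
    if noise_db <= noise_limit_db:
        return False   # first condition: noise is not above the certain value
    if drone_detected and ptz_operating:
        return False   # second condition: the noise is the PTZ camera's own
    return True        # high noise with no drone: suppress to avoid ghosts

# Example: PTZ motion noise after a detection does not trigger suppression
print(suppression_enabled(78.0, 70.0, drone_detected=True, ptz_operating=True))
```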
The setting management unit 39 holds a coordinate conversion formula for converting the coordinates of a position designated by the user on the display screen of the monitor MN showing the omnidirectional captured image data captured by the omnidirectional camera CA. This coordinate conversion formula converts the coordinates of the user-designated position on the omnidirectional captured image data (that is, (horizontal angle, vertical angle)) into coordinates of the direction as seen from the PTZ camera CZ, based on, for example, the physical distance between the installation position of the omnidirectional camera CA (see FIG. 2) and that of the PTZ camera CZ (see FIG. 2).

Using the coordinate conversion formula held by the setting management unit 39, the signal processing unit 33 calculates coordinates (θMAh, θMAv) indicating the directivity direction from the installation position of the PTZ camera CZ (see FIG. 2) toward the actual sound source position corresponding to the position designated by the user. θMAh is the horizontal angle of the direction toward the actual sound source position corresponding to the user-designated position as seen from the installation position of the PTZ camera CZ, and θMAv is the corresponding vertical angle. As shown in FIG. 2, the distance between the omnidirectional camera CA and the PTZ camera CZ is known and their optical axes L1 and L2 are parallel, so the calculation of this coordinate conversion formula can be realized, for example, by known geometric calculation. The sound source position is the actual sound source position corresponding to the position designated via the operation unit 32 by operation of the user's finger or a stylus pen on the omnidirectional captured image data displayed on the monitor MN.

As shown in FIG. 2, in the first embodiment the omnidirectional camera CA and the microphone array MA are arranged so that the optical axis direction of the omnidirectional camera CA and the central axis of the casing of the microphone array MA are coaxial. For this reason, the coordinates of the user-designated position calculated by the omnidirectional camera CA in response to the user's designation on the monitor MN displaying the omnidirectional captured image data can be regarded as identical to the sound emphasis direction (also called the directivity direction) as seen from the microphone array MA. In other words, when the user makes a designation on the monitor MN displaying the omnidirectional captured image data, the monitoring device 10 transmits the coordinates of the designated position on the omnidirectional captured image data to the omnidirectional camera CA. Using these coordinates, the omnidirectional camera CA calculates coordinates (horizontal angle, vertical angle) indicating, as seen from the omnidirectional camera CA, the direction of the sound source position corresponding to the user-designated position. Since this calculation in the omnidirectional camera CA is a known technique, its description is omitted. The omnidirectional camera CA transmits the calculated coordinates indicating the direction of the sound source position to the monitoring device 10, which can use these coordinates (horizontal angle, vertical angle) as coordinates indicating the direction of the sound source position as seen from the microphone array MA.

However, when the omnidirectional camera CA and the microphone array MA are not arranged coaxially, the setting management unit 39 needs to convert the coordinates calculated by the omnidirectional camera CA into coordinates of the direction as seen from the microphone array MA, for example in accordance with the method described in Japanese Patent Application Laid-Open No. 2015-029241.
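As an indication of the kind of geometric calculation involved, the sketch below re-expresses a direction seen from the omnidirectional camera as a direction seen from the PTZ camera, assuming parallel optical axes, a known horizontal baseline, and an assumed source distance; the formula actually held by the setting management unit 39 is not disclosed in this excerpt.

```python
import math

def omni_to_ptz_angles(theta_h: float, theta_v: float,
                       baseline_m: float, range_m: float) -> tuple:
    """Convert a direction (horizontal and vertical angles, in degrees) seen
    from the omnidirectional camera into angles seen from the PTZ camera,
    under the simplifying assumptions stated above."""
    # Place the source at the assumed range along the given direction.
    x = range_m * math.cos(math.radians(theta_v)) * math.sin(math.radians(theta_h))
    y = range_m * math.sin(math.radians(theta_v))
    z = range_m * math.cos(math.radians(theta_v)) * math.cos(math.radians(theta_h))
    x -= baseline_m   # shift the origin to the PTZ camera's installation position
    theta_h2 = math.degrees(math.atan2(x, z))
    theta_v2 = math.degrees(math.atan2(y, math.hypot(x, z)))
    return theta_h2, theta_v2

# With a 20 cm baseline and a source 50 m away, the correction is small.
print(omni_to_ptz_angles(30.0, 15.0, baseline_m=0.2, range_m=50.0))
```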
The setting management unit 39 holds a first threshold th1, a second threshold th2, and a third threshold th3 (see, for example, FIG. 9) to be compared with the sound pressure level of each block constituting the omnidirectional captured image data calculated by the sound source direction detection unit 34. Here, the sound pressure level is used as an example of a sound parameter indicating the loudness of sound generated in the monitoring area 8 and picked up by the microphone array MA, and is a different concept from the volume of the sound output from the SPK 37. The first threshold th1, the second threshold th2, and the third threshold th3 are thresholds compared with the sound pressure level of sound generated within the monitoring area 8, and may be set, for example, as thresholds for determining the presence or absence of sound generated by a drone DN. More thresholds than these three may be set, or only the first threshold th1 may be used. For simplicity of explanation, three thresholds are set here: the first threshold th1, the smaller second threshold th2, and the still smaller third threshold th3 (first threshold th1 > second threshold th2 > third threshold th3; see FIG. 9).

As described later, in the sound pressure heat map generated by the output control unit 35, the red region RD1 (see FIG. 16) of pixels whose sound pressure level exceeds the first threshold th1 is drawn, for example, in red on the monitor MN displaying the omnidirectional captured image data. The pink region PD1 of pixels whose sound pressure level is greater than the second threshold th2 and at most the first threshold th1 is drawn, for example, in pink. The blue region BD1 of pixels whose sound pressure level is greater than the third threshold th3 and at most the second threshold th2 is drawn, for example, in blue. The region N1 (see FIG. 16) of pixels whose sound pressure level is at or below the third threshold th3 is drawn, for example, colorless on the monitor MN, that is, indistinguishable from the display colors of the omnidirectional captured image data.
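The threshold-to-color rule can be captured in a few lines; the dB threshold values below are placeholders, since the text does not specify them.

```python
def heatmap_color(level_db: float, th1: float = 80.0,
                  th2: float = 70.0, th3: float = 60.0) -> str:
    """Map a per-block (or per-pixel) sound pressure level to the display
    colors described above (th1 > th2 > th3)."""
    if level_db > th1:
        return "red"          # e.g. region RD1
    if level_db > th2:
        return "pink"         # e.g. region PD1
    if level_db > th3:
        return "blue"         # e.g. region BD1
    return "transparent"      # region N1: drawn colorless over the image

print([heatmap_color(v) for v in (85.0, 72.0, 65.0, 40.0)])
# ['red', 'pink', 'blue', 'transparent']
```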
The SPK 37 is a speaker that outputs the sound data of the monitoring area 8 picked up by the microphone array MA, the sound data after its sound pressure level has been suppressed by the predetermined amount QT, or the sound data picked up by the microphone array MA for which directivity has been formed by the signal processing unit 33. The SPK 37 may be configured as a device separate from the monitoring device 10.

The memory 38 is configured using, for example, a ROM and a RAM, and holds various data including sound data of fixed intervals, setting information, programs, and the like. The memory 38 also has a pattern memory (see FIG. 7) in which sound patterns unique to individual drones DN are registered, and further stores the sound pressure heat map data generated by the output control unit 35. In addition, data of an identification mark (not shown) schematically representing the position of the drone DN is registered in the memory 38. The identification mark used here is, for example, a star-shaped symbol. The identification mark is not limited to a star shape and may be a circle, a triangle, a square, or a symbol or character, such as a "卍" shape, that is reminiscent of a drone DN. The display mode of the identification mark may also differ between daytime and nighttime; for example, a star shape may be used in the daytime and, at night, a square that cannot be mistaken for a star. The identification mark may also be changed dynamically; for example, a star-shaped symbol may be blinked or rotated to draw the user's attention even more effectively.

FIG. 7 is a timing chart showing an example of a drone DN detection sound pattern registered in the memory 38. The detection sound pattern shown in FIG. 7 is a combination of frequency patterns and includes sounds of four frequencies f1, f2, f3, and f4 generated by, for example, the rotation of the four rotors mounted on a multicopter-type drone DN. The signals of the respective frequencies are signals of different sound frequencies generated, for example, by the rotation of the blades supported on each rotor shaft.

In FIG. 7, the hatched frequency regions are regions of high sound pressure level. The detection sound pattern may include not only the number of sounds at multiple frequencies and their sound pressure levels but also other sound information, for example a sound pressure level ratio representing the ratio of the sound pressure levels at the respective frequencies. Here, as an example, the detection of a drone DN is judged by whether the sound pressure level of each frequency included in the detection sound pattern exceeds a threshold.

The directivity processing unit 63 performs the directivity forming processing (beamforming) described above using the sound data signals picked up by the omnidirectional microphones M1 to Mq, and extracts a sound data signal whose directivity direction is the direction of the monitoring area 8. The directivity processing unit 63 can also extract a sound data signal whose directivity range is a range of directions in the monitoring area 8. Here, the directivity range is a range containing a plurality of adjacent directivity directions, and is intended to have a certain angular spread compared with a single directivity direction.
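As an indication of what the directivity forming (beamforming) step does, the following is a textbook delay-and-sum sketch for a planar array; the array geometry, sampling rate, and steering convention are assumptions, and the embodiment's own beamformer may differ.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, at roughly room temperature

def delay_and_sum(signals: np.ndarray, mic_xy: np.ndarray,
                  azimuth_deg: float, fs: float) -> np.ndarray:
    """Steer the array toward one azimuth by aligning and averaging channels,
    applying fractional delays in the frequency domain."""
    u = np.array([np.cos(np.radians(azimuth_deg)),
                  np.sin(np.radians(azimuth_deg))])
    delays = mic_xy @ u / SPEED_OF_SOUND            # arrival offset per channel
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    aligned = spectra * np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n)

# Example: four microphones 10 cm apart on a line, steered toward 60 degrees
mic_xy = np.array([[i * 0.1, 0.0] for i in range(4)])
signals = np.random.randn(4, 1024)
print(delay_and_sum(signals, mic_xy, azimuth_deg=60.0, fs=16000.0).shape)
```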
The frequency analysis unit 64 performs frequency analysis processing on the sound data signal extracted in the directivity direction by the directivity processing unit 63. In this frequency analysis processing, the frequencies contained in the sound data signal of the directivity direction and their sound pressure levels are detected.

FIG. 8 is a timing chart showing an example of the frequency variation of the detection sound signal obtained as a result of the frequency analysis processing. In FIG. 8, four frequencies f11, f12, f13, and f14 and the sound pressure level of each frequency are obtained as the detection sound signal (that is, the detection sound data signal). The irregular variation of each frequency in the figure is caused, for example, by slight fluctuations in rotor (rotary blade) speed that occur when the drone DN controls the attitude of its own airframe.

The object detection unit 65 performs drone DN detection processing using the result of the frequency analysis processing of the frequency analysis unit 64. Specifically, in the drone DN detection processing, the object detection unit 65 compares the detection sound pattern obtained as a result of the frequency analysis processing in the monitoring area 8 (see frequencies f11 to f14 shown in FIG. 8) with the detection sound patterns registered in advance in the pattern memory of the memory 38 (see frequencies f1 to f4 shown in FIG. 7), and determines whether the two detection sound patterns approximate each other.

Whether the two patterns approximate each other is judged, for example, as follows. When, of the four frequencies f1, f2, f3, and f4, the sound pressures of at least two frequencies contained in the detection sound data each exceed a threshold (for example, the first threshold th1 shown in FIG. 9 or FIG. 11), the sound patterns are deemed to approximate each other and the object detection unit 65 detects the drone DN. A drone DN may also be detected when other conditions are satisfied.
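A sketch of this approximate-match rule follows. The registered frequencies, the threshold value, and the tolerance band used to absorb the rotor-speed drift seen in FIG. 8 are all assumed for illustration.

```python
def matches_registered_pattern(measured_db: dict, registered_hz: tuple,
                               th1: float, tol_hz: float = 10.0,
                               min_hits: int = 2) -> bool:
    """Approximate match: the patterns are deemed to approximate each other
    when at least `min_hits` of the registered rotor frequencies have a
    measured sound pressure level above th1 (within a tolerance band)."""
    hits = 0
    for f_reg in registered_hz:
        if any(abs(f_meas - f_reg) <= tol_hz and level > th1
               for f_meas, level in measured_db.items()):
            hits += 1
    return hits >= min_hits

registered = (180.0, 220.0, 260.0, 300.0)     # f1..f4, example values
measured = {182.0: 82.0, 221.5: 81.0, 259.0: 62.0, 305.0: 58.0}  # f11..f14
print(matches_registered_pattern(measured, registered, th1=80.0))   # True
```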
When it is determined that no drone DN is present, the detection result determination unit 66 instructs the detection direction control unit 68 to move on to drone DN detection in the next directivity direction. When it is determined, as a result of scanning the directivity directions, that a drone DN is present, the detection result determination unit 66 notifies the output control unit 35 of the drone DN detection result. This detection result contains information on the detected drone DN, including, for example, identification information of the drone DN and position information (for example, direction information) of the drone DN in the monitoring area 8.

Based on instructions from the detection result determination unit 66, the detection direction control unit 68 controls the direction in which a drone DN is to be detected in the sound pickup space. For example, the detection direction control unit 68 sets, as the detection direction, an arbitrary direction within the directivity range BF1 (see FIG. 14) that contains the sound source position estimated by the sound source direction detection unit 34 within the entire sound pickup space (that is, the monitoring area 8).

The scanning control unit 67 instructs the directivity processing unit 63 to perform beamforming with the detection direction set by the detection direction control unit 68 as the directivity direction.

The directivity processing unit 63 performs beamforming in the directivity direction instructed by the scanning control unit 67. In the initial setting, the directivity processing unit 63 takes an initial position within the directivity range BF1 (see FIG. 14) containing the sound source position estimated by the sound source direction detection unit 34 as the directivity direction BF2. The directivity direction BF2 is then set one after another within the directivity range BF1 by the detection direction control unit 68.
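The scan/judge/advance interplay of the directivity processing unit 63, the detection result determination unit 66, and the detection direction control unit 68 reduces to a simple loop, sketched here with a stubbed per-direction detector; the step size and direction representation are assumptions.

```python
def scan_directivity_range(bf1_directions, detect_at):
    """Step the directivity direction BF2 through the candidate range BF1,
    judging each direction in turn; a minimal rendering of the
    set-direction / beamform / judge / advance loop described above."""
    for direction in bf1_directions:      # set by the detection direction control
        if detect_at(direction):          # beamforming + analysis + pattern match
            return direction              # report the detection result onward
    return None                           # no drone found in this range

# Example with a stub detector that fires at 24 degrees
found = scan_directivity_range(range(0, 45, 3), detect_at=lambda az: az == 24)
print(found)   # 24
```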
The output control unit 35 controls the operation of the monitor MN and the SPK 37, displays the omnidirectional captured image data sent from the omnidirectional camera CA on the monitor MN, and outputs the sound data sent from the microphone array MA through the SPK 37. When a drone DN is detected, the output control unit 35 outputs to the monitor MN an identification mark (not shown) indicating the drone DN, to be displayed superimposed at the corresponding position of the omnidirectional captured image data (that is, the coordinates indicating the position of the drone DN).

Using the sound data picked up by the microphone array MA and the coordinates indicating the direction of the sound source position derived by the omnidirectional camera CA, the output control unit 35 performs directivity forming processing on the sound data picked up by the microphone array MA, thereby emphasizing the sound data in the directivity direction. The directivity forming processing for sound data is a known technique described, for example, in Japanese Patent Application Laid-Open No. 2015-029241.

Using the sound pressure levels of the blocks constituting the omnidirectional captured image data calculated by the sound source direction detection unit 34, the output control unit 35 generates a sound pressure map in which the calculated sound pressure level value is assigned to the position of each corresponding block of the omnidirectional captured image data. The output control unit 35 then generates the sound pressure heat map shown in FIG. 16 by color-converting the sound pressure level of each block of the generated sound pressure map into a visual image (for example, a colored image), so that it is visually easy for the user to distinguish.

Although the output control unit 35 has been described as generating a sound pressure map or sound pressure heat map in which sound pressure levels calculated in units of blocks are assigned to the positions of the corresponding blocks, it may instead calculate a sound pressure level for each individual pixel and generate a sound pressure map or sound pressure heat map in which each pixel's sound pressure level is assigned to the position of that pixel.
Next, the operation of the flying object detection system 5 according to the first embodiment will be described in detail with reference to FIGS. 9 to 16.

FIG. 9 is a sequence diagram showing an example of the overall operating procedure of the flying object detection system 5 according to the first embodiment. When the devices of the flying object detection system 5 (for example, the monitor MN, the monitoring device 10, the PTZ camera CZ, the omnidirectional camera CA, and the microphone array MA) are each powered on, the flying object detection system 5 starts operating.

First, the signal processing unit 33 of the monitoring device 10 sends an image distribution request to the PTZ camera CZ via the communication unit 31 (T1). In accordance with the image distribution request sent from the monitoring device 10, the PTZ camera CZ starts imaging processing upon power-on (T2). Similarly, the signal processing unit 33 of the monitoring device 10 sends an image distribution request to the omnidirectional camera CA via the communication unit 31 (T3), and the omnidirectional camera CA starts imaging processing upon power-on in accordance with that request (T4). The signal processing unit 33 of the monitoring device 10 also sends a sound distribution request to the microphone array MA via the communication unit 31 (T5), and the microphone array MA starts sound pickup processing upon power-on in accordance with that request (T6).
The PTZ camera CZ transmits the PTZ captured image data obtained by imaging (for example, still images and moving images) to the monitoring device 10 via the network NW (T7). The output control unit 35 of the monitoring device 10 converts the PTZ captured image data sent from the PTZ camera CZ into display data such as NTSC and outputs it to the monitor MN together with an instruction to display the PTZ captured image data (T8). In accordance with the instruction sent from the monitoring device 10, the monitor MN displays the data of the PTZ captured image IMG2 from the PTZ camera CZ (see FIG. 16) on the display screen (T9).

The omnidirectional camera CA transmits the omnidirectional captured image data obtained by imaging (for example, still images and moving images) to the monitoring device 10 via the network NW (T10). The output control unit 35 of the monitoring device 10 converts the omnidirectional captured image data sent from the omnidirectional camera CA into display data such as NTSC and outputs it to the monitor MN together with an instruction to display the omnidirectional captured image data (T11). In accordance with the instruction sent from the monitoring device 10, the monitor MN displays the data of the omnidirectional captured image IMG1 from the omnidirectional camera CA (see FIG. 16) on the display screen (T9).

The microphone array MA encodes the sound data of the monitoring area 8 obtained by sound pickup and transmits it to the monitoring device 10 via the network NW (T12).
Using the sound data sent in step T12, the sound source direction detection unit 34 of the monitoring device 10 executes various sound control processes (see FIG. 10), including sound source detection processing in the monitoring area 8 (T13). Details of the sound control processing in step T13 are described later with reference to FIG. 10.

Using the sound pressure level of each block constituting the omnidirectional captured image data calculated by the sound source direction detection unit 34 in step T13, the output control unit 35 of the monitoring device 10 generates a sound pressure map in which the calculated sound pressure level value is assigned to the position of each corresponding block of the omnidirectional captured image data. The output control unit 35 then generates the sound pressure heat map shown in FIG. 16 by color-converting the sound pressure level of each block of the generated sound pressure map into a visual image (for example, a colored image), so that it is visually easy for the user to distinguish (T14).

The output control unit 35 of the monitoring device 10 converts the sound pressure heat map data generated in step T14 into display data such as NTSC and outputs it to the monitor MN together with an instruction to superimpose the sound pressure heat map on the omnidirectional captured image data (T15). In accordance with the instruction sent from the monitoring device 10, the monitor MN displays the sound pressure heat map superimposed on the omnidirectional captured image data on the display screen (see FIG. 16; T9).

Using the sound data sent from the microphone array MA in step T12, the signal processing unit 33 of the monitoring device 10 forms directivity while sequentially scanning the directivity direction within the monitoring area 8, and performs drone DN detection determination processing for each directivity direction (T16). Details of this drone DN detection determination processing are described later with reference to FIGS. 14 and 15.

If no drone DN is detected in step T16, the processing of the flying object detection system 5 according to the first embodiment returns to step T7, and the series of processes from step T7 to step T16 is repeated. If a drone DN is detected in step T16, the processing of the flying object detection system 5 proceeds to step T17 and the subsequent steps.
 ステップT16でのドローンDNの検知判定処理の結果、ドローンDNが検知された場合、監視装置10の出力制御部35は、モニタMNの表示画面に表示されている全方位撮像画像IMG1に、ステップT16で検知された指向方向に存在するドローンDNを示す識別用マーク(不図示)を重畳して表示することを指示する(T17)。モニタMNは、監視装置10から送られた指示に従い、全方位撮像画像IMG1に、ドローンDNを示す識別用マーク(不図示)を合成(重畳)して表示する(T18)。 When the drone DN is detected as a result of the drone DN detection determination process in step T16, the output control unit 35 of the monitoring device 10 displays the omnidirectional captured image IMG1 displayed on the display screen of the monitor MN in step T16. An instruction to superimpose and display an identification mark (not shown) indicating the drone DN existing in the pointing direction detected in (T17). In accordance with the instruction sent from the monitoring device 10, the monitor MN synthesizes (superimposes) an identification mark (not shown) indicating the drone DN and displays it on the omnidirectional image IMG1 (T18).
 監視装置10の出力制御部35は、PTZカメラCZに対し、ステップT16で検知されたドローンDNの検知方向(言い換えると、ドローンDNの検知時に設定された指向方向)に関する情報と、PTZカメラCZの光軸方向(言い換えると、撮像方向)やズーム倍率を変更するためのPTZ制御要求とを対応付けて送る(T19)。 The output control unit 35 of the monitoring apparatus 10 provides information regarding the detection direction of the drone DN detected in step T16 (in other words, the direction set when the drone DN is detected) to the PTZ camera CZ, and the PTZ camera CZ. A PTZ control request for changing the optical axis direction (in other words, the imaging direction) and the zoom magnification is sent in association with each other (T19).
 PTZカメラCZは、監視装置10から送られたPTZ制御要求に従い、指向方向に関する情報に基づいて、例えば、撮像方向制御部58においてレンズ駆動モータ59を駆動させ、PTZカメラCZの撮像レンズの光軸L2を変更して、撮像方向を指向方向に変更する(T20)。同時に、撮像方向制御部58は、PTZカメラCZの撮像レンズ55aのズーム倍率を、予め設定された値、あるいはドローンDNの撮像画像に占める割合に対応する値等に変更する。 In accordance with the PTZ control request sent from the monitoring device 10, the PTZ camera CZ drives the lens driving motor 59 in the imaging direction control unit 58 based on information on the directivity direction, for example, and the optical axis of the imaging lens of the PTZ camera CZ L2 is changed, and the imaging direction is changed to the pointing direction (T20). At the same time, the imaging direction control unit 58 changes the zoom magnification of the imaging lens 55a of the PTZ camera CZ to a preset value, a value corresponding to the proportion of the captured image of the drone DN, or the like.
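 Although the patent does not specify a message format, the exchange in steps T19 and T20 can be pictured roughly as below. This is a minimal sketch in Python; the field names, the PtzControlRequest layout, and the camera.send() method are hypothetical illustrations, not the actual device interface.

    from dataclasses import dataclass

    @dataclass
    class PtzControlRequest:
        # Detection direction of the drone, expressed as pan/tilt angles
        # seen from the camera (hypothetical field names).
        pan_deg: float
        tilt_deg: float
        zoom: float  # target zoom magnification

    def point_ptz_at_detection(camera, pan_deg, tilt_deg, zoom=10.0):
        """Associate a detection direction with a PTZ control request and
        send it, as in step T19; `camera.send` is an assumed method."""
        camera.send(PtzControlRequest(pan_deg, tilt_deg, zoom))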
 As a result, the PTZ camera CZ can adjust (that is, point) its optical axis direction toward the detection direction (that is, the directivity direction) of the drone DN detected in step T16, and can further change its zoom magnification so that a PTZ captured image IMG2 revealing the details of the drone DN can be captured. When the PTZ captured image IMG2 subsequently captured by the PTZ camera CZ is sent to the monitoring device 10, the monitoring device 10 can display, side by side on the monitor MN, the omnidirectional captured image IMG1 showing the wide area of the monitoring area 8 and the PTZ captured image IMG2 showing the details of the drone DN detected in the monitoring area 8 (see FIG. 16).
 FIG. 16 is a diagram showing an example of a display screen on which the omnidirectional captured image IMG1 and the PTZ captured image IMG2 are displayed side by side when the drone DN has been detected. As described with reference to FIG. 9, in the first embodiment, a block whose sound pressure level calculated by the sound source direction detection unit 34 is equal to or lower than the third threshold th3 is displayed without color, a block whose sound pressure level is greater than the third threshold th3 and equal to or lower than the second threshold th2 is displayed in blue, a block whose sound pressure level is greater than the second threshold th2 and equal to or lower than the first threshold th1 is displayed in pink, and a block whose sound pressure level is greater than the first threshold th1 is displayed in red.
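 As a minimal sketch of this per-block color assignment (assuming NumPy, a 2-D array of per-block levels, and RGBA overlay output; the specific color values and alpha are illustrative):

    import numpy as np

    def sound_pressure_heatmap(levels, th1, th2, th3):
        """Map per-block sound pressure levels to RGBA overlay colors.

        Blocks at or below th3 stay transparent (colorless); higher
        levels become blue, pink, then red, as in FIG. 16.
        """
        h, w = levels.shape
        rgba = np.zeros((h, w, 4), dtype=np.uint8)  # fully transparent
        rgba[(levels > th3) & (levels <= th2)] = (0, 0, 255, 128)      # blue
        rgba[(levels > th2) & (levels <= th1)] = (255, 105, 180, 128)  # pink
        rgba[levels > th1] = (255, 0, 0, 128)                          # red
        return rgba  # superimposed on the omnidirectional image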
 As shown in FIG. 16, the omnidirectional captured image IMG1 presents, together with the appearance of the monitoring area 8, a sound pressure heat map from which the presence or absence of a sound source can be determined over a wide area. For example, since the sound pressure level near the rotor blades and rotors surrounding the center of the drone DN's housing is greater than the first threshold th1, those areas are drawn as red regions RD1, RD2, RD3, and RD4, an example of the first visual image indicating the position of the drone DN. Similarly, around the red regions, pink regions PD1, PD2, PD3, and PD4, indicating sound pressure values second only to the red regions, are drawn as an example of the first visual image indicating the position of the drone DN. Likewise, around the pink regions, blue regions BD1, BD2, BD3, and BD4, indicating sound pressure values second only to the pink regions, are drawn as an example of the first visual image indicating the position of the drone DN.
 In the PTZ captured image IMG2, on the other hand, the appearance of the drone DN detected in the monitoring area 8 is shown in discernible detail. Accordingly, by viewing the omnidirectional captured image IMG1 and the PTZ captured image IMG2 of FIG. 16 side by side, the user can grasp both broadly and in detail what the detected drone DN looks like and where in the monitoring area 8 it was detected.
 Thereafter, the processing of the flying object detection system 5 according to the first embodiment returns to step T7, and each process from step T7 onward is repeated until a predetermined event, such as a power-off operation, is detected.
 Next, details of the operation procedure of the sound control process in step T13 of FIG. 9 are described with reference to FIGS. 10 to 13. FIG. 10 is a flowchart showing an example of the operation procedure of the sound control process in step T13 of FIG. 9. FIG. 12 is a diagram showing an example of a display screen on which a sound pressure heat map generated based on unsuppressed sound pressure levels is superimposed on the omnidirectional captured image data of the monitoring area 8. FIG. 13 is a diagram showing an example of a display screen on which a sound pressure heat map generated based on sound pressure levels suppressed while the drone DN is detected and the PTZ camera CZ is operating is superimposed on the omnidirectional captured image data of the monitoring area 8.
 In FIG. 10, the sound source direction detection unit 34 calculates a sound parameter (for example, a sound pressure level) for each block constituting the omnidirectional captured image data of the monitoring area 8, based on the omnidirectional captured image data captured by the omnidirectional camera CA and the sound data collected by the microphone array MA (S1). Using the calculated sound parameters, the sound source direction detection unit 34 detects a sound source in the monitoring area 8 (S1). The position of the detected sound source may be used as the reference position of the directivity range BF1 needed to set the initial directivity direction when the monitoring device 10 detects the drone DN in step T16.
 The sound source direction detection unit 34 constantly monitors and measures the noise level contained in the sound data collected by the microphone array MA (S2). If the noise level is determined to be below a fixed value (default value) (S3, NO), the sound control process of the sound source direction detection unit 34 ends. Accordingly, the sound source direction detection unit 34 does not execute the above-described suppression processing of the frequency characteristic of the sound pressure level (see FIG. 11; see the first condition described above), and the process of step T13 ends.
 On the other hand, if the sound source direction detection unit 34 determines that the noise level measured in step S2 is equal to or higher than the fixed value (default value) (S3, YES), it refers to the memory 38 and determines whether the PTZ camera CZ is currently operating (S4). When the output control unit 35 of the monitoring device 10 causes the PTZ camera CZ to execute PTZ control, the memory 38 temporarily holds, for example, the PTZ control request sent to the PTZ camera CZ. The sound source direction detection unit 34 can therefore determine whether the PTZ camera CZ is currently operating based, for example, on whether a PTZ control request is held in the memory 38.
 If the sound source direction detection unit 34 determines that the PTZ camera CZ is not currently operating (S4, NO), the loud operating noise produced by the PTZ camera CZ during pan rotation, tilt rotation, or movement of the imaging lens 55a is absent, so the unit suppresses the frequency characteristic of the sound pressure level of the sound data collected by the microphone array MA by the predetermined value QT (see FIG. 11; S5). After step S5, the sound control process of the sound source direction detection unit 34 ends, and the process of step T13 ends.
 On the other hand, if the sound source direction detection unit 34 determines that the PTZ camera CZ is currently operating (S4, YES), it refers to the memory 38 and determines whether the drone DN is currently being detected (S6). For example, in the loop (LOOP) processing of FIG. 9, if it was determined that the drone DN was detected when step T16 of the previous cycle was executed, the memory 38 holds information on the detection direction of the drone DN determined at that time (that is, the scanned directivity direction). The sound source direction detection unit 34 can therefore determine whether the drone DN is currently being detected based, for example, on whether information on the detection direction is held in the memory 38.
 If the sound source direction detection unit 34 determines that the drone DN is currently being detected (S6, YES), it ends its sound control process. Accordingly, the sound source direction detection unit 34 does not execute the above-described suppression processing of the frequency characteristic of the sound pressure level (see FIG. 11; see the second condition described above), and the process of step T13 ends.
 On the other hand, if the sound source direction detection unit 34 determines that the drone DN is not currently being detected (S6, NO), the suppression processing cannot cause an actual drone to be overlooked, so the unit suppresses the frequency characteristic of the sound pressure level of the sound data collected by the microphone array MA by the predetermined value QT (see FIG. 11; S5). After step S5, the sound control process of the sound source direction detection unit 34 ends, and the process of step T13 ends.
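 The branching of steps S2 to S6 reduces to a small decision function. The sketch below is a simplified model: the decibel values standing in for the fixed (default) noise level and for the predetermined value QT are assumptions.

    NOISE_FLOOR_DB = 70.0   # stand-in for the fixed (default) value
    SUPPRESSION_DB = 6.0    # stand-in for the predetermined value QT

    def should_suppress(noise_level_db, ptz_operating, drone_detected):
        """Mirror FIG. 10: no suppression below the noise floor (first
        condition), and none while the PTZ camera is tracking an
        already-detected drone (second condition), so it is not lost."""
        if noise_level_db < NOISE_FLOOR_DB:    # S3, NO
            return False
        if ptz_operating and drone_detected:   # S4 YES -> S6 YES
            return False
        return True                            # otherwise S5 applies

    def apply_suppression(levels_db):
        # S5: attenuate the frequency characteristic by QT
        return [lv - SUPPRESSION_DB for lv in levels_db]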
 FIG. 12 shows an example in which, although no actual drone DN is present in the monitoring area 8, the omnidirectional captured image IMG1 displayed on the monitor MN shows a detection of a ghost drone GDN1: the noise level exceeded the fixed value (default value), and the resulting noise triggered the false detection. Because no suppression processing was applied, the ghost drone GDN1 is a virtual drone falsely detected by the monitoring device 10 based on, for example, two noise sources (specifically, the sound sources located in the blue regions BD1 and BD4).
 FIG. 13, by contrast, shows that if the process of step S5 in FIG. 10 (that is, the suppression processing of the frequency characteristic of the sound pressure level) were executed while the drone DN has been detected and the PTZ camera CZ is operating, the presence of the already-detected drone DN would not be reflected in the sound pressure heat map, and the actual drone DN would be overlooked (that is, lost). Accordingly, when the surrounding noise level exceeds the fixed value while the drone DN has been detected and the PTZ camera CZ is operating, the sound source direction detection unit 34 regards the second condition as satisfied and does not execute the suppression processing of the sound pressure level. This prevents the suppression processing from causing the actual drone DN to be overlooked, so the sound source direction detection unit 34 can detect the actual drone DN accurately.
 Next, details of the operation procedure of the drone DN detection determination process in step T16 of FIG. 9 are described with reference to FIGS. 14 and 15. FIG. 14 is a diagram showing an example of sequential scanning of the directivity direction performed within the monitoring area 8 when detecting the drone DN. FIG. 15 is a flowchart showing an example of the operation procedure of the drone DN detection determination process in step T16 of FIG. 9.
 In the sound source detection unit UD, the directivity processing unit 63 sets, for example, the directivity range BF1 based on the sound source position estimated by the sound source direction detection unit 34 as the initial position of the directivity direction BF2 (S21). The initial position need not be limited to the directivity range BF1 based on the sound source position in the monitoring area 8 estimated by the sound source direction detection unit 34; an arbitrary position designated by the user may be set as the initial position, and the monitoring area 8 may then be scanned sequentially. Because the initial position is not limited, even if the sound source within the directivity range BF1 based on the estimated sound source position is not the drone DN, a drone flying in from another directivity direction can be detected early.
 The directivity processing unit 63 determines whether the sound data signals collected by the microphone array MA and converted into digital values by the A/D converters An1 to Aq have been temporarily stored in the memory 38 (S22). If no sound data signal is stored (S22, NO), the processing of the directivity processing unit 63 returns to step S21.
 If the sound data signals collected by the microphone array MA are temporarily stored in the memory 38 (S22, YES), the directivity processing unit 63 performs beamforming toward an arbitrary directivity direction BF2 within the directivity range BF1 of the monitoring area 8, and extracts the sound data signal of that directivity direction BF2 (S23).
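 The patent does not spell out the beamforming algorithm itself, but directivity formation of this kind is commonly implemented as delay-and-sum over the array channels. The following sketch assumes a known microphone geometry, far-field (plane-wave) sound, and synchronized sampling; all of these are illustrative assumptions.

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s in room-temperature air

    def delay_and_sum(channels, mic_positions, direction, fs):
        """Extract the signal arriving from `direction` (unit 3-D vector).

        channels: (num_mics, num_samples) synchronized samples.
        mic_positions: (num_mics, 3) microphone coordinates in meters.
        Each channel is shifted by its geometric delay and the shifted
        channels are averaged, reinforcing sound from that direction.
        """
        delays = mic_positions @ direction / SPEED_OF_SOUND   # seconds
        shifts = np.round((delays - delays.min()) * fs).astype(int)
        n = channels.shape[1] - shifts.max()
        aligned = np.stack([ch[s:s + n] for ch, s in zip(channels, shifts)])
        return aligned.mean(axis=0)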
 The frequency analysis unit 64 detects the frequency of the extracted sound data signal and its sound pressure level (S24).
 The object detection unit 65 compares the detection sound patterns registered in the pattern memory of the memory 38 with the detection sound pattern obtained as a result of the frequency analysis process, and detects the drone DN (S25).
 The detection result determination unit 66 notifies the output control unit 35 of the result of this comparison, and notifies the detection direction control unit 68 of the transition of the detection direction (S26).
 For example, the object detection unit 65 compares the detection sound pattern obtained as a result of the frequency analysis process with the four frequencies f1, f2, f3, and f4 registered in the pattern memory of the memory 38. If, as a result of the comparison, the two detection sound patterns share at least two of the same frequencies and the sound pressure levels at those frequencies are greater than the first threshold th1, the object detection unit 65 determines that the two patterns approximate each other and that the drone DN is present.
 Although a case in which at least two frequencies match is assumed here, the object detection unit 65 may instead determine that the patterns approximate each other when a single frequency matches and the sound pressure level at that frequency is greater than the first threshold th1.
 The object detection unit 65 may also set an allowable frequency error for each frequency and determine the presence or absence of the above approximation while treating frequencies within this error range as the same frequency.
 Furthermore, in addition to comparing frequencies and sound pressure levels, the object detection unit 65 may add, as a determination condition, that the sound pressure level ratios of the sounds at the respective frequencies substantially match. In this case, the determination condition becomes stricter, so the sound source detection unit UD can more easily identify the detected drone DN as a pre-registered object, improving the detection accuracy of the drone DN.
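 A sketch of the comparison in steps S25 and S26 might look like the following; the tolerance value, the data layout (a mapping from measured peak frequency to its level), and the parameter defaults are assumptions:

    def matches_drone_pattern(measured, registered, th1,
                              tol_hz=50.0, min_matches=2):
        """Compare measured peaks with the registered detection-sound
        frequencies (e.g. f1..f4).

        The drone is judged present when at least `min_matches`
        registered frequencies have a measured peak within `tol_hz`
        whose sound pressure level exceeds the first threshold th1.
        """
        hits = 0
        for f_reg in registered:
            for f_meas, level in measured.items():
                if abs(f_meas - f_reg) <= tol_hz and level > th1:
                    hits += 1
                    break
        return hits >= min_matches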
 The detection result determination unit 66 determines, from the result of step S26, whether the drone DN is present or absent (S27).
 If the drone DN is present (S27, YES), the detection result determination unit 66 notifies the output control unit 35 that the drone DN is present (that is, the detection result of the drone DN) (S28).
 On the other hand, if the drone DN is absent (S27, NO), the detection result determination unit 66 instructs the scanning control unit 67 to move the directivity direction BF2 being scanned within the monitoring area 8 to the next, different direction. In response to the instruction from the detection result determination unit 66, the scanning control unit 67 moves the directivity direction BF2 being scanned within the monitoring area 8 to the next, different direction (S29). The notification of the detection result of the drone DN may also be performed collectively after scanning of all directions is completed, rather than at the moment the detection process for one directivity direction ends.
 The order in which the directivity direction BF2 is moved within the monitoring area 8 may be, for example, a spiral order within the directivity range BF1 or the entire range of the monitoring area 8, proceeding from the outer circumference toward the inner circumference, or from the inner circumference toward the outer circumference.
 Alternatively, rather than scanning the directivity direction continuously as if drawing a single stroke, the detection direction control unit 68 may set positions within the monitoring area 8 in advance and move the directivity direction BF2 to each position in an arbitrary order. This allows the monitoring device 10 to start the detection process from a position where the drone DN can easily intrude, for example, making the detection process more efficient.
 The scanning control unit 67 determines whether scanning of all directions in the monitoring area 8 has been completed (S30). If scanning of all directions has not been completed (S30, NO), the processing of the signal processing unit 33 returns to step S23, and the processes of steps S23 to S30 are repeated. That is, the directivity processing unit 63 performs beamforming toward the directivity direction BF2 at the position moved to in step S29 and extracts the sound data of that directivity direction. In this way, even after one drone DN is detected, the sound source detection unit UD continues scanning for other drones that may be present, so a plurality of drones can be detected.
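 Steps S21 to S30 amount to the loop skeleton sketched below; `scan_directions` could enumerate, for example, a spiral over the directivity range as described above, and `frequency_peaks` is an assumed helper standing in for the frequency analysis of step S24. The `delay_and_sum` and `matches_drone_pattern` functions are the sketches given earlier.

    def detect_drones(sound_frame, scan_directions, mic_positions, fs,
                      registered, th1):
        """Scan every directivity direction in one stored sound frame and
        return the directions where the registered pattern matched.

        The scan does not stop at the first hit, so multiple drones in
        different directions can be reported from the same frame.
        """
        detections = []
        for direction in scan_directions:                    # S23..S29
            beam = delay_and_sum(sound_frame, mic_positions,
                                 direction, fs)
            measured = frequency_peaks(beam, fs)             # S24
            if matches_drone_pattern(measured, registered, th1):
                detections.append(direction)                 # S27 -> S28
        return detections  # the frame is then erased (S31)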
 When scanning of all directions is completed in step S30 (S30, YES), the directivity processing unit 63 erases the sound data collected by the microphone array MA that was temporarily stored in the memory 38 (S31). The directivity processing unit 63 may also erase the sound data collected by the microphone array MA from the memory 38 and save it to the recorder RC.
 After erasing the sound data, the signal processing unit 33 determines whether to end the drone DN detection process (S32). The drone DN detection process ends in response to a predetermined event. For example, the number of times the drone DN was not detected in step S26 may be held in the memory 38, and the detection process may end when this count reaches a predetermined number or more. The signal processing unit 33 may also end the drone DN detection process based on the expiration of a timer or a user operation on a UI (User Interface) (not shown) of the operation unit 32. The process may also end when the monitoring device 10 is powered off.
 In the process of step S24, the frequency analysis unit 64 analyzes the frequency and also measures the sound pressure at that frequency. The detection result determination unit 66 may determine that the drone DN is approaching the sound source detection unit UD when the sound pressure level measured by the frequency analysis unit 64 gradually increases over time.
 For example, if the sound pressure level of a predetermined frequency measured at a certain time is smaller than the sound pressure level of the same frequency measured at a later time, the sound pressure is increasing over time, and it may be determined that the drone DN is approaching. Alternatively, the sound pressure level may be measured three or more times, and the approach of the drone DN may be determined based on the transition of statistical values (for example, variance, average, maximum, or minimum).
 If the measured sound pressure level is greater than a warning threshold representing a warning level, the detection result determination unit 66 may determine that the drone DN has intruded into a warning area. The warning threshold is, for example, a value larger than the first threshold th1 described above. The warning area is, for example, the same area as the monitoring area 8, or an area included in and narrower than the monitoring area 8; it is, for example, an area into which intrusion by the drone DN is restricted. The approach determination and intrusion determination of the drone DN may be executed by the detection result determination unit 66.
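 The approach and intrusion judgments described here reduce to simple checks on a time series of measured levels; in the sketch below, the window length and the choice of a strictly rising sequence are assumptions:

    def is_approaching(level_history, window=3):
        """Judge approach from successive sound pressure measurements of
        the same frequency: a strictly rising recent window suggests the
        drone is closing on the sound source detection unit UD."""
        recent = level_history[-window:]
        return len(recent) == window and all(
            a < b for a, b in zip(recent, recent[1:]))

    def has_intruded(level_db, warning_threshold_db):
        """Intrusion into the warning area: the measured level exceeds
        the warning threshold, itself larger than the first threshold
        th1."""
        return level_db > warning_threshold_db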
 As described above, in the flying object detection system 5 according to the first embodiment, the monitoring device 10 has a communication unit 31 that communicates with the PTZ camera CZ, whose optical axis direction is adjustable and which is arranged so as to be able to image the monitoring area 8, or with the microphone array MA, which is arranged so as to be able to collect the sound of the monitoring area 8. The monitoring device 10 has a signal processing unit 33 that, when the noise level contained in the sound of the monitoring area 8 collected by the microphone array MA is equal to or higher than a fixed value (default value), suppresses the sound parameter indicating loudness by the predetermined value QT, and detects the presence or absence of the drone DN in the monitoring area 8 based on the sound parameter indicating the loudness of the sound collected by the microphone array MA (for example, the sound pressure level) or the suppressed sound parameter (for example, the sound pressure level). In addition, when the PTZ camera CZ adjusts its optical axis direction toward the detection direction in which the drone DN was detected, the monitoring device 10 omits execution of the sound parameter suppression processing.
 As a result, when a noise level higher than the fixed value is present, the monitoring device 10 suppresses (that is, lowers) the frequency characteristic of the sound pressure level by the predetermined value QT, so that, as shown in FIG. 11, a frequency characteristic whose peak sound pressure level falls below the first threshold is obtained (see characteristic Cv2), and false detection of the ghost drone described above can be effectively suppressed. The monitoring device 10 also detects the presence or absence of the drone DN using sound data collected in situations where suppression processing is unnecessary, or sound data collected in situations where it is necessary, so the presence or absence of the drone DN can be detected accurately. Furthermore, when the PTZ camera CZ adjusts its optical axis direction and produces operating noise comparable to a noise level exceeding the fixed value (default value), the monitoring device 10 omits execution of the suppression of the frequency characteristic of the sound pressure level by the predetermined value QT, so the drone DN can be detected accurately without the actual drone DN being overlooked. Accordingly, the monitoring device 10 can accurately reduce false detection of the drone DN according to the environment of the monitoring area 8 and suppress deterioration of the recognition accuracy of the drone DN.
 In addition, when the drone DN is not detected, the monitoring device 10 executes the suppression processing of the sound parameter (for example, the sound pressure level) in the signal processing unit 33. The suppression processing thus yields a frequency characteristic whose peak sound pressure level falls below the first threshold (see characteristic Cv2), so the monitoring device 10 can effectively suppress the false detection that the ghost drone described above is present even when the noise level is equal to or higher than the default (fixed) value.
 Likewise, when the PTZ camera CZ does not adjust its optical axis direction, the monitoring device 10 executes the suppression processing of the sound parameter (for example, the sound pressure level) in the signal processing unit 33. The suppression processing again yields a frequency characteristic whose peak sound pressure level falls below the first threshold (see characteristic Cv2), so the monitoring device 10 can effectively suppress the false detection that the ghost drone described above is present even when the noise level is equal to or higher than the default (fixed) value.
 The monitoring device 10 also communicates, via the communication unit 31, with the omnidirectional camera CA arranged so as to be able to image the monitoring area 8. When the drone DN is detected, the monitoring device 10 displays on the monitor MN, via the signal processing unit 33, a first visual image indicating the position of the drone DN (see, for example, FIG. 12) superimposed on the first captured image of the monitoring area 8 captured by the omnidirectional camera CA (for example, the omnidirectional captured image IMG1). The user can thus visually grasp the position of the drone DN detected by the monitoring device 10 on the omnidirectional captured image IMG1 displayed on the monitor MN, making it easier to identify the specific position of the drone DN.
 Although various embodiments have been described above with reference to the drawings, it goes without saying that the present disclosure is not limited to these examples. It is clear that those skilled in the art can conceive of various changes, modifications, substitutions, additions, deletions, and equivalents within the scope described in the claims, and it is understood that these naturally belong to the technical scope of the present disclosure. Furthermore, the constituent elements in the various embodiments described above may be combined arbitrarily without departing from the spirit of the invention.
 This application is based on a Japanese patent application filed on May 24, 2018 (Japanese Patent Application No. 2018-099869), the contents of which are incorporated herein by reference.
 The present disclosure is useful as a flying object detection device, a flying object detection method, and a flying object detection system that accurately reduce false detection of an unmanned aerial vehicle according to the environment of a monitoring area and suppress deterioration in the recognition accuracy of the unmanned aerial vehicle.
5 flying object detection system
10 monitoring device
31 communication unit
32 operation unit
33 signal processing unit
34 sound source direction detection unit
35 output control unit
37 SPK
38 memory
39 setting management unit
63 directivity processing unit
64 frequency analysis unit
65 object detection unit
66 detection result determination unit
67 scanning control unit
68 detection direction control unit
CA omnidirectional camera
CZ PTZ camera
MA microphone array
MN monitor
NW network
RC recorder
UD sound source detection unit

Claims (6)

  1.  A flying object detection device comprising:
     a communication unit that communicates with a first camera whose optical axis direction is adjustable and which is arranged so as to be able to image a monitoring area, or with a microphone array arranged so as to be able to collect sound of the monitoring area; and
     a processor that, when a noise level contained in the sound of the monitoring area collected by the microphone array is equal to or higher than a default value, suppresses a sound parameter indicating the loudness of the sound by a predetermined value, and detects the presence or absence of an unmanned aerial vehicle in the monitoring area based on the sound parameter indicating the loudness of the sound collected by the microphone array or the suppressed sound parameter,
     wherein the processor omits execution of the suppression processing of the sound parameter when the first camera adjusts the optical axis direction toward a detection direction in which the unmanned aerial vehicle was detected.
  2.  The flying object detection device according to claim 1, wherein the processor executes the suppression processing of the sound parameter when the unmanned aerial vehicle is not detected.
  3.  The flying object detection device according to claim 1, wherein the processor executes the suppression processing of the sound parameter when the first camera does not adjust the optical axis direction.
  4.  The flying object detection device according to any one of claims 1 to 3, wherein the communication unit communicates with a second camera arranged so as to be able to image the monitoring area, and
     the processor, when the unmanned aerial vehicle is detected, displays on a monitor a first visual image indicating the position of the unmanned aerial vehicle superimposed on a first captured image of the monitoring area captured by the second camera.
  5.  A flying object detection method in a flying object detection device communicably connected to a first camera whose optical axis direction is adjustable, or to a microphone array, the method comprising:
     suppressing, when a noise level contained in sound of a monitoring area collected by the microphone array is equal to or higher than a default value, a sound parameter indicating the loudness of the sound by a predetermined value; and
     detecting the presence or absence of an unmanned aerial vehicle in the monitoring area based on the sound parameter indicating the loudness of the sound collected by the microphone array or the suppressed sound parameter,
     wherein, in the detecting step, when the first camera adjusts the optical axis direction toward a detection direction in which the unmanned aerial vehicle was detected, execution of the suppression processing of the sound parameter is omitted, and the presence or absence of the unmanned aerial vehicle is detected based on the sound parameter indicating the loudness of the sound collected by the microphone array.
  6.  A flying object detection system comprising:
     a first camera whose optical axis direction is adjustable and which is arranged so as to be able to image a monitoring area;
     a microphone array arranged so as to be able to collect sound of the monitoring area; and
     a flying object detection device that communicates with the first camera or the microphone array,
     wherein the flying object detection device:
     suppresses, when a noise level contained in the sound of the monitoring area collected by the microphone array is equal to or higher than a default value, a sound parameter indicating the loudness of the sound by a predetermined value;
     detects the presence or absence of an unmanned aerial vehicle in the monitoring area based on the sound parameter indicating the loudness of the sound collected by the microphone array or the suppressed sound parameter; and
     omits execution of the suppression processing of the sound parameter when the first camera adjusts the optical axis direction toward a detection direction in which the unmanned aerial vehicle was detected.
PCT/JP2019/013456 2018-05-24 2019-03-27 Airborne object detection device, airborne object detection method, and airborne object detection system WO2019225147A1 (en)

Applications Claiming Priority (2)

Application Number: JP2018099869A (JP2018-099869), filed 2018-05-24. Title: Flying object detector, flying object detection method, and flying object detection system.
