WO2022059584A1 - Camera system and camera system control method - Google Patents

Camera system and camera system control method

Info

Publication number
WO2022059584A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera device
camera
monitored object
angle
monitored
Prior art date
Application number
PCT/JP2021/033109
Other languages
French (fr)
Japanese (ja)
Inventor
嵩臣 神田
Original Assignee
株式会社日立国際電気 (Hitachi Kokusai Electric Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立国際電気 (Hitachi Kokusai Electric Inc.)
Priority to JP2022550505A (granted as JP7472299B2)
Publication of WO2022059584A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to a camera system and a method for controlling the camera system.
  • Patent Document 1 (Japanese Unexamined Patent Publication No. 2013-219556) proposes a method of performing electronic zooming in order to zoom in on a position of interest.
  • The problem with the electronic zoom technique described in Patent Document 1 is that the resolution of the human face, the object to be monitored, is inferior to that obtained with optical zoom, so a loss of accuracy is expected when the technique is applied to face recognition or face authentication.
  • Optical zoom, on the other hand, can generally zoom only at the center of the angle of view, and images outside that region cannot be captured while zooming. It is therefore an object of the present invention to provide a technique that solves both the limited angle of view of optical zoom and the inferior resolution of electronic zoom.
  • To this end, one representative camera system of the present invention comprises a first camera device, a second camera device whose imaging area overlaps that of the first camera device and which can acquire an enlarged image of a monitored object, and a control device that controls the second camera device. The control device (a) extracts monitored objects from the image data of the first camera device or the second camera device, (b) calculates an expected projection time for each monitored object, and (c) transmits a control signal to the second camera device so that it captures an enlarged image of the monitored object whose expected projection time is shorter.
  • According to the present invention, the first camera device acquires an image of the monitored area while the second camera device acquires an enlarged image of the monitored object, so no image data of the monitored area is lost and a clear image of the monitored object is obtained.
  • Brief description of the drawings: FIG. 1 shows the outline configuration of the camera system 100 according to the embodiment of the present invention; FIG. 2 shows an example of the internal configuration of the camera device 50 used in the camera system 100; FIG. 3 is a block diagram of the computer system 300; FIG. 4 illustrates a setting example of the camera system 100; FIG. 5 illustrates the preparation procedure when the camera system 100 is installed; FIG. 6 illustrates the concept of the expected projection time table in the camera system 100; FIG. 7 illustrates the expected projection time by example; FIG. 8 is a flowchart of the face recognition (face authentication) processing in the camera system 100.
  • FIG. 1 shows the outline configuration of the camera system 100 according to an embodiment in which two cameras are used. One purpose of the camera system 100 according to the present invention is to acquire, with one camera device, moving image data at the maximum angle of view the system can cover (hereinafter "wide-angle moving image data") while acquiring, with another camera device, a locally enlarged image through an optical zoom lens (hereinafter "enlarged moving image data"), thereby monitoring a predetermined area.
  • The number of camera devices used in the camera system 100 according to the present invention is not particularly limited. For a system using three or more cameras, the maximum angle of view that the camera system 100 can cover is the sum of the angles of view that the three or more cameras can cover.
  • At least one camera device 50 includes a moving image imaging device 25 provided with an image pickup lens 22 capable of optical zoom, and a pan head mechanism 30 that electrically pans and tilts the imaging device 25. Such a camera device 50 can acquire the enlarged moving image data described above.
  • Together with such a camera device 50, a camera device 50 that omits the pan head mechanism 30 and the lens zoom operation can also be used; such a device is configured to acquire wide-angle moving image data. In the embodiment shown in FIG. 1, two camera devices 50 each having a pan head mechanism 30 and a zoomable lens are used.
  • The camera system 100 according to the present invention includes, in addition to the plurality of camera devices 50 described above, a computer system 300 that transmits a control command to each camera device 50 via the signal cable 9 and receives the moving image data acquired by the moving image imaging devices 25 of the camera devices 50.
  • In this embodiment, the computer system 300, located outside the camera devices, serves as the higher-level control device that sends control commands to the camera devices 50 and receives and analyzes the moving image data from each camera device 50; using such an external computer system, however, is not essential. For example, the higher-level control function may instead be provided inside one of the camera devices 50, which then plays the role of the control device.
  • The computer system 300 transmits control commands related to pan, tilt, and zoom to the camera device 50. Based on such a command, the camera device 50 sets the turning angle and elevation angle of the pan head mechanism 30 and the zoom magnification of the image pickup lens 22, acquires moving image data with the moving image imaging device 25, and transmits that data to the computer system 300. As a result, the computer system 300 can obtain moving image data at any designated angle of view.
  • The zoom operation of the image pickup lens 22 is driven by a motor (not shown) based on a control command from the computer system 300, and the command flow is sketched below.
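  • As an illustration of this command flow, the following is a minimal sketch assuming a simple text-over-TCP protocol; the PTZCommand fields, the wire format, and the address are hypothetical, since the patent does not specify a protocol.

```python
from dataclasses import dataclass
import socket

@dataclass
class PTZCommand:
    pan_deg: float     # turning angle commanded to the pan head mechanism 30
    tilt_deg: float    # elevation angle
    zoom_ratio: float  # zoom magnification of the image pickup lens 22

def send_command(host: str, port: int, cmd: PTZCommand) -> None:
    """Serialize a pan/tilt/zoom command and send it to one camera device 50."""
    payload = f"PTZ {cmd.pan_deg:.2f} {cmd.tilt_deg:.2f} {cmd.zoom_ratio:.2f}\n"
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload.encode("ascii"))

# Example: point camera device 50b at a monitored object and zoom in 4x.
send_command("192.0.2.10", 5000, PTZCommand(pan_deg=12.5, tilt_deg=-3.0, zoom_ratio=4.0))
```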
  • In the camera system 100 according to this embodiment, the two camera devices 50 are communicably connected to the computer system 300 via the signal cable 9.
  • Hereinafter, the suffixes a and b may be used to distinguish the two camera devices 50.
  • In FIG. 1, the power cable 43 is not shown.
  • The camera device 50a and the camera device 50b are set so that the widest-angle field of view of each image pickup lens 22 is common to both. Setting the wide-angle field of view of each camera device 50 to be common in this way is not essential in the present invention.
  • In the camera system 100 according to this embodiment, at least one of the camera device 50a and the camera device 50b constantly acquires moving image data at the widest angle of view.
  • In this embodiment, for example, the camera device 50a constantly acquires moving image data at the widest angle of view, while the camera device 50b receives pan/tilt/zoom control commands from the computer system 300 and acquires moving image data at the designated angle of view.
  • Here, the former moving image data is referred to as wide-angle moving image data, and the latter as enlarged moving image data. Likewise, a camera device that acquires wide-angle moving image data is referred to as a first camera device, and a camera device that is pan/tilt/zoom controlled and acquires enlarged moving image data is referred to as a second camera device.
  • In this embodiment, as long as it can acquire moving image data at the widest angle of view, a camera device that omits the pan head mechanism 30 and the lens zoom operation can be used as the camera device 50a.
  • FIG. 2 shows an example of the internal configuration of the camera device 50 included in the camera system 100 according to the present invention.
  • In FIG. 2, 1 is a fixed portion, 2 is a horizontal rotating portion, and 3 is a camera housing. The horizontal rotating portion 2 is arranged on the fixed portion 1, the camera housing 3 is arranged on the upper side surface of the horizontal rotating portion 2, and a moving image imaging device 25 provided with an image pickup lens 22 capable of optical zoom, that is, a video camera, is housed in the camera housing 3.
  • The fixed portion 1, which is the base of the pan head mechanism 30, serves as a pedestal for installing the camera device 50 where required. A horizontal rotation shaft 6 is mounted vertically on the pedestal, so that the horizontal rotating portion 2 can be rotated to an arbitrary turning angle, giving the moving image imaging device 25 a pan (PAN: turning angle) operation. A vertical rotation shaft 8 likewise gives the moving image imaging device 25 a tilt (TILT: elevation angle) operation.
  • The horizontal rotating portion 2 houses a horizontal rotation pulse motor 4, a vertical rotation pulse motor 5, a horizontal rotation worm gear 7, a camera control circuit 10, a vertical rotation belt 13, a power supply unit 40, an origin turning angle sensor 44, an origin elevation angle sensor 45, a horizontal rotation motor drive circuit 46, a vertical rotation motor drive circuit 47, and the like.
  • The horizontal rotation motor drive circuit 46 and the vertical rotation motor drive circuit 47 are each controlled by the camera control circuit 10. The horizontal rotation shaft 6 is hollow so that the signal cable 9 and the power cable 43 can be drawn into the horizontal rotating portion 2 from the outside via the fixed portion 1.
  • The signal cable 9 carries the video signal (moving image data to the computer system 300) and the control signal (control commands from the computer system 300), and is connected to the camera control circuit 10 after being drawn in from the outside. The cable 14 also connects to the moving image imaging device 25 in the camera housing 3 via the vertical rotation shaft 8, which is likewise hollow.
  • The power cable 43 supplies AC power and is connected to the power supply unit 40. The power supply unit 40 supplies operating power to the camera control circuit 10, the horizontal rotation motor drive circuit 46, and the vertical rotation motor drive circuit 47. The horizontal rotation motor drive circuit 46 supplies drive pulses to the horizontal rotation pulse motor 4 via the horizontal rotation motor cable 11, and the vertical rotation motor drive circuit 47 supplies drive pulses to the vertical rotation pulse motor 5 via the vertical rotation pulse motor cable 12.
  • By rotating around the horizontal rotation shaft 6, the camera housing 3 can be moved to an arbitrary turning angle, realizing the pan operation. The origin turning angle sensor 44 generates a detection signal when the turning angle reaches the origin turning angle (a predetermined turning angle set in advance).
  • When the vertical rotation pulse motor 5 rotates, the vertical rotation shaft 8 is rotated via the vertical rotation belt 13, so the camera housing 3 rotates about the vertical rotation shaft 8 and can be moved to an arbitrary elevation angle, realizing the tilt operation. The origin elevation angle sensor 45 generates a detection signal when the elevation angle reaches the origin elevation angle (a predetermined elevation angle set in advance).
  • A coaxial reduction mechanism is interposed in the rotation transmission path from the vertical rotation pulse motor 5 through the vertical rotation belt 13 to the vertical rotation shaft 8, so a predetermined reduction ratio, such as 100:1, applies between the rotation speed of the vertical rotation pulse motor 5 and the displacement speed of the tilt angle of the camera housing 3. The horizontal rotation worm gear 7 similarly applies a predetermined reduction ratio to the displacement speed of the pan angle.
  • In the camera device 50, pulse motors drive the moving image imaging device 25, and the turning angle and elevation angle are controlled only by the number of pulses supplied to the motors, a so-called open-loop control method. It is therefore necessary, as initial processing, to move the turning angle and elevation angle of the moving image imaging device 25 to a preset origin (origin turning angle and origin elevation angle).
  • This initial processing is performed as follows: the camera control circuit 10 rotates the horizontal rotation pulse motor 4 and the vertical rotation pulse motor 5 while observing the detection signals of the origin turning angle sensor 44 and the origin elevation angle sensor 45, and stops them when the detection signals are obtained, thereby setting the turning angle and elevation angle of the moving image imaging device 25 to the origin. As the origin, for example, the front may be taken as the origin turning angle, and the camera housing 3 at its maximum downward (or upward) angle from the horizontal may be taken as the origin elevation angle.
  • After the initial processing, the process shifts to the imaging direction setting operation: a signal containing the horizontal rotation angle (turning angle) and vertical rotation angle (elevation angle), that is, a horizontal rotation angle command and a vertical rotation angle command, is supplied from the computer system 300 to the camera control circuit 10 via the signal cable 9.
  • The camera control circuit 10 then generates the number of step pulses corresponding to each command and supplies them to the horizontal rotation pulse motor 4 and the vertical rotation pulse motor 5. The motors start rotating, and the moving image imaging device 25 in the camera housing 3 starts moving from the origin toward the commanded turning angle and elevation angle.
  • The horizontal rotation pulse motor 4 and the vertical rotation pulse motor 5 stop after stepping from the origin position by the commanded number of pulses, so the moving image imaging device 25 in the camera housing 3 stops exactly when it is oriented in the commanded direction. The imaging direction of the moving image imaging device 25 is thus set without feedback control, as sketched below.
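  • The open-loop positioning described above can be sketched as follows. This is a minimal sketch under stated assumptions: the step angle, the reduction ratio (taken from the 100:1 example in the text), and the motor-driver interface (seek_origin, step) are hypothetical.

```python
# Minimal sketch of open-loop pan/tilt positioning (no feedback): the pose is
# known only from the pulse count issued since homing to the origin sensors.

MOTOR_STEP_DEG = 1.8   # full-step angle of the pulse motors (assumed)
REDUCTION = 100        # e.g. 100:1 between motor and axis, per the text

def pulses_for(target_deg: float, current_deg: float) -> int:
    """Step pulses needed to move an axis from its current commanded pose."""
    axis_step_deg = MOTOR_STEP_DEG / REDUCTION   # axis angle moved per pulse
    return round((target_deg - current_deg) / axis_step_deg)

class PanTiltHead:
    def __init__(self, driver):
        self.driver = driver        # hypothetical pulse-motor driver object
        self.pan_deg = None         # pose unknown until homed
        self.tilt_deg = None

    def home(self) -> None:
        """Initial processing: rotate each axis until its origin sensor fires."""
        self.driver.seek_origin("pan")    # stops on origin turning angle sensor 44
        self.driver.seek_origin("tilt")   # stops on origin elevation angle sensor 45
        self.pan_deg, self.tilt_deg = 0.0, 0.0

    def move_to(self, pan_deg: float, tilt_deg: float) -> None:
        """Issue fixed pulse counts, then assume the commanded pose was reached."""
        self.driver.step("pan", pulses_for(pan_deg, self.pan_deg))
        self.driver.step("tilt", pulses_for(tilt_deg, self.tilt_deg))
        self.pan_deg, self.tilt_deg = pan_deg, tilt_deg
```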
  • FIG. 3 is a block diagram of a computer system 300 for implementing aspects of the embodiments of the present disclosure. The mechanisms and devices of the various embodiments disclosed herein may be applied to any suitable computing system.
  • The main components of the computer system 300 include one or more processors 302, a memory 304, a terminal interface 312, a storage interface 314, an I/O (input/output) device interface 316, and a network interface 318. These components may be interconnected via a memory bus 306, an I/O bus 308, a bus interface unit 309, and an I/O bus interface unit 310.
  • The computer system 300 may include one or more general-purpose programmable central processing units (CPUs) 302A and 302B, collectively referred to as the processor 302. In one embodiment the computer system 300 includes a plurality of processors; in another it may be a single-CPU system. Each processor 302 executes instructions stored in the memory 304 and may include an onboard cache.
  • The memory 304 may store all or part of the programs, modules, and data structures that implement the functions described herein; in particular, it may store a camera system application 350 for controlling the camera system 100 according to the present invention.
  • The camera system application 350 may include instructions or descriptions that perform the functions described below on the processor 302, or instructions or descriptions that are interpreted by other instructions or descriptions. The camera system application 350 may also be implemented in hardware, via semiconductor devices, chips, logic gates, circuits, circuit cards, and/or other physical hardware devices, instead of or in addition to a processor-based system. The camera system application 350 may include data other than instructions or descriptions.
  • Other data input devices may be provided that communicate directly with the bus interface unit 309, the processor 302, or other hardware of the computer system 300.
  • The computer system 300 may include a bus interface unit 309 that handles communication among the processor 302, the memory 304, the display system 324, and the I/O bus interface unit 310. The computer system 300 may also include devices such as one or more sensors configured to collect data and provide it to the processor 302.
  • The display system 324 may include a display memory, which may be a dedicated memory for buffering video data, and may be connected to a display device 326 such as a stand-alone display screen, television, tablet, or portable device. The display device 326 may include a speaker for rendering audio; alternatively, the speaker may be connected to the I/O interface unit. The functionality provided by the display system 324, or by the bus interface unit 309, may instead be implemented by an integrated circuit that includes the processor 302.
  • The I/O interface units communicate with various storage or I/O devices. For example, the terminal interface unit 312 can attach user I/O devices 320 such as user output devices (a video display device, a speaker, a television) and user input devices (a keyboard, mouse, keypad, touchpad, trackball, buttons, light pen, or other pointing device). A user may operate the user input devices through the user interface to enter input data and instructions to the user I/O device 320 and the computer system 300, and to receive output data from the computer system 300. The user interface may, for example, be displayed on the display device, reproduced through the speaker, or printed via a printer, via the user I/O device 320.
  • The I/O device interface 316 and the network interface 318 can be used to connect to the camera devices 50 included in the camera system 100.
  • The computer system 300 may be a device that receives requests from other computer systems (clients) without a direct user interface, such as a multi-user mainframe computer system, a single-user system, or a server computer; it may equally be a desktop computer, portable computer, laptop, tablet, pocket computer, telephone, smartphone, or any other suitable electronic device.
  • FIG. 4 illustrates a setting example of the camera system 100 according to the embodiment of the present invention, and FIG. 5 illustrates the preparation procedure when the camera system 100 is installed. The preparation procedure in FIG. 5 describes actions performed by the user; it is not a flowchart executed by the computer system 300.
  • FIG. 4 shows an example of the moving image data (wide-angle moving image data) acquired at the widest angle of view that the camera device 50a and the camera device 50b can capture once the camera system 100 is installed as described above.
  • In this wide-angle moving image data, the imaging regions of the camera device 50a, the first camera device, and the camera device 50b, the second camera device, overlap. Therefore, even while the camera device 50b is performing zoom imaging, the full region can still be captured by the camera device 50a, the first camera device.
  • Here, the camera system 100 is described on the basis of settings for face recognition (face authentication, if a matching database exists) of persons on the road in a scene like the one shown in the figure.
  • The enlarged moving image data for such face recognition is acquired by the camera device 50b, the second camera device: it is provided with a zoomable image pickup lens 22 and a pan head mechanism 30, is pan/tilt/zoom controlled, and captures an enlarged moving image at the designated angle of view.
  • When the camera system 100 is installed, a designated effective area is set, as shown in step S1 of FIG. 5. The designated effective area is explained with reference to FIG. 4.
  • The designated effective area is the part of the area that the camera system can image in which monitored objects are captured and image processing is performed. For example, the area that the camera system can image, excluding regions that are not of interest (such as the trees), that is, the area in which face recognition (face authentication) of persons on the road may be performed, can be set as the designated effective area. This limits the image processing area and allows efficient processing.
  • Next, this area is divided into an arbitrary number of blocks (when no designated effective area is set, the whole area that the camera system can image is divided instead). In this example it is divided into 3 vertical blocks and 8 horizontal blocks; a sketch of the pixel-to-block mapping follows below.
  • Then, for each block, the estimated time until a monitored object (for example, a moving object such as a person or a car) disappears from the designated effective area is set for each direction in which the object may move. In the present disclosure, this estimated time is called the "expected projection time". (When no designated effective area is set, it is the estimated time until the monitored object disappears from the area that the camera system can image.)
  • The expected projection time is used to predict how long a monitored object that is to be image-analyzed will remain visible in the wide-angle image. The user sets an expected projection time for each block of the designated effective area and for each moving direction of the monitored object. The times may be set during a preparation period after the camera system is constructed and refined by the update method used during operation, described later. The expected projection times set for each block and each moving direction are stored in an expected projection time table.
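  • As a small illustration of the block division, the following sketch maps an extracted object's pixel position to a block of the 3 x 8 grid from the example; the frame resolution is an assumption.

```python
# Minimal sketch: map a pixel position in the wide-angle frame to its block.
FRAME_W, FRAME_H = 1920, 1080  # assumed frame size
COLS, ROWS = 8, 3              # 8 horizontal x 3 vertical blocks, per the text

def block_of(x: int, y: int) -> tuple[int, int]:
    """Return (row, col) of the block containing pixel (x, y)."""
    col = min(x * COLS // FRAME_W, COLS - 1)
    row = min(y * ROWS // FRAME_H, ROWS - 1)
    return row, col

print(block_of(960, 540))  # center of the frame -> (1, 4)
```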
  • FIG. 6 illustrates the concept of the expected projection time table.
  • For a monitored object recognized in block Bmn of the designated effective area and moving in one of the eight directions (a) to (h), the table holds the expected time until the object disappears from the designated effective area, set per direction. For example, when the monitored object in block Bmn is recognized as moving in direction (a), the expected projection time Tmn(0,1) applies. The setting of these times is described later; they may be set appropriately for each direction based on the average moving speed of monitored objects, or specified so as to change with the moving speed of the monitored object.
  • In FIG. 6 the expected projection time is set for eight moving directions of the monitored object, but the number of directions is not limited to eight. For example, it may be set for four directions (up, down, left, and right as seen on the page), or for more than eight, such as sixteen.
  • When the moving direction of a monitored object extracted by image analysis does not match any direction for which a time is set, the expected projection time of the set direction closest to the moving direction may be applied. Alternatively, the moving direction may be vector-decomposed along the set directions and the time calculated as a combination of the times in those component directions, or the average of the expected projection times over all directions may be applied. If a monitored object spans multiple blocks, the expected projection time of the block containing the object's center point (for example, the center of gravity of the object viewed as a two-dimensional figure) is applied, or the average of the expected projection times of those blocks may be applied. A lookup along these lines is sketched below.
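  • A lookup of this kind might look like the following sketch; the direction vectors assigned to (a) through (h), the grid size, and the stored times are assumptions made for illustration.

```python
import math

DIRECTIONS = {  # assumed unit-ish direction vectors for keys (a)..(h)
    "a": (0, 1), "b": (1, 1), "c": (1, 0), "d": (1, -1),
    "e": (0, -1), "f": (-1, -1), "g": (-1, 0), "h": (-1, 1),
}

# table[(row, col)][direction_key] -> expected projection time in seconds
table: dict[tuple[int, int], dict[str, float]] = {
    (r, c): {k: 5.0 for k in DIRECTIONS} for r in range(3) for c in range(8)
}

def expected_projection_time(block: tuple[int, int], motion: tuple[float, float]) -> float:
    """Look up the time for the set direction closest to the observed motion vector."""
    mx, my = motion
    norm = math.hypot(mx, my) or 1.0
    def closeness(key: str) -> float:
        dx, dy = DIRECTIONS[key]
        return (mx * dx + my * dy) / (norm * math.hypot(dx, dy))  # cosine similarity
    best = max(DIRECTIONS, key=closeness)
    return table[block][best]

print(expected_projection_time((1, 2), (0.7, -0.1)))  # motion roughly in direction (c)
```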
  • FIG. 7 illustrates the expected projection time by example. The monitored object in block B22 is moving in direction (c); because the distance to the edge of the imageable area is relatively long in that direction, its expected projection time is relatively long. If the same object were moving in direction (g), the distance to the edge of the imageable area would be relatively short, so its expected projection time would be relatively short.
  • In this way, an expected projection time table holding a time for each block of the designated effective area and each moving direction of the monitored object is prepared, which completes the preparation for installation and start of use of the camera system 100 according to the present invention.
  • FIG. 8 is a flowchart of the face recognition (face authentication) processing in the camera system 100 according to the embodiment of the present invention. A program based on this flowchart can be stored in the memory 304 as the camera system application 350 and executed on the processor 302.
  • When the face recognition (face authentication) process starts in step S100, the process proceeds to step S101, and wide-angle moving image data is acquired from the camera device 50a, the first camera device.
  • FIG. 9 shows an example of the acquired wide-angle moving image data.
  • Next, monitored objects are extracted by image analysis of the acquired wide-angle moving image data. For this image analysis, a known method such as object extraction based on background subtraction can be adopted; when the monitored object has a specific shape, such as a vehicle or a person, it may instead be extracted by a method such as pattern matching or skeleton detection. A background-subtraction sketch follows below.
  • Next, in step S103, the position at which each monitored object was extracted (more specifically, its block) is identified, and the moving direction of the monitored object is also identified. For the moving direction, a known method of calculating it from the feature points of the monitored object can be used.
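  • For the background-subtraction variant of this extraction step, a minimal OpenCV sketch follows; the video file name and the area threshold are illustrative, and this is only one of the known methods the text allows.

```python
import cv2

# Extract candidate monitored objects from wide-angle video via MOG2
# background subtraction; small blobs are discarded as noise.
cap = cv2.VideoCapture("wide_angle.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # clean up the mask
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 800:          # ignore small noise blobs
            x, y, w, h = cv2.boundingRect(c)  # one candidate monitored object
            print("object at", (x + w // 2, y + h // 2))
cap.release()
```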
  • Next, in step S104, the expected projection time table is consulted using the block and moving direction identified for each monitored object, and the expected projection time stored in the table is obtained.
  • FIG. 10 illustrates this acquisition: it shows that T23(-1,-1) is obtained from the expected projection time table as the expected projection time of the monitored object extracted in block B23, and T24(1,0) as that of the monitored object extracted in block B24, with T23(-1,-1) < T24(1,0) in this example.
  • In step S105, it is determined whether the expected projection time has been obtained for all monitored objects in the wide-angle moving image data. If the determination is NO, the process proceeds to step S112, attention shifts to the next monitored object, and the process loops back through steps S103 and S104. If the determination is YES, the process proceeds to step S106.
  • In step S106, a control command is transmitted to the camera device 50b, the second camera device, so as to acquire enlarged moving image data of the monitored object with the shorter expected projection time, that is, the monitored object extracted in block B23. This command instructs the camera device 50b, by pan/tilt/zoom control, to acquire enlarged moving image data of that object by optical zoom, and the camera device 50b acquires the data accordingly. A prioritization sketch follows below.
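  • The prioritization in step S106 might be sketched as follows; MonitoredObject is a hypothetical name and the expected times are illustrative.

```python
from dataclasses import dataclass

@dataclass
class MonitoredObject:
    block: tuple[int, int]          # block in which the object was extracted
    direction: tuple[float, float]  # observed moving direction
    expected_time_s: float          # from the expected projection time table

def next_target(objects: list[MonitoredObject]) -> MonitoredObject:
    """Pick the object expected to leave the designated effective area first."""
    return min(objects, key=lambda o: o.expected_time_s)

objects = [
    MonitoredObject(block=(2, 3), direction=(-1, -1), expected_time_s=3.5),  # B23
    MonitoredObject(block=(2, 4), direction=(1, 0), expected_time_s=8.0),    # B24
]
target = next_target(objects)
# A send_command(...) as in the earlier sketch would now pan/tilt/zoom camera 50b.
print("acquire enlarged moving image data of the object in block", target.block)
```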
  • In step S107, image analysis of the enlarged moving image data acquired from the camera device 50b, the second camera device, is performed. FIG. 11 shows an example of such enlarged moving image data. Because the camera device 50b is pan/tilt/zoom controlled and acquires enlarged moving image data of the monitored object by optical zoom, the accuracy of face recognition and face authentication can be improved.
  • In step S108, it is determined from the image analysis of the enlarged moving image data whether the monitored object is a person. If this determination is NO, the process proceeds to step S111, and the quantification of facial features is skipped.
  • If the determination in step S108 is YES, facial features are quantified, and in step S110 it is determined whether facial features of at least a predetermined amount, that is, sufficient information about the monitored object, have been acquired. If the determination in step S110 is YES, the process proceeds to step S111. If it is NO, the process proceeds to step S114, where it is first determined whether a predetermined time-out period has elapsed.
  • Such a time-out is provided because, even when the monitored object is a person, a face that does not appear in the enlarged moving image data cannot be captured merely by adjusting the zoom of the image pickup lens 22. Until the time-out period elapses, the system keeps trying to acquire facial features of at least the predetermined amount while adjusting the angle of view, as sketched below.
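  • The retry-until-time-out behavior around steps S110 and S114 might be sketched as follows; the feature-amount scale and the camera helper methods are hypothetical.

```python
import time

TIMEOUT_S = 10.0          # predetermined time-out period (assumed)
FEATURE_THRESHOLD = 0.8   # "predetermined value" of facial feature amount (assumed scale)

def acquire_features(camera, target) -> bool:
    """Keep adjusting the angle of view until enough facial features are
    acquired or the time-out elapses; True means sufficient information."""
    deadline = time.monotonic() + TIMEOUT_S
    while time.monotonic() < deadline:
        frame = camera.grab_enlarged_frame(target)   # hypothetical capture call
        amount = camera.face_feature_amount(frame)   # hypothetical analysis call
        if amount >= FEATURE_THRESHOLD:
            return True                              # step S110: YES
        camera.adjust_view(target)                   # retry with a tweaked angle of view
    return False                                     # step S114: time-out elapsed
```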
  • In step S111, it is determined whether sufficient features have been acquired for all monitored objects (or a time-out occurred where they could not be). If the determination is NO, the process proceeds to step S113, attention shifts to the next monitored object, and the series of steps for acquiring facial features of at least the predetermined amount is repeated. For example, when the acquisition of facial features of the monitored object extracted in block B23 is completed, step S113 turns to the monitored object extracted in block B24, and the camera device 50b is pan/tilt/zoom controlled to acquire its enlarged moving image data. If the determination in step S111 is YES, the process proceeds to step S116 and the face recognition (face authentication) process ends.
  • By the time the enlarged image of the next monitored object is acquired in step S113, that object has kept moving while the enlarged image of the first object was being acquired. However, because the first camera device continues to image the next monitored object even while the second camera device is under pan/tilt/zoom control, the next monitored object is never lost. Likewise, even if a new monitored object appears while the enlarged image of the first object is being acquired, the first camera device can capture it.
  • For this reason, the transition destination of step S113 may be set between steps S101 and S102 so that, after the expected projection times of the new and already-recognized monitored objects are compared again, the target of the next enlarged image is determined.
  • As described above, in the camera system 100 according to the present invention, the monitored objects extracted by image analysis are prioritized based on the expected projection times obtained from the expected projection time table, and the camera device 50b is pan/tilt/zoom controlled so as to give priority, in acquiring enlarged moving image data, to the monitored object expected to disappear from the moving image data (the designated effective area) soonest. With this configuration, accurate face recognition and face authentication can be realized without degrading the resolution of human faces in the monitored area, and the probability that face recognition or face authentication fails in the monitored area can be reduced.
  • In the camera system 100 according to the present invention, an expected projection time is set for each block of the designated effective area and each moving direction of the monitored object, and the expected projection time table must be prepared in advance. The times initially set in this table may differ from the actual expected projection times once the camera system 100 is in operation, so the table is preferably updated as the camera system 100 operates.
  • FIG. 12 is a flowchart of the table update processing of the expected projection time table in the camera system 100 according to the embodiment of the present invention. A program based on this flowchart can be stored in the memory 304 as the camera system application 350 and executed on the processor 302.
  • When the table update process starts in step S200, the process proceeds to step S201, and wide-angle moving image data is acquired from the camera device 50a, the first camera device.
  • In step S202, monitored objects are extracted by image analysis of the acquired wide-angle moving image data; a known method such as object extraction based on background subtraction can again be adopted. In step S203, the position (block) where each monitored object was extracted and its moving direction are identified, using, for example, a known method that calculates the moving direction from the object's feature points.
  • In step S204, attention is paid to a monitored object extracted in the previous step, and the time until it disappears from the designated effective area is measured. In step S205, the expected projection time table is updated based on the block identified in step S203, the moving direction of the object of interest, and the time measured in step S204.
  • As the update method, the table entry may simply be overwritten with the newly measured time, or replaced by the average of the newly measured time and the time already in the table; when averaging, the new measurement and the existing entry may also be weighted. The process then ends in step S206. A weighted-update sketch follows below.
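  • A weighted update of a table entry in step S205 might look like the following sketch; the weight ALPHA is an assumed tuning parameter, as the text does not fix the weighting.

```python
# Blend the newly measured time until disappearance into the expected
# projection time table entry for the object's block and moving direction.
# ALPHA = 1.0 would reproduce the simple overwrite variant mentioned above.

ALPHA = 0.2  # weight given to the new measurement (assumed)

def update_entry(table, block, direction, measured_s: float) -> None:
    """Weighted average of the new measurement and the existing table entry."""
    old = table[block][direction]
    table[block][direction] = ALPHA * measured_s + (1.0 - ALPHA) * old
```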
  • By updating the expected projection time table in this way, even if the initially set table deviates from the actual expected projection times, the table comes to hold increasingly accurate expected projection times as the cumulative operating time of the camera system 100 grows.
  • Next, the second embodiment of the present invention is described. So far, the operation of the camera system 100 with two camera devices 50 has been described, but the camera system 100 according to the second embodiment can also use three or more camera devices 50; here, an embodiment with four camera devices 50 is described. Mainly the differences from the embodiment described above are explained, and anything not described below is the same as in the previous embodiment.
  • FIG. 13 shows the outline configuration of the camera system 100 according to the second embodiment. The four camera devices 50 used may be four cameras similar to the one detailed in FIG. 2; the suffixes a, b, c, and d are used to distinguish them.
  • FIG. 14 shows the angles of view of the four camera devices 50a, 50b, 50c, and 50d, each at the widest angle of its image pickup lens 22. Different line types indicate the angle of view of each camera device 50, and the lines are intentionally offset vertically for legibility; the vertical angles of view of the camera devices may in fact be identical.
  • In the second embodiment, the wide-angle moving image data always covers the widest total angle of view spanned by all the camera devices 50a, 50b, 50c, and 50d constituting the camera system 100. The regions at both ends can be acquired only by the camera device 50a and the camera device 50d, so among the four camera devices, the camera devices 50a and 50d are used exclusively as first camera devices.
  • While the camera device 50b is acquiring wide-angle moving image data at its widest angle, the regions (X) and (Y) of FIG. 13 are covered by it, and the camera device 50c can be pan/tilt/zoom controlled to acquire enlarged moving image data at a designated angle of view. In this case the camera device 50b serves as the first camera device and the camera device 50c as the second. Conversely, while the camera device 50c is acquiring wide-angle moving image data at its widest angle, covering the regions (Y) and (Z) of FIG. 13, the camera device 50b can be pan/tilt/zoom controlled to acquire enlarged moving image data; then the camera device 50c serves as the first camera device and the camera device 50b as the second.
  • The overlapping imaging regions of the first and second camera devices are therefore the regions (X), (Y), and (Z) of FIG. 13, covered by the camera devices 50b and 50c, and the area that the user of the camera system 100 can set as the designated effective area lies within these regions.
  • By setting the designated effective area and the expected projection time table, the camera system 100 according to the second embodiment can likewise perform accurate face recognition and face authentication over a wider monitored area. Since the regions (X), (Y), and (Z) of FIG. 13 are where monitored objects are captured and image processing is performed, the expected times until a monitored object disappears from these regions are set. By operating the camera devices 50a to 50d in coordination under the control device, the second embodiment can monitor a wider range than the previous embodiment.
  • FIG. 15 shows an operation example of the camera system 100 according to the second embodiment. It shows the entire moving image data acquired by the camera system 100, in which a monitored object A and a monitored object B are present; the description assumes the expected projection time of the monitored object A is shorter than that of the monitored object B.
  • In FIG. 15(A), the pan/tilt/zoom-controlled camera device 50b acquires enlarged moving image data of the monitored object A while the camera device 50c acquires wide-angle moving image data: the camera device 50c serves as the first camera device and the camera device 50b as the second. When the acquisition of the enlarged moving image data of the monitored object A is finished, the camera system 100 subsequently executes processing to acquire enlarged moving image data of the monitored object B.
  • FIG. 15(B) shows the situation after FIG. 15(A): the pan/tilt/zoom-controlled camera device 50c acquires enlarged moving image data of the monitored object B while the camera device 50b acquires wide-angle moving image data. Here the camera device 50b serves as the first camera device and the camera device 50c as the second. A sketch of this role swap follows below.
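  • The role swap between the camera devices 50b and 50c might be coordinated as in the following sketch; the camera objects and their methods are hypothetical.

```python
def assign_roles(target_block, cam_b, cam_c):
    """Whichever of the two middle cameras is not needed for the enlarged image
    keeps the wide view, so the regions (X), (Y), and (Z) stay covered."""
    second = cam_b if cam_b.can_cover(target_block) else cam_c  # zooming camera
    first = cam_c if second is cam_b else cam_b                 # wide-angle camera
    first.set_widest_angle()       # first camera device: wide-angle moving image data
    second.point_at(target_block)  # second camera device: enlarged moving image data
    return first, second
```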
  • In this way, the angle of view given up by the camera device acting as the second camera device while it acquires enlarged moving image data is complemented by another camera device. Enlarged moving image data of the monitored object is therefore acquired without degrading resolution, while another camera device, acting as the first camera device, continues to acquire wide-angle moving image data of the entire monitored area.
  • As described above, according to the present invention, the first camera device acquires wide-angle moving image data while the second camera device acquires enlarged moving image data of the monitored object of interest. The moving image data of the entire monitored area is therefore never lost, and the resolution of the human face is not reduced, enabling accurate face recognition and face authentication.
  • In the embodiments above, the monitored object is a person, but it may also be a vehicle; in that case, instead of quantifying facial features from the enlarged image data, license plate information of the vehicle may be acquired.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present invention provides a camera system for magnifying and monitoring a plurality of objects to be monitored, wherein the objects to be monitored are magnified without a decrease in resolution, and a loss from a screen region is minimized. A first camera device (50a) acquires wide-angle video data. A second camera device (50b) is controlled for panning, tilting, and zooming, and acquires magnified video data having a designated angle of view. A computer system (300): (a) extracts an object to be monitored from captured data from the first camera device or the second camera device; (b) calculates an expected showing time with respect to the object to be monitored; and (c) transmits a control signal to the second camera device to capture a magnified image with respect to an object to be monitored that has a shorter showing time among the objects to be monitored.

Description

Camera system and camera system control method
 The present invention relates to a camera system and a method for controlling the camera system.
 In recent years, demand for human face recognition and face authentication using camera systems has been increasing in the surveillance field. Conventionally, such demands are met by acquiring video at a fixed angle of view with a fixed-focus lens and performing face recognition or face authentication on it. However, video obtained at such a fixed angle of view cannot yield high accuracy for distant objects.
 To address this, zoom control can be used to enlarge the monitored object. For example, Patent Document 1 (Japanese Unexamined Patent Publication No. 2013-219556) proposes a method of performing electronic zooming in order to zoom in on a position of interest.
Japanese Unexamined Patent Publication No. 2013-219556
 The problem with the electronic zoom technique described in Patent Document 1 is that the resolution of the human face, the object to be monitored, is inferior to that obtained with optical zoom, so a loss of accuracy is expected when the technique is applied to face recognition or face authentication.
 On the other hand, optical zoom can generally zoom only at the center of the angle of view, and images outside that region cannot be captured while zooming. It is therefore an object of the present invention to provide a technique that solves both the limited angle of view of optical zoom and the inferior resolution of electronic zoom.
 To solve the above problems, one representative camera system of the present invention comprises a first camera device, a second camera device whose imaging area overlaps that of the first camera device and which can acquire an enlarged image of a monitored object, and a control device that controls the second camera device.
 The control device (a) extracts monitored objects from the image data of the first camera device or the second camera device, (b) calculates an expected projection time for each monitored object, and (c) transmits a control signal to the second camera device so that it captures an enlarged image of the monitored object whose expected projection time is shorter.
 According to the present invention, in the area monitored by the camera system, the first camera device acquires an image of the monitored area while the second camera device acquires an enlarged image of the monitored object, so no image data of the monitored area is lost and a clear image of the monitored object is obtained.
 Problems, configurations, and effects other than those described above will be clarified by the following description of the embodiments.
 Brief description of the drawings:
 FIG. 1 shows the outline configuration of the camera system 100 according to an embodiment of the present invention.
 FIG. 2 shows an example of the internal configuration of the camera device 50 used in the camera system 100 according to an embodiment of the present invention.
 FIG. 3 is a block diagram of the computer system 300.
 FIG. 4 illustrates a setting example of the camera system 100 according to an embodiment of the present invention.
 FIG. 5 illustrates the preparation procedure when the camera system 100 according to an embodiment of the present invention is installed.
 FIG. 6 illustrates the concept of the expected projection time table in the camera system 100 according to an embodiment of the present invention.
 FIG. 7 illustrates the expected projection time by example.
 FIG. 8 is a flowchart of the face recognition (face authentication) processing in the camera system 100 according to an embodiment of the present invention.
 FIG. 9 shows an example of the acquired wide-angle moving image data.
 FIG. 10 illustrates the acquisition of the expected projection time.
 FIG. 11 shows an example of the acquired enlarged moving image data.
 FIG. 12 is a flowchart of the table update processing in the camera system 100 according to an embodiment of the present invention.
 FIG. 13 shows the outline configuration of the camera system 100 according to another embodiment of the present invention.
 FIG. 14 shows the angles of view of the camera devices 50 of the camera system 100 according to another embodiment.
 FIG. 15 shows an operation example of the camera system 100 according to another embodiment.
 Hereinafter, embodiments of the present invention will be described with reference to the drawings.
 (Embodiment) FIG. 1 shows the outline configuration of the camera system 100 according to an embodiment in which two cameras are used. One purpose of the camera system 100 according to the present invention is to acquire, with one camera device, moving image data at the maximum angle of view the system can cover (hereinafter "wide-angle moving image data") while acquiring, with another camera device, a locally enlarged image through an optical zoom lens (hereinafter "enlarged moving image data"), thereby monitoring a predetermined area.
 The number of camera devices used in the camera system 100 according to the present invention is not particularly limited. For a system using three or more cameras, the maximum angle of view that the camera system 100 can cover is the sum of the angles of view that the three or more cameras can cover.
 In the camera system of the present disclosure, at least one camera device 50 includes a moving image imaging device 25 provided with an image pickup lens 22 capable of optical zoom, and a pan head mechanism 30 that electrically pans and tilts the imaging device 25. Such a camera device 50 can acquire the enlarged moving image data described above.
 Together with such a camera device 50, the present invention can also use a camera device 50 that omits the pan head mechanism 30 and the lens zoom operation; such a device is configured to acquire wide-angle moving image data.
 In the embodiment shown in FIG. 1, two camera devices 50 each having a pan head mechanism 30 and a zoomable lens are used.
 (Hardware configuration) The camera system 100 according to the present invention includes, in addition to the plurality of camera devices 50 described above, a computer system 300 that transmits a control command to each camera device 50 via the signal cable 9 and receives the moving image data acquired by the moving image imaging devices 25 of the camera devices 50.
 In this embodiment, the computer system 300, external to the camera devices, serves as the higher-level control device that sends control commands to the camera devices 50 and receives and analyzes the moving image data from each; using such an external computer system 300, however, is not essential. For example, the higher-level control function may be provided inside one of the camera devices 50, which then plays the role of the control device.
 In the camera system 100 of the present disclosure, the computer system 300 transmits control commands related to pan, tilt, and zoom to the camera device 50. Based on such a command, the camera device 50 sets the turning angle and elevation angle of the pan head mechanism 30 and the zoom magnification of the image pickup lens 22, acquires moving image data with the moving image imaging device 25, and transmits that data to the computer system 300. As a result, the computer system 300 can obtain moving image data at any designated angle of view. The zoom operation of the image pickup lens 22 is driven by a motor (not shown) based on a control command from the computer system 300.
In the camera system 100 according to the present embodiment, two camera devices 50 are communicably connected to the computer system 300 via the signal cable 9. Hereinafter, the suffixes a and b may be used to distinguish the two camera devices 50. In FIG. 1, the power cable 43 is not shown.
In the embodiment shown in FIG. 1, the camera device 50a and the camera device 50b are set so that the widest angle of view of their respective image pickup lenses 22 is common. Note that setting the wide-angle-side angle of view of each camera device 50 to be common in this way is not essential to the present invention. In the camera system 100 according to the present embodiment, at least one of the camera device 50a and the camera device 50b constantly acquires moving image data at the widest angle of view.
In the present embodiment, of the camera device 50a and the camera device 50b, the camera device 50a, for example, constantly acquires moving image data at the widest angle of view, while the camera device 50b receives control commands related to pan, tilt, and zoom from the computer system 300 and acquires moving image data at the designated angle of view.
Here, the former moving image data is referred to as wide-angle moving image data, and the latter as enlarged moving image data. A camera device that acquires wide-angle moving image data is referred to as a first camera device, and a camera device that is pan/tilt/zoom controlled and acquires enlarged moving image data is referred to as a second camera device. In the present embodiment, as long as the camera device 50a can acquire moving image data at the widest angle of view, a camera device without the pan head mechanism 30 or the lens zoom operation may also be used as the camera device 50a.
(Camera Device Configuration) Next, the internal configuration of the camera device 50 will be described in more detail. FIG. 2 is a diagram showing an example of the internal configuration of the camera device 50 included in the camera system 100 according to the present invention. In FIG. 2, reference numeral 1 denotes a fixed portion, 2 a horizontal rotation portion, and 3 a camera housing. The horizontal rotation portion 2 is arranged on the fixed portion 1, the camera housing 3 is arranged on the upper side surface of the horizontal rotation portion 2, and a moving image imaging device 25 provided with an image pickup lens 22 capable of optical zoom, that is, a video camera, is housed in the camera housing 3.
The fixed portion 1, which forms the base of the pan head mechanism 30, serves as a pedestal for installing the camera device 50 at the required location. A horizontal rotation shaft 6 is attached vertically to it, which allows the horizontal rotation portion 2 to rotate to an arbitrary turning angle position, giving the moving image imaging device 25 a pan (turning angle) operation. A vertical rotation shaft 8 is held rotatably and horizontally by the horizontal rotation portion 2, and by attaching the camera housing 3 to this vertical rotation shaft 8, the camera housing 3 can rotate to an arbitrary elevation angle position, giving the moving image imaging device 25 a tilt (elevation angle) operation.
For this purpose, in addition to the horizontal rotation shaft 6 and the vertical rotation shaft 8 described above, the horizontal rotation portion 2 is provided with a horizontal rotation pulse motor 4, a vertical rotation pulse motor 5, a horizontal rotation worm gear 7, a camera control circuit 10, a vertical rotation belt 13, a power supply unit 40, an origin turning angle sensor 44 and an origin elevation angle sensor 45, a horizontal rotation motor drive circuit 46, a vertical rotation motor drive circuit 47, and the like.
The horizontal rotation motor drive circuit 46 and the vertical rotation motor drive circuit 47 are each controlled by the camera control circuit 10. The horizontal rotation shaft 6 is made hollow so that the two cables, the signal cable 9 and the power cable 43, can be drawn from the outside into the horizontal rotation portion 2 via the fixed portion 1.
The signal cable 9 is used to transmit the video signal (moving image data to the computer system 300) and the control signal (control commands from the computer system 300). After being drawn in from the outside, it is connected to the camera control circuit 10 and, via the cable 14, further connected to the moving image imaging device 25 in the camera housing 3 through the vertical rotation shaft 8, which is also hollow. The power cable 43, on the other hand, supplies AC power and is connected to the power supply unit 40.
The power supply unit 40 supplies operating power to the camera control circuit 10, the horizontal rotation motor drive circuit 46, and the vertical rotation motor drive circuit 47. The horizontal rotation motor drive circuit 46 supplies drive pulses to the horizontal rotation pulse motor 4 via the horizontal rotation motor cable 11, and the vertical rotation motor drive circuit 47 supplies drive pulses to the vertical rotation pulse motor 5 via the vertical rotation pulse motor cable 12.
When drive pulses are supplied from the horizontal rotation motor drive circuit 46, the horizontal rotation pulse motor 4 rotates and, as a result, the horizontal rotation worm gear 7 rotates, so that the entire horizontal rotation portion 2 rotates about the horizontal rotation shaft 6, moving the camera housing 3 to an arbitrary turning angle for the pan operation. The origin turning angle sensor 44 generates a detection signal when the turning angle reaches the origin turning angle (a predetermined turning angle set in advance).
Similarly, when drive pulses are supplied from the vertical rotation motor drive circuit 47, the vertical rotation pulse motor 5 rotates, which rotates the vertical rotation shaft 8 via the vertical rotation belt 13, so that the camera housing 3 rotates about the vertical rotation shaft 8 and can be moved to an arbitrary elevation angle for the tilt operation. The origin elevation angle sensor 45 generates a detection signal when the elevation angle reaches the origin elevation angle (a predetermined elevation angle set in advance).
Although not shown, a reduction mechanism, for example of the coaxial type, is interposed in the rotation transmission system from the vertical rotation pulse motor 5 through the vertical rotation belt 13 to the vertical rotation shaft 8, so that a predetermined reduction ratio, for example 100:1, is provided between the rotation speed of the vertical rotation pulse motor 5 and the displacement speed of the tilt angle of the camera housing 3. It goes without saying that a similar predetermined reduction ratio is provided for the displacement speed of the pan angle by the horizontal rotation worm gear 7.
Next, the operation of setting the imaging direction of the camera device 50 will be described. As described above, such a device uses pulse motors to drive the moving image imaging device 25 and adopts a so-called open-loop control method in which the turning angle and the elevation angle are determined uniquely by the number of pulses supplied to the pulse motors from the origin. Therefore, as initial processing, it is first necessary to move the turning angle and elevation angle of the moving image imaging device 25 to a preset origin (origin turning angle and origin elevation angle).
This initial processing is performed as follows. The camera control circuit 10 rotationally drives the horizontal rotation pulse motor 4 and the vertical rotation pulse motor 5 while monitoring the detection signals of the origin turning angle sensor 44 and the origin elevation angle sensor 45, and stops them when those detection signals are obtained, so that the turning angle and elevation angle of the moving image imaging device 25 are at the origin. As the origin, for example, the front may be taken as the origin turning angle, and the position where the camera housing 3 is at its maximum downward (or upward) angle from the horizontal may be taken as the origin elevation angle.
When the initial processing is completed, the operation shifts to the imaging direction setting operation described above, and the computer system 300 supplies the camera control circuit 10, via the signal cable 9, with signals containing information representing the horizontal rotation angle (turning angle) and the vertical rotation angle (elevation angle), that is, a horizontal rotation angle command and a vertical rotation angle command.
Then, the number of step pulses corresponding to each of the horizontal rotation angle command and the vertical rotation angle command is generated by the camera control circuit 10 and supplied to the horizontal rotation pulse motor 4 and the vertical rotation pulse motor 5. The two pulse motors thereby start rotating, and the moving image imaging device 25 in the camera housing 3 starts moving from the origin toward the commanded turning angle and elevation angle.
The horizontal rotation pulse motor 4 and the vertical rotation pulse motor 5 finally step through the commanded number of pulses from the origin position and then stop. Since, as a characteristic of a pulse motor, the angle from the origin corresponds uniquely to the number of pulses given, the moving image imaging device 25 in the camera housing 3 is stopped when it is correctly oriented in the commanded direction, and the imaging direction of the moving image imaging device 25 is thus set without feedback control.
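For illustration only, the conversion from a commanded angle to a step-pulse count under this open-loop scheme might look like the following sketch; the 100:1 reduction ratio is taken from the description above, while the steps-per-revolution figure and the function name are assumptions.

```python
MOTOR_STEPS_PER_REV = 200   # assumed pulse motor resolution (1.8 deg/step)
REDUCTION_RATIO = 100       # e.g. 100:1, as in the description above

def pulses_for_angle(target_deg: float) -> int:
    # Degrees of motor rotation needed for target_deg of housing rotation.
    motor_deg = target_deg * REDUCTION_RATIO
    # One step moves the motor 360 / MOTOR_STEPS_PER_REV degrees.
    return round(motor_deg * MOTOR_STEPS_PER_REV / 360.0)

# A 12.5 degree pan from the origin would be commanded as a pulse count:
print(pulses_for_angle(12.5))  # -> 694 pulses (approximately)
```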
(Computer System Configuration) Next, a configuration example of the computer system 300 will be described with reference to the drawings. FIG. 3 is a block diagram of a computer system 300 for implementing aspects of embodiments of the present disclosure. The mechanisms and devices of the various embodiments disclosed herein may be applied to any suitable computing system. The main components of the computer system 300 include one or more processors 302, a memory 304, a terminal interface 312, a storage interface 314, an I/O (input/output) device interface 316, and a network interface 318. These components may be interconnected via a memory bus 306, an I/O bus 308, a bus interface unit 309, and an I/O bus interface unit 310.
The computer system 300 may include one or more general-purpose programmable central processing units (CPUs) 302A and 302B, collectively referred to as the processor 302. In some embodiments the computer system 300 may include multiple processors, and in other embodiments the computer system 300 may be a single-CPU system. Each processor 302 executes instructions stored in the memory 304 and may include an on-board cache.
The memory 304 may store all or part of the programs, modules, and data structures that implement the functions described herein. For example, the memory 304 may store a camera system application 350 for controlling the camera system 100 according to the present invention. In some embodiments, the camera system application 350 may include instructions or descriptions that execute the functions described below on the processor 302, or instructions or descriptions that are interpreted by other instructions or descriptions. In some embodiments, the camera system application 350 may be implemented in hardware via semiconductor devices, chips, logic gates, circuits, circuit cards, and/or other physical hardware devices, instead of or in addition to a processor-based system. In some embodiments, the camera system application 350 may include data other than instructions or descriptions. In some embodiments, other data input devices (not shown) may be provided to communicate directly with the bus interface unit 309, the processor 302, or other hardware of the computer system 300.
The computer system 300 may include a bus interface unit 309 that handles communication among the processor 302, the memory 304, the display system 324, and the I/O bus interface unit 310. The computer system 300 may also include one or more devices, such as sensors, configured to collect data and provide that data to the processor 302. The display memory may be a dedicated memory for buffering video data. The display system 324 may be connected to a display device 326 such as a stand-alone display screen, a television, a tablet, or a portable device. In some embodiments, the display device 326 may include a speaker for rendering audio. Alternatively, a speaker for rendering audio may be connected to an I/O interface unit. In other embodiments, the functions provided by the display system 324 may be realized by an integrated circuit including the processor 302. Similarly, the functions provided by the bus interface unit 309 may be realized by an integrated circuit including the processor 302.
The I/O interface units have the function of communicating with various storage or I/O devices. For example, the terminal interface unit 312 allows the attachment of user I/O devices 320, including user output devices such as a video display device or speakers and a television, and user input devices such as a keyboard, mouse, keypad, touchpad, trackball, buttons, light pen, or other pointing devices. By operating a user input device through a user interface, a user can input data and instructions to the user I/O device 320 and the computer system 300 and receive output data from the computer system 300. The user interface may, for example, be displayed on a display device, reproduced through a speaker, or printed via a printer through the user I/O device 320. The I/O device interface 316 and the network interface 318 can be used for connection with the camera devices 50 included in the camera system 100.
In some embodiments, the computer system 300 may be a device that receives requests from other computer systems (clients) without a direct user interface, such as a multi-user mainframe computer system, a single-user system, or a server computer. In other embodiments, the computer system 300 may be a desktop computer, a portable computer, a laptop, a tablet computer, a pocket computer, a telephone, a smartphone, or any other suitable electronic device.
(Camera System Setup and Preparation for Use) Next, a setting method at the time of installation of the camera system 100 according to the present invention, which has the system configuration described above, will be described. FIG. 4 is a diagram explaining a setting example of the camera system 100 according to the embodiment of the present invention, and FIG. 5 is a diagram explaining the preparation procedure at the time of installing the camera system 100 according to the embodiment of the present invention. Note that the preparation procedure in FIG. 5 describes actions performed by the user and is not a flowchart executed by the computer system 300.
FIG. 4 shows an example of the moving image data acquired at the widest angle of view capturable by the camera device 50a and the camera device 50b when the camera system 100 according to FIG. 1 is installed. Since the imaging areas of the camera device 50a, the first camera device, and the camera device 50b, the second camera device, overlap, the moving image data at the widest angle of view in the present embodiment (wide-angle moving image data) can be acquired by the camera device 50a even while the camera device 50b is performing zoom imaging.
In the present embodiment, the description is based on a setting in which the camera system 100 performs face recognition of people on a road in a scene such as that illustrated (face authentication, if a matching database exists). The enlarged moving image data for such face recognition (face authentication) is acquired by the camera device 50b, the second camera device, which is provided with the zoom-equipped image pickup lens 22 and the pan head mechanism 30, is pan/tilt/zoom controlled, and acquires enlarged moving image data at a designated angle of view.
In such an example, when the camera system 100 is installed, a designated effective area is set, as shown in step S1 of FIG. 5. This designated effective area will be described with reference to FIG. 4. The designated effective area means the area, within the area the camera system can image, in which monitored objects are captured and image processing is performed.
For example, when performing face recognition (face authentication) processing in FIG. 4, the area that the camera system can image, excluding areas of no interest (for example, tree areas), that is, the area where face recognition (face authentication) of people on the road may be performed, can be set as the designated effective area. By setting the designated effective area in this way, the image processing area can be limited and processing can be performed efficiently.
However, it is not strictly necessary to set such a designated effective area; it goes without saying that monitored objects may be captured, and image processing may be performed, over the entire area that the camera system can image.
Next, when a designated effective area has been set, this area is divided into an arbitrary number of blocks (when no designated effective area is set, the area that the camera system can image may be divided into an arbitrary number of blocks). In this example, the area is divided into blocks 3 high and 8 wide. For each of these blocks, the expected time until a monitored object (for example, a moving body such as a person or an automobile) disappears from the designated effective area is set according to the direction in which the monitored object moves. In the present disclosure, such an expected time is referred to as the "expected reflection time". (When no designated effective area is set, the expected time until the monitored object disappears from the area the camera system can image is the expected reflection time.)
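As a purely illustrative sketch, the block containing a monitored object can be computed from the pixel coordinates of its center point; the 3-by-8 grid follows the example above, while the frame size and the function name are assumptions.

```python
GRID_ROWS, GRID_COLS = 3, 8       # block division used in the example above
FRAME_W, FRAME_H = 1920, 1080     # assumed resolution of the wide-angle image

def block_of(center_x: float, center_y: float) -> tuple[int, int]:
    # Map the object's center point to a block index (m, n),
    # with m as the row and n as the column.
    m = min(int(center_y * GRID_ROWS / FRAME_H), GRID_ROWS - 1)
    n = min(int(center_x * GRID_COLS / FRAME_W), GRID_COLS - 1)
    return m, n

print(block_of(700.0, 600.0))  # -> (1, 2): row 1, column 2 with 0-indexing
```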
The expected reflection time described above is used to predict for how long a monitored object subject to image analysis will appear in the wide-angle image. This provides information for deciding, when multiple monitored objects are present in the area the camera system can image, which monitored object should be given priority for image processing such as zooming.
Next, as shown in step S2 of FIG. 5, the user sets the expected reflection time for each block in the designated effective area and for each moving direction of the monitored object. Note that the expected reflection time may also be set by providing a preparation period after the camera system is constructed and using the in-operation update method described later. The expected reflection time set for each block and each moving direction of the monitored object is stored in an expected reflection time table. FIG. 6 is a diagram explaining the concept of such an expected reflection time table.
FIG. 6 shows that, when a monitored object recognized in a block Bmn of the designated effective area is moving in one of the eight directions (a) through (h), the expected time until that monitored object disappears from the designated effective area is set for each direction. For example, if the monitored object in block Bmn is recognized as moving in direction (a), this indicates that Tmn(0,1) is set as the expected reflection time. The setting of these expected times will be described later; they may be determined appropriately for each set direction based on, for example, the average moving speed of monitored objects, or they may be defined to vary according to the moving speed of the monitored object.
In the present embodiment, the expected reflection time of a monitored object is set for each of eight moving directions, but the number of set directions is not limited to eight. For example, it may be set for four directions (up, down, left, and right as seen on the page), or for more than eight directions, for example sixteen.
Furthermore, if the moving direction of a monitored object extracted by image analysis does not match any of the directions for which an expected time is set, the expected reflection time of the set direction closest to the moving direction of the monitored object may be applied. Alternatively, the moving direction of the monitored object may be decomposed as a vector into the set directions, and the expected time calculated as a combination of the expected times of those decomposed directions.
When the monitored object is stationary, the average of the expected reflection times in all directions may be applied. When the monitored object spans multiple blocks, the expected reflection time of the block containing the center point of the monitored object (for example, its center of gravity when the monitored object is viewed as a two-dimensional figure) may be applied, or the average of the expected reflection times of the multiple blocks may be applied.
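Under the same eight-direction scheme, one possible realization of the table lookup and direction quantization is sketched below; the table layout, the placeholder time values, and the function names are assumptions for the example.

```python
import math

# Hypothetical expected reflection time table: keyed by block (m, n) and by
# a unit direction (dx, dy) from the eight compass directions.
# Values are seconds; these numbers are placeholders.
time_table = {
    ((2, 3), (-1, -1)): 4.0,
    ((2, 4), (1, 0)): 9.0,
}

DIRECTIONS = [(0, 1), (1, 1), (1, 0), (1, -1),
              (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def nearest_direction(vx: float, vy: float) -> tuple[int, int]:
    # Quantize a motion vector to the closest of the eight set directions.
    angle = math.atan2(vy, vx)
    return max(DIRECTIONS,
               key=lambda d: math.cos(angle - math.atan2(d[1], d[0])))

def expected_reflection_time(block, vx, vy) -> float:
    if vx == 0 and vy == 0:
        # Stationary object: average over all directions set for this block.
        times = [t for (b, _), t in time_table.items() if b == block]
        return sum(times) / len(times)
    return time_table[(block, nearest_direction(vx, vy))]

print(expected_reflection_time((2, 3), -0.9, -1.1))  # -> 4.0
```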
Such an expected reflection time is illustrated by way of example in FIG. 7. In FIG. 7, if the monitored object in block B22 is moving in direction (c), the distance to the edge of the area the camera system can image is relatively long, so the expected reflection time of a monitored object moving in direction (c) is relatively long. If, on the other hand, this monitored object is moving in direction (g), the distance to the edge of the area the camera system can image is relatively short, so the expected reflection time of a monitored object moving in direction (g) is relatively short.
By preparing, as described above, an expected reflection time table in which the expected reflection time is set for each block in the designated effective area and for each moving direction of the monitored object, preparation for installing and starting to use the camera system 100 according to the present invention is complete.
(Example of Face Recognition Processing) Next, an example in which face recognition (face authentication) processing is performed for monitoring a predetermined area using the camera system 100 according to this embodiment will be described.
FIG. 8 is a diagram showing a flowchart of the face recognition (face authentication) processing in the camera system 100 according to the embodiment of the present invention. A program based on such a flowchart can be stored in the memory 304 as the camera system application 350 and configured to be executed on the processor 302.
In FIG. 8, when the face recognition (face authentication) processing is started in step S100, the process proceeds to step S101, and wide-angle moving image data is acquired from the camera device 50a, the first camera device. FIG. 9 is a diagram showing an example of the acquired wide-angle moving image data.
In step S102, the monitored objects are extracted by image analysis of the acquired wide-angle moving image data. For such extraction by image analysis, a known method such as object extraction based on background subtraction can be adopted. When the monitored object has a specific shape, such as a vehicle or a person, it may be extracted by a method such as pattern matching or skeleton detection.
In the next step S103, the position where each monitored object was extracted, more specifically its block, is identified, and the moving direction of the monitored object is further identified. A known method of calculating the moving direction from the feature points of the monitored object can be used to identify the moving direction. Referring to FIG. 9, this step identifies that a monitored object is present in each of blocks B23 and B24, and that each monitored object is moving in the direction of its arrow.
In step S104, the expected reflection time table is consulted using the identified block and moving direction of each monitored object, and the expected reflection time stored in the table is acquired. FIG. 10 is a diagram explaining the acquisition of the expected reflection time. FIG. 10 shows that T23(-1,-1) is acquired from the expected reflection time table as the expected reflection time of the monitored object extracted in block B23, and T24(1,0) as that of the monitored object extracted in block B24. Here it is assumed that T23(-1,-1) < T24(1,0).
In step S105, it is determined whether the expected reflection time has been acquired for all monitored objects in the wide-angle moving image data. If the determination in step S105 is NO, the process proceeds to step S112 and loops to perform steps S103 and S104 for the next monitored object. If the determination in step S105 is YES, the process proceeds to step S106.
In step S106, a control command is transmitted to the camera device 50b, the second camera device, so as to acquire enlarged moving image data of the monitored object with the shorter expected reflection time. In the example of FIGS. 9 and 10, given the relationship T23(-1,-1) < T24(1,0), a control command is transmitted to the camera device 50b to acquire enlarged moving image data of the monitored object extracted in block B23, which is expected to disappear from the designated effective area first. This control command instructs the camera device 50b, through pan/tilt/zoom control, to acquire enlarged moving image data of the monitored object extracted in block B23 using optical zoom. The camera device 50b acquires the enlarged moving image data based on this control command.
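As an illustrative continuation of the sketches above, the prioritization of step S106 might look as follows; the object identifiers and time values are placeholders.

```python
# Hypothetical result of step S104: each monitored object paired with the
# expected reflection time looked up from the table (placeholder seconds).
times = {"object-in-B23": 4.0,   # T23(-1,-1)
         "object-in-B24": 9.0}   # T24(1,0)

# Step S106: the object expected to disappear first gets the zoom.
target = min(times, key=times.get)
print(target)  # -> "object-in-B23", since T23(-1,-1) < T24(1,0)
# A pan/tilt/zoom command framing this object would then be sent to the
# second camera device, e.g. send_command("50b", ...) in the earlier sketch.
```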
In step S107, image analysis is performed on the enlarged moving image data acquired from the camera device 50b, the second camera device. FIG. 11 shows an example of the enlarged moving image data acquired from the camera device 50b. Here, in the camera system 100 according to the present embodiment, the camera device 50b is pan/tilt/zoom controlled to acquire enlarged moving image data of the monitored object by optical zoom, so the accuracy of face recognition and face authentication can be improved.
In step S108, it is determined, based on the image analysis of the enlarged moving image data, whether the monitored object is a person. If this determination is NO, the process proceeds to step S111, skipping the processing for quantifying facial features.
On the other hand, if the determination in step S108 is YES, the process proceeds to step S109, and the facial features of the monitored object, a person, are quantified. Conventional algorithms can be used as appropriate for quantifying the facial features. In step S110, it is determined whether a facial feature amount equal to or greater than a predetermined value could be acquired.
If the determination in step S110 is YES, it is judged that sufficient information (facial features) has been acquired for the monitored object, and the process proceeds to step S111. If the determination in step S110 is NO, the process proceeds to step S114, where it is first determined whether a predetermined time (timeout period) has elapsed.
Such a timeout period is set because, even if the monitored object is a person, no determination is possible by adjusting the zoom of the image pickup lens 22 when the face does not appear in the enlarged moving image data. When multiple monitored objects are present in the area the camera system can image, it is more efficient to divert processing to acquiring enlarged moving image data of another monitored object. Therefore, if the determination in step S114 is YES, acquisition of facial features for this monitored object is abandoned and the process proceeds to step S111.
On the other hand, if the determination in step S114 is NO, the process proceeds to step S115, where the angle of view is adjusted by the zoom operation of the image pickup lens 22 and enlarged moving image data is acquired; the facial features are quantified again in step S109, and the process loops through step S110, where it is determined whether a facial feature amount equal to or greater than the predetermined value could be acquired. In this way, the camera system 100 according to the present invention is configured so that, until the timeout period elapses, it keeps trying to acquire a facial feature amount equal to or greater than the predetermined value while adjusting the angle of view.
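A minimal sketch of this retry loop over steps S109, S110, S114, and S115 might look as follows; the threshold, the timeout value, and the callable stand-ins for the camera and the feature extractor are assumptions.

```python
import time

FEATURE_THRESHOLD = 0.8   # assumed minimum feature score (step S110)
TIMEOUT_SEC = 5.0         # assumed timeout period (step S114)

def acquire_face_features(grab_frame, extract_features, adjust_zoom):
    """Loop over steps S109/S110/S114/S115 until success or timeout."""
    deadline = time.monotonic() + TIMEOUT_SEC
    while time.monotonic() < deadline:          # step S114: timeout check
        score = extract_features(grab_frame())  # step S109: quantify features
        if score >= FEATURE_THRESHOLD:          # step S110: sufficient?
            return score
        adjust_zoom()                           # step S115: re-adjust the view
    return None  # timed out: give up and move to the next monitored object

# Example with trivial stand-ins for the camera and the analyzer:
print(acquire_face_features(lambda: "frame",
                            lambda f: 0.9,
                            lambda: None))  # -> 0.9
```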
In step S111, it is determined whether, for every monitored object, either a sufficient feature amount has been acquired or a timeout has occurred without one. If the determination in step S111 is NO, the process proceeds to step S113 and loops to perform, for the next monitored object, the series of processes for acquiring a facial feature amount equal to or greater than the predetermined value.
In the example of FIGS. 9 and 10, when the facial feature acquisition processing for the monitored object extracted in block B23 is completed, attention then turns, in step S113, to the monitored object extracted in block B24, and pan/tilt/zoom control is performed on the camera device 50b to acquire its enlarged moving image data. If the determination in step S111 is YES, the process proceeds to step S116 and the face recognition (face authentication) processing ends.
Note that when the enlarged image of the next monitored object is acquired via step S113, the next monitored object will have moved while the enlarged image of the first monitored object was being acquired. In the camera system of the present embodiment, however, the first camera device continues to image the next monitored object even while the second camera device is under pan/tilt/zoom control, so the next monitored object is never lost.
Furthermore, even if a new monitored object appears while the enlarged image of the first monitored object is being acquired, the first camera device can capture it. When a new monitored object appears in this way, the transition destination of step S113 in the flowchart of FIG. 8 may be set between steps S101 and S102, so that the expected reflection times of the new monitored object and the already-recognized monitored objects are compared again before the next target for enlarged image acquisition is determined.
As described above, the camera system 100 according to the present embodiment is configured to prioritize the monitored objects extracted by image analysis based on the expected reflection times acquired from the expected reflection time table, and to perform pan/tilt/zoom control on the camera device 50b so that enlarged moving image data is acquired preferentially for the monitored object expected to disappear sooner from (the designated effective area of) the moving image data. With such a configuration, accurate face recognition and face authentication can be realized in the monitored area without a loss of resolution of the human face, and the probability of failing at face recognition or face authentication in the monitored area can be reduced.
(Setting of the Expected Reflection Time) It was explained that, when the camera system 100 according to the present invention is installed and configured, an expected reflection time table in which the expected reflection time is set for each block in the designated effective area and for each moving direction of the monitored object is prepared in advance. The expected times initially set in such a table may turn out to differ from the actual reflection times once the camera system 100 is in actual operation. It is therefore preferable that the expected reflection time table be configured to be updated as appropriate during actual operation of the camera system 100.
The update processing of the expected reflection time table in the camera system 100 is described below. FIG. 12 is a diagram showing a flowchart of the table update processing of the expected reflection time table in the camera system 100 according to the embodiment of the present invention. A program based on such a flowchart can be stored in the memory 304 as the camera system application 350 and configured to be executed on the processor 302.
In FIG. 12, when the table update processing is started in step S200, the process proceeds to step S201, and wide-angle moving image data is acquired from the camera device 50a, the first camera device.
In step S202, the monitored objects are extracted by image analysis of the acquired wide-angle moving image data. For such extraction by image analysis, a known method such as object extraction based on background subtraction can be adopted.
In the next step S203, the position where the monitored object was extracted, more specifically its block, is identified, and the moving direction of the monitored object is further identified. A known method of calculating the moving direction from the feature points of the monitored object can be used to identify the moving direction.
In step S204, attention is paid to the monitored object extracted in the previous step, and the time until that monitored object disappears from the designated effective area is measured.
Subsequently, in step S205, the expected reflection time table is updated using the position (block) identified in step S203, the moving direction of the monitored object of interest, and the time until the disappearance of that monitored object measured in step S204.
When updating the expected reflection time table, it is possible to adopt a method of overwriting with the newly measured time, or a method of taking the average of the newly measured time and the expected time already recorded in the table and updating with that average. When taking the average, the newly measured time and the expected time already recorded in the table may be weighted. Then, in step S206, the table update processing ends.
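As one possible realization of the weighted-average variant, the sketch below uses an exponentially weighted update; the weight value and the placeholder table contents are assumptions.

```python
ALPHA = 0.2  # assumed weight given to each newly measured time

def update_entry(table, block, direction, measured_sec):
    key = (block, direction)
    if key not in table:
        table[key] = measured_sec  # no prior expectation: store as-is
    else:
        # Weighted average of the new measurement and the stored expectation.
        table[key] = ALPHA * measured_sec + (1.0 - ALPHA) * table[key]

table = {((2, 3), (-1, -1)): 4.0}      # placeholder prior expectation
update_entry(table, (2, 3), (-1, -1), 3.0)
print(table[((2, 3), (-1, -1))])       # -> 3.8, pulled toward the measurement
```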
By performing such update processing on the expected reflection time table, even if there is a discrepancy between the initially set table and the actual reflection times, more accurate expected reflection times come to be recorded in the table as the cumulative operating time of the camera system 100 increases.
(Second Embodiment) Next, a second embodiment of the present invention will be described. While the previous embodiment described the operation of a camera system 100 consisting of two camera devices 50, the camera system 100 according to the second embodiment can also use three or more camera devices 50. Here, an embodiment of a camera system 100 consisting of four camera devices 50 will be described.
The following description focuses on the differences from the embodiment described above; points not described below are the same as in the previous embodiment.
FIG. 13 is a diagram showing the outline configuration of the camera system 100 according to the second embodiment. The four camera devices 50 used in the camera system 100 according to the second embodiment can be four of the same type as that described in detail in FIG. 2. The suffixes a, b, c, and d are used to distinguish these four camera devices 50.
FIG. 14 is a diagram showing the angles of view of the four camera devices 50a, 50b, 50c, and 50d. FIG. 14 shows the widest angle of view of the image pickup lens 22 of each of the camera devices 50a, 50b, 50c, and 50d. In FIG. 14, different types of line are used for the angles of view so that the angle of view of each camera device 50 can be distinguished, and the lines indicating the angles of view are intentionally offset in the vertical direction. The vertical angles of view of the camera devices may therefore actually coincide.
In the camera system 100 according to the second embodiment, wide-angle moving image data is always acquired for the image at the widest angle of view that can be covered by the camera devices 50a, 50b, 50c, and 50d constituting the camera system 100. Of the overall angle of view shown in FIG. 13, the wide-angle moving image data at both ends can be acquired only by the camera device 50a and the camera device 50d; of the four camera devices, the camera device 50a and the camera device 50d are therefore used exclusively in the first camera device role.
While the camera device 50b is acquiring wide-angle moving image data at the widest angle of view, the areas shown at (X) and (Y) in FIG. 13 can be covered by the camera device 50b, so the camera device 50c can be utilized under pan/tilt/zoom control to acquire enlarged moving image data at a designated angle of view. In this case, the camera device 50b is used as the first camera device and the camera device 50c as the second camera device.
While the camera device 50c is acquiring wide-angle moving image data at the widest angle of view, the areas shown at (Y) and (Z) in FIG. 13 can be covered by the camera device 50c, so the camera device 50b can be utilized under pan/tilt/zoom control to acquire enlarged moving image data at a designated angle of view. In this case, the camera device 50c is used as the first camera device and the camera device 50b as the second camera device.
Therefore, in the second embodiment, the overlapping imaging areas of the first camera device and the second camera device are the areas (X), (Y), and (Z) in FIG. 13, and by utilizing the camera devices 50b and 50c as camera devices serving as both the first and the second camera device, operation similar to that of the previous embodiment becomes possible. Here, the areas that the user of the camera system 100 can set as the designated effective area lie within the areas (X), (Y), and (Z) of FIG. 13.
With the camera system 100 according to the second embodiment as well, by setting the designated effective area and the expected reflection time table as in the previous embodiment, face recognition and face authentication can be performed accurately over a wider monitored area.
Note that when setting the expected reflection time table in the second embodiment, since the areas (X), (Y), and (Z) of FIG. 13 are the areas in which monitored objects can be captured and image processing can be performed, the expected times are set as the times until a monitored object disappears from the areas (X), (Y), and (Z) of FIG. 13.
Furthermore, in the second embodiment, by using the camera devices 50a through 50d in coordination under the control device, a wider range can be monitored than in the previous embodiment.
FIG. 15 is a diagram showing an operation example of the camera system 100 according to the second embodiment. FIG. 15 shows the overall moving image data acquired by the camera system 100 according to the second embodiment, in a situation where monitored object A and monitored object B are present in the moving image data. The following description assumes that the expected reflection time of monitored object A is shorter than that of monitored object B.
In FIG. 15(A), for monitored object A, whose expected reflection time is shorter, enlarged moving image data is acquired by the pan/tilt/zoom controlled camera device 50b, while the camera device 50c acquires wide-angle moving image data. In FIG. 15(A), the camera device 50c is used as the first camera device and the camera device 50b as the second camera device. Once a facial feature amount equal to or greater than the predetermined value has been acquired for monitored object A in FIG. 15(A), the camera system 100 then, as described above, executes the processing to acquire enlarged moving image data of monitored object B.
FIG. 15(B) shows a later moment than FIG. 15(A): for monitored object B, the pan-tilt-zoom-controlled camera device 50c acquires magnified moving image data while camera device 50b acquires wide-angle moving image data. In FIG. 15(B), camera device 50b is used as the first camera device and camera device 50c as the second camera device.
In the camera system 100 according to the second embodiment, built from three or more camera devices as above, the angle of view of whichever camera device is acting as the second camera device to acquire magnified moving image data is covered by another camera device. Magnified moving image data of the monitored object is thus acquired without any loss of resolution, while another camera device, acting as the first camera device, continues to acquire wide-angle moving image data of the entire monitoring area.
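To make the role switching of FIGS. 15(A) and 15(B) concrete, a control loop along the following lines could be used. The Protocol interfaces, the camera-selection policy, and FEATURE_THRESHOLD are assumptions introduced for illustration; the patent does not define a programming API.

```python
# A minimal sketch of the role-switching control loop, under assumed interfaces.
from typing import List, Protocol

FEATURE_THRESHOLD = 0.8  # assumed score at which enough facial features exist

class Target(Protocol):
    expected_projection_time: float  # seconds until it leaves the overlap

class Camera(Protocol):
    def set_wide_angle(self) -> None: ...
    def pan_tilt_zoom_to(self, obj: Target) -> None: ...
    def face_feature_score(self, obj: Target) -> float: ...
    def track(self, obj: Target) -> None: ...

def monitor_pass(cameras: List[Camera], targets: List[Target]) -> None:
    """Zoom on targets in order of shortest expected projection time,
    keeping every other camera on wide angle so coverage never lapses."""
    for obj in sorted(targets, key=lambda t: t.expected_projection_time):
        # Assumed selection policy: any camera that can reach the object
        # becomes the "second camera device" for this target.
        zoom_cam = cameras[0]
        for cam in cameras[1:]:
            cam.set_wide_angle()        # these act as "first camera devices"
        zoom_cam.pan_tilt_zoom_to(obj)  # this one magnifies the target
        # Track until the facial features cross the threshold, then move on
        # to the target with the next-shortest expected projection time.
        while zoom_cam.face_feature_score(obj) < FEATURE_THRESHOLD:
            zoom_cam.track(obj)
```

In the FIG. 15 example, the first pass would assign camera device 50b as the zoom camera for object A, and the next pass camera device 50c for object B, with the other camera holding the wide-angle view each time.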
As described above, according to the present invention, in the area monitored by the camera system 100, the first camera device acquires wide-angle moving image data while the second camera device acquires magnified moving image data of the monitored object of interest. No moving image data of the monitoring area is therefore lost, the resolution of a person's face is never degraded, and accurate face recognition and face authentication can be realized.
Further, although the monitored object in the present embodiment is a person, it may instead be a vehicle. When the monitored object is a vehicle, the information on the vehicle's license plate may be acquired instead of quantifying facial features from the magnified image data.
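A hedged sketch of this vehicle variant: the dispatch below swaps the face-feature stage for a license-plate reading stage. Both leaf functions are hypothetical placeholders, not interfaces from the patent.

```python
# Illustrative dispatch on the kind of monitored object; the two leaf
# functions are placeholder assumptions standing in for real pipelines.
from typing import Any, Union

def quantify_face_features(frame: Any) -> float:
    raise NotImplementedError  # face-feature quantification (main embodiment)

def read_license_plate(frame: Any) -> str:
    raise NotImplementedError  # e.g. an OCR stage extracting the plate text

def extract_identifier(frame: Any, target_kind: str) -> Union[float, str]:
    """Route the magnified frame to the extractor matching the target kind."""
    if target_kind == "person":
        return quantify_face_features(frame)
    if target_kind == "vehicle":
        return read_license_plate(frame)
    raise ValueError(f"unsupported target kind: {target_kind!r}")
```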
1: fixed part; 2: horizontal rotation part; 3: camera housing; 4: horizontal-rotation pulse motor; 5: vertical-rotation pulse motor; 6: horizontal-rotation shaft; 7: horizontal-rotation worm gear; 8: vertical-rotation shaft; 9: signal cable; 10: camera control circuit; 13: vertical-rotation belt; 14: cable; 22: imaging lens; 25: moving image pickup device; 30: pan-tilt head mechanism; 40: power supply unit; 43: power cable; 44: origin turning-angle sensor; 45: origin elevation-angle sensor; 46: horizontal-rotation motor drive circuit; 47: vertical-rotation motor drive circuit; 50: camera device; 100: camera system; 300: computer system; 302: processor; 302A, 302B: general-purpose programmable central processing unit (CPU); 304: memory; 306: memory bus; 308: I/O bus; 309: bus interface unit; 310: I/O bus interface unit; 312: terminal interface; 314: storage interface; 316: I/O (input/output) device interface; 318: network interface; 320: user I/O device; 322: storage device; 324: display system; 326: display device; 330: network; 350: camera system application

Claims (4)

  1.  A camera system comprising:
     a first camera device;
     a second camera device having an imaging area overlapping the imaging area of the first camera device and capable of acquiring a magnified image of a monitored object; and
     a control device that controls the second camera device,
     wherein the control device (a) extracts monitored objects from the imaging data of the first camera device or the second camera device, (b) calculates an expected projection time for each monitored object, and (c) transmits a control signal to the second camera device so as to capture a magnified image of the monitored object whose expected projection time is shorter.
  2.  The camera system according to claim 1, wherein, in order to calculate the expected projection time, the control device holds an expected projection time table according to the position and moving direction of the monitored object.
  3.  The camera system according to claim 2, wherein the expected projection time table is provided for each of a plurality of blocks into which the overlapping imaging areas of the first camera device and the second camera device are divided.
  4.  A control method for a camera system comprising a first camera device, a second camera device having an imaging area overlapping the imaging area of the first camera device and capable of acquiring a magnified image of a monitored object, and a control device that controls the second camera device, the method comprising the steps of:
     (1) extracting monitored objects in the overlapping imaging areas of the first camera device and the second camera device;
     (2) calculating, for each monitored object, an expected projection time based on the position at which the monitored object was recognized and the moving direction of the monitored object;
     (3) capturing, with the second camera device, a magnified image of the monitored object whose expected projection time is shorter; and
     (4) when a plurality of monitored objects exist, capturing a magnified image of the monitored object whose expected projection time is shortest among the monitored objects other than the one imaged in step (3).
PCT/JP2021/033109 2020-09-18 2021-09-09 Camera system and camera system control method WO2022059584A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022550505A JP7472299B2 (en) 2020-09-18 2021-09-09 Camera system and method for controlling camera system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-156839 2020-09-18
JP2020156839 2020-09-18

Publications (1)

Publication Number Publication Date
WO2022059584A1 2022-03-24

Family

ID=80776037

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/033109 WO2022059584A1 (en) 2020-09-18 2021-09-09 Camera system and camera system control method

Country Status (2)

Country Link
JP (1) JP7472299B2 (en)
WO (1) WO2022059584A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006332881A (en) * 2005-05-24 2006-12-07 Canon Inc Monitor photographing system, photographing method, computer program, and recording medium


Also Published As

Publication number Publication date
JPWO2022059584A1 (en) 2022-03-24
JP7472299B2 (en) 2024-04-22

Similar Documents

Publication Publication Date Title
US10475312B2 (en) Monitoring camera and monitoring camera control method
TWI442328B (en) Shadow and reflection identification in image capturing devices
WO2014044161A1 (en) Target tracking method and system for intelligent tracking high speed dome camera
US8223214B2 (en) Camera system with masking processor
WO2020182176A1 (en) Method and apparatus for controlling linkage between ball camera and gun camera, and medium
CN110072078B (en) Monitoring camera, control method of monitoring camera, and storage medium
JP2006245648A (en) Information processing system, information processing apparatus, information processing method, program and recording medium
CN110944101A (en) Image pickup apparatus and image recording method
JP6624800B2 (en) Image processing apparatus, image processing method, and image processing system
KR102193984B1 (en) Monitoring System and Method for Controlling PTZ using Fisheye Camera thereof
CN110351475B (en) Image pickup system, information processing apparatus, control method therefor, and storage medium
US9386280B2 (en) Method for setting up a monitoring camera
US10643315B2 (en) Information processing apparatus, information processing method, and recording medium
WO2022059584A1 (en) Camera system and camera system control method
JP5631065B2 (en) Video distribution system, control terminal, network camera, control method and program
JP2012119971A (en) Monitoring video display unit
EP3367353B1 (en) Control method of a ptz camera, associated computer program product and control device
JPH1023465A (en) Image pickup method and its device
WO2019183808A1 (en) Control method, control device, imaging system, aircraft and storage medium
JP2019153986A (en) Monitoring system, management apparatus, monitoring method, computer program, and storage medium
KR100736565B1 (en) Method of taking a panorama image and mobile communication terminal thereof
TWI507028B (en) Controlling system and method for ptz camera, adjusting apparatus for ptz camera including the same
JP3034891B2 (en) Image display device
US20240022812A1 (en) Image capturing system, control apparatus, image capturing apparatus, and display apparatus constituting the system, control method, and display method
WO2023105598A1 (en) Image processing device, image processing system, and image processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21869271

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022550505

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21869271

Country of ref document: EP

Kind code of ref document: A1