WO2022180756A1 - Surveillance camera information transmitting device, surveillance camera information receiving device, surveillance camera system, and surveillance camera information receiving method - Google Patents
- Publication number
- WO2022180756A1 (PCT/JP2021/007232)
- Authority
- WO
- WIPO (PCT)
Classifications
- H—ELECTRICITY › H04—ELECTRIC COMMUNICATION TECHNIQUE › H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION › H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/617—Upgrading or updating of programs or applications for camera control
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
- H04N23/69—Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
- H04N23/80—Camera processing pipelines; Components thereof
- H04N7/00—Television systems › H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast › H04N7/181—Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
Definitions
- the present disclosure relates to a surveillance camera information transmission device, a surveillance camera information reception device, a surveillance camera system, and a surveillance camera information reception method.
- Patent Document 1 discloses a technique for a surveillance camera network provided with a plurality of surveillance cameras including a first surveillance camera and a second surveillance camera, in which color information of a monitored object photographed by the first surveillance camera is obtained in advance from the first surveillance camera by the second surveillance camera, and the second surveillance camera increases its sensitivity or resolution based on the color information, so that the monitored object can be clearly photographed from the moment it enters the shooting angle of view of the second surveillance camera.
- In Patent Document 1, although information regarding the color of a single monitored object is taken into account, events that satisfy conditions regarding features other than color cannot be dealt with appropriately. For example, when multiple monitored objects are predicted to enter the camera's own surveillance area, or when a monitored object is predicted to enter at high speed, it is difficult to photograph these monitored objects clearly.
- An object of the present invention is to provide an information receiving device for a surveillance camera that changes the settings related to shooting when an event occurs that satisfies a condition regarding a feature other than the color of the monitored object.
- the information receiving device for the surveillance camera includes: a reception control unit that receives, via a communication network, video analysis data that is analysis data of video captured by another surveillance camera; a video analysis data analysis unit that analyzes the received video analysis data with reference to a camera linkage table and a video change table, and issues a video control request for the self-monitoring camera based on the analysis result; and a video control unit that changes at least one parameter related to shooting by the self-monitoring camera in accordance with the video control request. The detection areas of the surveillance cameras connected to the communication network and the identification information of the surveillance camera linked to each detection area are registered in the camera linkage table.
- the video change table defines a non-color condition for changing the at least one parameter and the change content of the at least one parameter when the non-color condition is satisfied, and the non-color conditions include a first condition relating to the case where there are a plurality of moving bodies within the angle of view of the self-monitoring camera, or a second condition relating to another non-color feature.
- With this surveillance camera information receiving device, the settings related to shooting can be changed when an event occurs that satisfies a condition regarding a feature other than the color of the monitored object.
- FIG. 10 is a diagram for showing an operation by a deep learning inference processing unit;
- FIG. 5A is a plan view looking down from above on a floor on which a plurality of cameras A to F are installed;
- FIG. 5B is a diagram showing an image at the angle of view of camera E;
- FIG. 5C is a diagram showing an image at the angle of view of camera D;
- FIG. 5D is a diagram showing an example of the camera linkage table;
- FIG. 6A is a diagram showing an example of video analysis data;
- FIG. 6B is a diagram showing another example of video analysis data;
- FIG. 7 is a diagram showing an example of the video change table;
- a diagram showing an operation example of processing by the video analysis data analysis unit;
- a diagram showing a hardware configuration example of the information transmission device and the information receiving device of the surveillance cameras;
- a diagram showing another hardware configuration example of the information transmission device and the information receiving device of the surveillance cameras;
- a flow chart of the surveillance camera and the information transmission device for the surveillance camera;
- a flow chart of the surveillance camera and the information receiving device for the surveillance camera;
- a diagram showing still another example of the video change table;
- FIG. 1 is a system configuration diagram of a surveillance camera system 30 according to Embodiment 1.
- a monitoring camera system 30 includes a monitoring camera 10 and a monitoring camera 20, and the monitoring cameras 10 and 20 are connected via a communication network NW.
- the monitoring camera 10 has a function of analyzing captured images by deep learning, and can determine from the image V1 that a moving object such as a person or a drone has entered the angle of view of the imaging unit of the monitoring camera 10.
- the surveillance camera 10 can also track a person who has entered and determine that the person has left the angle of view.
- the video analysis data produced by the monitoring camera 10 is transmitted to the communication network NW; the monitoring camera 20 receives and analyzes the data, and can thereby change the operating conditions related to the image quality of the video V2 that it captures.
- the video analysis data transmitted to the communication network NW includes, for example, transmission-source camera name data, object type data indicating the type of the object shown in the video, appearance time data indicating when the object appeared within the angle of view, and exit time data indicating when the object left the angle of view.
- a monitoring device that monitors video data transmitted from the monitoring camera 10 and the monitoring camera 20 and a recording device (not shown) that records the video data may be connected to the communication network NW.
- FIG. 2 is a block diagram showing a configuration example of the surveillance camera 10 and the surveillance camera 20 that constitute the surveillance camera system 30.
- the monitoring camera 10 receives visible light Y, performs predetermined signal processing on data obtained from the visible light Y, and outputs a data series Z.
- the monitoring camera 20 receives a data series W from the communication network NW and visible light X as inputs, and performs predetermined signal processing on the data series W.
- the surveillance camera 10 includes an imaging unit 135, an information transmission device 100 for the surveillance camera 10, a camera cooperation table 160, and a detection area table 122.
- the information transmission device 100 also includes an image quality adjustment unit 130 , a video data storage unit 125 , a deep learning inference processing unit 120 , a video analysis data generation unit 115 and a transmission control unit 110 .
- each component included in the surveillance camera 10 and the information transmission device 100 will be described with reference to FIG. 2 .
- the imaging unit 135 captures an image within the angle of view of the surveillance camera 10 .
- Image data captured by the imaging unit 135 is sent to the image quality adjustment unit 130 .
- the imaging unit 135 is realized by an image sensor such as a CCD (Charge-Coupled Device) or a CMOS (Complementary Metal-Oxide-Semiconductor) sensor.
- the image quality adjustment unit 130 performs various image quality adjustments such as hue adjustment by AWB (Auto White Balance), exposure correction by AE (Auto Exposure), focus adjustment by AF (Auto Focus), and sharpness adjustment.
- the image quality adjustment unit 130 performs image quality adjustment on the imaged data sent from the image pickup unit 135 , and the image data whose image quality has been adjusted is sent to the image data storage unit 125 .
- the video data storage unit 125 temporarily stores the video data sent from the image quality adjustment unit 130 , and the video data stored in the video data storage unit 125 is referred to by the deep learning inference processing unit 120 .
- the video data storage unit 125 is shown as a component included in the information transmission device 100, but it may instead be provided as a component of the surveillance camera 10 rather than of the information transmission device 100.
- the deep learning inference processing unit 120 holds a neural network that has been trained in advance so that it can detect people and their movements.
- the deep learning inference processing unit 120 performs deep learning inference processing on the video data acquired from the video data storage unit 125 using the trained neural network, and thereby has functions for detecting that a person has appeared in the video data, that a person is moving within the video, and that a person who appeared has left the video data.
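The patent does not specify how the appearance and exit detections are derived from per-frame inference results, but one minimal model (a hypothetical sketch, not the patented method) is a set difference over the tracked person identification numbers reported for consecutive frames:

```python
# Hypothetical sketch: compare the person IDs reported by inference for two
# consecutive frames; IDs new to the current frame "appeared", IDs that are
# gone "left" the video.

def frame_events(prev_ids, curr_ids):
    """Return (appeared, left): IDs new in this frame and IDs that vanished."""
    return curr_ids - prev_ids, prev_ids - curr_ids

appeared, left = frame_events({"XXXXXXXX"}, {"XXXXXXXX", "YYYYYYYY"})
print(appeared, left)  # person YYYYYYYY appeared, nobody left
```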
- the detection area table 122 holds area information within the angle of view, and the area information is referred to by the deep learning inference processing unit 120 . Appearance information and exit information of a person detected by the inference of the deep learning inference processing unit 120 are associated with area information.
- the detection area table 122 is implemented by, for example, a memory included in the surveillance camera 10 .
- FIG. 3A shows an example of the detection area table 122
- FIG. 3B shows a diagram in which the detection area is superimposed on the actual angle of view.
- detection area 1, detection area 2, and detection area 3 are registered.
- As coordinates defining detection area 1, coordinate 1 (829, 250), coordinate 2 (829, 1919), coordinate 3 (1079, 1919), and coordinate 4 (1079, 250) are set, and detection area 1 is defined as the rectangular area formed by these coordinates 1 to 4.
- the coordinates defining each area are set for the detection area 2 and the detection area 3 as well.
- the detection area can be specified manually when the monitoring camera is installed.
- FIG. 3B shows a diagram in which detection area 1, detection area 2, and detection area 3 are superimposed on the actual angle of view.
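The table-plus-lookup described above can be sketched as follows. This is an illustrative assumption about the data layout: only the four corner coordinates of detection area 1 come from FIG. 3A, and the coordinates for areas 2 and 3 are made up.

```python
# Detection area table: each area is the axis-aligned rectangle spanned by
# its four registered (y, x) corner coordinates.

DETECTION_AREA_TABLE = {
    1: [(829, 250), (829, 1919), (1079, 1919), (1079, 250)],  # from FIG. 3A
    2: [(400, 0), (400, 249), (1079, 249), (1079, 0)],        # assumed
    3: [(400, 250), (400, 800), (828, 800), (828, 250)],      # assumed
}

def area_of(point, table=DETECTION_AREA_TABLE):
    """Return the ID of the detection area containing (y, x), or None."""
    y, x = point
    for area_id, corners in table.items():
        ys = [c[0] for c in corners]
        xs = [c[1] for c in corners]
        if min(ys) <= y <= max(ys) and min(xs) <= x <= max(xs):
            return area_id
    return None

print(area_of((900, 1000)))  # a point inside detection area 1
```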
- FIGS. 4A to 4D show examples of detecting the appearance of a person, detecting that a person is moving in an image, and detecting the exit of a person, which are realized by the deep learning inference processing unit 120.
- In FIG. 4A, the monitoring target Obj is not detected because it is not within the angle of view of the monitoring camera.
- the deep learning inference processing unit 120 detects the monitored object Obj as a person, assigns an identification number for distinguishing the monitored object Obj individually, and refers to the area information acquired from the detection area table 122 to associate detection area 3 with the monitored object Obj.
- the deep learning inference processor 120 transmits this detection information to the video analysis data generator 115 .
- As shown in FIG. 4C, the monitored object Obj detected in FIG. 4B moves rightward and enters detection area 1; then, as shown in FIG. 4D, it exits the angle of view.
- the deep learning inference processing unit 120 refers to the area information acquired from the detection area table 122 and associates detection area 1 with the monitored object Obj.
- the deep learning inference processing unit 120 transmits information that associates the detected monitoring target Obj with the detection area to the video analysis data creation unit 115 .
- the detection area set for each camera and the camera name (camera identification information) associated with each set detection area are registered in the camera linkage table 160, which is referred to by the video analysis data creation unit 115.
- the camera linkage table 160 is implemented by, for example, a memory included in the surveillance camera 10 .
- FIG. 5A is a plan view looking down on the floor on which the cameras A to F are installed.
- the dashed lines in FIG. 5A are lines for indicating the range of the horizontal angle of view of each camera.
- FIG. 5B is a diagram showing an image at the angle of view of camera E, and FIG. 5C is a diagram showing an image at the angle of view of camera D.
- FIG. 5D is an example of the camera linkage table 160. The camera linkage settings for camera D and camera E in the camera linkage table 160 are specifically described below.
- camera linkage 1 of camera D is set so that detection area 1 and camera A are linked.
- Camera linkage means a setting that indicates, for a detection area set on a certain camera, which other camera is linked to that detection area.
- As shown in FIGS. 5C and 5A, the camera installed ahead of detection area 1 of camera D is camera A, so camera A is set as the linked camera for detection area 1 of camera D.
- camera linkage 2 of camera D is set so that detection area 2 and camera E are linked. This is because the camera installed ahead of the detection area 2 of the camera D is the camera E, as shown in FIGS. 5C and 5A.
- camera linkage 1 of camera E is set so that detection area 1 and camera D are linked. This is because the camera installed ahead of the detection area 1 of the camera E is the camera D, as shown in FIGS. 5B and 5A.
- camera linkage 2 of camera E is set so that detection area 2 and camera B are linked. This is because the camera installed beyond the detection area 2 of camera E is camera B, as shown in FIGS. 5B and 5A.
- camera linkage 3 of camera E is set so that detection area 3 and camera F are linked. This is because the camera installed ahead of the detection area 3 of the camera E is the camera F, as shown in FIGS. 5B and 5A.
- the camera linkage table may include linkage tables for all cameras, or may include only the linkage table for the own camera. Also, the number of other cameras associated with the detection area of a certain camera need not be one, and multiple cameras may be associated.
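The linkage settings for cameras D and E described above can be sketched as a nested mapping. The concrete data layout is an assumption; only the linkage values follow FIG. 5D as described in the text.

```python
# Camera linkage table: for each camera, each of its detection areas maps to
# the camera installed ahead of (beyond) that area.

CAMERA_LINKAGE_TABLE = {
    "D": {1: "A", 2: "E"},
    "E": {1: "D", 2: "B", 3: "F"},
}

def linked_camera(camera, area, table=CAMERA_LINKAGE_TABLE):
    """Return the camera linked to detection area `area` of `camera`, or None."""
    return table.get(camera, {}).get(area)

print(linked_camera("E", 1))  # camera D lies ahead of detection area 1 of E
```

A table holding only the own camera's row, as permitted above, would simply restrict the outer mapping to one key.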
- the video analysis data creation unit 115 creates video analysis data using the detection information transmitted from the deep learning inference processing unit 120, the camera linkage information acquired by referring to the camera linkage table 160, and the name of its own camera as the transmission source.
- the video analysis data creation unit 115 transmits the video analysis data to the transmission control unit 110 .
- FIGS. 6A and 6B show examples of video analysis data created by the video analysis data creation unit 115.
- the video analysis data consists of the camera name of the sender, an identification number for distinguishing each monitored object, the object type, the time when the monitored object appeared within the camera's angle of view, the time when it left the angle of view, the area in which it appeared within the angle of view, the area from which it left the angle of view, and the name of the camera associated with the exit area.
- the video analysis data D1 in FIG. 6A means that the monitoring camera E detected the appearance of a person assigned identification number XXXXXXXX in detection area 3 at 11:20:32 on September 22, 2020. The video analysis data D2 in FIG. 6B means that the monitoring camera E detected that the person assigned identification number XXXXXXXX left detection area 1 at 11:20:37 on September 22, 2020, and that the camera associated with exit area 1 is camera D.
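The record described above can be sketched as a dataclass. The field names here are assumptions for illustration; only the set of fields follows FIGS. 6A and 6B.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VideoAnalysisData:
    source_camera: str                   # camera name of the sender
    object_id: str                       # identification number of the target
    object_type: str                     # e.g. "person"
    appear_time: Optional[str] = None    # time of appearance in the view
    appear_area: Optional[int] = None    # detection area of appearance
    exit_time: Optional[str] = None      # time of exit from the view
    exit_area: Optional[int] = None      # detection area of exit
    linked_camera: Optional[str] = None  # camera linked to the exit area

# Data D2 of FIG. 6B: person XXXXXXXX left detection area 1 of camera E,
# and the camera associated with exit area 1 is camera D.
d2 = VideoAnalysisData("E", "XXXXXXXX", "person",
                       exit_time="2020-09-22 11:20:37",
                       exit_area=1, linked_camera="D")
```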
- the transmission control unit 110 broadcasts the video analysis data received from the video analysis data creation unit 115 to the communication network NW. Instead of transmission by broadcast, the transmission control unit 110 may unicast or multicast the video analysis data to the linked cameras.
- the monitoring camera 20 includes an information receiving device 200 for the monitoring camera 20, a camera linkage table 260, an image change table 265, an imaging unit 235, and a lens 280.
- the information receiving device 200 includes a reception control section 250 , a video analysis data analysis section 255 and a video control section 270 .
- the video control unit 270 includes a video main control unit 271 , an image quality adjustment unit 272 , a lens control unit 273 and a video encoding unit 274 .
- each component included in the surveillance camera 20 and the information receiving device 200 will be described with reference to FIG.
- the reception control unit 250 receives the video analysis data transmitted by the surveillance camera 10 via the communication network NW, and transmits the received video analysis data to the video analysis data analysis unit 255 .
- the detection areas of the cameras and the names of the cameras linked thereto are registered in the camera linkage table 260, and are referred to by the video analysis data analysis unit 255.
- the camera cooperation table 260 is implemented by, for example, a memory included in the information receiving device 200 .
- a specific example of the camera linkage table 260 is a table such as that shown in FIG. 5D.
- the camera cooperation table 260 may have the same content as the camera cooperation table 160 .
- the camera cooperation table 260 may hold only a table related to its own camera. For example, in the case of surveillance camera F, a table consisting of only the items in the table shown in FIG. 5D and the row for camera F may be held.
- the image change table 265 defines predetermined conditions regarding features of the monitored object other than color (hereinafter sometimes referred to as non-color conditions) and, for the case where each non-color condition is satisfied, how to change the parameters related to image quality, how to change the parameters related to video encoding, and how to change the angle of view.
- the video change table 265 is implemented by, for example, a memory included in the information receiving device 200 .
- a specific example of the video change table 265 will now be described with reference to FIG. 7. As shown in FIG. 7, the video change table 265 defines a plurality of conditions regarding the number of people within the angle of view.
- Specifically, condition 1 ("the number of people within the angle of view becomes 1"), condition 2 ("the number of people within the angle of view becomes 2"), and condition 3 ("the number of people within the angle of view becomes 3 or more") are defined. All of these conditions are defined for the case where the number of people within the angle of view increases.
- For example, condition 2 includes the case where the number of people within the angle of view changes from 0 to 2 and the case where it changes from 1 to 2, but does not include the case where it changes from 3 to 2.
- In FIG. 7, resolution and sharpness are shown as parameters related to image quality, and bit rate and Q value (Quality Factor) are shown as parameters related to video encoding.
- For each condition, the table defines how to change the parameters related to image quality, the parameters related to video encoding, and the angle of view, relative to the settings used when the number of people within the angle of view is 0. For example, when condition 1 is satisfied, the camera settings are changed so that the resolution, bit rate, sharpness, and angle of view remain the same and the Q value increases. That is, the resolution, bit rate, and sharpness keep their zero-person settings, the Q value is set higher than its zero-person setting, and the angle of view keeps its zero-person setting.
- When condition 2 is satisfied, the camera settings are changed so that the resolution, sharpness, and angle of view remain the same while the bit rate and Q value increase.
- When condition 3 is satisfied, the camera settings are changed so that the resolution remains the same, the bit rate, sharpness, and Q value increase, and the angle of view is zoomed out.
- In FIG. 7, the resolution remains the same regardless of which condition is met, but the table may instead be defined so that the resolution increases when some condition is met; for example, the resolution may be increased when condition 3 is satisfied.
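The conditions and change contents above amount to a lookup keyed by the head count the angle of view is about to reach. The representation below is an assumption; the parameter values follow the FIG. 7 example as described in the text.

```python
VIDEO_CHANGE_TABLE = {
    # people: (resolution, bit_rate, sharpness, q_value, angle_of_view)
    1: ("same", "same", "same", "increase", "same"),              # condition 1
    2: ("same", "increase", "same", "increase", "same"),          # condition 2
    3: ("same", "increase", "increase", "increase", "zoom out"),  # condition 3
}

def changes_for(people):
    """Parameter changes when the number of people rises to `people`."""
    if people <= 0:
        return None                            # no condition applies
    return VIDEO_CHANGE_TABLE[min(people, 3)]  # 3 or more share condition 3

print(changes_for(5))  # condition 3: raise bit rate, sharpness, Q; zoom out
```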
- the video analysis data analysis unit 255 analyzes the video analysis data transmitted by the reception control unit 250 with reference to the camera linkage information held in the camera linkage table 260 and the video change information held in the video change table 265, and issues a video control request to the video main control unit 271 based on the analysis result.
- An example in which camera D analyzes received video analysis data using the camera linkage table 260 and the video change table 265 will now be described.
- Suppose that camera D receives video analysis data D-T1, D-T2, D-T3, and D-T4 in that order from the communication network NW.
- Video analysis data D-T1 indicates that a person (identification number: XXXXXXXX) has appeared in detection area 3 of camera E. Since this data is not needed by camera D, the video analysis data analysis unit 255 of camera D discards the received D-T1.
- Video analysis data D-T2 indicates that a person (identification number: YYYYYYYY) has left detection area 1 of camera B, and that camera A is the camera linked to detection area 1, the exit area. Since the linked camera is camera A, camera D does not need D-T2, and the video analysis data analysis unit 255 of camera D discards it.
- Video analysis data D-T3 indicates that a person (identification number: XXXXXXXX) has left detection area 1 of camera E, and that camera D is the camera linked to detection area 1, the exit area. Since the linked camera is camera D, camera D needs D-T3.
- According to the camera linkage table 260, the detection area within the angle of view of camera D in which a person who has left detection area 1 of camera E will appear is detection area 2. Assuming there is currently no person within the angle of view of camera D, condition 1 of the video change table 265 will be satisfied when the person (identification number: XXXXXXXX) appears, so the video analysis data analysis unit 255 of camera D increases in advance the Q value of detection area 2, where the person is predicted to appear. Since increasing the Q value lowers the video encoding compression rate, a high-quality image can be obtained from the moment the person appears, or immediately after.
- Video analysis data D-T4 indicates that a person (identification number: ZZZZZZZZ) has left detection area 2 of camera A, and that camera D is the camera linked to detection area 2, the exit area. Since the linked camera is camera D, camera D needs D-T4. According to the camera linkage table 260, the area within the angle of view of camera D in which a person who has left detection area 2 of camera A will appear is detection area 1. If one person (identification number: XXXXXXXX) is already within the angle of view of camera D, condition 2 of the video change table 265 is satisfied.
- Therefore, the video analysis data analysis unit 255 of camera D increases in advance the Q value of detection area 1, where the person (identification number: ZZZZZZZZ) is predicted to appear. Since increasing the Q value lowers the video encoding compression rate, a high-quality image can be obtained from the moment the person appears, or immediately after. Furthermore, the video analysis data analysis unit 255 of camera D increases the bit rate in advance so that the image quality of the entire screen can be maintained even though the amount of information within the angle of view increases with the number of people.
- The video analysis data analysis unit 255 of camera D may also perform control similar to the above example using another example of the video change table 265.
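The D-T1 to D-T4 walkthrough above can be condensed into one decision routine. This is a simplified sketch under assumed names and data layouts: filter out data whose linked camera is not our own, invert the linkage table to predict the appearance area, and look up the change for the new head count.

```python
LINKAGE = {"D": {1: "A", 2: "E"}}  # camera D's linkage rows, per FIG. 5D
CHANGES = {                        # video change table, per FIG. 7
    1: {"q_value": "increase"},
    2: {"q_value": "increase", "bit_rate": "increase"},
    3: {"q_value": "increase", "bit_rate": "increase",
        "sharpness": "increase", "angle_of_view": "zoom out"},
}

def predicted_area(own_camera, source_camera, linkage):
    """Detection area of our own view whose linked camera is the sender."""
    for area, cam in linkage[own_camera].items():
        if cam == source_camera:
            return area
    return None

def handle_exit_data(own_camera, current_people, data, linkage, changes):
    """Return (area_to_prepare, parameter_changes), or None if irrelevant."""
    if data.get("exit_area") is None or data.get("linked_camera") != own_camera:
        return None  # discarded, as with D-T1 and D-T2
    area = predicted_area(own_camera, data["source_camera"], linkage)
    return area, changes[min(current_people + 1, 3)]

# D-T3: person left detection area 1 of camera E; nobody is in D's view yet.
d_t3 = {"source_camera": "E", "exit_area": 1, "linked_camera": "D"}
print(handle_exit_data("D", 0, d_t3, LINKAGE, CHANGES))
# condition 1: raise the Q value of detection area 2 in advance
```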
- the video main control unit 271 receives the video control request transmitted from the video analysis data analysis unit 255 and, according to its content, transmits an image quality adjustment request to the image quality adjustment unit 272, a field angle control request (lens control request) to the lens control unit 273, and a video encoding change request to the video encoding unit 274.
- the imaging unit 235 images the scene within the angle of view of the surveillance camera.
- the imaging unit 235 is implemented by an image sensor such as a CCD (Charge-Coupled Device) or a CMOS (Complementary Metal-Oxide-Semiconductor) sensor.
- the imaging data captured by the imaging unit 235 is sent to the image quality adjustment unit 272 .
- the lens control unit 273 receives a lens control request from the video main control unit 271 . Upon receiving the request for changing the angle of view, the lens control unit 273 requests the lens 280 to control the angle of view (lens control). The lens 280 receives a view angle control request (lens control request) from the lens control unit 273 and changes the zoom magnification of the lens.
- the image quality adjustment unit 272 executes the video processing request requested by the video main control unit 271 on the imaging data sent from the imaging unit 235 .
- When the image quality adjustment unit 272 receives a resolution change request, it changes the size of the imaging data to the designated resolution.
- When the image quality adjustment unit 272 receives a sharpness change request, it adjusts the image quality of the imaging data with the specified sharpness.
- the video encoding unit 274 performs video encoding on the video data whose image quality has been adjusted by the image quality adjusting unit 272 .
- when the video encoding unit 274 receives a bit rate change request, it video-encodes the video data at the specified bit rate.
- when the video encoding unit 274 receives a request to change the Q value, it video-encodes the video data with the specified Q value.
- the video encoding unit 274 transmits the video data whose image quality has been adjusted by the image quality adjusting unit 272 or the video data encoded by the video encoding unit 274 to, for example, a monitoring device connected to the communication network NW.
- the information transmitting device 100 or the information receiving device 200 comprises a processor 301 and a memory 302 connected to the processor 301.
- the program stored in the memory 302 is read out and executed by the processor 301, whereby the image quality adjustment unit 130, the deep learning inference processing unit 120, the video analysis data creation unit 115, and the transmission control unit 110 of the information transmitting device 100 are realized.
- the video data storage unit 125 is implemented by the memory 302 .
- when the video data storage unit 125 is a component of the surveillance camera 10 rather than of the information transmitting device 100, the video data storage unit 125 is implemented by a memory (not shown) of the surveillance camera 10. Further, the program stored in the memory 302 is read out and executed by the processor 301, whereby the reception control unit 250, the video analysis data analysis unit 255, the video main control unit 271, the image quality adjustment unit 272, the lens control unit 273, and the video encoding unit 274 of the information receiving device 200 are realized. The programs may be implemented as software, firmware, or a combination of software and firmware.
- Examples of the memory 302 include nonvolatile or volatile semiconductor memories such as RAM (Random Access Memory), ROM (Read Only Memory), flash memory, EPROM (Erasable Programmable Read Only Memory), and EEPROM (Electrically-EPROM), as well as magnetic disks, flexible disks, optical disks, compact disks, mini disks, and DVDs.
- the information transmitting device 100 or the information receiving device 200 includes a processing circuit 303 instead of the processor 301 and memory 302.
- the processing circuit 303 implements the image quality adjustment unit 130 , the deep learning inference processing unit 120 , the video analysis data creation unit 115 and the transmission control unit 110 of the information transmission device 100 .
- the processing circuit 303 realizes the reception control unit 250, the video analysis data analysis unit 255, the video main control unit 271, the image quality adjustment unit 272, the lens control unit 273, and the video encoding unit 274 of the information receiving device 200.
- the processing circuit 303 is, for example, a single circuit, a composite circuit, a programmed processor, a parallel programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination thereof.
- step ST101 the imaging unit 135 of the monitoring camera 10 captures an image within the angle of view of the monitoring camera 10.
- the imaging unit 135 sends the captured imaging data to the image quality adjustment unit 130 of the information transmitting device 100 .
- step ST102 the image quality adjustment unit 130 performs various image quality adjustments such as hue adjustment by AWB, exposure correction by AE, focus adjustment by AF, and sharpness adjustment.
- the image quality adjustment unit 130 sends the image data for which the image quality has been adjusted to the image data storage unit 125 .
- step ST103 the video data storage unit 125 temporarily stores the video data sent from the image quality adjustment unit 130.
- the deep learning inference processing unit 120 acquires the video data stored in the video data storage unit 125 and performs deep learning inference processing on the acquired video data using a trained neural network.
- through this inference processing, it is detected that a person has appeared in the video data, that the appearing person is moving within the video, and that the appearing person has exited from the video data.
- the deep learning inference processing unit 120 refers to the detection area table 122 and transmits to the video analysis data creation unit 115 information in which the appearance information or exit information of a person and the area information of the detection area are associated.
- in step ST105, the video analysis data creation unit 115 creates video analysis data from the detection information transmitted from the deep learning inference processing unit 120, the camera linkage information obtained by referring to the camera linkage table 160, and the name of its own camera as the transmission source.
- the video analysis data creation unit 115 transmits the video analysis data to the transmission control unit 110 .
- step ST106 the transmission control section 110 transmits the video analysis data received from the video analysis data creation section 115 to the communication network NW.
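Steps ST101 through ST106 on the sending side can be summarized as a short pipeline: detect an event, look up the linked camera, build the video analysis data, and hand it to the transmission control unit. The sketch below is a hypothetical illustration; the table format, field names, and the stub standing in for the deep learning inference are all assumptions.

```python
# Hypothetical end-to-end sketch of the sender side (ST101-ST106):
# detection -> camera linkage lookup -> video analysis data.
CAMERA_LINKAGE_TABLE = {  # detection area -> linked camera ID (assumed format)
    1: "camera_D",
    2: "camera_B",
}

def detect_events(frame) -> list[dict]:
    """Stand-in for the deep learning inference (ST104)."""
    # A real system would run the trained neural network here and report
    # appearance, movement, and exit events per detection area.
    return [{"event": "appearance", "area": 1, "person_id": "ZZZZZZZZ"}]

def create_video_analysis_data(own_camera: str, frame) -> list[dict]:
    """Build the analysis records to be sent to the network (ST105-ST106)."""
    records = []
    for det in detect_events(frame):
        records.append({
            "source_camera": own_camera,       # own camera as transmission source
            "linked_camera": CAMERA_LINKAGE_TABLE[det["area"]],
            "detection_area": det["area"],
            "event": det["event"],
            "person_id": det["person_id"],
        })
    return records

data = create_video_analysis_data("camera_A", frame=None)
```

Each record carries the three pieces of information the patent names for the video analysis data: the detection area, the linked camera's identification, and the source camera's identification.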
- step ST111 the reception control unit 250 receives the video analysis data transmitted by the monitoring camera 10 via the communication network NW, and transmits the received video analysis data to the video analysis data analysis unit 255.
- the video analysis data analysis unit 255 refers to the camera linkage table 260 and the video change table 265 to analyze the video analysis data transmitted by the reception control unit 250, and makes a video control request to the video main control unit 271 based on the analysis results.
- the video change table 265 defines a non-color condition for changing at least one parameter related to imaging by the monitoring camera 20, and the change content of the at least one parameter when the non-color condition is satisfied.
- the non-color condition includes a condition regarding the case where the number of moving objects within the angle of view of the monitoring camera 20 is plural. Examples of parameters include resolution, bit rate, sharpness, Q value, and angle of view.
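One way to read the video change table is as a list of (condition, changes) pairs that the analysis unit evaluates against incoming analysis data. The following is an illustrative sketch under that assumption; the data layout and names are hypothetical, not the patent's format.

```python
# Hypothetical table-driven evaluation of the video change table:
# each entry pairs a non-color condition with parameter changes.
VIDEO_CHANGE_TABLE = [
    {
        # Condition: multiple moving objects within the angle of view.
        "condition": lambda d: d.get("num_moving_objects", 0) > 1,
        "changes": {"bitrate": "increase", "q_value": "increase",
                    "angle_of_view": "zoom_out"},
    },
]

def evaluate(analysis_data: dict) -> dict:
    """Return the merged parameter changes whose conditions are met."""
    changes = {}
    for entry in VIDEO_CHANGE_TABLE:
        if entry["condition"](analysis_data):
            changes.update(entry["changes"])
    return changes

request = evaluate({"num_moving_objects": 3})  # would drive a video control request
```

If no condition matches, an empty change set is returned and no video control request needs to be issued.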
- the video control unit 270 receives the video control request transmitted from the video analysis data analysis unit 255 and, according to its content, controls the video captured by the monitoring camera 20 or parameters related to the captured video. That is, the video control unit 270 performs control so as to change the parameter values according to the content of the video control request. More specifically, the video main control unit 271 of the video control unit 270 receives the video control request transmitted from the video analysis data analysis unit 255 and, according to its content, transmits an image quality adjustment request to the image quality adjustment unit 272, a lens control request to the lens control unit 273, and a video encoding change request to the video encoding unit 274.
- the image quality adjustment unit 272 executes the image processing request requested by the image main control unit 271 on the imaging data imaged by the imaging unit 235 .
- the video encoding unit 274 performs video encoding on the video data whose image quality has been adjusted by the image quality adjustment unit 272.
- the lens control section 273 receives a lens control request from the video main control section 271 .
- the lens control unit 273 requests the lens 280 to perform lens control.
- the lens 280 receives a lens control request from the lens control unit 273 and changes the zoom magnification of the lens.
- the surveillance camera information transmitting device, the surveillance camera information receiving device, or the surveillance camera system that uses the number of moving objects as a feature, as described above, can be widely used in systems that detect the movement of people. For example, it can be used in a system that monitors the landing of an escalator that starts operating when a person approaches. In such an escalator monitoring system, the information receiving device of the monitoring camera receives the operation signal of the escalator, making it possible to adjust in advance the image quality of the camera monitoring the escalator landing area.
- the information receiving device 200 for the surveillance camera described above can also be used in a system for monitoring elevator boarding and alighting areas.
- the information receiving device of the monitoring camera receives the stop floor signal of the elevator, thereby making it possible to adjust in advance the image quality of the camera monitoring the boarding area of the elevator stop floor.
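The elevator example amounts to an event handler: an external stop-floor signal triggers a quality pre-adjustment for the camera that covers that floor's boarding area. The sketch below is a hypothetical illustration; the floor-to-camera mapping and the handler name are assumptions.

```python
# Hypothetical handler: on an elevator stop-floor signal, pre-adjust the
# camera that watches the boarding area of that floor.
from typing import Optional

FLOOR_CAMERA = {1: "lobby_cam", 5: "floor5_cam"}  # assumed mapping

def on_elevator_stop(floor: int, adjustments: dict) -> Optional[str]:
    """Record a quality boost for the camera covering the stop floor."""
    camera = FLOOR_CAMERA.get(floor)
    if camera is not None:
        # Raise quality before the doors open and people walk into view.
        adjustments[camera] = {"bitrate": "increase", "q_value": "increase"}
    return camera

adjustments: dict = {}
on_elevator_stop(5, adjustments)
```

The same pattern applies to the escalator example: the operation signal plays the role of the stop-floor signal.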
- the monitoring camera system 30 including the information transmitting device 100 and the information receiving device 200 of the monitoring camera described above can also be used as a system for monitoring moving objects other than people, such as automobiles, automated transport devices, and drones.
- the surveillance camera information transmission device 100 can detect the appearance of a person, the movement of a person within an image, and the exit of a person through deep learning.
- by configuring the information transmitting device 100 of the monitoring camera to transmit video analysis data based on these detections to the communication network NW, the information receiving device 200 of the monitoring camera that receives the video analysis data can change the image quality to suit the number of people appearing in the angle of view and their appearance positions, before those people appear within the angle of view of the monitoring camera connected to the information receiving device 200. As a result, a person can be captured clearly from the moment the person enters the shooting angle of view.
- Embodiment 2: Embodiment 1 described above disclosed an example of the video change table 265 that adjusts the video of the surveillance camera according to the number of people within the angle of view; the surveillance camera system may instead be modified so as to adjust the video of the surveillance camera according to the moving speed of a person.
- in Embodiment 2, the trained neural network held by the deep learning inference processing unit 120 of the information transmitting device 100 of the surveillance camera is further pre-trained so that it can also detect that a person is moving at high speed, from the size of the person within the angle of view and the amount of movement per unit time. The deep learning inference processing unit 120 then performs deep learning inference processing on the video data acquired from the video data storage unit 125 and thereby detects that a person is moving at high speed, in addition to detecting, as in Embodiment 1, the appearance of a person in the video data, the movement of an appearing person within the video data, and the exit of an appearing person from the video data. The deep learning inference processing unit 120 transmits this detection information to the video analysis data creation unit 115.
- the video analysis data analysis unit 255 of the information receiving device 200 of the surveillance camera refers to, for example, the video change table shown in FIG. 12.
- in condition 1 of the video change table, a setting change condition is stipulated: a person moving at high speed enters the angle of view. Since the average walking speed of an adult is about 1.3 [m/s], any value of 1.4 [m/s] or more may be used as the criterion for judging whether a walking speed is high.
- the setting change when condition 1 is satisfied is defined as follows: the resolution remains the same, the bit rate is increased, the sharpness remains the same, the Q value is increased, the angle of view is zoomed out, and the frame rate is increased.
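The speed criterion for condition 1 can be checked directly from the movement amount per unit time. The following is an illustrative sketch; the function name and the assumption that displacement is already converted to meters are hypothetical.

```python
# Hypothetical speed check for condition 1: flag a person whose estimated
# speed exceeds 1.4 m/s (the average adult walking speed is about 1.3 m/s).
def is_moving_fast(displacement_m: float, elapsed_s: float,
                   threshold_mps: float = 1.4) -> bool:
    """Estimate speed from the movement amount per unit time."""
    if elapsed_s <= 0:
        return False
    return displacement_m / elapsed_s >= threshold_mps

# A person covering 0.75 m over 0.5 s moves at 1.5 m/s, above the threshold.
fast = is_moving_fast(0.75, 0.5)
```

A real system would derive `displacement_m` from the tracked person's position change and the person's apparent size within the angle of view, as the deep learning inference described above does.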
- the video change table of FIG. 12 may be integrated with, or kept separate from, the video change table 265 of Embodiment 1.
- as described above, the information transmitting device 100 of the surveillance camera can detect a person moving at high speed by deep learning and transmits the video analysis data to the communication network NW. The information receiving device 200 of the monitoring camera that receives the video analysis data can change the image quality to suit a fast-moving person before that person appears within the angle of view of the monitoring camera connected to the information receiving device 200. As a result, the person can be captured clearly from the moment the person enters the shooting angle of view.
- a surveillance camera information receiving device (200) according to Supplementary Note 1 includes: a reception control unit (250) that receives, via a communication network, video analysis data that is analysis data of video captured by another surveillance camera (10); a video analysis data analysis unit (255) that refers to a camera linkage table (260) and a video change table (265) to analyze the received video analysis data and, based on the analysis result, makes a video control request for its own monitoring camera (20); and a video control unit (270) that changes at least one parameter related to shooting by the own monitoring camera according to the video control request. In the camera linkage table, the detection areas of the surveillance cameras connected to the communication network and the identification information of the surveillance camera linked to each detection area are registered. The video change table defines a non-color condition for changing the at least one parameter and the change content of the at least one parameter when the non-color condition is satisfied; the non-color condition includes a first condition relating to the case where the number of moving objects within the angle of view of the own monitoring camera is plural, or a second condition relating to the speed of a moving object within the angle of view of the own monitoring camera. When the received video analysis data satisfies the non-color condition, the video analysis data analysis unit makes the video control request according to the change content.
- the surveillance camera information receiving device according to Supplementary Note 2 is the device of Supplementary Note 1, wherein the video control unit includes: a video main control unit (271) that receives the video control request and transmits an image quality adjustment request requesting image quality adjustment, an angle-of-view control request requesting angle-of-view control, or a video encoding change request requesting a change of video encoding; an image quality adjustment unit (272) that receives the image quality adjustment request and adjusts the image quality; a lens control unit (273) that receives the angle-of-view control request and adjusts the angle of view; and a video encoding unit (274) that receives the video encoding change request and changes the video encoding method.
- a surveillance camera information transmitting device (100) according to Supplementary Note 3 includes: a deep learning inference processing unit (120) that holds a neural network pre-trained to detect the presence or movement of a moving object and performs deep learning inference processing on imaging data captured by a surveillance camera (10), the deep learning inference processing unit referring to a detection area table holding area information within the angle of view to detect the appearance, movement, or exit of the moving object in a detection area within the video data and output detection information; a video analysis data creation unit (115) that refers to a camera linkage table, in which the detection areas of surveillance cameras and the identification information of the surveillance camera linked to each detection area are registered, to create video analysis data containing the detection area in which the appearance, movement, or exit of the moving object included in the detection information was detected, the identification information of another monitoring camera (20) linked to that detection area, and the identification information of its own monitoring camera (10); and a transmission control unit (110) that transmits the created video analysis data to a communication network.
- the surveillance camera information transmitting device according to Supplementary Note 4 is the device of Supplementary Note 3, wherein the deep learning inference processing unit can detect, by performing the deep learning inference processing, that a moving object is moving at a speed of 1.4 [m/s] or more.
- a surveillance camera system (30) according to Supplementary Note 5 includes the surveillance camera information receiving device of Supplementary Note 1 or 2 and the surveillance camera information transmitting device of Supplementary Note 3 or 4.
- a surveillance camera information receiving method according to Supplementary Note 6 includes: a step (ST111) of receiving, via a communication network, video analysis data that is analysis data of video captured by another surveillance camera (10); a step (ST112) of referring to a camera linkage table and a video change table to analyze the received video analysis data and, when the received video analysis data satisfies the non-color condition described below, make a video control request for the own monitoring camera (20) according to the change content; and a step (ST113) of changing at least one parameter related to shooting by the own monitoring camera according to the video control request. In the camera linkage table, the detection areas of the surveillance cameras connected to the communication network and the identification information of the surveillance camera linked to each detection area are registered. The video change table defines a non-color condition for changing the at least one parameter and the change content of the at least one parameter when the non-color condition is satisfied, and the non-color condition includes a first condition relating to the case where the number of moving objects within the angle of view of the own monitoring camera is plural, or a second condition relating to the speed of a moving object within the angle of view of the own monitoring camera.
- the information receiving device for a surveillance camera can be installed in a surveillance camera and used as a surveillance camera.
- 10 Surveillance camera, 20 Surveillance camera, 30 Surveillance camera system, 100 Information transmitting device, 110 Transmission control unit, 115 Video analysis data creation unit, 120 Deep learning inference processing unit, 122 Detection area table, 125 Video data storage unit, 130 Image quality adjustment unit, 135 Imaging unit, 160 Camera linkage table, 200 Information receiving device, 235 Imaging unit, 250 Reception control unit, 255 Video analysis data analysis unit, 260 Camera linkage table, 265 Video change table, 270 Video control unit, 271 Video main control unit, 272 Image quality adjustment unit, 273 Lens control unit, 274 Video encoding unit, 280 Lens, 301 Processor, 302 Memory, 303 Processing circuit.
Abstract
Description
A reception control unit that receives, via a communication network, video analysis data that is analysis data of video captured by another surveillance camera;
a video analysis data analysis unit that refers to a camera linkage table and a video change table to analyze the received video analysis data and, based on the analysis result, makes a video control request for its own monitoring camera; and
a video control unit that changes at least one parameter related to shooting by the own monitoring camera according to the video control request,
are provided.
In the camera linkage table, the detection areas of the surveillance cameras connected to the communication network and the identification information of the surveillance camera linked to each detection area are registered.
The video change table defines a non-color condition for changing the at least one parameter and the change content of the at least one parameter when the non-color condition is satisfied; the non-color condition includes a first condition relating to the case where the number of moving objects within the angle of view of the own monitoring camera is plural, or a second condition relating to the speed of a moving object within the angle of view of the own monitoring camera.
When the received video analysis data satisfies the non-color condition, the video analysis data analysis unit makes the video control request according to the change content.
<Configuration>
(Surveillance camera system)
The system configuration of the surveillance camera system 30 according to Embodiment 1 will be described with reference to FIG. 1. FIG. 1 is a system configuration diagram of the surveillance camera system 30 according to Embodiment 1. As shown in FIG. 1, the surveillance camera system 30 includes a surveillance camera 10 and a surveillance camera 20, which are connected via a communication network NW. The surveillance camera 10 has a function of analyzing captured video by deep learning and can determine from the video V1 that a moving object such as a person or a drone has entered the angle of view of its imaging unit. The surveillance camera 10 can also track the entering person and determine that the person has left the angle of view. The video analysis data analyzed by the surveillance camera 10 is transmitted to the communication network NW; the surveillance camera 20 receives and analyzes that data, and can thereby change the operating conditions related to the image quality of the video V2 that it captures.
Next, a configuration example of the surveillance camera 10 and the surveillance camera 20 constituting the surveillance camera system 30 will be described with reference to FIG. 2. FIG. 2 is a block diagram showing a configuration example of the surveillance camera 10 and the surveillance camera 20 constituting the surveillance camera system 30. As shown in FIG. 2, the surveillance camera 10 takes visible light Y as input, performs predetermined signal processing on the data obtained from the visible light Y, and outputs a data series Z. Also as shown in FIG. 2, the surveillance camera 20 takes a data series W from the communication network NW as input, performs predetermined signal processing on the data series W, and takes visible light X as input.
Next, the configuration of the surveillance camera 20 will be described. As shown in FIG. 2, the surveillance camera 20 includes an information receiving device 200 for the surveillance camera 20, a camera linkage table 260, a video change table 265, an imaging unit 235, and a lens 280. The information receiving device 200 includes a reception control unit 250, a video analysis data analysis unit 255, and a video control unit 270. The video control unit 270 includes a video main control unit 271, an image quality adjustment unit 272, a lens control unit 273, and a video encoding unit 274. Each component of the surveillance camera 20 and the information receiving device 200 will be described below with reference to FIG. 2.
Next, the operation of the surveillance camera 10 and the information transmitting device 100 of the surveillance camera 10 will be described with reference to FIG. 10.
Embodiment 1 described above disclosed an example of the video change table 265 that adjusts the video of the surveillance camera according to the number of people within the angle of view; the surveillance camera system may instead be modified so as to adjust the video of the surveillance camera according to the moving speed of a person.
Some aspects of the various embodiments described above are summarized below.
(Supplementary Note 1)
A surveillance camera information receiving device (200) according to Supplementary Note 1 includes: a reception control unit (250) that receives, via a communication network, video analysis data that is analysis data of video captured by another surveillance camera (10); a video analysis data analysis unit (255) that refers to a camera linkage table (260) and a video change table (265) to analyze the received video analysis data and, based on the analysis result, makes a video control request for its own monitoring camera (20); and a video control unit (270) that changes at least one parameter related to shooting by the own monitoring camera according to the video control request. In the camera linkage table, the detection areas of the surveillance cameras connected to the communication network and the identification information of the surveillance camera linked to each detection area are registered. The video change table defines a non-color condition for changing the at least one parameter and the change content of the at least one parameter when the non-color condition is satisfied; the non-color condition includes a first condition relating to the case where the number of moving objects within the angle of view of the own monitoring camera is plural, or a second condition relating to the speed of a moving object within the angle of view of the own monitoring camera. When the received video analysis data satisfies the non-color condition, the video analysis data analysis unit makes the video control request according to the change content.
(Supplementary Note 2)
The surveillance camera information receiving device according to Supplementary Note 2 is the device of Supplementary Note 1, wherein the video control unit includes: a video main control unit (271) that receives the video control request and transmits an image quality adjustment request requesting image quality adjustment, an angle-of-view control request requesting angle-of-view control, or a video encoding change request requesting a change of video encoding; an image quality adjustment unit (272) that receives the image quality adjustment request and adjusts the image quality; a lens control unit (273) that receives the angle-of-view control request and adjusts the angle of view; and a video encoding unit (274) that receives the video encoding change request and changes the video encoding method.
(Supplementary Note 3)
A surveillance camera information transmitting device (100) according to Supplementary Note 3 includes: a deep learning inference processing unit (120) that holds a neural network pre-trained to detect the presence or movement of a moving object and performs deep learning inference processing on imaging data captured by a surveillance camera (10), the deep learning inference processing unit referring to a detection area table holding area information within the angle of view to detect the appearance, movement, or exit of the moving object in a detection area within the video data and output detection information; a video analysis data creation unit (115) that refers to a camera linkage table, in which the detection areas of surveillance cameras and the identification information of the surveillance camera linked to each detection area are registered, to create video analysis data containing the detection area in which the appearance, movement, or exit of the moving object included in the detection information was detected, the identification information of another monitoring camera (20) linked to that detection area, and the identification information of its own monitoring camera (10); and a transmission control unit (110) that transmits the created video analysis data to a communication network.
(Supplementary Note 4)
The surveillance camera information transmitting device according to Supplementary Note 4 is the device of Supplementary Note 3, wherein the deep learning inference processing unit can detect, by performing the deep learning inference processing, that a moving object is moving at a speed of 1.4 [m/s] or more.
(Supplementary Note 5)
A surveillance camera system (30) according to Supplementary Note 5 includes the surveillance camera information receiving device of Supplementary Note 1 or 2 and the surveillance camera information transmitting device of Supplementary Note 3 or 4.
(Supplementary Note 6)
A surveillance camera information receiving method according to Supplementary Note 6 includes: a step (ST111) of receiving, via a communication network, video analysis data that is analysis data of video captured by another surveillance camera (10); a step (ST112) of referring to a camera linkage table and a video change table to analyze the received video analysis data and, when the received video analysis data satisfies the non-color condition described below, make a video control request for the own monitoring camera (20) according to the change content; and a step (ST113) of changing at least one parameter related to shooting by the own monitoring camera according to the video control request. In the camera linkage table, the detection areas of the surveillance cameras connected to the communication network and the identification information of the surveillance camera linked to each detection area are registered. The video change table defines a non-color condition for changing the at least one parameter and the change content of the at least one parameter when the non-color condition is satisfied, and the non-color condition includes a first condition relating to the case where the number of moving objects within the angle of view of the own monitoring camera is plural, or a second condition relating to the speed of a moving object within the angle of view of the own monitoring camera.
Claims (6)
- An information receiving device for a surveillance camera, comprising:
a reception control unit that receives, via a communication network, video analysis data that is analysis data of video captured by another surveillance camera;
a video analysis data analysis unit that refers to a camera linkage table and a video change table to analyze the received video analysis data and, based on the analysis result, makes a video control request for its own monitoring camera; and
a video control unit that changes at least one parameter related to shooting by the own monitoring camera according to the video control request,
wherein, in the camera linkage table, the detection areas of the surveillance cameras connected to the communication network and the identification information of the surveillance camera linked to each detection area are registered,
the video change table defines a non-color condition for changing the at least one parameter and the change content of the at least one parameter when the non-color condition is satisfied, the non-color condition including a first condition relating to the case where the number of moving objects within the angle of view of the own monitoring camera is plural, or a second condition relating to the speed of a moving object within the angle of view of the own monitoring camera, and
the video analysis data analysis unit makes the video control request according to the change content when the received video analysis data satisfies the non-color condition. - The information receiving device for a surveillance camera according to claim 1, wherein the video control unit comprises: a video main control unit that receives the video control request and transmits an image quality adjustment request requesting image quality adjustment, an angle-of-view control request requesting angle-of-view control, or a video encoding change request requesting a change of video encoding;
an image quality adjustment unit that receives the image quality adjustment request and adjusts the image quality;
a lens control unit that receives the angle-of-view control request and adjusts the angle of view; and
a video encoding unit that receives the video encoding change request and changes the video encoding method. - An information transmitting device for a surveillance camera, comprising:
a deep learning inference processing unit that holds a neural network pre-trained to detect the presence or movement of a moving object and performs deep learning inference processing on imaging data captured by a surveillance camera, the deep learning inference processing unit referring to a detection area table holding area information within the angle of view to detect the appearance, movement, or exit of the moving object in a detection area within the video data and output detection information;
a video analysis data creation unit that refers to a camera linkage table, in which the detection areas of surveillance cameras and the identification information of the surveillance camera linked to each detection area are registered, to create video analysis data containing the detection area in which the appearance, movement, or exit of the moving object included in the detection information was detected, the identification information of another monitoring camera linked to that detection area, and the identification information of its own monitoring camera; and
a transmission control unit that transmits the created video analysis data to a communication network. - The information transmitting device for a surveillance camera according to claim 3, wherein the deep learning inference processing unit can detect, by performing the deep learning inference processing, that a moving object is moving at a speed of 1.4 [m/s] or more.
- A surveillance camera system comprising the information receiving device for a surveillance camera according to claim 1 or 2, and the information transmitting device for a surveillance camera according to claim 3 or 4. - An information receiving method for a surveillance camera, comprising:
a step of receiving, via a communication network, video analysis data that is analysis data of video captured by another surveillance camera;
a step of referring to a camera linkage table and a video change table to analyze the received video analysis data and make a video control request for the own monitoring camera based on the analysis result, wherein, in the camera linkage table, the detection areas of the surveillance cameras connected to the communication network and the identification information of the surveillance camera linked to each detection area are registered, the video change table defines a non-color condition for changing at least one parameter related to shooting by the own monitoring camera and the change content of the at least one parameter when the non-color condition is satisfied, the non-color condition includes a first condition relating to the case where the number of moving objects within the angle of view of the own monitoring camera is plural, or a second condition relating to the speed of a moving object within the angle of view of the own monitoring camera, and the video control request is made according to the change content when the received video analysis data satisfies the non-color condition; and
a step of changing the at least one parameter according to the video control request.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/007232 WO2022180756A1 (ja) | 2021-02-26 | 2021-02-26 | 監視カメラの情報送信装置、監視カメラの情報受信装置、及び監視カメラシステム、並びに監視カメラの情報受信方法 |
GB2311370.7A GB2618457A (en) | 2021-02-26 | 2021-02-26 | Monitoring camera information transmitting device, monitoring camera information receiving device, monitoring camera system, and monitoring camera information |
JP2023501933A JPWO2022180756A1 (ja) | 2021-02-26 | 2021-02-26 | |
US18/274,213 US20240098224A1 (en) | 2021-02-26 | 2021-02-26 | Monitoring camera information transmitting device, monitoring camera information receiving device, monitoring camera system, and monitoring camera information receiving method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022180756A1 true WO2022180756A1 (ja) | 2022-09-01 |
Family
ID=83048898
Country Status (4)
Country | Link |
---|---|
US (1) | US20240098224A1 (ja) |
JP (1) | JPWO2022180756A1 (ja) |
GB (1) | GB2618457A (ja) |
WO (1) | WO2022180756A1 (ja) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006295604A (ja) * | 2005-04-12 | 2006-10-26 | Matsushita Electric Ind Co Ltd | 監視カメラ及び監視システムの制御方法 |
JP2006310901A (ja) * | 2005-04-26 | 2006-11-09 | Victor Co Of Japan Ltd | 監視システム及び監視方法 |
JP2007135093A (ja) * | 2005-11-11 | 2007-05-31 | Sony Corp | 映像監視システム及び方法 |
JP2018006910A (ja) * | 2016-06-29 | 2018-01-11 | キヤノン株式会社 | 撮像装置、撮像装置の制御方法およびプログラム |
Also Published As
Publication number | Publication date |
---|---|
GB202311370D0 (en) | 2023-09-06 |
US20240098224A1 (en) | 2024-03-21 |
GB2618457A (en) | 2023-11-08 |
JPWO2022180756A1 (ja) | 2022-09-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8044992B2 (en) | Monitor for monitoring a panoramic image | |
CN109698905B (zh) | 控制设备、摄像设备、控制方法和计算机可读存储介质 | |
US8451329B2 (en) | PTZ presets control analytics configuration | |
US20200236290A1 (en) | Image-capturing apparatus | |
CN111107276B (zh) | 信息处理设备及其控制方法、存储介质以及摄像系统 | |
JP2009177472A (ja) | 画像処理方法、画像処理装置及び撮像装置 | |
JP2011130271A (ja) | 撮像装置および映像処理装置 | |
CN108810400B (zh) | 控制设备、控制方法和记录介质 | |
JP2006087083A (ja) | 撮像装置及び撮像装置の制御方法 | |
JP2011130271A5 (ja) | ||
KR101591396B1 (ko) | 촬상 장치, 통신 방법 및 기억 매체 및 통신 시스템 | |
JP2013223104A (ja) | カメラおよびカメラシステム | |
JP2007067510A (ja) | 映像撮影システム | |
WO2022180756A1 (ja) | 監視カメラの情報送信装置、監視カメラの情報受信装置、及び監視カメラシステム、並びに監視カメラの情報受信方法 | |
CN114697528A (zh) | 图像处理器、电子设备及对焦控制方法 | |
JP5256060B2 (ja) | 撮像装置 | |
US20120188437A1 (en) | Electronic camera | |
KR20120046509A (ko) | 카메라 초점조절 장치와 방법 | |
US9807311B2 (en) | Imaging apparatus, video data transmitting apparatus, video data transmitting and receiving system, image processing method, and program | |
JP7250433B2 (ja) | 撮像装置、制御方法及びプログラム | |
US11838634B2 (en) | Method of generating a digital video image using a wide-angle field of view lens | |
CN117280708A (zh) | 利用基于ai的对象识别的监控摄像机的快门值调节 | |
KR102148749B1 (ko) | 영상 감시 네트워크 시스템 및 그의 영상 감시 방법 | |
US20190052804A1 (en) | Image capturing device and control method | |
JP4438396B2 (ja) | 監視装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21927858 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2023501933 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 202311370 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20210226 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18274213 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21927858 Country of ref document: EP Kind code of ref document: A1 |