WO2019244360A1 - Measurement information processing device - Google Patents

Measurement information processing device

Info

Publication number
WO2019244360A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
scene
measurement
unit
scene information
Prior art date
Application number
PCT/JP2018/023902
Other languages
French (fr)
Japanese (ja)
Inventor
聡 笹谷
亮祐 三木
秀行 粂
誠也 伊藤
Original Assignee
株式会社日立製作所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立製作所 filed Critical 株式会社日立製作所
Priority to JP2020525216A priority Critical patent/JP6959444B2/en
Priority to PCT/JP2018/023902 priority patent/WO2019244360A1/en
Publication of WO2019244360A1 publication Critical patent/WO2019244360A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/254 - Analysis of motion involving subtraction of images
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • the present invention relates to a measurement information processing apparatus, and is suitably applied to, for example, a measurement information processing apparatus that processes measurement information acquired by a measurement apparatus.
  • There is a growing need for object detection technology that detects objects based on information (measurement information) acquired by measurement devices.
  • As measurement devices, surveillance cameras, distance sensors, laser radars, infrared tags, and the like are widely used.
  • If surveillance cameras could be installed evenly in the right places, the measurement accuracy required by the user could be achieved. In practice, however, the number of surveillance cameras that can be used, the achievable object detection performance, and so on are limited by cost, the installation environment, and the like, and the expected measurement accuracy cannot be obtained. For this reason, a technician or the like often has to go to the site and spend a long time adjusting parameters, such as the threshold values the surveillance camera uses for object detection, in order to improve measurement accuracy.
  • Although the technique of Patent Document 1 can automatically optimize person-tracking parameters, when face detection itself is difficult because of the camera installation environment, the accuracy of the acquired true values is low and estimating the optimal parameters becomes difficult.
  • In addition, since scenes in which a face can be detected are selected, the learning scenes may be limited to scenes in which a person is near the camera or scenes capturing a person's frontal face, raising the concern that parameters effective for a wide variety of scenes cannot be obtained.
  • the present invention has been made in view of the above points, and is intended to propose a measurement information processing apparatus capable of appropriately obtaining scene information used for optimizing parameters of a measurement apparatus.
  • To solve this problem, the present invention provides a measurement information processing device that processes measurement information acquired by measurement devices. The device includes a scene information extraction unit that, based on the measurement information acquired by a measurement device, extracts time information indicating when the measurement information was acquired and specific information related to specifying an object within the measurement range of the measurement device, and that generates scene information including the measurement information, the time information, and the specific information. The device also includes a parameter optimization unit that, based on the time information of the generated scene information and the arrangement information of the measurement devices, specifies second scene information acquired by a second measurement device that is associated with first scene information acquired by a first measurement device, and that optimizes a parameter related to specifying an object in the first measurement device based on the first scene information and the second scene information.
  • In the above configuration, the first scene information acquired by the first measurement device and the associated second scene information acquired by the second measurement device are both used to optimize the parameters related to specifying an object in the first measurement device. According to such a configuration, for example, by adding the associated second scene information to the first scene information when optimizing the parameters of the first measurement device, the scene information used for optimization can be obtained appropriately and the accuracy of the parameters can be improved.
  • According to the present invention, the accuracy of the parameters of the measurement device can be improved.
  • FIG. 1 is a diagram illustrating an example of a configuration of an image processing system according to a first embodiment.
  • FIG. 2 is a diagram illustrating an example of a configuration of a scene information extraction unit according to the first embodiment.
  • FIG. 3 is a diagram illustrating an example of a configuration of a scene information changing unit according to the first embodiment.
  • FIG. 4 is a diagram for describing analysis information according to the first embodiment.
  • FIG. 5 is a diagram for describing scene analysis information and scene information according to the first embodiment.
  • FIG. 6 is a diagram illustrating an example of a processing procedure in the image processing device according to the first embodiment.
  • FIG. 7 is a diagram for describing a scene information combining unit according to the first embodiment.
  • FIG. 8 is a diagram illustrating an example of a configuration of an image processing system according to a second embodiment.
  • FIG. 9 is a diagram illustrating an example of a configuration of a learning scene changing unit according to the second embodiment.
  • In FIG. 1, reference numeral 100 denotes an image processing system according to the first embodiment as a whole.
  • the image processing system 100 is a system that includes an image processing device 110 and a plurality of cameras 120 (imaging devices), and automatically adjusts parameters for specifying an object in the cameras 120 for each camera 120.
  • In the present embodiment, the camera 120 will be described as an example of the measurement device.
  • the present invention is not limited to the camera 120, and can be applied to other measurement devices such as a distance sensor, a laser radar, and an infrared tag.
  • The parameters related to specifying an object in the camera 120 include, without particular limitation, internal parameters of the camera 120, external parameters of the camera 120, threshold parameters used for object detection, and threshold parameters related to applications such as object tracking. In the following, specifying an object is described in terms of detecting an object (object detection), but tracking of the object may be performed instead.
  • The image processing device 110 is an example of a measurement information processing device, such as a computer, and processes digital image data (an example of measurement information) acquired by the camera 120.
  • The image processing device 110 is a notebook personal computer, a server device, or the like, and includes a CPU (Central Processing Unit), a random access memory (RAM), a read only memory (ROM), a hard disk drive (HDD), a communication unit, and so on.
  • The functions of the image processing device 110 are realized, for example, by the CPU reading a program stored in the ROM into the RAM and executing it (software); they may instead be realized by hardware such as a dedicated circuit, or by a combination of software and hardware. Some of the functions of the image processing device 110 may also be realized by another computer that can communicate with the image processing device 110 (the camera 120 or another computer not shown).
  • The scene information 160 includes the digital image data acquired by the camera 120, the imaging time at which the digital image data was acquired (an example of time information), and a true value.
  • The details of the scene information 160 will be described later with reference to FIGS. 4 and 5.
  • the camera arrangement information 170 is information that can specify the positional relationship between the cameras 120.
  • the camera arrangement information 170 includes position information indicating the position of the camera 120.
  • As the position information, any information capable of specifying the positional relationship between the cameras 120 can be adopted as appropriate; it may be longitude and latitude information, or coordinate information on a floor map indicating the floor where the camera 120 is installed.
  • The camera arrangement information 170 may include camera direction information indicating the direction of the camera 120. Alternatively, the camera direction information need not be included in the camera arrangement information 170 and may be estimated from the digital image data as necessary.
  • the image acquisition unit 130 acquires digital image data from the camera 120.
  • the scene information extracting unit 140 generates the scene information 160 from the digital image data acquired by the image acquiring unit 130 using the image feature amount.
  • the parameter optimizing unit 150 includes a scene information changing unit 151, a learning scene selecting unit 152, a parameter calculating unit 153, an output control unit 154, and an input control unit 155, and optimizes parameters of each camera 120.
  • the scene information changing unit 151 changes the scene information 160 for each camera 120 based on the scene information 160 and the camera arrangement information 170 of the cameras 120.
  • the learning scene selecting unit 152 selects the scene information 160 for optimizing the parameters of the camera 120 from the scene information 160 changed by the scene information changing unit 151 as a learning scene.
  • the parameter calculation unit 153 optimizes the parameters of each camera 120 using the learning scene selected by the learning scene selection unit 152.
  • The output control unit 154 outputs information related to the parameters (setting the optimized parameters in the camera 120, information indicating that the object detection accuracy of the camera 120 is low, and the like) to the output device 180 (for example, the camera 120, another computer, or a printer).
  • the input control unit 155 inputs information related to the parameter (for example, a true value specified by the user).
  • digital image data acquired by the image acquisition unit 130 from the camera 120 is referred to as a captured image.
  • FIG. 2 is a diagram showing an example of a configuration relating to the scene information extraction unit 140.
  • the scene information extraction unit 140 includes a scene analysis unit 210 and a scene reliability calculation unit 220.
  • the scene analysis unit 210 acquires analysis information by executing an analysis process using an image feature amount or the like on the captured image acquired by the image acquisition unit 130.
  • The scene reliability calculation unit 220 calculates the reliability of the analysis information obtained by the scene analysis unit 210, generates scene analysis information including the analysis information and the reliability, and generates (creates) the scene information 160 including the captured image and the scene analysis information.
  • the analysis information will be described later with reference to FIG.
  • the scene analysis information and the scene information 160 will be described later with reference to FIG.
  • FIG. 3 is a diagram showing an example of a configuration relating to the scene information changing unit 151.
  • the scene information changing unit 151 includes a scene information combining unit 310 and a scene analysis information correcting unit 320.
  • the scene information combining unit 310 combines the scene information 160 of each camera 120 based on the camera arrangement information 170.
  • the scene analysis information correcting unit 320 changes the scene information 160 (for example, scene analysis information) of each camera 120 by comparing the scene information 160 combined by the scene information combining unit 310.
  • the process of the scene information changing unit 151 will be described later with reference to FIG.
  • FIG. 4 is a diagram for explaining the analysis information.
  • The captured images 410 and 420 shown in FIG. 4 are captured images acquired at different times by the image acquisition unit 130 in FIG. 1.
  • the moving objects 411, 412, 421, and 422 indicate persons to be detected by the camera 120.
  • the moving object refers to an object that does not exist within the imaging range of the camera 120 at the time of starting the measurement, but exists within the imaging range of the camera 120 after the measurement starts.
  • the fixed objects 413 and 423 indicate objects that are not detected by the camera 120.
  • the moving body region images 430 and 440 are images showing moving body regions 431, 432, and 441, which are regions of the moving bodies 411, 412, 421, and 422 in the captured images 410 and 420.
  • the analysis information 450 indicates information acquired by the scene analysis unit 210.
  • the analysis information 450 includes information on the true value and the imaging time. At least one of the congestion degree, the number of moving object regions, and the information of the moving object region details may or may not be included in the analysis information 450.
  • the true value is the number of detection targets in the captured image.
  • The true value is a value obtained by estimating the number of detection targets; it may be the result of object detection performed by the camera 120 using initial parameters, or the number of moving body regions 431, 432, and 441 existing in the moving body region images 430 and 440, and it does not always need to be an exact value.
  • the imaging time is the time at which the captured image was acquired, and is acquired from time information built in the camera 120, the image processing device 110, and the like.
  • the congestion degree indicates the density of moving objects within the measurement range.
  • the congestion degree can be obtained by calculating the ratio of the number of pixels of the difference area of the moving body areas 431, 432, 441 to the resolution of the difference image of the moving body area images 430, 440.
  • The congestion degree may be used as is in a value range of "0" to "100", or may be down-sampled to a coarser range such as "0" to "5".
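  • As an illustration, the congestion degree described above could be computed from a binary moving-body mask along the following lines (a minimal sketch; the function name and the down-sampling behavior are assumptions, not taken from the patent):

```python
import numpy as np

def congestion_degree(moving_mask, levels=None):
    """Ratio of moving-body pixels to all pixels, scaled to 0-100.

    moving_mask: binary image (nonzero = moving body region), i.e. the
    difference image behind moving body region images such as 430 and 440.
    levels: if given, down-sample the 0-100 range to 0..levels (e.g. 5).
    """
    ratio = np.count_nonzero(moving_mask) / moving_mask.size  # 0.0 to 1.0
    degree = int(round(ratio * 100))                          # 0 to 100
    if levels is not None:
        degree = int(round(degree * levels / 100))            # e.g. 0 to 5
    return degree
```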
  • the number of moving object regions is the number of moving object regions existing in the moving object region image.
  • As the number of moving body regions, the number of moving body regions 431, 432, and 441 existing in the moving body region images 430 and 440 is used as is.
  • the moving object region details include information such as the number of people, the position, the moving direction, and the moving speed.
  • The number of people is the number of moving objects expected to exist in a moving body region. For example, the number of people can be calculated by a method such as holding, in advance, a correlation between the size of the moving body regions 431, 432, and 441 and the number of moving bodies 411, 412, 421, and 422 contained in them. For example, when a region is small like the moving body regions 431 and 432, the number of people is set to "1"; when a region is large like the moving body region 441, the number of people is set to "2" or more.
  • the position indicates the position of the moving object region in the captured image.
  • the position is calculated by using the coordinate axes 460 and treating the barycentric positions (x, y) of the moving body regions 431, 432, and 441 as position information.
  • the moving direction indicates the moving direction of the moving object.
  • the moving speed indicates the moving speed of the moving object.
  • the moving direction and the moving speed are obtained by performing a general image processing technique such as an optical flow.
  • the moving direction may be classified into eight directions, such as the moving object direction 470, based on the optical flow information.
  • As the moving speed, a rough speed (km/hour) estimated from the speed in pixel units (pixels/second), in consideration of the size of the moving body regions 431, 432, and 441 and the installation position of the camera 120, may be used.
  • The analysis information 450 acquired by the scene analysis unit 210 includes at least a true value and an imaging time; other information that can easily be acquired by image processing techniques may also be added, and there is no particular limitation.
  • the coordinate axis 460 indicates the coordinate axis of the captured image.
  • the moving object direction 470 indicates the direction of the moving object in the captured image.
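  • To illustrate the eight-direction classification mentioned above, an average optical-flow vector for a moving body region could be binned as follows (an illustrative sketch; the direction labels and the use of an averaged flow vector are assumptions):

```python
import math

DIRECTIONS = ["right", "up-right", "up", "up-left",
              "left", "down-left", "down", "down-right"]

def direction_8(dx, dy):
    """Classify a flow vector (dx, dy) into one of eight directions.

    Image coordinates grow downward, so dy is negated to obtain a
    conventional counter-clockwise angle before binning into 45-degree
    sectors centered on the eight directions.
    """
    angle = math.degrees(math.atan2(-dy, dx)) % 360.0
    return DIRECTIONS[int(((angle + 22.5) % 360.0) // 45.0)]
```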
  • FIG. 5 is a diagram for explaining scene analysis information and scene information.
  • the scene analysis information 510 is information obtained by adding reliability information to the analysis information of the captured image 410.
  • the scene information 160 is information summarizing the captured image 410 and the scene analysis information 510.
  • the reliability is an example of reliability information indicating the reliability of a true value in the measurement range of the camera 120, and indicates the reliability of analysis information (for example, a true value) for a captured image.
  • As described above, the scene information (for example, the scene information 160) includes the measurement information acquired by the measurement device (for example, a captured image from the camera 120), time information (for example, the imaging time), and specific information related to specifying an object in the measurement range of the measurement device (for example, a true value).
  • In addition, the scene information includes reliability information (for example, the reliability), and may also include congestion degree information indicating the density of objects within the measurement range of the measurement device (for example, the congestion degree), object region count information indicating the number of object regions existing within the measurement range (for example, the number of moving body regions), position information of the object regions in the measurement range (for example, the positions of the moving body regions), moving direction information of the object regions (for example, the moving directions of the moving body regions), and moving speed information of the object regions (for example, the moving speeds of the moving body regions).
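  • As a concrete illustration, scene information with the fields listed above might be represented as follows (a hypothetical sketch; the class and field names are illustrative and not defined by the patent):

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

import numpy as np

@dataclass
class SceneAnalysisInfo:
    true_value: int                  # estimated number of detection targets
    imaging_time: float              # e.g. UNIX time the image was captured
    reliability: float               # confidence in the analysis information
    congestion: Optional[int] = None    # density of moving objects, 0-100
    num_regions: Optional[int] = None   # number of moving body regions
    # per-region details: (x, y) centroid, 8-way direction, speed (km/hour)
    positions: List[Tuple[float, float]] = field(default_factory=list)
    directions: List[str] = field(default_factory=list)
    speeds: List[float] = field(default_factory=list)

@dataclass
class SceneInfo:
    image: np.ndarray                # the captured image (measurement info)
    analysis: SceneAnalysisInfo      # the scene analysis information
```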
  • FIG. 6 is a diagram illustrating an example of a processing procedure related to processing in the image processing apparatus 110.
  • In step S601, the scene analysis unit 210 acquires analysis information and an imaging time from a captured image.
  • the scene analysis unit 210 generates a moving body region image from a captured image by image processing.
  • a general known method can be used as a method of generating the moving body region image, and is not particularly limited.
  • For example, using a background subtraction method, the scene analysis unit 210 calculates the difference between a background image held in advance, in which no detection target appears, and the captured image, and determines a region where the difference amount is larger than a threshold value to be a moving body region. In this case, the moving body region is obtained only roughly. When an accurate region in which the moving body (person) exists is to be obtained from the captured image in pixel units, the moving body region image may instead be generated using a high-accuracy background subtraction method that makes use of background updating, an inter-frame difference method that takes the difference between successive captured images, or the like.
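  • A minimal background subtraction along the lines described above might look like the following (an illustrative sketch; the threshold value is an assumption):

```python
import numpy as np

def moving_region_image(frame, background, threshold=30):
    """Return a binary moving body region image via background subtraction.

    frame, background: grayscale images of the same shape (uint8); the
    background image is held in advance and contains no detection target.
    A pixel is marked as moving when its absolute difference from the
    background exceeds `threshold` (an assumed value).
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # 1 = moving body region
```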
  • In step S602, the scene reliability calculation unit 220 calculates the reliability of the analysis information and generates the scene information 160.
  • the scene reliability calculation unit 220 calculates the reliability of the analysis information using, for example, part or all of the analysis information extracted by the scene analysis unit 210.
  • the analysis information used when calculating the reliability is not particularly limited.
  • For example, the scene reliability calculation unit 220 judges that the smaller the difference area between the captured image and the background image, the less likely noise is to be introduced when extracting information, and sets the reliability higher. Further, for example, when the analysis information indicates that the congestion degree is small or the number of moving body regions is small, the scene reliability calculation unit 220 may set the reliability linearly higher within a value range specified in advance. Conversely, the reliability of analysis information in which the sum of the numbers of people in the moving body regions differs from the true value may be set lower.
  • the scene reliability calculating unit 220 may use the camera information in calculating the reliability when camera information (camera installation information and camera characteristic information) can be acquired.
  • the camera installation information is, for example, the installation height, the installation angle, and the installation position of the camera 120.
  • The camera characteristic information describes characteristics of the camera 120 that may affect the accuracy of object detection, and includes, for example, the image resolution, the frame rate, and the type of the camera 120 (normal, directly-above, fisheye, and the like).
  • For example, based on the fact that the accuracy of image processing generally increases as the distance between the camera 120 and the measurement target decreases or as the depression angle of the camera 120 increases, the scene reliability calculation unit 220 estimates the distance between the camera 120 and the measured person from the camera installation information by simulation or the like, and sets the reliability higher for analysis information extracted from a camera 120 expected to be close to the person, a camera 120 capable of imaging the person from directly above, a high-resolution camera 120, and so on.
  • The scene reliability calculation unit 220 may acquire the camera information by referring to a database of camera information held in advance, or may acquire it by estimating the installation position and the like of the camera 120 from image feature amounts in the captured image.
  • The scene reliability calculation unit 220 may determine the final reliability from the sum of the reliabilities calculated from the analysis information and from the camera information. Also, for example, combining analysis information such as the moving speed of a moving body region with camera information such as the imaging frame rate, the scene reliability calculation unit 220 may set the reliability lower for analysis information indicating a fast-moving body region from a camera 120 with a low frame rate. In addition, for example, it may set the reliability higher for analysis information indicating a low congestion degree from a camera 120 that can image from directly above.
  • Alternatively, a method may be used that calculates the reliability based on camera installation environment information indicating installation environments of the camera 120 in which the accuracy of object detection is expected to decrease; the calculation method is not particularly limited.
  • For example, when an environment where direct sunlight enters, an environment containing a mirror, or an environment containing a constantly operating object such as an escalator is within the imaging range, the scene reliability calculation unit 220 judges that image processing accuracy may deteriorate and sets the reliability lower.
  • As described above, the scene information extraction unit (for example, the scene information extraction unit 140) calculates the reliability information (for example, the reliability) using at least one of the measurement information acquired by the measurement device (the captured image, analysis information extracted from the captured image, and the like), the installation information of the measurement device, the installation environment information of the measurement device, and the characteristic information of the measurement device (for example, at least one of the measurement information of the camera 120, the camera installation information, the camera installation environment information, and the camera characteristic information). According to this configuration, the reliability information can be calculated appropriately.
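  • The heuristics above could be combined into a single score, for instance as below (a toy sketch; all weights and rules are assumptions chosen only to mirror the tendencies described in the text):

```python
def reliability(diff_pixel_ratio, congestion, num_regions,
                people_sum, true_value, camera_score=0.0):
    """Toy reliability score in [0, 1] for one item of analysis information.

    diff_pixel_ratio: fraction of pixels differing from the background
    (smaller difference area -> less noise -> higher reliability).
    camera_score: optional bonus derived from camera installation or
    characteristic information (short distance to subject, overhead view,
    high resolution, etc.). All weights here are assumed values.
    """
    score = 1.0 - diff_pixel_ratio       # smaller difference area is better
    score -= 0.005 * congestion          # lower congestion degree is better
    score -= 0.02 * num_regions          # fewer moving body regions is better
    if people_sum != true_value:         # inconsistent analysis information
        score -= 0.3
    score += camera_score                # camera-information contribution
    return max(0.0, min(1.0, score))
```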
  • In step S603, the scene information combining unit 310 combines the scene information 160 between the cameras 120 based on the camera arrangement information and the imaging times.
  • FIG. 7A shows an example (two-dimensional map 710) of a floor map showing a floor (place) where the camera 120 is installed.
  • the objects 720 and 730 indicate the position of the camera 120 and the shooting direction.
  • The camera arrangement information 170 is not particularly limited as long as the positional relationship between the cameras 120 in the measurement range can be grasped, as shown in FIG. 7A.
  • the photographing direction may be information about a direction roughly estimated from the camera arrangement information and the image feature amount of the captured image.
  • the two-dimensional map 710 can be generated based on the camera arrangement information 170.
  • Hereinafter, the camera 120 indicated by the object 720 is referred to as camera A, and the camera 120 indicated by the object 730 as camera B, as appropriate.
  • FIG. 7 shows an example of combining scene information 160 of camera A and scene information 160 of camera B.
  • For example, when the moving time from camera A to camera B is calculated to be 10 seconds, the scene information combining unit 310 associates the scene information 160 of camera A with the scene information 160 of camera B whose imaging time is 10 seconds before or after the imaging time of camera A's scene information 160, or whose imaging time is closest to that value. However, the present invention is not limited to this.
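  • Such an association could be sketched as follows (reusing the hypothetical SceneInfo structure from above; the travel time and the tolerance window are assumed values):

```python
def associate(scenes_a, scenes_b, travel_time=10.0, tolerance=2.0):
    """Pair camera A scene information with camera B scene information.

    For each item of camera A, pick the camera B item whose imaging time
    is closest to (A's imaging time + travel_time seconds), and keep the
    pair only if the deviation is within `tolerance` seconds.
    """
    pairs = []
    if not scenes_b:
        return pairs
    for sa in scenes_a:
        target = sa.analysis.imaging_time + travel_time
        sb = min(scenes_b,
                 key=lambda s: abs(s.analysis.imaging_time - target))
        if abs(sb.analysis.imaging_time - target) <= tolerance:
            pairs.append((sa, sb))
    return pairs
```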
  • In step S604, the scene analysis information correction unit 320 corrects the scene information 160.
  • More specifically, the scene analysis information correction unit 320 changes the scene information 160 (for example, the scene analysis information) of each camera 120 by comparing the items of scene information 160 associated by the scene information combining unit 310.
  • The scene information 160 to be compared is not particularly limited. Likewise, the scene analysis information to be corrected is not particularly limited as long as the true value is included as a candidate.
  • the scene analysis information correction unit 320 may compare the reliability and replace the true value of the low reliability scene analysis information with the true value of the high reliability scene analysis information. Further, for example, when the reliability is lower than the threshold value, the scene analysis information correction unit 320 may discard the associated scene information 160 so as not to use it as a learning scene.
  • Also, for example, the scene analysis information correction unit 320 may compare the congestion degrees and/or the numbers of moving body regions, determine that the smaller the value, the smaller the noise, and replace the true value of the scene information with the larger value by the true value of the scene information with the smaller value. Further, for example, when the congestion degrees and/or the numbers of moving body regions differ too greatly, the scene analysis information correction unit 320 may discard the associated scene information 160.
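  • The reliability-based correction just described might look like this (an illustrative sketch; the discard threshold is an assumption):

```python
def correct_pair(sa, sb, min_reliability=0.2):
    """Correct an associated pair of scene information items in place.

    The true value of the lower-reliability item is replaced by that of
    the higher-reliability one; the pair is discarded (None is returned,
    so it is not used as a learning scene) when even the better item's
    reliability falls below `min_reliability` (an assumed threshold).
    """
    lo, hi = sorted((sa, sb), key=lambda s: s.analysis.reliability)
    if hi.analysis.reliability < min_reliability:
        return None
    lo.analysis.true_value = hi.analysis.true_value
    return sa, sb
```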
  • In addition, the scene analysis information correction unit 320 may use the camera arrangement information 170, the moving body region details in the scene analysis information, and the camera information. For example, when the moving direction is consistent with an actual moving path of a person in view of the camera arrangement information, the scene analysis information correction unit 320 may determine that the reliability of the association of the scene information 160 itself is high, set both true values to the true value of the scene information with the higher reliability, and set the reliabilities higher.
  • For example, if the moving direction in the scene information 160 of the camera 120 of the object 720 is leftward and the moving direction in the associated scene information 160 of the camera 120 of the object 730 from 10 seconds later is downward, it can be determined that the association is consistent with the moving path of a person; on the other hand, if that scene information is also associated with scene information 160 of the camera 120 of the object 730 from 10 seconds earlier, the associated scene information 160 may be discarded.
  • Also, for example, the scene analysis information correction unit 320 may compare all items of the scene analysis information and determine that the reliability of the association of the scene information 160 is high when the sum of the differences between the values, or a value calculated using predetermined weighting coefficients, is equal to or less than a threshold value. Further, for example, in consideration of the camera installation information and the camera characteristic information, if there is scene information 160 considered to have extremely high reliability, the true values of all the associated scene information 160 may be replaced with the true value of that scene information 160.
  • As described above, the parameter optimization unit (for example, the parameter optimization unit 150, the scene information changing unit 151, and the scene analysis information correction unit 320) corrects the first scene information (for example, the scene information 160, scene analysis information, reliability, and true value of camera A) and/or the second scene information (for example, the scene information 160, scene analysis information, reliability, and true value of camera B) based on the reliability information of the first scene information (for example, the reliability of camera A) and the reliability information of the second scene information (for example, the reliability of camera B).
  • According to such a configuration, the analysis information of a captured image acquired by one camera 120 can be complemented by the analysis information of a captured image acquired by another camera 120, so that scene information 160 with more accurate reliability information can be obtained efficiently.
  • In addition, the parameter optimization unit discards the first scene information and/or the second scene information based on the reliability information of the first scene information and the reliability information of the second scene information. According to such a configuration, for example, scene information 160 determined to be unnecessary for parameter optimization is discarded, so the HDD capacity consumed can be reduced.
  • In step S605, the learning scene selection unit 152 selects learning scenes for each camera 120.
  • the learning scene selecting unit 152 selects a learning scene for optimizing the parameters of each camera 120 from the scene information 160 output by the scene information changing unit 151.
  • For example, the learning scene selection unit 152 may select a preset number of items of high-reliability scene information 160 as learning scenes. Further, for example, to increase the variation of the learning scenes, the learning scene selection unit 152 may classify the scene information 160 by true value and, for each true value, select up to a threshold number of items of high-reliability scene information 160 as learning scenes.
  • In addition, the learning scene selection unit 152 may define scenes using information such as the imaging time, the congestion degree, and the moving body regions. For example, the learning scene selection unit 152 may classify the scene information 160 in advance into four scenes, such as morning, noon, evening, and night, according to the imaging time, and select scene information 160 corresponding to each scene. Also, for example, it may classify the scene information 160 in advance into two scenes, congested and uncongested, according to the congestion degree, and select scene information 160 corresponding to each scene. Further, scenes may be defined by combining the imaging time, congestion degree, and moving body region information, and scene information 160 corresponding to each scene may be selected.
  • As described above, the parameter optimization unit (for example, the parameter optimization unit 150 and the learning scene selection unit 152) selects learning scenes from the scene information generated by the scene information extraction unit (for example, the scene information extraction unit 140) based on the reliability information (for example, the reliability) of that scene information.
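  • A per-true-value selection as described above might be sketched as follows (the quota per true value is an assumed parameter):

```python
def select_learning_scenes(scenes, per_true_value=20):
    """Pick up to `per_true_value` highest-reliability scenes per true value.

    Classifying the scene information by true value increases the
    variation of the learning scenes; the quota is an assumption.
    """
    by_true = {}
    for s in scenes:
        by_true.setdefault(s.analysis.true_value, []).append(s)
    selected = []
    for group in by_true.values():
        group.sort(key=lambda s: s.analysis.reliability, reverse=True)
        selected.extend(group[:per_true_value])
    return selected
```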
  • In steps S606 and S607, the parameter calculation unit 153 optimizes the parameters related to the accuracy of object detection for each camera 120 using the learning scenes selected by the learning scene selection unit 152.
  • The method for optimizing the parameters is not particularly limited; for example, a general algorithm using least-squares error is used.
  • In step S606, the parameter calculation unit 153 performs object detection on the learning scenes with initial parameters. For example, the parameter calculation unit 153 extracts the captured images of the selected learning scenes and the corresponding true values. Next, the parameter calculation unit 153 performs object detection in each captured image using the initial parameters.
  • In step S607, the parameter calculation unit 153 calculates the parameters that minimize the difference between the measurement result and the true value of the learning scene.
  • More specifically, the parameter calculation unit 153 obtains the difference between the result of the object detection in step S606 (the object detection result) and the true value.
  • Next, the parameter calculation unit 153 changes the initial parameters within a preset value range, or changes them at random.
  • The parameter calculation unit 153 then returns the processing to step S606, executes the object detection in the same way, and obtains the difference from the true value in step S607. These processes are repeated a fixed number of times, and the parameters with the smallest difference are calculated as the final parameters.
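  • Steps S606 and S607 thus amount to a search loop of roughly the following shape (a schematic sketch; `detect_count`, which stands in for object detection under a given threshold parameter, and the search range are hypothetical):

```python
import random

def optimize_threshold(learning_scenes, detect_count,
                       lo=0.0, hi=1.0, iterations=100):
    """Random search for a threshold parameter minimizing the summed
    absolute difference between detection results and true values.

    detect_count(image, threshold) -> number of detected objects.
    """
    best_param, best_err = None, float("inf")
    for _ in range(iterations):
        param = random.uniform(lo, hi)   # change the parameter at random
        err = sum(abs(detect_count(s.image, param) - s.analysis.true_value)
                  for s in learning_scenes)
        if err < best_err:               # keep the smallest difference
            best_param, best_err = param, err
    return best_param
```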
  • The parameters to be optimized may include not only threshold parameters for object detection but also threshold parameters related to applications such as object tracking.
  • In this case, for example, the parameter calculation unit 153 selects in advance, as a learning scene, a scene in which the same person is highly likely to be walking through the measurement range, and optimizes the parameters related to tracking the object by changing them to threshold parameters with which the person can be tracked continuously.
  • In step S608, the output control unit 154 performs output processing.
  • the output control unit 154 sets the parameters calculated by the parameter calculation unit 153 in the camera 120.
  • the parameters are automatically optimized and set in the camera 120, so that the time required for setting the parameters can be reduced. Further, labor costs can be reduced.
  • Note that the parameter calculation unit 153 does not necessarily estimate one set of parameters for the selected learning scenes, and may estimate a plurality of sets of parameters according to the scene information 160. For example, the learning scenes may be classified in advance into four scenes, such as morning, noon, evening, and night, according to the imaging time, optimal parameters may be estimated for each scene, and the parameters used may be switched accordingly. In addition, optimal parameters may be estimated for scenes classified according to the congestion degree; in that case, only the part of the image processing of the scene information extraction unit 140 that calculates the congestion degree is executed before object detection, and the parameters may be switched according to the result.
  • As described above, the parameter optimization unit classifies the scene information generated by the scene information extraction unit (for example, the scene information extraction unit 140) into a plurality of scenes (for example, four scenes of morning, noon, evening, and night; two scenes such as congested and uncongested; or eight scenes combining these), optimizes the parameters for each classified scene, and sets the parameters optimized for each scene in the measurement device. According to such a configuration, it becomes possible to automatically set parameters corresponding to various scenes.
  • Note that the output control unit 154 may propose an installation plan for the cameras 120 by analyzing the learning scenes selected by the learning scene selection unit 152 and their reliabilities. For example, when the reliabilities of all the selected learning scenes are equal to or less than a threshold value, the output control unit 154 determines that the optimal parameters cannot be estimated with the current arrangement of the cameras 120, displays on the output device 180 that an additional camera 120 needs to be installed near the camera 120 with the low-reliability learning scenes (for example, the learning scene with the lowest reliability), and thereby clearly proposes to the user an arrangement plan of the cameras 120 for realizing accurate object detection.
  • As described above, the parameter optimization unit (for example, the parameter optimization unit 150 and the output control unit 154) determines whether the reliability information of each item of scene information of a given measurement device after correction (for example, the reliability of the scene information 160 of a camera 120) falls at or below a predetermined level, and when it is determined to be at or below that level, outputs information indicating that the accuracy of specifying objects in that measurement device is low (for example, that an additional camera 120 needs to be installed, or that the optimal parameters cannot be estimated with the current arrangement of the cameras 120).
  • Note that the series of processes may be executed in consideration of the free space of the HDD of the camera 120, the free space of the HDD of the image processing device 110, the processing specifications of the camera 120, or the processing specifications of the image processing device 110.
  • In other words, the scene information extraction unit may store the generated scene information in the storage unit based on the processing specification information of the measurement device (for example, the processing specifications of the camera 120), the processing specification information of the measurement information processing device (for example, the processing specifications of the image processing device 110), or the data capacity information of the measurement information processing device (for example, the free space of the HDD of the image processing device 110).
  • For example, the scene information extraction unit 140 may discard all scene information 160 whose reliability is lower than a threshold value; with such a configuration, the HDD capacity consumed can be reduced. Further, for example, instead of outputting all the scene information 160 generated for each captured image, the scene information extraction unit 140 may accumulate a plurality of items of scene information 160 and output representative scene information 160. For example, scene information 160 corresponding to ten consecutive captured images may be held, the middle (fifth) captured image and the average value of each item of scene analysis information may be taken as new scene analysis information, and these may be combined into the final scene information 160. With such a configuration as well, the HDD capacity consumed can be reduced.
  • In addition, for example, the learning scene selection unit 152 may limit the number of learning scenes to be held in consideration of the parameter optimization algorithm. According to such a configuration, the processing cost can be kept from becoming unnecessarily large.
  • Note that the scene information 160 generated by the scene information extraction unit 140 may also be modified manually. For example, the output control unit 154 outputs the captured image of the scene information 160 to the output device 180 or the like, and the input control unit 155 accepts a manual operation such as inputting the true value via a GUI or the like. According to such a configuration, accurate learning data can be obtained, and the parameter estimation accuracy can be improved.
  • In the present embodiment, the scene information extraction unit 140 extracts, based on the measurement information acquired by a measurement device (for example, digital image data acquired by the camera 120), time information at which the measurement information was acquired (for example, the imaging time) and specific information related to specifying an object in the measurement range of the measurement device (for example, a true value; a true value and an imaging time; or a true value and a congestion degree), and generates scene information including the measurement information, the time information, and the specific information.
  • Further, the parameter optimization unit 150 specifies, based on the time information of the scene information generated by the scene information extraction unit 140 and the arrangement information of the measurement devices (for example, the position information of the cameras), the second scene information acquired by the second measurement device that is associated with the first scene information acquired by the first measurement device (for example, the scene information 160 of camera A and the scene information 160 of camera B are combined), and optimizes the parameters related to specifying an object in the first measurement device based on the first scene information and the second scene information.
  • According to such a configuration, a variety of learning scenes with highly accurate true values can be selected for each measurement device.
  • Second Embodiment: It is desirable to be able to update the parameters of the camera 120 when the arrangement (layout) of fixed objects is changed, when the camera 120 deteriorates, when a camera 120 is added or replaced, and so on. A configuration in which the parameters of the camera 120 are constantly updated could be adopted; however, if the parameters are constantly updated, processing is performed even when updating the parameters is not necessary, and there is a problem that the load on the image processing device increases. Thus, in the present embodiment, a configuration for updating the parameters of the camera 120 at appropriate times will be described. Note that, in the present embodiment, the same components as those in the first embodiment are denoted by the same reference numerals, and their description will be omitted as appropriate.
  • FIG. 8 is a diagram showing an example of a configuration according to the image processing system 800 of the present embodiment.
  • the image processing system 800 includes a plurality of cameras 120 and an image processing device 810.
  • The image processing device 810 is a device that, when it determines that parameter optimization is necessary, selects learning scenes and optimizes the parameters using the new scene information 860 of the camera in question and of the other cameras, together with the camera arrangement information 170.
  • the object detection unit 820 detects an object in the captured image acquired by the image acquisition unit 130 using the current parameters.
  • the parameter optimization determining unit 830 determines whether to optimize the parameter using the result of the object detection by the object detecting unit 820 (object detection result).
  • An object detection result is the result of performing object detection on a captured image, and includes information such as the number of detected objects and position information of the objects in the captured image, like the positions of the moving body regions in the scene analysis information.
  • the scene information extraction unit 840 substantially follows the function of the scene information extraction unit 140 of the first embodiment.
  • The learning scene changing unit 851 included in the parameter optimization unit 850 changes the learning scenes by comparing the scene information newly acquired from the scene information changing unit 151 (new scene information 860) with the scene information previously held as learning scenes (stored scene information 870).
  • The parameter optimization determination unit 830 determines whether the parameters need to be optimized using the object detection result output from the object detection unit 820, and, when the parameters need to be optimized, controls the scene information extraction unit 840 to generate new scene information 860 from the captured image acquired by the image acquisition unit 130.
  • the method for determining the necessity of parameter optimization is not particularly limited.
  • For example, the parameter optimization determination unit 830 may refer to the object detection results for one day and determine that parameter optimization is necessary when objects are constantly detected or are not detected at all.
  • Also, for example, the parameter optimization determination unit 830 may specify in advance a threshold value, such as the maximum number of objects that can plausibly be detected in view of the measurement range, and determine that parameter optimization is necessary when the number of detected objects exceeds that threshold value.
  • the parameter optimization determination unit 830 may hold the latest layout information indicating the arrangement of the camera 120, and may determine to optimize the parameter when the layout information changes significantly.
  • In addition, for example, the parameter optimization determination unit 830 may compare the background image from when the camera 120 was first installed with the current background image, and determine that the parameters are to be optimized when the difference is equal to or larger than a threshold value.
  • Besides these, camera installation environment information from which a reduction in object detection accuracy can be expected, or information with which the difference between the captured image at the time the parameters were last optimized and the current captured image can be grasped by simple image processing, may be adopted as appropriate.
  • Further, for example, the parameter optimization determination unit 830 may determine that the parameters need to be optimized when a predetermined time has elapsed since the camera 120 was installed, based on information that can specify the date and time when the camera 120 was installed (camera installation time information).
  • the parameter optimization determination unit 830 may determine that parameter optimization is necessary based on an instruction from a user.
  • As described above, the parameter optimization determination unit determines whether to optimize the parameters of the measurement device based on the result of specifying objects from the measurement information of the measurement device (for example, the object detection result from images captured by the camera 120), the installation environment information of the measurement device (for example, the camera installation environment information), or the installation time information of the measurement device (for example, the camera installation time information). According to this configuration, the timing of parameter optimization can be determined appropriately, so the load on the measurement information processing device for parameter optimization can be reduced.
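  • A toy decision rule combining the criteria above might look like the following (purely illustrative; all thresholds are assumed values):

```python
def needs_optimization(daily_counts, max_expected,
                       background_diff_ratio, diff_threshold=0.3):
    """Decide whether parameter optimization is necessary.

    daily_counts: object detection counts over one day.
    max_expected: pre-specified maximum plausible detection count for
    the measurement range.
    background_diff_ratio: fraction of pixels differing between the
    initial and current background images.
    """
    always_detecting = all(c > 0 for c in daily_counts)
    never_detecting = all(c == 0 for c in daily_counts)
    over_capacity = any(c > max_expected for c in daily_counts)
    layout_changed = background_diff_ratio >= diff_threshold
    return (always_detecting or never_detecting
            or over_capacity or layout_changed)
```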
  • Note that the parameter optimization need not target all the cameras 120; using the camera arrangement information 170 and the like, it may target only the cameras 120 around a camera 120 determined to need parameter optimization.
  • FIG. 9 is a diagram illustrating an example of a configuration related to the learning scene changing unit 851.
  • the learning scene changing unit 851 includes a learning scene selecting unit 152 and a learning scene comparing unit 910.
  • The learning scene comparison unit 910 compares the new scene information 860 of the current learning scenes notified by the learning scene selection unit 152 with the stored scene information 870 of the previously saved learning scenes, stores the scene information of the learning scenes with the higher reliability as the stored scene information 870, and outputs it to the parameter calculation unit 153.
  • the method for selecting scene information of a learning scene with high reliability is not particularly limited.
  • For example, the learning scene comparison unit 910 may compare the reliabilities of scene information having the same true value (or the same true value and scene) and adopt the scene information with the higher reliability as the stored scene information 870. Further, the learning scene comparison unit 910 may select high-reliability scene information using the scene analysis information, in the same manner as the scene analysis information correction unit 320, and use the selected scene information as the stored scene information 870.
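  • This comparison might be sketched as follows (reusing the hypothetical SceneInfo structure; keying the stored scenes by true value is an assumed layout):

```python
def update_stored_scenes(stored, new_scenes):
    """Keep, per true value, whichever learning scene is more reliable.

    stored / new_scenes: dicts mapping a true value to a SceneInfo.
    Mirrors the idea of the learning scene comparison unit 910: the
    higher-reliability scene information becomes the stored one.
    """
    for tv, new in new_scenes.items():
        old = stored.get(tv)
        if old is None or new.analysis.reliability > old.analysis.reliability:
            stored[tv] = new
    return stored
```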
  • As described above, when the parameter optimization determination unit (for example, the parameter optimization determination unit 830) determines that optimization is to be performed, the scene information newly generated by the scene information extraction unit (for example, the new scene information 860 generated by the scene information extraction unit 840) is compared with the stored scene information having the same specific information as that of the new scene information (for example, the stored scene information 870 with the same true value, or the same true value and scene, as the new scene information 860), and the stored scene information of the stored learning scenes is updated.
  • For example, if the learning scenes acquired by the learning scene selection unit 152 include new scene information 860 with the true values "1" to "5" and the reliability is determined to be higher for any of them, the stored learning scenes with the true values "1" to "5" are replaced and saved.
  • According to the configuration described above, in an image processing system that includes two or more measurement devices and detects objects, whether the parameters need to be optimized is determined based on the object detection results, and the learning scenes are selected or updated based on scene information extracted from images captured by the measurement devices installed in the surroundings, so that a variety of learning scenes with highly accurate true values can be selected according to the accuracy of object detection. In addition, by performing this process repeatedly, it is possible to cope with environmental changes around an installed camera and with the aging of a camera installed for a long time.
  • In the embodiments described above, a case has been described in which an image acquisition unit 130 is provided for each camera 120; however, the present invention is not limited to this, and a configuration may be provided in which there is a single image acquisition unit regardless of the number of cameras 120.
  • Information such as the programs, tables, and files for realizing each function can be placed in a memory, in a storage device such as an HDD or SSD (Solid State Drive), or on a recording medium such as an IC card, SD card, or DVD.
  • the accuracy of the parameters of the measuring device can be improved.
  • 100: image processing system, 110: image processing device, 120: camera, 130: image acquisition unit, 140: scene information extraction unit, 150: parameter optimization unit, 160: scene information, 170: camera arrangement information.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

This measurement information processing device for processing measurement information acquired by measurement devices, is provided with: a scene information extraction unit that extracts, on the basis of the measurement information acquired by the measurement devices, information about a time at which the measurement information is acquired and specification information related to specification of an object within the measurement range of the measurement devices, and that generates scene information including the measurement information, the time information, and the specification information; and a parameter optimization unit that specifies, on the basis of the time information in the scene information generated by the scene information extraction unit and information about arrangement of the measurement devices, second scene information which is acquired by a second measurement device and which is associated with first scene information acquired by a first measurement device, and that optimizes, on the basis of the first scene information and the second scene information, a parameter related to specification of the object by the first measurement device.

Description

計測情報処理装置Measurement information processing device
 本発明は計測情報処理装置に関し、例えば計測装置で取得された計測情報を処理する計測情報処理装置に適用して好適なものである。 The present invention relates to a measurement information processing apparatus, and is suitably applied to, for example, a measurement information processing apparatus that processes measurement information acquired by a measurement apparatus.
 計測装置が取得した情報(計測情報)により、物体を検出する物体検出技術へのニーズが高まっている。計測装置としては、監視カメラ、距離センサ、レーザレーダ、赤外線タグなどが多く活用されている。特に、装置を新設するための導入コストが掛からず、かつ、既設の装置を利用できる監視カメラへの期待が高い。監視カメラを適材適所に満遍なく設置できるならば、ユーザが求める計測精度を達成できるが、実際にはコスト面、設置場所の環境などによって使用する監視カメラの数、物体検出の性能などが限定され、期待された計測精度が得られない。そのため、技術者などが現地に赴き、計測精度を高めるために監視カメラが物体検出の際に使用するしきい値などのパラメータを長時間調整する場面が多々ある。 (4) There is a growing need for an object detection technology for detecting an object based on information (measurement information) acquired by the measurement device. As a measuring device, a surveillance camera, a distance sensor, a laser radar, an infrared tag, and the like are widely used. In particular, there is a high expectation for a surveillance camera that does not require an introduction cost for newly installing a device and that can use an existing device. If the surveillance cameras can be installed in the right place at the right place, the measurement accuracy required by the user can be achieved.However, the cost, the number of surveillance cameras to be used, the performance of object detection, etc. are limited due to cost, installation environment, etc. The expected measurement accuracy cannot be obtained. For this reason, there are many cases where a technician or the like goes to the site and adjusts parameters such as threshold values used by the surveillance camera at the time of object detection for a long time in order to improve measurement accuracy.
In view of such circumstances, techniques have been developed in recent years for optimizing the parameters of a surveillance camera according to its installation environment. Such a technique selects, from the video of an installed surveillance camera, scenes whose true value information is known, and estimates parameters suited to each camera's installation environment by learning the number of moving objects in the video, their movement tendencies, and image patterns.

Although the true value information could be entered manually, in a system that uses a large number of surveillance cameras there is a concern that the cost required for adjustment would balloon. A technique is therefore required that automatically selects scenes whose true values are known and optimizes the parameters.

In this regard, a technology has been disclosed that uses face detection results to select learning scenes whose true values are highly reliable, and automatically optimizes person-tracking parameters (see Patent Document 1).
Patent Document 1: JP 2012-59224 A
With the technology described in Patent Document 1, person-tracking parameters can be optimized automatically; however, if face detection itself is difficult because of the camera's installation environment, the accuracy of the obtained true values is low and estimating optimal parameters becomes difficult. In addition, because scenes in which a face can be detected are selected, the learning scenes may be limited to scenes where a person is near the camera or scenes capturing a person's frontal face, raising the concern that parameters effective for a variety of scenes cannot be obtained.

The present invention has been made in consideration of the above points, and proposes a measurement information processing apparatus capable of appropriately obtaining the scene information used for optimizing the parameters of a measurement device.
In order to solve this problem, the present invention provides a measurement information processing device that processes measurement information acquired by measurement devices, the device including: a scene information extraction unit that extracts, on the basis of the measurement information acquired by a measurement device, time information indicating when the measurement information was acquired and specification information related to the specification of an object within the measurement range of the measurement device, and that generates scene information including the measurement information, the time information, and the specification information; and a parameter optimization unit that specifies, on the basis of the time information in the scene information generated by the scene information extraction unit and arrangement information of the measurement devices, second scene information acquired by a second measurement device and associated with first scene information acquired by a first measurement device, and that optimizes, on the basis of the first scene information and the second scene information, a parameter related to the specification of the object by the first measurement device.

In the above configuration, a parameter related to the specification of an object by the first measurement device is optimized on the basis of the first scene information acquired by the first measurement device and the second scene information, acquired by the second measurement device, that is associated with the first scene information. According to this configuration, by taking into account not only the first scene information but also the second scene information associated with it when optimizing the parameters of the first measurement device, scene information suitable for the optimization can be obtained appropriately and the accuracy of the parameters can be improved.
According to the present invention, the accuracy of the parameters of a measurement device can be improved.
FIG. 1 is a diagram illustrating an example of a configuration of an image processing system according to a first embodiment.
FIG. 2 is a diagram illustrating an example of a configuration of a scene information extraction unit according to the first embodiment.
FIG. 3 is a diagram illustrating an example of a configuration of a scene information changing unit according to the first embodiment.
FIG. 4 is a diagram for describing analysis information according to the first embodiment.
FIG. 5 is a diagram for describing scene analysis information and scene information according to the first embodiment.
FIG. 6 is a diagram illustrating an example of a processing procedure in the image processing apparatus according to the first embodiment.
FIG. 7 is a diagram for describing a scene information combining unit according to the first embodiment.
FIG. 8 is a diagram illustrating an example of a configuration of an image processing system according to a second embodiment.
FIG. 9 is a diagram illustrating an example of a configuration of a learning scene changing unit according to the second embodiment.
Hereinafter, an embodiment of the present invention will be described in detail with reference to the drawings.
(1) First Embodiment
In FIG. 1, reference numeral 100 denotes an image processing system according to the first embodiment as a whole. The image processing system 100 includes an image processing device 110 and a plurality of cameras 120 (imaging devices), and automatically adjusts, for each camera 120, the parameters related to the specification of objects by that camera 120. In the present embodiment, the camera 120 is described as an example; however, the invention is not limited to the camera 120 and can also be applied to other measurement devices such as a distance sensor, a laser radar, and an infrared tag.
The parameters related to the specification of an object by the camera 120 (parameter types) are not particularly limited, and include the internal parameters of the camera 120, the external parameters of the camera 120, threshold parameters used for object detection, and threshold parameters related to applications such as object tracking. In the following, object detection is described as an example of object specification, but object tracking may be used instead.
The image processing device 110 is an example of a measurement information processing device such as a computer, and processes digital image data (an example of measurement information) acquired by the cameras 120. The image processing device 110 is a notebook personal computer, a server device, or the like, and includes a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), an HDD (Hard Disk Drive), a communication unit, and the like (not shown).

The functions of the image processing device 110 (the image acquisition unit 130, the scene information extraction unit 140, the parameter optimization unit 150, and the like) may be realized, for example, by the CPU reading a program stored in the ROM into the RAM and executing it (software), by hardware such as a dedicated circuit, or by a combination of software and hardware. Some of the functions of the image processing device 110 may be realized by another computer capable of communicating with the image processing device 110 (which may be a camera 120 or another computer not shown).
A storage unit such as the HDD stores scene information 160, camera arrangement information 170, and the like. The scene information 160 includes the digital image data acquired by a camera 120, the imaging time at which the digital image data was acquired (an example of time information), and a true value. Details of the scene information 160 will be described with reference to FIGS. 4 and 5. The camera arrangement information 170 is information from which the positional relationship between the cameras 120 can be specified, and includes position information indicating the position of each camera 120. Any information from which the positional relationship between the cameras 120 can be specified may be adopted as the position information; it may be longitude and latitude information, or coordinate information on a floor map showing the floor on which the cameras 120 are installed. The camera arrangement information 170 may also include camera direction information indicating the orientation of each camera 120. Alternatively, the camera direction information may be estimated from the digital image data as needed rather than being included in the camera arrangement information 170.
The image acquisition unit 130 acquires digital image data from the cameras 120. The scene information extraction unit 140 generates the scene information 160 from the digital image data acquired by the image acquisition unit 130 using image feature amounts. The parameter optimization unit 150 includes a scene information changing unit 151, a learning scene selecting unit 152, a parameter calculating unit 153, an output control unit 154, and an input control unit 155, and optimizes the parameters of each camera 120.

The scene information changing unit 151 changes the scene information 160 of each camera 120 on the basis of the scene information 160 of the plurality of cameras 120 and the camera arrangement information 170. The learning scene selecting unit 152 selects, as learning scenes, the scene information 160 to be used for optimizing the parameters of a camera 120 from the scene information 160 changed by the scene information changing unit 151. The parameter calculating unit 153 optimizes the parameters of each camera 120 using the learning scenes selected by the learning scene selecting unit 152. The output control unit 154 outputs information related to the parameters (setting of the optimized parameters to a camera 120, information indicating that the object detection accuracy of a camera 120 is low, and the like) to an output device 180 (for example, a camera 120, another computer, or a printer). The input control unit 155 inputs information related to the parameters (for example, true values specified by the user).

In the following, the object detection target is described as a person; however, the target is not limited to a person and may be another movable object such as an animal or a robot. Also, in the following, the digital image data that the image acquisition unit 130 acquires from a camera 120 is referred to as a captured image.
FIG. 2 is a diagram illustrating an example of a configuration of the scene information extraction unit 140.
The scene information extraction unit 140 includes a scene analysis unit 210 and a scene reliability calculation unit 220. The scene analysis unit 210 acquires analysis information by executing analysis processing using image feature amounts and the like on the captured image acquired by the image acquisition unit 130. The scene reliability calculation unit 220 calculates the reliability of the analysis information acquired by the scene analysis unit 210, generates scene analysis information including the analysis information and the reliability, and generates (creates) the scene information 160 including the captured image and the scene analysis information. The analysis information will be described later with reference to FIG. 4, and the scene analysis information and the scene information 160 with reference to FIG. 5.
FIG. 3 is a diagram illustrating an example of a configuration of the scene information changing unit 151.

The scene information changing unit 151 includes a scene information combining unit 310 and a scene analysis information correcting unit 320. The scene information combining unit 310 combines the scene information 160 of the cameras 120 on the basis of the camera arrangement information 170. The scene analysis information correcting unit 320 compares the scene information 160 combined by the scene information combining unit 310 and changes the scene information 160 (for example, the scene analysis information) of each camera 120. The processing of the scene information changing unit 151 will be described later with reference to FIG. 6.
FIG. 4 is a diagram for explaining the analysis information.
The captured images 410 and 420 shown in FIG. 4 are captured images acquired at different times by the image acquisition unit 130 in FIG. 1. Moving bodies 411, 412, 421, and 422 indicate persons to be detected by the camera 120. A moving body is an object that does not exist within the imaging range of the camera 120 at the start of measurement but exists within the imaging range after measurement starts. Fixed objects 413 and 423 indicate installed objects that are not detection targets of the camera 120.

The moving body region images 430 and 440 are images showing the moving body regions 431, 432, and 441, which are the regions of the moving bodies 411, 412, 421, and 422 in the captured images 410 and 420.
The analysis information 450 indicates the information acquired by the scene analysis unit 210. The analysis information 450 includes the true value and the imaging time. At least one of the congestion degree, the number of moving body regions, and the moving body region details may or may not be included in the analysis information 450.
The true value is the number of detection targets in a captured image. The true value is an estimate of the detection targets; it may be the result of the camera 120 executing object detection using initial parameters, or the number of moving body regions 431, 432, and 441 present in the moving body region images 430 and 440, and need not be an exact value.

The imaging time is the time at which the captured image was acquired, obtained from time information built into the camera 120, the image processing device 110, or the like.
The congestion degree indicates the density of moving bodies within the measurement range. For example, the congestion degree can be obtained by calculating the ratio of the number of pixels in the difference regions (the moving body regions 431, 432, and 441) to the resolution of the difference images (the moving body region images 430 and 440). The congestion degree may be used as-is over a value range of 0 to 100, or a downsampled value range such as 0 to 5 may be used.
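By way of a non-limiting illustration, the pixel-ratio calculation just described might be sketched as follows in Python; the function name and the optional downsampling argument are assumptions made for this example.

```python
from typing import Optional
import numpy as np

def congestion_degree(motion_mask: np.ndarray, levels: Optional[int] = None) -> float:
    """Ratio of moving-body pixels to all pixels, as a 0-100 value.

    motion_mask: boolean array where True marks difference (moving-body) pixels.
    levels: if given, downsample the 0-100 range to 0..levels (e.g. levels=5).
    """
    ratio = 100.0 * np.count_nonzero(motion_mask) / motion_mask.size
    if levels is not None:
        return round(ratio * levels / 100.0)
    return ratio
```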
The number of moving body regions is the number of moving body regions present in a moving body region image. For example, the number of moving body regions 431, 432, and 441 present in the moving body region images 430 and 440 is used as-is.

The moving body region details include information such as the number of people, position, moving direction, and moving speed.
The number of people is the number of moving bodies expected to exist in a moving body region. For example, it can be calculated by holding in advance the correlation between the size of the moving body regions 431, 432, and 441 and the number of moving bodies 411, 412, 421, and 422 within them. For example, when a region is small, like the moving body regions 431 and 432, the number of people is set to 1; when a region is large, like the moving body region 441, the number of people is set to 1, or to 2 or more.

The position indicates the position of a moving body region in the captured image. For example, the position is calculated using the coordinate axes 460, treating the centroid positions (x, y) of the moving body regions 431, 432, and 441 as the position information.

The moving direction indicates the direction of movement of a moving body, and the moving speed indicates its speed of movement.
The moving direction and moving speed are obtained by applying a general image processing technique such as optical flow. The moving direction may be classified from the optical flow information into eight directions, as in the moving body direction 470. As the moving speed, a speed in pixel units (pixels/second) may be used, or a rough actual value (km/h) obtained by taking into account the size of the moving body regions 431, 432, and 441 and the installation position of the camera 120.
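As a sketch of the eight-direction classification mentioned above, an average optical-flow vector could be binned as follows; the binning convention shown here (bin 0 pointing right, bins counted counterclockwise) is an assumption of this example, not a convention defined by the disclosure.

```python
import math

def direction_bin(dx: float, dy: float) -> int:
    """Classify a flow vector into one of 8 directions:
    0 = right, 2 = up, 4 = left, 6 = down, diagonals in between."""
    angle = math.atan2(-dy, dx)              # negate dy: image y-axis points down
    if angle < 0:
        angle += 2.0 * math.pi               # map to [0, 2*pi)
    return int((angle + math.pi / 8) // (math.pi / 4)) % 8
```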
When calculating the congestion degree, the number of moving body regions, and the number of people in a moving body region, a method may be used in which correlations with the resolution in several patterns of installation environments are learned in advance, and the correlation closest to the environment in which the camera 120 is installed is used by means of scene analysis techniques or the like; the method is not particularly limited. The analysis information 450 acquired by the scene analysis unit 210 includes at least the true value and the imaging time; information other than that shown as the analysis information 450 can also be added as long as it can easily be acquired by image processing techniques, and is not particularly limited.

The coordinate axes 460 indicate the coordinate axes of the captured image. The moving body direction 470 indicates the direction of a moving body in the captured image.
FIG. 5 is a diagram for explaining the scene analysis information and the scene information. As shown in FIG. 5, the scene analysis information 510 is the analysis information of the captured image 410 with reliability information added. The scene information 160 is information combining the captured image 410 and the scene analysis information 510.

The reliability is an example of reliability information indicating how trustworthy the true value is within the measurement range of the camera 120; it indicates the reliability of the analysis information (for example, the true value) with respect to the captured image. Selecting scene information with high reliability (for example, a captured image and its true value) as a learning scene improves the accuracy of the parameter optimization described later.
In this way, the scene information (for example, the scene information 160) includes the measurement information (for example, the captured image) acquired by the measurement device (for example, the camera 120), the time information (for example, the imaging time) at which the measurement information was acquired, and the specification information (for example, the true value) related to the specification of an object within the measurement range of the measurement device. The scene information also includes at least one of: reliability information (for example, the reliability); congestion degree information indicating the density of objects within the measurement range (for example, the congestion degree); the number of object regions present in the measurement range (for example, the number of moving body regions); position information of the object regions in the measurement range (for example, the positions of the moving body regions); moving direction information of the object regions (for example, the moving directions of the moving body regions); and moving speed information of the object regions (for example, the moving speeds of the moving body regions).
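To summarize the structure just described, the scene information could be represented, purely for illustration, as the following record; the field names are assumptions of this sketch, not terms defined by the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple
import numpy as np

@dataclass
class MovingRegion:
    # Details of one moving body region (illustrative field names).
    person_count: int                  # estimated number of people in the region
    centroid: Tuple[float, float]      # (x, y) position in image coordinates
    direction: int                     # movement direction, e.g. one of 8 bins
    speed: float                       # movement speed (pixels/s or km/h)

@dataclass
class SceneInfo:
    image: np.ndarray                  # captured image (measurement information)
    capture_time: float                # imaging time (time information)
    true_value: int                    # estimated number of detection targets
    reliability: float                 # reliability of the analysis information
    congestion: Optional[float] = None # density of moving bodies, e.g. 0-100
    num_regions: Optional[int] = None  # number of moving body regions
    regions: List[MovingRegion] = field(default_factory=list)
```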
FIG. 6 is a diagram illustrating an example of a processing procedure in the image processing device 110.

In step S601, the scene analysis unit 210 acquires the analysis information and the imaging time from a captured image.

The scene analysis unit 210 first generates a moving body region image from the captured image by image processing. A generally known method can be used to generate the moving body region image, and the method is not particularly limited.
For example, the scene analysis unit 210 uses the background subtraction method to calculate the difference between the captured image and a background image, held in advance, in which no detection target is present, and determines regions where the difference amount is larger than a threshold to be moving body regions. Here, an example of roughly obtaining the moving body regions is shown; however, when the image resolution of the camera 120 is sufficient, the exact region in which a moving body (person) exists can also be obtained from the captured image in pixel units. Instead of simple background subtraction, the moving body region image may also be generated using a high-accuracy background subtraction method that exploits background updating, an inter-frame difference method that takes the difference between successive captured images, and so on.
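A minimal sketch of this background subtraction step, using OpenCV on grayscale images, is shown below; the threshold value of 30 is an assumed example, not a value given in this disclosure.

```python
import cv2
import numpy as np

def moving_regions(frame: np.ndarray, background: np.ndarray, thresh: int = 30):
    """Return a binary moving-body mask and per-region stats via background
    subtraction. frame and background are grayscale images of the same size."""
    diff = cv2.absdiff(frame, background)                # per-pixel difference
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Label connected difference regions as moving body regions.
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return mask, stats[1:], centroids[1:]                # index 0 is the background
```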
In step S602, the scene reliability calculation unit 220 calculates the reliability of the analysis information and generates the scene information 160.

The scene reliability calculation unit 220 calculates the reliability of the analysis information using, for example, some or all of the analysis information extracted by the scene analysis unit 210. The analysis information used when calculating the reliability is not particularly limited.
For example, the scene reliability calculation unit 220 judges that the smaller the difference region between the captured image and the background image, the less likely noise is to enter when extracting information, and sets the reliability higher. Also, for example, when the analysis information indicates that the congestion degree is small or the number of moving body regions is small, the scene reliability calculation unit 220 may set the reliability linearly higher within a predesignated value range, or may set a lower reliability for analysis information in which the sum of the numbers of people in the moving body regions differs from the true value.

When camera information (camera installation information and camera characteristic information) can be acquired, the scene reliability calculation unit 220 may also use the camera information in calculating the reliability. The camera installation information is, for example, the installation height, installation angle, and installation position of the camera 120. The camera characteristic information consists of characteristics of the camera 120 that can affect the accuracy of object detection, for example the image resolution, the frame rate, and the type of the camera 120 (normal, directly overhead, fisheye, and so on).
One method of calculating the reliability using camera information is to take into account the relationship between the installation position of the camera 120 and the accuracy with which analysis information can be obtained by image processing. For example, based on the fact that image processing accuracy generally increases as the distance between the camera 120 and the measurement target decreases or as the depression angle of the camera 120 increases, the scene reliability calculation unit 220 estimates the distance between the camera 120 and the measured person from the camera installation information by simulation or the like, and sets a higher reliability for analysis information extracted from a camera 120 expected to be close to the person, a camera 120 that can image the person from directly above, a high-resolution camera 120, and so on.
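As one hedged sketch of such a heuristic, the reliability might be scored from the estimated subject distance, depression angle, and resolution; the equal weights and the 20 m distance cutoff below are illustrative assumptions, not values given in this disclosure.

```python
def reliability_from_camera(distance_to_person_m: float,
                            depression_angle_deg: float,
                            resolution_px: int) -> float:
    """Heuristic reliability in [0, 1]: closer subjects, steeper viewing
    angles, and higher resolutions are trusted more (illustrative weights)."""
    near = max(0.0, 1.0 - distance_to_person_m / 20.0)   # 0 beyond ~20 m
    steep = min(1.0, depression_angle_deg / 90.0)        # 1 when directly above
    sharp = min(1.0, resolution_px / (1920 * 1080))      # relative to full HD
    return (near + steep + sharp) / 3.0
```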
The scene reliability calculation unit 220 may acquire the camera information by referring to a database of camera information held in advance, or by roughly estimating the installation position of the camera 120 from image feature amounts in the captured image.

The reliability may also be calculated by combining the analysis information and the camera information, and the calculation method is not particularly limited. For example, the scene reliability calculation unit 220 may determine the final reliability from the sum of the reliabilities calculated from the analysis information and from the camera information. Also, for example, the scene reliability calculation unit 220 may combine analysis information such as the moving speed of a moving body region with camera information such as the imaging frame rate, and set a lower reliability for analysis information indicating a fast-moving body region at a camera 120 with a low frame rate. Also, for example, the scene reliability calculation unit 220 may set a higher reliability for analysis information indicating a low congestion degree at a camera 120 that can image from directly above.

The reliability may also be calculated on the basis of camera installation environment information indicating installation environments of the camera 120 in which a decrease in object detection accuracy is expected, and the calculation method is not particularly limited. For example, the scene reliability calculation unit 220 judges that image processing accuracy decreases when the imaging range contains an environment exposed to direct sunlight, an environment with a mirror, or an environment with something constantly in motion such as an escalator, and sets the reliability lower.
In this way, the scene information extraction unit (the scene information extraction unit 140 and the scene reliability calculation unit 220) calculates the reliability information (for example, the reliability) on the basis of the measurement information acquired by the measurement device (the captured image, analysis information extracted from the captured image, and so on), the installation information of the measurement device, the installation environment information of the measurement device, or the characteristic information of the measurement device (that is, using at least one of the measurement information of the camera 120, the camera installation information, the camera installation environment information, and the camera characteristic information). With this configuration, the reliability information can be calculated appropriately.
In step S603, the scene information combining unit 310 combines the scene information 160 between the cameras 120 on the basis of the camera arrangement information and the imaging times.
The scene information combining unit 310 will be described with reference to FIG. 7. FIG. 7(A) shows an example of a floor map (two-dimensional map 710) showing the floor (place) where the cameras 120 are installed. In the two-dimensional map 710, the objects 720 and 730 indicate the positions and imaging directions of cameras 120. The camera arrangement information 170 is not particularly limited as long as the positional relationship between the cameras 120 in the measurement range can be grasped from it, as in FIG. 7(A). The imaging direction may be, for example, a direction roughly estimated from the camera arrangement information and the image feature amounts of the captured images. The two-dimensional map 710 can be generated on the basis of the camera arrangement information 170. In the following, for convenience of explanation, the camera 120 indicated by the object 720 is referred to as camera A, and the camera 120 indicated by the object 730 as camera B.

FIG. 7(B) shows an example of combinations when combining the scene information 160 of camera A and the scene information 160 of camera B.

One method of associating the scene information 160 of camera A and camera B, as shown in FIG. 7(B), is to use the time required for a person imaged by camera A to move, at a standard walking speed, to the area imaged by camera B, derived from the camera arrangement information 170.
For example, when the scene information combining unit 310 calculates that the travel time from camera A to camera B is 10 seconds, it associates with the scene information 160 of camera A the scene information 160 of camera B whose imaging time is 10 seconds before or after that of camera A, or whose imaging time is closest to that value.
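This time-offset matching might be sketched as follows, reusing the SceneInfo record assumed earlier; travel_time_s would be derived from the camera arrangement information and a standard walking speed, and the one-second tolerance is an assumed example value.

```python
def associate_scenes(scenes_a, scenes_b, travel_time_s: float, tol_s: float = 1.0):
    """Pair each scene of camera A with the camera-B scene whose imaging
    time is closest to (time_A + travel_time_s), within tol_s seconds."""
    pairs = []
    for a in scenes_a:
        target = a.capture_time + travel_time_s
        b = min(scenes_b, key=lambda s: abs(s.capture_time - target), default=None)
        if b is not None and abs(b.capture_time - target) <= tol_s:
            pairs.append((a, b))
    return pairs
```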
The method of associating the scene information 160 of different cameras 120 is not limited to comparing a two-dimensional map 710 of the measurement range as in FIG. 7(A), the arrangement relationship of the cameras 120, and the imaging times of the scene information 160, and is not particularly limited.
In step S604, the scene analysis information correcting unit 320 corrects the scene information 160.

The scene analysis information correcting unit 320 changes the scene information 160 (for example, the scene analysis information) of each camera 120 by comparing the scene information 160 associated by the scene information combining unit 310. The scene information 160 to be compared is not particularly limited. The candidates for the scene analysis information to be corrected are also not particularly limited as long as they include the true value.
For example, the scene analysis information correcting unit 320 may compare reliabilities and replace the true value of the scene analysis information with low reliability with the true value of the scene analysis information with high reliability. Also, for example, when the reliability is lower than a threshold, the scene analysis information correcting unit 320 may discard the associated scene information 160 so that it is not used as a learning scene.

Also, for example, when the reliabilities are the same but the true values differ, the scene analysis information correcting unit 320 may compare the congestion degrees and/or the numbers of moving body regions, judge that the smaller value contains less noise, and replace the true value on the larger-value side with the true value on the smaller-value side. Also, for example, when the congestion degrees and/or the numbers of moving body regions differ too greatly, the scene analysis information correcting unit 320 may discard the associated scene information 160.
Also, for example, the scene analysis information correcting unit 320 may make use of the camera arrangement information 170, the detailed scene analysis information of the moving body regions, and the camera information. For example, taking the moving direction and the camera arrangement information into consideration, when the association is consistent with an actual person's movement path, the scene analysis information correcting unit 320 may judge that the association of the scene information 160 itself is highly reliable, set both true values to the true value with the higher reliability, and set the reliability higher.

Regarding the consideration of the moving direction and the camera arrangement information 170, taking FIG. 7 as an example: when the moving direction in the scene information 160 of the camera 120 of object 720 is leftward and the moving direction in the associated scene information 160 of the camera 120 of object 730 from 10 seconds later is downward, it can be judged to be consistent with a person's movement path; if an association has also been made with the scene information 160 of the camera 120 of object 730 from 10 seconds earlier, that associated scene information 160 may be discarded.

Also, for example, the scene analysis information correcting unit 320 may compare all of the scene analysis information and judge the association of the scene information 160 to be highly reliable when the total of the differences between the values, or a value calculated using predetermined weighting coefficients, is equal to or less than a threshold. Also, for example, when scene information 160 considered to have extremely high reliability exists in view of the camera installation information and camera characteristic information, the true values of the scene information 160 associated by the scene information combining unit 310 may all be replaced with the true value of that extremely reliable scene information 160.
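Drawing on the correction rules above, a hedged sketch for one associated pair might look as follows; the minimum-reliability threshold is an assumed example value.

```python
def correct_pair(a, b, min_reliability: float = 0.2):
    """Correct an associated pair of SceneInfo records in place.
    Returns the pair, or None if it should be discarded."""
    if a.reliability < min_reliability and b.reliability < min_reliability:
        return None                      # too unreliable to use as a learning scene
    if a.reliability >= b.reliability:
        b.true_value = a.true_value      # overwrite the less reliable true value
    else:
        a.true_value = b.true_value
    return a, b
```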
In this way, the parameter optimization unit (for example, the parameter optimization unit 150, the scene information changing unit 151, and the scene analysis information correcting unit 320) corrects the first scene information (for example, the scene information 160, scene analysis information, reliability, and true value of camera A) and/or the second scene information (for example, the scene information 160, scene analysis information, reliability, and true value of camera B) on the basis of the reliability information of the first scene information (for example, the reliability of camera A) and the reliability information of the second scene information (for example, the reliability of camera B). With this configuration, for example, the analysis information of a captured image acquired by one camera 120 can be complemented by the analysis information of a captured image acquired by the other camera 120, so scene information 160 with more accurate reliability information can be obtained efficiently. The parameter optimization unit also discards the first scene information and/or the second scene information on the basis of the reliability information of the first scene information and the reliability information of the second scene information. With this configuration, for example, scene information 160 judged unnecessary for parameter optimization is discarded, so the capacity of the HDD can be reduced.
In step S605, the learning scene selecting unit 152 selects learning scenes for each camera 120.

The learning scene selecting unit 152 selects, from the scene information 160 output by the scene information changing unit 151, learning scenes for optimizing the parameters of each camera 120.
As the selection method, a method of selecting using the reliability can be adopted as appropriate. For example, the learning scene selecting unit 152 may select, as learning scenes, a preset number of pieces of scene information 160 with high reliability. Also, for example, to increase the variation of the learning scenes, the learning scene selecting unit 152 may classify the scene information 160 by true value in advance and select, as learning scenes, a threshold number of pieces of scene information 160 with high reliability for each true value.

The learning scene selecting unit 152 may also set scenes using the imaging time, the congestion degree, moving body region information, and so on. For example, the learning scene selecting unit 152 may classify the scene information 160 in advance into four scenes, such as morning, noon, evening, and night, according to the imaging time, and select the scene information 160 corresponding to each scene. Also, for example, the learning scene selecting unit 152 may classify the scene information 160 in advance into two scenes, congested and not congested, according to the congestion degree, and select the scene information 160 corresponding to each scene. Scenes may also be set by combining the imaging time, the congestion degree, and moving body region information, with the scene information 160 corresponding to each scene then selected.
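One possible sketch of such selection, assuming the SceneInfo record from earlier: bucket the scene information by true value (or any other key, such as a time-of-day label) and keep the most reliable entries of each bucket, so that the learning scenes stay varied.

```python
from collections import defaultdict

def select_learning_scenes(scenes, per_bucket: int = 10,
                           key=lambda s: s.true_value):
    """Group scenes by a bucket key and keep the per_bucket most
    reliable scenes of each bucket."""
    buckets = defaultdict(list)
    for s in scenes:
        buckets[key(s)].append(s)
    selected = []
    for group in buckets.values():
        group.sort(key=lambda s: s.reliability, reverse=True)
        selected.extend(group[:per_bucket])
    return selected
```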
In this way, the parameter optimization unit (for example, the parameter optimization unit 150 and the learning scene selecting unit 152) selects, from the scene information generated by the scene information extraction unit (for example, the scene information extraction unit 140), the scene information to be used for optimizing the parameters, on the basis of the reliability information of the scene information extracted by the scene information extraction unit (for example, the reliability of the scene information 160). With this configuration, for example, scene information 160 with high reliability can be used as learning scenes, so the accuracy of object detection by the camera 120 can be improved.
Subsequently, in steps S606 and S607, the parameter calculating unit 153 optimizes the parameters related to object detection accuracy for each camera 120 using the learning scenes selected by the learning scene selecting unit 152. The parameter optimization method is not particularly limited; for example, a general algorithm using the least squares error or the like may be used.

In step S606, the parameter calculating unit 153 executes object detection on the learning scenes with the initial parameters. For example, the parameter calculating unit 153 extracts the captured images of the selected learning scenes and the corresponding true values, and then executes object detection on each captured image using the initial parameters.
In step S607, the parameter calculating unit 153 calculates the parameters that minimize the difference between the measurement results and the true values of the learning scenes.

For example, the parameter calculating unit 153 obtains the difference between the result of the object detection in step S606 (the object detection result) and the true value. The parameter calculating unit 153 then changes the initial parameters within a preset value range, or changes them at random.
The parameter calculating unit 153 then returns to step S606, executes object detection in the same way, and obtains the difference from the true values in step S607. These processes are repeated a fixed number of times, and the parameters giving the smallest difference are calculated as the final parameters.
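The detect-compare-perturb loop of steps S606 and S607 might be sketched as a simple random search, as below; here detect is an assumed callable that runs object detection on an image with a given parameter set and returns the number of detected objects, and bounds maps each parameter name to its preset value range.

```python
import random

def optimize_params(learning_scenes, detect, init_params: dict,
                    bounds: dict, iterations: int = 100):
    """Random search minimizing the total |detections - true value| error
    over the learning scenes."""
    def error(params):
        return sum(abs(detect(s.image, params) - s.true_value)
                   for s in learning_scenes)

    best_params, best_err = dict(init_params), error(init_params)
    for _ in range(iterations):
        # Perturb the searchable parameters within their preset ranges.
        cand = {**init_params,
                **{k: random.uniform(*bounds[k]) for k in bounds}}
        err = error(cand)
        if err < best_err:
            best_params, best_err = cand, err
    return best_params
```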
Candidates for the parameters to be optimized may include not only the threshold parameters for object detection but also threshold parameters related to applications such as object tracking. For example, the parameter calculating unit 153 selects in advance, as learning scenes, scenes in which the same person is highly likely to be walking within the measurement range, based on the reliability of the scene information 160, the imaging time, the congestion state, and so on, and optimizes the parameters related to object tracking by changing them to threshold parameters with which that person can always be tracked.
In step S608, the output control unit 154 performs output processing. For example, the output control unit 154 sets the parameters calculated by the parameter calculating unit 153 in the camera 120. With this configuration, the parameters are automatically optimized and set in the camera 120, so the time required for parameter setting can be reduced. Furthermore, labor costs can be reduced.
In addition, the parameter calculating unit 153 does not necessarily estimate one set of parameters for the selected learning scenes, and may estimate a plurality of sets of parameters according to the scene information 160. For example, the learning scenes may be classified in advance into four scenes, such as morning, noon, evening, and night, according to the imaging time, optimal parameters estimated for each scene, and the parameters to be used switched according to the imaging time during actual object detection. Alternatively, optimal parameters may be estimated for scenes classified according to the congestion degree; before object detection, only part of the image processing of the scene information extraction unit 140 is executed to calculate the congestion degree, and the parameters are switched according to that value.

In this way, the parameter optimization unit (for example, the parameter optimization unit 150) classifies the scene information generated by the scene information extraction unit (for example, the scene information extraction unit 140) into a plurality of scenes (for example, the four scenes of morning, noon, evening, and night; the two scenes of congested and not congested; or the eight scenes combining these), optimizes the parameters for each classified scene, and sets the parameters of the measurement device optimized for each scene in the measurement device. With this configuration, parameters can be set automatically for a variety of scenes.
In the output processing, the output control unit 154 may also indicate an installation plan for the cameras 120 by analyzing the learning scenes selected by the learning scene selecting unit 152 and their reliabilities. For example, when the reliabilities of all the selected learning scenes are equal to or less than a threshold, the output control unit 154 judges that optimal parameters cannot be estimated with the current arrangement of the cameras 120, displays on the output device 180 a message to the effect that an additional camera 120 needs to be installed near the camera 120 whose learning scenes have lower-than-prescribed reliability (for example, the camera 120 with the least reliable learning scenes), and presents this to the user, thereby proposing a camera 120 arrangement plan for realizing highly accurate object detection.

In this way, the parameter optimization unit (for example, the parameter optimization unit 150 and the output control unit 154) determines whether the reliability information of each piece of scene information of a given measurement device after correction (for example, the reliability of the scene information 160 of a camera 120) is lower than a threshold, and when it determines that a prescribed proportion or more is lower, outputs information indicating that the object specification accuracy of that measurement device is low (for example, a message to the effect that an additional camera 120 needs to be installed, or that optimal parameters cannot be estimated with the current arrangement of the cameras 120).
 なお、本実施の形態では、カメラ120のHDDの空き容量、画像処理装置110のHDDの空き容量、カメラ120の処理スペック、または画像処理装置110の処理スペックを考慮して一連の処理を実行してもよい。換言するならば、シーン情報抽出部(シーン情報抽出部140)は、計測装置の処理スペック情報(例えば、カメラ120の処理スペック)、計測情報処理装置の処理スペック情報(例えば、画像処理装置110の処理スペック)、または計測情報処理装置のデータ容量情報(例えば、画像処理装置110のHDDの空き容量)に基づいて、抽出したシーン情報を記憶部に記憶してもよい。 In the present embodiment, a series of processes is executed in consideration of the free space of the HDD of the camera 120, the free space of the HDD of the image processing device 110, the processing specifications of the camera 120, or the processing specifications of the image processing device 110. May be. In other words, the scene information extraction unit (scene information extraction unit 140) processes the processing specification information of the measurement device (for example, the processing specification of the camera 120), the processing specification information of the measurement information processing device (for example, of the image processing device 110). The extracted scene information may be stored in the storage unit based on the processing specifications) or the data capacity information of the measurement information processing apparatus (for example, the free space of the HDD of the image processing apparatus 110).
 For example, when generating the scene information 160, the scene information extraction unit 140 may discard all scene information 160 whose reliability is equal to or less than a threshold value. With such a configuration, the required HDD capacity can be reduced. Also, for example, instead of outputting all the scene information 160 generated for each captured image, the scene information extraction unit 140 may accumulate a plurality of pieces of scene information 160 and output representative scene information 160. For example, the scene information 160 corresponding to ten consecutive captured images may be held; the fifth (middle) captured image and the average of each piece of scene analysis information may then be taken as new scene analysis information, and data summarizing these may be used as the final scene information 160. This configuration also reduces the required HDD capacity. Further, for example, when selecting learning scenes from the scene information 160 output by the scene information change unit 151, the learning scene selection unit 152 may limit the number of learning scenes to be held in consideration of the parameter optimization algorithm. According to such a configuration, the processing cost can be kept from becoming larger than necessary.
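 The aggregation of ten consecutive records into one representative record might look like the following sketch; the field names and the use of a simple mean over the congestion value are assumptions for illustration.

```python
def summarize_scene_infos(buffer):
    # buffer: scene information records for 10 consecutive captured images.
    assert len(buffer) == 10
    middle = buffer[4]  # the 5th captured image, in the middle of the run
    return {
        "image": middle["image"],  # keep only the middle frame
        "time": middle["time"],
        # The averaged scene analysis information becomes the new analysis value.
        "congestion": sum(b["congestion"] for b in buffer) / len(buffer),
    }
```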
 The present embodiment has described processing that optimizes parameters without manual adjustment work; however, in order to improve the accuracy of parameter optimization, the scene information 160 generated by the scene information extraction unit 140 may be corrected according to information input from the outside. For example, when the learning scenes output by the learning scene selection unit 152 include scene information 160 whose reliability is less than a threshold value, the output control unit 154 may display the captured image of that scene information 160 on the output device 180 or the like, and the input control unit 155 may add a manual step in which the true value is entered by hand via a GUI or the like. According to such a configuration, accurate learning data can be obtained and the parameter estimation accuracy can be improved. Furthermore, in a system that performs human flow analysis or the like using the object detection results, when the user is dissatisfied with the accuracy of the human flow analysis, true values can be entered for learning scenes with low reliability, which also improves the accuracy of the system.
 As described above, the scene information extraction unit 140 extracts, based on the measurement information (for example, digital image data) acquired by a measurement device (for example, the camera 120), time information indicating when the measurement information was acquired (for example, the imaging time) and identification information relating to the identification of an object within the measurement range of the measurement device (for example, a true value; a true value and an imaging time; or a true value and a degree of congestion), and generates scene information including the measurement information, the time information, and the identification information. The parameter optimization unit 150 then identifies, based on the time information of the scene information generated by the scene information extraction unit 140 and the arrangement information of the measurement devices (for example, camera position information), second scene information acquired by a second measurement device that is associated with first scene information acquired by a first measurement device (for example, combining the scene information 160 of camera A with the scene information 160 of camera B), and optimizes parameters relating to the identification of objects in the first measurement device based on the first scene information and the second scene information.
 According to such a configuration, in optimizing the parameters of the first measurement device, taking into account the second scene information associated with the first scene information in addition to the first scene information itself makes it possible to obtain suitable scene information for parameter optimization and to improve the accuracy of the parameters.
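 To make the cross-camera association concrete, the following hedged sketch pairs scene information by imaging time and by camera adjacency derived from the arrangement information; the time window, the adjacency set, and the record fields are illustrative assumptions.

```python
def associate_scene_info(first_infos, other_infos, adjacent_ids, window_s=1.0):
    # first_infos: scene information of the first camera (e.g. camera A).
    # other_infos: scene information of other cameras (e.g. camera B).
    # adjacent_ids: IDs of cameras near the first camera, taken from the
    # camera arrangement information. "time" is assumed to be epoch seconds.
    pairs = []
    for a in first_infos:
        for b in other_infos:
            if b["camera_id"] in adjacent_ids and abs(a["time"] - b["time"]) <= window_s:
                pairs.append((a, b))  # combined scene information used for optimization
    return pairs
```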
 According to the present embodiment, diverse learning scenes with highly accurate true values can be selected for each measurement device.
(2) Second Embodiment
 The parameters of the camera 120 need to be updated when the arrangement (layout) of fixed objects changes, when the camera 120 deteriorates, when a camera 120 is added or replaced, and so on. A configuration in which the parameters of the camera 120 are constantly updated is conceivable; however, constantly updating the parameters means that processing is performed even when no update is needed, which increases the load on the image processing device. The present embodiment therefore describes a configuration for updating the parameters of the camera 120 at appropriate times. In the present embodiment, the same reference numerals are used for the same components as in the first embodiment, and their description is omitted as appropriate.
 FIG. 8 is a diagram showing an example of a configuration of the image processing system 800 according to the present embodiment. The image processing system 800 includes a plurality of cameras 120 and an image processing device 810.
 The image processing device 810 is a device that, when it determines that the parameters need to be optimized while detecting objects with a camera 120, selects learning scenes based on the new scene information 860 of that camera and the other cameras and on the camera arrangement information 170, and optimizes the parameters.
 The object detection unit 820 detects objects in the captured images acquired by the image acquisition unit 130, using the current parameters.
 The parameter optimization determination unit 830 determines whether to optimize the parameters using the result of object detection by the object detection unit 820 (the object detection result). The object detection result is the result of performing object detection on a captured image, such as the number of detected objects and position information of objects in the captured image (for example, the positions of the moving-body regions in the scene analysis information).
 The scene information extraction unit 840 largely inherits the functions of the scene information extraction unit 140 of the first embodiment.
 The learning scene change unit 851 included in the parameter optimization unit 850 changes the learning scenes by comparing the scene information newly acquired from the scene information change unit 151 (the new scene information 860) with the scene information previously held as learning scenes (the stored scene information 870). The parameter optimization determination unit 830 and the learning scene change unit 851 are described below.
 The parameter optimization determination unit 830 determines, using the object detection results output by the object detection unit 820, whether the parameters need to be optimized; when they do, it outputs a signal (command) to the scene information extraction unit 840 and controls the scene information extraction unit 840 so that new scene information 860 is generated from the captured images acquired by the image acquisition unit 130.
 The method for determining whether parameter optimization is necessary is not particularly limited. For example, the parameter optimization determination unit 830 may refer to one day's worth of object detection results and determine that parameter optimization is necessary when objects are always detected, or never detected at all. Also, for example, the parameter optimization determination unit 830 may specify in advance a threshold value such as a maximum number of detected objects in consideration of the measurement range, and determine that parameter optimization is necessary when the number of detected objects exceeds the threshold value.
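 As a hedged illustration of these two triggers (the counts, field names, and thresholds are assumptions, not values from the specification):

```python
def needs_optimization(daily_counts, max_expected=None):
    # daily_counts: per-image object detection counts collected over one day.
    if not daily_counts:
        return False
    # Trigger 1: objects are always detected, or never detected at all.
    if all(c > 0 for c in daily_counts) or all(c == 0 for c in daily_counts):
        return True
    # Trigger 2: the detection count exceeds a threshold chosen in advance
    # from the measurement range (e.g. a maximum plausible object count).
    return max_expected is not None and max(daily_counts) > max_expected
```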
 Information other than the object detection results may also be used. For example, the parameter optimization determination unit 830 may hold the latest layout information indicating the arrangement of the cameras 120 and determine that the parameters should be optimized when the layout information changes significantly. Also, for example, the parameter optimization determination unit 830 may compare the background image captured when the camera 120 was first installed with the current background image, and determine that the parameters should be optimized when the difference is equal to or greater than a threshold value. In this way, information about the camera 120 from which a decline in object detection accuracy can be anticipated, camera installation environment information, or information that allows the difference between the current captured image and the captured image at the time the parameters were last optimized to be grasped by simple image processing may be employed as appropriate. Also, for example, the parameter optimization determination unit 830 may determine that parameter optimization is necessary when a predetermined time has elapsed since the camera 120 was installed, based on information that can identify the date and time of installation (camera installation time information). Also, for example, the parameter optimization determination unit 830 may determine that parameter optimization is necessary in response to an instruction from the user.
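 The background-image comparison could be sketched as below; using a mean absolute pixel difference is an assumption, chosen only as one example of the simple image processing mentioned above.

```python
import numpy as np

def background_changed(installed_bg, current_bg, diff_threshold=20.0):
    # Both backgrounds as grayscale arrays of identical shape.
    diff = np.abs(installed_bg.astype(np.float32) - current_bg.astype(np.float32))
    return float(diff.mean()) >= diff_threshold  # large change -> re-optimize
```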
 As described above, the parameter optimization determination unit (for example, the parameter optimization determination unit 830) determines whether to optimize the parameters of a measurement device based on the result of object identification performed on the measurement information of the measurement device (for example, the object detection result from the images captured by the camera 120), the installation environment information of the measurement device (for example, camera installation environment information), or the installation time information of the measurement device (camera installation time information). According to such a configuration, the timing of parameter optimization can be determined appropriately, so the load that parameter optimization places on the measurement information processing device can be reduced.
 Note that parameter optimization need not target all cameras 120; it may instead target the cameras 120 around a camera 120 that has been determined, using the camera arrangement information 170 or the like, to require parameter optimization.
 FIG. 9 is a diagram showing an example of a configuration of the learning scene change unit 851. The learning scene change unit 851 includes the learning scene selection unit 152 and a learning scene comparison unit 910. The learning scene comparison unit 910 compares the current learning scenes notified by the learning scene selection unit 152 (the new scene information 860) with the previously stored learning scenes (the stored scene information 870), and outputs the learning scenes with higher reliability.
 More specifically, when learning scenes are already held, the learning scene comparison unit 910 compares the new scene information 860 of the current learning scenes notified by the learning scene selection unit 152 with the stored scene information 870 of the stored learning scenes, stores the scene information of the learning scenes with higher reliability as the stored scene information 870, and outputs it to the parameter calculation unit 153.
 The method for selecting the scene information of the learning scenes with higher reliability is not particularly limited. For example, the learning scene comparison unit 910 may compare the reliabilities of pieces of scene information having the same true value (or the same true value and scene) and adopt the scene information with the higher reliability as the stored scene information 870. Alternatively, the learning scene comparison unit 910 may use the scene analysis information to select scene information with high reliability in the same manner as the scene analysis information correction unit 320, and adopt it as the stored scene information 870.
 As described above, when the parameter optimization determination unit (for example, the parameter optimization determination unit 830) determines that optimization is to be performed, the parameter optimization unit (for example, the parameter optimization unit 850 and the learning scene comparison unit 910) updates, based on the scene information generated by the scene information extraction unit (for example, the scene information extraction unit 840) (for example, the new scene information 860), the scene information stored in the storage unit (for example, the HDD) that has the same identification information (for example, the true value, or the true value and scene) as the identification information of that scene information (for example, the stored scene information 870).
 As for the method of storing the scene information of the learning scenes, when the current learning scenes contain more scene information with higher reliability than the stored learning scenes, the stored scene information 870 of the stored learning scenes may be discarded and the new scene information 860 of the current learning scenes stored in its place; alternatively, only part of the stored scene information 870 among the stored learning scenes may be replaced. In the latter method, for example, in a situation where learning scenes with true values "1" through "10" are stored, when the learning scenes acquired by the learning scene selection unit 152 include new scene information 860 with true values "1" through "5", all of which are judged to have high reliability, the stored scene information 870 of the stored learning scenes with true values "1" through "5" is replaced, while the stored scene information 870 of the stored learning scenes with true values "6" through "10" is kept as is.
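 The partial-replacement strategy might be sketched as follows, with stored learning scenes keyed by true value; the dictionary layout and the per-record reliability comparison are assumptions for illustration.

```python
def merge_learning_scenes(stored, new):
    # stored, new: dicts mapping a true value to a scene information record.
    for true_value, candidate in new.items():
        current = stored.get(true_value)
        if current is None or candidate["reliability"] > current["reliability"]:
            stored[true_value] = candidate  # replace only where the new scene is better
    return stored
```

 For example, merging new records for true values "1" through "5" into a store holding "1" through "10" replaces only those five entries, as described above.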
 With the configuration described above, the present embodiment makes it possible, in an image processing device that includes two or more measurement devices and detects objects, to select or update learning scenes based on scene information extracted from images captured by a measurement device that has been determined from the object detection results to require parameter optimization and by the measurement devices installed around it, and thereby to select diverse learning scenes whose true values are highly accurate in accordance with the accuracy of object detection. In addition, by repeatedly performing this processing, it is possible to cope with changes in the environment around an installed camera and with the aging of a camera installed for a long period.
(3) Other Embodiments
 In the above-described embodiments, the case where the present invention is applied to the image processing systems 100 and 800 has been described; however, the present invention is not limited to this, and can be widely applied to various other image processing devices, image processing methods, measurement information processing systems, measurement information processing devices, measurement information processing methods, and the like.
 In the above-described embodiments, the case where an image acquisition unit 130 is provided for each camera 120 has been described; however, the present invention is not limited to this, and a single image acquisition unit 130 may be provided regardless of the number of cameras 120.
 Likewise, in the above-described embodiments, the case where a scene information extraction unit 140 is provided for each camera 120 has been described; however, the present invention is not limited to this, and a single scene information extraction unit 140 may be provided regardless of the number of cameras 120.
 In the above description, information such as the programs, tables, and files that realize each function can be placed in a memory, in a storage device such as an HDD or an SSD (Solid State Drive), or on a recording medium such as an IC card, an SD card, or a DVD.
 The configurations described above may be changed, rearranged, combined, or omitted as appropriate without departing from the gist of the present invention.
 According to the configurations described above, the accuracy of the parameters of a measurement device can be improved.
 100……image processing system, 110……image processing device, 120……camera, 130……image acquisition unit, 140……scene information extraction unit, 150……parameter optimization unit, 160……scene information, 170……camera arrangement information.

Claims (15)

  1.  A measurement information processing device that processes measurement information acquired by a measurement device, comprising:
     a scene information extraction unit that extracts, based on measurement information acquired by a measurement device, time information indicating when the measurement information was acquired and identification information relating to the identification of an object within the measurement range of the measurement device, and generates scene information including the measurement information, the time information, and the identification information; and
     a parameter optimization unit that identifies, based on the time information of the scene information generated by the scene information extraction unit and arrangement information of the measurement devices, second scene information acquired by a second measurement device that is associated with first scene information acquired by a first measurement device, and optimizes parameters relating to the identification of an object in the first measurement device based on the first scene information and the second scene information.
  2.  The measurement information processing device according to claim 1, wherein the scene information extracted by the scene information extraction unit includes reliability information indicating the reliability of the identification information relating to the identification of an object within the measurement range of the measurement device.
  3.  The measurement information processing device according to claim 2, wherein the scene information extraction unit calculates the reliability information based on the measurement information acquired by the measurement device, installation information of the measurement device, installation environment information of the measurement device, or characteristic information of the measurement device.
  4.  The measurement information processing device according to claim 2, wherein the parameter optimization unit selects, based on the reliability information of the scene information extracted by the scene information extraction unit, the scene information to be used for optimizing the parameters from among the scene information generated by the scene information extraction unit.
  5.  The measurement information processing device according to claim 2, wherein the scene information extraction unit stores the extracted scene information in a storage unit based on processing specification information of the measurement device, processing specification information of the measurement information processing device, or data capacity information of the measurement information processing device.
  6.  The measurement information processing device according to claim 2, wherein the parameter optimization unit corrects the first scene information and/or the second scene information based on the reliability information of the first scene information and the reliability information of the second scene information.
  7.  The measurement information processing device according to claim 2, wherein the parameter optimization unit discards the first scene information and/or the second scene information based on the reliability information of the first scene information and the reliability information of the second scene information.
  8.  The measurement information processing device according to claim 6, wherein the parameter optimization unit determines whether the reliability information of each piece of scene information of a predetermined measurement device after correction is lower than a threshold value, and, when it determines that a predetermined proportion or more of the scene information is lower, outputs information indicating that the accuracy of object identification in the predetermined measurement device is low.
  9.  The measurement information processing device according to claim 1, wherein the parameter optimization unit sets the optimized parameters of the measurement device in the measurement device.
  10.  The measurement information processing device according to claim 9, wherein the parameter optimization unit classifies the scene information generated by the scene information extraction unit into a plurality of scenes, optimizes the parameters for each classified scene, and sets the parameters of the measurement device optimized for each scene in the measurement device.
  11.  The measurement information processing device according to claim 1, wherein the parameter optimization unit corrects the scene information generated by the scene information extraction unit according to information input from the outside.
  12.  The measurement information processing device according to claim 1, further comprising a parameter optimization determination unit that determines whether to optimize the parameters of the measurement device based on a result of object identification performed on the measurement information of the measurement device, installation environment information of the measurement device, or installation time information of the measurement device.
  13.  The measurement information processing device according to claim 12, wherein the scene information extraction unit stores the extracted scene information in a storage unit, and
     the parameter optimization unit, when the parameter optimization determination unit determines that optimization is to be performed, updates, based on the scene information generated by the scene information extraction unit, scene information stored in the storage unit that has the same identification information as the identification information of the generated scene information.
  14.  The measurement information processing device according to claim 2, wherein the scene information extracted by the scene information extraction unit includes at least one of congestion degree information indicating the degree of crowding of objects within the measurement range of the measurement device, information on the number of object regions existing within the measurement range, position information of the object regions within the measurement range, movement direction information of the object regions, and movement speed information of the object regions.
  15.  The measurement information processing device according to claim 1, wherein the measurement device is a camera, and
     the parameter optimization unit optimizes at least one of an internal parameter of the camera, an external parameter of the camera, and a threshold parameter used for object detection.
PCT/JP2018/023902 2018-06-22 2018-06-22 Measurement information processing device WO2019244360A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2020525216A JP6959444B2 (en) 2018-06-22 2018-06-22 Measurement information processing device
PCT/JP2018/023902 WO2019244360A1 (en) 2018-06-22 2018-06-22 Measurement information processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/023902 WO2019244360A1 (en) 2018-06-22 2018-06-22 Measurement information processing device

Publications (1)

Publication Number Publication Date
WO2019244360A1 true WO2019244360A1 (en) 2019-12-26

Family

ID=68983282

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/023902 WO2019244360A1 (en) 2018-06-22 2018-06-22 Measurement information processing device

Country Status (2)

Country Link
JP (1) JP6959444B2 (en)
WO (1) WO2019244360A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012059224A (en) * 2010-09-13 2012-03-22 Toshiba Corp Moving object tracking system and moving object tracking method
WO2013179335A1 (en) * 2012-05-30 2013-12-05 株式会社 日立製作所 Monitoring camera control device and visual monitoring system
JP2016015116A (en) * 2014-06-12 2016-01-28 パナソニックIpマネジメント株式会社 Image recognition method and camera system


Also Published As

Publication number Publication date
JPWO2019244360A1 (en) 2021-02-15
JP6959444B2 (en) 2021-11-02

Similar Documents

Publication Publication Date Title
US8289402B2 (en) Image processing apparatus, image pickup apparatus and image processing method including image stabilization
JP4915655B2 (en) Automatic tracking device
JP6036824B2 (en) Angle of view variation detection device, angle of view variation detection method, and field angle variation detection program
JP6735592B2 (en) Image processing apparatus, control method thereof, and image processing system
JP5272886B2 (en) Moving object detection apparatus, moving object detection method, and computer program
KR101271098B1 (en) Digital photographing apparatus, method for tracking, and recording medium storing program to implement the method
JP7272024B2 (en) Object tracking device, monitoring system and object tracking method
US20080267453A1 (en) Method for estimating the pose of a ptz camera
JP2019145174A (en) Image processing system, image processing method and program storage medium
US20120020523A1 (en) Information creation device for estimating object position and information creation method and program for estimating object position
US20140037212A1 (en) Image processing method and device
JP6521626B2 (en) Object tracking device, method and program
US11430204B2 (en) Information processing device, information processing method, and program
JP2020149641A (en) Object tracking device and object tracking method
JP6116765B1 (en) Object detection apparatus and object detection method
JP5155110B2 (en) Monitoring device
JP5173549B2 (en) Image processing apparatus and imaging apparatus
JP2009294733A (en) Image processor and image processing method
JP7354767B2 (en) Object tracking device and object tracking method
WO2019244360A1 (en) Measurement information processing device
JPWO2020003764A1 (en) Image processors, mobile devices, and methods, and programs
WO2021230157A1 (en) Information processing device, information processing method, and information processing program
KR102450466B1 (en) System and method for removing camera movement in video
US20230069018A1 (en) Image processing apparatus, imaging apparatus, image processing system, image processing method, and non-transitory computer readable medium
JP6555940B2 (en) Subject tracking device, imaging device, and method for controlling subject tracking device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18923009

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020525216

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18923009

Country of ref document: EP

Kind code of ref document: A1