WO2008035411A1 - Mobile body information detection device, mobile body information detection method, and mobile body information detection program - Google Patents

Mobile body information detection device, mobile body information detection method, and mobile body information detection program Download PDF

Info

Publication number
WO2008035411A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
image
person
moving body
processing
Prior art date
Application number
PCT/JP2006/318635
Other languages
French (fr)
Japanese (ja)
Inventor
Osafumi Nakayama
Daisuke Abe
Morito Shiohara
Original Assignee
Fujitsu Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Limited filed Critical Fujitsu Limited
Priority to JP2008535230A priority Critical patent/JP4667508B2/en
Priority to PCT/JP2006/318635 priority patent/WO2008035411A1/en
Publication of WO2008035411A1 publication Critical patent/WO2008035411A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/188Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position

Definitions

  • Mobile object information detection apparatus, mobile object information detection method, and mobile object information detection program
  • The present invention relates to a moving body information detection device, a moving body information detection method, and a moving body information detection program for detecting passage information about a moving body that passes through a predetermined area, using an imaging device that captures that area.
  • In particular, the present invention relates to a mobile object information detection apparatus, a mobile object information detection method, and a mobile object information detection program capable of simultaneously detecting and recognizing a mobile object without performing pan/tilt/zoom operations with the imaging device.
  • Conventionally, there are person information detection devices that automatically detect (recognize) a person by photographing a person passing through a passage with a camera installed overhead, such as on a ceiling, and processing the resulting image.
  • Such a person information detection device is used, for example, in an entrance/exit monitoring system that monitors persons entering and leaving a facility, or in a passing person counting system that counts the number of persons passing along a sidewalk or passage.
  • Methods proposed so far include a method using a camera that photographs the monitored area from directly above, a method using a plurality of cameras, and a method using a camera with a pan/tilt/zoom function (a PTZ camera).
  • FIGS. 19 to 22 are diagrams (1) to (4) for explaining the method of Patent Document 1.
  • In the method of Patent Document 1, a camera is mounted on the ceiling pointing straight down (see FIG. 19), changes in the captured image (see FIG. 20) are recognized by image processing, and each change is detected as a passing person. Since the camera directly faces the passage, only passing persons are photographed; unlike the case of photographing obliquely toward the direction from which a person approaches (see FIG. 21), another person never appears behind in the depth direction (see FIG. 22), so passing persons can be distinguished one by one and detected correctly.
  • FIG. 23 is a diagram for explaining the technique of Patent Document 2.
  • In this method, a camera is installed not only pointing straight down but also facing the direction from which a person approaches, so that the face of a passing person is also recognized. As a result, in addition to counting passing persons, the face images of passing persons can be acquired and recognized.
  • FIG. 24 is a diagram for explaining the method of Patent Document 3.
  • This method detects a passing person in the middle of a widely captured image and pans, tilts, and zooms the camera to the detected position to acquire and recognize the person's face image. This makes it possible to detect (count) persons and to acquire and recognize face images with a single camera.
  • Patent Document 1: Japanese Patent Laid-Open No. 6-223157
  • Patent Document 2: Japanese Patent Laid-Open No. 9-54894
  • Patent Document 3: Japanese Patent Laid-Open No. 10-188145
  • However, the conventional techniques represented by Patent Documents 1, 2, and 3 have the following problems.
  • FIG. 25 is a diagram for explaining the problems of the method using a plurality of cameras.
  • In this method, in order to associate the same person appearing on different cameras by image processing, each camera must be installed precisely; considering that installation positions shift over time due to secular change and the like, on-site adjustment costs are large.
  • In addition, since this method uses a plurality of cameras, it also has problems of introduction cost and installation space.
  • FIG. 26 is a diagram for explaining the problems of the method using the PTZ camera. As shown in the figure, this method pans and tilts the camera to focus on one passing person, so if multiple persons pass through the monitored area at the same time, not all passing persons can be detected. Using multiple PTZ cameras to address this raises the same problems as the method using multiple cameras. In addition, pan/tilt/zoom operations cause mechanical wear on the camera, making long-term operation difficult to guarantee.
  • The present invention has been made to solve the above problems of the prior art, and its object is to provide a mobile object information detection apparatus, a mobile object information detection method, and a mobile object information detection program capable of simultaneously detecting and recognizing a moving body without performing pan/tilt/zoom operations in the imaging apparatus.
  • To achieve this, the present invention provides a moving body information detection device that detects passage information about a moving body passing through a predetermined region, using an imaging device that photographs the region. The device comprises:
  • central area processing means for performing first image processing using the image of a moving body shown in the central portion of the area photographed by the imaging device;
  • peripheral area processing means for performing second image processing using the image of a moving body shown in the portion peripheral to that central portion; and
  • passage information detection means for detecting, as the passage information, information that associates, for each moving body, the first information detected by the first image processing of the central area processing means with the second information detected by the second image processing of the peripheral area processing means.
  • Further, in the above invention, the central area processing means and/or the peripheral area processing means calculate the reliability of the detected first information and/or second information, and the passage information detection means associates the first information with the second information based on the reliability calculated by the central area processing means and/or the peripheral area processing means.
  • Further, in the above invention, when the reliability calculated by the central area processing means and/or the peripheral area processing means is low, the passage information detection means instructs the central area processing means and/or the peripheral area processing means to re-detect the first information and/or the second information.
  • Further, in the above invention, the device further comprises moving body collating means for identifying the moving body by collating the first information detected in the first image processing by the central area processing means and/or the second information detected in the second image processing by the peripheral area processing means against moving body identification information stored in advance.
  • Further, in the above invention, when the moving body collating means collates the first information and/or the second information and no matching information exists in the stored moving body identification information, the first information and/or the second information is added to the moving body identification information.
  • Further, in the above invention, the central area processing means detects the travel path of the moving body in the first image processing, the peripheral area processing means detects attribute information of the moving body in the second image processing, and the passage information detection means detects, as the passage information, information that associates, for each moving body, the travel path detected by the central area processing means with the attribute information detected by the peripheral area processing means.
  • Further, in the above invention, the moving body is a person, the central area processing means detects the travel path of the person in the first image processing, the peripheral area processing means detects the face information of the person in the second image processing, and the passage information detection means detects, as the passage information, information that associates, for each person, the face information detected by the peripheral area processing means with the travel path detected by the central area processing means.
  • Further, the present invention is characterized in that, in the above invention, the imaging device photographs the predetermined area using a wide-angle lens.
  • Further, the present invention is characterized in that, in the above invention, the imaging device photographs the predetermined area using a fisheye lens.
  • The present invention also provides a moving body information detection method for detecting passage information about a moving body passing through a predetermined area, using an imaging apparatus that photographs the area, the method including a central region processing step of performing first image processing using the image of a moving body shown in the central portion of the area photographed by the imaging apparatus, a peripheral region processing step of performing second image processing using the image of a moving body shown in the portion peripheral to that central portion, and a passage information detection step of detecting, as the passage information, information that associates, for each moving body, the first information detected by the first image processing with the second information detected by the second image processing.
  • The present invention further provides a moving body information detection program for detecting passage information about a moving body passing through a predetermined area, using an imaging apparatus that photographs the area, the program causing a computer to execute a central area processing procedure for performing first image processing using the image of a moving body shown in the central portion of the area photographed by the imaging apparatus, a peripheral region processing procedure for performing second image processing using the image of a moving body shown in the portion peripheral to that central portion, and a passage information detection procedure for detecting, as the passage information, information that associates, for each moving body, the first information detected by the first image processing with the second information detected by the second image processing.
  • According to the present invention, the first image processing is performed using the image of a moving body shown in the central portion of the area photographed by the imaging device, the second image processing is performed using the image of a moving body shown in the portion peripheral to that central portion, and information that associates, for each moving body, the first information detected by the first image processing with the second information detected by the second image processing is detected as the passage information. The number of moving bodies and their travel paths are thus detected based on the first information, and the moving bodies are recognized based on the second information. As a result, moving bodies can be detected and recognized simultaneously without performing pan/tilt/zoom in the imaging apparatus. In addition, since this configuration can be realized with a single imaging apparatus, troublesome position adjustment at the installation site and introduction costs can be reduced.
  • Further, according to the present invention, the reliability of the detected first information and/or second information is calculated, and the first information and the second information are associated based on the calculated reliability. Therefore, adverse effects such as degraded detection performance caused by disturbance factors such as illuminance changes can be suppressed, and the passage information of a person can be detected stably.
  • Further, according to the present invention, when the reliability is low, re-detection of the first information and/or the second information is instructed, so moving body information can be detected with high accuracy even when a disturbance makes detection errors likely.
  • Further, according to the present invention, the moving body is identified by collating the first information detected in the first image processing and/or the second information detected in the second image processing against identification information stored in advance, so the moving body can be not only recognized but also identified. This makes it possible to count the number of passages for each moving body and to detect a predetermined moving body to be monitored, enhancing the moving body detection function.
  • Further, according to the present invention, the travel path of the moving body is detected in the first image processing, the attribute information of the moving body is detected in the second image processing, and information that associates the detected travel path and attribute information for each moving body is detected as the passage information, so the moving body can be recognized based on its attribute information and, at the same time, its travel path can be detected.
  • Further, according to the present invention, the moving body is a person, the person's travel path is detected in the first image processing, the person's face information is detected in the second image processing, and information that associates the detected travel path and face information for each person is detected as the passage information, so the person can be recognized based on the face information and, at the same time, the person's travel path can be detected.
  • Further, according to the present invention, since the imaging apparatus photographs the predetermined area using a wide-angle lens, the captured area is wider than with a normal lens, so more passage information can be detected for each moving body and the accuracy of detecting and recognizing moving bodies can be improved.
  • Further, according to the present invention, since the imaging apparatus photographs the predetermined area using a fisheye lens, the captured area is even wider than with a normal or wide-angle lens, so still more passage information can be detected for each moving body and the accuracy of detecting and recognizing moving bodies can be improved.
  • FIG. 1 is a diagram (1) for explaining the concept of the person information detecting apparatus according to the first embodiment.
  • FIG. 2 is a diagram (2) for explaining the concept of the person information detecting apparatus according to the first embodiment.
  • FIG. 3 is a functional block diagram of the configuration of the person information detecting apparatus according to the first embodiment.
  • FIG. 4 is a diagram (1) for explaining an example of image conversion of a peripheral region by a processed image conversion unit.
  • FIG. 5 is a diagram (2) for explaining an example of image conversion of a peripheral region by the processed image conversion unit.
  • FIG. 6 is a diagram (3) for explaining an example of the image conversion of the peripheral area by the processed image conversion unit.
  • FIG. 7 is a diagram (1) for explaining an example of image conversion of the central region by the processed image conversion unit.
  • FIG. 8 is a diagram (2) for explaining an example of image conversion of the central region by the processed image conversion unit.
  • FIG. 9 is a diagram for explaining an example of person detection by the central region processing unit.
  • FIG. 10 is a diagram for explaining person association by the processing result integration unit.
  • FIG. 11 is a flowchart showing a processing procedure of the person information detecting apparatus according to the first embodiment.
  • FIG. 12 is a functional block diagram of the configuration of the person information detecting apparatus according to the second embodiment.
  • FIG. 13 is a diagram for explaining re-detection by a reliability determination unit.
  • FIG. 14 is a functional block diagram of the configuration of the person information detecting apparatus according to the third embodiment.
  • FIG. 15 is a functional block diagram of the configuration of the person information detecting apparatus according to the fourth embodiment.
  • FIG. 16 is an explanatory diagram for explaining the concept of the package information detecting apparatus according to the fifth embodiment.
  • FIG. 17 is a functional block diagram illustrating the configuration of the package information detecting apparatus according to the fifth embodiment.
  • FIG. 18 is a functional block diagram illustrating a configuration of a computer that executes a moving body information detection program according to the present embodiment.
  • FIG. 19 is a diagram (1) for explaining the method of Patent Document 1.
  • FIG. 20 is a diagram (2) for explaining the method of Patent Document 1.
  • FIG. 21 is a diagram (3) for explaining the method of Patent Document 1.
  • FIG. 22 is a diagram (4) for explaining the method of Patent Document 1.
  • FIG. 23 is a diagram for explaining the method of Patent Document 2.
  • FIG. 24 is a diagram for explaining the method of Patent Document 3.
  • FIG. 25 is a diagram for explaining problems of a method using a plurality of cameras.
  • FIG. 26 is a diagram for explaining the problems of the method using the PTZ camera.
  • FIGS. 1 and 2 are diagrams (1) and (2) for explaining the concept of the person information detecting apparatus according to the first embodiment.
  • As shown in these figures, the person information detection apparatus simultaneously performs different image processing on different areas within the video captured by a surveillance camera installed above the monitored area, such as on the ceiling, thereby detecting passing persons and acquiring and recognizing their face images.
  • The surveillance camera used here performs photographing using a wide-angle lens such as a fisheye lens.
  • As shown in FIG. 1, a camera 10 having a wide-angle lens is installed above the monitored area, and passing persons are thereby photographed.
  • Suppose that person A passes under the camera 10 between times T1 and T3.
  • In this case, the camera 10 captures images as shown in FIG. 2.
  • At times T1 and T3, person A is far from the point directly below the camera 10 (in the peripheral part of the image), so the face is photographed.
  • At time T2, person A is directly below the camera 10 (in the central part of the image), so only the top of the head is photographed.
  • Therefore, the person information detection device simultaneously performs two different image processes: a process that counts passing persons using the central part of the video, and a process that detects and recognizes the face images of passing persons using the peripheral part of the video. Furthermore, by tracking the face detected in the peripheral part and the person detected in the central part, each passing person is associated (matched) with a face.
  • FIG. 3 is a functional block diagram of the configuration of the person information detecting apparatus according to the first embodiment.
  • The person information detection device 100 receives analog video sent from the camera 10 or digital video sent from an image recording device (not shown), detects the number of passing persons and the face images of passing persons from the input video, and outputs these as passage information.
  • The A/D conversion unit 110 is a processing unit that digitizes the analog video sent from the camera 10. Specifically, the A/D conversion unit 110 converts the analog video sent from the camera 10 into digital video and sends the converted digital video to the image control unit 114. If the input video is already digital, the A/D conversion unit 110 can be omitted from the configuration.
  • The image storage unit 111 is a storage unit that stores the video data of the digitized input video. Specifically, the image storage unit 111 stores the video data sequentially sent from an image control unit 114 (described later) at predetermined time intervals for a predetermined period.
  • The peripheral region processing result storage unit 112 is a storage unit that stores the processing results of the face detection process and the video data used for that processing. Specifically, the peripheral region processing result storage unit 112 stores, in time series at predetermined time intervals, the processing results of the face detection process sent from a peripheral region processing unit 116 (described later) and the processing results of the result integration process sent from a processing result integration unit 118 (described later).
  • The central region processing result storage unit 113 is a storage unit that stores the processing results of the person detection process and the video data used for that processing. Specifically, the central region processing result storage unit 113 stores, in chronological order at predetermined time intervals, the processing results of the person detection process sent from a central region processing unit 117 (described later) and the processing results of the result integration process sent from the processing result integration unit 118 (described later).
  • The image control unit 114 is a processing unit that controls the input of digital video sent from the A/D conversion unit 110 or an image recording device (not shown), and the input and output of that video to and from the image storage unit 111. Specifically, the image control unit 114 receives the digital video sent from the A/D conversion unit 110 or the image recording device and sequentially sends it to the image storage unit 111 at predetermined time intervals. Further, when video data is stored in the image storage unit 111 and the processed image conversion unit 115 is ready for processing, the image control unit 114 sequentially extracts the stored video data and sends it to the processed image conversion unit 115.
  • The processed image conversion unit 115 is a processing unit that converts the digital video sequentially sent from the image control unit 114 into the images to be processed by the peripheral region processing unit 116 and the central region processing unit 117. Specifically, the processed image conversion unit 115 divides the digital video sequentially sent from the image control unit 114 into a peripheral image and a central image, converts each image, and then sends the peripheral image to the peripheral region processing unit 116 and the central image to the central region processing unit 117.
  • Next, the image conversion of the peripheral part and that of the central part will be described.
  • FIGS. 4 to 6 are diagrams (1) to (3) for explaining an example of image conversion of the peripheral region by the processed image conversion unit 115.
  • As shown in FIG. 4, the processed image conversion unit 115 excludes the central region from processing and unrolls the peripheral region into a rectangular image whose opposite sides are the lines PQ and RS, thereby converting the input into an image of the peripheral part only.
  • The unrolled result may be produced as two images or as a single image.
  • Alternatively, the processed image conversion unit 115 may exclude the central portion from processing by painting it over, without unrolling the image.
  • Further, when the input video is fisheye video, the processed image conversion unit 115 may first convert it into ordinary video using a known video conversion technique and then convert it into a distortion-free rectangular image whose opposite sides are the lines TU and VW.
  • FIGS. 7 and 8 are diagrams (1) and (2) for explaining an example of image conversion of the central region by the processed image conversion unit 115.
  • As shown in FIG. 7, the processed image conversion unit 115 converts the input image into an image of the central portion only, excluding the peripheral region from processing.
  • Alternatively, as shown in FIG. 8, the processed image conversion unit 115 may exclude the peripheral portion from processing by painting it over.
  • Further, when the input video is fisheye video, the processed image conversion unit 115 converts the fisheye image into an ordinary image using a known video conversion technique, in the same manner as for the peripheral region (a minimal sketch of this region split follows below).
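As an illustration of this region split, the following is a minimal sketch in Python using OpenCV, assuming a fisheye frame whose optical axis is centered in the image; the radius threshold, output resolution, and function name are illustrative assumptions, not values given in the patent.

```python
import cv2
import numpy as np

def split_regions(frame, center_ratio=0.4):
    """Divide one fisheye frame into a central image (masked disc) and an
    unrolled peripheral image, as done by the processed image conversion
    unit. center_ratio is an assumed boundary between the two regions."""
    h, w = frame.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    max_radius = min(cx, cy)

    # Central image: keep the inner disc, paint over the periphery.
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - cx, yy - cy)
    central = frame.copy()
    central[dist > center_ratio * max_radius] = 0

    # Peripheral image: unroll the outer annulus into a rectangle.
    # In warpPolar output, rows index the angle and columns index the radius.
    n_radius, n_angle = 256, 720  # assumed output resolution
    polar = cv2.warpPolar(frame, (n_radius, n_angle), (cx, cy),
                          max_radius, cv2.WARP_POLAR_LINEAR)
    peripheral = polar[:, int(center_ratio * n_radius):]
    return central, peripheral
```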
  • The peripheral region processing unit 116 is a processing unit that performs face detection on the peripheral image converted by the processed image conversion unit 115. Specifically, the peripheral region processing unit 116 applies a known face detection technique to the peripheral image sent from the processed image conversion unit 115 and acquires, as detection results, the position and size of each face, the face image, and an evaluation value from the detection process (for example, the output value of a neural network representing the likelihood of a face). The peripheral region processing unit 116 then sends these detection results to the peripheral region processing result storage unit 112.
  • The peripheral region processing unit 116 also detects the travel path of each face in the image by comparing the detection result of the previous time held in the peripheral region processing result storage unit 112 with the detection result of the current time.
  • Specifically, for example, the normalized correlation shown in the following formula (1) is computed between the face image detected at the previous time and the face image detected at the current time, and when a correlation value above a certain threshold is obtained, the two face images are judged to show the same face (a sketch in code follows below):

    R = \frac{\sum_{i}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i}(X_i - \bar{X})^2 \, \sum_{i}(Y_i - \bar{Y})^2}}   (1)

  • Here, X_i and Y_i denote the pixel values of the face images detected at the previous time and the current time, respectively, and \bar{X} and \bar{Y} denote the average pixel values of those face images.
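A direct NumPy rendering of formula (1) might look as follows; the face images are assumed to have been resized to a common shape, and the 0.8 acceptance threshold is an illustrative assumption, not a value given in the patent.

```python
import numpy as np

def normalized_correlation(x, y):
    """Formula (1): normalized correlation between two equally sized
    grayscale face images; values near 1 mean the images are similar."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    xd, yd = x - x.mean(), y - y.mean()
    denom = np.sqrt(np.sum(xd ** 2) * np.sum(yd ** 2))
    return float(np.sum(xd * yd) / denom) if denom > 0 else 0.0

def same_face(prev_img, cur_img, threshold=0.8):
    """Judge two detections as the same face when the correlation between
    the previous and current face images exceeds the threshold."""
    return normalized_correlation(prev_img, cur_img) > threshold
```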
  • The peripheral region processing unit 116 also sends the travel path detection results to the peripheral region processing result storage unit 112.
  • When a person walks away from the camera, only the back of the head appears in the peripheral image; in that case, the peripheral region processing unit 116 may, for example, learn images of the back of the head in advance, or estimate the person's position based on the travel path information detected by the processing result integration unit 118 (described later).
  • The central region processing unit 117 is a processing unit that performs person detection on the central image converted by the processed image conversion unit 115. Specifically, the central region processing unit 117 applies a known person detection technique to the central image sent from the processed image conversion unit 115 to detect persons (silhouettes).
  • Known person detection techniques usable here include, for example, the technique described in Sasamura et al., "Research on differential system by automatic background update," IEICE General Conference, 2003. This technique computes the difference between the input image and a reference image, called a background image, in which no detection target appears, and detects a person present in the central region by extracting the difference area as an object.
  • By performing such person detection processing, the central region processing unit 117 obtains, as detection results, the position and size of each person in the input image, the image of the person (the person's head), and an evaluation value from the detection process.
  • FIG. 9 is a diagram for explaining an example of person detection by the central region processing unit 117. For example, as shown in the figure, the central region processing unit 117 compares a person model prepared in advance with the background difference result and uses the degree of coincidence as the evaluation value.
  • The central region processing unit 117 then sends these detection results to the central region processing result storage unit 113 (a rough sketch of this person detection follows below).
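The cited background subtraction research is not reproduced in the patent, so the following is only a rough sketch of the central region person detection under simple assumptions: a running-average background, a fixed difference threshold, and a binary head model template; all parameter values are illustrative.

```python
import cv2
import numpy as np

class CentralRegionDetector:
    """Background difference with automatic background updating, followed by
    matching against a prepared person (head) model; the matching score is
    used as the evaluation value of the detection."""

    def __init__(self, head_model, alpha=0.01, diff_thresh=30):
        self.head_model = head_model  # binary template of a head seen from above
        self.alpha = alpha            # background update rate (assumed)
        self.diff_thresh = diff_thresh
        self.background = None

    def detect(self, gray_frame):
        if self.background is None:
            self.background = gray_frame.astype(np.float32)
        # Difference between the input and the slowly updated background.
        diff = cv2.absdiff(gray_frame, cv2.convertScaleAbs(self.background))
        _, mask = cv2.threshold(diff, self.diff_thresh, 255, cv2.THRESH_BINARY)
        # Automatic background update (running weighted mean).
        cv2.accumulateWeighted(gray_frame, self.background, self.alpha)
        # Compare the difference region with the person model.
        scores = cv2.matchTemplate(mask, self.head_model, cv2.TM_CCOEFF_NORMED)
        _, best_score, _, top_left = cv2.minMaxLoc(scores)
        h, w = self.head_model.shape
        return {"position": top_left, "size": (w, h), "evaluation": best_score}
```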
  • Like the peripheral region processing unit 116, the central region processing unit 117 also detects the travel path of each person in the image by comparing the detection result of the previous time stored in the central region processing result storage unit 113 with the detection result of the current time.
  • In doing so, the central region processing unit 117 compares person images using the normalized correlation value of formula (1).
  • The processing result integration unit 118 is a processing unit that integrates the face detection results held in the peripheral region processing result storage unit 112 with the person detection results held in the central region processing result storage unit 113, counts the passing persons, and acquires the face images of the persons who have passed.
  • That is, the processing result integration unit 118 integrates the processing results of the peripheral region processing unit 116 and the central region processing unit 117 and associates each passing person with a face image. Specifically, the processing result integration unit 118 compares the detection results of the travel paths of faces in the peripheral image with the detection results of the travel paths of persons in the central image and associates those whose movements are consistent with a single person's motion.
  • More concretely, the processing result integration unit 118 analyzes the trajectory of a point defined at a predetermined position of each face detected in the peripheral image (for example, the top of the face) and the trajectory of a point defined at a predetermined position of each person detected in the central image, and associates the face and the person whose trajectories connect most naturally as the same person (a sketch of this association follows below).
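One simple way to realize this trajectory association is to connect a face track that leaves the peripheral region with a person track that appears in the central region shortly afterwards near the same place; the following sketch assumes tracks expressed as (t, x, y) points in a common coordinate frame, with illustrative tolerances.

```python
import numpy as np

def associate_tracks(face_tracks, person_tracks, max_gap=5, max_dist=40.0):
    """Pair each peripheral face track with the central person track whose
    start point most naturally continues it in time and space. max_gap is
    in frames and max_dist in pixels; both are assumed tolerances."""
    pairs = []
    for i, face in enumerate(face_tracks):
        best = None
        for j, person in enumerate(person_tracks):
            t_gap = person[0][0] - face[-1][0]  # frames between track end and start
            dist = np.hypot(person[0][1] - face[-1][1],
                            person[0][2] - face[-1][2])
            if 0 <= t_gap <= max_gap and dist <= max_dist:
                cost = t_gap + dist
                if best is None or cost < best[1]:
                    best = (j, cost)
        if best is not None:
            pairs.append((i, best[0]))  # face track i and person track j: same person
    return pairs
```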
  • FIG. 10 is a diagram for explaining person association by the processing result integration unit 118.
  • As shown in the figure, the processing result integration unit 118 integrates the detection results of the respective regions and, from the travel paths, associates peripheral person 1, central person 1, and peripheral person 4 as the same person (see "person 1" in the "integration result" on the right side of the figure), and associates peripheral person 2, central person 2, and peripheral person 3 as the same person (see "person 2" in the "integration result" on the right side of the figure).
  • The processing result integration unit 118 notifies the monitoring center or the like of the result of associating each person's travel route in the input image with the person's face image, and also stores the result in the peripheral region processing result storage unit 112 and the central region processing result storage unit 113.
  • FIG. 11 is a flowchart illustrating the processing procedure of the person information detecting apparatus 100 according to the first embodiment.
  • As shown in the figure, the A/D conversion unit 110 digitizes the analog video sent from the camera 10 (step S101), and the image control unit 114 then divides the input image into a peripheral region and a central region (step S102).
  • Then, the peripheral region processing unit 116 detects face information from the peripheral image (step S103), and the central region processing unit 117 detects person information from the central image (step S104).
  • Thereafter, the processing result integration unit 118 associates the face detection results with the person detection results (step S105) and notifies the monitoring center or the like of the processing results (step S106). A sketch of this overall loop in code follows below.
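Tying steps S101 to S106 together, the main loop might be sketched as follows; the helper functions stand in for the processing units described above and are hypothetical names introduced for illustration, not APIs defined by the patent (split_regions and associate_tracks were sketched earlier; detect_faces, detect_persons, and notify_center are assumed).

```python
import cv2

cap = cv2.VideoCapture(0)  # S101: digitized camera input
while True:
    ok, frame = cap.read()
    if not ok:
        break
    central, peripheral = split_regions(frame)        # S102: divide into regions
    face_results = detect_faces(peripheral)           # S103: peripheral processing
    person_results = detect_persons(central)          # S104: central processing
    passage_info = associate_tracks(face_results,     # S105: associate faces
                                    person_results)   #       with persons
    notify_center(passage_info)                       # S106: report the results
cap.release()
```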
  • As described above, in the first embodiment, the central region processing unit 117 performs image processing that detects individual persons using the images of persons shown in the central part of the area captured by the camera 10, the peripheral region processing unit 116 performs image processing that detects persons' faces using the images of persons shown in the part peripheral to that central part, and the processing result integration unit 118 detects, as passage information, information that associates, for each person, the persons detected by the central region processing unit 117 with the faces detected by the peripheral region processing unit 116. Passing persons can therefore be detected and recognized simultaneously without pan/tilt/zoom operations.
  • Moreover, since this configuration can be realized with a single imaging device, troublesome position adjustment at the installation site and introduction costs can be reduced.
  • The passage information detected in this way can be used in various ways.
  • For example, the passage information can be logged so that a suspicious person can be tracked by searching the log when a crime occurs, or so that the attributes and number of people passing a candidate location for a new commercial facility can be surveyed.
  • FIG. 12 is a functional block diagram of the configuration of the person information detecting apparatus according to the second embodiment.
  • In the figure, functional units having the same functions as those of the person information detecting apparatus 100 shown in FIG. 3 are denoted by the same reference numerals, and their detailed description is omitted.
  • The person information detection device 200 receives analog video sent from the camera 10 or digital video sent from an image recording device (not shown), detects the number of passing persons and their face images from the input video, and outputs these as passage information.
  • The device has an A/D conversion unit 110, an image storage unit 111, a peripheral region processing result storage unit 112, a central region processing result storage unit 113, an image control unit 114, a processed image conversion unit 115, a peripheral region processing unit 116, a central region processing unit 117, a processing result integration unit 118, and a reliability determination unit 219.
  • The reliability determination unit 219 is a processing unit that calculates the reliability of the detection results produced by the peripheral region processing unit 116 and the central region processing unit 117 and, when the reliability is low, performs control such as having the face or person re-detected, or integrating the detection results while complementing a low-reliability result with a high-reliability one.
  • The reliability here refers, for the peripheral image, to, for example, the evaluation value of the face detection result produced by the neural network (see the description of the face detection process by the peripheral region processing unit 116) and, for the central image, to the degree of coincidence between the background difference result and the person model.
  • FIG. 13 is a diagram for explaining re-detection by the reliability determination unit 219.
  • As shown in the figure, the reliability determination unit 219 uses the person detection result in the central region and instructs the processed image conversion unit 115 so that the peripheral region processing unit 116 performs face detection again around the position through which the person is estimated to have passed (see "re-detection by the reliability determination unit" on the right side of the figure).
  • Since the reliability determination unit 219 has the face or person re-detected when the reliability is low in this way, moving body information can be detected with high accuracy even when a disturbance such as an illuminance change makes detection errors likely.
  • The reliability determination unit 219 also controls the integration of detection results performed in the processing result integration unit 118 so that travel paths are associated based on reliability. For example, if, for a certain travel path detected in the central image, there are multiple travel paths in the peripheral image that could be associated with it, the reliability determination unit 219 controls the processing result integration unit 118 so that the one with the highest reliability is associated with the travel path in the central region (a sketch of this control follows below).
  • As described above, in the second embodiment, the reliability determination unit 219 calculates the reliability of the detection results produced by the peripheral region processing unit 116 and the central region processing unit 117 and controls the association of travel paths based on the calculated reliability, so adverse effects such as degraded detection performance caused by disturbance factors such as illuminance changes are suppressed, and the passage information of persons can be detected stably.
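In code, the reliability-based control might be sketched as follows; the Track fields, the 0.5 threshold, and the redetect callback are assumptions introduced for illustration, not structures defined by the patent.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Track:
    points: List[Tuple[int, float, float]]  # (t, x, y) travel path
    reliability: float                      # e.g. face likelihood or model match

def pick_face_for_person(person_track: Track,
                         candidate_faces: List[Track],
                         redetect: Callable[[Tuple[float, float]], None],
                         threshold: float = 0.5) -> Optional[Track]:
    """Associate the most reliable candidate face track with the central
    person track; if none is reliable enough, request re-detection around
    the position the person is estimated to have passed."""
    estimated_pos = person_track.points[0][1:]  # where the person entered the center
    if not candidate_faces:
        redetect(estimated_pos)
        return None
    best = max(candidate_faces, key=lambda t: t.reliability)
    if best.reliability < threshold:
        redetect(estimated_pos)
        return None
    return best  # associate the highest-reliability face path
```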
  • In the first embodiment described above, the case where the peripheral region processing unit 116 detects the face image and travel path of each passing person was described.
  • It is also possible to identify a passing person by comparing the detected face image with the face images of persons registered in advance. In the following, this case, in which a person is identified based on the detected face image, is described as the third embodiment.
  • FIG. 14 is a functional block diagram of the configuration of the person information detecting apparatus according to the third embodiment.
  • In the figure, functional units having the same functions as those of the person information detecting apparatus 100 shown in FIG. 3 are denoted by the same reference numerals, and their detailed description is omitted.
  • The person information detection device 300 receives analog video sent from the camera 10 or digital video sent from an image recording device (not shown), detects the number of passing persons and their face images from the input video, and outputs these as passage information.
  • The device has an A/D conversion unit 110, an image storage unit 111, a peripheral region processing result storage unit 112, a central region processing result storage unit 113, an image control unit 114, a processed image conversion unit 115, a peripheral region processing unit 116, a central region processing unit 117, a processing result integration unit 118, and a face video matching unit 320.
  • The face image collation unit 320 is a processing unit that identifies passing persons based on the face images detected by the peripheral region processing unit 116. Specifically, the face image collation unit 320 identifies each passing person by collating the face image detected by the peripheral region processing unit 116 against the face images of persons registered in advance, using a known face collation technique.
  • By identifying passing persons in this way, the face image collation unit 320 acquires information such as which person has passed and how many times. In addition, when a detected face image matches none of the registered face images, the face image collation unit 320 newly registers it so that it can be collated the next time that person passes.
  • As described above, in the third embodiment, the face video collation unit 320 identifies passing persons by collating the face video detected by the peripheral region processing unit 116 with the face videos of persons registered in advance, so a passing person can be not only recognized but also identified. As a result, the number of passages can be counted for each person, and a predetermined person to be monitored can be detected, enhancing the person detection function. In addition, since the tracking accuracy of persons improves and passing persons can be identified, a more sophisticated passing person detection function for security purposes can be realized.
  • Furthermore, when the face image matching unit 320 collates a face image and no matching face image exists among the registered persons' face images, the new face image is added, so the set of identifiable persons can grow through learning and the person detection function improves automatically (a sketch of this collation and registration follows below).
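The following sketch of the collation and automatic registration reuses the normalized_correlation function from the formula (1) sketch above as a stand-in for the "known face collation technique"; the acceptance threshold and the assumption of a fixed face image size are illustrative.

```python
class FaceRegistry:
    """Keep registered face images, identify passing persons by collation,
    and register unmatched faces so they can be identified next time."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold   # assumed acceptance threshold
        self.faces = []              # person_id -> registered face image
        self.pass_counts = []        # person_id -> number of passages

    def identify(self, face_image):
        # face_image is assumed to be resized to the same shape as the
        # registered images before collation.
        for person_id, registered in enumerate(self.faces):
            if normalized_correlation(registered, face_image) > self.threshold:
                self.pass_counts[person_id] += 1
                return person_id
        # No match: add the face as a new person (automatic registration).
        self.faces.append(face_image)
        self.pass_counts.append(1)
        return len(self.faces) - 1
```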
  • In the second and third embodiments described above, the reliability determination unit 219 that performs control based on reliability and the face image collation unit 320 that identifies persons based on detected face images were each added separately.
  • A person information detection apparatus may also be configured by combining these arbitrarily. In the following, the case where the second and third embodiments are combined is described as the fourth embodiment.
  • FIG. 15 is a functional block diagram of the configuration of the person information detecting apparatus according to the fourth embodiment. Each functional unit of the person information detecting device 400 shown in the figure has the same function as the corresponding unit of the person information detecting device 100 shown in FIG. 3, the person information detecting device 200 shown in FIG. 12, or the person information detecting device 300 shown in FIG. 14, so detailed description is omitted here.
  • As described above, according to the fourth embodiment, combining the functional units described in the first, second, and third embodiments realizes a practical passing person detection system with reduced influence of disturbances such as illuminance changes, while also enhancing the person detection function, for example by recognizing passing persons and counting how many times a particular person has passed.
  • In the embodiments described above, a person information detection apparatus that detects persons has been described as an example of the moving body information detection apparatus according to the present invention.
  • The present invention is not limited to this and may also be applied, for example, to a package information detection device that detects packages flowing on a belt conveyor. In the following, the case where the present invention is applied to a package information detection apparatus is described as the fifth embodiment.
  • FIG. 16 is an explanatory diagram for explaining the concept of the package information detecting apparatus according to the fifth embodiment.
  • As shown in the figure, the package information detection apparatus simultaneously performs different image processing on different areas within the video captured by a surveillance camera installed above the belt conveyor on which packages pass, such as on the ceiling, thereby detecting the sides and tops of the packages.
  • As the surveillance camera used here, a camera having a wide-angle lens such as a fisheye lens is used, as in the embodiments described above.
  • As shown in the figure, a camera 10 having a wide-angle lens is installed above the belt conveyor on which packages pass, and the packages flowing on the belt conveyor are thereby photographed.
  • Each package flowing on the belt conveyor has labels attached to its side surface and its top surface.
  • These labels indicate attributes of the package, such as type and weight, by characters and colors.
  • The package information detection apparatus simultaneously performs two different image processes: a process that counts packages using the central part of the video, and a process that detects and recognizes the attributes indicated by the labels on the sides of packages using the peripheral part of the video. Furthermore, each package flowing on the belt conveyor is identified by associating the attribute indicated by the side label detected in the peripheral part with the attribute indicated by the top label detected in the central part.
  • As a result, without performing pan/tilt/zoom with the camera, the packages flowing on the belt conveyor can be detected (counted), the characters on the labels attached to the sides and tops of the packages can be detected at the same time, and the information on the side labels can be associated with the information on the top labels.
  • FIG. 17 is a functional block diagram illustrating the configuration of the package information detection apparatus according to the fifth embodiment.
  • Of the functional units shown in the figure, those having the same functions as those of the person information detecting device 100 shown in FIG. 3 or the person information detecting device 200 shown in FIG. 12 are denoted by the same reference numerals, and their detailed description is omitted.
  • The package information detecting device 500 receives analog video sent from the camera 10 or digital video sent from an image recording device (not shown) or the like, detects from the input video the number of packages flowing on the belt conveyor and the images of the sides and tops of the packages, and outputs these as passage information.
  • The side image collating unit 521 is a processing unit that identifies the attributes of each package flowing on the belt conveyor based on the image of the side label detected by the peripheral region processing unit 116. Specifically, the side image collating unit 521 detects the characters indicated on the label from the image of the side label detected by the peripheral region processing unit 116 and identifies the package attributes by collating the detected characters against pre-registered characters using a known character collation technique.
  • The upper surface image collating unit 522 is a processing unit that identifies the attributes of each package flowing on the belt conveyor based on the image of the top label detected by the central region processing unit 117. Specifically, the upper surface image collating unit 522 detects the characters indicated on the label from the image of the top label detected by the central region processing unit 117 and identifies the package attributes by collating the detected characters against pre-registered characters using a known character collation technique.
  • As described above, in the fifth embodiment, the side image collating unit 521 identifies each package by collating the information indicated on the side label detected by the peripheral region processing unit 116 with moving body identification information stored in advance, and the upper surface image collating unit 522 identifies each package by collating the information indicated on the top label detected by the central region processing unit 117 with the same stored identification information. As a result, a predetermined package to be detected can be found, enhancing the package detection function.
  • The package attributes may also be identified using the colors of the labels instead of the characters.
  • As described above, in the fifth embodiment, the peripheral region processing unit 116 detects the information indicated on the side label of each package using the images of packages shown in the peripheral part of the area photographed by the camera 10, and the central region processing unit 117 detects the information indicated on the top label using the images of packages shown in the central part of that area.
  • The processing result integration unit 118 then detects, as passage information, information that associates, for each package, the label information detected by the peripheral region processing unit 116 with the label information detected by the central region processing unit 117. Passing packages can therefore be counted and the sides and tops of the packages recognized simultaneously (a sketch of this association follows below).
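As a final sketch, integrating the side and top label readings into passage information might look as follows; read_label_text stands in for the "known character collation technique", and the attribute table contents are purely illustrative assumptions.

```python
# Illustrative attribute table: label text -> package attributes.
ATTRIBUTES = {
    "FRAGILE-12": {"type": "glassware", "weight_kg": 12},
    "BULK-30": {"type": "bulk goods", "weight_kg": 30},
}

def identify_package(side_label_img, top_label_img, read_label_text):
    """Associate the side label (from the peripheral region) with the top
    label (from the central region) for one package and look up its
    attributes; falls back to whichever label was readable."""
    side_text = read_label_text(side_label_img)
    top_text = read_label_text(top_label_img)
    attrs = ATTRIBUTES.get(side_text) or ATTRIBUTES.get(top_text)
    return {"side_label": side_text, "top_label": top_text, "attributes": attrs}
```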
  • The moving body information detection apparatus (the person information detection apparatus and the package information detection apparatus) has been described in the embodiments above.
  • By realizing the configuration of the moving body information detection apparatus in software, a moving body information detection program having the same functions can be obtained.
  • Therefore, a computer that executes this moving body information detection program is described below.
  • FIG. 18 is a functional block diagram illustrating a configuration of a computer that executes a mobile object information detection program according to the present embodiment.
  • As shown in the figure, the computer 600 has a RAM (Random Access Memory) 610, a CPU (Central Processing Unit) 620, a ROM (Read Only Memory) 630, a LAN (Local Area Network) interface 640, and a camera interface 650.
  • The RAM 610 is a memory that stores programs and program execution results, and the CPU 620 is a central processing unit that reads programs from the RAM 610 and the ROM 630 and executes them.
  • The ROM 630 is a memory that stores programs and data, the LAN interface 640 is an interface for connecting the computer 600 via a LAN to other computers or to a terminal device of the monitoring center, and the camera interface 650 is an interface for connecting surveillance cameras.
  • The moving body information detection program 611 executed on the computer 600 is stored in advance in the ROM 630 and is executed by the CPU 620 as a moving body information detection task 621.
  • In the embodiments above, the person information detection apparatus that detects persons and the package information detection apparatus that detects packages have been described, but the present invention is not limited to these examples.
  • For example, the present invention can be applied in the same manner to detecting running cars.
  • Each component of each illustrated apparatus is functionally conceptual and does not necessarily need to be physically configured as illustrated.
  • The specific form of distribution and integration of each apparatus is not limited to that shown in the figures; all or part of it can be functionally or physically distributed or integrated in arbitrary units according to various loads and usage conditions.
  • As described above, the moving body information detection apparatus, moving body information detection method, and moving body information detection program according to the present invention are useful for detecting passage information about a moving body that passes through a predetermined area, using an imaging device that photographs that area, and are particularly suitable for simultaneously detecting and recognizing a moving body without performing pan/tilt/zoom with the imaging device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A person information detection device detects passage information on persons passing a monitored area by using a camera (10) placed above the area. The device performs image processing that detects individual persons appearing in the center of the area imaged by the camera (10), using the images of those persons, and image processing that detects the faces of persons appearing in the periphery of that center. The device then detects, for each person, information associating the detected face with the person's travel route, and this information is detected as passage information together with the number of passing persons and their face images.

Description

Specification

Mobile object information detection apparatus, mobile object information detection method, and mobile object information detection program

Technical Field

[0001] The present invention relates to a moving body information detection device, a moving body information detection method, and a moving body information detection program for detecting passage information about a moving body that passes through a predetermined area, using an imaging device that captures that area. In particular, the present invention relates to a mobile object information detection apparatus, a mobile object information detection method, and a mobile object information detection program capable of simultaneously detecting and recognizing a mobile object without performing pan/tilt/zoom with the imaging device.

Background Art

[0002] Conventionally, there are person information detection devices that automatically detect (recognize) a person by photographing a person passing through a passage with a camera installed overhead, such as on a ceiling, and processing the resulting image. Such a person information detection device is used, for example, in an entrance/exit monitoring system that monitors persons entering and leaving a facility, or in a passing person counting system that counts the number of persons passing along a sidewalk or passage.

[0003] In such entrance/exit monitoring systems and passing person counting systems, it is desired that entering, exiting, and passing persons can be detected (counted) without omission and that the system can be realized with a small number of cameras from the viewpoint of introduction and operation costs. Furthermore, with security awareness rising in recent years, acquiring a face image as information for identifying a passing person is also regarded as important.

[0004] Methods proposed so far to meet these demands include a method using a camera that photographs the monitored area from directly above, a method using a plurality of cameras, and a method using a camera with a pan/tilt/zoom function (a PTZ camera).
[0005] As a method using a camera that photographs the monitored area from directly above, there is, for example, the method described in Patent Document 1 (Japanese Patent Laid-Open No. 6-223157). FIGS. 19 to 22 are diagrams (1) to (4) for explaining the method of Patent Document 1. In this method, a camera is mounted on the ceiling pointing straight down (see FIG. 19), changes in the captured image (see FIG. 20) are recognized by image processing, and each change is detected as a passing person. Since the camera directly faces the passage, only passing persons are photographed; unlike the case of photographing obliquely toward the direction from which a person approaches (see FIG. 21), another person never appears behind in the depth direction (see FIG. 22), so passing persons can be distinguished one by one and detected correctly.
[0006] As a method using a plurality of cameras, there is, for example, the method described in Patent Document 2 (Japanese Patent Laid-Open No. 9-54894). Figure 23 is a diagram for explaining the technique of Patent Document 2. As shown in the figure, this method installs cameras not only directly overhead but also in the direction from which persons approach, so that the faces of passing persons are also recognized. This makes it possible not only to count passing persons but also to acquire and recognize their face images.

[0007] As a method using a PTZ camera, there is, for example, the method described in Patent Document 3 (Japanese Patent Laid-Open No. 10-188145). Figure 24 is a diagram for explaining the method of Patent Document 3. As shown in the figure, this method detects a passing person in a wide-angle video and pans, tilts, and zooms the camera to the detected position to acquire and recognize the person's face image. A single camera can thus both detect (count) persons and acquire and recognize face images.

[0008] As described above, various conventional techniques for detecting passing persons have been proposed, each with its own features.
[0009] Patent Document 1: Japanese Patent Laid-Open No. 6-223157
Patent Document 2: Japanese Patent Laid-Open No. 9-54894
Patent Document 3: Japanese Patent Laid-Open No. 10-188145
Disclosure of the Invention
Problems to Be Solved by the Invention
[0010] However, the conventional techniques represented by Patent Documents 1, 2, and 3 have the following problems.

[0011] First, with the method using a camera that photographs the monitored area from directly above, passing persons can be counted, but a face image for recognizing a person cannot be acquired, so the method is not suitable for security applications.

[0012] The method using a plurality of cameras has the following problems. Figure 25 is a diagram for explaining the problems of the method using a plurality of cameras. As shown in the figure, in order to associate the same person appearing on different cameras by image processing, each camera must be installed with strict precision; considering that installation positions shift over time, the cost of on-site adjustment is large. In addition, since a plurality of cameras is used, there are problems of introduction cost and installation space.

[0013] The method using a PTZ camera also has the following problems. Figure 26 is a diagram for explaining the problems of the method using a PTZ camera. As shown in the figure, because this method pans, tilts, and zooms the camera to focus on a single passing person, not all passing persons can be detected when several persons pass through the monitored area at the same time. Using a plurality of PTZ cameras to address this raises the same problems as the method using a plurality of cameras. Furthermore, pan, tilt, and zoom operations cause mechanical wear on the camera, making long-term use difficult to guarantee.

[0014] The present invention has been made to solve the above problems of the prior art, and its object is to provide a moving body information detection apparatus, a moving body information detection method, and a moving body information detection program capable of simultaneously detecting and recognizing moving bodies without performing pan, tilt, or zoom with the imaging device.
Means for Solving the Problems
[0015] To solve the above problems and achieve the object, the present invention is a moving body information detection apparatus that detects passage information about moving bodies passing through a predetermined area, using an imaging device that captures the area, the apparatus comprising: central area processing means for performing first image processing using the image of a moving body shown in the central portion of the area captured by the imaging device; peripheral area processing means for performing second image processing using the image of a moving body shown in the peripheral portion surrounding the central portion; and passage information detection means for detecting, as the passage information, information that associates, for each moving body, the first information detected by the first image processing of the central area processing means with the second information detected by the second image processing of the peripheral area processing means.

[0016] In the above invention, the central area processing means and/or the peripheral area processing means calculate the reliability of the detected first information and/or second information, and the passage information detection means associates the first information with the second information based on the calculated reliability.

[0017] In the above invention, when the reliability calculated by the central area processing means and/or the peripheral area processing means is low, the passage information detection means instructs the central area processing means and/or the peripheral area processing means to re-detect the first information and/or the second information.

[0018] The above invention further comprises moving body matching means for identifying a moving body by matching the first information detected in the first image processing by the central area processing means and/or the second information detected in the second image processing by the peripheral area processing means against moving body identification information stored in advance.

[0019] In the above invention, when the moving body matching means matches the first information and/or the second information and no matching entry exists in the moving body identification information, the moving body matching means adds the first information and/or the second information to the moving body identification information.

[0020] In the above invention, the central area processing means detects the travel path of the moving body in the first image processing, the peripheral area processing means detects attribute information of the moving body in the second image processing, and the passage information detection means detects, as the passage information, information that associates, for each moving body, the travel path detected by the central area processing means with the attribute information detected by the peripheral area processing means.

[0021] In the above invention, the moving body is a person; the central area processing means detects the travel path of the person in the first image processing; the peripheral area processing means detects face information of the person in the second image processing; and the passage information detection means detects, as the passage information, information that associates, for each person, the face information detected by the peripheral area processing means with the travel path detected by the central area processing means.

[0022] In the above invention, the imaging device photographs the predetermined area using a wide-angle lens.

[0023] In the above invention, the imaging device photographs the predetermined area using a fisheye lens.

[0024] The present invention is also a moving body information detection method for detecting passage information about moving bodies passing through a predetermined area, using an imaging device that captures the area, the method comprising: a central area processing step of performing first image processing using the image of a moving body shown in the central portion of the area captured by the imaging device; a peripheral area processing step of performing second image processing using the image of a moving body shown in the peripheral portion surrounding the central portion; and a passage information detection step of detecting, as the passage information, information that associates, for each moving body, the first information detected by the first image processing of the central area processing step with the second information detected by the second image processing of the peripheral area processing step.

[0025] The present invention is also a moving body information detection program for detecting passage information about moving bodies passing through a predetermined area, using an imaging device that captures the area, the program causing a computer to execute: a central area processing procedure for performing first image processing using the image of a moving body shown in the central portion of the area captured by the imaging device; a peripheral area processing procedure for performing second image processing using the image of a moving body shown in the peripheral portion surrounding the central portion; and a passage information detection procedure for detecting, as the passage information, information that associates, for each moving body, the first information detected by the first image processing of the central area processing procedure with the second information detected by the second image processing of the peripheral area processing procedure.
Effects of the Invention
[0026] According to the present invention, first image processing is performed using the image of a moving body shown in the central portion of the area captured by the imaging device, second image processing is performed using the image of a moving body shown in the peripheral portion surrounding the central portion, and information that associates, for each moving body, the first information detected by the first image processing with the second information detected by the second image processing is detected as the passage information. It is therefore possible to detect the number of passing moving bodies, their travel paths, and the like based on the first information while recognizing the moving bodies based on the second information, so that moving bodies can be detected and recognized simultaneously without performing pan, tilt, or zoom with the imaging device. Moreover, since this configuration can be realized with a single imaging device, troublesome position adjustment at the installation site and the cost of introduction can be reduced.

[0027] Further, according to the present invention, the reliability of the detected first information and/or second information is calculated, and the first information is associated with the second information based on the calculated reliability. Adverse effects such as degradation of moving body detection performance caused by disturbance factors such as changes in illuminance are thereby suppressed, and passage information about persons can be detected stably.

[0028] Further, according to the present invention, re-detection of the first information and/or the second information is instructed when the calculated reliability is low, so that moving body information can be detected with high accuracy even when disturbance factors such as changes in illuminance create conditions in which detection errors are likely.

[0029] Further, according to the present invention, a moving body is identified by matching the first information detected in the first image processing and/or the second information detected in the second image processing against moving body identification information stored in advance, so that a moving body can not only be recognized but also identified. This makes it possible, for example, to total the number of passages for each moving body or to detect a predetermined moving body to be monitored, enhancing the functionality of moving body detection.

[0030] Further, according to the present invention, when the first information and/or the second information is matched and no matching entry exists in the moving body identification information, the first information and/or the second information is added to the moving body identification information, so that the set of identifiable moving bodies can be learned and expanded, automatically improving the moving body detection function.

[0031] Further, according to the present invention, the travel path of a moving body is detected in the first image processing, attribute information of the moving body is detected in the second image processing, and information that associates the detected travel path with the attribute information for each moving body is detected as the passage information, so that a moving body can be recognized based on its attribute information while its travel path is detected at the same time.

[0032] Further, according to the present invention, the moving body is a person, the travel path of the person is detected in the first image processing, face information of the person is detected in the second image processing, and information that associates the detected travel path with the face information for each person is detected as the passage information, so that a person can be recognized based on the face information while the person's travel path is detected at the same time.

[0033] Further, according to the present invention, the imaging device photographs the predetermined area using a wide-angle lens, so that the captured area is wider than with a normal lens, more passage information can be detected for each moving body, and the accuracy of detection and recognition of moving bodies can be improved.

[0034] Further, according to the present invention, the imaging device photographs the predetermined area using a fisheye lens, so that the captured area is even wider than with a normal lens or a wide-angle lens, more passage information can be detected for each moving body, and the accuracy of detection and recognition of moving bodies can be improved.
Brief Description of the Drawings
[0035]
[Figure 1] Figure 1 is a diagram (1) for explaining the concept of the person information detection apparatus according to the first embodiment.
[Figure 2] Figure 2 is a diagram (2) for explaining the concept of the person information detection apparatus according to the first embodiment.
[Figure 3] Figure 3 is a functional block diagram showing the configuration of the person information detection apparatus according to the first embodiment.
[Figure 4] Figure 4 is a diagram (1) for explaining an example of image conversion of the peripheral area by the processed image conversion unit.
[Figure 5] Figure 5 is a diagram (2) for explaining an example of image conversion of the peripheral area by the processed image conversion unit.
[Figure 6] Figure 6 is a diagram (3) for explaining an example of image conversion of the peripheral area by the processed image conversion unit.
[Figure 7] Figure 7 is a diagram (1) for explaining an example of image conversion of the central area by the processed image conversion unit.
[Figure 8] Figure 8 is a diagram (2) for explaining an example of image conversion of the central area by the processed image conversion unit.
[Figure 9] Figure 9 is a diagram for explaining an example of person detection by the central area processing unit.
[Figure 10] Figure 10 is a diagram for explaining person association by the processing result integration unit.
[Figure 11] Figure 11 is a flowchart showing the processing procedure of the person information detection apparatus according to the first embodiment.
[Figure 12] Figure 12 is a functional block diagram showing the configuration of the person information detection apparatus according to the second embodiment.
[Figure 13] Figure 13 is a diagram for explaining re-detection by the reliability determination unit.
[Figure 14] Figure 14 is a functional block diagram showing the configuration of the person information detection apparatus according to the third embodiment.
[Figure 15] Figure 15 is a functional block diagram showing the configuration of the person information detection apparatus according to the fourth embodiment.
[Figure 16] Figure 16 is an explanatory diagram for explaining the concept of the baggage information detection apparatus according to the fifth embodiment.
[Figure 17] Figure 17 is a functional block diagram showing the configuration of the baggage information detection apparatus according to the fifth embodiment.
[Figure 18] Figure 18 is a functional block diagram showing the configuration of a computer that executes the moving body information detection program according to the embodiments.
[Figure 19] Figure 19 is a diagram (1) for explaining the method of Patent Document 1.
[Figure 20] Figure 20 is a diagram (2) for explaining the method of Patent Document 1.
[Figure 21] Figure 21 is a diagram (3) for explaining the method of Patent Document 1.
[Figure 22] Figure 22 is a diagram (4) for explaining the method of Patent Document 1.
[Figure 23] Figure 23 is a diagram for explaining the method of Patent Document 2.
[Figure 24] Figure 24 is a diagram for explaining the method of Patent Document 3.
[Figure 25] Figure 25 is a diagram for explaining the problems of the method using a plurality of cameras.
[Figure 26] Figure 26 is a diagram for explaining the problems of the method using a PTZ camera.
Explanation of Symbols
100, 200, 300, 400: Person information detection apparatus
500: Baggage information detection apparatus
110: A/D conversion unit
111: Image storage unit
112: Peripheral area processing result storage unit
113: Central area processing result storage unit
114: Image control unit
115: Processed image conversion unit
116: Peripheral area processing unit
117: Central area processing unit
118: Processing result integration unit
219: Reliability determination unit
320: Face image matching unit
521: Side image matching unit
522: Top image matching unit
600: Computer
610: RAM
611: Moving body information detection program
620: CPU
621: Moving body information detection task
630: ROM
640: LAN interface
650: Camera interface
Best Mode for Carrying Out the Invention
[0037] Exemplary embodiments of the moving body information detection apparatus, moving body information detection method, and moving body information detection program according to the present invention will be described below in detail with reference to the accompanying drawings. The description of the embodiments centers on the case where the present invention is applied to a person information detection apparatus that detects passage information about persons passing through a passage.

First Embodiment
[0038] First, the concept of the person information detection apparatus according to the first embodiment will be described. Figures 1 and 2 are diagrams (1) and (2) for explaining the concept of the person information detection apparatus according to the first embodiment.

[0039] The person information detection apparatus according to the first embodiment simultaneously performs different image processing on each region of the video captured by a surveillance camera installed above the area to be monitored, for example on the ceiling, thereby detecting passing persons and acquiring and recognizing their face images.

[0040] The surveillance camera used here captures images through a wide-angle lens such as a fisheye lens. With a wide-angle lens, the captured area is wider than with a normal lens, more passage information can be detected for each person, and the accuracy of person detection and recognition can be improved.

[0041] Specifically, as shown in Figure 1, a camera 10 having a wide-angle lens is installed above the monitored area and photographs passing persons. For example, as shown in the figure, suppose that person A passes under the camera 10 between times T1 and T3. In this case, the camera 10 captures video as shown in Figure 2. As shown in the figure, at time T1 person A is away from the point directly below the camera 10 (in the peripheral portion), so even the face is captured, whereas at time T2 person A is near the point directly below the camera 10 (in the central portion), so only the head is captured.

[0042] The person information detection apparatus therefore performs two different image processing operations simultaneously: a process that detects the number of passing persons using the central portion of the video, and a process that detects and recognizes the face images of passing persons using the peripheral portion. Furthermore, by tracking the faces detected in the peripheral portion and the persons detected in the central portion, each passing person is matched (associated) with a face.

[0043] In this way, the person information detection apparatus according to the first embodiment can simultaneously detect (count) passing persons and acquire (recognize) their face images without panning, tilting, or zooming the camera, and, by associating passing persons with faces, it can realize a system that not only overcomes the problems of the prior art but also provides new functions such as highly accurate counting of passing persons and acquisition of their movement trajectories.
[0044] Next, the configuration of the person information detection apparatus according to the first embodiment will be described. Figure 3 is a functional block diagram showing the configuration of the person information detection apparatus according to the first embodiment. As shown in the figure, the person information detection apparatus 100 receives analog video sent from the camera 10 or digital video sent from an image recording device (not shown), detects from the input video the number of passing persons, their face images, and the like, and outputs these as passage information. It comprises an A/D conversion unit 110, an image storage unit 111, a peripheral area processing result storage unit 112, a central area processing result storage unit 113, an image control unit 114, a processed image conversion unit 115, a peripheral area processing unit 116, a central area processing unit 117, and a processing result integration unit 118.

[0045] The A/D conversion unit 110 is a processing unit that digitizes the analog video sent from the camera 10. Specifically, the A/D conversion unit 110 converts the analog video sent from the camera 10 into digital video and sends the converted digital video to the image control unit 114. When the input video is already digital, the A/D conversion unit 110 is unnecessary in the configuration.

[0046] The image storage unit 111 is a storage unit that stores the video data of the digitized input video. Specifically, the image storage unit 111 stores the video data sequentially sent from the image control unit 114, described later, in time series at predetermined time intervals for a predetermined period.

[0047] The peripheral area processing result storage unit 112 is a storage unit that stores the results of the face detection processing and the video data used for that processing. Specifically, the peripheral area processing result storage unit 112 stores, each in time series at predetermined time intervals, the face detection results sent from the peripheral area processing unit 116, described later, and the integration results sent from the processing result integration unit 118, described later.

[0048] The central area processing result storage unit 113 is a storage unit that stores the results of the person detection processing and the video data used for that processing. Specifically, the central area processing result storage unit 113 stores, each in time series at predetermined time intervals, the person detection results sent from the central area processing unit 117, described later, and the integration results sent from the processing result integration unit 118, described later.

[0049] The image control unit 114 is a processing unit that controls the input of digital video sent from the A/D conversion unit 110 or an image recording device (not shown) and the input and output of that video to and from the image storage unit 111. Specifically, the image control unit 114 receives digital video from the A/D conversion unit 110 or the image recording device and sequentially sends it to the image storage unit 111 at predetermined time intervals. When video data is accumulated in the image storage unit 111 and the processed image conversion unit 115 is ready to process, the image control unit 114 sequentially retrieves the accumulated video data and sends it to the processed image conversion unit 115.
[0050] The processed image conversion unit 115 is a processing unit that converts the digital video sequentially sent from the image control unit 114 into images to be processed by the peripheral area processing unit 116 and the central area processing unit 117. Specifically, the processed image conversion unit 115 divides the digital video into a peripheral portion image and a central portion image, converts each, and then sends the peripheral portion image to the peripheral area processing unit 116 and the central portion image to the central area processing unit 117. The image conversion of the peripheral portion and that of the central portion are described below.

[0051] First, the image conversion of the peripheral portion will be described. Figures 4, 5, and 6 are diagrams (1), (2), and (3) for explaining examples of image conversion of the peripheral area by the processed image conversion unit 115. For example, as shown in Figure 4, the processed image conversion unit 115 excludes the central region of the input video from processing and converts the peripheral region into a rectangular image by unrolling it so that line P-Q and line R-S become opposite sides. In this case, as shown in the figure, the unrolled result may be produced as two images or as one. Alternatively, as shown in Figure 5, the processed image conversion unit 115 may exclude the central portion from processing by masking it out, without unrolling the image.
[0052] As shown in Figure 6, when the input video is a fisheye video, the processed image conversion unit 115 may convert it from a fisheye video into a normal video using a known video conversion technique, producing a distortion-free rectangular image having line T-U and line V-W as opposite sides.

[0053] A known video conversion technique that can be used here is, for example, the one described in Kobayashi et al., "Correction of Fisheye Lens Camera Images", Proceedings of the 2005 IEICE General Conference, A-4-19, IEICE.

[0054] Next, the image conversion of the central region will be described. Figures 7 and 8 are diagrams (1) and (2) for explaining examples of image conversion of the central region by the processed image conversion unit 115. For example, as shown in Figure 7, the processed image conversion unit 115 excludes the peripheral region of the input image from processing and converts the image into one containing only the central portion. Alternatively, as shown in Figure 8, it may exclude the peripheral portion from processing by masking it out.

[0055] As shown in Figure 6, when the input video is a fisheye video, the processed image conversion unit 115 may likewise convert it from a fisheye video into a normal video using a known video conversion technique, in the same way as for the peripheral region.
[0057] ここで用いられる公知の顔検出技術としては、例えば、この顔の検出処理には、「H.  As a known face detection technique used here, for example, in this face detection process, “H.
A. Rowley他、 Neural network-based face detection 、 IEEE Trans, on Pattern Analysis and Machine Intelligence Vol.20、 No.l、 pp.23- 38、 1998」に記載されて いる技術などが挙げられる。  A. Rowley et al., Neural network-based face detection, IEEE Trans, on Pattern Analysis and Machine Intelligence Vol. 20, No. 1, pp. 23-38, 1998 ”.
[0058] さらに、周辺領域処理部 116は、周辺領域処理結果記憶部 112に保持されている 前時刻の検出結果と現時刻の検出結果とを比較することによって、画像内の顔の進 行経路も検出する。具体的には、例えば、前時刻で検出された顔映像と現時刻で検 出された顔映像とを表す値を用いて、以下に示す式(1)で示される正規化相関を計 算し、一定以上の相関値が得られた場合に、それぞれの顔映像の顔が同一であると 判定する。式(1)において、「X」および「Y」は、それぞれ、前時刻および現時刻で検 出した顔映像の画素値を示し、「χ'— (バー)」および「 —(バー)」は、それぞれ、 前時刻および現時刻で検出した顔映像の平均値を示す。 Further, the peripheral region processing unit 116 compares the detection result of the previous time and the detection result of the current time held in the peripheral region processing result storage unit 112, thereby moving the path of the face in the image. Also detect. Specifically, for example, the face image detected at the previous time and the current time are detected. The normalized correlation shown in the following formula (1) is calculated using the value representing the face image that is output, and when a correlation value greater than a certain value is obtained, the face of each face image is Judge that they are the same. In Equation (1), “X” and “Y” indicate the pixel values of the face image detected at the previous time and the current time, respectively. “Χ′— (bar)” and “— (bar)” The average values of the face images detected at the previous time and the current time are shown.
[0059]
$$r = \frac{\sum_{i}\left(X_i - \bar{X}\right)\left(Y_i - \bar{Y}\right)}{\sqrt{\sum_{i}\left(X_i - \bar{X}\right)^2}\;\sqrt{\sum_{i}\left(Y_i - \bar{Y}\right)^2}} \qquad (1)$$
[0060] By making this judgment, each person can be distinguished and each person's travel path can be detected even when several persons are present at the same time. The peripheral area processing unit 116 also sends these travel path detection results to the peripheral area processing result storage unit 112.
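A direct transcription of Equation (1) and of the same-face judgment might look like the following sketch; the function names and the threshold value are assumptions for illustration, since the patent only requires "a correlation value above a certain level".

```python
import numpy as np

def normalized_correlation(x: np.ndarray, y: np.ndarray) -> float:
    """Equation (1): normalized correlation of two equally sized image
    patches about their respective mean values."""
    xd = x.astype(float) - x.mean()
    yd = y.astype(float) - y.mean()
    denom = np.sqrt((xd ** 2).sum() * (yd ** 2).sum())
    return float((xd * yd).sum() / denom) if denom > 0 else 0.0

def same_face(prev_patch: np.ndarray, cur_patch: np.ndarray,
              threshold: float = 0.8) -> bool:
    """Judge the detections at the previous and current time to be the
    same face when the correlation clears a threshold (0.8 is an
    assumed value, not one given in the patent)."""
    return normalized_correlation(prev_patch, cur_patch) >= threshold
```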
[0061] In the peripheral portion image, only the back of a person may be captured, with no face visible. In that case, the peripheral area processing unit 116 judges that the detection result is a rear-facing head, for example by having been trained on images of the back of the head in advance, and estimates the person's position based on the person's travel path information detected by the processing result integration unit 118, described later.
[0062] The central area processing unit 117 is a processing unit that performs person detection processing on the central portion image converted by the processed image conversion unit 115. Specifically, the central area processing unit 117 performs person (silhouette) detection on the central portion image sent from the processed image conversion unit 115 using a known person detection technique.

[0063] A known person detection technique that can be used here is, for example, the one described in Tsujimura et al., "A Study on a Subtraction System with Automatic Background Updating", Proceedings of the IEICE General Conference, 2003. This technique detects persons present in the central region by the background subtraction method: the difference between the input image and a reference image in which no detection target appears, called the background image, is computed, and regions with differences are detected as objects.
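In the spirit of the cited technique, a minimal background-subtraction sketch with an automatically updated background might look as follows; the update rate and difference threshold are assumed values, not parameters from the patent or the cited paper.

```python
import numpy as np

class BackgroundSubtractor:
    """Background subtraction with a slowly updated background image:
    pixels that differ strongly from the background form the foreground
    mask; unchanged pixels update the background by a running average
    (one simple form of automatic updating)."""

    def __init__(self, first_frame: np.ndarray,
                 alpha: float = 0.02, threshold: float = 25.0):
        self.background = first_frame.astype(float)
        self.alpha = alpha          # assumed update rate
        self.threshold = threshold  # assumed per-pixel threshold

    def apply(self, frame: np.ndarray) -> np.ndarray:
        f = frame.astype(float)
        mask = np.abs(f - self.background) > self.threshold
        # absorb gradual scene changes only where nothing was detected
        still = ~mask
        self.background[still] = ((1.0 - self.alpha) * self.background[still]
                                  + self.alpha * f[still])
        return mask
```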
[0064] By performing such person detection processing, the central area processing unit 117 obtains as detection results the position and size of each person in the input image, the person's image (the head portion), and an evaluation value of the detection process. Figure 9 is a diagram for explaining an example of person detection by the central area processing unit 117. For example, as shown in the figure, the central area processing unit 117 compares a person model prepared in advance with the background subtraction result and uses their degree of agreement as the evaluation value.
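The "degree of agreement" between the subtraction result and the prepared person model is not spelled out in this excerpt; one plausible reading is the overlap of two binary masks, sketched below as intersection over union. Treat the measure itself as an assumption.

```python
import numpy as np

def model_agreement(foreground: np.ndarray, person_model: np.ndarray) -> float:
    """Evaluation value for a person candidate: overlap between the
    background-subtraction mask and a binary person (head) model placed
    at the candidate position, both boolean arrays of the same shape."""
    inter = np.logical_and(foreground, person_model).sum()
    union = np.logical_or(foreground, person_model).sum()
    return float(inter) / float(union) if union else 0.0
```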
[0065] The central area processing unit 117 then sends these detection results to the central area processing result storage unit 113. Like the peripheral area processing unit 116, the central area processing unit 117 also detects each person's travel path within the image by comparing the detection results of the previous time, held in the central area processing result storage unit 113, with the detection results of the current time. For example, like the peripheral area processing unit 116, the central area processing unit 117 compares person images using the normalized correlation value given by Equation (1).

[0066] The processing result integration unit 118 is a processing unit that integrates the face detection results held in the peripheral area processing result storage unit 112 with the person detection results held in the central area processing result storage unit 113, counts the persons who have passed, and acquires the face images of the persons who have passed.

[0067] The processing result integration unit 118 also integrates the processing results of the peripheral area processing unit 116 and the central area processing unit 117 and associates each passing person with a face image. Specifically, the processing result integration unit 118 compares the travel paths of persons (faces) detected in the peripheral portion image with the travel paths of persons detected in the central portion image, and associates the pairs that best satisfy a person's motion.

[0068] For example, the processing result integration unit 118 analyzes the trajectory of a point defined at a predetermined position of each face detected in the peripheral portion image (for example, the upper edge of the face) and the trajectory of a point defined at a predetermined position of each person detected in the central portion image (for example, the top of the head), and associates as the same person the face and person whose trajectories connect most naturally.
[0069] Figure 10 is a diagram for explaining person association by the processing result integration unit 118. For example, as shown in the figure, suppose that during a certain period four persons (peripheral persons 1 to 4) are detected in the peripheral region and two persons (central persons 1 and 2) are detected in the central region (see "input image and detection positions" on the left of the figure). In this case, the processing result integration unit 118 integrates the detection results of the two regions and, from the travel paths, associates peripheral person 1, central person 1, and peripheral person 4 as the same person (see "person 1" in the "integration result" on the right of the figure), and associates peripheral person 2, central person 2, and peripheral person 3 as the same person (see "person 2" in the "integration result" on the right of the figure).
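One hypothetical reading of this association rule is a greedy endpoint match: a peripheral track whose end lies close in space, and no later in time, than the start of a central track is linked to it. The patent does not fix a particular matching algorithm, so the sketch below, including the distance limit, is an assumption.

```python
import numpy as np

def link_tracks(peripheral_tracks, central_tracks, max_gap: float = 50.0):
    """Greedily link peripheral tracks to central tracks whose
    trajectories connect most naturally.

    Each track is a time-ordered list of (t, x, y) samples. A peripheral
    track is linked to the unused central track whose first sample comes
    after the peripheral track's last sample in time and is closest in
    space (within max_gap pixels, an assumed limit).
    """
    links, used = [], set()
    for pi, p in enumerate(peripheral_tracks):
        t_end, x_end, y_end = p[-1]
        best, best_d = None, max_gap
        for ci, c in enumerate(central_tracks):
            if ci in used:
                continue
            t0, x0, y0 = c[0]
            d = np.hypot(x0 - x_end, y0 - y_end)
            if t0 >= t_end and d < best_d:
                best, best_d = ci, d
        if best is not None:
            used.add(best)
            links.append((pi, best))   # same person across the regions
    return links
```

A symmetric pass linking central track ends to peripheral track starts would complete paths such as "person 1" in Figure 10, which crosses the central region and re-enters the periphery.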
[0070] The processing result integration unit 118 then notifies a monitoring center or the like of the result of associating each person's travel path in the input image with that person's face image, and also sends the result to the peripheral area processing result storage unit 112 and the central area processing result storage unit 113.

[0071] Next, the procedure of the processing performed in the person information detection apparatus 100 according to the first embodiment will be described. Figure 11 is a flowchart showing the processing procedure of the person information detection apparatus 100 according to the first embodiment. As shown in the figure, in the person information detection apparatus 100, the A/D conversion unit 110 first digitizes the analog video sent from the camera 10 (step S101), and the image control unit 114 then divides the input image into a peripheral region and a central region (step S102).

[0072] Then, for the divided input image, the peripheral area processing unit 116 detects information about faces from the peripheral portion image (step S103), and the central area processing unit 117 detects information about persons from the central portion image (step S104).

[0073] After that, the processing result integration unit 118 associates the face detection results with the person detection results (step S105) and notifies a monitoring center or the like of the processing result (step S106).
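Rendered as a skeleton, one cycle of the flowchart of Figure 11 might look like the following; the injected callables stand for the units described above, and their names are hypothetical.

```python
def process_frame(frame, split_input, detect_faces, detect_persons,
                  associate, notify):
    """One cycle of steps S102-S106 of Figure 11; digitization (S101)
    is assumed to have happened upstream. The processing stages are
    passed in as callables so the skeleton stays independent of any
    concrete detector implementation."""
    central_img, peripheral_img = split_input(frame)   # S102
    faces = detect_faces(peripheral_img)               # S103
    persons = detect_persons(central_img)              # S104
    passage_info = associate(faces, persons)           # S105
    notify(passage_info)                               # S106
    return passage_info
```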
[0074] As described above, in the first embodiment, the central area processing unit 117 performs image processing that detects individual persons using the images of persons shown in the central portion of the area captured by the camera 10, the peripheral area processing unit 116 performs image processing that detects persons' faces using the images of persons shown in the peripheral portion surrounding that central portion, and the processing result integration unit 118 detects, as passage information, information that associates, for each person, the individual persons detected by the central area processing unit 117 with the faces detected by the peripheral area processing unit 116. It is therefore possible to recognize persons based on their faces while detecting the number of passages, travel paths, and the like based on the individually detected persons, so that persons can be detected and recognized simultaneously without panning, tilting, or zooming the camera 10. Moreover, since this configuration can be realized with a single imaging device, troublesome position adjustment at the installation site and the cost of introduction can be reduced.

[0075] The passage information detected in this way can be used in various ways. For example, it can be recorded as log information and searched when a crime occurs to track a suspicious person, or used to survey the attributes and number of people passing candidate sites when a new commercial facility is to be built.
Second Embodiment
[0076] In the first embodiment, the case where the face detection results from the peripheral portion image and the person detection results from the central portion image are simply integrated has been described. However, re-detection may be performed based on the reliability of the results of each detection process, and detection results with high reliability may be used to complement those with low reliability. In the following, the case where control based on such reliability is performed is described as the second embodiment.

[0077] Figure 12 is a functional block diagram showing the configuration of the person information detection apparatus according to the second embodiment. Among the functional units shown in the figure, those that play the same roles as the functional units of the person information detection apparatus 100 shown in Figure 3 are given the same reference numerals, and their detailed description is omitted here.

[0078] As shown in the figure, the person information detection apparatus 200 receives analog video sent from the camera 10 or digital video sent from an image recording device (not shown), detects from the input video the number of passing persons, their face images, and the like, and outputs these as passage information. It comprises an A/D conversion unit 110, an image storage unit 111, a peripheral area processing result storage unit 112, a central area processing result storage unit 113, an image control unit 114, a processed image conversion unit 115, a peripheral area processing unit 116, a central area processing unit 117, a processing result integration unit 118, and a reliability determination unit 219.
[0079] 信頼度判定部 219は、周辺領域処理部 116および中央領域処理部 117によって 検出された検出結果について、その信頼度を判定し、信頼度が低い場合には顔や 人物の再検出を行ったり、信頼度が高い検出結果を用いて、信頼度が低い検出結 果を補完しながら、各検出結果を統合したりするよう制御する処理部である。なお、こ こでいう信頼度には、例えば、周辺部分の画像については、ニューラルネットワーク による顔検出結果の評価値 (周辺領域処理部 116による顔検出処理の説明を参照) が用いられ、中央部分の画像については、背景差分結果の人物モデルとの一致度( 中央領域処理部 117による人物検出処理の説明を参照)が用いられる。 [0079] The reliability determination unit 219 determines the reliability of the detection results detected by the peripheral region processing unit 116 and the central region processing unit 117, and re-detects a face or a person when the reliability is low. It is a processing unit that performs control such that each detection result is integrated while complementing the detection result with low reliability using the detection result with high reliability. Note that the reliability here refers to, for example, the evaluation value of the face detection result by the neural network for the peripheral image (see the description of the face detection process by the peripheral region processing unit 116). The degree of coincidence with the person model of the background difference result (see the description of the person detection process by the center area processing unit 117) is used for the image of the center part.
[0080] FIG. 13 is a diagram for explaining re-detection by the reliability determination unit 219. Suppose, as shown in the figure, that the reliability of the face detection result in the peripheral region is low while the reliability of the person detection result in the central region is high (see "input image and detection positions" on the left of the figure). In this case, the reliability determination unit 219 uses the person detection result in the central region and instructs the processed image conversion unit 115 so that the peripheral region processing unit 116 performs face detection again around the position through which the person is presumed to have passed (see "re-detection by the reliability determination unit" on the right of the figure).
[0081] Because the reliability determination unit 219 re-detects the face or person when the reliability is low, moving body information can be detected with high accuracy even when a disturbance such as a change in illuminance creates conditions in which errors in moving body detection are likely.
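The control policy of paragraphs [0079] to [0081] can be summarized in code. The patent prescribes no programming language, data model, or threshold, so the Python sketch below is purely illustrative: the `Detection` structure, the `RELIABILITY_THRESHOLD` value, and the `face_detector` callback are all assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

RELIABILITY_THRESHOLD = 0.6  # assumed boundary between "low" and "high" reliability

@dataclass
class Detection:
    position: Tuple[float, float]  # image coordinates of the detected face/person
    reliability: float             # NN evaluation value (face) or model-match degree (person)

def redetect_if_unreliable(
    face: Optional[Detection],
    person: Optional[Detection],
    face_detector: Callable[[Tuple[float, float]], Optional[Detection]],
) -> Optional[Detection]:
    """If the peripheral face result is missing or unreliable while the central
    person result is trusted, re-run face detection around the position through
    which the person is presumed to have passed (the behavior of FIG. 13)."""
    face_ok = face is not None and face.reliability >= RELIABILITY_THRESHOLD
    person_ok = person is not None and person.reliability >= RELIABILITY_THRESHOLD
    if not face_ok and person_ok:
        return face_detector(person.position)  # constrained second search
    return face
```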
[0082] The reliability determination unit 219 also controls the integration of detection results performed by the processing result integration unit 118 so that travel paths are associated on the basis of reliability. For example, when several travel paths in the peripheral-portion image could be associated with a given travel path detected in the central-portion image, the reliability determination unit 219 controls the processing result integration unit 118 so that the candidate with the highest reliability is associated with the travel path in the central region.
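As a companion sketch, the association rule of paragraph [0082] amounts to a maximum-reliability selection among compatible candidates. The `compatible` predicate below is an assumption standing in for whatever spatial and temporal continuity test an implementation would actually use, and candidate paths are assumed to carry a `reliability` attribute as in the previous sketch.

```python
def associate_paths(central_path, peripheral_paths, compatible):
    """Associate one central-region travel path with the most reliable of the
    peripheral-region travel paths that could correspond to it.
    Returns None when no peripheral candidate is compatible."""
    candidates = [p for p in peripheral_paths if compatible(central_path, p)]
    return max(candidates, key=lambda p: p.reliability, default=None)
```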
[0083] As described above, in the second embodiment, the reliability of the detection results produced by the peripheral region processing unit 116 and the central region processing unit 117 is calculated, and the reliability determination unit 219 controls the association of travel paths based on the calculated reliability. Adverse effects of disturbances such as illuminance changes, which would otherwise degrade the detection performance for moving bodies, are thereby suppressed, and passage information about persons can be detected stably.
Embodiment 3
[0084] In the first and second embodiments described above, the peripheral region processing unit 116 detects the face image and travel path of a passing person. The person who passed may additionally be identified by comparing the detected face image with face images of persons registered in advance. In the following, a case where a person is identified based on the detected face image is described as a third embodiment.
[0085] FIG. 14 is a functional block diagram showing the configuration of the person information detection apparatus according to the third embodiment. Among the functional units shown in the figure, those playing the same roles as the functional units of the person information detection apparatus 100 shown in FIG. 3 are given the same reference numerals, and their detailed description is omitted here.
[0086] As shown in the figure, this person information detection apparatus 300 receives analog video sent from the camera 10 or digital video sent from an image recording device (not shown), detects from the input video the number of passing persons, face images of the persons who passed, and the like, and outputs these as passage information. It comprises an A/D conversion unit 110, an image storage unit 111, a peripheral region processing result storage unit 112, a central region processing result storage unit 113, an image control unit 114, a processed image conversion unit 115, a peripheral region processing unit 116, a central region processing unit 117, a processing result integration unit 118, and a face image matching unit 320.
[0087] The face image matching unit 320 is a processing unit that identifies a passing person based on the face image detected by the peripheral region processing unit 116. Specifically, the face image matching unit 320 identifies the person who passed by matching the face image detected by the peripheral region processing unit 116 against the face images of persons registered in advance, using a known face matching technique.
[0088] A known face matching technique usable here is, for example, that of Komatsu et al., "Orientation-independent face extraction and recognition using the subspace method", PRU95-191, pp. 7-14, 1996. This technique compares a vector composed of the positions of facial parts such as the eyes, nose, and mouth between each registered face image and the detected face image, and, according to the resulting similarity, identifies the person or lists the persons who are similar.
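As a rough illustration of this kind of matching, the sketch below compares vectors of facial-part positions by cosine similarity. The similarity measure, the acceptance threshold, and the registry layout are assumptions chosen for brevity; the cited subspace method itself is more involved.

```python
import math

def part_vector_similarity(v1, v2):
    """Cosine similarity between two vectors of facial-part positions
    (eyes, nose, mouth, ...). Cosine similarity is an assumed stand-in
    for the similarity measure used by the cited method."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return dot / norm if norm else 0.0

def identify(detected_vector, registry, accept=0.95):
    """Rank registered persons by similarity to the detected vector; identify
    the best match if it clears the (assumed) acceptance threshold, otherwise
    return None together with the ranked list of similar persons."""
    ranked = sorted(
        ((part_vector_similarity(detected_vector, v), name) for name, v in registry.items()),
        reverse=True,
    )
    best_score, best_name = ranked[0] if ranked else (0.0, None)
    return (best_name if best_score >= accept else None), ranked
```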
[0089] By performing such face matching processing, the face image matching unit 320 identifies the person who passed and acquires information such as which person passed, when, and how many times. When the detected face image is not registered at that point in time, the face image matching unit 320 newly registers it so that it can be matched the next time the person passes.
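The automatic registration just described can be layered on the `identify` sketch above. The `next_id` identifier generator and the threshold are, again, illustrative assumptions.

```python
def match_or_register(face_vector, registry, next_id, accept=0.95):
    """Identify the passing person; if nobody in the registry matches,
    register the new face vector so it can be matched on the next passage.
    `next_id` is an assumed generator of identifiers for unknown persons."""
    name, _ranking = identify(face_vector, registry, accept)
    if name is None:
        name = next_id()
        registry[name] = face_vector  # learn the previously unregistered face
    return name
```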
[0090] As described above, in the third embodiment, the face image matching unit 320 identifies the person who passed by matching the face image detected by the peripheral region processing unit 116 against the face images of persons registered in advance, so a person can be not only recognized but also identified. This makes it possible to total the number of passages per person and to detect predetermined persons who should be monitored, enhancing the person detection function. Moreover, since tracking accuracy is improved and passing persons can be identified, a more capable passing-person detection system for security applications can be realized.
[0091] Furthermore, when the face image matching unit 320 matches a face image and no matching image exists among the face images of persons registered in advance, the face image is newly added. The set of identifiable persons can thus be learned and expanded, automatically improving the person detection function.
Embodiment 4
[0092] In the second and third embodiments above, the reliability determination unit 219, which performs control based on reliability, and the face image matching unit 320, which identifies a person based on the detected face image, were each added to the first embodiment. These functional units may also be combined arbitrarily to configure a person information detection apparatus. In the following, a case combining the second and third embodiments is described as a fourth embodiment.
[0093] FIG. 15 is a functional block diagram showing the configuration of the person information detection apparatus according to the fourth embodiment. The functional units of the person information detection apparatus 400 shown in the figure are simply a combination of the functional units of the person information detection apparatus 100 shown in FIG. 3, the person information detection apparatus 200 shown in FIG. 12, and the person information detection apparatus 300 shown in FIG. 14, so their detailed description is omitted here.
[0094] As described above, in the fourth embodiment, combining the functional units described in the first, second, and third embodiments realizes a practical passing-person detection system that suppresses the influence of disturbances such as illuminance changes, while also enhancing the person detection function, for example by recognizing passing persons and counting the number of passages of a particular person.
Embodiment 5
[0095] So far, a person information detection apparatus that detects persons has been described as an embodiment of the moving body information detection apparatus according to the present invention. The present invention is not limited to this, however, and may be applied, for example, to a package information detection apparatus that detects packages flowing on a belt conveyor. In the following, a case where the present invention is applied to a package information detection apparatus is described as a fifth embodiment.
[0096] First, the concept of the package information detection apparatus according to the fifth embodiment will be described. FIG. 16 is an explanatory diagram for this concept.
[0097] The package information detection apparatus according to the fifth embodiment detects the side surface and the top surface of each package by simultaneously performing different image processing on different regions of the video captured by a monitoring camera installed above the belt conveyor on which the packages travel, for example on the ceiling. As in the embodiments described so far, a camera having a wide-angle lens such as a fisheye lens is used as the monitoring camera.
[0098] Specifically, as shown in FIG. 16, a camera 10 having a wide-angle lens is installed above the belt conveyor, and the packages flowing on the belt conveyor are thereby photographed.
[0099] As shown in the figure, each package flowing on the belt conveyor carries a label on its side surface and another on its top surface. These labels indicate attributes of the package, such as its type and weight, by characters, colors, and the like. When a package is at a position away from the point directly facing the camera 10 (the peripheral portion), the label on its side surface is captured; when the package is near the point directly facing the camera 10 (the central portion), the label on its top surface is captured.
[0100] The package information detection apparatus therefore performs two different image processing operations simultaneously: counting packages using the central portion of the video, and detecting and recognizing the attributes indicated by the side-surface labels using the peripheral portion of the video. It then identifies each package flowing on the belt conveyor by associating the attributes indicated by the side-surface label detected in the peripheral portion with the attributes indicated by the top-surface label detected in the central portion.
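A minimal sketch of this association step, assuming each detection carries enough position and timing information for a `same_package` predicate (the patent leaves the pairing criterion to the implementation):

```python
def pair_labels(top_detections, side_detections, same_package):
    """Pair each top-surface label read in the central region with the
    side-surface label read in the peripheral region for the same package.
    Tops with no compatible side detection are paired with None."""
    pairs = []
    remaining = list(side_detections)
    for top in top_detections:
        match = next((s for s in remaining if same_package(top, s)), None)
        if match is not None:
            remaining.remove(match)  # a side label belongs to at most one package
        pairs.append((top, match))
    return pairs
```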
[0101] In this way, the package information detection apparatus according to the fifth embodiment can simultaneously detect (count) the packages flowing on the belt conveyor and detect the characters on the labels attached to their side and top surfaces, without performing pan, tilt, or zoom with the camera, and can associate the information on each side-surface label with the information on the corresponding top-surface label.
[0102] FIG. 17 is a functional block diagram showing the configuration of the package information detection apparatus according to the fifth embodiment. Among the functional units shown in the figure, those playing the same roles as the functional units of the person information detection apparatus 100 shown in FIG. 3 or the person information detection apparatus 200 shown in FIG. 12 are given the same reference numerals. For these functional units, the target detected in the peripheral region of the input image simply changes from a face to the label on the side surface of a package, and the target detected in the central region from a person to the label on the top surface of a package, so their detailed description is omitted here.
[0103] As shown in the figure, this package information detection apparatus 500 receives analog video sent from the camera 10 or digital video sent from an image recording device (not shown) or the like, detects from the input video the number of packages flowing on the belt conveyor, images of the side and top surfaces of the packages, and the like, and outputs these as passage information. It comprises an A/D conversion unit 110, an image storage unit 111, a peripheral region processing result storage unit 112, a central region processing result storage unit 113, an image control unit 114, a processed image conversion unit 115, a peripheral region processing unit 116, a central region processing unit 117, a processing result integration unit 118, a reliability determination unit 219, a side image matching unit 521, and a top image matching unit 522.
[0104] The side image matching unit 521 is a processing unit that identifies the attributes of a package flowing on the belt conveyor based on the image of the side-surface label detected by the peripheral region processing unit 116. Specifically, the side image matching unit 521 detects the characters shown on the label from the image of the side-surface label detected by the peripheral region processing unit 116, and identifies the attributes of the package by matching the detected characters against characters registered in advance, using a known character matching technique.
[0105] The top image matching unit 522 is a processing unit that identifies the attributes of a package flowing on the belt conveyor based on the image of the top-surface label detected by the central region processing unit 117. Specifically, the top image matching unit 522 detects the characters shown on the label from the image of the top-surface label detected by the central region processing unit 117, and identifies the attributes of the package by matching the detected characters against characters registered in advance, using a known character matching technique.
[0106] A known character matching technique usable in the side image matching unit 521 and the top image matching unit 522 is, for example, the template matching method described in Negishi et al., "Fast template matching tolerant of deformation for character recognition in scene images", PRMU2005-247, pp. 101-106, 2005. This technique matches characters by comparing them with dictionary images registered in advance.
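For orientation only, the sketch below shows a plain normalized-correlation template matcher against a dictionary of registered character images. It is a baseline stand-in rather than the deformation-tolerant method of Negishi et al.; patches are assumed to be grayscale NumPy arrays already scaled to the dictionary image size.

```python
import numpy as np

def match_character(patch, dictionary):
    """Return the dictionary label whose template correlates best with the
    character patch, plus the full score table. `dictionary` is assumed to
    map labels to grayscale template arrays of the same shape as `patch`."""
    def ncc(a, b):  # normalized cross-correlation of two equally sized arrays
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float((a * b).sum() / denom) if denom else 0.0
    scores = {label: ncc(patch.astype(float), tmpl.astype(float))
              for label, tmpl in dictionary.items()}
    return max(scores, key=scores.get), scores
```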
[0107] In this way, the side image matching unit 521 matches the information shown on the side-surface label detected by the peripheral region processing unit 116 against moving body identification information stored in advance, and the top image matching unit 522 matches the information shown on the top-surface label detected by the central region processing unit 117 against moving body identification information stored in advance, thereby identifying the package. A package can therefore be not only recognized but also identified, which makes it possible to detect predetermined packages that should be found, enhancing the package detection function.
[0108] Although the case where the attributes of a package are identified using the characters shown on the label has been described here, the attributes may instead be identified using the label's colors rather than characters.
[0109] As described above, in the fifth embodiment, the peripheral region processing unit 116 performs image processing that detects the information shown on the side-surface label of each package using the image of the package appearing in the peripheral portion of the region captured by the camera 10, the central region processing unit 117 performs image processing that detects the information shown on the top-surface label using the image of the package appearing in the central portion of that region, and the processing result integration unit 118 detects, as passage information, information in which the label information detected by the peripheral region processing unit 116 and the label information detected by the central region processing unit 117 are associated for each package. Counting of the passing packages and recognition of their side and top surfaces can therefore be performed simultaneously.
[0110] The first to fifth embodiments above describe moving body information detection apparatuses (the person information detection apparatuses and the package information detection apparatus). By implementing the configuration of such a moving body information detection apparatus in software, a moving body information detection program having the same functions can be obtained. A computer that executes this moving body information detection program is therefore described next.
[0111] FIG. 18 is a functional block diagram showing the configuration of a computer that executes the moving body information detection program according to the present embodiment. As shown in the figure, this computer 600 has a RAM (Random Access Memory) 610, a CPU (Central Processing Unit) 620, a ROM (Read Only Memory) 630, a LAN (Local Area Network) interface 640, and a camera interface 650.
[0112] The RAM 610 is a memory that stores programs and intermediate results of program execution, and the CPU 620 is a central processing unit that reads programs from the RAM 610 and the ROM 630 and executes them.
[0113] The ROM 630 is a memory that stores programs and data, the LAN interface 640 is an interface for connecting the computer 600 via a LAN to other computers, terminal devices of a monitoring center, and the like, and the camera interface 650 is an interface for connecting a monitoring camera.
[0114] The moving body information detection program 611 executed on this computer 600 is stored in advance in the ROM 630 and is executed by the CPU 620 as a moving body information detection task 621.
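The overall shape of such a task, reduced to a sketch: per frame, run the two region processes and integrate their results. Frame acquisition through the camera interface and output over the LAN interface are abstracted away here, and the three callables are assumed to be supplied by the embodiments above.

```python
def detection_task(frames, peripheral_process, central_process, integrate):
    """Skeleton of the moving body information detection task: for each frame,
    perform the peripheral-region and central-region image processing and
    yield their integrated, per-object passage information."""
    for frame in frames:
        peripheral = peripheral_process(frame)  # e.g. face or side-label detection
        central = central_process(frame)        # e.g. person/path or top-label detection
        yield integrate(peripheral, central)    # passage info associated per moving body
```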
[0115] Although a person information detection apparatus that detects persons and a package information detection apparatus that detects packages have been described in the present embodiment, the present invention is not limited to these and can likewise be applied, for example, to detecting cars traveling on a road.
[0116] Of the processes described in the present embodiment, all or part of those described as being performed automatically can also be performed manually, and all or part of those described as being performed manually can also be performed automatically by known methods.
[0117] In addition, the processing procedures, control procedures, specific names, and information including various data and parameters shown in the above description and drawings can be changed arbitrarily unless otherwise noted.
[0118] The components of each illustrated apparatus are functional and conceptual, and need not be physically configured as illustrated. That is, the specific form in which each apparatus is distributed or integrated is not limited to the illustrated one; all or part of it can be functionally or physically distributed or integrated in arbitrary units according to various loads, usage conditions, and the like.
Industrial Applicability

As described above, the moving body information detection apparatus, the moving body information detection method, and the moving body information detection program according to the present invention are useful for detecting passage information about a moving body that passes through a predetermined area, using an imaging device that captures the area, and are particularly suitable for simultaneously detecting and recognizing a moving body without performing pan, tilt, or zoom with the imaging device.

Claims

[1] A moving body information detection apparatus that detects passage information about a moving body passing through a predetermined area, using an imaging device that captures the area, the apparatus comprising:
central region processing means for performing first image processing using an image of the moving body appearing in a central portion of the area captured by the imaging device;
peripheral region processing means for performing second image processing using an image of the moving body appearing in a peripheral portion around the central portion of the area captured by the imaging device; and
passage information detection means for detecting, as the passage information, information in which first information detected by the first image processing of the central region processing means and second information detected by the second image processing of the peripheral region processing means are associated for each moving body.
[2] The moving body information detection apparatus according to claim 1, wherein
the central region processing means and/or the peripheral region processing means calculates the reliability of the first information and/or the second information that it has detected, and
the passage information detection means associates the first information with the second information based on the reliability calculated by the central region processing means and/or the peripheral region processing means.
[3] The moving body information detection apparatus according to claim 2, wherein, when the reliability calculated by the central region processing means and/or the peripheral region processing means is low, the passage information detection means instructs the central region processing means and/or the peripheral region processing means to re-detect the first information and/or the second information.
[4] The moving body information detection apparatus according to claim 1, further comprising moving body matching means for identifying the moving body by matching the first information detected in the first image processing by the central region processing means and/or the second information detected in the second image processing by the peripheral region processing means against moving body identification information stored in advance.
[5] The moving body information detection apparatus according to claim 4, wherein, when the moving body matching means has matched the first information and/or the second information and no matching information exists in the moving body identification information, the moving body matching means adds the first information and/or the second information to the moving body identification information.
[6] The moving body information detection apparatus according to claim 1, wherein
the central region processing means detects a travel path of the moving body in the first image processing,
the peripheral region processing means detects attribute information of the moving body in the second image processing, and
the passage information detection means detects, as the passage information, information in which the travel path detected by the central region processing means and the attribute information detected by the peripheral region processing means are associated for each moving body.
[7] The moving body information detection apparatus according to claim 1, wherein
the moving body is a person,
the central region processing means detects a travel path of the person in the first image processing,
the peripheral region processing means detects face information of the person in the second image processing, and
the passage information detection means detects, as the passage information, information in which the travel path detected by the central region processing means and the face information detected by the peripheral region processing means are associated for each person.
[8] The moving body information detection apparatus according to any one of claims 1 to 7, wherein the imaging device captures the predetermined area using a wide-angle lens.
[9] The moving body information detection apparatus according to any one of claims 1 to 7, wherein the imaging device captures the predetermined area using a fisheye lens.
[10] A moving body information detection method for detecting passage information about a moving body passing through a predetermined area, using an imaging device that captures the area, the method comprising:
a central region processing step of performing first image processing using an image of the moving body appearing in a central portion of the area captured by the imaging device;
a peripheral region processing step of performing second image processing using an image of the moving body appearing in a peripheral portion around the central portion of the area captured by the imaging device; and
a passage information detection step of detecting, as the passage information, information in which first information detected by the first image processing of the central region processing step and second information detected by the second image processing of the peripheral region processing step are associated for each moving body.
[11] A moving body information detection program for detecting passage information about a moving body passing through a predetermined area, using an imaging device that captures the area, the program causing a computer to execute:
a central region processing procedure of performing first image processing using an image of the moving body appearing in a central portion of the area captured by the imaging device;
a peripheral region processing procedure of performing second image processing using an image of the moving body appearing in a peripheral portion around the central portion of the area captured by the imaging device; and
a passage information detection procedure of detecting, as the passage information, information in which first information detected by the first image processing of the central region processing procedure and second information detected by the second image processing of the peripheral region processing procedure are associated for each moving body.
PCT/JP2006/318635 2006-09-20 2006-09-20 Mobile body information detection device, mobile body information detection method, and mobile body information detection program WO2008035411A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2008535230A JP4667508B2 (en) 2006-09-20 2006-09-20 Mobile object information detection apparatus, mobile object information detection method, and mobile object information detection program
PCT/JP2006/318635 WO2008035411A1 (en) 2006-09-20 2006-09-20 Mobile body information detection device, mobile body information detection method, and mobile body information detection program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2006/318635 WO2008035411A1 (en) 2006-09-20 2006-09-20 Mobile body information detection device, mobile body information detection method, and mobile body information detection program

Publications (1)

Publication Number Publication Date
WO2008035411A1 true WO2008035411A1 (en) 2008-03-27

Family

ID=39200244

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/318635 WO2008035411A1 (en) 2006-09-20 2006-09-20 Mobile body information detection device, mobile body information detection method, and mobile body information detection program

Country Status (2)

Country Link
JP (1) JP4667508B2 (en)
WO (1) WO2008035411A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7034690B2 (en) * 2017-12-05 2022-03-14 キヤノン株式会社 Information processing equipment, information processing methods and programs

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06203160A (en) * 1992-12-28 1994-07-22 Nippon Telegr & Teleph Corp <Ntt> Following method for object in image and device therefor
JPH0954894A (en) * 1995-08-18 1997-02-25 Nippon Telegr & Teleph Corp <Ntt> Mobile object monitoring and measuring device
JPH11175730A (en) * 1997-12-05 1999-07-02 Omron Corp Human body detection and trace system
JP2000200357A (en) * 1998-10-27 2000-07-18 Toshiba Tec Corp Method and device for collecting human movement line information
JP2003276963A (en) * 2003-02-03 2003-10-02 Toshiba Corp Elevator controller by use of image monitoring device

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009223434A (en) * 2008-03-13 2009-10-01 Secom Co Ltd Moving object tracing device
JP2010250571A (en) * 2009-04-16 2010-11-04 Mitsubishi Electric Corp Person counting device
JP2011193198A (en) * 2010-03-15 2011-09-29 Omron Corp Monitoring camera terminal
JP2016039539A (en) * 2014-08-08 2016-03-22 キヤノン株式会社 Image processing system, image processing method, and program
JP2017168882A (en) * 2016-03-14 2017-09-21 カシオ計算機株式会社 Image processing apparatus, image processing method, and program
JP2019193110A (en) * 2018-04-25 2019-10-31 キヤノン株式会社 Information processing apparatus, information processing method, and program
JP7150462B2 (en) 2018-04-25 2022-10-11 キヤノン株式会社 Information processing device, information processing method and program
JP2020072469A (en) * 2018-10-25 2020-05-07 キヤノン株式会社 Information processing apparatus, control method and program of the same, and imaging system
JP7353864B2 (en) 2018-10-25 2023-10-02 キヤノン株式会社 Information processing device, control method and program for information processing device, imaging system
JP2020201880A (en) * 2019-06-13 2020-12-17 富士通クライアントコンピューティング株式会社 Image processing apparatus and image processing program

Also Published As

Publication number Publication date
JPWO2008035411A1 (en) 2010-01-28
JP4667508B2 (en) 2011-04-13

Similar Documents

Publication Publication Date Title
JP4667508B2 (en) Mobile object information detection apparatus, mobile object information detection method, and mobile object information detection program
JP5390322B2 (en) Image processing apparatus and image processing method
KR102465532B1 (en) Method for recognizing an object and apparatus thereof
KR100831122B1 (en) Face authentication apparatus, face authentication method, and entrance and exit management apparatus
US8314854B2 (en) Apparatus and method for image recognition of facial areas in photographic images from a digital camera
Snidaro et al. Video security for ambient intelligence
KR101615254B1 (en) Detecting facial expressions in digital images
KR101179497B1 (en) Apparatus and method for detecting face image
JP6555906B2 (en) Information processing apparatus, information processing method, and program
US20070116364A1 (en) Apparatus and method for feature recognition
JP2009143722A (en) Person tracking apparatus, person tracking method and person tracking program
US20090041312A1 (en) Image processing apparatus and method
JP2007148988A (en) Face authentication unit, face authentication method, and entrance/exit management device
JP2008071172A (en) Face authentication system, face authentication method, and access control device
JP2006236260A (en) Face authentication device, face authentication method, and entrance/exit management device
US8923552B2 (en) Object detection apparatus and object detection method
US9154682B2 (en) Method of detecting predetermined object from image and apparatus therefor
JP2007249298A (en) Face authentication apparatus and face authentication method
KR20200010690A (en) Moving Object Linkage Tracking System and Method Using Multiple Cameras
Huang et al. Distributed video arrays for tracking, human identification, and activity analysis
Valle et al. People counting in low density video sequences
JP5777389B2 (en) Image processing apparatus, image processing system, and image processing method
CN117315560A (en) Monitoring camera and control method thereof
CN116959065A (en) Face recognition method, storage medium and system for face completion based on multiple camera devices
KR20230034124A (en) System for face recognition

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 06810331

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2008535230

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06810331

Country of ref document: EP

Kind code of ref document: A1