EP2701094A2 - Object detection apparatus, control method thereof, program, and storage medium - Google Patents

Object detection apparatus, control method thereof, program, and storage medium

Info

Publication number
EP2701094A2
Authority
EP
European Patent Office
Prior art keywords
background
region
video
background object
object region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP13004034.8A
Other languages
German (de)
English (en)
Other versions
EP2701094A3 (fr)
Inventor
Hiroshi Tojo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc
Publication of EP2701094A2
Publication of EP2701094A3
Legal status: Withdrawn

Classifications

    • G06T 7/73 (Image analysis - Determining position or orientation of objects or cameras using feature-based methods)
    • G06T 7/194 (Image analysis - Segmentation; edge detection involving foreground-background segmentation)
    • G06V 10/255 (Image preprocessing - Detecting or recognising potential candidate objects based on visual cues, e.g. shapes)
    • G06T 2207/10016 (Image acquisition modality - Video; image sequence)
    • G06T 2207/10024 (Image acquisition modality - Color image)

Definitions

  • The present invention relates to an object detection apparatus and a control method thereof.
  • As a technique for detecting an object from an image captured by a camera, a background subtraction method is known.
  • In this method, a fixed camera captures, in advance, an image of a background from which an object to be detected is removed, and stores feature amounts extracted from that image as a background model. After that, differences between feature amounts extracted from an image input from the camera and those in the background model are calculated, and a different region is detected as a foreground (object).
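  • As a rough illustration of the background subtraction idea described above (not the specific method of this embodiment), a minimal per-pixel sketch in Python follows; the grayscale input and the threshold value are assumptions of the sketch:

```python
import numpy as np

def detect_foreground(frame, background, threshold=25):
    """Return a boolean mask that is True where the frame differs
    from the stored background by more than `threshold`.

    frame, background: 2-D uint8 arrays (grayscale images of equal size).
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Example: a flat background and a frame containing a bright "object" patch.
background = np.full((240, 320), 100, dtype=np.uint8)
frame = background.copy()
frame[100:140, 150:200] = 200           # the brought-in object
mask = detect_foreground(frame, background)
print(mask.sum(), "foreground pixels")  # 40 * 50 = 2000
```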
  • As an example, consider an object such as a chair in a waiting room.
  • The chair originally exists in the waiting room, and is not an object to be detected such as a person or a bag brought in by a person.
  • However, people frequently move the chair or change its direction. If such a change takes place, differences from the background model are generated, and the background subtraction method erroneously detects the change as an object.
  • An object such as a chair which originally exists in a background will be referred to as a background object hereinafter.
  • Also, the technique according to the above literature makes an erroneous detection when new features which are not included in the background model appear upon movement or change of the background object. That is, since the features of the input image are no longer similar to those included in the background image generated from the background model, the change is not determined to be a change of the background object. For example, when a red vase is placed in front of a blue wall, and a chair is placed in front of the red vase, the features of the red vase are not included in the background model since the red vase is occluded behind the chair. When the chair is then moved, the occluded red vase appears in the video.
  • The present invention has been made in consideration of the aforementioned problems. The present specification provides a technique which can prevent or reduce detection errors caused by a change of an object which frequently appears in the background.
  • an object detection apparatus comprises the following arrangement. That is, this specification in its first aspect provides an object detection apparatus as specified in claims 1 to 8.
  • This specification in its second aspect provides a control method of controlling an object detection apparatus as specified in claim 9.
  • This specification in its third aspect provides a computer program as specified in claim 10 and a computer-readable storage medium as specified in claim 11.
  • According to the present specification, detection errors caused by a change of an object which frequently appears in the background can be prevented or reduced.
  • Fig. 1 is a block diagram showing the hardware arrangement of an object detection apparatus according to an embodiment
  • Fig. 2 is a block diagram showing the functional arrangement of the object detection apparatus according to the embodiment
  • Fig. 3 is a flowchart showing the sequence of processing in a registration phase according to the embodiment
  • Fig. 4 is a flowchart showing the detailed processing sequence of comparison processing
  • Fig. 5 is a table showing an example of a background model
  • Fig. 6 is a flowchart showing the detailed processing sequence of background model update processing
  • Fig. 7 is a table showing an example of comparison result information
  • Fig. 8 is a flowchart showing the detailed processing sequence of foreground/background determination processing
  • Fig. 9 is a table showing an example of foreground/background information
  • Fig. 10 is a flowchart showing the detailed processing sequence of object region output processing
  • Fig. 11 is a table showing an example of object region information
  • Fig. 12 is a flowchart showing the sequence of first background object region selection processing
  • Fig. 13 is a view for explaining a processing result of the first background object region selection processing
  • Fig. 14 is a table showing an example of first scene-dependent background object region selection rules
  • Fig. 15 is a table showing an example of background object candidate region information
  • Fig. 16 is a flowchart showing the sequence of second feature amount extraction processing
  • Fig. 17 is a table showing an example of scene-dependent feature amount type information
  • Fig. 18 is a table showing an example of feature amount information
  • Fig. 19 is a flowchart showing the sequence of second background object region selection processing
  • Fig. 20 is a view for explaining a processing result of second background object region selection processing
  • Fig. 21 is a table showing an example of second scene-dependent background object region selection rules
  • Fig. 22 is a table showing an example of weighted feature amount information
  • Fig. 23 is a view for explaining an object detection result when an object is parallelly translated
  • Fig. 24 is a flowchart showing the sequence of parallel translation/out-of-plane rotation determination processing
  • Fig. 25 is a flowchart showing the sequence of background object feature information registration processing.
  • Fig. 26 is a flowchart showing the sequence of processing in an operation phase according to the embodiment.
  • Fig. 1 is a block diagram showing the hardware arrangement of an image processing apparatus for executing object detection (to be referred to as an object detection apparatus hereinafter) according to this embodiment.
  • the object detection apparatus of this embodiment has the following arrangement.
  • a CPU 101 executes instructions according to programs stored in a ROM 102 and RAM 103.
  • the ROM 102 is a nonvolatile memory, and stores programs of the present invention, and programs and data required for other kinds of control.
  • The RAM 103 is a volatile memory, and stores temporary data such as frame image data and pattern discrimination results.
  • a secondary storage device 104 is a rewritable secondary storage device such as a hard disk drive or flash memory, and stores an OS (Operating System), image information, an object detection program, various setting contents, and the like. These pieces of information are transferred to the RAM 103, are executed as a program of the CPU 101, and are used as data.
  • An image input device 105 includes a digital video camera, network camera, infrared camera, or the like, and outputs a video captured by an imaging unit as digital image data.
  • An input device 106 includes a keyboard, mouse, and the like, and allows the user to make inputs.
  • a display device 107 includes a CRT, liquid crystal display, or the like, and displays a processing result and the like for the user.
  • a network I/F 108 includes a modem and LAN used to establish connection to a network such as the Internet or intranet.
  • a bus 109 connects these components to allow them to mutually exchange data.
  • the apparatus of this embodiment is implemented as an application which runs on the OS.
  • Fig. 2 is a block diagram showing the functional arrangement of the object detection apparatus of this embodiment. Processing units to be described below are implemented when the CPU 101 executes programs, but some or all of these processing units may be implemented as hardware.
  • Reference numeral 201 denotes a video input unit, which includes the image input device 105, and inputs a video.
  • Reference numeral 202 denotes a first feature amount extraction unit, which extracts feature amounts required to build a background model (to be described later) from a video.
  • Reference numeral 203 denotes a comparison unit, which compares a background model read out from a background model storage unit 204 (to be described below) and an input video.
  • Reference numeral 204 denotes a background model storage unit, which includes the RAM 103 or secondary storage device 104, and stores a background model (to be described in detail later) which represents states at respective positions in a video using image feature amounts.
  • Reference numeral 205 denotes a background model update unit, which updates the background model stored in the background model storage unit 204 based on the output from the comparison unit 203.
  • Reference numeral 206 denotes a foreground/background determination unit, which determines based on the output from the comparison unit 203 whether each position in an input video corresponds to a foreground or background.
  • Reference numeral 207 denotes an object region output unit, which combines and outputs detection results for respective object regions based on the output from the foreground/background determination unit 206.
  • Reference numeral 208 denotes a first selection unit which classifies object regions as outputs of the object region output unit 207 into regions which include background objects and those which do not include any background objects.
  • Reference numeral 209 denotes a second feature amount extraction unit, which extracts feature amounts required to generate background object feature information (to be described later) from background object candidate regions as outputs of the first selection unit 208.
  • Reference numeral 210 denotes a second selection unit, which narrows down background object candidate regions selected by the first selection unit 208 to partial regions including only background objects.
  • Reference numeral 211 denotes a rule storage unit, which stores scene-dependent background object region selection rules, that is, rules required to select background object regions for respective scenes (a waiting room, an entrance with an automatic door, etc.) where the object detection apparatus is equipped (to be described in detail later).
  • The first selection unit 208, second feature amount extraction unit 209, and second selection unit 210 select background objects according to a predetermined rule with reference to information stored in this rule storage unit 211.
  • Reference numeral 212 denotes a human body detection unit which detects a human body region included in a video. This unit is called from the first selection unit 208 and second selection unit 210 according to the scene-dependent background object selection rule.
  • Reference numeral 213 denotes a duration determination unit, which determines based on the output results of the object region output unit 207 whether or not duration of each object region satisfies a predetermined condition. This unit is called from the first selection unit 208 according to the scene-dependent background object selection rule.
  • Reference numeral 214 denotes a movement determination unit, which determines whether a region selected as a background object is generated by parallel translation or out-of-plane rotation of the background object. This movement determination unit 214 is called from the second selection unit 210 according to the scene-dependent background object selection rule.
  • Reference numeral 215 denotes a frame image storage unit, which temporarily stores a video input by the video input unit 201. This storage unit is used by the movement determination unit 214.
  • Reference numeral 216 denotes a statistical amount generation unit, which generates a statistical amount based on second feature amounts included in a selected background object region.
  • Reference numeral 217 denotes a background object registration unit, which registers the statistical amount generated by the statistical amount generation unit 216 as background object feature information.
  • Reference numeral 218 denotes a background object storage unit, which stores background object feature information (to be described in detail later).
  • Reference numeral 219 denotes a background object discrimination unit, which determines with reference to the background object feature information whether or not a detected object is a background object. The determination result is fed back to the background model update unit 205.
  • the processing of the object detection apparatus roughly includes a registration phase for registering a background object, and an operation phase for detecting an object.
  • The registration phase is executed in an initial stage when the object detection apparatus is installed, and can also be executed in parallel with the operation phase.
  • Fig. 3 shows the processing sequence of a part related to the registration phase of the application to be executed by the CPU 101.
  • a video captured by the video input unit 201 is input, and a frame image is obtained for each predetermined time (step S301).
  • the first feature amount extraction unit 202 extracts feature amounts from the frame image, and the comparison unit 203 compares the feature amounts in the frame image with those in a background model, which are read out from the background model storage unit 204 (step S302). (Details will be described later.)
  • the background model update unit 205 reflects the result of the comparison unit 203 to the background model, thus updating the background model (step S303). (Details will be described later.)
  • the foreground/background determination unit 206 determines a foreground and background based on duration from the result of the comparison unit 203 (step S304). (Details will be described later.)
  • detected object regions are output (step S305).
  • the output object regions are used in an abandoned object detection apparatus or the like, which detects an abandoned object. (Details will be described later.)
  • Next, the first selection unit 208 executes first background object region selection processing, which selects regions including background objects from the detected object regions (step S306). (Details will be described later.)
  • the second feature amount extraction unit 209 extracts feature amounts from the selected background object regions (step S307). (Details will be described later.)
  • Then, the second selection unit 210 executes second background object region selection processing, which further narrows down the regions including background objects selected by the first selection unit 208 to regions of only background objects (step S308). (Details will be described later.)
  • the statistical amount generation unit 216 generates a statistical amount from feature amounts included in the regions selected as background object regions, and the background object registration unit 217 registers background object feature information in the background object storage unit 218 (step S309). (Details will be described later.)
  • Details of the comparison processing (comparison unit 203) in step S302 of the aforementioned processing will be described below with reference to Fig. 4 .
  • the first feature amount extraction unit 202 extracts image feature amounts as values which represent states of respective positions from an input frame image acquired by the video input unit 201 (step S401).
  • image feature amounts include brightness values, colors, edges, and the like, but the present invention is not particularly limited to these feature amounts.
  • feature amounts for respective pixels or those for respective partial regions may be extracted.
  • As feature amounts for respective partial regions, an average brightness value, DCT coefficients, and the like of pixels in a block of 8 × 8 pixels can be enumerated.
  • the DCT coefficients correspond to Discrete Cosine Transform results.
  • DCT coefficients may be directly extracted from a JPEG input frame image, and may be used as feature amounts.
  • In this embodiment, feature amounts are brightness values for respective pixels. Note that the upper left pixel of a frame image is defined as a start point, and the following processing is executed while moving the pixel position from left to right and then down to each lower row (a raster scan order).
  • position-dependent background model information of a position of interest is read out from a background model stored in the background model storage unit 204, and is temporarily stored in the RAM 103 (step S402).
  • the background model stored in the background model storage unit 204 will be described below with reference to Fig. 5 .
  • the background model expresses states of respective positions in a frame image using image feature amounts.
  • the background model includes two types of information: background model management information and position-dependent background model information.
  • the background model management information includes position information and a pointer to position-dependent background model information at each position.
  • the position information may assume a value which expresses a pixel position of a frame image using X-Y coordinates, or may be a number of each block of 8 ⁇ 8 pixels assigned in a raster scan order. Note that in this embodiment, the position information assumes a value which expresses a pixel position of a frame image using X-Y coordinates.
  • the position-dependent background model information holds a plurality of states corresponding to each position.
  • A state is represented by a feature amount. Therefore, non-similar feature amounts correspond to different states. For example, when a red car comes and stops in front of a blue wall, pixels included in the region where the red car stops hold two states of blue and red feature amounts.
  • Each state holds a state number, an image feature amount which represents that state, a time of creation, and an active flag.
  • the state number is used to identify each state, and is generated in turn from 1.
  • the time of creation is that at which the state was created in a background model for the first time, and is expressed by a time or frame number. In this embodiment, the time of creation is expressed by a frame count.
  • the active flag indicates a state corresponding to the current frame image, and is set to be 1 at this time (0 in other cases). Then, a plurality of states at an identical position in a frame image are continuously stored at an address referred to by a pointer of the background model management information.
  • Alternatively, each piece of position-dependent background model information may include a field for storing a pointer to the next piece of position-dependent background model information having a different state number; if that field stores a non-existent value, that piece may be considered as the last information.
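  • For illustration only, the multi-state background model described above could be held in memory as follows; the Python class layout and field names are assumptions of this sketch, not the storage format of the embodiment:

```python
from dataclasses import dataclass, field

@dataclass
class State:
    state_number: int      # identifies the state (generated in turn from 1)
    feature: float         # image feature amount representing the state
    time_of_creation: int  # frame count when the state first appeared
    active: bool = False   # True if the state matches the current frame

@dataclass
class BackgroundModel:
    # background model management information: position -> list of states
    states_at: dict = field(default_factory=dict)

    def states(self, xy):
        """Return all states stored for pixel position xy, creating the
        list on first access."""
        return self.states_at.setdefault(xy, [])

# A pixel behind which a red car stopped in front of a blue wall would
# hold two states, e.g. a "blue" feature and a "red" feature.
model = BackgroundModel()
model.states((10, 20)).append(State(1, feature=40.0, time_of_creation=0))
model.states((10, 20)).append(State(2, feature=180.0, time_of_creation=350))
```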
  • a pointer to position-dependent background model information of the position of interest is referred to, and pieces of position-dependent background model information of all states of the position of interest are read out.
  • pieces of position-dependent background model information of the following two states are read out.
  • a feature amount of one state is read out from the pieces of position-dependent background model information of the position of interest read out in step S402 (step S403). Then, an active flag is set to be 0 (step S404). This is to initialize the previous result. Then, a difference from a feature amount at the same position in the input frame image is calculated (step S405).
  • a difference calculation method an absolute value of a difference between the two feature amounts is used.
  • the present invention is not particularly limited to this. For example, a square of the difference may be used.
  • the difference value is temporarily stored in the RAM 103 in association with the position in the input frame image and the state number used to calculate the difference.
  • It is then determined whether or not states used to calculate a difference still remain at the position of interest (step S406). If such states still remain, the next state is read out from the position-dependent background model information (step S407). Then, the processes of steps S403 to S405 are repeated.
  • a minimum value of the difference values between the feature amount of the input frame image and all the states is calculated in association with the position of interest (step S408).
  • The minimum difference value at the position of interest is compared with a threshold A (step S409). If the difference value is smaller than the threshold, it can be judged that the state of the input frame image is similar to that stored in the background model. Conversely, if the difference value is larger than the threshold, it can be determined that the state of the input frame image is different from all the states stored in the background model, and is a new state.
  • a special number (example: 0) which means a new state is set as a state number (step S410).
  • a new state number is generated again when the background model update unit 205 updates the background model.
  • the current time is set as a time of creation at which this state is created for the first time (step S411). Note that in this embodiment, the current frame number is used. However, a normal time expression (for example, 00:00:00) may be used.
  • an active flag is set to be 1 to indicate a state corresponding to the current frame (step S412).
  • Then, the state number, the feature amount of the input image, and the time of creation are temporarily stored in the RAM 103 as comparison result information in association with the coordinates in the input frame image (step S413).
  • It is then determined whether or not the processes are complete for all pixels (coordinates) in the frame image (step S414). If pixels to be processed still remain, the process advances to the next pixel in a raster scan order (step S415), thus repeating the processes of steps S401 to S413.
  • comparison result information (exemplified in Fig. 7 ) for all the pixels is output to the background model update unit 205 and foreground/background determination unit 206 (step S416).
  • The details of the comparison processing in step S302 have been described.
  • background model update processing (background model update unit 205) in step S303 will be described below with reference to the flowchart shown in Fig. 6 .
  • Comparison result information for one pixel is acquired in turn with reference to coordinates to have an upper left pixel of the frame image as a start point from the comparison result information ( Fig. 7 ) as the outputs of the comparison unit 203 (step S601).
  • It is checked whether or not the state of the current pixel is a new state (step S602). This check can be made with reference to the state number in the comparison result information. That is, if the state number is 0, the state of the current pixel is a new state; otherwise, the state of the current pixel is an existing state included in the background model.
  • If the state of the current pixel is an existing state, corresponding position-dependent background model information in the background model ( Fig. 5 ) is updated. A pointer to a state of the matched coordinates is acquired with reference to background model management information in the background model from the coordinates of the current pixel. The pointer is advanced in turn while reading out information, and position-dependent background model information which matches the state number read out from the comparison result information ( Fig. 7 ) is referred to (step S603).
  • The feature amount in the background model is updated by the input feature amount in the comparison result information ( Fig. 7 ) (step S604).
  • Specifically, the feature amount is updated as a running average: μt = (1 - α) · μt-1 + α · It, where μt-1 is the feature amount value before update, μt is the feature amount value after update, It is the feature amount value of the input frame, and α is a weight having a value ranging from 0 to 1; the updated value becomes closer to the input value as the weight assumes a larger value.
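  • In code form, this update is a simple running (exponentially weighted) average; the weight value below is an arbitrary choice for the sketch:

```python
def update_feature(mu_prev, input_value, alpha=0.05):
    """Running-average update of a background-model feature amount.

    mu_prev:     feature value before update (mu_{t-1})
    input_value: feature value of the input frame (I_t)
    alpha:       weight in [0, 1]; a larger alpha tracks the input faster
    """
    return (1.0 - alpha) * mu_prev + alpha * input_value

print(update_feature(100.0, 120.0))  # 0.95 * 100 + 0.05 * 120 = 101.0
```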
  • If a new state is determined in step S602, that state is added to the background model.
  • A pointer to the states of the matched coordinates is acquired with reference to the background model management information in the background model from the coordinates of the current pixel. Then, the pointer is advanced up to that of the states of the next pixel's coordinates to acquire the last state number at the current coordinates (step S605). A new state number following the last one is then generated.
  • A pointer to a state of the matched coordinates is acquired with reference to background model management information in the background model from the coordinates of the next pixel (step S607).
  • The input feature amount of the current coordinates and its time of creation in the comparison result information are inserted here together with the generated state number (step S608).
  • It is then determined whether or not the processes of steps S601 to S608 are complete for all pixels (coordinates) in the frame image (step S609). If pixels to be processed still remain, the process advances to the next pixel in a raster scan order (step S610), thus repeating the processes of steps S601 to S608. If the processes are complete for all the pixels, this processing ends.
  • The details of the foreground/background determination processing (foreground/background determination unit 206) in step S304 will be described below with reference to Fig. 8 .
  • Pieces of comparison result information ( Fig. 7 ), which are the outputs of the comparison processing in step S302, are referred to and acquired one by one in a raster scan order starting from the upper left pixel of the frame image (step S801).
  • Duration (current time - time of creation) from the appearance time of a certain state (feature) in the video until the current time is calculated based on the time of creation of the comparison result information ( Fig. 7 ) (step S802), and is compared with a threshold of a background conversion time (step S803).
  • The threshold of the background conversion time is the boundary value at which an object detected as a foreground object comes to be handled as a background object (converted into a background object). If the duration is not less than the threshold of the background conversion time, the foreground flag is set to "0", which means "background" (step S804).
  • Otherwise, a foreground is determined, and the foreground flag is set to "1" (step S805).
  • the foreground flag is temporarily stored as foreground/background information (exemplified in Fig. 9 ) in association with the coordinates of the current pixel in the frame image and the duration time (step S806).
  • It is then determined whether the processes are complete for all pixels (coordinates) in the frame image (step S807). If pixels to be processed still remain, the process advances to the next pixel (step S808), thus repeating the processes of steps S801 to S806. If the processes of steps S801 to S806 are complete for all the pixels, the foreground/background information ( Fig. 9 ) for all the pixels is output to the object region output unit 207 (step S809).
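  • A minimal sketch of this duration test follows; the background conversion time of 900 frames is an assumed value, not one specified by the embodiment:

```python
def foreground_flag(current_frame, time_of_creation, background_conversion_time=900):
    """Return 1 (foreground) while a state is younger than the background
    conversion time, and 0 (background) once it has persisted long enough.
    Times are expressed as frame counts, as in the embodiment."""
    duration = current_frame - time_of_creation
    return 0 if duration >= background_conversion_time else 1

print(foreground_flag(1000, 900))  # 1: appeared 100 frames ago -> still foreground
print(foreground_flag(1000, 50))   # 0: persisted 950 frames -> converted to background
```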
  • Details of the object region output processing (object region output unit 207) in step S305 of Fig. 3 will be described below with reference to Fig. 10 .
  • A foreground flag is acquired with reference to the coordinates of the foreground/background information ( Fig. 9 ) stored in the RAM 103, starting from the upper left pixel of the frame image (step S1002).
  • It is checked whether the foreground flag of the current coordinates is 1 (step S1003). If the foreground flag is 0, since it indicates a background, the process advances from the current pixel to the next pixel in a raster scan order (step S1004).
  • If the foreground flag is 1, connected pixels whose foreground flags are also 1 are traced and their coordinates are temporarily stored; a circumscribed rectangle is then calculated from the coordinates of these pixels, and the upper left coordinates and lower right coordinates of that circumscribed rectangle are temporarily stored in the RAM 103 (step S1011).
  • durations corresponding to these pixels are acquired from the comparison result information, and an average value of the acquired durations is calculated and temporarily stored in the RAM 103 (step S1012).
  • It is determined whether or not the processes of steps S1002 to S1012 are complete for all pixels in the frame image (step S1013). If pixels to be processed still remain, the process advances from the current pixel to the next pixel in a raster scan order (step S1004).
  • If the processes of steps S1002 to S1012 are complete for all pixels, the upper left coordinates and lower right coordinates of the object regions and their average appearance times, which are temporarily stored, are output as object region information (step S1014).
  • Fig. 11 shows an example of the object region information; the upper left coordinates, lower right coordinates, and average appearance times of two object regions can be read out in turn from a start address.
  • the output object region information is used in, for example, an abandoned object detection apparatus (not shown) which detects an abandoned object.
  • The abandoned object detection apparatus generates an abandonment event when, with reference to the average durations of the objects, an object has remained for a predetermined time period. Also, the apparatus generates a rectangle with reference to the upper left coordinates and lower right coordinates of the rectangle of the object region, and superimposes the rectangle on the input video, thus presenting the position of the abandoned object to the user.
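  • The grouping of foreground pixels into circumscribed rectangles with average durations can be illustrated with SciPy's connected-component labelling; this is an illustrative stand-in, not the region-tracing procedure of steps S1002 to S1014:

```python
import numpy as np
from scipy import ndimage

def object_regions(foreground_mask, durations):
    """Group 1-flagged pixels into regions and return, per region, the
    bounding box (upper-left, lower-right) and the average duration.

    durations: per-pixel duration values, as held in the foreground/background
    information of the embodiment.
    """
    labels, count = ndimage.label(foreground_mask)
    regions = []
    for sl in ndimage.find_objects(labels):
        ys, xs = sl
        avg = float(durations[sl][foreground_mask[sl]].mean())
        regions.append({"upper_left": (xs.start, ys.start),
                        "lower_right": (xs.stop - 1, ys.stop - 1),
                        "average_duration": avg})
    return regions

mask = np.zeros((240, 320), dtype=bool)
mask[100:140, 150:200] = True
durations = np.zeros((240, 320))
durations[mask] = 120
print(object_regions(mask, durations))
```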
  • first background object region selection processing (first selection unit 208) in step S306 of Fig. 3 will be described below with reference to Fig. 12 .
  • Object regions in the object region information are classified into object regions including background objects and those which do not include any background objects, and background object candidate regions are output.
  • Fig. 13 is a view for explaining the processing result of this processing.
  • In Fig. 13, reference numeral 1301 denotes a frame image, which includes a chair 1302, a person 1303 who stands in front of the chair 1302, and a person 1304 who cuts across the frame.
  • In a frame 1305, object regions detected from background differences are superimposed, and regions 1306 and 1307 are detected as objects. Assume that the chair included in the region 1306 has a direction different from that when the background model was generated, and is therefore detected as a part of the object.
  • the first background object region selection processing selects an object region including a background object (the chair 1302 in this example), and outputs a region 1309 including the chair as a background object candidate region, as denoted by reference numeral 1308. This processing will be described in detail below.
  • a first scene-dependent background object region selection rule corresponding to a scene ID designated by the user is referred to from the rule storage unit 211 (step S1201).
  • The scene ID is designated by the user using the input device 106 and the display device 107: the user designates a scene ID by selecting it from a scene ID list displayed on the screen.
  • the first scene-dependent background object region selection rules loaded in the first background object region selection processing will be described in detail below with reference to Fig. 14 .
  • Each rule of the first scene-dependent background object region selection rules includes a scene ID, determination conditions (the number of determination conditions, a determination condition start pointer), parameters (the number of parameters, a parameter start pointer), and an adoption condition. Note that the scene ID is as described above.
  • the determination conditions are required to select a background object region, and include, for example, a condition for determining whether or not the (average) duration of an object region is not less than a predetermined value (condition 11), a condition for determining whether or not an object region includes a human body region (condition 12), and the like.
  • the determination conditions as many as the number described as the number of determination conditions are defined, and can be read out and acquired in turn from an address pointed by the determination condition start pointer.
  • the parameters include parameter values such as a threshold used in the determination condition.
  • the parameters as many as the number described as the number of parameters are defined, and can be read out and acquired in turn from an address pointed by the parameter start pointer.
  • The adoption condition indicates how object regions are adopted as background object candidate regions depending on which determination conditions are satisfied.
  • the adoption condition includes adoption of only an object region which satisfies the determination conditions (ONLY), that of all object regions if at least one object region satisfies the determination conditions (ALL), and the like.
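  • For illustration, such a rule could be represented in memory as follows; the field names and the example values are assumptions, and the actual rule storage uses counts and start pointers as described above:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SelectionRule:
    scene_id: int                         # e.g. 1 = waiting room (assumed)
    determination_conditions: List[int]   # e.g. [11, 12] (duration, human body)
    parameters: Dict[str, float]          # thresholds used by the conditions
    adoption_condition: str               # "ONLY" or "ALL"

# Hypothetical rule: adopt only object regions whose duration exceeds a
# threshold and which include a human body region.
waiting_room_rule = SelectionRule(
    scene_id=1,
    determination_conditions=[11, 12],
    parameters={"duration_threshold": 300},
    adoption_condition="ONLY",
)
```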
  • One of the determination conditions of the loaded first scene-dependent background object region selection rule is acquired (step S1202).
  • One object region is acquired from the object region information ( Fig. 11 ) (step S1203). It is respectively checked in steps S1204 and S1206 whether or not predetermined determination conditions (11, 12) are designated. If determination condition 11 is designated (YES in step S1204), duration determination processing is executed in this example (step S1205) (details will be described later). If determination condition 12 is designated (YES in step S1206), human body presence/absence determination processing is executed in this example (step S1207) (details will be described later). A determination result is temporarily stored in the RAM 103 in association with the coordinates of the current object region as 1 when the determination condition is satisfied or as 0 in another case (step S1208).
  • It is determined in step S1209 whether or not the processing is complete for all object regions. If object regions to be processed still remain, the process returns to step S1203 to select the next object region. If it is determined that the processing is complete for all object regions (YES in step S1209), it is determined whether or not determination is complete for all determination conditions specified in the rule (step S1210). If determination is not complete yet, the process returns to step S1202 to select the next determination condition; otherwise, the process advances to step S1211.
  • Background object candidate regions are adopted according to the adoption rule specified in the rule, and the adopted object region information is output as background object candidate region information (step S1211).
  • Fig. 15 shows an example.
  • a background object ID is generated in turn from "1" for an object region selected as a background object.
  • Upper left coordinates and lower right coordinates of an object region are the same as those in the object region information ( Fig. 11 ).
  • For the human body detection (used in the human body presence/absence determination processing), a detection window having a predetermined size is scanned over the input image, and 2-class classification is executed for each pattern image obtained by clipping the image in the detection window, as to whether or not it is an object (human body).
  • a classifier is configured by effectively combining many weak classifiers using AdaBoost, thereby improving the classification precision.
  • The classifiers are connected in series to configure a cascade type detector.
  • Each weak classifier is configured by a HOG (Histogram of Oriented Gradients) feature amount.
  • The cascade type detector immediately removes candidates of patterns which are apparently not objects, using simple classifiers in the former stages. Then, only the remaining candidates are classified as to whether or not they are objects, using more complex classifiers with higher identification performance in the latter stages.
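  • As an illustrative stand-in for such a human body detector, OpenCV's built-in HOG person detector (a linear-SVM detector rather than the AdaBoost cascade described above) can be used to check whether an object region contains a person; the region format and threshold behaviour are assumptions of this sketch:

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def contains_human_body(frame_bgr, region):
    """region = (x1, y1, x2, y2): return True if a person is detected
    inside the given object region of the input frame."""
    x1, y1, x2, y2 = region
    roi = frame_bgr[y1:y2, x1:x2]
    rects, _ = hog.detectMultiScale(roi, winStride=(8, 8))
    return len(rects) > 0
```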
  • Details of the second feature amount extraction processing (second feature amount extraction unit 209) in step S307 will be described below with reference to Fig. 16 . This processing extracts feature amounts of a type suited to the scene from the background object candidate regions selected by the aforementioned first background object region selection processing.
  • a feature amount type according to the currently designated scene is acquired from scene-dependent feature amount type information exemplified in Fig. 17 (step S1601).
  • one background object candidate region (coordinates thereof) is acquired from the background object candidate region information ( Fig. 15 ) (step S1602).
  • Feature amounts are extracted from the background object candidate region of the current frame image. It is respectively checked in steps S1603 and S1605 whether or not predetermined feature amount types (feature amount 1, feature amount 2) are designated. If feature amount 1 is designated (YES in step S1603), SIFT feature amount extraction processing is executed in this example (step S1604). Details of SIFT feature amounts will be described later. If feature amount 2 is designated (YES in step S1605), HOG feature amount extraction processing is executed in this example (step S1606). Details of HOG feature amounts will be described later.
  • Extracted feature amounts are temporarily stored as feature amount information in the RAM 103 in association with a background object ID (step S1607).
  • Fig. 18 shows an example.
  • the number of feature amounts is that of feature amounts extracted from a region of the background object ID.
  • a feature amount pointer is a storage destination address of feature amounts. Feature amounts as many as the number of feature amounts can be read out in turn from an address pointed by the feature amount pointer. Feature amounts are stored in the order of coordinates and feature amounts together with the coordinates at which feature amounts are extracted.
  • It is determined in step S1608 whether or not the processing is complete for all background object candidate regions. If candidate regions to be processed still remain, the process returns to step S1602 to select the next background object candidate region.
  • If the processing is complete for all candidate regions, the extracted feature amount information is output (step S1609).
  • For further details of SIFT feature amounts, please refer to literature [ D.G. Lowe, "Object recognition from local scale-invariant features", Proc. of IEEE International Conference on Computer Vision (ICCV), pp. 1150-1157, 1999 ].
  • the SIFT feature amounts will be briefly described below.
  • a plurality of images, which are smoothed by a Gaussian function and have different scales, are generated, and an extremal value is detected from their difference image. From a point as this extremal value (to be referred to as a key point hereinafter), a feature is extracted.
  • a dominant gradient direction in the key point is decided, and a Gaussian window used to extract feature amounts is set with reference to that direction to fit the scale of the difference image from which the key point is extracted.
  • The extracted feature amounts are invariant to in-plane rotation and scale changes. Therefore, using these feature amounts, even when a distance change from the camera upon movement of a background object or a change in direction (in-plane rotation) of the object has occurred, the object can be expressed using identical feature amounts. Since new feature amounts need not be registered in the background object feature information every time such a change occurs, the SIFT feature amounts are suited to the waiting room scene.
  • The region around each key point is divided into 4 × 4 blocks, and gradient histograms in eight directions are calculated for the respective blocks. Therefore, a 128-dimensional feature amount is obtained per key point.
  • For further details of HOG feature amounts, please refer to literature [ N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection", Computer Vision and Pattern Recognition, Vol. 1, pp. 886-893, 2005 ]. HOG feature amounts will be briefly described below. A gradient image is calculated from an input image, and is divided into blocks each including 2 × 2 cells, each cell including 8 × 8 pixels. Edge strength histograms of nine directions are calculated in the respective cells. Therefore, a 36-dimensional feature amount is extracted per block. Since attention is focused on edge strengths for respective edge directions, the feature amounts are suited to expressing the shape of a door frame and the like.
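  • Both feature amount types can be extracted with OpenCV; the following sketch is illustrative only, and the window-size handling and the use of OpenCV's SIFT/HOG implementations are assumptions rather than the extractors of the embodiment:

```python
import cv2

def extract_second_feature_amounts(gray):
    """gray: 8-bit grayscale image of a background object candidate region."""
    # SIFT: 128-dimensional, scale- and in-plane-rotation-invariant
    # descriptors, suited to the waiting-room scene.
    sift = cv2.SIFT_create()
    keypoints, sift_descriptors = sift.detectAndCompute(gray, None)

    # HOG: 8x8-pixel cells, 2x2-cell blocks, 9 orientation bins, i.e. a
    # 36-dimensional feature per block, suited to shapes such as a door frame.
    h, w = (gray.shape[0] // 8) * 8, (gray.shape[1] // 8) * 8
    hog = cv2.HOGDescriptor((w, h), (16, 16), (8, 8), (8, 8), 9)
    hog_descriptor = hog.compute(gray[:h, :w])
    return keypoints, sift_descriptors, hog_descriptor
```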
  • Details of the second background object region selection processing (second selection unit 210) in step S308 will be described below with reference to Fig. 19 .
  • This processing further narrows down the background object candidate regions selected by the first background object region selection processing to partial regions of background objects.
  • Fig. 20 is a view for explaining a processing result of this processing.
  • Reference numeral 2001 denotes a background object candidate region corresponding to the region 1309 in Fig. 13 .
  • Reference numerals 2002 to 2012 denote points from which feature amounts are extracted by the second feature amount extraction processing. Of these points, the points 2002 to 2005 are extracted from the chair, and the points 2006 to 2012 are extracted from the person.
  • An object such as the person in this example, a dog, or an automobile is an object which autonomously moves (to be referred to as a moving object hereinafter), comes into and goes out of the video, and is not a background object. Therefore, a unit which detects a moving object region is used to remove moving object regions from the background object candidate regions.
  • the human body detection unit 212 calculates a human body region 2013, thus classifying feature amounts into those of the chair as a true background object and those in the human body region. More specifically, weights are given to respective feature amounts, so that weights for the feature amounts (2002 to 2005) of the background object are larger than those for the feature amounts (2006 to 2012) of the person. That is, a weight for each feature amount assumes a larger value as that feature amount is included in the background object with a higher possibility.
  • the second background object region selection processing outputs feature amounts with weights decided in this way. This processing will be described in detail below.
  • a second scene-dependent background object region selection rule corresponding to a scene ID designated by the user is referred to (step S1901).
  • the second scene-dependent background object region selection rules to be referred to by the second background object region selection processing will be described in detail below with reference to Fig. 21 .
  • Each rule of the second scene-dependent background object region selection rules includes a scene ID, determination conditions (the number of determination conditions, a determination condition start pointer), and parameters (the number of parameters, a parameter start pointer).
  • the scene ID is as described above.
  • Each determination condition is used to separate each background object region selected by the first background object region selection processing into a background object and other objects.
  • the determination conditions include a condition for determining whether or not a human body is included, and which region includes the human body if the human body is included (condition 21), a condition for determining whether parallel translation or out-of-plane rotation of an object is made (condition 22), and the like.
  • the determination conditions as many as the number described in the number of determination conditions are included, and can be read out and acquired in turn from an address pointed by the determination condition start pointer.
  • weights for feature amounts used upon generation of background object feature information are given to all feature amounts of the feature amount information ( Fig. 18 ) to obtain weighted feature amount information (exemplified in Fig. 22 ) (step S1902).
  • a weight assumes a value ranging from 0 to 1, and indicates a higher degree of a feature amount included in a background object as it is closer to 1.
  • 1 is given as an initial value.
  • One of the determination conditions acquired from the second scene-dependent background object region selection rule ( Fig. 21 ) is acquired (step S1903).
  • It is respectively checked whether or not predetermined determination conditions 21 and 22 are designated. If determination condition 21 is designated (YES in step S1905), human body region detection processing is executed in this example (step S1906). If determination condition 22 is designated (YES in step S1907), parallel translation/out-of-plane rotation determination processing is executed in this example (step S1908) (details will be described later). As a result of the determination, the weights for feature amounts included in a region which is determined not to be included in a background object are reduced. For the background object ID to be processed, the corresponding feature amounts are referred to, based on the coordinates of the selected region, from the weighted feature amount information ( Fig. 22 ), and their weights are reduced (by, for example, subtracting a fixed amount) (step S1909).
  • It is determined in step S1910 whether or not the processing is complete for all background object candidate regions. If background object candidate regions to be processed still remain, the process returns to step S1904 to select the next background object candidate region.
  • If it is determined that the processing for determining whether or not the determination condition specified in the rule is satisfied is complete for all background object candidate regions (YES in step S1910), it is determined whether or not determination is complete for all determination conditions specified in the rule (step S1911). If the determination is not complete yet, the control returns to step S1903 to select the next determination condition; otherwise, the process advances to step S1912. Then, weighted feature amount information ( Fig. 22 ) having weights decided based on the determination conditions as attributes is output (step S1912).
  • the chair as a typical background object in the waiting room is often parallelly translated or rotated by a person.
  • When the chair is (out-of-plane) rotated at an identical position, new features of the chair appear.
  • Since the new features are those of the background object itself, they are, as a matter of course, required to be registered as background object feature information.
  • On the other hand, when the chair is parallelly translated, since a region of a part of the background (to be referred to as a partial background hereinafter) which was hidden behind the chair generates a difference from the background model, it is unwantedly included in a background object candidate region.
  • Fig. 23 shows an example. In Fig. 23, reference numeral 2301 denotes a frame image input at an activation timing of this object detection apparatus, and a background model is generated while including a chair 2302.
  • Reference numeral 2303 denotes a detection result; obviously, nothing is detected at this timing.
  • A frame image denoted by reference numeral 2304 corresponds to a state after a certain time period has elapsed since the chair 2302 was parallelly translated by a person.
  • the chair 2302 is parallelly translated to the right. Then, a wall pattern 2305 hidden behind the chair 2302 appears.
  • a difference is also generated from a region which appears as a result of movement of the chair 2302 at a timing of the frame image 2304.
  • Reference numeral 2306 denotes a background difference result.
  • a hatched rectangular region 2307 indicates a region detected as an object.
  • a rectangular region 2308 bounded by a bold black frame in the region 2307 is a partial background region which is not the chair as a background object.
  • The movement determination unit 214 executes parallel translation/out-of-plane rotation determination processing for the background object candidate region as the current processing target (step S1908).
  • Details of the parallel translation/out-of-plane rotation determination processing in step S1908 will be described below with reference to Fig. 24 .
  • a previous frame image is acquired from the frame image storage unit 215 (step S2401).
  • the previous frame image to be acquired can be that before the object (the chair 2302 in Fig. 23 ) is moved.
  • For example, a method of selecting a frame image from a sufficiently long, fixed time period before may be used.
  • If object region information is stored in association with each frame image, the following method can also be used. That is, with reference to the previous object region information, a frame image at a timing before the object began to be detected in the region of the current frame in which the object is detected can be found.
  • an image may be reconstructed based on the background model. For example, if the background model is expressed by the DCT coefficients, inverse DCT transformation is executed to convert the background model into an image expressed by RGB values.
  • Feature amounts of a type corresponding to the current scene ID are acquired from the same region as the object region (the region 2307 in Fig. 23 ) as the current processing target in the acquired previous frame image (step S2402).
  • SIFT feature amounts are acquired.
  • The feature amounts acquired from the object regions of the previous frame image and the current frame image are compared (step S2403), and it is determined whether or not the background objects (2302 in Fig. 23 ) included in the two object regions match (step S2404).
  • If an adequate projection transform matrix can be calculated between the points of corresponding feature amounts in the object region of the current frame image and those in the object region of the previous frame image, it is determined that a similar positional relationship is maintained. Thus, it can be determined that the background objects (2302 in Fig. 23 ) in the current frame and the previous frame, which include the corresponding feature amounts, match.
  • If the two background objects match, it is considered that the background object (2302 in Fig. 23 ) was parallelly translated. At this time, non-corresponding feature amounts (extracted from the partial background region 2308 in Fig. 23 ) are output (step S2405). If the two background objects do not match, it is considered that new feature amounts appear due to out-of-plane rotation of the background object. At this time, it is considered that all feature amounts included in the object region as the current target form the background object.
  • Then, returning to Fig. 19, the weights of the non-corresponding feature amounts in the weighted feature amount information are reduced (by, for example, subtracting a fixed amount) (step S1909).
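  • A hedged sketch of this determination: SIFT descriptors are matched between the previous and current object regions and a projection transform (homography) is fitted with RANSAC; if enough matches are consistent, parallel translation is assumed and the unmatched key points (likely the exposed partial background) are reported for down-weighting. The matcher choice and the thresholds are assumptions, not values from the embodiment:

```python
import cv2
import numpy as np

def background_objects_match(prev_roi, curr_roi, min_inliers=10):
    """Compare the object region of the previous and current frame images.

    Returns (matched, unmatched_keypoints): matched is True when a projective
    transform relates the two regions (parallel translation); in that case
    unmatched_keypoints are current-frame key points with no consistent
    counterpart, i.e. likely the newly exposed partial background.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_roi, None)
    kp2, des2 = sift.detectAndCompute(curr_roi, None)
    if des1 is None or des2 is None or len(kp1) < 4 or len(kp2) < 4:
        return False, []                      # treat as out-of-plane rotation
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    if len(matches) < 4:
        return False, []
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None or int(inlier_mask.sum()) < min_inliers:
        return False, []
    matched_idx = {m.trainIdx for m, ok in zip(matches, inlier_mask.ravel()) if ok}
    unmatched = [kp for i, kp in enumerate(kp2) if i not in matched_idx]
    return True, unmatched
```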
  • Details of the background object feature information registration processing in step S309 will be described below with reference to Fig. 25 .
  • Feature amounts included in one background object are acquired from the weighted feature amount information ( Fig. 22 ) (step S2501).
  • the statistical amount generation unit 216 generates a histogram from the feature amounts (step S2502). This is known as "Bag of words" in literature [ J. Sivic and A. Zisserman, Video google: A text retrieval approach to object matching in videos, In Proc. ICCV, 2003 .] and the like. Assume that bins of the histogram are decided in advance by the following processing. Feature amounts acquired from various videos are clustered into the predetermined number (k) by vector quantization using a k-means method on a feature amount space. Each clustered unit will be referred to as a bin hereinafter. By generating the histogram, information of an extraction position of a feature amount is lost, but a change in feature amount caused by an illuminance variation, out-of-plane rotation, and the like can be absorbed.
  • It is checked whether or not the background object feature histogram has been generated for all feature amounts included in all background objects (step S2503). If NO in step S2503, the control returns to step S2501 to repeat generation of the background object feature histogram (step S2502).
  • one background object feature histogram is generated from all feature amounts included in all background object candidates.
  • the generated background object feature histogram is normalized using the total number of feature amounts multiplied by the weights (step S2504). This is because the numbers of feature amounts in the detected background object candidate regions are not constant depending on the number of background objects, out-of-plane rotation directions, and the like.
  • the normalized background object feature histogram is registered in the background object storage unit 218 as background object feature information (step S2505).
  • If background object feature information has already been registered, the two pieces of information are merged by dividing the sum total of the frequency values of the respective bins by 2.
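  • An illustrative sketch of this bag-of-words step: a k-means codebook is built offline, then the weighted feature amounts of the background object candidates are quantized into a normalized histogram. The value of k, the random stand-in training data, and the use of scikit-learn are assumptions of the sketch:

```python
import numpy as np
from sklearn.cluster import KMeans

# Offline: cluster feature amounts gathered from various videos into k bins.
k = 64
training_features = np.random.rand(5000, 128)   # stand-in for SIFT descriptors
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(training_features)

def background_object_histogram(features, weights):
    """Quantize each feature to its nearest cluster (bin), accumulate the
    weights, and normalize by the weighted total (cf. step S2504)."""
    bins = codebook.predict(features)
    hist = np.zeros(k)
    np.add.at(hist, bins, weights)
    return hist / weights.sum()

features = np.random.rand(200, 128)             # features of the candidates
weights = np.ones(200)                          # weights from Fig. 22
hist = background_object_histogram(features, weights)
print(hist.sum())                               # 1.0
```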
  • the background object feature information in an installation environment (scene) of this object detection apparatus is generated from all background object candidates detected in the registration phase.
  • background object regions can be selected from object regions detected once. Furthermore, by calculating a histogram from feature amounts extracted from all the selected regions, background object feature information robust against changes of background objects can be generated.
  • The processing in the operation phase will be described below with reference to Fig. 26 . A feature amount type corresponding to the current scene is extracted from the scene-dependent feature amount type information ( Fig. 17 ) in the rule storage unit 211 (step S2601).
  • one object region (coordinates thereof) is acquired from the object region information ( Fig. 11 ) (step S2602).
  • the second feature amount extraction unit 209 extracts feature amounts of the type obtained in step S2601 from the region of the input frame image corresponding to the acquired object region (step S2603), in the same manner as in step S307 of the registration phase.
  • a histogram is calculated from the extracted feature amounts to generate a background object feature histogram (step S2604), as in step S2502 of the registration phase.
  • the background object discrimination unit 219 compares the background object feature histogram obtained from the object region currently being processed with the registered background object feature information (step S2605) to determine whether or not the object region includes a background object (step S2606).
  • the histogram intersection described in [M.J. Swain and D.H. Ballard, "Color Indexing", International Journal of Computer Vision, Vol. 7, No. 1, pp. 11-32, 1991] is used as the similarity measure.
  • the histogram intersection is calculated by taking the minimum of each pair of corresponding bins of the two histograms and summing these minima.
  • the similarity is compared with a predetermined threshold, and if the similarity is higher than the threshold, the region is determined to contain a background object.
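  • A short sketch of this similarity test using the histogram intersection; the threshold value is an arbitrary placeholder, not a value specified by the patent:

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Sum of the minima of corresponding bins of two histograms."""
    return float(np.sum(np.minimum(h1, h2)))

def contains_background_object(region_hist, background_feature_hist, threshold=0.6):
    """Return True if the region histogram is similar enough to the registered
    background object feature information (cf. steps S2605 and S2606)."""
    return histogram_intersection(region_hist, background_feature_hist) > threshold
```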
  • If a background object is determined in step S2606, the background model update unit 205 is notified of the corresponding region. The corresponding region of the background model in the background model storage unit 204 is then added as background.
  • Specifically, the creation times of the pixels included in the corresponding region of the background model are changed to the time obtained by going back from the current time by the background conversion time threshold.
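  • The sketch below shows one way such a backdating update could look, assuming a background model that stores a creation time per pixel position; the dictionary layout and field names are assumptions, not the patent's actual data structure:

```python
import time

def absorb_region_into_background(background_model, region_pixels,
                                  background_conversion_threshold):
    """Backdate the creation times of the region's pixels so that they immediately
    satisfy the background conversion time threshold."""
    backdated_time = time.time() - background_conversion_threshold
    for (x, y) in region_pixels:
        background_model[(x, y)]["creation_time"] = backdated_time
```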
  • Step S2608: It is checked whether the processes of steps S2602 to S2607 have been completed for all detected object regions. If regions remain to be processed, the process returns to step S2602; otherwise, this processing ends and the next frame image to be processed is selected.
  • the background subtraction method in the embodiment is executed based on the duration for which each feature amount extracted from the video has been present in the video.
  • the present invention is not limited to this method, and various other methods are applicable.
  • For example, an input frame image captured at an initialization timing may be used intact as the background model and compared with subsequent input frame images; pixels whose differences are equal to or larger than a predetermined value are determined to belong to an object.
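  • A compact sketch of this alternative frame-differencing scheme; the grayscale input and the threshold value are assumptions made for the example:

```python
import numpy as np

class FrameDifferenceDetector:
    """Keep the frame captured at initialization as the background model and mark
    pixels whose absolute difference from it reaches a threshold as object pixels."""

    def __init__(self, threshold=25):
        self.background = None
        self.threshold = threshold

    def detect(self, frame):
        frame = frame.astype(np.int16)        # signed type avoids wrap-around on subtraction
        if self.background is None:
            self.background = frame           # initialization frame used intact
            return np.zeros(frame.shape, dtype=bool)
        diff = np.abs(frame - self.background)
        return diff >= self.threshold         # True where an object is detected
```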
  • In that case, a unit which calculates the duration of an object is required in order to generate background object feature information in the aforementioned waiting room scene.
  • Such a unit can be implemented by further including a tracking unit which associates object regions between frames based on their positions, feature amounts, and the like, as sketched below.
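  • One simple way such a tracking unit could be sketched is nearest-centroid association between frames, which is enough to accumulate a per-object duration; the distance threshold and the bookkeeping are illustrative assumptions:

```python
import math

class SimpleTracker:
    """Associate object regions across frames by centroid distance and record
    how long each associated object has been present."""

    def __init__(self, max_distance=50.0):
        self.max_distance = max_distance
        self.tracks = []   # each track: {"centroid": (x, y), "first_seen": timestamp}

    def update(self, centroids, timestamp):
        durations = []
        for centroid in centroids:
            best, best_dist = None, self.max_distance
            for track in self.tracks:
                dist = math.dist(centroid, track["centroid"])
                if dist < best_dist:
                    best, best_dist = track, dist
            if best is None:                    # no nearby track: a new object appeared
                best = {"centroid": centroid, "first_seen": timestamp}
                self.tracks.append(best)
            best["centroid"] = centroid         # update the matched track's position
            durations.append(timestamp - best["first_seen"])
        return durations                        # current duration of each object
```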
  • the background object feature histogram is used as the background object feature information.
  • the present invention is not limited to this.
  • For example, background object regions may be extracted from an input image, and their pixel data may be used as-is.
  • background object regions are selected by the first selection unit 208 and second selection unit 210.
  • the user may instead make the selection. For example, the following method is available: an input frame image is displayed on the display device 107, and the user designates background object regions via the input device 106. Alternatively, the background object regions selected by the first selection unit 208 and second selection unit 210 are temporarily displayed on the display device 107, and the user corrects the displayed regions via the input device 106.
  • the statistical amount generation unit 216 may generate a background object feature histogram from the background object regions obtained by the aforementioned method.
  • a region determined as a background object by the background object discrimination unit 219 is output to the background model update unit 205, which registers that region in the background model. Thus, subsequent detection errors are suppressed.
  • a region determined as a background object may be output to the object detection region output unit 207, which may delete that region from the object region information (Fig. 9), thereby suppressing detection errors output from the object detection apparatus.
  • respective devices are connected via the bus 109.
  • some devices may be connected via the network I/F 108.
  • the image input device may be connected via the network I/F 108.
  • all units may be implemented on an integrated circuit chip and integrated with the image input device 105.
  • aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s).
  • the program is provided to the computer, for example, via a network or from recording media of various types serving as the memory device (for example, a computer-readable medium).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)
  • Burglar Alarm Systems (AREA)
EP13004034.8A 2012-08-22 2013-08-13 Appareil de détection d'objets, son procédé de contrôle, programme et support de stockage Withdrawn EP2701094A3 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2012183596A JP6046948B2 (ja) 2012-08-22 2012-08-22 物体検知装置及びその制御方法、プログラム、並びに記憶媒体

Publications (2)

Publication Number Publication Date
EP2701094A2 true EP2701094A2 (fr) 2014-02-26
EP2701094A3 EP2701094A3 (fr) 2015-02-25

Family

ID=49033765

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13004034.8A Withdrawn EP2701094A3 (fr) 2012-08-22 2013-08-13 Appareil de détection d'objets, son procédé de contrôle, programme et support de stockage

Country Status (4)

Country Link
US (1) US9202126B2 (fr)
EP (1) EP2701094A3 (fr)
JP (1) JP6046948B2 (fr)
CN (1) CN103632379B (fr)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102576412B (zh) 2009-01-13 2014-11-05 华为技术有限公司 图像处理以为图像中的对象进行分类的方法和系统
JPWO2014017003A1 (ja) * 2012-07-25 2016-07-07 日本電気株式会社 更新領域検出装置
US9224071B2 (en) * 2012-11-19 2015-12-29 Microsoft Technology Licensing, Llc Unsupervised object class discovery via bottom up multiple class learning
US9953240B2 (en) * 2013-05-31 2018-04-24 Nec Corporation Image processing system, image processing method, and recording medium for detecting a static object
JP6532190B2 (ja) 2014-03-26 2019-06-19 キヤノン株式会社 画像検索装置、画像検索方法
CN103902976B (zh) * 2014-03-31 2017-12-29 浙江大学 一种基于红外图像的行人检测方法
JP6395423B2 (ja) * 2014-04-04 2018-09-26 キヤノン株式会社 画像処理装置、制御方法及びプログラム
US20160371847A1 (en) * 2014-07-24 2016-12-22 Bonanza.com, LLC Background profiles
US10157327B2 (en) * 2014-08-06 2018-12-18 Sony Semiconductor Solutions Corporation Image processing device, image processing method, and program
US9514523B2 (en) 2014-11-18 2016-12-06 Intel Corporation Method and apparatus for filling images captured by array cameras
WO2016139868A1 (fr) * 2015-03-04 2016-09-09 ノ-リツプレシジョン株式会社 Dispositif d'analyse d'image, procédé d'analyse d'image, et programme d'analyse d'image
JP6309913B2 (ja) * 2015-03-31 2018-04-11 セコム株式会社 物体検出装置
CN105336074A (zh) * 2015-10-28 2016-02-17 小米科技有限责任公司 报警方法及装置
EP3246874B1 (fr) 2016-05-16 2018-03-14 Axis AB Procédé et appareil de mise à jour d'un modèle de fond utilisé pour la soustraction d'arrière-plan d'une image
JP7085812B2 (ja) 2017-08-02 2022-06-17 キヤノン株式会社 画像処理装置およびその制御方法
US10580144B2 (en) 2017-11-29 2020-03-03 International Business Machines Corporation Method and system for tracking holographic object
CN111008992B (zh) * 2019-11-28 2024-04-05 驭势科技(浙江)有限公司 目标跟踪方法、装置和系统及存储介质
JP7542978B2 (ja) * 2020-04-01 2024-09-02 キヤノン株式会社 画像処理装置、画像処理方法およびプログラム

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7418134B2 (en) * 2003-05-12 2008-08-26 Princeton University Method and apparatus for foreground segmentation of video sequences
WO2005029264A2 (fr) * 2003-09-19 2005-03-31 Alphatech, Inc. Systemes et procedes de poursuite
JP4378261B2 (ja) 2004-10-27 2009-12-02 キヤノン株式会社 画像処理方法、画像処理装置
JP4633841B2 (ja) * 2005-04-18 2011-02-16 インテル コーポレイション 歩行者の追跡による、ビデオ系列からの3次元道路配置の推定
US7720283B2 (en) * 2005-12-09 2010-05-18 Microsoft Corporation Background removal in a live video
JP2007164690A (ja) * 2005-12-16 2007-06-28 Matsushita Electric Ind Co Ltd 画像処理装置及び画像処理方法
JP4636064B2 (ja) * 2007-09-18 2011-02-23 ソニー株式会社 画像処理装置および画像処理方法、並びにプログラム
CN101470802B (zh) * 2007-12-28 2012-05-09 清华大学 物体检测装置和方法
US8538063B2 (en) * 2008-05-08 2013-09-17 Utc Fire & Security System and method for ensuring the performance of a video-based fire detection system
US8160366B2 (en) * 2008-06-20 2012-04-17 Sony Corporation Object recognition device, object recognition method, program for object recognition method, and recording medium having recorded thereon program for object recognition method
TW201034430A (en) * 2009-03-11 2010-09-16 Inventec Appliances Corp Method for changing the video background of multimedia cell phone
WO2011017806A1 (fr) * 2009-08-14 2011-02-17 Genesis Group Inc. Trucage vidéo et d’images en temps réel
JP5236607B2 (ja) * 2009-09-25 2013-07-17 セコム株式会社 異常検知装置
US8699852B2 (en) * 2011-10-10 2014-04-15 Intellectual Ventures Fund 83 Llc Video concept classification using video similarity scores
JP6041515B2 (ja) 2012-04-11 2016-12-07 キヤノン株式会社 画像処理装置および画像処理方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003346156A (ja) 2002-05-23 2003-12-05 Nippon Telegr & Teleph Corp <Ntt> 物体検出装置、物体検出方法、プログラムおよび記録媒体
US20040001612A1 (en) * 2002-06-28 2004-01-01 Koninklijke Philips Electronics N.V. Enhanced background model employing object classification for improved background-foreground segmentation
US20070237387A1 (en) 2006-04-11 2007-10-11 Shmuel Avidan Method for detecting humans in images

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
D.G. Lowe, "Object Recognition from Local Scale-Invariant Features", Proc. of IEEE International Conference on Computer Vision (ICCV), 1999, pp. 1150-1157
J. Sivic and A. Zisserman, "Video Google: A Text Retrieval Approach to Object Matching in Videos", Proc. ICCV, 2003
M.J. Swain and D.H. Ballard, "Color Indexing", International Journal of Computer Vision, Vol. 7, No. 1, 1991, pp. 11-32
N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection", Computer Vision and Pattern Recognition, Vol. 1, 2005, pp. 886-893
Yu Nakagawa, Tomokazu Takahashi, Yoshito Mekada, Ichiro Ide, and Hiroshi Murase, "Landmark Symbol Detection in Real Environment by Multi-Template Generation", Proceedings of Dynamic Image Processing for Real Application Workshop (DIA2008), pp. 259-264

Also Published As

Publication number Publication date
US9202126B2 (en) 2015-12-01
CN103632379B (zh) 2017-06-06
JP2014041488A (ja) 2014-03-06
US20140056473A1 (en) 2014-02-27
EP2701094A3 (fr) 2015-02-25
JP6046948B2 (ja) 2016-12-21
CN103632379A (zh) 2014-03-12

Similar Documents

Publication Publication Date Title
EP2701094A2 (fr) Appareil de détection d&#39;objets, son procédé de contrôle, programme et support de stockage
CN110235138B (zh) 用于外观搜索的系统和方法
CN105938622B (zh) 检测运动图像中的物体的方法和装置
US10217010B2 (en) Information processing apparatus for registration of facial features in a collation database and control method of the same
US11663502B2 (en) Information processing apparatus and rule generation method
US7336830B2 (en) Face detection
US8737740B2 (en) Information processing apparatus, information processing method, and non-transitory computer-readable storage medium
JP5713790B2 (ja) 画像処理装置、画像処理方法、及びプログラム
US10079974B2 (en) Image processing apparatus, method, and medium for extracting feature amount of image
Wang et al. An effective method for plate number recognition
US10353954B2 (en) Information processing apparatus, method of controlling the same, and storage medium
Audebert et al. How useful is region-based classification of remote sensing images in a deep learning framework?
US11049256B2 (en) Image processing apparatus, image processing method, and storage medium
US10891740B2 (en) Moving object tracking apparatus, moving object tracking method, and computer program product
US10762133B2 (en) Information processing apparatus, method of controlling the same, and storage medium
KR101836811B1 (ko) 이미지 상호간의 매칭을 판단하는 방법, 장치 및 컴퓨터 프로그램
US8718362B2 (en) Appearance and context based object classification in images
KR102286571B1 (ko) 영상에서 다수의 객체를 인식하는 방법
JP2017084006A (ja) 画像処理装置およびその方法
US20210034915A1 (en) Method and apparatus for object re-identification
Loderer et al. Optimization of LBP parameters
JP2015187770A (ja) 画像認識装置、画像認識方法及びプログラム
Chen et al. Pose estimation based on human detection and segmentation
JP2011076575A (ja) 画像処理装置、画像処理方法及びプログラム
Akheel Detecting and Recognizing Face Images from Videos using Spherical Harmonics and RBF Kernel Techniques

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

RIC1 Information provided on ipc code assigned before grant

Ipc: G06K 9/00 20060101AFI20150119BHEP

17P Request for examination filed

Effective date: 20150825

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

17Q First examination report despatched

Effective date: 20170530

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20180928