WO2010018669A1 - Detection information registration device, target object detection device, electronic apparatus, control method for detection information registration device, control method for target object detection device, detection information registration device control program, and target object detection device control program - Google Patents
- Publication number
- WO2010018669A1 (PCT/JP2009/003767)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- detection
- target object
- information
- feature information
- feature
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01K—ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
- A01K11/00—Marking of animals
- A01K11/006—Automatic identification systems for animals, e.g. electronic devices, transponders for animals
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
- H04N23/634—Warning indications
- H04N23/635—Region indicators; Field of view indicators
Definitions
- The present invention relates to an apparatus that performs at least one of registration and detection of an object, in particular to a detection information registration apparatus that registers information on a target object, a target object detection apparatus that detects a target object based on the registered information, and an electronic device including them.
- The present invention also relates to a control method for the detection information registration apparatus, a control method for the target object detection apparatus, a detection information registration apparatus control program, and a target object detection apparatus control program.
- Patent Document 1 discloses a technique for performing various processes based on recognized subject information in an electronic camera having a face recognition function.
- In Patent Document 2, images taken by a plurality of cameras are input to a common image processing apparatus, which collates model data registered in a database against the captured images.
- Patent Document 3 describes a technique for estimating and modeling a change in the appearance of an object and collating the modeled data with input image information.
- Patent Document 4 describes a technique for an image recognition apparatus that detects targets in a monitored space: when the input image differs from the background image, a small-animal index is used to judge whether the change was caused by a small animal.
- Patent Document 5 discloses a technique for recognizing a vehicle using a heat-source image and discriminating and classifying its type.
- Patent Document 6 discloses a technique for extracting the outline of an animal's nose, together with the two nostrils and their respective outlines, from an image of the entire nose, and creating information for identifying the animal.
- Patent Document 1: Japanese Patent Laid-Open No. 2007-282119 (published October 25, 2007); Patent Document 2: Japanese Patent Laid-Open No. 2002-83297 (published March 22, 2002); Patent Document 3: Japanese Patent Laid-Open No. 2001-307096 (published November 2, 2001); Patent Document 4: Japanese Patent Laid-Open No. 2006-155167 (published June 15, 2006); Patent Document 5: Japanese Patent Laid-Open No. 8-16987 (published January 19, 1996); Patent Document 6: Japanese Patent Laid-Open No. 2007-135501 (published June 7, 2007)
- The technique of Patent Document 1 authenticates human faces; since, as described above, it is difficult in the first place to predefine an animal, applying this technique to animals is difficult.
- When the technique described in Patent Document 2 is applied to animals, a huge amount of model data must be registered. Furthermore, data must be registered from various orientations and positions, yet the user has no way of knowing how much data to register or from which orientations and positions, so registration is not easy for the user.
- The technique of Patent Document 3 requires accurately calculating information on the shooting environment (orientation, posture, and the like) in order to perform modeling at registration time. Moreover, cutting out an object from a still image is difficult and time-consuming, so registration is not easy.
- The present invention has been made in view of the above problems, and its object is to realize a detection information registration apparatus that allows a user to easily register an animal to be detected, a target object detection apparatus that detects the registered animal in an input image, and the like.
- In order to solve the above problems, a detection information registration apparatus according to the present invention registers detection feature information, that is, information that characterizes a target object and is used to detect the target object in a moving image obtained by shooting. The apparatus includes: a storage unit for storing information; image acquisition means for acquiring a frame image from the moving image; tracking feature extraction means for extracting, from the frame image, tracking feature information (information that characterizes the target object and is used to track it in the moving image) and storing it in the storage unit; target object region detection means for detecting the image region of the target object from the change between the tracking feature information extracted by the tracking feature extraction means and the tracking feature information for past frame images stored in the storage unit; detection feature extraction means for extracting the detection feature information from the image region detected by the target object region detection means; and detection feature registration means for registering part or all of the extracted detection feature information in the storage unit.
- Similarly, a control method for the detection information registration apparatus includes: an image acquisition step of acquiring a frame image from the moving image; a tracking feature extraction step of extracting, from the frame image, tracking feature information for tracking the target object included in the moving image and storing it in the storage unit; a target object region detection step of detecting the image region of the target object from the change between the tracking feature information extracted in the tracking feature extraction step and the tracking feature information for past frame images stored in the storage unit; a detection feature extraction step of extracting the detection feature information from the detected image region; and a detection feature registration step of registering part or all of the extracted detection feature information in the storage unit.
- According to the above configuration, a frame image is acquired from the moving image, and tracking feature information for tracking the target object is extracted from it and stored in the storage unit. The image region of the target object is then detected from the change between the stored past tracking feature information and the current tracking feature information, and the detection feature information is extracted from that region and stored in the storage unit.
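The registration flow just described (acquire a frame, extract tracking features, detect the changed region, extract detection features from it, and store them) can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: treating raw pixel intensities as the tracking features, the differencing threshold, and the plain list used as a storage unit are all assumptions.

```python
def detect_target_region(prev_features, curr_features, threshold=10):
    """Detect the target object region as the pixel indices whose tracking
    feature (here, simply intensity) changed between consecutive frames."""
    return [i for i, (p, c) in enumerate(zip(prev_features, curr_features))
            if abs(p - c) > threshold]

def register_from_frames(frames, storage):
    """Extract one detection feature per frame pair and store it."""
    prev = None
    for frame in frames:
        tracking = frame  # stand-in: tracking features are raw intensities
        if prev is not None:
            region = detect_target_region(prev, tracking)
            if region:
                # detection feature: mean intensity over the detected region
                feature = sum(frame[i] for i in region) / len(region)
                storage.append(feature)
        prev = tracking
    return storage

# Frames as flat intensity lists; the object moves through the middle pixels.
frames = [[0, 0, 0, 0], [0, 50, 60, 0], [0, 80, 90, 0]]
store = register_from_frames(frames, [])
```

With these toy frames, two detection features are stored, one per frame-to-frame change.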
- Therefore, the detection feature information for detecting the target object is registered simply by shooting the target object as a moving image, so the features of the target object can be registered easily.
- The detection information registration apparatus may further include: similarity calculation means for calculating the similarity between detection feature information extracted from an arbitrary captured image of a subject including the target object and the detection feature information stored in the storage unit; motion change amount calculation means for calculating the amount of change in the motion of the target object; and fulfillment degree calculation means for calculating a degree of fulfillment on the basis of the motion change amount and the similarity calculated by the similarity calculation means.
- According to the above configuration, the motion change amount calculation means calculates the amount of change in the motion of the target object in the frame image, based on the change between the tracking feature information extracted by the tracking feature extraction means and the tracking feature information for past frame images stored in the storage unit, and on the information on the target object region detected by the target object region detection means. The fulfillment degree calculation means then calculates the degree of fulfillment based on the motion change amount calculated by the motion change amount calculation means and the similarity calculated by the similarity calculation means.
- Examples of the amount of change in motion include the amount of movement and the amount of change in orientation.
- Accordingly, a degree of fulfillment is calculated that indicates how much detection feature information has been stored relative to the amount estimated to be necessary for detecting the target object.
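A minimal sketch of how such a degree of fulfillment might be computed from the two quantities named above. The multiplicative weighting is an assumption (the patent only states that both quantities feed the calculation); the intuition is that a frame contributes more when the object moved (a new pose was seen) and when its features were not already similar to what is stored (new information).

```python
def fulfillment_degree(motion_changes, similarities):
    """Accumulate a fulfillment score over the observed frames: each frame
    contributes its motion change weighted by how DISsimilar its features
    were to the features already stored."""
    score = 0.0
    for motion, sim in zip(motion_changes, similarities):
        score += motion * (1.0 - sim)
    return score

# Three frames: per-frame motion amounts, and similarity of each new
# feature to the features already in storage.
degree = fulfillment_degree([0.5, 1.0, 0.2], [0.9, 0.1, 0.95])
REGISTRATION_THRESHOLD = 0.5  # assumed threshold value
ready = degree > REGISTRATION_THRESHOLD
```

Here the second frame (large motion, low similarity) dominates the score and pushes the degree past the assumed threshold.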
- The detection information registration apparatus preferably further includes initial position acquisition means for acquiring, in advance, information on the initial position of the target object in the moving image and storing it in the storage unit. The tracking feature extraction means and the target object region detection means may then use the initial position information stored in the storage unit for the first frame image acquired by the image acquisition means.
- the initial position acquisition means acquires in advance information on the initial position of the target object and stores it in the storage unit. Then, the tracking feature extracting unit and the target object region detecting unit use the initial position information stored in the storage unit for the first frame image acquired by the image acquiring unit.
- the tracking feature extracting unit and the target object region detecting unit can more accurately detect the region where the target object exists.
- The detection information registration apparatus preferably further includes common feature specification means for specifying one or more items of common detection feature information shared by all or some of the plural items of detection feature information stored in the storage unit, and the detection feature registration means preferably also registers the common detection feature information specified by the common feature specification means in the storage unit.
- the common feature specifying unit specifies one or a plurality of common detection feature information common to all or a part of the plurality of detection feature information stored in the storage unit. Then, the specified common feature information for detection is stored in the storage unit.
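One simple way to realize the common feature specification described above is to keep only the feature components that stay stable across all stored per-frame feature vectors. This is a sketch under assumptions: the patent does not fix the feature representation, and the tolerance value is invented for illustration.

```python
def common_features(per_frame_features, tolerance=0.1):
    """Return {dimension: mean value} for every feature dimension whose
    value varies by at most `tolerance` across all frames, i.e. the
    features common to every stored view of the target object."""
    n = len(per_frame_features[0])
    common = {}
    for d in range(n):
        values = [f[d] for f in per_frame_features]
        if max(values) - min(values) <= tolerance:
            common[d] = sum(values) / len(values)
    return common

# Dimension 0 (e.g. fur colour) is stable across views; dimension 1
# (e.g. a pose-dependent edge feature) is not, so it is excluded.
stored = [[0.80, 0.1], [0.82, 0.9], [0.81, 0.5]]
shared = common_features(stored)
```

Only the stable dimension survives, which is exactly the property that makes the common feature useful for the coarse candidate search described later.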
- In the detection information registration apparatus, the detection feature information stored in the storage unit may be stored in association with identification information that identifies each of a plurality of target objects.
- According to the above configuration, the detection feature information is stored in association with the identification information, so feature information for a plurality of different target objects can be registered and managed separately.
- In order to solve the above problems, a target object detection device according to the present invention detects a target object included in a moving image obtained by shooting. The device includes: a storage unit that stores a plurality of items of detection feature information, that is, information that characterizes the target object and is used to detect it, together with one or more items of common detection feature information shared by all or some of those items; image acquisition means for acquiring a frame image from the moving image; detection feature extraction means for extracting detection feature information from the acquired frame image; candidate area search means for searching the frame image, using the extracted detection feature information and the common detection feature information in the storage unit, for candidate areas, that is, areas having detection feature information similar to the common detection feature information in which the target object may exist; similarity calculation means for calculating the similarity between the detection feature information of a searched candidate area and the detection feature information in the storage unit; and determination means for determining, based on the calculated similarity, whether the candidate area is an area where the target object exists in the frame image.
- Similarly, a control method for the target object detection device includes: an image acquisition step of acquiring a frame image from the moving image; a detection feature extraction step of extracting detection feature information from the acquired frame image; a candidate area search step of searching the frame image, using the extracted detection feature information and the one or more items of common detection feature information stored in the storage unit, for candidate areas having detection feature information similar to the common detection feature information in which the target object may exist; a similarity calculation step of calculating the similarity between the detection feature information of a searched candidate area and the detection feature information stored in the storage unit; and a determination step of determining, based on the calculated similarity, whether the candidate area is an area where the target object exists in the frame image.
- According to the above configuration and method, candidate areas, that is, areas having detection feature information similar to the common detection feature information, are first searched for in a frame image acquired from the moving image obtained by shooting.
- The similarity between the detection feature information of each searched candidate area and the stored detection feature information is then calculated, and based on the calculated similarity it is determined whether the candidate area is an area where the target object exists.
- Therefore, an area whose detection feature information is similar to the detection feature information stored in the storage unit can be determined to be an area where the target object exists.
- For example, if detection feature information for the user's pet dog has been registered, the region where that dog exists can be detected whenever the dog is within the imaging range while the target object detection device is capturing images.
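The two-stage detection described above (a coarse search with the common feature, then fine matching against the per-object features) can be sketched as follows. The scalar features, the region dictionary, and both thresholds are illustrative assumptions, not values from the patent.

```python
def detect(frame_regions, common_feature, stored_features,
           search_tol=0.2, decide_threshold=0.97):
    """Stage 1: keep regions whose feature is close to the common feature
    (candidate area search). Stage 2: accept a candidate only if it is
    sufficiently similar to some stored detection feature (determination)."""
    hits = []
    for name, feature in frame_regions.items():
        if abs(feature - common_feature) <= search_tol:       # stage 1
            sim = max(1.0 - abs(feature - s) for s in stored_features)
            if sim >= decide_threshold:                        # stage 2
                hits.append(name)
    return hits

# Three regions of one frame, each summarised by a scalar feature.
regions = {"sky": 0.1, "dog": 0.78, "cat": 0.70}
found = detect(regions, common_feature=0.8, stored_features=[0.80, 0.75])
```

The sky region is rejected by the coarse search; the cat passes the coarse search but fails the fine similarity check, so only the registered dog is detected.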
- The target object detection apparatus may further include tracking means for tracking the region where the desired object exists.
- the tracking means tracks an area where a desired object exists.
- An electronic device including the detection information registration device and the target object detection device can achieve the effects described above.
- the electronic apparatus may be provided with notifying means for notifying the user based on the degree of fulfillment calculated by the fulfillment degree calculating means.
- According to the above configuration, when the degree of fulfillment exceeds a predetermined threshold, the notification means notifies the user that registration can be completed; the user can thus recognize that a feature amount sufficient to identify the target object has been stored.
- The predetermined threshold is set such that, when the degree of fulfillment exceeds it, the feature amount stored in the feature amount storage unit is sufficient for identifying the target object.
- When the degree of fulfillment does not exceed the predetermined threshold, the notification means may notify the user of at least one of the fact that the threshold has not yet been exceeded and an operation instruction needed for the degree of fulfillment to exceed the threshold.
- According to the above configuration, the user can recognize at least one of the fact that the degree of fulfillment has not yet exceeded the predetermined threshold and the operation needed for it to do so.
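The notification behaviour described in the preceding paragraphs amounts to a small decision on the fulfillment value. The message strings and the threshold below are assumptions for illustration; the patent specifies only the two kinds of notification.

```python
def registration_message(fulfillment, threshold=1.0):
    """Return the notification shown to the user for a given degree of
    fulfillment: completion above the threshold, an operation instruction
    below it."""
    if fulfillment > threshold:
        return "Registration can be completed."
    # Below threshold: report the fact and instruct further operation.
    return ("Not enough views registered yet: "
            "keep filming the object from different angles.")

msg_done = registration_message(1.4)
msg_more = registration_message(0.3)
```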
- The detection information registration device and the target object detection device may be realized by a computer. In that case, the control programs that realize the devices on a computer by causing the computer to operate as each of the above means, and a computer-readable recording medium on which those programs are recorded, also fall within the scope of the present invention.
- FIG. 1 is a block diagram illustrating a configuration of a registration processing unit of a digital camera.
- FIG. 2 is a block diagram showing the configuration of the digital camera according to the embodiment, and FIG. 3 is an explanatory diagram showing the contents stored in the model information storage unit.
- An embodiment of the present invention will be described below with reference to FIGS. 1 to 12.
- FIG. 2 is a block diagram showing a configuration of the digital camera 1 according to the present embodiment.
- the digital camera 1 includes a storage unit 2, an operation reception unit 3, an imaging unit 4, a display unit 5, and a control unit 6.
- The digital camera 1 makes it easy to register an object to be detected (a target object), uses the registered information on the object (detection feature information) to detect the desired object in captured images, and notifies the user of the detection result.
- the storage unit 2 stores images taken by the digital camera 1, data used for detection processing, data used for tracking processing, and the like.
- The storage unit 2 includes a frame buffer 21, a model information storage unit (feature amount storage unit) 22, and a tracking information storage unit 23, the details of which will be described later.
- A specific example of the storage unit 2 is a flash memory; a RAM (Random Access Memory) or other memory can also be used.
- The operation accepting unit 3 accepts operations on the digital camera 1, for example an ID (identification information) designating the object to be registered, or an operation indicating the position of the target object on the display unit 5. The position of the target object may be specified directly on a touch-panel screen, or by means of a cursor displayed on the screen. Specific examples of the operation accepting unit 3 include various buttons and a touch panel.
- The imaging unit 4 captures a subject such as the target object and generates video data. Specifically, it includes an image sensor, such as a CCD (Charge-Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor) sensor, that converts light from the subject into an electrical signal, and a video processing circuit that converts the electrical signal into digital video data for each of the RGB colors. The imaging unit 4 transmits the generated video data to the control unit 6.
- the imaging unit 4 may have an AF (autofocus) function.
- the display unit 5 displays the image captured by the imaging unit 4, the state of the digital camera 1, the detection result, and the like.
- The display unit 5 may be implemented by any device capable of displaying information; specific examples include a liquid crystal display, an organic EL (Electro Luminescence) display, and a plasma display.
- the control unit 6 includes a frame image extraction unit 11, a registration processing unit 12, a tracking processing unit (tracking means) 13, and a detection processing unit 14. Then, the control unit 6 performs registration, tracking, and detection of an object to be detected. Details of the registration processing unit 12, the tracking processing unit 13, and the detection processing unit 14 will be described later.
- the frame image extraction unit 11 extracts a frame image from the video data transmitted from the imaging unit 4 and stores the extracted frame image in the frame buffer 21.
- the frame buffer 21 stores the frame image extracted by the frame image extraction unit 11.
- FIG. 1 is a block diagram illustrating a configuration of the registration processing unit 12 of the digital camera 1 according to the present embodiment.
- As shown in FIG. 1, the registration processing unit 12 includes a frame image acquisition unit (image acquisition means) 51, a detection feature amount extraction unit (target object region detection means, detection feature extraction means) 52, a similarity calculation unit (similarity calculation means) 53, a registration unit (detection feature registration means) 54, a fulfillment degree calculation unit (motion change amount calculation means, fulfillment degree calculation means) 55, a result output unit 56, a common feature amount extraction unit (common feature specification means) 57, and an initial position acquisition unit (initial position acquisition means) 58.
- the registration processing unit 12 extracts the feature amount (detection feature information) of the target object in the acquired frame image and stores it in the model information storage unit 22.
- The model information storage unit 22 stores the following items, as shown in FIG. 3. FIG. 3 is an explanatory diagram showing the contents stored in the model information storage unit 22.
- The model information storage unit 22 stores an ID identifying the target object, feature amount information indicating the feature amount of each frame of the target object, and common feature amount information indicating a feature amount common to the feature amounts of the frames.
- When a plurality of IDs exist, the above information is stored for each ID.
- the frame image acquisition unit 51 acquires a frame image from the frame buffer 21 and transmits it to the detection feature amount extraction unit 52.
- the detection feature quantity extraction unit 52 extracts the feature quantity of the target object from the frame image acquired from the frame image acquisition unit 51. Then, the extracted feature amount is transmitted to the similarity calculation unit 53.
- The feature quantity of the target object is extracted as follows. That is, the target object region is detected from the position of the target object designated by the user via the operation accepting unit 3 and from the change of the feature point information across frame images obtained by the tracking processing unit 13.
- Alternatively, the target object region may be detected by separating the background region from the target object region based on the image change across a plurality of frame images.
- the feature amount of the detected target object area is extracted.
- The feature amount expresses a feature of the target object, such as color information or edge information obtained from the image.
- Through the tracking process, the target object region can be continuously detected and its feature amount extracted. Details of the tracking process will be described later.
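The patent names only color and edge information as possible feature amounts; a minimal color-histogram feature sketch in Python follows. The function name, the bin count, and the normalization are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def extract_color_histogram(region, bins=8):
    """Quantize each RGB channel into `bins` levels and build a joint
    color histogram, normalized to sum to 1, as a simple feature vector."""
    region = np.asarray(region, dtype=np.uint8)            # H x W x 3 pixel block
    quantized = (region.astype(np.uint32) * bins) // 256   # 0..bins-1 per channel
    # Combine the three channel indices into one histogram bin index.
    idx = (quantized[..., 0] * bins + quantized[..., 1]) * bins + quantized[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()

# A flat red region concentrates all histogram mass in a single bin.
red = np.zeros((4, 4, 3), dtype=np.uint8)
red[..., 0] = 255
feature = extract_color_histogram(red)
```

Such a histogram is insensitive to small shifts of the region, which suits the repeated extraction performed while tracking.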
- The similarity calculation unit 53 calculates the similarity between the acquired feature amount and the feature amount of the same target object stored in the model information storage unit 22. The calculated result is then transmitted to the registration unit 54.
- When the similarity is within a predetermined range, the registration unit 54 stores the feature amount extracted by the detection feature amount extraction unit 52 in the model information storage unit 22 as a feature amount indicating the target object. Then, the registration unit 54 transmits information indicating that the registration has been completed to the fulfillment degree calculation unit 55.
- The feature amount is stored only when the similarity is within a predetermined range, for the following reason. If the similarity is too high, that is, if an already registered feature amount is too similar to the feature amount to be registered, registering it adds nothing. Conversely, if the similarity is too low, that is, if the already registered feature amounts are too different from the feature amount to be registered, it is inappropriate to treat them as representing the same target object.
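The registration gate just described can be sketched as follows. The histogram-intersection similarity and the `low`/`high` thresholds are hypothetical stand-ins for the patent's unspecified "predetermined range".

```python
import numpy as np

def similarity(f1, f2):
    """Histogram intersection: 1.0 for identical normalized histograms."""
    return float(np.minimum(f1, f2).sum())

def should_register(new_feature, registered_features, low=0.3, high=0.9):
    """Register only when the new feature is neither redundant (too similar
    to an existing entry) nor inconsistent (too dissimilar to every entry)."""
    if not registered_features:
        return True                      # the first entry is always registered
    sims = [similarity(new_feature, f) for f in registered_features]
    if max(sims) >= high:
        return False                     # too similar: registration adds nothing
    if max(sims) < low:
        return False                     # too different: likely not the same object
    return True
```

The gate keeps the registered set diverse without letting in features that contradict the already registered appearance of the object.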
- When the fulfillment degree calculation unit 55 acquires the information indicating that registration has been completed from the registration unit 54, it calculates a fulfillment degree indicating to what extent the feature amounts stored in the model information storage unit 22 can identify the target object. If the calculated fulfillment degree indicates that the target object can be identified, information notifying that registration can be completed is transmitted to the result output unit 56; otherwise, information notifying that further registration data is still required is transmitted.
- The fulfillment degree is calculated by estimating the orientation of the target object from the amount of change in the rotation angle obtained by the movement amount calculation, and from the difference between the feature amounts registered at the same orientation.
- In the following, the dimension of the feature quantity is n, the feature quantity of the i-th registered data is F_i, and the k-th element of F_i is F_i[k].
- Each registration data holds the estimated direction (rotation angle) from the registration start frame (first registration data).
- This rotation angle is expressed by rotations α (roll), β (pitch), and γ (yaw) about the three-dimensional x-, y-, and z-axes, respectively. For example, the rotation of the i-th registered data can be expressed as R[i] = Rz(γ_i)·Rx(α_i)·Ry(β_i).
- a rotation matrix from the i-th registered data to the j-th registered data can be expressed by the following equation.
- R[i→j] = R[j]·R[i]⁻¹
- The magnitude of the rotation angle at this time (the displacement in orientation between the i-th registered data and the j-th registered data) is as follows.
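Assuming the roll/pitch/yaw composition above, the relative rotation R[i→j] = R[j]·R[i]⁻¹ and its angle magnitude might be computed as below. Because the patent's own angle expression is not reproduced in this text, the trace-based formula arccos((trace(R) − 1)/2) is supplied here as a standard identity for the magnitude of a 3D rotation.

```python
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def R(alpha, beta, gamma):
    """Rotation of one registered frame: R = Rz(gamma) @ Rx(alpha) @ Ry(beta)."""
    return Rz(gamma) @ Rx(alpha) @ Ry(beta)

def rotation_between(Ri, Rj):
    """Relative rotation R[i->j] = R[j] @ R[i]^-1 (inverse = transpose)."""
    return Rj @ Ri.T

def rotation_angle(Rm):
    """Magnitude of a rotation: arccos((trace(R) - 1) / 2), in radians."""
    return float(np.arccos(np.clip((np.trace(Rm) - 1.0) / 2.0, -1.0, 1.0)))
```

For rotation matrices the inverse equals the transpose, which is why `Ri.T` can stand in for R[i]⁻¹.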
- the degree of fulfillment represents a variation of the registered data.
- The first method determines the fulfillment degree from the magnitude of the orientation displacement. It expresses the fulfillment degree as the sum of the magnitudes of the orientation displacements over all combinations of registered data, and can be obtained by the following equation.
- Second, the fulfillment degree can be obtained from the variation in the feature amounts. Here the fulfillment degree is calculated as the sum of the variations in the feature amounts over all combinations of registered data, and can be obtained by the following equation.
- Third, the fulfillment degree can be obtained by using both the magnitude of the orientation displacement and the variation in the feature amounts. For each combination of two registered data, the orientation displacement is used when it is large and the variation in the feature amounts is used otherwise, and the fulfillment degree is obtained as the sum over all combinations.
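The three pairwise fulfillment measures can be sketched as follows. Since the patent's equations are not reproduced in this text, the Euclidean feature distance and the `angle_threshold` switch in the combined method are illustrative assumptions.

```python
from itertools import combinations
import numpy as np

def fulfillment_by_orientation(angles):
    """Sum of pairwise orientation displacements over all registered data.
    `angles[i][j]` is the rotation angle of R[i->j] (precomputed)."""
    n = len(angles)
    return sum(angles[i][j] for i, j in combinations(range(n), 2))

def fulfillment_by_features(features):
    """Sum of pairwise feature-vector distances over all registered data."""
    return sum(float(np.linalg.norm(features[i] - features[j]))
               for i, j in combinations(range(len(features)), 2))

def fulfillment_combined(angles, features, angle_threshold=0.1):
    """Per pair: use the orientation displacement when it is large, otherwise
    fall back to the feature variation (one reading of the combined method;
    the patent leaves the exact rule unstated)."""
    total = 0.0
    for i, j in combinations(range(len(features)), 2):
        if angles[i][j] > angle_threshold:
            total += angles[i][j]
        else:
            total += float(np.linalg.norm(features[i] - features[j]))
    return total
```

In all three variants, a larger value means the registered data cover a wider spread of orientations or appearances, which is exactly what "fulfillment" measures.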
- The result output unit 56 causes the display unit 5 to notify the user of the content indicated by the information acquired from the fulfillment degree calculation unit 55.
- The common feature amount extraction unit 57 extracts a feature amount (detection common feature information) common to the per-frame feature amounts stored in the model information storage unit 22, and stores it in the model information storage unit 22 as a common feature amount indicating the target object.
- the initial position acquisition unit 58 acquires data indicating the position received by the operation reception unit 3 and transmits the data to the frame image acquisition unit 51.
- FIG. 4 is a block diagram illustrating a configuration of the tracking processing unit 13 of the digital camera 1 according to the present embodiment.
- The tracking processing unit 13 includes a moving region prediction unit 31, a feature point extraction unit (tracking feature extraction means) 32, a movement amount calculation unit 33, a tracking target area calculation unit 34, a frame information update unit 35, and a tracking information initial setting unit 36. The tracking processing unit 13 performs the tracking process using the tracking information stored in the tracking information storage unit 23.
- FIG. 5 is an explanatory diagram showing the tracking information stored in the tracking information storage unit 23.
- the tracking information storage unit 23 stores a tracking ID indicating the tracking target, a status indicating whether the tracking target has been detected and whether tracking is in progress, and frame information. These pieces of information are stored for each ID. That is, if there are a plurality of IDs, the information for the IDs is stored.
- The frame information includes position information indicating the center position coordinates of the tracking target, feature point information (tracking feature information), tracking target area information which is area information of the tracking target in the image, and a frame movement amount indicating the amount of movement from the initial frame and from the previous frame. When the status is "tracking", the information for the frame being tracked is stored; information for the past several frames may also be stored.
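The tracking record described above might be sketched as the following data structure; all class and field names are illustrative, not the patent's.

```python
from dataclasses import dataclass, field

@dataclass
class FrameInfo:
    """Per-frame tracking record, mirroring the items listed above."""
    center: tuple              # center position coordinates of the tracking target
    feature_points: list       # tracking feature information
    region: tuple              # tracking target area, e.g. (x, y, w, h)
    motion_from_initial: tuple
    motion_from_previous: tuple

@dataclass
class TrackingInfo:
    tracking_id: int
    status: str = "unrecognized"    # "unrecognized" / "recognized" / "tracking"
    frames: list = field(default_factory=list)   # recent frames, newest last
    max_history: int = 5

    def push(self, info: FrameInfo):
        self.frames.append(info)
        del self.frames[:-self.max_history]      # keep only the last few frames
```

Capping `frames` at a few entries matches the note that only the past several frames may be stored.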
- the movement region prediction unit 31 predicts the region to be tracked in the current frame from the amount of frame movement stored in the tracking information storage unit 23.
- the feature point extraction unit 32 extracts feature points to be tracked.
- A feature point is a point whose color, edge information, or the like represents a feature of the tracking target. Note that feature points are not limited to points and may be regions.
- the movement amount calculation unit 33 calculates the relative movement amount of the tracking target from the transition of the feature points extracted by the feature point extraction unit 32.
- The amount of movement of the tracking target can be expressed by, for example, the change in the relative position (x, y, z) and the rotation (α, β, γ) of the tracking target.
- the tracking target area calculation unit 34 separates the tracking target area and the background area from the transition of the feature points extracted by the feature point extraction unit 32, and specifies the tracking target area.
- the frame information update unit 35 updates the information stored in the tracking information storage unit 23 to the position of the feature point in the current frame, the feature point information, the tracking target area information, the frame movement amount, and the like.
- the tracking information initial setting unit 36 sets the information stored in the tracking information storage unit 23 to an initial value.
- the initial values are position information, feature point information, and tracking target area information in a frame when tracking is started.
- FIG. 6 is a block diagram illustrating a configuration of the detection processing unit 14 of the digital camera 1 according to the present embodiment.
- The detection processing unit 14 includes a frame image acquisition unit (image acquisition means) 41, a feature amount extraction unit (detection feature extraction means) 42, a candidate area search unit (candidate area search means) 43, a candidate area feature amount extraction unit 44, a similarity calculation unit (similarity calculation means) 45, a center position calculation unit (determination means) 46, and a result output unit (notification means) 47.
- the detection processing unit 14 searches the frame image where the target object stored in the model information storage unit 22 exists, and outputs the result.
- the frame image acquisition unit 41 acquires a frame image from the frame buffer 21. Then, the acquired frame image is transmitted to the feature amount extraction unit 42.
- the feature amount extraction unit 42 extracts the feature amount of the frame image acquired from the frame image acquisition unit 41.
- The candidate area search unit 43 scans the frame image acquired by the frame image acquisition unit 41 and, using the common feature amount stored in the model information storage unit 22, searches for areas where the target object is likely to exist (candidate areas). The candidate areas found by the search are transmitted to the candidate area feature amount extraction unit 44. Note that a plurality of areas may be found in one frame image.
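A sliding-window candidate search like the one just described might look as follows. The per-cell feature grid, the window parameters, and the histogram-intersection similarity are assumptions for illustration, not the patent's concrete method.

```python
import numpy as np

def search_candidate_regions(frame_features, common_feature, window, stride, threshold):
    """Scan a grid of per-cell feature vectors (frame_features: H x W x n),
    average each window's features, and keep windows whose similarity to the
    common feature reaches `threshold`. Returns (y, x, similarity) tuples."""
    H, W, n = frame_features.shape
    wh, ww = window
    candidates = []
    for y in range(0, H - wh + 1, stride):
        for x in range(0, W - ww + 1, stride):
            region = frame_features[y:y + wh, x:x + ww].reshape(-1, n)
            mean_feat = region.mean(axis=0)
            # Histogram intersection as a cheap similarity to the common feature.
            sim = float(np.minimum(mean_feat, common_feature).sum())
            if sim >= threshold:
                candidates.append((y, x, sim))
    return candidates
```

Several windows can pass the threshold in one frame, matching the note that a plurality of areas may be found in one frame image.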
- the candidate area feature quantity extraction unit 44 extracts the feature quantity of the candidate area acquired from the candidate area search unit 43. Then, the extracted feature amount is transmitted to the similarity calculation unit 45.
- the similarity calculation unit 45 compares the feature amount acquired from the candidate area feature amount extraction unit 44 with the feature amount stored in the model information storage unit 22, and calculates the similarity.
- The center position calculation unit 46 regards the candidate area found by the candidate area search unit 43 as the area where the target object exists, and calculates its center position. Information indicating the center position is then transmitted to the result output unit 47.
- the result output unit 47 causes the display unit 5 to display the center position acquired from the center position calculation unit 46.
- FIG. 7 is a flowchart showing the flow of target object registration processing in the digital camera 1.
- The frame image extraction unit 11 extracts a frame image from the moving image being captured (S702). The registration processing unit 12 then determines whether the registration is completed (S703). If the registration is not completed (NO in S703), the tracking processing unit 13 determines whether the frame image extracted by the frame image extraction unit 11 is an initial frame (S704).
- If the frame is an initial frame (YES in S704), the tracking processing unit 13 creates a tracking ID and initializes the tracking process (S706). The initialization of the tracking process will be described later. The process then returns to S702. On the other hand, if it is not the initial frame (NO in S704), the tracking processing unit 13 starts the tracking process (S707). The tracking process will be described later.
- The detection feature amount extraction unit 52 checks the status and frame information stored in the tracking information storage unit 23 (S708) and determines whether the status is valid (S709). If the status is invalid (NO in S709), the display unit 5 displays an error (S719) and the registration process ends. If the status is valid (YES in S709), the detection feature amount extraction unit 52 extracts the feature amount of the target object (S710).
- A status error occurs when the tracking process has not completed normally.
- The status indicates the tracking state and takes one of three values: "unrecognized", "recognized", and "tracking".
- "Unrecognized" indicates the initial state.
- "Recognized" indicates a state in which tracking has been initialized but has not yet started.
- "Tracking" indicates that tracking is in progress.
- the status changes from “unrecognized” to “recognized” when the process proceeds from S705 to S706, or when the process proceeds from S1013 to S1014 described later.
- the status changes from “Recognized” to “Tracking” in S802 to be described later.
- the status changes from “tracking” to “unrecognized” before S716 or before S907 described later.
- the similarity calculation unit 53 compares the feature quantity of the target object extracted by the detection feature quantity extraction unit 52 with the feature quantity of the same target object that has already been registered, and calculates the similarity (S711). If the similarity is not within the predetermined range (NO in S712), the process returns to S702 without performing registration. On the other hand, if the similarity is within a predetermined range (YES in S712), the registration unit 54 stores the feature quantity of the target object in the model information storage unit 22 (S713).
- The fulfillment degree calculation unit 55 calculates the fulfillment degree of the feature amounts stored in the model information storage unit 22 (S714).
- the display unit 5 displays the result (S715). Then, the process returns to S702.
- When the registration is completed (YES in S703), the common feature amount extraction unit 57 calculates the common feature amount of the per-frame feature amounts stored in the model information storage unit 22 (S716). The ID of the target object is then received (S717), the display unit 5 displays the registration result (S718), and the registration process ends.
- FIG. 8 is a flowchart showing a flow of initialization of the tracking process in the digital camera 1.
- the feature point extraction unit 32 extracts a feature point to be tracked (S801). Then, the tracking information initial setting unit 36 stores only the position information of the feature points extracted by the feature point extraction unit 32 in the tracking information storage unit 23, and resets other information (S802). This is the end of the initialization process of the tracking process.
- FIG. 9 is a flowchart showing the flow of the tracking process in the digital camera 1.
- The tracking processing unit 13 predicts the position in the current frame from the movement amount of the tracking target in past frames (S901). The feature point extraction unit 32 then extracts feature points (S902), and the movement amount calculation unit 33 calculates the movement amount of the tracking target from the change in the positions of the feature points between the past frame and the current frame (S903). The tracking target area calculation unit 34 then calculates a matching degree by comparing the images of the previous frame and the current frame, and determines whether the calculated matching degree is larger than a reference value (S904). If the matching degree is equal to or less than the reference value (NO in S904), the frame information update unit 35 clears the tracking information stored in the tracking information storage unit 23. This is because tracking is considered impossible when the previous frame and the current frame differ too much.
- If the matching degree is larger than the reference value (YES in S904), the tracking target area calculation unit 34 calculates the boundary between the tracking target and the background from the movement amount calculated by the movement amount calculation unit 33, and thereby calculates the tracking target area (S905). The frame information update unit 35 then updates the frame information (S906).
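One tracking iteration (S901 to S906) might be sketched as below, reducing the movement amount to a mean translation of the feature points. This simplification and all names are illustrative; the patent's actual movement model also covers rotation.

```python
import numpy as np

def track_step(prev_points, curr_points, prev_region, match_degree, reference=0.5):
    """One tracking iteration: estimate translation from feature-point motion,
    shift the tracked region accordingly, and abandon tracking when the two
    frames match too poorly. Point lists are pairs of (x, y) coordinates."""
    if match_degree <= reference:
        return None                                   # S904 NO: clear tracking info
    motion = np.asarray(curr_points, dtype=float).mean(axis=0) \
        - np.asarray(prev_points, dtype=float).mean(axis=0)
    x, y, w, h = prev_region
    new_region = (x + float(motion[0]), y + float(motion[1]), w, h)      # S905
    return {"region": new_region,                                        # S906 update
            "motion": tuple(float(m) for m in motion)}
```

Returning `None` corresponds to clearing the tracking information when the consecutive frames are too different to track.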
- FIG. 10 is a flowchart showing a flow of processing for detecting a target object in the digital camera 1.
- the frame image extraction unit 11 extracts a frame image from the moving image being captured (S1001).
- the detection processing unit 14 determines whether or not an ID indicating the target object is registered in the model information storage unit 22 (S1002). If the ID is not registered (NO in S1002), the result is displayed as it is (S1016). On the other hand, if the ID is registered (YES in S1002), the feature amount extraction unit 42 extracts the feature amount of the frame image (S1003). Then, the tracking processing unit 13 checks the status of the tracking information (S1004). If the status is tracking (YES in S1005), the tracking processing is performed (S1006). On the other hand, if the status is not tracking (NO in S1005), the candidate area search unit 43 searches for a candidate area that is an area where the target object is considered to exist from the frame image (S1007).
- The candidate area feature amount extraction unit 44 extracts the feature amount of a candidate area (S1009). The similarity calculation unit 45 then compares the extracted feature amount with the feature amounts stored in the model information storage unit 22 and calculates the similarity (S1010). When feature amount extraction and similarity calculation have been completed for all candidate areas (YES in S1011), the similarity calculation unit 45 determines whether the maximum of the calculated similarities is equal to or greater than a threshold (S1012).
- If the maximum similarity is equal to or greater than the threshold (YES in S1012), the center position calculation unit 46 calculates the center position of the corresponding candidate area (S1013).
- a display indicating the center position is performed (S1016).
- As described above, when registering a target object, the target object is photographed as a moving image, and the position where the target object exists is received in the frame in which registration starts. By observing the change across the frames of the moving image, the target object can be separated from the background and the target object area can be determined. The target object can therefore be registered easily.
- a region (candidate region) in which the target object is considered to exist in the captured image is searched using the common feature amount of the registered feature amount for each frame. Then, by comparing the feature quantity of the searched candidate area with the feature quantity for each frame, it is determined whether the candidate area is an area where the target object exists. Thereby, a target object can be detected easily.
- Once detected, the target object can be tracked automatically by the tracking process. Since tracking continues even through postures and orientations that have not been registered, the area where the target object exists can still be detected.
- FIG. 11 is an explanatory diagram for the case of registering a dog.
- FIG. 11A shows a state in which one point in the area where the dog exists is designated, and
- FIG. 11B shows a state in which the area of the dog to be registered has been determined.
- one point in the area where the dog exists is designated as the designated point 110. Then, the dog is followed by the method described above, and the feature amount is extracted and registered from the region where the dog exists (the region surrounded by the thick line).
- FIG. 12 is an explanatory diagram of a case where a dog is detected.
- FIG. 12A shows a state in which the dog is present in the image being shot, FIG. 12B shows the searched candidate areas, and FIG. 12C shows the area where the detected dog exists.
- As shown in FIG. 12A, when a registered dog is present in the image being photographed and a candidate area search is performed, areas such as those shown in FIG. 12B are found. The feature amounts of the searched candidate areas are then compared with the registered feature amounts of the dog, and if it is determined that the dog exists, a display indicating the area where the dog exists is performed as shown in FIG. 12C.
- each block of the digital camera 1, particularly the control unit 6, may be configured by hardware logic, or may be realized by software using a CPU (central processing unit) as follows.
- The digital camera 1 includes a CPU that executes the instructions of a control program realizing each function, a ROM (read-only memory) that stores the program, a RAM (random access memory) into which the program is loaded, and a storage device (recording medium) such as a memory that stores the program and various data.
- The object of the present invention can also be achieved by supplying to the digital camera 1 a recording medium on which the program code (an executable program, an intermediate code program, or a source program) of the control program of the digital camera 1, which is software realizing the functions described above, is recorded in a computer-readable manner, and by having the computer (or a CPU or an MPU (microprocessor unit)) read and execute the program code recorded on the recording medium.
- Examples of the recording medium include tape systems such as magnetic tapes and cassette tapes; disk systems including magnetic disks such as floppy (registered trademark) disks and hard disks, and optical discs such as CD-ROM (compact disc read-only memory), MO (magneto-optical), MD (Mini Disc), DVD (digital versatile disc), and CD-R (CD Recordable); card systems such as IC cards (including memory cards) and optical cards; and semiconductor memory systems such as mask ROM, EPROM (erasable programmable read-only memory), EEPROM (electrically erasable programmable read-only memory), and flash ROM.
- the digital camera 1 may be configured to be connectable to a communication network, and the program code may be supplied via the communication network.
- the communication network is not particularly limited.
- For example, the Internet, an intranet, an extranet, a LAN (local area network), an ISDN (integrated services digital network), a VAN (value-added network), a CATV (community antenna television) communication network, a virtual private network, a telephone line network, a mobile communication network, a satellite communication network, and the like can be used.
- the transmission medium constituting the communication network is not particularly limited.
- For example, wired media such as IEEE 1394, USB, power line carrier, cable TV lines, telephone lines, and ADSL (asymmetric digital subscriber line) lines can be used, as well as wireless media such as infrared light conforming to IrDA (Infrared Data Association) or used for remote control, Bluetooth (registered trademark), IEEE 802.11 wireless, HDR (high data rate), a mobile phone network, a satellite line, and a terrestrial digital network.
- The present invention can also be realized in the form of a computer data signal embedded in a carrier wave, in which the program code is embodied by electronic transmission.
Abstract
Description
R[i] = Rz(γ_i)·Rx(α_i)·Ry(β_i)
R[i→j] = R[j]·R[i]⁻¹
The magnitude of the rotation angle at this time (the displacement in orientation between the i-th registered data and the j-th registered data) is as follows.
2 Storage unit
3 Operation accepting unit
4 Imaging unit
5 Display unit
6 Control unit
11 Frame image extraction unit
12 Registration processing unit
13 Tracking processing unit (tracking means)
14 Detection processing unit
21 Frame buffer
22 Model information storage unit
23 Tracking information storage unit
31 Moving region prediction unit
32 Feature point extraction unit (tracking feature extraction means)
33 Movement amount calculation unit
34 Tracking target area calculation unit
35 Frame information update unit
36 Tracking information initial setting unit
41, 51 Frame image acquisition unit (image acquisition means)
42 Feature amount extraction unit (detection feature extraction means)
43 Candidate area search unit (candidate area search means)
44 Candidate area feature amount extraction unit
45, 53 Similarity calculation unit (similarity calculation means)
46 Center position calculation unit (determination means)
47, 56 Result output unit (notification means)
52 Detection feature amount extraction unit (target object area detection means, detection feature extraction means)
54 Registration unit (detection feature registration means)
55 Fulfillment degree calculation unit (motion change amount calculation means, fulfillment degree calculation means)
57 Common feature amount extraction unit (common feature specifying means)
58 Initial position acquisition unit (initial position acquisition means)
Claims (13)
- A detection information registration device that registers detection feature information, which is information for detecting a target object included in a moving image obtained by shooting and which characterizes the target object, the device comprising:
a storage unit that stores information;
image acquisition means for acquiring a frame image in the moving image;
tracking feature extraction means for extracting, from the frame image, tracking feature information, which is information for tracking the target object included in the moving image and which characterizes the target object, and storing the tracking feature information in the storage unit;
target object area detection means for detecting the image area of the target object from the change between the tracking feature information extracted by the tracking feature extraction means and the tracking feature information of past frame images stored in the storage unit;
detection feature extraction means for extracting the detection feature information from the image area of the target object detected by the target object area detection means; and
detection feature registration means for registering part or all of the detection feature information extracted by the detection feature extraction means in the storage unit.
- The detection information registration device according to claim 1, further comprising:
motion change amount calculation means for calculating the amount of change in motion of the target object in the frame images on the basis of the change between the tracking feature information extracted by the tracking feature extraction means and the tracking feature information of past frame images stored in the storage unit, and of the information on the target object area detected by the target object area detection means;
similarity calculation means for calculating the similarity between the detection feature information extracted by the detection feature extraction means and the detection feature information stored in the storage unit; and
fulfillment degree calculation means for calculating, on the basis of the amount of change in motion calculated by the motion change amount calculation means and the similarity calculated by the similarity calculation means, a fulfillment degree indicating the degree of the amount of detection feature information stored in the storage unit relative to the amount of detection feature information estimated to be necessary for detecting the target object in an arbitrary captured image of a subject including the target object.
- The detection information registration device according to claim 1 or 2, further comprising initial position acquisition means for acquiring in advance information on the initial position of the target object in the moving image and storing the information in the storage unit,
wherein the tracking feature extraction means and the target object area detection means use the initial position information stored in the storage unit for the first frame image acquired by the image acquisition means.
- The detection information registration device according to any one of claims 1 to 3, further comprising common feature specifying means for specifying one or more pieces of detection common feature information common to all or part of the plurality of pieces of detection feature information stored in the storage unit,
wherein the detection feature registration means further registers the detection common feature information specified by the common feature specifying means in the storage unit.
- The detection information registration device according to any one of claims 1 to 4, wherein the detection feature information stored in the storage unit is stored in association with identification information for identifying a plurality of the target objects.
- A target object detection device that detects a target object included in a moving image obtained by shooting, the device comprising:
a storage unit that stores a plurality of pieces of detection feature information, which is information for detecting the target object and which characterizes the target object, and one or more pieces of detection common feature information common to all or part of the plurality of pieces of detection feature information;
image acquisition means for acquiring a frame image in the moving image;
detection feature extraction means for extracting detection feature information from the frame image acquired by the image acquisition means;
candidate area search means for searching, using the detection feature information extracted by the detection feature extraction means and the detection common feature information in the storage unit, for an area having detection feature information similar to the detection common feature information as a candidate area, which is an area where the target object may exist in the frame image;
similarity calculation means for calculating the similarity between the detection feature information included in the candidate area searched for by the candidate area search means and the detection feature information in the storage unit; and
determination means for determining, on the basis of the similarity calculated by the similarity calculation means, whether the candidate area is an area where the target object exists in the frame image.
- The target object detection device according to claim 6, further comprising tracking means for tracking the area where the target object exists.
- An electronic device comprising the detection information registration device according to any one of claims 1 to 5 and the target object detection device according to claim 6 or 7.
- An electronic device comprising the detection information registration device according to any one of claims 2 to 5 and the target object detection device according to claim 6 or 7,
the electronic device comprising notification means for notifying a user on the basis of the fulfillment degree calculated by the fulfillment degree calculation means.
- A method of controlling a detection information registration device that registers detection feature information, which is information for detecting a target object included in a moving image obtained by shooting and which characterizes the target object, the method comprising:
an image acquisition step of acquiring a frame image in the moving image;
a tracking feature extraction step of extracting, from the frame image, tracking feature information, which is information for tracking the target object included in the moving image and which characterizes the target object, and storing the tracking feature information in a storage unit;
a target object area detection step of detecting the image area of the target object from the change between the tracking feature information extracted in the tracking feature extraction step and the tracking feature information of past frame images stored in the storage unit;
a detection feature extraction step of extracting the detection feature information from the image area of the target object detected in the target object area detection step; and
a detection feature registration step of registering part or all of the detection feature information extracted in the detection feature extraction step in the storage unit.
- A method of controlling a target object detection device that detects a target object included in a moving image obtained by shooting, the method comprising:
an image acquisition step of acquiring a frame image in the moving image;
a detection feature extraction step of extracting detection feature information from the frame image acquired in the image acquisition step;
a candidate area search step of searching, using the detection feature information extracted in the detection feature extraction step and one or more pieces of detection common feature information common to all or part of a plurality of pieces of detection feature information characterizing the target object stored in a storage unit, for an area having detection feature information similar to the detection common feature information as a candidate area, which is an area where the target object may exist in the frame image;
a similarity calculation step of calculating the similarity between the detection feature information included in the candidate area searched for in the candidate area search step and the detection feature information stored in the storage unit; and
a determination step of determining, on the basis of the similarity calculated in the similarity calculation step, whether the candidate area is an area where the target object exists in the frame image.
- A control program for a detection information registration device that registers detection feature information, which is information for detecting a target object included in a moving image obtained by shooting and which characterizes the target object, the program causing a computer to execute:
an image acquisition step of acquiring a frame image in the moving image;
a tracking feature extraction step of extracting, from the frame image, tracking feature information, which is information for tracking the target object included in the moving image and which characterizes the target object, and storing the tracking feature information in a storage unit;
a target object area detection step of detecting the image area of the target object from the change between the tracking feature information extracted in the tracking feature extraction step and the tracking feature information of past frame images stored in the storage unit;
a detection feature extraction step of extracting the detection feature information from the image area of the target object detected in the target object area detection step; and
a detection feature registration step of registering part or all of the detection feature information extracted in the detection feature extraction step in the storage unit.
- A control program for a target object detection device that detects a target object included in a moving image obtained by shooting, the program causing a computer to execute:
an image acquisition step of acquiring a frame image in the moving image;
a detection feature extraction step of extracting detection feature information from the frame image acquired in the image acquisition step;
a candidate area search step of searching, using the detection feature information extracted in the detection feature extraction step and one or more pieces of detection common feature information common to all or part of a plurality of pieces of detection feature information characterizing the target object stored in a storage unit, for an area having detection feature information similar to the detection common feature information as a candidate area, which is an area where the target object may exist in the frame image;
a similarity calculation step of calculating the similarity between the detection feature information included in the candidate area searched for in the candidate area search step and the detection feature information stored in the storage unit; and
a determination step of determining, on the basis of the similarity calculated in the similarity calculation step, whether the candidate area is an area where the target object exists in the frame image.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP09806565.9A EP2323103B1 (en) | 2008-08-11 | 2009-08-06 | Detective information registration device, target object detection device, method of controlling detective information registration device, method of controlling targer object detection device, control program for detective information registration device, and control program for target object detection device |
- CN200980128887.XA CN102105904B (zh) | 2008-08-11 | 2009-08-06 | Detection information registration device, target object detection device, electronic device, control method of detection information registration device, control method of target object detection device, detection information registration device control program, and target object detection device control program |
US13/058,300 US8774456B2 (en) | 2008-08-11 | 2009-08-06 | Detective information registration device and target object detection device for detecting an object in an image |
KR1020117004393A KR101166994B1 (ko) | 2008-08-11 | 2009-08-06 | 검출용 정보 등록 장치, 대상물체 검출 장치, 전자 기기, 검출용 정보 등록 장치의 제어 방법, 대상물체 검출 장치의 제어 방법, 검출용 정보 등록 장치 제어 프로그램, 대상물체 검출 장치 제어 프로그램 |
EP15152474.1A EP2892027B1 (en) | 2008-08-11 | 2009-08-06 | Detective information registration device, method of controlling detective information registration device and control program for detective information registration device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
- JP2008207259A JP4497236B2 (ja) | 2008-08-11 | 2008-08-11 | Detection information registration device, electronic device, control method of detection information registration device, control method of electronic device, detection information registration device control program, and electronic device control program |
JP2008-207259 | 2008-08-11 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010018669A1 true WO2010018669A1 (ja) | 2010-02-18 |
Family
ID=41668820
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
- PCT/JP2009/003767 WO2010018669A1 (ja) | Detection information registration device, target object detection device, electronic device, control method of detection information registration device, control method of target object detection device, detection information registration device control program, and target object detection device control program | 2008-08-11 | 2009-08-06 |
Country Status (7)
Country | Link |
---|---|
US (1) | US8774456B2 (ja) |
EP (2) | EP2892027B1 (ja) |
JP (1) | JP4497236B2 (ja) |
KR (1) | KR101166994B1 (ja) |
CN (1) | CN102105904B (ja) |
TW (1) | TWI438719B (ja) |
WO (1) | WO2010018669A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2012073997A (ja) * | 2010-09-01 | 2012-04-12 | Ricoh Co Ltd | Object tracking device, object tracking method, and program therefor |
- KR101180570B1 (ko) | 2008-09-25 | 2012-09-06 | Rakuten, Inc. | Foreground area extraction program, foreground area extraction device, and foreground area extraction method |
- CN111612822A (zh) * | 2020-05-21 | 2020-09-01 | Guangzhou Haige Communications Group Inc. | Object tracking method and apparatus, computer device, and storage medium |
Families Citing this family (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100259614A1 (en) * | 2009-04-14 | 2010-10-14 | Honeywell International Inc. | Delay Compensated Feature Target System |
JP5476955B2 (ja) * | 2009-12-04 | 2014-04-23 | Sony Corp | Image processing device, image processing method, and program |
JP5427577B2 (ja) * | 2009-12-04 | 2014-02-26 | Panasonic Corp | Display control device and display image forming method |
JP5582924B2 (ja) * | 2010-08-26 | 2014-09-03 | Canon Inc | Image processing device, image processing method, and program |
WO2012065031A2 (en) * | 2010-11-11 | 2012-05-18 | Mlp Technology Inc. | Animal data management |
JP5801601B2 (ja) * | 2011-05-10 | 2015-10-28 | Canon Inc | Image recognition device, control method for image recognition device, and program |
JP5664478B2 (ja) * | 2011-06-30 | 2015-02-04 | Fujitsu Ltd | Object recognition support device, program, and method |
JP6316540B2 (ja) * | 2012-04-13 | 2018-04-25 | Samsung Electronics Co., Ltd. | Camera device and control method thereof |
JP5803868B2 (ja) * | 2012-09-20 | 2015-11-04 | Casio Computer Co., Ltd. | Moving image processing device, moving image processing method, and program |
CN103870798B (zh) * | 2012-12-18 | 2017-05-24 | Canon Inc | Object detection method, object detection device, and image pickup device |
WO2014175477A1 (en) * | 2013-04-24 | 2014-10-30 | Lg Electronics Inc. | Apparatus and method for processing image |
CN103716541B (zh) * | 2013-12-23 | 2019-08-02 | Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. | Method and terminal for taking high-resolution photographs |
KR102257620B1 (ko) * | 2014-03-20 | 2021-05-28 | LG Electronics Inc. | Display device and control method thereof |
CN103996292A (zh) * | 2014-05-29 | 2014-08-20 | Nanjing Xinyitian Technology Co., Ltd. | Moving vehicle tracking method based on corner matching |
JP6340957B2 (ja) * | 2014-07-02 | 2018-06-13 | Denso Corp | Object detection device and object detection program |
JP6524619B2 (ja) * | 2014-08-18 | 2019-06-05 | Ricoh Co., Ltd. | Trajectory drawing device, trajectory drawing method, trajectory drawing system, and program |
US10810539B1 (en) * | 2015-03-25 | 2020-10-20 | Amazon Technologies, Inc. | Re-establishing tracking of a user within a materials handling facility |
JP6389803B2 (ja) * | 2015-05-27 | 2018-09-12 | Fujifilm Corp | Image processing device, image processing method, program, and recording medium |
CN107209854A (zh) * | 2015-09-15 | 2017-09-26 | SZ DJI Technology Co., Ltd. | System and method for supporting smooth target following |
EP3368957B1 (en) | 2015-10-30 | 2022-02-09 | SZ DJI Technology Co., Ltd. | Systems and methods for uav path planning and control |
WO2017081839A1 (ja) * | 2015-11-13 | 2017-05-18 | Panasonic Intellectual Property Management Co., Ltd. | Moving object tracking method, moving object tracking device, and program |
CN106791637A (zh) * | 2016-12-15 | 2017-05-31 | Dalian Vincent Software Technology Co., Ltd. | Bird watching and area protection system based on AR augmented reality technology |
JP6988704B2 (ja) * | 2018-06-06 | 2022-01-05 | Toyota Motor Corp | Sensor control device, object search system, object search method, and program |
JP6814178B2 (ja) * | 2018-06-11 | 2021-01-13 | Nippon Telegraph and Telephone Corp | Object detection device, method, and program |
JP6773732B2 (ja) * | 2018-08-03 | 2020-10-21 | Fanuc Corp | Tracing device |
JP6579727B1 (ja) * | 2019-02-04 | 2019-09-25 | Qoncept Inc | Moving object detection device, moving object detection method, and moving object detection program |
JP7192582B2 (ja) * | 2019-03-11 | 2022-12-20 | Omron Corp | Object tracking device and object tracking method |
JP7446760B2 (ja) | 2019-10-07 | 2024-03-11 | Canon Inc | Information processing device, video summarization method, and program |
CN110991465B (zh) * | 2019-11-15 | 2023-05-23 | Taikang Insurance Group Co., Ltd. | Object recognition method and apparatus, computing device, and storage medium |
WO2024040588A1 (zh) | 2022-08-26 | 2024-02-29 | Contemporary Amperex Technology Co., Ltd. | Method and apparatus for detecting target points in an image, and computer storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06231252A (ja) * | 1993-02-04 | 1994-08-19 | Toshiba Corp | Method for tracking moving objects in surveillance images |
JPH06274625A (ja) * | 1993-03-18 | 1994-09-30 | Toshiba Corp | Method for tracking moving objects in surveillance images |
JPH0816987A (ja) | 1994-06-24 | 1996-01-19 | Japan Radio Co Ltd | Navigation device and navigation method |
JP2001307096A (ja) | 2000-04-25 | 2001-11-02 | Fujitsu Ltd | Image recognition device and method |
JP2002083297A (ja) | 2000-06-28 | 2002-03-22 | Matsushita Electric Ind Co Ltd | Object recognition method and object recognition device |
JP2003346158A (ja) * | 2002-05-28 | 2003-12-05 | Toshiba Corp | Face area tracking method using face images |
JP2006155167A (ja) | 2004-11-29 | 2006-06-15 | Secom Co Ltd | Image recognition device |
JP2007135501A (ja) | 2005-11-21 | 2007-06-07 | Atom System:Kk | Nose feature information generation device and nose feature information generation program |
JP2007282119A (ja) | 2006-04-11 | 2007-10-25 | Nikon Corp | Electronic camera and image processing device |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100589561C (zh) * | 2005-12-06 | 2010-02-10 | Nanwang Information Industry Group Co., Ltd. | Suspicious stationary object detection method based on video content analysis |
US8306280B2 (en) | 2006-04-11 | 2012-11-06 | Nikon Corporation | Electronic camera and image processing apparatus |
CN101216885A (zh) * | 2008-01-04 | 2008-07-09 | Sun Yat-sen University | Video-based pedestrian face detection and tracking algorithm |
2008
- 2008-08-11 JP JP2008207259A patent/JP4497236B2/ja active Active

2009
- 2009-08-05 TW TW098126302A patent/TWI438719B/zh active
- 2009-08-06 US US13/058,300 patent/US8774456B2/en active Active
- 2009-08-06 KR KR1020117004393A patent/KR101166994B1/ko active IP Right Grant
- 2009-08-06 WO PCT/JP2009/003767 patent/WO2010018669A1/ja active Application Filing
- 2009-08-06 CN CN200980128887.XA patent/CN102105904B/zh active Active
- 2009-08-06 EP EP15152474.1A patent/EP2892027B1/en active Active
- 2009-08-06 EP EP09806565.9A patent/EP2323103B1/en active Active
Non-Patent Citations (1)
Title |
---|
See also references of EP2323103A4 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101180570B1 (ko) | 2008-09-25 | 2012-09-06 | Rakuten Inc | Foreground area extraction program, foreground area extraction device, and foreground area extraction method |
JP2012073997A (ja) * | 2010-09-01 | 2012-04-12 | Ricoh Co Ltd | Object tracking device, object tracking method, and program therefor |
CN111612822A (zh) * | 2020-05-21 | 2020-09-01 | Guangzhou Haige Communications Group Inc | Object tracking method and apparatus, computer device, and storage medium |
CN111612822B (zh) * | 2020-05-21 | 2024-03-15 | Guangzhou Haige Communications Group Inc | Object tracking method and apparatus, computer device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
EP2892027A1 (en) | 2015-07-08 |
TWI438719B (zh) | 2014-05-21 |
US8774456B2 (en) | 2014-07-08 |
JP2010044516A (ja) | 2010-02-25 |
KR101166994B1 (ko) | 2012-07-24 |
EP2323103B1 (en) | 2017-11-29 |
US20110142286A1 (en) | 2011-06-16 |
EP2892027B1 (en) | 2018-09-19 |
EP2323103A4 (en) | 2014-08-27 |
CN102105904A (zh) | 2011-06-22 |
TW201011696A (en) | 2010-03-16 |
CN102105904B (zh) | 2014-06-25 |
KR20110036942A (ko) | 2011-04-12 |
JP4497236B2 (ja) | 2010-07-07 |
EP2323103A1 (en) | 2011-05-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4497236B2 (ja) | Detection information registration device, electronic device, control method of detection information registration device, control method of electronic device, detection information registration device control program, and control program of electronic device | |
JP5247356B2 (ja) | Information processing device and control method thereof | |
KR101381439B1 (ko) | Face recognition device and face recognition method | |
JP4725377B2 (ja) | Face image registration device, face image registration method, face image registration program, and recording medium | |
KR101001060B1 (ko) | Tracking device, tracking method, tracking device control program, and computer-readable recording medium | |
EP2339536B1 (en) | Image processing system, image processing apparatus, image processing method, and program | |
JP6106921B2 (ja) | Imaging device, imaging method, and imaging program | |
JP6049448B2 (ja) | Subject area tracking device, control method thereof, and program | |
JP5483863B2 (ja) | Information processing device and control method thereof | |
JP5959923B2 (ja) | Detection device, control method and control program therefor, and imaging device and display device | |
JP2010218060A (ja) | Face authentication device, person image search system, face authentication device control program, computer-readable recording medium, and control method of face authentication device | |
JP2005149370A (ja) | Image capturing device, personal authentication device, and image capturing method | |
JP2021071794A (ja) | Main subject determination device, imaging device, main subject determination method, and program | |
JP5539565B2 (ja) | Imaging device and subject tracking method | |
JP4818997B2 (ja) | Face detection device and face detection program | |
JP5278307B2 (ja) | Image processing device and method, and program | |
JP2016081095A (ja) | Subject tracking device, control method thereof, imaging device, display device, and program | |
JP5247419B2 (ja) | Imaging device and subject tracking method | |
JP2017005582A (ja) | Image processing device, image processing method, and program | |
JP5445127B2 (ja) | Image processing device and method, and program | |
WO2023106103A1 (ja) | Image processing device and control method therefor | |
JP2021125847A (ja) | Image processing device, imaging device, image processing method, and program | |
JP2023084461A (ja) | Main subject determination device, imaging device, main subject determination method, and program | |
CN117522707A (zh) | Pose recommendation and transfer method, system, device, and medium supporting multiple inputs | |
JP2012118620A (ja) | Image generation system and image generation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200980128887.X Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09806565 Country of ref document: EP Kind code of ref document: A1 |
|
REEP | Request for entry into the european phase |
Ref document number: 2009806565 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2009806565 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13058300 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20117004393 Country of ref document: KR Kind code of ref document: A |