WO2015186347A1 - Detection system, detection method and program storage medium - Google Patents
Detection system, detection method and program storage medium
- Publication number
- WO2015186347A1 (PCT/JP2015/002775)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- background model
- image frame
- difference
- background
- model
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/215—Motion-based segmentation
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19604—Image analysis to detect motion of the intruder, e.g. by frame subtraction involving reference image or background adaptation with time to compensate for changing conditions, e.g. reference image update on detection of light level change
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20224—Image subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Definitions
- Some aspects according to the present invention relate to a detection system, a detection method, and a program storage medium.
- Here, the moving body is not limited to an object that continues to move among the objects reflected in an image; it also includes the case where the moving body temporarily stops (also referred to as stationary or staying).
- the moving object refers to all objects reflected in a portion other than the portion considered as the background in the image.
- A person or a vehicle, which are typical monitoring targets in video surveillance, do not always keep moving; stationary states such as temporary stops or parking occur. For this reason, it is important in applications such as video surveillance that detection is possible even during a temporary stop.
- a background subtraction method is known as one method for detecting a moving object (see, for example, Non-Patent Document 1 and Non-Patent Document 2).
- the background subtraction method is a method in which an image stored as a background is compared with an image photographed by a camera, and an area having a difference is extracted as a moving object.
- For accurate detection, accurate background extraction at the time of analysis is required. This is because, if the data at the start of observation is simply used as a fixed background, many false detections occur due to background changes accompanying changes in the environment, such as changes in lighting.
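- The basic background subtraction method described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and the threshold value are chosen for the example only: the stored background is compared with the current frame, and pixels whose difference exceeds the threshold are extracted as the moving-object region.

```python
import numpy as np

def background_subtraction(background, frame, threshold=30):
    """Return a binary mask: 1 where the frame differs from the background."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return (diff > threshold).astype(np.uint8)

# Toy example: a flat background and a frame with a bright "object" patch.
background = np.full((4, 4), 100, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200  # a moving object appears here
mask = background_subtraction(background, frame)
```

With a fixed background, any environmental change (lighting, shadows) also crosses the threshold, which is exactly the false-detection problem the sequential update in Non-Patent Document 1 addresses.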
- Non-Patent Document 1 discloses a method of applying the background difference method while sequentially updating the background.
- Patent Document 1 discloses a method of analyzing a motion in a scene using a plurality of background models having different time widths. In this method, a long-term background model analyzed in a long range and a short-term background model analyzed in a short range are created. If the moving object is not detected by the background difference based on the short-term background model and the moving object is detected by the background difference based on the long-term background model for a predetermined number of times, it is assumed that there is a temporary stationary object. An object is detected as a moving object.
- In a method of extracting a difference (also referred to as a "background difference") between a sequentially updated background image and an image to be analyzed, as in Non-Patent Document 1, a moving body such as a person or vehicle that remains stationary for longer than the time width over which the background image is analyzed is determined to be part of the background image, so there is a problem that it cannot be detected.
- If, on the other hand, the analysis time length is extended in order to detect temporarily stationary objects, the method becomes more susceptible to background changes caused by external noise such as lighting fluctuations, and a problem arises in that many false detections occur.
- Although Patent Document 1 is intended to detect a temporarily stationary object, it assumes that the long-term background model can represent the true background at the time the observed image is acquired. For this reason, in an environment where the background changes from moment to moment, such as under lighting fluctuations, the difference between the long-term background model and the true background at the time of observation image acquisition is large, and it is difficult to sufficiently prevent false detection.
- Some aspects of the present invention have been made in view of the above-described problems, and an object thereof is to provide a detection system, a detection method, and a program storage medium that can suitably detect a moving object.
- One detection system according to the present invention includes: input means for receiving an input of a plurality of image frames having different shooting times; calculation means for calculating differences among a first background model generated based on the image frame at a processing time, a second background model in which the influence of the image frame at the processing time is smaller than in the first background model, and a third background model in which the influence of the image frame at the processing time is smaller than in the second background model; and detection means for detecting a first region in the image frame in which the difference between the second background model and the third background model is equal to or greater than a first threshold, and the difference between the first background model and the third background model is equal to or greater than a second threshold times the difference between the first background model and the second background model.
- One detection method according to the present invention causes a computer to perform: a step of receiving an input of a plurality of image frames having different shooting times; a step of calculating differences among a first background model generated based on the image frame at a processing time, a second background model in which the influence of the image frame at the processing time is smaller than in the first background model, and a third background model in which the influence of the image frame at the processing time is smaller than in the second background model; and a step of detecting a first region in the image frame in which the difference between the second background model and the third background model is equal to or greater than a first threshold, and the difference between the first background model and the third background model is equal to or greater than a second threshold times the difference between the first background model and the second background model.
- One program according to the present invention causes a computer to execute: a process of receiving an input of a plurality of image frames having different shooting times; a process of calculating differences among a first background model generated based on the image frame at the processing time, a second background model in which the influence of the image frame at the processing time is smaller than in the first background model, and a third background model in which the influence of the image frame at the processing time is smaller than in the second background model; and a process of detecting a first region in the image frame in which the difference between the second background model and the third background model is equal to or greater than a first threshold, and the difference between the first background model and the third background model is equal to or greater than a second threshold times the difference between the first background model and the second background model.
- In the present description, “part”, “means”, “apparatus”, and “system” do not simply mean physical means; the case where the functions of a “part”, “means”, “apparatus”, or “system” are realized by software is also included. Further, the functions of one “part”, “means”, “apparatus”, or “system” may be realized by two or more physical means or devices, and the functions of two or more “parts”, “means”, “apparatuses”, or “systems” may be realized by a single physical means or device.
- According to the present invention, it is possible to provide a detection system, a detection method, and a program storage medium that can suitably detect a moving object.
- the present embodiment relates to a detection system that detects a moving body that repeatedly moves and temporarily stays, such as a person and a car, from an image captured by an imaging device such as a camera.
- the detection system according to the present embodiment suitably detects a moving object such as a person or a car even when the environment changes from moment to moment, such as illumination fluctuations.
- More specifically, the detection system generates three background models based on the image frames at each time cut out from the video, and uses these background models to detect moving objects.
- These three background models have different time widths (time widths to be analyzed) at which a plurality of image frames that are the basis of the background models are captured.
- these three background models are referred to as a long-term background model, a medium-term background model, and a short-term background model.
- a long-term background model and a short-term background model are compared, and a pixel area having a difference can be detected as a moving object.
- the long-term background model is created from an image frame having a duration that is sufficiently longer than the time when the moving body is assumed to be stationary.
- If the background does not change, the presence or absence of a moving body can be determined simply by detecting the difference between the long-term background model and the short-term background model.
- However, if the background itself changes for reasons such as lighting fluctuations, large differences occur between the short-term background model and the long-term background model not only in the region where the moving object exists but also in the background portion where no moving object exists. Therefore, it is difficult to specify the area of the moving object simply by comparing the long-term background model and the short-term background model.
- Therefore, the detection system according to the present embodiment detects a moving object by also using a medium-term background model, whose analyzed time width lies between those of the short-term background model and the long-term background model.
- Specifically, the detection system according to the present embodiment detects a pixel area that satisfies both of the following conditions as an area where a temporarily stationary moving body exists. (Condition 1) The difference between the medium-term background model and the long-term background model is equal to or greater than a predetermined threshold. (Condition 2) The difference between the short-term background model and the long-term background model is equal to or greater than a predetermined constant multiple of the difference between the short-term background model and the medium-term background model.
- If either condition is not satisfied, the detection system can determine the area as background.
- A stationary moving body has a great influence on the medium-term background model but little influence on the long-term background model. For this reason, a difference equal to or greater than the predetermined threshold occurs in the area of the moving object between the medium-term background model and the long-term background model. That is, Condition 1 above is satisfied.
- In addition, since the stationary moving body strongly influences both the short-term and medium-term background models, the difference between the short-term background model and the medium-term background model is small, and the difference between the short-term background model and the long-term background model is therefore equal to or greater than the predetermined constant multiple of that small difference. That is, Condition 2 is satisfied. Thereby, since both Conditions 1 and 2 are satisfied in the area where the moving body exists, the detection system can detect that area as an area where the moving body exists.
- Immediately after the moving body leaves, the moving body still has a significant influence on the medium-term background model, while its influence on the short-term background model is almost eliminated. For this reason, the difference between the medium-term background model and the short-term background model becomes very large, while the difference between the long-term background model and the short-term background model becomes small. Since Condition 2 is then not satisfied, the detection system can determine the region as background.
- the detection system according to the present embodiment can suitably detect a moving object such as a person or a vehicle that is temporarily stationary even in an environment in which a background change due to external noise such as illumination fluctuation occurs.
- the detection system 100 detects a moving body using three background models, but is not limited to this.
- Alternatively, four or more background models may be generated, and three of them may be selected as the short-term background model, the medium-term background model, and the long-term background model according to the stationary time of the moving object to be detected; the moving body may then be detected according to the differences among them.
- FIG. 2 is a block diagram showing a system configuration of the detection system 100 according to the present embodiment.
- The detection system 100 shown in FIG. 2 includes an image input unit 110, a background model acquisition unit 120, a background model database (DB) 130, a background model update unit 140, an inter-background-model distance calculation unit 150, a moving body detection unit 160, a moving body detection parameter dictionary 170, and a result output unit 180.
- The image input unit 110 receives input of the image frames constituting a video, that is, image frames having different shooting times, from an imaging device such as a camera (not shown). In other words, the image input unit 110 receives an input of the image frame at the processing time.
- The image frame may be a monochrome image or a color image. If the image frame is a monochrome image, it contains one value for each pixel; if it is a color image, it contains three values for each pixel (for example, a color representation such as RGB or YCbCr). The image frame may also have four or more values for each pixel, such as distance information obtained by a TOF (Time of Flight) camera.
- The background model acquisition unit 120 acquires the image frame input from the image input unit 110 and reads the three background models, namely the short-term background model, the medium-term background model, and the long-term background model, stored in the background model DB 130.
- the background model DB 130 stores a plurality of background models including a short-term background model, a medium-term background model, and a long-term background model having different time widths of the image frames taken as analysis sources.
- various types of background models are conceivable.
- For example, an image format similar to that of the image frame input from the image input unit 110 can be used. In this case, if the background model is a monochrome image, one value is included for each pixel; if the background model is a color image, three values are included for each pixel.
- Alternatively, the background model may hold, for each pixel, a distribution function indicating the likelihood of the pixel values of the source image frames. The distribution function may be, for example, a histogram, or a distribution function represented by the sum of a plurality of Gaussians.
- The short-term background model, the medium-term background model, and the long-term background model differ in the time width over which their source image frames were shot, and the time width becomes longer in the order of the short-term background model, the medium-term background model, and the long-term background model.
- As the short-term background model, the image frame input from the image input unit 110 may be adopted as it is. In that case, the background model DB 130 need not manage the short-term background model.
- The background model update unit 140 generates the short-term background model, the medium-term background model, and the long-term background model from the image frame at the processing time (the image frame at the latest time) acquired by the background model acquisition unit 120 and the background models stored in the background model DB 130. The generated background models are stored in the background model DB 130.
- the short-term background model, the medium-term background model, and the long-term background model have different time widths of the original image frames.
- The short-term background model is generated from image frames captured in the shortest time width from the processing time, the medium-term background model from image frames captured in a longer time width, and the long-term background model from image frames captured in the longest time width.
- As a method of generating the background model, it is conceivable, for example, to take the average value of each pixel over the image frames within the time width determined for each background model, or to take the most frequent pixel value. When the distribution function for each pixel is used as described above, it is conceivable to generate a distribution function of the pixel values of those image frames.
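- The two generation methods mentioned above (a per-pixel average over the time window, and a per-pixel histogram serving as a distribution function) can be sketched as follows. The window contents, bin count, and value range are illustrative assumptions, not values from the patent.

```python
import numpy as np

def mean_background(frames):
    """Image-format background model: per-pixel average over the window."""
    return np.mean(np.stack(frames), axis=0)

def histogram_background(frames, bins=8, value_range=(0, 256)):
    """Distribution-function background model: a per-pixel histogram of the
    pixel values observed in the window, normalized to sum to 1."""
    stack = np.stack(frames)  # shape (T, H, W)
    t, h, w = stack.shape
    hist = np.zeros((h, w, bins))
    for y in range(h):
        for x in range(w):
            counts, _ = np.histogram(stack[:, y, x], bins=bins, range=value_range)
            hist[y, x] = counts / t
    return hist

# Three frames of a slowly varying background.
frames = [np.full((2, 2), v, dtype=np.uint8) for v in (96, 100, 104)]
bg_mean = mean_background(frames)
bg_hist = histogram_background(frames)
```

In practice each model would use its own window length (short, medium, long) over the same frame stream.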
- the background model update unit 140 may change the update method so that the influence (weight) of the input image near the processing time given to the medium-term background model is increased and the medium-term background model is closer to the short-term background model.
- the medium-term background model can be quickly brought close to the background state in which there is no moving object, so that erroneous detection can be suppressed.
- the short-term background model, the medium-term background model, and the long-term background model are described as having different time widths of the original image frames, but the present invention is not limited to this.
- the short-term background model, the medium-term background model, and the long-term background model are background models that have different magnitudes of influence from the image frame at the processing time (newest time). That is, the short-term background model has the greatest influence of the image frame at the processing time, and the long-term background model has the least influence of the image frame at the processing time.
- For example, the concept of an update coefficient may be introduced, and the update coefficient used when updating the background model with the image frame input from the image input unit 110 may be changed among the short-term background model, the medium-term background model, and the long-term background model.
- For example, let the background model be I_bg, the image frame input from the image input unit 110 be I, and the update coefficient be a, where a is a constant not less than 0 and not more than 1 that takes different values for the short-term background model, the medium-term background model, and the long-term background model. Let the constants of the short-term background model, the medium-term background model, and the long-term background model be a1, a2, and a3, respectively.
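- One common form of such a coefficient-based update is a running average, sketched below. The formula I_bg ← (1 − a)·I_bg + a·I and the concrete values of a1, a2, a3 are assumptions for illustration (the extracted text omits the original formula); a larger coefficient gives the latest frame more influence, so a1 > a2 > a3 would correspond to the short-term, medium-term, and long-term models.

```python
import numpy as np

def update_background(i_bg, i, a):
    """One update step, assumed form: I_bg <- (1 - a) * I_bg + a * I."""
    return (1.0 - a) * i_bg + a * i

# Illustrative coefficients: the short-term model reacts fastest,
# the long-term model slowest.
a1, a2, a3 = 0.5, 0.1, 0.01

bg_short = bg_mid = bg_long = np.zeros((2, 2))
frame = np.full((2, 2), 100.0)
bg_short = update_background(bg_short, frame, a1)
bg_mid = update_background(bg_mid, frame, a2)
bg_long = update_background(bg_long, frame, a3)
```

After one step the three models already differ in how strongly they reflect the new frame, which is exactly the "magnitude of influence" distinction described above.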
- The inter-background model distance calculation unit 150 calculates, for each pixel, a distance value that numerically indicates the difference between the three background models acquired by the background model acquisition unit 120. Specifically, for each pixel, it calculates the distance between the short-term background model and the medium-term background model, the distance between the short-term background model and the long-term background model, and the distance between the medium-term background model and the long-term background model.
- When the background model is in an image format, the inter-background model distance calculation unit 150 can, for example, calculate the difference value or difference vector of the pixel values of each pixel and then take its absolute value or magnitude as the distance.
- When the background model has a plurality of values for each pixel, for example in a color image format such as RGB, YCbCr, or HSV, it is also conceivable to calculate a difference value for each value and take the sum of the absolute values of the difference values as the distance of each pixel.
- Alternatively, the pixel values of neighboring partial images extracted around corresponding pixels may be regarded as two vectors, and the vector distance between them or their normalized correlation r may be calculated. In this case, for example, when calculating a distance from a neighboring 3 × 3 image in a monochrome image format background model, the distance between 9-dimensional vectors is calculated. When calculating the distance from a neighboring 5 × 5 image in an RGB color image, the distance between 75-dimensional (5 × 5 × 3) vectors is calculated.
- Since the normalized correlation r is larger the more similar the two vectors are, 1 - r can be used as a value indicating distance in order to convert it to a distance scale.
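- The neighborhood-based comparison described above (a 3 × 3 monochrome patch treated as a 9-dimensional vector, with 1 − r as the distance) can be sketched as follows; the function names are illustrative, and mean-centering before correlation is one common convention, assumed here.

```python
import numpy as np

def patch_vector(img, y, x, k=1):
    """Flatten the (2k+1) x (2k+1) neighborhood around (y, x) into a vector."""
    return img[y - k:y + k + 1, x - k:x + k + 1].astype(np.float64).ravel()

def correlation_distance(v1, v2):
    """1 - r, where r is the normalized correlation of the two vectors."""
    v1 = v1 - v1.mean()
    v2 = v2 - v2.mean()
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    if denom == 0:
        return 0.0  # both patches are flat: treat them as identical
    r = float(np.dot(v1, v2) / denom)
    return 1.0 - r

img_a = np.arange(25, dtype=np.uint8).reshape(5, 5)
img_b = img_a * 2  # same pattern, different gain (e.g., a lighting change)
v1 = patch_vector(img_a, 2, 2)  # 9-dimensional vector
v2 = patch_vector(img_b, 2, 2)
d = correlation_distance(v1, v2)
```

Because the correlation is normalized, a uniform gain change between the two patches yields a distance near 0, which is why this measure is more robust to lighting fluctuations than a raw pixel difference.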
- the distance may be calculated after pre-processing the neighboring partial image with an edge enhancement filter or the like.
- When the background model is a distribution function, the inter-background model distance calculation unit 150 can calculate the distance between the background models using a histogram distance calculation method, such as the area of the common part of the two histograms or the Bhattacharyya distance.
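- For distribution-function models, the two histogram measures just mentioned can be sketched as below. Converting the common-area (histogram intersection) measure to a distance by subtracting it from 1, and defining the Bhattacharyya distance as the negative log of the Bhattacharyya coefficient, are standard conventions assumed here.

```python
import numpy as np

def intersection_distance(h1, h2):
    """1 minus the area of the common part of two normalized histograms."""
    return 1.0 - np.minimum(h1, h2).sum()

def bhattacharyya_distance(h1, h2):
    """-ln of the Bhattacharyya coefficient of two normalized histograms."""
    bc = np.sum(np.sqrt(h1 * h2))
    return -np.log(max(bc, 1e-12))  # guard against log(0)

h1 = np.array([0.5, 0.5, 0.0, 0.0])  # per-pixel value distribution, model A
h2 = np.array([0.0, 0.5, 0.5, 0.0])  # per-pixel value distribution, model B
d_int = intersection_distance(h1, h2)
d_bh = bhattacharyya_distance(h1, h2)
```

Both functions return 0 for identical histograms and grow as the distributions diverge, so either can serve as the per-pixel distance between two distribution-function background models.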
- The inter-background model distance calculation unit 150 has been described as calculating the distance for each pixel, but the present invention is not limited to this.
- the inter-background model distance calculation unit 150 may use a technique of dividing the image into several region units, for example, meshes, and calculating a distance for each mesh unit. The distance may take a negative value.
- Note that the short-term background model, the medium-term background model, and the long-term background model may be in different formats. For example, the short-term background model may be in an image format while the medium-term background model is a distribution function for each pixel.
- As a method for calculating the distance in this case, for example, a normal distribution histogram having a predetermined standard deviation may be generated around the pixel value held in the short-term background model. This histogram is then regarded as the distribution function of the short-term background model and compared with the histogram of the medium-term background model to calculate the distance.
- Conversely, the distance may also be calculated by taking the average value of the distribution function of each pixel in the medium-term background model and comparing the image-format model generated as the set of those average values with the short-term background model.
- The moving object detection unit 160 determines, using the information on the distances between the background models and the parameters stored in the moving object detection parameter dictionary 170, whether or not each pixel is included in the area where the moving object is captured. More specifically, the moving object detection unit 160 determines that the pixel to be processed is included in the area where the moving object is captured when the following two conditions are satisfied. (Condition 1) The distance between the medium-term background model and the long-term background model is equal to or greater than a predetermined threshold. (Condition 2) The distance between the short-term background model and the long-term background model is equal to or greater than a predetermined constant multiple of the distance between the short-term background model and the medium-term background model.
- the “predetermined threshold value” of condition 1 and the “predetermined constant multiple” of condition 2 are parameters included in the moving object detection parameter dictionary 170, respectively.
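- The per-pixel decision defined by Condition 1 and Condition 2 can be sketched as follows. The threshold and multiplier values stand in for the parameters of the moving object detection parameter dictionary 170 and are illustrative only, as are the example distance values.

```python
import numpy as np

def detect_stationary(d_short_mid, d_short_long, d_mid_long,
                      threshold=20.0, multiple=2.0):
    """Per-pixel mask of temporarily stationary moving bodies.

    Condition 1: distance(medium, long) >= threshold.
    Condition 2: distance(short, long) >= multiple * distance(short, medium).
    """
    cond1 = d_mid_long >= threshold
    cond2 = d_short_long >= multiple * d_short_mid
    return (cond1 & cond2).astype(np.uint8)

# Pixel 0: a body has stopped (large medium-long gap, short close to medium).
# Pixel 1: plain background (all distances small).
d_sm = np.array([2.0, 1.0])   # short vs medium
d_sl = np.array([40.0, 1.5])  # short vs long
d_ml = np.array([38.0, 0.5])  # medium vs long
mask = detect_stationary(d_sm, d_sl, d_ml)
```

Only the first pixel satisfies both conditions, so only it is flagged as a temporarily stationary moving body.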
- As described above, the long-term background model is generated from image frames over a duration sufficiently longer than the time for which the moving body is assumed to be stationary. If there is no change in the background area where there is no moving object, the presence or absence of the moving object can be determined simply by comparing the long-term background model with the short-term background model (which may be the image frame itself at the processing time). However, if the background also changes from moment to moment due to external noise such as lighting fluctuations, simply comparing the short-term background model with the long-term background model produces a large distance value not only in the area where the moving object exists but also in the background region, so it is difficult to detect the area of the moving body.
- Among the three models, the short-term background model is closest to the background at the processing time, followed by the medium-term background model; the long-term background model is farthest from the background at the processing time. Hence, the distance between the short-term background model and the medium-term background model is smaller than the distance between the short-term background model and the long-term background model multiplied by a constant of 1 or less.
- The constant varies depending on the time width analyzed for each background model. If the time width of the medium-term background model is close to that of the long-term background model, the constant is close to 1; if the difference between the time width of the medium-term background model and that of the long-term background model is large, the value is smaller than 1.
- The detection method using Condition 1 and Condition 2 by the moving body detection unit 160 utilizes these characteristics. If the distance between the medium-term background model and the long-term background model is small (below the threshold), the moving body detection unit 160 can determine the region as a background region according to Condition 1.
- On the other hand, a stationary moving body has a large effect on the medium-term background model but does not significantly affect the long-term background model, so the distance between the medium-term background model and the long-term background model becomes equal to or greater than the predetermined threshold; that is, Condition 1 is satisfied. Furthermore, since the moving object also greatly affects the background in the medium-term background model, the distance between the short-term background model and the medium-term background model is small, and the distance between the short-term background model and the long-term background model is therefore equal to or greater than the predetermined constant multiple of that distance; that is, Condition 2 is also satisfied. Therefore, the moving body detection unit 160 can appropriately extract the region where the moving body exists.
- Note that the above conditions target a temporarily stationary moving body; a moving body that is in motion in the image frame at the processing time is not detected by them alone. Therefore, a threshold value for detecting a moving body in motion may additionally be prepared in the moving body detection parameter dictionary 170, and the moving object detection unit 160 may use it to also detect a region where a moving body in motion exists. Thereby, it is possible to always detect a moving body, such as a person or a vehicle, that repeatedly moves and temporarily stops.
- the result output unit 180 outputs information on the moving object obtained by the moving object detection unit 160.
- For example, the result can be output as a binary image in which the moving body region is set to 1 and the other regions are set to 0.
- Further, with the detection method of the detection system according to the present embodiment, a moving body detected because the distance between the short-term background model and the medium-term background model is large may be output as a moving body in motion, and a moving body detected because the distance between the medium-term background model and the long-term background model is large may be output as a moving body that is temporarily staying (stationary).
- In this case, for example, it is conceivable that the pixel value of a pixel detected as a moving body in motion is set to 1, the pixel value of a pixel detected as a temporarily stationary moving body is set to 2, and the other pixel values are set to 0, and the result is output as ternary values.
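- The ternary output just described can be composed from the two detection masks as sketched below; the mask names, and the choice of letting the stationary label take precedence where both masks fire, are illustrative assumptions.

```python
import numpy as np

def ternary_result(moving_mask, stationary_mask):
    """0 = background, 1 = moving body in motion, 2 = temporarily stationary."""
    out = np.zeros_like(moving_mask, dtype=np.uint8)
    out[moving_mask == 1] = 1
    out[stationary_mask == 1] = 2  # stationary label takes precedence here
    return out

moving = np.array([[1, 0], [0, 0]], dtype=np.uint8)
stationary = np.array([[0, 1], [0, 0]], dtype=np.uint8)
result = ternary_result(moving, stationary)
```

A fourth value (e.g., 3 for "unknown") could be assigned where both masks fire, matching the alternative output mentioned below.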
- Note that both the distance between the short-term background model and the medium-term background model and the distance between the medium-term background model and the long-term background model may be large at the same time. In that case, the pixel value 1 may be output as a moving body in motion, or the pixel value 3 may be output to indicate that the state is unknown.
- FIG. 3 is a flowchart showing a process flow of the detection system 100 according to the present embodiment.
- Each processing step described later can be executed in any order or in parallel as long as no contradiction arises in the processing content, and other steps may be added between the processing steps. Further, a step described as a single step for convenience may be executed as a plurality of divided steps, and steps described as divided for convenience may be executed as a single step.
- the image input unit 110 receives an input of a new image frame (image frame at the processing time) (S301).
- the background model acquisition unit 120 reads the short-term background model, the medium-term background model, and the long-term background model stored in the background model DB 130 (S303).
- the background model distance calculation unit 150 calculates the distance between the short-term background model and the medium-term background model, the distance between the medium-term background model and the long-term background model, and the distance between the short-term background model and the long-term background model (S305).
- the moving object detection unit 160 determines, for each pixel, whether the pixel belongs to an image region in which a moving object appears, by checking whether the inter-model distances calculated by the background model distance calculation unit 150 satisfy Condition 1 and Condition 2 (S307).
- the result output unit 180 outputs the detection result (S309).
- the background model update unit 140 updates each background model using the image frame input from the image input unit 110, and stores the updated background model in the background model DB 130 (S311).
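Steps S301 to S311 above can be sketched as a single per-frame pass. The callables passed in below (`distance`, `detect`, `update`) are hypothetical stand-ins for the units described in the text, and the single-pixel toy values are for illustration only:

```python
def process_frame(frame, models, rates, distance, detect, update):
    """One pass of the loop: read the three models (S303), compute the
    pairwise distances (S305), detect (S307), then update each model
    with the new frame at its own rate (S311)."""
    short, mid, long_ = models
    d_sm = distance(short, mid)
    d_ml = distance(mid, long_)
    d_sl = distance(short, long_)
    result = detect(d_sm, d_ml, d_sl)
    new_models = tuple(update(m, frame, r) for m, r in zip(models, rates))
    return result, new_models

# Toy single-pixel stand-ins: absolute difference, pass-through detection,
# and an exponential-moving-average update (an assumed update rule).
dist = lambda a, b: abs(a - b)
passthrough = lambda d_sm, d_ml, d_sl: (d_sm, d_ml, d_sl)
ema = lambda m, f, r: (1.0 - r) * m + r * f
```

The short-, medium-, and long-term models differ only in their update rates here (e.g. 0.5, 0.1, 0.01), which is one way to realize the differing "influence of the image frame at the processing time".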
- the detection system 100 includes a processor 401, a memory 403, a storage device 405, an input interface (I/F) unit 407, a data I/F unit 409, a communication I/F unit 411, and a display device 413.
- the processor 401 controls various processes of the detection system 100 by executing a program stored in the memory 403.
- the processes of the image input unit 110, the background model acquisition unit 120, the background model update unit 140, the background model distance calculation unit 150, the moving object detection unit 160, and the result output unit 180 illustrated in FIG. 2 can be realized as programs that are temporarily stored in the memory 403 and then run mainly on the processor 401.
- the memory 403 is a storage medium such as a RAM (Random Access Memory).
- the memory 403 temporarily stores a program code of a program executed by the processor 401 and data necessary for executing the program.
- the storage device 405 is a non-volatile storage medium such as a hard disk or flash memory.
- the storage device 405 can store the operating system, various programs for realizing the image input unit 110, the background model acquisition unit 120, the background model update unit 140, the background model distance calculation unit 150, the moving object detection unit 160, and the result output unit 180, and various data including the background model DB 130 and the moving object detection parameter dictionary 170. Programs and data stored in the storage device 405 are loaded into the memory 403 as necessary and referred to by the processor 401.
- the input I / F unit 407 is a device for receiving input from the user. Specific examples of the input I / F unit 407 include a keyboard, a mouse, and a touch panel. The input I / F unit 407 may be connected to the detection system 100 via an interface such as USB (Universal Serial Bus).
- the data I / F unit 409 is a device for inputting data from the outside of the detection system 100.
- Specific examples of the data I / F unit 409 include a drive device for reading data stored in various storage media.
- the data I / F unit 409 may be provided outside the detection system 100. In this case, the data I / F unit 409 is connected to the detection system 100 via an interface such as USB.
- the communication I/F unit 411 is a device for wired or wireless data communication with devices external to the detection system 100, for example an imaging device (a video camera, surveillance camera, or digital camera).
- the communication I / F unit 411 may be provided outside the detection system 100. In that case, the communication I / F unit 411 is connected to the detection system 100 via an interface such as a USB.
- the display device 413 is a device for displaying, for example, the detection result of the moving body output by the result output unit 180.
- Specific examples of the display device 413 include a liquid crystal display and an organic EL (Electro-Luminescence) display.
- the display device 413 may be provided outside the detection system 100. In that case, the display device 413 is connected to the detection system 100 via a display cable or the like, for example.
- the detection system 100 detects the respective differences between the short-term background model, the medium-term background model, and the long-term background model and, by using them, can suitably detect moving objects that temporarily stop.
- FIG. 5 is a block diagram illustrating a functional configuration of the detection system 500.
- the detection system 500 includes an input unit 510, a calculation unit 520, and a detection unit 530.
- the input unit 510 receives, for example, input of a plurality of image frames that make up a video and have different shooting times.
- the calculation unit 520 calculates differences between a first background model generated based on the image frame at the processing time, a second background model in which the influence of the image frame at the processing time is smaller than in the first background model, and a third background model in which the influence of the image frame at the processing time is smaller than in the second background model.
- the detection unit 530 detects a first region in the image frame in which the difference between the second background model and the third background model is equal to or greater than a first threshold and the difference between the first background model and the third background model is equal to or greater than a second threshold times the difference between the first background model and the second background model.
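A minimal sketch of the two region tests in this generic formulation (the "first region" condition of the detection unit 530, plus the "second region" test described in the appendices). Function and threshold names are illustrative assumptions:

```python
def classify_region(diff12, diff13, diff23, t1, t2, t3):
    """diffNM is the difference between the N-th and M-th background models.

    First region:  diff23 >= t1 (first threshold) and
                   diff13 >= t2 (second threshold) * diff12.
    Second region: diff12 >= t3 (third threshold).
    Returns "first", "second", or None.
    """
    if diff23 >= t1 and diff13 >= t2 * diff12:
        return "first"
    if diff12 >= t3:
        return "second"
    return None
```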
- A detection system comprising: input means for receiving input of a plurality of image frames with different shooting times; calculation means for calculating differences between a first background model generated based on the image frame at the processing time, a second background model in which the influence of the image frame at the processing time is smaller than in the first background model, and a third background model in which the influence of the image frame at the processing time is smaller than in the second background model; and detection means for detecting a first region in the image frame in which the difference between the second background model and the third background model is equal to or greater than a first threshold and the difference between the first background model and the third background model is equal to or greater than a second threshold times the difference between the first background model and the second background model.
- (Appendix 9) The detection method according to Appendix 7 or 8, wherein the detection means detects a second region in the image frame in which the difference between the first background model and the second background model is equal to or greater than a third threshold.
- (Appendix 10) The detection method according to any one of Appendices 7 to 9, further comprising output means for outputting the first region and the second region in a distinguished manner.
- (Appendix 17) The program according to any one of Appendices 13 to 16, wherein the influence of the image frame at the processing time in the second background model is variable.
- 100: Detection system, 110: Image input unit, 120: Background model acquisition unit, 130: Background model database, 140: Background model update unit, 150: Background model distance calculation unit, 160: Moving object detection unit, 170: Moving object detection parameter dictionary, 180: Result output unit, 401: Processor, 403: Memory, 405: Storage device, 407: Input interface unit, 409: Data interface unit, 411: Communication interface unit, 413: Display device, 500: Detection system, 510: Input unit, 520: Calculation unit, 530: Detection unit
Abstract
Description
(1.1 Overview)
FIGS. 1 to 4 are diagrams for describing the first embodiment, which is explained below with reference to these drawings.
This embodiment relates to a detection system that detects, from video captured by an imaging device such as a camera, moving objects such as people and vehicles that repeatedly move and temporarily stay. In particular, the detection system according to this embodiment suitably detects moving objects such as people and vehicles even when the environment changes from moment to moment, for example under illumination variations.
(Condition 1) The difference between the medium-term background model and the long-term background model is equal to or greater than a predetermined threshold.
(Condition 2) The difference between the short-term background model and the long-term background model is equal to or greater than a predetermined constant multiple of the difference between the short-term background model and the medium-term background model.
The system configuration of the detection system according to this embodiment is described below with reference to FIG. 2. FIG. 2 is a block diagram showing the system configuration of the detection system 100 according to this embodiment. The detection system 100 shown in FIG. 2 includes an image input unit 110, a background model acquisition unit 120, a background model database (DB) 130, a background model update unit 140, a background model distance calculation unit 150, a moving object detection unit 160, a moving object detection parameter dictionary 170, and a result output unit 180.
The image input unit 110 sequentially receives the image frames constituting a video (that is, image frames with different shooting times) from an imaging device such as a camera (not shown). In other words, the image input unit 110 receives the image frame at the processing time. An image frame may be a monochrome image or a color image. For a monochrome image, each pixel of the frame holds one value; for a color image, each pixel holds three values (a color representation such as RGB or YCbCr). Alternatively, an image frame may hold four or more values per pixel, for example distance information obtained with a TOF (Time of Flight) camera.
The background model acquisition unit 120 reads the image frame input from the image input unit 110 and the three background models stored in the background model DB 130: the short-term background model, the medium-term background model, and the long-term background model.
The background model DB 130 stores a plurality of background models, including the short-term, medium-term, and long-term background models, which differ in the time span of the shooting times of the source image frames.
Various formats are conceivable for each background model; for example, the same image format as the image frames input from the image input unit 110 can be used. In this case, a monochrome background model holds one value per pixel, and a color background model holds three values per pixel.
The background model update unit 140 generates, from the image frame at the processing time acquired by the background model acquisition unit 120 and the background models stored in the background model DB 130, short-term, medium-term, and long-term background models that take the image frame at the processing time (the most recent image frame) into account. The generated background models are stored in the background model DB 130.
In this case, for example, letting Ibg be the background model and I be the image frame input from the image input unit 110,
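The concrete update formula is elided at this point in the source text. As an illustration only (an assumption, not the formula the patent states), a common background-update rule is the per-pixel exponential moving average Ibg' = (1 - alpha) * Ibg + alpha * I, with a larger alpha for the short-term model than for the medium- and long-term models:

```python
def update_background(ibg, frame, alpha):
    """Blend the input frame I into the background model Ibg per pixel:
    Ibg' = (1 - alpha) * Ibg + alpha * I.
    alpha near 1 tracks the newest frame (short-term model);
    alpha near 0 changes only slowly (long-term model)."""
    return [(1.0 - alpha) * b + alpha * i for b, i in zip(ibg, frame)]
```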
The background model distance calculation unit 150 calculates, for each pixel, distance values that numerically express the differences between the three background models acquired by the background model acquisition unit 120. Specifically, for each pixel, the background model distance calculation unit 150 calculates the distance between the short-term background model and the medium-term background model, the distance between the short-term background model and the long-term background model, and the distance between the medium-term background model and the long-term background model.
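For the monochrome case (one value per pixel), the per-pixel distance could be as simple as an absolute difference. This is a sketch; the patent does not fix a specific distance measure here:

```python
def model_distance(model_a, model_b):
    """Per-pixel absolute difference between two background models,
    each represented as a flat list with one value per pixel."""
    return [abs(a - b) for a, b in zip(model_a, model_b)]
```

For color models, the same idea would be applied per channel, or a vector norm taken over the three channel values.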
The moving object detection unit 160 determines, for each pixel, whether the pixel is included in a region in which a moving object appears, using the inter-model distance information and the parameters stored in the moving object detection parameter dictionary 170. More specifically, the moving object detection unit 160 determines that the pixel under processing is included in a region in which a moving object appears when the following two conditions are satisfied.
(Condition 1) The distance between the medium-term background model and the long-term background model is equal to or greater than a predetermined threshold.
(Condition 2) The distance between the short-term background model and the long-term background model is equal to or greater than a predetermined constant multiple of the distance between the short-term background model and the medium-term background model.
Here, the "predetermined threshold" of Condition 1 and the "predetermined constant multiple" of Condition 2 are the parameters contained in the moving object detection parameter dictionary 170.
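The two conditions, with the two parameters drawn from the dictionary, can be sketched as follows. The key names and example values are assumptions for illustration:

```python
# Hypothetical contents of the moving object detection parameter dictionary 170.
PARAMS = {"condition1_threshold": 10.0, "condition2_multiple": 2.0}

def is_moving_object_pixel(d_short_mid, d_short_long, d_mid_long, params=PARAMS):
    """True when both Condition 1 and Condition 2 hold for this pixel."""
    cond1 = d_mid_long >= params["condition1_threshold"]
    cond2 = d_short_long >= params["condition2_multiple"] * d_short_mid
    return cond1 and cond2
```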
The result output unit 180 outputs the information on the moving object obtained by the moving object detection unit 160. Various output methods are conceivable; for example, the result can be output as a binary image in which the moving object region is set to 1 and the other regions are set to 0. Alternatively, connected components may be generated by applying a labeling process to the binary image, and a circumscribed rectangle may be output for each connected component.
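The labeling-plus-circumscribed-rectangle output mentioned above can be sketched with a breadth-first search over 4-connected components of the binary mask. This is a minimal illustration, not the patent's implementation:

```python
from collections import deque

def bounding_boxes(mask):
    """Label 4-connected components of a binary mask (list of lists of 0/1)
    and return one circumscribed rectangle (top, left, bottom, right) per
    component, in scan order."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                top, left, bottom, right = y, x, y, x
                queue = deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    top, left = min(top, cy), min(left, cx)
                    bottom, right = max(bottom, cy), max(right, cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes
```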
The process flow of the detection system 100 is described below with reference to FIG. 3. FIG. 3 is a flowchart showing the process flow of the detection system 100 according to this embodiment.
An example hardware configuration for realizing the detection system 100 described above with a computer is described below with reference to FIG. 4. Note that the functions of the detection system 100 can also be realized by a plurality of computers.
As described above, the detection system 100 according to this embodiment detects the respective differences between the short-term, medium-term, and long-term background models and, by using them, can suitably detect in particular moving objects that are temporarily stationary.
The second embodiment is described below with reference to FIG. 5. FIG. 5 is a block diagram showing the functional configuration of a detection system 500. As shown in FIG. 5, the detection system 500 includes an input unit 510, a calculation unit 520, and a detection unit 530.
The input unit 510 receives input of a plurality of image frames with different shooting times, for example the frames constituting a video.
By being implemented in this way, the detection system 500 according to this embodiment can suitably detect moving objects.
The configurations of the embodiments described above may be combined, or some of their components may be replaced. Furthermore, the configuration of the present invention is not limited to the embodiments described above, and various modifications may be made without departing from the gist of the present invention.
(Appendix 1) A detection system comprising: input means for receiving input of a plurality of image frames with different shooting times; calculation means for calculating differences between a first background model generated based on the image frame at the processing time, a second background model in which the influence of the image frame at the processing time is smaller than in the first background model, and a third background model in which the influence of the image frame at the processing time is smaller than in the second background model; and detection means for detecting a first region in the image frame in which the difference between the second background model and the third background model is equal to or greater than a first threshold and the difference between the first background model and the third background model is equal to or greater than a second threshold times the difference between the first background model and the second background model.
(Appendix 2) The detection system according to Appendix 1, wherein the first background model, the second background model, and the third background model differ in the time span of the shooting times of the image frames taken into account.
(Appendix 3) The detection system according to Appendix 1 or 2, wherein the detection means detects a second region in the image frame in which the difference between the first background model and the second background model is equal to or greater than a third threshold.
(Appendix 4) The detection system according to any one of Appendices 1 to 3, further comprising output means for outputting the first region and the second region in a distinguished manner.
(Appendix 5) The detection system according to any one of Appendices 1 to 4, wherein the influence of the image frame at the processing time on the second background model is variable.
(Appendix 6) The detection system according to any one of Appendices 1 to 5, wherein the influence that the first region in the image frame at the processing time exerts on the second background model is smaller than the influence that the other regions exert on the second background model.
(Appendix 7) A detection method in which a computer performs: a step of receiving input of a plurality of image frames with different shooting times; a step of calculating differences between a first background model generated based on the image frame at the processing time, a second background model in which the influence of the image frame at the processing time is smaller than in the first background model, and a third background model in which the influence of the image frame at the processing time is smaller than in the second background model; and a step of detecting a first region in the image frame in which the difference between the second background model and the third background model is equal to or greater than a first threshold and the difference between the first background model and the third background model is equal to or greater than a second threshold times the difference between the first background model and the second background model.
(Appendix 8) The detection method according to Appendix 7, wherein the first background model, the second background model, and the third background model differ in the time span of the shooting times of the image frames taken into account.
(Appendix 9) The detection method according to Appendix 7 or 8, wherein the detection means detects a second region in the image frame in which the difference between the first background model and the second background model is equal to or greater than a third threshold.
(Appendix 10) The detection method according to any one of Appendices 7 to 9, further comprising output means for outputting the first region and the second region in a distinguished manner.
(Appendix 11) The detection method according to any one of Appendices 7 to 10, wherein the influence of the image frame at the processing time on the second background model is variable.
(Appendix 12) The detection method according to Appendix 11, wherein the influence that the first region in the image frame at the processing time exerts on the second background model is smaller than the influence that the other regions exert on the second background model.
(Appendix 13) A program causing a computer to execute: a process of receiving input of a plurality of image frames with different shooting times; a process of calculating differences between a first background model generated based on the image frame at the processing time, a second background model in which the influence of the image frame at the processing time is smaller than in the first background model, and a third background model in which the influence of the image frame at the processing time is smaller than in the second background model; and a process of detecting a first region in the image frame in which the difference between the second background model and the third background model is equal to or greater than a first threshold and the difference between the first background model and the third background model is equal to or greater than a second threshold times the difference between the first background model and the second background model.
(Appendix 14) The program according to Appendix 13, wherein the first background model, the second background model, and the third background model differ in the time span of the shooting times of the image frames taken into account.
(Appendix 15) The program according to Appendix 13 or 14, wherein the detection means detects a second region in the image frame in which the difference between the first background model and the second background model is equal to or greater than a third threshold.
(Appendix 16) The program according to any one of Appendices 13 to 15, further comprising output means for outputting the first region and the second region in a distinguished manner.
(Appendix 17) The program according to any one of Appendices 13 to 16, wherein the influence of the image frame at the processing time on the second background model is variable.
(Appendix 18) The program according to Appendix 17, wherein the influence that the first region in the image frame at the processing time exerts on the second background model is smaller than the influence that the other regions exert on the second background model.
The present invention has been described above using the embodiments described above as exemplary examples. The present invention is, however, not limited to these embodiments; various aspects that those skilled in the art can understand may be applied to the present invention within its scope.
This application claims priority based on Japanese Patent Application No. 2014-115207 filed on June 3, 2014, the entire disclosure of which is incorporated herein.
110: Image input unit
120: Background model acquisition unit
130: Background model database
140: Background model update unit
150: Background model distance calculation unit
160: Moving object detection unit
170: Moving object detection parameter dictionary
180: Result output unit
401: Processor
403: Memory
405: Storage device
407: Input interface unit
409: Data interface unit
411: Communication interface unit
413: Display device
500: Detection system
510: Input unit
520: Calculation unit
530: Detection unit
Claims (8)
- A detection system comprising: input means for receiving input of a plurality of image frames with different shooting times; calculation means for calculating differences between a first background model generated based on the image frame at the processing time, a second background model in which the influence of the image frame at the processing time is smaller than in the first background model, and a third background model in which the influence of the image frame at the processing time is smaller than in the second background model; and detection means for detecting a first region in the image frame in which the difference between the second background model and the third background model is equal to or greater than a first threshold and the difference between the first background model and the third background model is equal to or greater than a second threshold times the difference between the first background model and the second background model.
- The detection system according to claim 1, wherein the first background model, the second background model, and the third background model differ in the time span of the shooting times of the image frames taken into account.
- The detection system according to claim 1 or 2, wherein the detection means detects a second region in the image frame in which the difference between the first background model and the second background model is equal to or greater than a third threshold.
- The detection system according to any one of claims 1 to 3, further comprising output means for outputting the first region and the second region in a distinguished manner.
- The detection system according to any one of claims 1 to 4, wherein the influence of the image frame at the processing time on the second background model is variable.
- The detection system according to claim 5, wherein the influence that the first region in the image frame at the processing time exerts on the second background model is smaller than the influence that the other regions exert on the second background model.
- A detection method in which a computer receives input of a plurality of image frames with different shooting times; calculates differences between a first background model generated based on the image frame at the processing time, a second background model in which the influence of the image frame at the processing time is smaller than in the first background model, and a third background model in which the influence of the image frame at the processing time is smaller than in the second background model; and detects a first region in the image frame in which the difference between the second background model and the third background model is equal to or greater than a first threshold and the difference between the first background model and the third background model is equal to or greater than a second threshold times the difference between the first background model and the second background model.
- A program storage medium storing a program that causes a computer to execute: a process of receiving input of a plurality of image frames with different shooting times; a process of calculating differences between a first background model generated based on the image frame at the processing time, a second background model in which the influence of the image frame at the processing time is smaller than in the first background model, and a third background model in which the influence of the image frame at the processing time is smaller than in the second background model; and a process of detecting a first region in the image frame in which the difference between the second background model and the third background model is equal to or greater than a first threshold and the difference between the first background model and the third background model is equal to or greater than a second threshold times the difference between the first background model and the second background model.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016525700A JP6652051B2 (ja) | 2014-06-03 | 2015-06-02 | 検出システム、検出方法及びプログラム |
US15/315,413 US10115206B2 (en) | 2014-06-03 | 2015-06-02 | Detection system, detection method, and program storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014-115207 | 2014-06-03 | ||
JP2014115207 | 2014-06-03 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015186347A1 true WO2015186347A1 (ja) | 2015-12-10 |
Family
ID=54766430
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/002775 WO2015186347A1 (ja) | 2014-06-03 | 2015-06-02 | 検出システム、検出方法及びプログラム記憶媒体 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10115206B2 (ja) |
JP (1) | JP6652051B2 (ja) |
WO (1) | WO2015186347A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106548488A (zh) * | 2016-10-25 | 2017-03-29 | 电子科技大学 | 一种基于背景模型及帧间差分的前景检测方法 |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018123801A1 (ja) * | 2016-12-28 | 2018-07-05 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 三次元モデル配信方法、三次元モデル受信方法、三次元モデル配信装置及び三次元モデル受信装置 |
JP7154045B2 (ja) * | 2018-06-14 | 2022-10-17 | キヤノン株式会社 | 画像処理装置、撮像装置、画像処理方法 |
JP7233873B2 (ja) * | 2018-09-19 | 2023-03-07 | キヤノン株式会社 | 画像処理装置、画像処理方法、およびプログラム |
CN111193917B (zh) * | 2018-12-29 | 2021-08-10 | 中科寒武纪科技股份有限公司 | 运算方法、装置及相关产品 |
CN112329616B (zh) * | 2020-11-04 | 2023-08-11 | 北京百度网讯科技有限公司 | 目标检测方法、装置、设备以及存储介质 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110164185A1 (en) * | 2010-01-04 | 2011-07-07 | Samsung Electronics Co., Ltd. | Apparatus and method for processing image data |
JP2011198244A (ja) * | 2010-03-23 | 2011-10-06 | Hiromitsu Hama | 対象物認識システム及び該システムを利用する監視システム、見守りシステム |
JP5058010B2 (ja) * | 2007-04-05 | 2012-10-24 | ミツビシ・エレクトリック・リサーチ・ラボラトリーズ・インコーポレイテッド | シーン中に置き去りにされた物体を検出する方法 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011102416A1 (ja) * | 2010-02-19 | 2011-08-25 | 株式会社 東芝 | 移動物体追跡システムおよび移動物体追跡方法 |
CN103826102B (zh) * | 2014-02-24 | 2018-03-30 | 深圳市华宝电子科技有限公司 | 一种运动目标的识别方法、装置 |
US10079827B2 (en) * | 2015-03-16 | 2018-09-18 | Ricoh Company, Ltd. | Information processing apparatus, information processing method, and information processing system |
-
2015
- 2015-06-02 US US15/315,413 patent/US10115206B2/en active Active
- 2015-06-02 WO PCT/JP2015/002775 patent/WO2015186347A1/ja active Application Filing
- 2015-06-02 JP JP2016525700A patent/JP6652051B2/ja active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5058010B2 (ja) * | 2007-04-05 | 2012-10-24 | ミツビシ・エレクトリック・リサーチ・ラボラトリーズ・インコーポレイテッド | シーン中に置き去りにされた物体を検出する方法 |
US20110164185A1 (en) * | 2010-01-04 | 2011-07-07 | Samsung Electronics Co., Ltd. | Apparatus and method for processing image data |
JP2011198244A (ja) * | 2010-03-23 | 2011-10-06 | Hiromitsu Hama | 対象物認識システム及び該システムを利用する監視システム、見守りシステム |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106548488A (zh) * | 2016-10-25 | 2017-03-29 | 电子科技大学 | 一种基于背景模型及帧间差分的前景检测方法 |
CN106548488B (zh) * | 2016-10-25 | 2019-02-15 | 电子科技大学 | 一种基于背景模型及帧间差分的前景检测方法 |
Also Published As
Publication number | Publication date |
---|---|
US10115206B2 (en) | 2018-10-30 |
JP6652051B2 (ja) | 2020-02-19 |
US20170186179A1 (en) | 2017-06-29 |
JPWO2015186347A1 (ja) | 2017-04-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7447932B2 (ja) | 画像処理システム、画像処理方法及びプログラム | |
WO2015186347A1 (ja) | 検出システム、検出方法及びプログラム記憶媒体 | |
US9858483B2 (en) | Background understanding in video data | |
JP6482195B2 (ja) | 画像認識装置、画像認識方法及びプログラム | |
US10776931B2 (en) | Image processing system for detecting stationary state of moving object from image, image processing method, and recording medium | |
US10970896B2 (en) | Image processing apparatus, image processing method, and storage medium | |
WO2019020103A1 (zh) | 目标识别方法、装置、存储介质和电子设备 | |
US20130162867A1 (en) | Method and system for robust scene modelling in an image sequence | |
JP6436077B2 (ja) | 画像処理システム、画像処理方法及びプログラム | |
KR20130104286A (ko) | 영상 처리 방법 | |
JP7067023B2 (ja) | 情報処理装置、背景更新方法および背景更新プログラム | |
US11521392B2 (en) | Image processing apparatus and image processing method for image analysis process | |
US9911061B2 (en) | Fast histogram-based object tracking | |
JP2014203133A (ja) | 画像処理装置、画像処理方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15803844 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2016525700 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15315413 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15803844 Country of ref document: EP Kind code of ref document: A1 |