CN108665489B - Method for detecting changes in geospatial images and data processing system - Google Patents


Info

Publication number
CN108665489B
Authority
CN
China
Prior art keywords
image
slope
values
normalized
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710188693.5A
Other languages
Chinese (zh)
Other versions
CN108665489A (en
Inventor
罗伯特·詹姆斯·克莱因 (Robert James Klein)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Boeing Co
Original Assignee
Boeing Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Boeing Co
Priority to CN201710188693.5A
Publication of CN108665489A
Application granted
Publication of CN108665489B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • G06T2207/30184Infrastructure

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a method and a data processing system for detecting changes in geospatial images. An imaging device (902) is operated to take two images (104, 106) of a single location. The two images are received at a processor (906) that normalizes the visible (108, 116) and near-infrared (112, 120) bands of the two images. The two images are registered (210). Corresponding pixels of the two images are divided into a first group (302) with a slope and a second group (306) without a slope. The slope groupings are compared (402, 406) to determine which of the corresponding pixels have a probability of change greater than a predetermined threshold. A vector polygon (604) is created based on the comparison of the slope groupings to represent the changed areas in the two images. The changed regions in the two images are displayed on a display device (912).

Description

Method for detecting changes in geospatial images and data processing system
Technical Field
The present disclosure relates to methods and apparatus for improving image registration tools and using geomorphic algorithms to designate regions with high probability of variation between two georegistered images.
Background
Disclosure of Invention
An exemplary embodiment provides a method for detecting changes in geospatial imagery. The method may include operating the imaging device to obtain two images of a single location. The method may further include receiving the two images at a processor. The method may further include normalizing, by the processor, the visible band and the near infrared band of the two images. The method may further include registering, by the processor, the two images. The method may further include dividing, by the processor, corresponding pixels of the two images into a first group having a slope and a second group having no slope. The method may further include comparing, by the processor, the slope groupings to determine which of the corresponding pixels have a probability of change greater than a predetermined threshold. The method may further include creating, by the processor, a vector polygon to represent the region of change in the two images based on the comparison of the slope groupings. The method may further include displaying, on a display device in communication with the processor, the registered image and the polygon representing a probability of change above the threshold.
The exemplary embodiments provide an alternative method. The alternative method includes operating at least one device to obtain a first optical image of a location and a second optical image of the location. The alternative method may also include normalizing, with the processor, the first optical image to form a first normalized image and normalizing the second optical image to form a second normalized image. The alternative method may also include performing, by the processor, image matching for the first normalized image and the second normalized image to generate vector controls. The alternative method may further include performing, by the processor, registration of the first normalized image with the second normalized image, wherein a registered image is formed. The alternative method may also include calculating, by the processor, a first slope image from the first normalized image and a second slope image from the second normalized image. The alternative method may also include refining, by the processor, the first slope image into a first set of binary values based on a first threshold, and the second slope image into a second set of binary values based on a second threshold. The alternative method may further include adding, by the processor, the first slope image and the second slope image, wherein a third set of values is generated in which each value is 0, 1, or 2. The alternative method may also include changing, by the processor, all values of 2 to 0, wherein a fourth set of values is created in which each value is either 0 or 1. The alternative method may also include thereafter creating, by the processor, a polygon around each group of pixels having a value of 1. The alternative method may further include displaying, on a display device in communication with the processor, the registered image and the polygons representing a probability of change above a threshold.
Exemplary embodiments also provide a system. The system includes at least one imaging device configured to capture a first optical image of a location and a second optical image of the location. The system also includes a computer in communication with the at least one imaging device, the computer including a processor in communication with a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium stores program code that, when executed by the processor, is configured to normalize the first optical image to form a first normalized image and normalize the second optical image to form a second normalized image. The program code is further configured to perform image matching for the first normalized image and the second normalized image to generate vector controls. The program code is further configured to perform registration of the first normalized image with the second normalized image, wherein a registered image is formed. The program code is further configured to calculate a first slope image from the first normalized image and a second slope image from the second normalized image. The program code is further configured to refine the first slope image into a first set of binary values based on a first threshold and refine the second slope image into a second set of binary values based on a second threshold. The program code is further configured to add the first slope image and the second slope image, wherein a third set of values is generated in which each value is 0, 1, or 2. The program code is further configured to change all values of 2 to 0, wherein a fourth set of values is created in which each value is either 0 or 1. The program code is further configured to thereafter create a polygon around each group of pixels having a value of 1. The system also includes a display device in communication with the processor, the display device configured to display the registered image with the polygons.
Drawings
The novel features which are believed to be characteristic of the illustrative embodiments are set forth in the summary section above. The exemplary embodiments, however, as well as a preferred mode of use, further objectives and features thereof, will best be understood by reference to the following detailed description of an exemplary embodiment of the present disclosure when read in conjunction with the accompanying drawings, wherein:
FIG. 1 is a portion of a flowchart of a process for discovering changing areas of two geospatial images of a location in accordance with an exemplary embodiment;
FIG. 2 is a portion of a flowchart of a process for discovering changing areas of two geospatial images of a location in accordance with an exemplary embodiment;
FIG. 3 is a portion of a flowchart of a process for discovering changing areas of two geospatial images of a location in accordance with an exemplary embodiment;
FIG. 4 is a portion of a flowchart of a process for discovering changing areas of two geospatial images of a location in accordance with an exemplary embodiment;
FIG. 5 is a portion of a flowchart of a process for discovering changing areas of two geospatial images of a location in accordance with an exemplary embodiment;
FIG. 6 is a portion of a flowchart of a process for discovering changing areas of two geospatial images of a location in accordance with an exemplary embodiment;
FIG. 7 is an alternative flow diagram of a process for discovering changing regions of two geospatial images of a location in accordance with an exemplary embodiment;
FIG. 8 is an alternative flow diagram of a process for discovering changing regions of two geospatial images of a location in accordance with an exemplary embodiment;
FIG. 9 is a block diagram of a system for discovering changing areas of two geospatial images of a location in accordance with an exemplary embodiment; and
FIG. 10 illustrates a data processing system in accordance with an exemplary embodiment.
Detailed Description
The illustrative embodiments recognize and take into account that existing change detection tools tend to attempt to separate pixels that have changed rather than determining changed regions of an image. In general, existing automated image registration tools or image matching tools tend to have very narrow operating conditions. For example, these tools often require that the images already be in very close alignment. The normalization pre-processing of the exemplary embodiments described herein removes shading effects, enhances contrast, and compensates for some variation in spectral characteristics, which relaxes the stringent conditions sometimes imposed during image registration or image matching. The exemplary embodiments further recognize and take into account that most existing image change detection solutions produce an undesirable number of false positive change designations. The illustrative embodiments further recognize and take into account that geospatial imagery is sometimes retaken periodically because it is not known whether the imaged location has changed since a previous image was taken. Retaking these images at full resolution is expensive.
The illustrative embodiments solve these and other problems. For example, in practice, the process of the exemplary embodiments produces fewer false positives than if a single known image change detection technique was used. Exemplary embodiments may enable existing image registration tools and geomorphologic algorithms to work better by indicating regions with high probability of variation between two geographically registered images. Exemplary embodiments address specifying changed regions between temporally separated pairs of images. Furthermore, the exemplary embodiments use a pre-process that improves the registration of existing image matching algorithms.
Exemplary embodiments process the electro-optic imagery as if it were terrain and calculate the slope between pixel values to determine whether an area is rough or smooth. The exemplary embodiments may then sum the images to generate a yes/no scheme in which a given area on the image represents either change or no change.
In other words, the exemplary embodiments may calculate the rise of each pixel in both images and determine whether a slope exists at that pixel. Subsequently, the pixels are merged and classified according to their likelihood of change. Thus, exemplary embodiments provide a system and technique for determining which ground segments in an image have changed, so that additional images are retaken only when needed.
In still other words, exemplary embodiments may create a topographical map of pixels for both images. The slope of each pixel is calculated and classified in a binary manner (no slope/slope). The terrain pixel maps are then merged to determine which pixels have a high probability of change. The changed pixels are then output and used to create a polygon vector representing the area of the image that should be retaken.
The technical effect of the exemplary embodiments is that they improve the efficiency of detecting objects in geo-registered images. In this way, processing resources are conserved and the overall process of image management is more efficient, thereby increasing the efficiency of the computer processing the images.
FIG. 1 is a portion of a flowchart of a process for discovering changing areas of two geospatial images of a location in accordance with an exemplary embodiment. The method 100 may be used to find regions of change in at least two geo-registered images of a single location. Method 100 is only partially illustrated in FIG. 1; the method 100 is shown as a whole across FIGS. 1-6. Method 100 may be implemented using a data processing system, such as data processing system 1000 shown in FIG. 10. The method 100 of FIGS. 1-6 should be considered as a whole, as the method 100 connects the processes together and uses slope calculations in a new and innovative way relative to previous image recognition techniques. The method 100 may be referred to as a normalized probability-of-change algorithm because the method 100 may be used to find changes between two geo-registered images.
FIG. 1 illustrates a first operation in method 100. In particular, FIG. 1 refers to operation 102 of the method 100, in which a calculation is performed to normalize two images of the same location. Specifically, the visible band of each image is normalized using the near-infrared or infrared band of that image.
Because the two images are normalized, the images are easier to compare directly. Further, normalization increases contrast, which increases the likelihood of a successful registration step in operation 200 of FIG. 2. One byproduct of this process is that the controls generated from the registration process can be used to register a new, non-normalized image with a normalized image.
An example of a normalization technique includes using a Normalized Difference Vegetation Index (NDVI) to normalize the red band using the near infrared (nir). Another example of a normalization technique is to use the Green Normalized Difference Vegetation Index (GNDVI) to normalize the green band using the near infrared (nir) in a similar manner. An example of an NDVI formula is:
(image1_nir − image1_red) / (image1_nir + image1_red)
As used herein, the "visible band" corresponds to wavelengths of light that humans can perceive as red, green, or blue, and is therefore typically between about 400 nanometers and about 700 nanometers. As used herein, "near infrared" (nir) refers to wavelengths of light between about 700 nanometers and about 5000 nanometers.
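As a concrete illustration, the NDVI formula above can be sketched in a few lines of NumPy. This is a minimal sketch, not code from the patent; the function name and the epsilon guard against division by zero are illustrative assumptions:

```python
import numpy as np

def normalize_ndvi(nir, red, eps=1e-9):
    """Normalize the red band against the near-infrared band:
    (nir - red) / (nir + red). Output falls in [-1, 1]; eps is a
    guard against division by zero in dark pixels (an assumption,
    not specified in the text)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 bands: healthy vegetation reflects strongly in NIR.
nir = np.array([[0.8, 0.8], [0.2, 0.2]])
red = np.array([[0.1, 0.1], [0.2, 0.2]])
ndvi = normalize_ndvi(nir, red)
```

GNDVI would substitute the green band for the red band in the same formula.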
Returning to FIG. 1, operation 102 may include a number of sub-operations. The processor may receive a first image (operation 104) and a second image (operation 106). These operations may be performed in series or in parallel, although in the context of the method 100, these operations are shown as being performed in parallel.
With respect to the first image, a visible band of the first image is determined (operation 108), and a near-infrared band of the first image is determined (operation 110). Again, these operations may be performed in series or in parallel, but are shown as being performed in parallel. Thereafter, an operation is performed to normalize the visible band to the near-infrared band by dividing the difference between the near-infrared band and the visible band by the sum of the near-infrared band and the visible band (operation 112). However, other normalization techniques may be used. In any case, a first normalized image is formed (operation 114).
With respect to the second image, after operation 106, a visible band of the second image is determined (operation 116), and a near-infrared band of the second image is determined (operation 118). Again, these operations may be performed in series or in parallel, but are shown as being performed in parallel. Thereafter, as described above, an operation is performed to normalize the visible band to the near-infrared band (operation 120). However, other normalization techniques may be used. In any case, a second normalized image is formed (operation 122).
FIG. 2 is a portion of a flowchart of a process for discovering changing areas of two geospatial images of a location in accordance with an exemplary embodiment. The method 100 may be used to find regions of variation between at least two geo-registered images of a single location. The method 100 is only partially illustrated in FIG. 2; the method 100 is shown as a combination of fig. 1-6 as a whole. Method 100 may be implemented using a data processing system, such as data processing system 1000 shown in fig. 10.
FIG. 2 illustrates a second operation in method 100. In particular, FIG. 2 refers to operation 200 of method 100, in which image matching, registration, and vector control generation are performed. Image matching is performed in operation 202 and registration is performed in operation 204.
Attention is first turned to image matching (operation 202). The inputs to operation 202 are the single-band normalized images from operation 102, and the output of operation 202 is vector controls. Each vector includes two coordinates: the "source coordinates" of a location on the first image and the "destination coordinates" of the corresponding location on the second image. The controls are used to register the original images. The registration algorithm should generate one or more control vectors.
Continuing from operations 114 and 122 of fig. 1, the first normalized image and the second normalized image are provided to a processor that performs image matching (operation 206). The result is registration control (operation 208) for the image registration performed in operation 204. Further, as shown in fig. 3, a registered second normalized image is generated (operation 210), which is used in operation 300 of the method 100.
Turning now to image registration in operation 204, the two images are registered with each other. For example, a warp function may use the control vectors from operation 202 to transform, register, or correct the position of the original second image relative to the first normalized image. Multiple registrations may be used to stage the registration, as long as the controls are applied in an acceptable order, such as ascending polynomial order.
Continuing from operation 106 of fig. 1 (receiving the second image) and further based on input from the registration control generated in operation 208, the processor may register all frequency bands of the second image (operation 212). Thus, the second image is co-registered with the first image (operation 214).
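One simple way to realize the warp described above is to fit a first-order (affine) transform to the control vectors by least squares and then resample the second image with it. The sketch below shows only the fitting step; the function name is illustrative, and a production registration tool would also support higher polynomial orders and staged application of controls:

```python
import numpy as np

def fit_affine_from_controls(src, dst):
    """Fit an affine warp to control vectors by least squares.

    src, dst: (N, 2) arrays of matched (x, y) "source" and
    "destination" coordinates. Returns a 2x3 matrix A such that
    dst ~= A @ [x, y, 1]."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    design = np.hstack([src, np.ones((src.shape[0], 1))])  # (N, 3)
    coeffs, *_ = np.linalg.lstsq(design, dst, rcond=None)  # (3, 2)
    return coeffs.T                                        # (2, 3)

# Control vectors consistent with a pure 3-pixel shift in x.
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(3, 0), (13, 0), (3, 10), (13, 10)]
A = fit_affine_from_controls(src, dst)
```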
FIG. 3 is a portion of a flowchart of a process for discovering changing areas of two geospatial images of a location in accordance with exemplary embodiments. The method 100 may be used to find varying regions of at least two geo-registered images of a single location. The method 100 is only partially illustrated in FIG. 3; the method 100 is shown as a combination of fig. 1-6 as a whole. Method 100 may be implemented using a data processing system, such as data processing system 1000 shown in FIG. 10.
FIG. 3 illustrates a third operation in method 100. Specifically, FIG. 3 refers to operation 300 of method 100, in which slope images are obtained.
In operation 300, each image may be processed as terrain. Thus, the processor may calculate the percentage rise for each pixel of the first image and the second image. Note that the slopes are calculated independently for each image, and then reclassified and added, as described further below. This operation operates on each of the single-band normalized images (the first normalized image and the second normalized image). The output of operation 300 is a set of floating point (decimal) values in the range between 0 and 1, where 1 represents 100%. This output indicates the degree of clutter, roughness, or variation within an area of the image.
Returning to fig. 3, operation 300 may include calculating a first slope image using a pixel percentage rise from the first normalized image (operation 302). The result of this operation is that a first gradient image is created (operation 304). The results are used during operation 400 of method 100 shown in fig. 4.
Simultaneously or in parallel, operation 300 may also include calculating a second slope image using a pixel percentage rise from the second normalized image (operation 306). The result of this operation is that a second slope image is created (operation 308). This result is used during operation 400 of the method 100 shown in fig. 4.
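Treating the normalized image as terrain, the per-pixel percent rise can be approximated as the magnitude of the local intensity gradient. The sketch below is an illustrative assumption about the slope formula (the patent does not give one); it clips the result to the 0-to-1 range described above:

```python
import numpy as np

def slope_image(norm_img):
    """Compute a per-pixel slope (percent rise) by treating the
    normalized single-band image as a terrain surface. The result
    is clipped to [0, 1], where 1 represents 100%."""
    gy, gx = np.gradient(norm_img.astype(np.float64))
    rise = np.hypot(gx, gy)  # magnitude of the local gradient
    return np.clip(rise, 0.0, 1.0)

# A flat region next to a sharp edge (e.g. a building boundary).
img = np.array([[0.0, 0.0, 1.0, 1.0],
                [0.0, 0.0, 1.0, 1.0]])
slope = slope_image(img)
```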
FIG. 4 is a portion of a flowchart of a process for discovering areas of change in two geospatial images of a location in accordance with an exemplary embodiment. The method 100 may be used to find regions of change in at least two geo-registered images at a single location. The method 100 is only partially illustrated in FIG. 4; the method 100 is shown as a whole in the combination of fig. 1 to 6. Method 100 may be implemented using a data processing system, such as data processing system 1000 shown in FIG. 10.
FIG. 4 illustrates a fourth operation in method 100. In particular, FIG. 4 refers to operation 400 of method 100, in which the pixels of the two slope images are filtered and refined.
In operation 400, the values of the two slope images are divided into binary values using a threshold. Each value in each of the first and second slope images is assigned a value of 1 or 0, indicating slope and non-slope, respectively. An exemplary threshold may be a standard deviation or a simple average of the pixel values of the slope image, although other methods may be used.
Again, a value of 1 is assigned to slope and a value of 0 is assigned to non-slope. A slope value indicates the presence of objects (possibly representing vegetation, buildings, cars, and the like) in the area. A non-slope value indicates few or no objects in the area (and may represent a parking lot, open ground, and the like). A primary analysis may be performed on the pixels in the two slope images to aggregate and refine the results, thereby reducing noise in the images.
Returning to FIG. 4, the values in the first slope image obtained in operation 304 of FIG. 3 are divided into two groups, with values of 0 and 1, based on the first threshold (operation 402). Thereafter, the processor may perform the primary analysis and aggregation to reduce noise in the resulting processed first slope image (operation 404). This result is used in operation 500 of method 100 shown in FIG. 5.
In parallel or in series, the values in the second slope image obtained in operation 308 of FIG. 3 are divided into two groups, with values of 0 and 1, based on the second threshold (operation 406). The first and second thresholds may be the same value, although for clarity these thresholds are named differently. However, the first and second thresholds need not be the same, if advantageous. In any case, thereafter, the processor may perform the primary analysis and aggregation to reduce noise in the resulting processed second slope image (operation 408). This result is used in operation 500 of method 100 shown in FIG. 5.
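The thresholding of operations 402 and 406 can be sketched as follows; the use of the image mean as a default threshold is one of the simple choices mentioned above, and the function name is illustrative:

```python
import numpy as np

def refine_to_binary(slope, threshold=None):
    """Refine a slope image into binary values: 1 = slope present,
    0 = no slope. When no threshold is supplied, the image mean is
    used (one simple choice; a standard-deviation-based threshold
    would work the same way)."""
    if threshold is None:
        threshold = float(slope.mean())
    return (slope > threshold).astype(np.uint8)

slope = np.array([[0.0, 0.9], [0.05, 0.8]])
binary = refine_to_binary(slope, threshold=0.5)
```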
FIG. 5 is a portion of a flowchart of a process for discovering areas of change in two geospatial images of a location in accordance with an exemplary embodiment. The method 100 may be used to find regions of change in at least two geo-registered images at a single location. The method 100 is only partially shown in FIG. 5; the method 100 is shown in the combination of fig. 1-6 as a whole. Method 100 may be implemented using a data processing system, such as data processing system 1000 shown in FIG. 10.
FIG. 5 illustrates a fifth operation in method 100. Specifically, FIG. 5 refers to operation 500 of method 100, in which the two processed slope images are added together and the pixels are reclassified.
Each corresponding value in the processed first slope image and the processed second slope image is added (operation 502). The result is a single combined slope image represented by a third set of values, each of which is 0, 1, or 2. For example, if the value of a pixel in the first slope image is 1 and the corresponding value of that pixel in the second slope image is also 1, the resulting value is 2. Likewise, if the value of a pixel in the first or second slope image is 1 and the corresponding value of that pixel in the other slope image is 0, the resulting value is 1. Finally, if the value of a pixel in the first slope image is 0 and the corresponding value of that pixel in the second slope image is also 0, the resulting value is 0.
The resulting combined slope image is then reclassified (operation 504), and the result is used in operation 600 of method 100, as shown in FIG. 6. In this reclassification, all values of 2 become 0. It is assumed that if the corresponding pixel in both slope images is 1 (which results in a value of 2 in the combined slope image), the image has not substantially changed at that pixel. That is, it is assumed that the same object is being recorded and that the pixel does not represent a changed boundary of an object. Thus, the value 2 is reclassified as a value of 0 to indicate little or no probability of change in the image.
The other values, 0 and 1, in the combined slope image remain unchanged. It is assumed that there is no object of interest in pixels having a combined value of 0. It is further assumed that a value of 1, representing a slope change between corresponding pixels in the two images, represents a possible boundary of an object of interest.
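The addition and reclassification of operations 502 and 504 amount to a per-pixel exclusive-or of the two binary slope images: only pixels where exactly one image shows a slope survive. A minimal sketch (names illustrative):

```python
import numpy as np

def combine_and_reclassify(binary1, binary2):
    """Add two binary slope images (producing values 0, 1, or 2),
    then reclassify all 2s to 0: a slope in both images suggests the
    same object persists, so only a slope in exactly one image
    (value 1) marks a probable change."""
    combined = binary1.astype(np.uint8) + binary2.astype(np.uint8)
    combined[combined == 2] = 0
    return combined

binary1 = np.array([[1, 1, 0], [0, 1, 0]], dtype=np.uint8)
binary2 = np.array([[1, 0, 0], [0, 1, 1]], dtype=np.uint8)
change = combine_and_reclassify(binary1, binary2)
```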
FIG. 6 is a portion of a flowchart of a process for discovering areas of change in two geospatial images of a location in accordance with an exemplary embodiment. The method 100 may be used to find regions of change in at least two geo-registered images at a single location. The method 100 is only partially shown in FIG. 6; the method 100 is shown as a whole in the combination of fig. 1 to 6. Method 100 may be implemented using a data processing system, such as data processing system 1000 shown in FIG. 10.
FIG. 6 illustrates a sixth operation in method 100. In particular, FIG. 6 refers to operation 600 of method 100, in which polygon vectors are generated, refined, and then displayed. The polygon vectors are drawn around possible objects of interest in order to highlight them to the user. In an exemplary embodiment, a polygon vector may take the form of a circle around a region in the image that has a probability of change.
Pixels having a value of 1 in the reclassified combined slope image are grouped and output based on the reclassified combined slope image obtained in operation 504 of FIG. 5. From there, bounding polygons are created, dissolved, and recreated to aggregate changed regions that touch each other. The final output may be a circle or other polygon around each region in the original image that has a high probability of change, and thus represents a possible object of interest. The magnitude of the probability may be defined in map units by the area within the polygon.
In other words, in operation 600, the processor may convert the combined slope image from a raster format to a polygon vector format. The processor may use the polygon vector data to create a bounding circle around all neighboring polygons of value 1, representing a probability of change. The processor may then dissolve adjacent or overlapping bounding circles and create new bounding circles. This process results in a circle over each region with a probability of change. The magnitude of the change is defined by the area of each circle; in other words, larger circles represent larger probabilities of change. The result may be correlated with the original image, and the image with the circled areas may then be presented to the user via the display device.
Returning to FIG. 6, from the combined reclassified slope image resulting from operation 504, the processor may output the set of pixels having a value of 1 within the combined reclassified slope image to create polygons (operation 602). Additional processing is then performed at operation 604. Specifically, the processor may bound the polygons (operation 604A), and may then dissolve the polygons (operation 604B) and bound them again (operation 604C). This process of operation 604 may be repeated until a final polygon aggregation is reached. Collectively, the sub-operations in operation 604 may be considered to refine the polygon vector.
The final polygons are then displayed on the display device (operation 606). These polygons may be registered to one or both of the original or registered images from operation 200 and displayed so that the user may see the possible objects of interest bounded by the polygons. In any case, the process may terminate thereafter.
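The grouping and bounding of operation 600 can be approximated in raster space by labeling connected value-1 pixels and wrapping each group in a bounding circle. The patent works in a vector polygon format, so the flood-fill and circle formula below are illustrative simplifications:

```python
import numpy as np
from collections import deque

def change_regions(mask):
    """Group 4-connected value-1 pixels and return one bounding
    circle (center_row, center_col, radius) per group. Larger
    circles represent a larger magnitude of probable change."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    circles = []
    for r in range(h):
        for c in range(w):
            if mask[r, c] == 1 and not seen[r, c]:
                # Flood-fill one connected group of changed pixels.
                queue = deque([(r, c)])
                seen[r, c] = True
                rows, cols = [], []
                while queue:
                    y, x = queue.popleft()
                    rows.append(y)
                    cols.append(x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] == 1
                                and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                cy = (min(rows) + max(rows)) / 2.0
                cx = (min(cols) + max(cols)) / 2.0
                radius = max(max(rows) - min(rows),
                             max(cols) - min(cols)) / 2.0 + 0.5
                circles.append((cy, cx, radius))
    return circles

mask = np.zeros((8, 8), dtype=np.uint8)
mask[1:4, 1:4] = 1  # one 3x3 changed region
mask[6, 6] = 1      # one isolated changed pixel
circles = change_regions(mask)
```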
FIG. 7 is an alternative flow diagram of a process for discovering areas of change in two geospatial images of a location in accordance with an exemplary embodiment. The method 700 may be used to find regions of change in at least two geo-registered images at a single location. Method 700 may be implemented using a data processing system, such as data processing system 1000 shown in FIG. 10. Method 700 may be characterized as a method for detecting geospatial image changes.
The method 700 may begin by operating an imaging device to obtain two images of a single location (operation 702). This operation may be commanded by a processor or may be performed manually by a person operating the imaging device. In an exemplary embodiment, "operating the imaging device" may be operating the imaging device to acquire an image. In alternative exemplary embodiments, the two images may be obtained from memory locally or via a network such as a web service. Thus, the exemplary embodiments do not necessarily require the operation of an imaging device (such as a camera).
Next, the processor may receive two images (operation 704). The processor may then normalize the visible band and the near infrared band of the two images (operation 706) and may register the two images (operation 708).
The processor may then divide the corresponding pixels of the two images into a first group having a slope and a second group having no slope (operation 710). The processor may then compare the slope groupings to determine which corresponding pixels have a probability of change greater than a predetermined threshold (operation 712).
The processor may then create vector polygons based on the compared slope groupings to indicate regions of change in the two images (operation 714). The processor may then display the changed regions in the two images on a display device in communication with the processor (operation 716). In other words, the processor may cause the registered image, and the polygons representing a probability of change above the threshold, to be displayed on the display device. The threshold may be a "high probability" value, or any other value, determined by a user or by an automated process. The process may terminate thereafter.
The method 700 may vary and may include more or fewer operations, or different operations. For example, the registration may be performed using a generic pattern matching graphical path method. In another example, the slope grouping may be binary only, consisting of only the first group and the second group.
In an extended method, the method 700 may further include reacquiring, using the imaging device, a third image that includes the changed region, and repeating at least one of the selecting, normalizing, registering, dividing, comparing, and creating using the third image in place of one of the two images. As an example of this concept, the exemplary embodiments contemplate comparing archival and new images at medium or low spatial resolution (typically about 5 meters to about 30 meters), which uses significantly fewer resources than comparing high resolution images. Thus, the exemplary embodiments allow changes to be identified in medium or low spatial resolution images, with only the changed regions then acquired using a resource-intensive high resolution imaging device. Next, the high resolution images (about 1 meter and below) may undergo the same probability-of-change detection process against high resolution archived images. The result shows only the changed regions in the high resolution images. Accordingly, the exemplary embodiments are not necessarily limited to the specific examples described above.
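The coarse-to-fine tasking described above can be sketched as follows: change boxes found at low resolution are converted to ground footprints so that only those areas are re-imaged at high resolution. The pixel size, margin, and box representation are illustrative assumptions.

```python
def tasking_footprints(change_boxes, pixel_size_m=30.0, margin_m=60.0):
    """Convert low-resolution change boxes into ground footprints.

    change_boxes: (rmin, cmin, rmax, cmax) in low-res pixel coordinates.
    Returns (ymin, xmin, ymax, xmax) footprints in meters, padded by a
    margin so the high-resolution acquisition covers the whole region.
    """
    footprints = []
    for rmin, cmin, rmax, cmax in change_boxes:
        footprints.append((
            rmin * pixel_size_m - margin_m,
            cmin * pixel_size_m - margin_m,
            (rmax + 1) * pixel_size_m + margin_m,   # +1: far edge of pixel
            (cmax + 1) * pixel_size_m + margin_m,
        ))
    return footprints
```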
FIG. 8 is an alternative flow diagram of a process for discovering areas of change in two geospatial images of a location in accordance with an exemplary embodiment. The method 800 may be used to find regions of change in at least two geo-registered images at a single location. Method 800 may be implemented using a data processing system, such as data processing system 1000 shown in FIG. 10. Method 800 may be characterized as a method of image processing.
The method 800 may begin by operating an imaging device to acquire two images of a single location (operation 802). This operation may be similar to operation 702 of FIG. 7. This operation may be commanded by the processor or may be performed manually by a person operating the imaging device.
The processor may then normalize the first optical image to form a first normalized image and normalize the second optical image to form a second normalized image (operation 804). The processor may then perform image matching of the first normalized image and the second normalized image to generate a vector control (operation 806). The processor may then perform registration of the first normalized image with the second normalized image, wherein a registered image is formed (operation 808).
Next, the processor may calculate a first slope image from the first normalized image and a second slope image from the second normalized image (operation 810). The processor may then refine the first slope image to a first set of binary values based on a first threshold and refine the second slope image to a second set of binary values based on a second threshold (operation 812).
The processor may then add the first and second slope images, wherein a third set of values having values of 0, 1, or 2 is generated (operation 814). Next, the processor may change all values of 2 to 0, where a fourth set of values of 0 or 1 is created (operation 816).
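Operations 814 and 816 can be sketched with NumPy as below. One reading of the remapping is that a value of 2 marks a slope present in both images (a persistent edge, hence no change), so it is zeroed, leaving 1 only where exactly one image has slope; that interpretation is an assumption.

```python
import numpy as np

def combine_slope_images(slope_a, slope_b):
    """Sum two binary slope images and remap 2 -> 0 (operations 814-816)."""
    summed = slope_a + slope_b          # third set of values: 0, 1, or 2
    summed[summed == 2] = 0             # fourth set of values: 0 or 1
    return summed
```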
Thereafter, the processor may create a polygon around each group of pixels having a value of 1 (operation 818). The processor may then cause the registered image and the polygons representing a probability of change above the threshold to be displayed on a display device in communication with the processor (operation 820). More simply stated, the processor may cause the registered image, with the polygons, to be displayed on a display device. The process may terminate thereafter.
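A minimal sketch of operation 818 is shown below, approximating each polygon by the bounding box of a 4-connected group of value-1 pixels; the real implementation produces vector polygons, so the bounding-box simplification and 4-connectivity are assumptions.

```python
from collections import deque

def change_polygons(mask):
    """mask: 2-D list of 0/1 values; returns (rmin, cmin, rmax, cmax) boxes,
    one per 4-connected group of pixels having a value of 1."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and not seen[r][c]:
                # Breadth-first search to collect one connected group.
                q = deque([(r, c)])
                seen[r][c] = True
                rmin = rmax = r
                cmin = cmax = c
                while q:
                    y, x = q.popleft()
                    rmin, rmax = min(rmin, y), max(rmax, y)
                    cmin, cmax = min(cmin, x), max(cmax, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((rmin, cmin, rmax, cmax))
    return boxes
```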
The method 800 may vary and may include more or fewer operations, or different operations. For example, the first optical image may be one of a first visible band image and a first near-infrared band image, and the second optical image may be one of a second visible band image and a second near-infrared band image.
In another example, the first slope image is derived by calculating a percentage rise for each pixel in the first normalized image, and the second slope image is derived by calculating a percentage rise for each pixel in the second normalized image. In this case, calculating the first slope image yields a first set of values, wherein each of the first set of values is initially in a range between 0 and 1, and calculating the second slope image yields a second set of values, wherein each of the second set of values is in a range between 0 and 1. Further, refining the first slope image may include converting the first set of values to 0 or 1 based on the first threshold, and refining the second slope image may include converting the second set of values to 0 or 1 based on the second threshold. Still further, a slope value of 1 may indicate a first, changed area in the first optical image or the second optical image, and a slope value of 0 may indicate a second, unchanged area in the first optical image or the second optical image.
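The per-pixel percentage-rise slope and its refinement to binary values can be sketched as below; using the normalized gradient magnitude as the "rise" and the specific threshold value are assumptions about the document's exact formula.

```python
import numpy as np

def slope_image(normalized, threshold=0.1):
    """Compute a binary slope image from a normalized single-band image."""
    dy, dx = np.gradient(normalized)        # per-pixel rise in each axis
    rise = np.sqrt(dx ** 2 + dy ** 2)       # gradient magnitude per pixel
    if rise.max() > 0:
        rise = rise / rise.max()            # scale values into [0, 1]
    return (rise >= threshold).astype(np.uint8)   # refine to binary 0/1
```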
In yet another example, the refining may include performing a majority analysis on all pixels to aggregate the results. In another example, the polygon may define a region with pixel variations, indicating that an object of interest may be present.
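The majority analysis can be sketched as a 3x3 majority filter, replacing each pixel by the majority value of its neighborhood to aggregate results and remove isolated misclassified pixels; the 3x3 window size is an assumption.

```python
import numpy as np

def majority_filter(binary):
    """Replace each pixel by the majority value of its 3x3 neighborhood."""
    padded = np.pad(binary, 1, mode='edge')     # replicate edges for borders
    out = np.empty_like(binary)
    rows, cols = binary.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + 3, c:c + 3]
            out[r, c] = 1 if window.sum() >= 5 else 0   # majority of 9 cells
    return out
```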
In an extended method, creating the polygon in method 800 may include converting the registered image into raster data for the group of pixels; converting the raster data into polygon vector data for the group of pixels; and creating a bounding circle on the polygon. In this case, there may be a plurality of polygons, wherein converting the registered image, converting the raster data, and creating the bounding circle are repeated for each of the plurality of polygons. In this case, the method 800 may further include dissolving adjacent or overlapping bounding circles.
In another variation, the area of the polygon may indicate the magnitude of the probability of change for the region. Accordingly, the exemplary embodiments are not necessarily limited by the specific examples described above.
FIG. 9 is a block diagram of a system for discovering changing regions in two geospatial images of a location in accordance with an exemplary embodiment. The system 900 can be used to find regions of change in at least two geo-registered images at a single location. System 900 may be implemented as data processing system 1000 shown in fig. 10.
System 900 may include at least one imaging device 902. The at least one imaging device 902 may be configured to take a first optical image of a location and a second optical image of the location.
The system 900 may also include a computer 904 in communication with at least one imaging device 902. Computer 904 includes a processor 906 in communication with a non-transitory computer-readable storage medium 908. The non-transitory computer-readable storage medium 908 may store program code 910 that, when executed by a processor, is configured to perform a method.
The program code includes program code for normalizing the first optical image to form a first normalized image and normalizing the second optical image to form a second normalized image. The program code also includes program code to perform image matching of the first normalized image and the second normalized image to produce a vector control.
The program code also includes program code to perform registration of the first normalized image with the second normalized image, wherein a registered image is formed. The program code also includes program code to calculate a first slope image from the first normalized image and a second slope image from the second normalized image.
The program code also includes program code to refine the first slope image to a first set of binary values based on a first threshold and refine the second slope image to a second set of binary values based on a second threshold. The program code also includes program code to sum the first slope image and the second slope image, wherein a third set of values having values of 0, 1, or 2 is generated.
The program code also includes program code to change all values of 2 to 0, wherein a fourth set of values of 0 or 1 is created. The program code also includes program code to thereafter create a polygon around each group of pixels having a value of 1.
The system 900 also includes a display device 912 in communication with the processor 906. As described above, the display device 912 may be configured to display the registered images and the polygons indicating that the probability of change is above the threshold.
The system 900 may vary. For example, the first optical image may include one of a first visible band image and a first near-infrared band image, and the second optical image may include one of a second visible band image and a second near-infrared band image.
In another example, program code 910 may be configured such that the first slope image is derived by calculating a percentage rise for each pixel in the first normalized image, and the second slope image is derived by calculating a percentage rise for each pixel in the second normalized image. In this case, program code 910 may be configured such that calculating the first slope image results in a first set of values, wherein each of the first set of values is initially in a range between 0 and 1, and calculating the second slope image results in a second set of values, wherein each of the second set of values is in a range between 0 and 1. Further, program code 910 may be configured such that refining the first slope image includes converting the first set of values to 0 or 1 based on the first threshold, and refining the second slope image includes converting the second set of values to 0 or 1 based on the second threshold. Accordingly, the exemplary embodiments are not necessarily limited by the specific examples described above.
Turning now to FIG. 10, a schematic diagram of a data processing system is depicted in accordance with an illustrative embodiment. Data processing system 1000 in FIG. 10 is an example of a data processing system that may be used to implement the exemplary embodiments, such as method 100 shown in FIGS. 1 through 6 and the methods shown in FIGS. 7 and 8. Data processing system 1000 may also be used as a processor or a computer as described with respect to FIG. 9.
In the illustrative example, data processing system 1000 includes communications fabric 1002 that provides communications between processor unit 1004, memory 1006, persistent storage 1008, communications unit 1010, input/output (I/O) unit 1012, and display 1014.
The processor unit 1004 serves to execute instructions for software that may be loaded into memory 1006. The software may be an associative memory, a content addressable memory, or software for implementing processes described elsewhere herein. Thus, for example, the software loaded into memory 1006 may be software for implementing the steps described above with respect to FIG. 4 through FIG. 8. The processor unit 1004 may be a number of processors, a multi-core processor, or some other type of processor, depending on the particular implementation. "A number," as used herein with reference to an item, means one or more items. Further, the processor unit 1004 may be implemented using a number of heterogeneous processor systems in which a main processor and a secondary processor are present on a single chip. As another illustrative example, the processor unit 1004 may be a symmetric multi-processor system containing multiple processors of the same type.
Memory 1006 and persistent storage 1008 are examples of storage devices 1016. A storage device is any piece of hardware that is capable of temporarily and/or permanently storing information such as, but not limited to, data, program code in functional form, and/or other suitable information. Storage device 1016 may also be referred to as a computer-readable storage device in these examples. In these examples, memory 1006 may be, for example, random access memory or any other suitable volatile or non-volatile storage device. Persistent storage 1008 may take various forms depending on the particular implementation.
For example, persistent storage 1008 may contain one or more components or devices. For example, persistent storage 1008 may be a hard drive, a flash memory, a rewritable optical disk, a rewritable magnetic tape, or some combination of the above. The media used by persistent storage 1008 also may be removable. For example, a removable hard drive may be used for persistent storage 1008.
In these examples, communications unit 1010 provides for communications with other data processing systems or devices. In these examples, communications unit 1010 is a network interface card. The communication unit 1010 may provide communication using a physical communication link or a wireless communication link or both physical and wireless communication links.
An input/output (I/O) unit 1012 allows data to be input and output with other devices that may be connected to data processing system 1000. For example, input/output (I/O) unit 1012 may provide a connection for a user to enter input through a keyboard, a mouse, and/or some other suitable input device. Further, an input/output (I/O) unit 1012 may send output to a printer. Display 1014 provides a mechanism to display information to a user.
Instructions for the operating system, applications, and/or programs may be located in storage devices 1016, with storage devices 1016 communicating with processor unit 1004 through communications fabric 1002. In these illustrative examples, the instructions are present in functional form on persistent storage 1008. These instructions may be loaded into memory 1006 for execution by processor unit 1004. The processes of the different embodiments may be performed by processor unit 1004 through the use of computer implemented instructions, which may be located in a memory, such as memory 1006.
These instructions are referred to as program code, computer usable program code, or computer readable program code that may be read and executed by a processor in processor unit 1004. Program code in different embodiments may be embodied on different physical or computer readable storage media, such as memory 1006 or persistent storage 1008.
Program code 1018 is located in a functional form on computer readable media 1020 that is selectively removable and may be loaded onto or transferred to data processing system 1000 for execution by processor unit 1004. Program code 1018 and computer readable media 1020 form computer program product 1022 in these examples. In one example, computer readable media 1020 may be computer readable storage media 1024 or computer readable signal media 1026. Computer readable storage media 1024 may include, for example, an optical or magnetic disk that is inserted or placed into a drive or other device that is part of persistent storage 1008 for transfer onto a storage device, such as a hard drive, that is part of persistent storage 1008. Computer readable storage media 1024 also may take the form of a persistent storage, such as a hard drive, a thumb drive, or a flash memory, that is connected to data processing system 1000. In some instances, computer readable storage media 1024 may not be removable from data processing system 1000.
Alternatively, program code 1018 may be transferred to data processing system 1000 using computer readable signal media 1026. Computer-readable signal medium 1026 may be, for example, a propagated data signal containing program code 1018. For example, computer-readable signal media 1026 may be an electromagnetic signal, an optical signal, and/or any other suitable type of signal. These signals may be transmitted over communication links such as wireless communication links, fiber optic cables, coaxial cables, wiring, and/or any other suitable type of communication link. In other words, the communication links and/or connections may be physical or wireless in the illustrative examples.
In some illustrative examples, program code 1018 may be downloaded over a network to persistent storage 1008 from another device or data processing system through computer readable signal media 1026 for use within data processing system 1000. For example, program code stored in a computer readable storage medium in a server data processing system may be downloaded over a network from the server to data processing system 1000. The data processing system providing program code 1018 may be a server computer, a client computer, or some other device capable of storing and transmitting program code 1018.
The different components illustrated for data processing system 1000 are not meant to provide architectural limitations to the manner in which different embodiments may be implemented. The different illustrative embodiments may be implemented in a data processing system including components in addition to, or in place of, those illustrated for data processing system 1000. Other components shown in FIG. 10 can be varied from the illustrative examples shown. The different embodiments may be implemented using any hardware device or system capable of executing program code. As one example, the data processing system may include organic components integrated with inorganic components and/or may be comprised entirely of organic components, excluding a human being. For example, a storage device may be comprised of an organic semiconductor.
In another illustrative example, the processor unit 1004 may take the form of a hardware unit that has circuits that are manufactured or configured for a particular use. This type of hardware may perform operations without needing program code to be loaded into a memory from a storage device to be configured to perform the operations.
For example, when the processor unit 1004 takes the form of a hardware unit, the processor unit 1004 may be a circuit system, an application specific integrated circuit (ASIC), a programmable logic device, or some other suitable type of hardware configured to perform a number of operations. With a programmable logic device, the device is configured to perform the number of operations. The device may be reconfigured at a later time or may be permanently configured to perform the number of operations. Examples of programmable logic devices include, for example, a programmable logic array, programmable array logic, a field programmable logic array, a field programmable gate array, and other suitable hardware devices. With this type of implementation, program code 1018 may be omitted because the processes for the different embodiments are implemented in a hardware unit.
In yet another illustrative example, the processor unit 1004 may be implemented using a combination of processors found in computers and hardware units. The processor unit 1004 may have a plurality of hardware units and a plurality of processors configured to execute the program code 1018. In this described example, some of the processes may be implemented in multiple hardware units, while other processes may be implemented in multiple processors.
As another example, a storage device in data processing system 1000 is any hardware device that may store data. Memory 1006, persistent memory 1008, and computer-readable media 1020 are examples of storage devices in a tangible form.
In another example, a bus system may be used to implement communications fabric 1002 and the bus system may include one or more buses, such as a system bus or an input/output bus. Of course, the bus system may be implemented using any suitable type of architecture that provides for a transfer of data between different components or devices attached to the bus system. Additionally, a communications unit may include one or more devices used to transmit and receive data, such as a modem or a network adapter. Also for example, a memory may be, for example, the memory 1006 or a cache memory, such as may be found in an interface and memory controller hub present in the communications fabric 1002.
Data processing system 1000 may also include associative memory 1028. Associative memory 1028 may be in communication with communications fabric 1002. Associative memory 1028 may also be in communication with, or in some exemplary embodiments be considered part of, storage devices 1016. While one associative memory 1028 is shown, additional associative memories may be present.
As used herein, the term "associative memory" refers to a plurality of data and a plurality of associations in the plurality of data. The plurality of data and the plurality of associations may be stored in a non-transitory computer readable storage medium. Multiple data may be collected into associated groups. In addition to direct correlations in the plurality of data, the associative memory may be configured to query based on at least indirect relationships in the plurality of data. Thus, the associative memory may be configured to query based on only direct relationships, only minimal indirect relationships, and a combination of direct relationships and minimal indirect relationships. The associative memory may be a content addressable memory.
The different illustrative embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. Some embodiments are implemented in software, which includes but is not limited to forms such as firmware, resident software, and microcode.
Furthermore, the different embodiments may take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any device or system that executes instructions. For the purposes of this disclosure, a computer-usable or computer-readable medium can generally be any tangible apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, or a propagation medium. Non-limiting examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a Random Access Memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Optical disks may include compact disk read-only memory (CD-ROM), compact disk read/write (CD-R/W), and DVD.
Furthermore, a computer-usable or computer-readable medium may contain or store computer-readable or computer-usable program code such that, when the computer-readable or computer-usable program code is executed on a computer, the execution of this code causes the computer to transmit another computer-readable or computer-usable program code over a communication link. This communication link may use a medium that is, for example and without limitation, physical or wireless.
A data processing system suitable for storing and/or executing computer readable or computer usable program code will include one or more processors coupled directly or indirectly to memory elements through a communication structure, such as a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some computer-readable or computer-usable program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
Input/output, or I/O, devices can be coupled to the system either directly or through intervening I/O controllers. These devices may include, for example and without limitation, keyboards, touch screen displays, and pointing devices. Different communications adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems, remote printers, or storage devices through intervening private or public networks. Modems and network adapters are just a few of the currently available types of communications adapters.
Further, the present disclosure includes embodiments according to the following:
1. a method for detecting changes in geospatial imagery, the method comprising:
operating an imaging device to obtain two images of a single location;
receiving, at a processor, two images;
normalizing, by a processor, a visible band and a near infrared band of the two images;
registering, by a processor, the two images;
dividing, by a processor, corresponding pixels of the two images into a first group having a slope and a second group having no slope;
comparing, by the processor, the slope groupings to determine which of the corresponding pixels have a probability of change greater than a predetermined threshold;
generating, by the processor, a vector polygon to represent a region of change in the two images based on the comparison of the slope groupings; and
displaying the regions of change in the two images on a display device in communication with the processor.
2. The method according to item 1, wherein the registration is performed using a generic pattern matching graphical path method.
3. The method of item 1, wherein the slope grouping is only binary, the slope grouping consisting of only the first group and the second group.
4. The method of item 1, further comprising:
re-capturing a third image including the changed region using the imaging device; and
repeating, using the third image, at least one of the selecting, normalizing, registering, dividing, comparing, and generating performed on the two images.
5. A method of image processing, comprising:
operating at least one imaging device to acquire a first optical image of a location and a second optical image of the location;
normalizing, with a processor, the first optical image to form a first normalized image and the second optical image to form a second normalized image;
performing, by a processor, image matching for the first normalized image and the second normalized image to produce a vector control;
performing, by a processor, registration of the first normalized image and the second normalized image, wherein a registered image is formed;
calculating, by a processor, a first slope image from the first normalized image and a second slope image from the second normalized image;
refining, by the processor, the first slope image to a first set of binary values based on a first threshold and the second slope image to a second set of binary values based on a second threshold;
adding, by the processor, the first and second slope images, wherein a third set of values having values of 0, 1, or 2 is generated;
changing, by the processor, all values 2 to 0, wherein a fourth set of values of 0 or 1 is generated;
thereafter creating, by the processor, a polygon around a group of pixels having a value of 1; and
displaying, on a display device in communication with the processor, the registered image and the polygons representing a probability of change above a threshold.
6. The method of item 5, wherein the first optical image comprises one of a first visible band image and a first near infrared band image, and wherein the second optical image comprises one of a second visible band image and a second near infrared band image.
7. The method of clause 5, wherein the first slope image is derived by calculating a percentage rise of each pixel in the first normalized image, and wherein the second slope image is derived by calculating a percentage rise of each pixel in the second normalized image.
8. The method of item 7, wherein calculating the first result of the first slope image yields a first set of values, wherein each of the first set of values is initially in a range between 0 and 1, and wherein calculating the second result of the second slope image yields a second set of values, wherein each of the second set of values is in a range between 0 and 1.
9. The method of item 8, wherein refining the first slope image comprises converting the first set of values to 0 or 1 based on the first threshold, and wherein refining the second slope image comprises converting the second set of values to 0 or 1 based on the second threshold.
10. The method of item 9, wherein a slope value of 1 represents a first, changed region in the first optical image or the second optical image, and wherein a slope value of 0 represents a second, unchanged region in the first optical image or the second optical image.
11. The method of item 5, wherein refining comprises performing a majority analysis on all pixels to aggregate results.
12. The method of item 5, wherein the polygon defines a region with pixel variations, thereby indicating a possible presence of an object of interest.
13. The method of item 5, wherein creating a polygon comprises:
converting the registered image into raster data for the group of pixels;
converting the raster data into polygon vector data for the set of pixels; and
creating a bounding circle on the polygon.
14. The method of item 13, wherein there are a plurality of polygons, wherein the converting of the registered image, the converting of the raster data, and the creating of the bounding circle are repeated for each of the plurality of polygons, and wherein the method further comprises:
dissolving adjacent or overlapping bounding circles.
15. The method of item 13, wherein an area of the polygon represents a magnitude of a probability of change for the region.
16. A system, comprising:
at least one imaging device configured to take a first optical image of a location and a second optical image of the location;
a computer in communication with at least one imaging device, the computer comprising a processor in communication with a non-transitory computer-readable storage medium, the non-transitory computer-readable storage medium storing program code that, when executed by the processor, is configured to perform a method, the program code comprising:
program code for normalizing the first optical image to form a first normalized image and normalizing the second optical image to form a second normalized image;
program code for performing image matching on the first normalized image and the second normalized image to generate a vector control;
program code for performing registration of the first normalized image with the second normalized image, wherein a registered image is formed;
program code for calculating a first slope image from the first normalized image and a second slope image from the second normalized image;
program code for refining the first slope image to a first set of binary values based on a first threshold value and refining the second slope image to a second set of binary values based on a second threshold value;
program code for adding the first slope image and the second slope image, wherein a third set of values having values of 0, 1, or 2 is generated;
program code for changing all values of 2 to 0, wherein a fourth set of values of 0 or 1 is generated; and
program code for thereafter creating a polygon around a group of pixels having a value of 1; and
a display device in communication with the processor, the display device configured to display the registered image and the polygon representing a probability of change above a threshold.
17. The system of item 16, wherein the first optical image comprises one of a first visible band image and a first near-infrared band image, and wherein the second optical image comprises one of a second visible band image and a second near-infrared band image.
18. The system of item 16, wherein the program code is configured such that the first slope image is derived by calculating a percentage rise of each pixel in the first normalized image, and the second slope image is derived by calculating a percentage rise of each pixel in the second normalized image.
19. The system of item 18, wherein the program code is configured such that calculating a first result for the first slope image yields a first set of values, wherein each value in the first set of values is initially in a range between 0 and 1, and wherein calculating a second result for the second slope image yields a second set of values, wherein each value in the second set of values is in a range between 0 and 1.
20. The system of item 19, wherein the program code is configured such that refining the first slope image comprises converting the first set of values to 0 or 1 based on the first threshold, and wherein refining the second slope image comprises converting the second set of values to 0 or 1 based on the second threshold.
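As a rough illustration of the slope-image step described in items 18–20 above, the percent rise of each pixel of a normalized image can be computed with NumPy. This is a sketch, not the patented implementation: the rescaling into [0, 1] by the maximum value is an assumption, since the items only state that the computed values initially lie between 0 and 1.

```python
import numpy as np

def slope_image(normalized_img, cell_size=1.0):
    """Percent rise per pixel of an image treated as a surface.

    normalized_img: 2-D array of band values in [0, 1].
    Returns values rescaled into [0, 1] (an assumption; the claims only
    say the computed values initially lie between 0 and 1).
    """
    # Central-difference gradients along rows and columns.
    dy, dx = np.gradient(normalized_img.astype(float), cell_size)
    rise = np.sqrt(dx ** 2 + dy ** 2)   # rise per unit of run
    pct = rise * 100.0                  # "percent rise"
    top = pct.max()
    return pct / top if top > 0 else pct
```

A flat image yields an all-zero slope image; any relief produces values spanning up to 1.0 after the rescaling.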
The description of the different illustrative embodiments has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. Moreover, different illustrative embodiments may provide different features than other illustrative embodiments. The embodiment or embodiments selected are chosen and described in order to best explain the principles of the embodiments and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (20)

1. A method (100) for detecting changes in geospatial imagery, the method (100) comprising:
operating an imaging device to obtain two images of a single location;
receiving (104, 106) the two images at a processor;
normalizing (112, 120), by the processor, the two images to form two normalized images;
registering (200), by the processor, the two normalized images;
dividing (302, 306), by the processor, corresponding pixels of the two normalized images into a first group (304) with a slope and a second group (308) without a slope;
comparing (400), by the processor, slope groupings to determine which corresponding pixels have a probability of change (404, 408) greater than a predetermined threshold;
creating (600), by the processor, a vector polygon to represent a region of change in the two normalized images based on a comparison of the slope groupings; and
displaying the vector polygon to represent a region of variation in the two normalized images.
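The normalizing step of claim 1 is not pinned down in the claims themselves. One plausible reading is a simple min-max scaling of each image's band values into [0, 1]; the sketch below shows that assumption and nothing more.

```python
import numpy as np

def normalize(img):
    """Min-max scale band values into [0, 1].

    The claims do not specify the normalization; min-max scaling is one
    plausible (assumed) reading of the "normalizing" step.
    """
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    if hi == lo:
        # Constant image: return all zeros rather than divide by zero.
        return np.zeros_like(img)
    return (img - lo) / (hi - lo)
```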
2. The method (100) according to claim 1, wherein the registration (210) is performed using a general pattern matching graph path method.
3. The method (100) of claim 1, wherein the slope groupings are only binary (402, 406), the slope groupings consisting only of the first and second groups.
4. The method (100) of claim 1, further comprising:
re-capturing a third image including the changed region using the imaging device; and
repeating the receiving, normalizing (114, 122), registering (210, 214), dividing (402, 406), comparing (404, 408), and creating, using the third image in place of at least one of the two images.
5. A data processing system (1000), comprising:
at least one imaging device (902) configured to take a first optical image (104) of a location and a second optical image (106) of the location;
a computer (904) in communication with the at least one imaging device (902), the computer (904) comprising a processor (906) in communication with a non-transitory computer-readable storage medium, the non-transitory computer-readable storage medium (908) storing program code (910) that, when executed by the processor (906), is configured to perform a method, the program code (910) comprising:
program code (910) for normalizing the first optical image (104) to form a first normalized image (114) and normalizing the second optical image (106) to form a second normalized image (122);
program code (910) for performing image matching (206) on the first normalized image (114) and the second normalized image (122) to generate a vector control (806);
program code (910) for performing a registration (200) of the first normalized image (114) and the second normalized image (122), wherein a registered image (210) is formed;
program code (910) for calculating a first slope image (304) from the first normalized image (114) and a second slope image (308) from the second normalized image (122);
program code (910) for refining (400) the first slope image (304) to a first set of binary values based on a first threshold value, and refining (400) the second slope image (308) to a second set of binary values based on a second threshold value (812);
program code (910) for adding the first slope image (304) and the second slope image (308), wherein a third set of values is generated having values (504) of 0, 1, or 2;
program code (910) for changing all values of 2 to 0 to create a fourth set of values of 0 or 1; and
program code (910) for thereafter creating a polygon (604) around a set of pixels having a value of 1 in the fourth set of values to represent a change in area of the first normalized image (114) and the second normalized image (122); and
a display device (912) in communication with the processor (906), the display device (912) configured to display the registered image and the polygon (604) representing a probability of change above a threshold.
6. The data processing system (1000) of claim 5, wherein the first optical image (104) comprises one of a first visible band image (108) and a first near infrared band image, and wherein the second optical image (106) comprises one of a second visible band image (116) and a second near infrared band image (118).
7. The data processing system (1000) of claim 5, wherein the program code (910) is configured such that the first slope image (304) is derived by calculating a percentage rise (302) for each pixel in the first normalized image (114) and the second slope image (308) is derived by calculating a percentage rise (306) for each pixel in the second normalized image (122).
8. The data processing system (1000) of claim 7, wherein the program code (910) is configured such that calculating a first result of the first slope image (304) results in a first set of values, wherein each value in the first set of values is initially in a range between 0 and 1, and wherein calculating a second result of the second slope image (308) results in a second set of values, wherein each value in the second set of values is in a range between 0 and 1.
9. The data processing system (1000) of claim 8, wherein the program code (910) is configured such that refining (400) the first slope image (304) comprises converting the first set of values to 0 or 1 based on the first threshold, and wherein refining (400) the second slope image (308) comprises converting the second set of values to 0 or 1 based on the second threshold.
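Claims 5–9 recite thresholding each slope image to binary, summing the two binary images, and zeroing the resulting 2s (slope present in both images indicates no change between them). A minimal NumPy sketch of that combination step follows; the threshold values are illustrative, since the claims leave them unspecified.

```python
import numpy as np

def probability_of_change(slope1, slope2, t1=0.5, t2=0.5):
    """Combine two slope images into a 0/1 change mask.

    t1 and t2 are illustrative thresholds; the claims do not fix them.
    """
    b1 = (slope1 >= t1).astype(np.uint8)  # refine first slope image to binary
    b2 = (slope2 >= t2).astype(np.uint8)  # refine second slope image to binary
    combined = b1 + b2                    # third set of values: 0, 1, or 2
    combined[combined == 2] = 0           # slope in both images -> no change
    return combined                       # fourth set: 1 marks probable change
```

A pixel ends up 1 only where exactly one of the two images shows slope above its threshold, which is what lets the 0/1 mask flag regions that changed between the acquisitions.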
10. A method of image processing, comprising:
operating at least one imaging device to acquire a first optical image of a location and a second optical image of the location;
normalizing, with a processor, the first optical image to form a first normalized image and the second optical image to form a second normalized image;
performing, by the processor, image matching for the first normalized image and the second normalized image to produce a vector control;
performing, by the processor, registration of the first and second normalized images, wherein a registered image is formed;
calculating, by the processor, a first slope image from the first normalized image and a second slope image from the second normalized image;
refining, by the processor, the first slope image to a first set of binary values based on a first threshold and the second slope image to a second set of binary values based on a second threshold;
adding, by the processor, the first and second slope images, wherein a third set of values having values of 0, 1, or 2 is generated;
changing, by the processor, all values of 2 to 0 to generate a fourth set of values that are 0 or 1;
thereafter creating, by the processor, a polygon around a set of pixels having a value of 1 in the fourth set of values to represent a region of change in the first normalized image and the second normalized image; and
displaying the polygon.
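Claim 10's "image matching ... to produce a vector control" is not elaborated in the claims. For a translation-only case, the offset between the two normalized images could be estimated by phase correlation; this is a stand-in technique chosen for illustration, not necessarily the patented matching method.

```python
import numpy as np

def estimate_shift(img1, img2):
    """Estimate the integer (dy, dx) translation that aligns img2 to img1
    via phase correlation (illustrative stand-in for the claimed
    image-matching step)."""
    f1 = np.fft.fft2(img1)
    f2 = np.fft.fft2(img2)
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    cross = f1 * np.conj(f2)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices past the midpoint to negative shifts.
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

The returned pair is the amount to roll `img2` by so that it lines up with `img1`; a full registration step would also handle rotation and scale, which this sketch does not.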
11. The method of claim 10, wherein the first optical image comprises one of a first visible band image and a first near-infrared band image, and wherein the second optical image comprises one of a second visible band image and a second near-infrared band image.
12. The method of claim 10, wherein the first slope image is obtained by calculating a percent-rise for each pixel in the first normalized image, and wherein the second slope image is obtained by calculating a percent-rise for each pixel in the second normalized image.
13. The method of claim 12, wherein calculating a first result of the first slope image yields a first set of values, wherein each value in the first set of values is initially in a range between 0 and 1, and wherein calculating a second result of the second slope image yields a second set of values, wherein each of the second set of values is in a range between 0 and 1.
14. The method of claim 13, wherein refining the first slope image comprises converting the first set of values to 0 or 1 based on the first threshold, and wherein refining the second slope image comprises converting the second set of values to 0 or 1 based on the second threshold.
15. The method of claim 14, wherein a value of 1 represents a changed region in the first optical image or the second optical image, and wherein a value of 0 represents an unchanged region in the first optical image or the second optical image.
16. The method of claim 10, wherein refining comprises performing a majority analysis of all pixels to aggregate results.
17. The method of claim 10, wherein the polygon defines a region with pixel variations, thereby indicating the presence of an object of interest.
18. The method of claim 10, wherein creating the polygon further comprises:
converting the registered image into vector data for the set of pixels;
converting raster data of a combined slope image of the first slope image and the second slope image into polygon vector data of the set of pixels; and
creating a bounding circle around the polygon.
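Claim 18's raster-to-vector and bounding-circle steps might look like the following sketch. To keep it self-contained, a GDAL/shapely-style polygonization is replaced by a plain connected-component pass over the 0/1 change mask, and each circle is centered on the group's bounding box with a radius reaching the farthest pixel; both choices are simplifying assumptions.

```python
import numpy as np
from collections import deque

def bounding_circles(mask):
    """Group 4-connected change pixels (value 1) and return a bounding
    circle (cx, cy, r) for each group, in scan order."""
    mask = np.asarray(mask, dtype=np.uint8)
    seen = np.zeros_like(mask, dtype=bool)
    circles = []
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] == 1 and not seen[y, x]:
                # Breadth-first flood fill collects one connected group.
                q, pts = deque([(y, x)]), []
                seen[y, x] = True
                while q:
                    py, px = q.popleft()
                    pts.append((py, px))
                    for ny, nx in ((py - 1, px), (py + 1, px),
                                   (py, px - 1), (py, px + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] == 1 and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                ys, xs = zip(*pts)
                cx = (min(xs) + max(xs)) / 2.0   # bounding-box center
                cy = (min(ys) + max(ys)) / 2.0
                r = max(((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
                        for py, px in pts)
                circles.append((cx, cy, r))
    return circles
```

A follow-on step, per claim 19, would merge adjacent or overlapping circles; that dissolve pass is omitted here.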
19. The method of claim 18, wherein there are a plurality of polygons, wherein the registered-image conversion, the raster-data conversion, and the bounding-circle creation are repeated for each of the plurality of polygons, and wherein the method further comprises:
dissolving adjacent or overlapping bounding circles.
20. The method of claim 18, wherein an area of the polygon represents a magnitude of a likelihood of change in the region.
CN201710188693.5A 2017-03-27 2017-03-27 Method for detecting changes in geospatial images and data processing system Active CN108665489B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710188693.5A CN108665489B (en) 2017-03-27 2017-03-27 Method for detecting changes in geospatial images and data processing system

Publications (2)

Publication Number Publication Date
CN108665489A CN108665489A (en) 2018-10-16
CN108665489B true CN108665489B (en) 2023-03-21

Family

ID=63786114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710188693.5A Active CN108665489B (en) 2017-03-27 2017-03-27 Method for detecting changes in geospatial images and data processing system

Country Status (1)

Country Link
CN (1) CN108665489B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5719949A (en) * 1994-10-31 1998-02-17 Earth Satellite Corporation Process and apparatus for cross-correlating digital imagery
CN102915444A (en) * 2011-06-22 2013-02-06 波音公司 Image registration
CN103699900A (en) * 2014-01-03 2014-04-02 西北工业大学 Automatic batch extraction method for horizontal vector contour of building in satellite image
JP2015023858A (en) * 2013-06-20 2015-02-05 株式会社パスコ Forest phase analyzer, forest phase analysis method and program
CN106384081A (en) * 2016-08-30 2017-02-08 水利部水土保持监测中心 Slope farmland extracting method and system based on high-resolution remote sensing image

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1780651A1 (en) * 2005-10-25 2007-05-02 Bracco Imaging, S.P.A. Method and system for automatic processing and evaluation of images, particularly diagnostic images
CA2719928A1 (en) * 2010-11-10 2011-01-19 Ibm Canada Limited - Ibm Canada Limitee Navigation on maps of irregular scales or variable scales
US9031311B2 (en) * 2013-02-28 2015-05-12 The Boeing Company Identification of aircraft surface positions using camera images
US10013785B2 (en) * 2015-05-22 2018-07-03 MyHEAT Inc. Methods and systems for object based geometric fitting
CN105809679B (en) * 2016-03-04 2019-06-18 李云栋 Mountain railway side slope rockfall detection method based on visual analysis
CN105957064A (en) * 2016-04-24 2016-09-21 长安大学 Bituminous pavement surface structure 2D test evaluating system and method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant