WO2022062853A1 - Remote sensing image registration method, apparatus, device, storage medium and system - Google Patents

Remote sensing image registration method, apparatus, device, storage medium and system

Info

Publication number
WO2022062853A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
remote sensing
road network
region
registration
Prior art date
Application number
PCT/CN2021/115513
Other languages
English (en)
French (fr)
Inventor
孟令宣
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2022062853A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular, to a registration method, apparatus, device, storage medium and system for remote sensing images.
  • Remote sensing image registration plays an important role in change detection, surface disturbance detection, building extraction, and image super-resolution processing.
  • the remote sensing image has large size, complex texture features, and many repeated textures.
  • the present disclosure provides a registration method, apparatus, device, storage medium and system for remote sensing images.
  • a method for registering a remote sensing image, the method comprising: acquiring a remote sensing image; acquiring a first road network image based on the remote sensing image; and performing image registration on the remote sensing image based on the first road network image and a second road network image corresponding to a remote sensing image template.
  • performing image registration on the remote sensing image based on the first road network image and the second road network image corresponding to the remote sensing image template includes: extracting road network feature points from the second road network image; and performing image registration on the remote sensing image based on the first road network image and the road network feature points.
  • performing image registration on the remote sensing image based on the first road network image and the road network feature points includes: acquiring one or more second image areas in the second road network image that include the road network feature points; for each second image area, performing template matching between the second image area and the corresponding first image area in the first road network image to obtain the matching point pairs in the second image area and the first image area; and performing image registration on the remote sensing image based on the matching point pairs in each second image area and the corresponding first image area.
  • the matching point pair in the second image area and the first image area is determined as follows: the second image area is slid over the first image area and, after each slide, the correlation between the sub-region of the first image area that overlaps the second image area and the second image area is calculated; the first feature point in the sub-region with the greatest correlation and the second feature point in the second image area corresponding to the first feature point are determined as a matching point pair.
  • there are multiple second image areas, each second image area corresponds one-to-one to a first image area, and template matching between a second image area and its corresponding first image area yields a set of matching point pairs; the method further includes: before performing image registration on the remote sensing image based on the matching point pairs in each second image area and the corresponding first image area, filtering, based on a preset condition, each set of matching point pairs obtained by template matching between a second image area and the corresponding first image area.
  • for a set of matching point pairs obtained by template matching between a second image area and the corresponding first image area, the corresponding preset condition is determined based on at least one of the following: the value distribution of the correlations between the sub-regions of the first image area and the second image area, where each sub-region is the part of the first image area that overlaps the second image area when the corresponding correlation is calculated; and the correlation value of the sub-region of the first image area that has the greatest correlation with the second image area.
  • filtering the set of matching point pairs based on the preset condition includes: filtering out the matching point pairs of the first image area and the corresponding second image area when the number of first target sub-regions in the first image area is greater than a preset number, where a first target sub-region is a sub-region of the first image area whose correlation with the corresponding second image area is greater than a first preset value; and/or filtering out the matching point pairs of the first image area and the corresponding second image area when the correlation values of all sub-regions of the first image area are smaller than a second preset value.
  • acquiring the first road network image based on the remote sensing image includes: inputting the remote sensing image into a pre-trained neural network; and acquiring the first road network image corresponding to the remote sensing image output by the neural network.
  • an apparatus for registering remote sensing images includes: a first acquisition module for acquiring a remote sensing image; a second acquisition module for acquiring a first road network image based on the remote sensing image; and a registration module for performing image registration on the remote sensing image based on the first road network image and a second road network image corresponding to a remote sensing image template.
  • the registration module includes: an extraction unit for extracting road network feature points from the second road network image; and a first registration unit for performing image registration on the remote sensing image based on the first road network image and the road network feature points.
  • the registration module includes: an acquisition unit for acquiring one or more second image areas in the second road network image that include the road network feature points; and a matching unit for performing, for each second image area, template matching between the second image area and the corresponding first image area in the first road network image to obtain the matching point pairs in the second image area and the first image area.
  • the second registration unit is configured to perform image registration on the remote sensing image based on the matching point pairs in each of the second image areas and the corresponding first image areas.
  • the matching point pairs in the second image area and the first image area are determined by the following modules: a computing module for sliding the second image area over the first image area and, after each slide, calculating the correlation between the sub-region of the first image area that overlaps the second image area and the second image area; and a determination module for determining the first feature point in the sub-region with the greatest correlation and the second feature point in the second image area corresponding to the first feature point as a matching point pair.
  • there are multiple second image areas, each second image area corresponds one-to-one to a first image area, and template matching between a second image area and its corresponding first image area yields a set of matching point pairs;
  • the device further includes: a filtering module configured to filter, based on a preset condition and before image registration is performed on the remote sensing image based on the matching point pairs in each second image area and the corresponding first image area, each set of matching point pairs obtained by template matching between a second image area and the corresponding first image area.
  • for a set of matching point pairs obtained by template matching between a second image area and the corresponding first image area, the corresponding preset condition is determined based on at least one of the following: the value distribution of the correlations between the sub-regions of the first image area and the second image area, where each sub-region is the part of the first image area that overlaps the second image area when the corresponding correlation is calculated; and the correlation value of the sub-region of the first image area that has the greatest correlation with the second image area.
  • the filtering module is configured to: in the case that the number of the first target sub-regions in the first image region is greater than a preset number, filter the first image region and the corresponding second image region Matching point pairs are filtered out, and the first target sub-region is a sub-region whose correlation with the corresponding second image region is greater than the first preset value in the first image region; and/or in the first image region In the case that the values of the correlations corresponding to the sub-regions are all smaller than the second preset value, the matching point pairs of the first image region and the corresponding second image region are filtered out.
  • for acquiring the first road network image based on the remote sensing image, the second acquisition module includes: an input unit for inputting the remote sensing image into a pre-trained neural network; and an acquisition unit for acquiring the first road network image corresponding to the remote sensing image output by the neural network.
  • a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, implements the method described in any one of the embodiments.
  • a computer device including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the method described in any of the embodiments when executing the program.
  • a registration system for remote sensing images comprising: an image acquisition device for acquiring remote sensing images; and a processing device for: acquiring the remote sensing image; acquiring a first road network image based on the remote sensing image; and performing image registration on the remote sensing image based on the first road network image and a second road network image corresponding to a remote sensing image template.
  • This embodiment of the present disclosure first obtains a first road network image based on a remote sensing image, and then performs image registration on the remote sensing image based on the first road network image and a second road network image corresponding to the remote sensing image template.
  • Because the texture features of a road network image are simple, it contains few repeated textures, and the road network features in the road network image are less affected by differences in image sensors and imaging conditions, performing image registration of the remote sensing image based on the road network image can effectively improve the success rate of remote sensing image registration.
  • FIG. 1 is a flowchart of a registration method of a remote sensing image according to an embodiment of the present disclosure.
  • FIG. 2 is a specific flowchart of a registration method of a remote sensing image according to an embodiment of the present disclosure.
  • FIG. 3 is a block diagram of an apparatus for registering a remote sensing image according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of a computer device according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of a registration system of a remote sensing image according to an embodiment of the present disclosure.
  • first, second, third, etc. may be used in this disclosure to describe various pieces of information, such information should not be limited by these terms. These terms are only used to distinguish the same type of information from each other.
  • first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information, without departing from the scope of the present disclosure.
  • word "if” as used herein can be interpreted as "at the time of” or "when” or "in response to determining.”
  • Remote sensing images are images collected by remote sensing.
  • Remote sensing is a non-contact, long-distance detection technology. Sensors (cameras, scanners, radars, etc.) on the ground or on carriers such as aircraft and satellites record the electromagnetic wave information emitted, reflected or scattered by the measured objects and process it into images or data usable by computers; after analysis and judgment, the various measured objects are identified, revealing their spatial distribution and patterns of change.
  • Remote sensing image registration is the process of matching and superimposing two or more images acquired at different times, different sensors (imaging equipment) or under different conditions (weather, illumination, camera position and angle, etc.). It has been widely used. It is used in remote sensing data analysis, computer vision, image processing and other fields, such as change detection, surface disturbance detection, building extraction, image super-resolution processing, etc.
  • However, remote sensing image registration, especially high-resolution remote sensing image registration, is difficult.
  • On the one hand, remote sensing images are large, have complex texture features and contain many repeated textures, so during registration the remote sensing image of one area is easily mistaken for another area; on the other hand, there are differences between different image sensors.
  • Moreover, the imaging conditions of remote sensing images differ, so different remote sensing images differ greatly in color, shadow, clarity and so on; and because different remote sensing images are captured at different times, the ground objects in them change, and the ground objects of the same area may look very different in different remote sensing images.
  • an embodiment of the present disclosure provides a registration method for remote sensing images. As shown in FIG. 1 , the method may include:
  • Step 101: acquiring a remote sensing image.
  • Step 102: acquiring a first road network image based on the remote sensing image.
  • Step 103: performing image registration on the remote sensing image based on the first road network image and the second road network image corresponding to the remote sensing image template.
  • the steps in the embodiments of the present disclosure may be performed by a processor, and the processor may be a GPU (Graphics Processing Unit), a CPU (Central Processing Unit) or another type of processor, or may be a processor group including one GPU, multiple GPUs, or at least one GPU and at least one CPU.
  • the remote sensing image may be acquired by sensors (eg, cameras, scanners, radars, etc.) on the ground or on vehicles such as aircraft and satellites.
  • the remote sensing image may be a high-resolution remote sensing image, and the resolution of the high-resolution remote sensing image may reach sub-meter level.
  • the high-resolution remote sensing image has a large amount of data, and the higher the resolution of the high-resolution remote sensing image, the larger the data volume.
  • the higher the resolution, the more detailed the information recorded in the remote sensing image: not only does the number of pixels increase, but the information complexity of each pixel in the high-resolution remote sensing image also increases.
  • Therefore, the increase in resolution is not linearly related to the increase in file size of high-resolution remote sensing imagery.
  • Moreover, the higher the resolution and the greater the amount of information, the harder it is to extract data. Therefore, it is difficult to register remote sensing images effectively with registration methods designed for general images.
  • a first road network image may be acquired, and remote sensing image registration may be performed based on the first road network image.
  • the road network refers to a road system in a certain area composed of various interconnected roads woven into a network. Because the texture features of road network images are simple, they contain few repeated textures, and the road network features in road network images are less affected by differences in image sensors and imaging conditions, performing image registration of remote sensing images based on road network images can effectively improve the success rate of remote sensing image registration.
  • the remote sensing image can be input into a pre-trained neural network, and the first road network image corresponding to the remote sensing image output by the neural network can be obtained.
  • the neural network can perform semantic recognition on the remote sensing image, so as to extract from the remote sensing image each pixel point whose semantics is a road.
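The disclosure does not fix a particular network architecture or framework. As a rough illustration only, a minimal sketch assuming a PyTorch binary road-segmentation model might look as follows; the model object, its output shape and the 0.5 threshold are hypothetical placeholders, not details of this patent.

```python
# Illustrative sketch: "model" stands for any pre-trained road-segmentation network that
# outputs per-pixel road logits of shape (1, 1, H, W) for an RGB input.
import torch
import numpy as np

def extract_road_network(remote_sensing_image: np.ndarray, model: torch.nn.Module) -> np.ndarray:
    """Return a binary road network image (1 = road pixel) for an HxWx3 uint8 image."""
    model.eval()
    x = torch.from_numpy(remote_sensing_image).permute(2, 0, 1).unsqueeze(0).float() / 255.0
    with torch.no_grad():
        logits = model(x)                     # assumed shape: (1, 1, H, W)
        prob = torch.sigmoid(logits)[0, 0]
    return (prob > 0.5).cpu().numpy().astype(np.uint8)
```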
  • other algorithms may also be used, or a combination of other algorithms and neural networks may be used to extract the first road network image from the remote sensing image, which is not limited in the present disclosure.
  • the remote sensing image may also be down-sampled first to obtain a down-sampled image, and then a first road network image may be acquired based on the down-sampled image.
  • the remote sensing image can be input into a pre-trained neural network for deconvolution to obtain a down-sampled image.
  • the neural network used for downsampling and the neural network used for extracting the first road network image may be the same neural network, or may be different neural networks.
  • a second road network image corresponding to the remote sensing image template may be acquired, and the manner of acquiring the second road network image is similar to that of the first road network image, which will not be repeated here.
  • the second road network image corresponding to the remote sensing image template may also be stored in advance, and the stored second road network image may be directly read during registration.
  • the remote sensing image template refers to a remote sensing image used as a template, for example, a remote sensing image of a certain area released by an official or authoritative organization.
  • road network feature points may be extracted from the second road network image; image registration is performed on the remote sensing image based on the first road network image and the road network feature points.
  • the road network feature points may include, but are not limited to, at least one of the intersection of two or more roads in the road network, an inflection point of a road, and a road endpoint.
  • the second road network image may be binarized, and the originally wide connected regions of the road network may be converted, in some manner (for example, by keeping only the central axis of the road network), into an image connected by single pixels to obtain the skeleton of the road network; the road network feature points are then obtained by extracting feature points of the skeleton.
  • Through the binarization, the edges in the second road network image can be made sharper, thereby facilitating the extraction of road network feature points from the second road network image.
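As a rough illustration of this skeleton-and-feature-point step, the sketch below assumes OpenCV and scikit-image are available; the neighbor-counting rule for detecting intersections and endpoints is one common heuristic, not the procedure prescribed by the disclosure.

```python
# Sketch: binarize a road-network image, thin it to a 1-pixel skeleton, and pick out
# skeleton pixels with >= 3 neighbors (intersections) or exactly 1 neighbor (endpoints).
import cv2
import numpy as np
from skimage.morphology import skeletonize

def road_network_feature_points(road_image: np.ndarray):
    """Return (intersections, endpoints) as lists of (row, col) coordinates."""
    _, binary = cv2.threshold(road_image, 127, 1, cv2.THRESH_BINARY)
    skeleton = skeletonize(binary.astype(bool)).astype(np.uint8)
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=np.uint8)
    neighbors = cv2.filter2D(skeleton, -1, kernel)   # count of 8-connected skeleton neighbors
    intersections = list(zip(*np.where((skeleton == 1) & (neighbors >= 3))))
    endpoints = list(zip(*np.where((skeleton == 1) & (neighbors == 1))))
    return intersections, endpoints
```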
  • one or more second image areas in the second road network image that include the road network feature points may be acquired; for each second image area, template matching is performed between the second image area and the corresponding first image area in the first road network image to obtain the matching point pairs in the second image area and the first image area; and image registration is performed on the remote sensing image based on the matching point pairs in each second image area and the corresponding first image area.
  • the number of the second image areas may be one or more, each second image area may be an image block in the second road network image, and a second image area may include one or more feature points of the road network.
  • the second image area may be determined by taking each of a plurality of road network feature points in the second road network image as a center. For example, road network feature points a, b and c are selected in the second road network image, and second image areas A, B and C are determined with feature points a, b and c as their respective centers.
  • the shape and size of the second image area corresponding to each road network feature point may be the same or different.
  • the first image area may be the first road network image itself. In order to improve processing efficiency, the first image area may also be an image block in the first road network image.
  • in the case where the number of second image areas is more than one, the number of first image areas may also be more than one.
  • the number of the first image areas may be the same as the number of the second image areas, and each second image area corresponds to one first image area.
  • the number of the first image areas may be smaller than the number of the second image areas, and a plurality of second image areas may share one first image area.
  • the first image area corresponding to the second image area may be determined according to the pixel position of the center point of the second image area in the second road network image.
  • the pixel position of the center point of the first image area corresponding to the second image area in the first road network image is the same as the pixel position of the center point of the second image area in the second road network image.
  • For example, if the pixel position of the center point of the second image area A in the second road network image is (x0, y0), the pixel position of the center point of the first image area corresponding to the second image area A in the first road network image is also (x0, y0).
  • the size of the first image area may be larger than the size of the second image area.
  • an image block in the first road network image with the same shape as that of the second image area may be selected as the first image area.
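A minimal sketch of this cropping scheme is given below; the 64- and 192-pixel block sizes are arbitrary example values chosen for illustration, not values specified by the disclosure, and the array names are hypothetical.

```python
# Crop a small template (second image area) around a road-network feature point in the
# second road network image, and a larger search window (first image area) at the same
# pixel position in the first road network image.
import numpy as np

def crop_centered(image: np.ndarray, center: tuple, size: int) -> np.ndarray:
    """Crop a size x size block centered at (row, col), clipped to the image bounds."""
    r, c = center
    half = size // 2
    r0, r1 = max(r - half, 0), min(r + half, image.shape[0])
    c0, c1 = max(c - half, 0), min(c + half, image.shape[1])
    return image[r0:r1, c0:c1]

# second_image_area = crop_centered(second_road_network, feature_point, 64)
# first_image_area  = crop_centered(first_road_network, feature_point, 192)
```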
  • the matching point pair includes at least one pair of mutually corresponding matching pixel points; the two pixel points of a matching pair correspond to the same physical point in physical space, or their physical distance in physical space is less than a preset distance threshold.
  • the matching point pair in the second image area and the first image area is determined as follows: the second image area is slid over the first image area and, after each slide, the correlation between the sub-region of the first image area that overlaps the second image area and the second image area is calculated; the first feature point in the sub-region with the greatest correlation and the second feature point in the second image area corresponding to the first feature point (for example, the center point of the second image area) are determined as a matching point pair.
  • the correlation calculation can start from the pixel point P with pixel coordinates (x1, y1) in the first image area. The center point of the second image area is slid to pixel point P, and the correlation of the sub-region of the first image area overlapping the second image area is calculated. The second image area is then slid one pixel to the right along the X-axis over the first image area and the correlation is calculated again, until the correlation has been calculated for every sub-region with y-coordinate y1 produced by moving the second image area along the X-axis; this sliding and calculation is repeated row by row until the correlations of all sub-regions in the first image area have been calculated.
  • the correlation is used to represent the likelihood that the sub-region of the first image area overlapping the second image area is the same area as the second image area; the greater the correlation, the higher this likelihood.
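The sliding correlation search described above can be sketched with OpenCV's template matching; here cv2.TM_CCORR_NORMED serves as a stand-in for the correlation measure, and the region names follow the terms above (this is an illustrative sketch, not the disclosure's exact procedure).

```python
# Slide the second image area (template) over the first image area (search window),
# take the sub-region with the greatest correlation, and form a matching point pair
# from its center and the template center.
import cv2
import numpy as np

def best_match_pair(first_area: np.ndarray, second_area: np.ndarray, second_center):
    """Return (response map, (first feature point, second feature point), max correlation).
    second_center is the pixel position of the template center in the full image."""
    response = cv2.matchTemplate(first_area.astype(np.float32),
                                 second_area.astype(np.float32), cv2.TM_CCORR_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(response)     # max_loc = (x, y) of best placement
    th, tw = second_area.shape[:2]
    first_point = (max_loc[0] + tw // 2, max_loc[1] + th // 2)  # center of best sub-region
    return response, (first_point, second_center), max_val
```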
  • in the case of multiple second image areas, each second image area may correspond to one first image area, and template matching may be performed between a second image area and its corresponding first image area to obtain a set of matching point pairs.
  • the second image area includes a second image area A, a second image area B, and a second image area C, and the first image areas corresponding to the three second image areas are sequentially the first image area a, the first image area b and the first image area c.
  • the second image area A and the first image area a can be subjected to template matching to obtain a set of matching point pairs; the second image area B and the first image area b are subjected to template matching to obtain a set of matching point pairs; and Template matching is performed between the second image area C and the first image area c to obtain a set of matching point pairs.
  • furthermore, before image registration is performed on the remote sensing image based on the matching point pairs in the second image areas and the corresponding first image areas, each set of matching point pairs obtained by template matching between a second image area and the corresponding first image area can also be filtered based on a preset condition.
  • the preset condition may be determined based on at least one of the following: the value distribution of the correlations between the sub-regions of the first image area and the second image area; and the correlation value of the sub-region of the first image area with the greatest correlation with the second image area. Specifically, when the number of first target sub-regions in a first image area is greater than a preset number, the matching point pairs between the first image area and the corresponding second image area may be filtered out, where a first target sub-region is a sub-region of the first image area whose correlation with the corresponding second image area is greater than a first preset value.
  • the matching point pairs between the first image area and the corresponding second image area may also be filtered out under the condition that the values of the correlations corresponding to the sub-areas of the first image area are all smaller than the second preset value.
  • the first preset value, the second preset value and the preset number can be set according to actual needs.
  • the first preset value can be equal to the product of the maximum correlation value and a preset weight, where the weight is a positive number less than 1 (e.g., 0.9).
  • the preset number may be 90% of the total number of sliding positions in the first image area.
  • the second preset value may be 0.9.
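A minimal sketch of these two filtering rules, reusing the example values above (weight 0.9, preset number of 90% of the sliding positions, second preset value 0.9), is shown below; it operates on the correlation response map of one first image area and is only an illustration under those assumed values.

```python
# Keep or discard the matching point pair produced by one first image area, based on the
# distribution and maximum of its correlation response map.
import numpy as np

def keep_match(response: np.ndarray,
               weight: float = 0.9,        # first preset value = weight * max correlation
               count_ratio: float = 0.9,   # preset number = 90% of sliding positions
               second_preset: float = 0.9) -> bool:
    max_corr = float(response.max())
    if max_corr < second_preset:
        return False                        # all correlations too small: unreliable match
    first_preset = weight * max_corr
    num_high = int(np.count_nonzero(response > first_preset))
    if num_high > count_ratio * response.size:
        return False                        # too many near-maximal sub-regions: ambiguous match
    return True
```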
  • Traditional remote sensing image registration is mainly based on local feature matching to find matching points, or first collect a large amount of data to train a registration model, and then perform image registration based on the registration model.
  • the image registration method of the embodiments of the present disclosure, by contrast, performs registration based on road network images, so the registration success rate is high; moreover, it requires neither training data nor data annotation, which avoids expensive and time-consuming data collection and labeling, so the registration efficiency is high and the cost is low.
  • FIG. 2 is a specific flowchart of the registration method of remote sensing images according to an embodiment of the present disclosure.
  • the input is two high-resolution satellite remote sensing images to be registered, and the output is a registered remote sensing image.
  • the process has 4 core steps:
  • the intersection of the road network is used as the point to be matched, that is, the location where the road network is used for template matching in the next step.
  • T(x', y') is the pixel value of the second image area T at pixel coordinates (x', y'), and I(x+x', y+y') is the pixel value of the first image area I at pixel coordinates (x+x', y+y').
  • (x, y) is the position of the second image area in the first image area.
  • R(x, y) is the correlation at (x, y).
  • the correlation value range is (0,1), and the larger the value, the better the correlation.
  • the method of the embodiments of the present disclosure can be used to align the input remote sensing images of two or more phases.
  • the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
  • the present disclosure further provides a registration device for remote sensing images
  • the device includes: a first acquisition module 301 for acquiring a remote sensing image; a second acquisition module 302 for acquiring a first road network image based on the remote sensing image; and a registration module 303 configured to perform image registration on the remote sensing image based on the first road network image and the second road network image corresponding to the remote sensing image template.
  • the registration module includes: an extraction unit for extracting road network feature points from the second road network image; and a first registration unit for performing image registration on the remote sensing image based on the first road network image and the road network feature points.
  • the registration module includes: an acquisition unit for acquiring one or more second image areas in the second road network image that include the road network feature points; and a matching unit for performing, for each second image area, template matching between the second image area and the corresponding first image area in the first road network image to obtain the matching point pairs in the second image area and the first image area.
  • the second registration unit is configured to perform image registration on the remote sensing image based on the matching point pairs in each of the second image areas and the corresponding first image areas.
  • the matching point pairs in the second image area and the first image area are determined by the following modules: a computing module for sliding the second image area over the first image area and, after each slide, calculating the correlation between the sub-region of the first image area that overlaps the second image area and the second image area; and a determination module for determining the first feature point in the sub-region with the greatest correlation and the second feature point in the second image area corresponding to the first feature point as a matching point pair.
  • there are multiple second image areas, each second image area corresponds one-to-one to a first image area, and template matching between a second image area and its corresponding first image area yields a set of matching point pairs;
  • the device further includes: a filtering module configured to filter, based on a preset condition and before image registration is performed on the remote sensing image based on the matching point pairs in each second image area and the corresponding first image area, each set of matching point pairs obtained by template matching between a second image area and the corresponding first image area.
  • for a set of matching point pairs obtained by template matching between a second image area and the corresponding first image area, the corresponding preset condition is determined based on at least one of the following: the value distribution of the correlations between the sub-regions of the first image area and the second image area, where each sub-region is the part of the first image area that overlaps the second image area when the corresponding correlation is calculated; and the correlation value of the sub-region of the first image area that has the greatest correlation with the second image area.
  • the filtering module is configured to: in the case that the number of the first target sub-regions in the first image region is greater than a preset number, filter the first image region and the corresponding second image region Matching point pairs are filtered out, and the first target sub-region is a sub-region whose correlation with the corresponding second image region is greater than the first preset value in the first image region; and/or in the first image region In the case that the values of the correlations corresponding to the sub-regions are all smaller than the second preset value, the matching point pairs of the first image region and the corresponding second image region are filtered out.
  • for acquiring the first road network image based on the remote sensing image, the second acquisition module includes: an input unit for inputting the remote sensing image into a pre-trained neural network; and an acquisition unit for acquiring the first road network image corresponding to the remote sensing image output by the neural network.
  • the functions or modules included in the apparatuses provided in the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments.
  • modules described as separate components may or may not be physically separated, and the components shown as modules may or may not be physical modules, that is, they may be located in One place, or it can be distributed over multiple network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the present disclosure. Those of ordinary skill in the art can understand and implement it without creative effort.
  • an embodiment of the present disclosure also provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the method described in any of the embodiments when executing the program.
  • the device may include: a processor 401 , a memory 402 , an input/output interface 403 , a communication interface 404 and a bus 405 .
  • the processor 401 , the memory 402 , the input/output interface 403 and the communication interface 404 realize the communication connection among each other within the device through the bus 405 .
  • the processor 401 can be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is used to execute related programs to implement the technical solutions provided by the embodiments of the present disclosure.
  • the memory 402 can be implemented in the form of a ROM (Read Only Memory, read-only memory), a RAM (Random Access Memory, random access memory), a static storage device, a dynamic storage device, and the like.
  • the memory 402 may store an operating system and other application programs. When implementing the technical solutions provided by the embodiments of the present disclosure through software or firmware, relevant program codes are stored in the memory 402 and invoked and executed by the processor 401 .
  • the input/output interface 403 is used to connect the input/output module to realize information input and output.
  • the input/output module can be configured in the device as a component (not shown in the figure), or can be externally connected to the device to provide corresponding functions.
  • the input device may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc.
  • the output device may include a display, a speaker, a vibrator, an indicator light, and the like.
  • the communication interface 404 is used to connect a communication module (not shown in the figure), so as to realize the communication interaction between the device and other devices.
  • the communication module may implement communication through wired means (eg, USB, network cable, etc.), or may implement communication through wireless means (eg, mobile network, WIFI, Bluetooth, etc.).
  • Bus 405 includes a path to transfer information between the various components of the device (eg, processor 401, memory 402, input/output interface 403, and communication interface 404).
  • although the above-mentioned device only shows the processor 401, the memory 402, the input/output interface 403, the communication interface 404 and the bus 405, in a specific implementation the device may also include other components necessary for normal operation.
  • the above-mentioned device may only include components necessary to implement the solutions of the embodiments of the present disclosure, instead of all the components shown in the figures.
  • An embodiment of the present disclosure further provides a registration system for remote sensing images.
  • the system may include: an image acquisition device 501 for acquiring remote sensing images; and a processing device 502 for acquiring the remote sensing images obtaining a first road network image based on the remote sensing image; performing image registration on the remote sensing image based on the first road network image and a second road network image corresponding to a remote sensing image template.
  • the image acquisition device 501 can be mounted on a carrier such as an aircraft, a satellite or a vehicle, and directly or indirectly transmits the acquired first road network image to the processing device 502; the processing device 502 can perform the method described in any of the above embodiments, for which reference can be made to the above method embodiments, and details are not repeated here.
  • An embodiment of the present disclosure further provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, implements the method described in any of the foregoing embodiments.
  • Computer-readable media includes both persistent and non-permanent, removable and non-removable media, and storage of information may be implemented by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash Memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • computer-readable media does not include transitory computer-readable media, such as modulated data signals and carrier waves.
  • a typical implementing device is a computer, which may be in the form of a personal computer, laptop computer, cellular phone, camera phone, smart phone, personal digital assistant, media player, navigation device, email transceiver device, game console, tablet computer, wearable device, or a combination of any of these devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure provide a remote sensing image registration method, apparatus, device, storage medium and system. The method includes: acquiring a remote sensing image; acquiring a first road network image based on the remote sensing image; and performing image registration on the remote sensing image based on the first road network image and a second road network image corresponding to a remote sensing image template.

Description

Remote sensing image registration method, apparatus, device, storage medium and system
Cross-Reference to Related Application
The present disclosure claims priority to Chinese patent application No. 202011003115.8, filed on September 22, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of image processing, and in particular to a remote sensing image registration method, apparatus, device, storage medium and system.
Background
Remote sensing image registration plays an important role in change detection, surface disturbance detection, building extraction, image super-resolution processing and other applications. However, on the one hand, remote sensing images are large, have complex texture features and contain many repeated textures; on the other hand, there are differences between image sensors and the imaging conditions of remote sensing images vary, so the ground objects in remote sensing images change and the ground objects of the same area may differ greatly between remote sensing images. As a result, the success rate of remote sensing image registration is low.
Summary
The present disclosure provides a remote sensing image registration method, apparatus, device, storage medium and system.
According to a first aspect of the embodiments of the present disclosure, a remote sensing image registration method is provided, the method including: acquiring a remote sensing image; acquiring a first road network image based on the remote sensing image; and performing image registration on the remote sensing image based on the first road network image and a second road network image corresponding to a remote sensing image template.
In some embodiments, performing image registration on the remote sensing image based on the first road network image and the second road network image corresponding to the remote sensing image template includes: extracting road network feature points from the second road network image; and performing image registration on the remote sensing image based on the first road network image and the road network feature points.
In some embodiments, performing image registration on the remote sensing image based on the first road network image and the road network feature points includes: acquiring one or more second image areas in the second road network image that include the road network feature points; for each second image area, performing template matching between the second image area and the corresponding first image area in the first road network image to obtain matching point pairs in the second image area and the first image area; and performing image registration on the remote sensing image based on the matching point pairs in each second image area and the corresponding first image area.
In some embodiments, the matching point pairs in the second image area and the first image area are determined as follows: the second image area is slid over the first image area and, after each slide, the correlation between the sub-region of the first image area that overlaps the second image area and the second image area is calculated; and the first feature point in the sub-region with the greatest correlation and the second feature point in the second image area corresponding to the first feature point are determined as a matching point pair.
In some embodiments, there are multiple second image areas, each second image area corresponds one-to-one to a first image area, and template matching between a second image area and its corresponding first image area yields a set of matching point pairs; the method further includes: before performing image registration on the remote sensing image based on the matching point pairs in each second image area and the corresponding first image area, filtering, based on a preset condition, each set of matching point pairs obtained by template matching between a second image area and the corresponding first image area.
In some embodiments, for a set of matching point pairs obtained by template matching between a second image area and the corresponding first image area, the corresponding preset condition is determined based on at least one of the following: the value distribution of the correlations between the sub-regions of the first image area and the second image area, where each sub-region is the part of the first image area that overlaps the second image area when the corresponding correlation is calculated; and the correlation value of the sub-region of the first image area that has the greatest correlation with the second image area.
In some embodiments, filtering the set of matching point pairs based on the preset condition includes: filtering out the matching point pairs of the first image area and the corresponding second image area when the number of first target sub-regions in the first image area is greater than a preset number, where a first target sub-region is a sub-region of the first image area whose correlation with the corresponding second image area is greater than a first preset value; and/or filtering out the matching point pairs of the first image area and the corresponding second image area when the correlation values of all sub-regions of the first image area are smaller than a second preset value.
In some embodiments, acquiring the first road network image based on the remote sensing image includes: inputting the remote sensing image into a pre-trained neural network; and acquiring the first road network image corresponding to the remote sensing image output by the neural network.
According to a second aspect of the embodiments of the present disclosure, a remote sensing image registration apparatus is provided, the apparatus including: a first acquisition module for acquiring a remote sensing image; a second acquisition module for acquiring a first road network image based on the remote sensing image; and a registration module for performing image registration on the remote sensing image based on the first road network image and a second road network image corresponding to a remote sensing image template.
In some embodiments, the registration module includes: an extraction unit for extracting road network feature points from the second road network image; and a first registration unit for performing image registration on the remote sensing image based on the first road network image and the road network feature points.
In some embodiments, the registration module includes: an acquisition unit for acquiring one or more second image areas in the second road network image that include the road network feature points; a matching unit for performing, for each second image area, template matching between the second image area and the corresponding first image area in the first road network image to obtain matching point pairs in the second image area and the first image area; and a second registration unit for performing image registration on the remote sensing image based on the matching point pairs in each second image area and the corresponding first image area.
In some embodiments, the matching point pairs in the second image area and the first image area are determined by the following modules: a computing module for sliding the second image area over the first image area and, after each slide, calculating the correlation between the sub-region of the first image area that overlaps the second image area and the second image area; and a determination module for determining the first feature point in the sub-region with the greatest correlation and the second feature point in the second image area corresponding to the first feature point as a matching point pair.
In some embodiments, there are multiple second image areas, each second image area corresponds one-to-one to a first image area, and template matching between a second image area and its corresponding first image area yields a set of matching point pairs; the apparatus further includes: a filtering module for filtering, based on a preset condition and before image registration is performed on the remote sensing image based on the matching point pairs in each second image area and the corresponding first image area, each set of matching point pairs obtained by template matching between a second image area and the corresponding first image area.
In some embodiments, for a set of matching point pairs obtained by template matching between a second image area and the corresponding first image area, the corresponding preset condition is determined based on at least one of the following: the value distribution of the correlations between the sub-regions of the first image area and the second image area, where each sub-region is the part of the first image area that overlaps the second image area when the corresponding correlation is calculated; and the correlation value of the sub-region of the first image area that has the greatest correlation with the second image area.
In some embodiments, the filtering module is configured to: filter out the matching point pairs of the first image area and the corresponding second image area when the number of first target sub-regions in the first image area is greater than a preset number, where a first target sub-region is a sub-region of the first image area whose correlation with the corresponding second image area is greater than a first preset value; and/or filter out the matching point pairs of the first image area and the corresponding second image area when the correlation values of all sub-regions of the first image area are smaller than a second preset value.
In some embodiments, for acquiring the first road network image based on the remote sensing image, the second acquisition module includes: an input unit for inputting the remote sensing image into a pre-trained neural network; and an acquisition unit for acquiring the first road network image corresponding to the remote sensing image output by the neural network.
According to a third aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, the program implementing the method of any of the embodiments when executed by a processor.
According to a fourth aspect of the embodiments of the present disclosure, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the method of any of the embodiments when executing the program.
According to a fifth aspect of the embodiments of the present disclosure, a remote sensing image registration system is provided, the system including: an image acquisition device for acquiring remote sensing images; and a processing device for: acquiring the remote sensing image; acquiring a first road network image based on the remote sensing image; and performing image registration on the remote sensing image based on the first road network image and a second road network image corresponding to a remote sensing image template.
In the embodiments of the present disclosure, a first road network image is first acquired based on a remote sensing image, and image registration is then performed on the remote sensing image based on the first road network image and a second road network image corresponding to a remote sensing image template. Because the texture features of road network images are simple, they contain few repeated textures, and the road network features in road network images are less affected by differences in image sensors and imaging conditions, performing image registration of remote sensing images based on road network images can effectively improve the success rate of remote sensing image registration.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure.
FIG. 1 is a flowchart of a remote sensing image registration method according to an embodiment of the present disclosure.
FIG. 2 is a specific flowchart of a remote sensing image registration method according to an embodiment of the present disclosure.
FIG. 3 is a block diagram of a remote sensing image registration apparatus according to an embodiment of the present disclosure.
FIG. 4 is a schematic diagram of a computer device according to an embodiment of the present disclosure.
FIG. 5 is a schematic diagram of a remote sensing image registration system according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terms used in the present disclosure are for the purpose of describing particular embodiments only and are not intended to limit the present disclosure. The singular forms "a", "the" and "said" used in the present disclosure and the appended claims are also intended to include the plural forms unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any or all possible combinations of one or more of the associated listed items. In addition, the term "at least one" herein means any one of multiple items or any combination of at least two of multiple items.
It should be understood that although the terms first, second, third, etc. may be used in the present disclosure to describe various pieces of information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other. For example, without departing from the scope of the present disclosure, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "at the time of", "when" or "in response to determining".
In order to enable those skilled in the art to better understand the technical solutions in the embodiments of the present disclosure, and to make the above objects, features and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure are described in further detail below with reference to the accompanying drawings.
Remote sensing images are images collected by remote sensing. Remote sensing is a non-contact, long-distance detection technology: sensors (cameras, scanners, radars, etc.) on the ground or on carriers such as aircraft and satellites record the electromagnetic wave information emitted, reflected or scattered by the measured objects and process it into images or data usable by computers; after analysis and judgment, the various measured objects are identified, revealing their spatial distribution and patterns of change. Remote sensing image registration is the process of matching and superimposing two or more images acquired at different times, by different sensors (imaging devices) or under different conditions (weather, illumination, camera position and angle, etc.). It has been widely applied in remote sensing data analysis, computer vision, image processing and other fields, for example in change detection, surface disturbance detection, building extraction, and image super-resolution processing.
However, remote sensing image registration, especially high-resolution remote sensing image registration, is difficult. On the one hand, remote sensing images are large, have complex texture features and contain many repeated textures, so the image of one area is easily mistaken for another area during registration. On the other hand, there are differences between image sensors, and the imaging conditions of remote sensing images vary, so different remote sensing images differ greatly in color, shadow, clarity and so on; moreover, because different remote sensing images are captured at different times, the ground objects in them change, and the ground objects of the same area may look very different in different remote sensing images.
For these reasons, traditional remote sensing image registration approaches based on feature extraction and matching algorithms such as SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features) and ORB (Oriented FAST and Rotated BRIEF) find it difficult to locate correct registration points in remote sensing images, so their registration success rate is low.
On this basis, an embodiment of the present disclosure provides a remote sensing image registration method. As shown in FIG. 1, the method may include:
Step 101: acquiring a remote sensing image;
Step 102: acquiring a first road network image based on the remote sensing image;
Step 103: performing image registration on the remote sensing image based on the first road network image and a second road network image corresponding to a remote sensing image template.
The steps of the embodiments of the present disclosure may be performed by a processor, which may be a GPU (Graphics Processing Unit), a CPU (Central Processing Unit) or another type of processor, or may be a processor group including one GPU, multiple GPUs, or at least one GPU and at least one CPU. In step 101, the remote sensing image may be acquired by sensors (for example, cameras, scanners, radars, etc.) on the ground or on carriers such as aircraft and satellites. In some embodiments, the remote sensing image may be a high-resolution remote sensing image whose resolution can reach the sub-meter level. Unlike general images, high-resolution remote sensing images carry a large amount of data, and the higher the resolution, the larger the data volume. The higher the resolution, the more detailed the information recorded in the remote sensing image: not only does the number of pixels increase, but the information complexity of each pixel also increases, so the increase in resolution is not linearly related to the increase in the file size of a high-resolution remote sensing image. Moreover, the higher the resolution and the greater the amount of information, the harder it is to extract data. It is therefore difficult to register remote sensing images effectively with registration methods designed for general images.
In steps 102 and 103, a first road network image may be acquired, and remote sensing image registration may be performed based on the first road network image. A road network is a road system in a certain area composed of various interconnected roads woven into a network. Because the texture features of road network images are simple, they contain few repeated textures, and the road network features in road network images are less affected by differences in image sensors and imaging conditions, performing image registration of remote sensing images based on road network images can effectively improve the success rate of remote sensing image registration.
In some embodiments, the remote sensing image may be input into a pre-trained neural network, and the first road network image corresponding to the remote sensing image output by the neural network may be obtained. The neural network may perform semantic recognition on the remote sensing image so as to extract from it every pixel whose semantic label is road. In other embodiments, other algorithms, or a combination of other algorithms and neural networks, may also be used to extract the first road network image from the remote sensing image, which is not limited in the present disclosure.
In some embodiments, the remote sensing image may also first be down-sampled to obtain a down-sampled image, and the first road network image may then be acquired based on the down-sampled image. For example, the remote sensing image may be input into a pre-trained neural network for deconvolution to obtain the down-sampled image. The neural network used for down-sampling and the neural network used for extracting the first road network image may be the same neural network or different neural networks. Down-sampling reduces the amount of data to be processed and improves the efficiency of acquiring the first road network image.
During registration, the second road network image corresponding to the remote sensing image template may be acquired; it is acquired in a similar way to the first road network image, which is not repeated here. To improve processing efficiency, the second road network image corresponding to the remote sensing image template may also be stored in advance and read directly during registration. The remote sensing image template refers to a remote sensing image used as a template, for example a remote sensing image of a certain area released by an official or authoritative organization.
In step 103, road network feature points may be extracted from the second road network image, and image registration may be performed on the remote sensing image based on the first road network image and the road network feature points. The road network feature points may include, but are not limited to, at least one of: an intersection of two or more roads in the road network, an inflection point of a road, and a road endpoint.
Exemplarily, after the second road network image is acquired, it may be binarized, and the originally wide connected regions of the road network may be converted, in some manner (for example, by keeping only the central axis of the road network), into an image connected by single pixels to obtain the skeleton of the road network; the road network feature points are then obtained by extracting feature points of the skeleton. Binarization makes the edges in the second road network image sharper, which facilitates extracting road network feature points from the second road network image.
In some embodiments, one or more second image areas in the second road network image that include the road network feature points may be acquired; for each second image area, template matching is performed between the second image area and the corresponding first image area in the first road network image to obtain the matching point pairs in the second image area and the first image area; and image registration is performed on the remote sensing image based on the matching point pairs in each second image area and the corresponding first image area. Using road network information around the road network feature points to perform template matching within a certain range to obtain matching points avoids the inaccurate matching points that arise when matching with local features.
There may be one or more second image areas, each second image area may be an image block in the second road network image, and a second image area may include one or more road network feature points. For ease of processing, a second image area may be determined centered on each of multiple road network feature points in the second road network image. For example, road network feature points a, b and c are selected in the second road network image, and second image areas A, B and C are determined with feature points a, b and c as their respective centers. The shape and size of the second image areas corresponding to different road network feature points may be the same or different.
The first image area may be the first road network image itself; to improve processing efficiency, the first image area may also be an image block in the first road network image. When there are multiple second image areas, there may also be multiple first image areas. For example, the number of first image areas may be equal to the number of second image areas, with each second image area corresponding to one first image area. As another example, the number of first image areas may be smaller than the number of second image areas, with multiple second image areas sharing one first image area. In embodiments where each second image area corresponds to one first image area, the first image area corresponding to a second image area may be determined according to the pixel position of the center point of the second image area in the second road network image, where the pixel position of the center point of the corresponding first image area in the first road network image is the same as the pixel position of the center point of the second image area in the second road network image. For example, if the pixel position of the center point of second image area A in the second road network image is (x0, y0), the pixel position of the center point of the corresponding first image area in the first road network image is also (x0, y0). The size of the first image area may be larger than that of the second image area. For ease of processing, an image block in the first road network image with the same shape as the second image area may be selected as the first image area.
A matching point pair includes at least one pair of mutually corresponding matching pixel points; the two pixel points of a matching pair correspond to the same physical point in physical space, or their physical distance in physical space is less than a preset distance threshold. The matching point pair in the second image area and the first image area is determined as follows: the second image area is slid over the first image area and, after each slide, the correlation between the sub-region of the first image area that overlaps the second image area and the second image area is calculated; the first feature point in the sub-region with the greatest correlation and the second feature point in the second image area corresponding to the first feature point (which may, for example, be the center point of the second image area) are determined as a matching point pair.
For example, the correlation calculation may start from the pixel point P with pixel coordinates (x1, y1) in the first image area. The center point of the second image area is slid to pixel point P, and the correlation of the sub-region of the first image area overlapping the second image area is calculated. The second image area is then slid one pixel to the right along the X-axis over the first image area and the correlation is calculated again, until the correlation has been calculated for every sub-region with y-coordinate y1 produced by moving the second image area along the X-axis. The center of the second image area is then slid down one pixel from pixel point P along the Y-axis and the correlation is calculated again, until the correlation has been calculated for every sub-region with x-coordinate x1 produced by moving the second image area along the Y-axis. The above process is repeated until the correlations of all sub-regions in the first image area have been calculated.
The correlation is used to represent the likelihood that the region of the first image area overlapping the second image area is the same area as the second image area; the greater the correlation, the higher the likelihood that the two are the same area.
When there are multiple second image areas, each second image area may correspond to one first image area, and template matching may be performed between a second image area and its corresponding first image area to obtain a set of matching point pairs. For example, the second image areas include second image area A, second image area B and second image area C, and the corresponding first image areas are first image area a, first image area b and first image area c, respectively. Template matching between second image area A and first image area a yields one set of matching point pairs, template matching between second image area B and first image area b yields another set, and template matching between second image area C and first image area c yields a third set.
Further, before image registration is performed on the remote sensing image based on the matching point pairs in the second image areas and the first image areas, each set of matching point pairs obtained by template matching between a second image area and the corresponding first image area may also be filtered based on a preset condition. Judging the road network template matching results and excluding wrongly matched points can improve the image registration accuracy.
The preset condition may be determined based on at least one of the following: the value distribution of the correlations between the sub-regions of the first image area and the second image area; and the correlation value of the sub-region of the first image area with the greatest correlation with the second image area. Specifically, when the number of first target sub-regions in a first image area is greater than a preset number, the matching point pairs between the first image area and the corresponding second image area may be filtered out, where a first target sub-region is a sub-region of the first image area whose correlation with the corresponding second image area is greater than a first preset value. The matching point pairs between a first image area and the corresponding second image area may also be filtered out when the correlation values of all sub-regions of the first image area are smaller than a second preset value. The first preset value, the second preset value and the preset number can be set according to actual needs; for example, the first preset value may be equal to the product of the maximum correlation value and a preset weight, where the weight is a positive number less than 1 (e.g., 0.9). The preset number may be 90% of the total number of sliding positions in the first image area. The second preset value may be 0.9. Those skilled in the art will understand that the above values are merely illustrative and not intended to limit the present disclosure; the first preset value, the second preset value and the preset number may also be other values.
Traditional remote sensing image registration mainly relies on local feature matching to find matching points, or first collects a large amount of data to train a registration model and then performs image registration based on that model. The image registration approach of the embodiments of the present disclosure, on the one hand, performs registration based on road network images, so the registration success rate is high; on the other hand, it requires neither training data nor data annotation, avoiding expensive and time-consuming data collection and labeling, so the registration efficiency is high and the cost is low.
FIG. 2 is a specific flowchart of a remote sensing image registration method according to an embodiment of the present disclosure. The input is two high-resolution satellite remote sensing images to be registered, and the output is the registered remote sensing image. The process has four core steps:
(1) A road extraction model extracts the first road network image (road network B) from the remote sensing image (original image B) and the second road network image (road network A) from the remote sensing image template (original image A). The second road network image is then binarized and the skeleton of the road network is obtained; the intersections of the road network are obtained as the intersections of the skeleton. The road network intersections serve as the points to be matched, i.e., the locations where template matching with the road network is performed in the next step.
(2) At the road intersections, road network information is used to perform template matching within a certain range to obtain matching points. Centered on a road network intersection, a small region (the second image area) is cropped from the second road network image, and a larger region (the first image area) is cropped from the first road network image. The correlations between the second image area and the sub-regions of the first image area are obtained by the standard correlation matching method; the sub-region with the greatest correlation is the target sub-region to be found, and a pair of matching points is obtained from the target sub-region and the second image area. The template matching formula is as follows:
$$R(x,y)=\frac{\sum_{x',y'}\bigl(T(x',y')\cdot I(x+x',y+y')\bigr)}{\sqrt{\sum_{x',y'}T(x',y')^{2}\cdot\sum_{x',y'}I(x+x',y+y')^{2}}}$$
where T(x′, y′) is the pixel value of the second image area T at pixel coordinates (x′, y′), I(x+x′, y+y′) is the pixel value of the first image area I at pixel coordinates (x+x′, y+y′), (x, y) is the position of the second image area within the first image area, and R(x, y) is the correlation at (x, y). The correlation takes values in (0, 1), and the larger the value, the better the correlation. By performing template matching at all road intersections, the coordinates of multiple corresponding points in the two remote sensing images are obtained.
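For illustration, the formula can be transcribed directly in NumPy for a single placement (x, y); in practice a library routine such as cv2.matchTemplate with cv2.TM_CCORR_NORMED evaluates essentially the same quantity at every placement. This is only a sketch of the formula above, not the disclosure's implementation.

```python
# Normalized correlation R(x, y) between template T and the sub-region of I whose
# top-left corner is at column x, row y.
import numpy as np

def correlation_at(I: np.ndarray, T: np.ndarray, x: int, y: int) -> float:
    h, w = T.shape
    sub = I[y:y + h, x:x + w].astype(np.float64)
    T = T.astype(np.float64)
    numerator = np.sum(T * sub)
    denominator = np.sqrt(np.sum(T ** 2) * np.sum(sub ** 2))
    return float(numerator / denominator) if denominator > 0 else 0.0
```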
(3) The template matching results are judged and wrongly matched points are excluded. When template matching is performed at each road intersection, the correlation takes values in (0, 1); as the template slides at each road intersection, the maximum correlation value and the proportion of pixels whose correlation lies in the range (0.9*R_m, R_m) are recorded, where R_m is the maximum correlation value. The larger the maximum value, the better the correlation; the lower the proportion of pixels in the range (0.9*R_m, R_m), the smaller the matching ambiguity. Matching point pairs with low correlation or high ambiguity are excluded, and only matching point pairs with high correlation and low ambiguity are retained, so that matching point pairs with higher accuracy are obtained.
(4) The obtained matching point pairs are used to register the images by applying a homography matrix transformation. The homography matrix is computed from the matching point pairs obtained in step (3), and original image A is warped to obtain the warped image warp A, yielding the registered remote sensing images (warp A and original image B).
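A minimal sketch of this last step is shown below, assuming the filtered matching points are available as two N×2 pixel-coordinate arrays (points in original image A and their counterparts in original image B); RANSAC is used here as a common robust estimator, which the text does not mandate.

```python
# Estimate the homography from matched points and warp original image A into the
# coordinate frame of original image B.
import cv2
import numpy as np

def register_with_homography(image_a: np.ndarray, points_a: np.ndarray,
                             points_b: np.ndarray, output_size: tuple):
    """points_a/points_b: Nx2 matched pixel coordinates in image A / image B.
    output_size: (width, height) of the registered output, typically the size of image B."""
    H, inlier_mask = cv2.findHomography(points_a.astype(np.float32),
                                        points_b.astype(np.float32), cv2.RANSAC, 3.0)
    warp_a = cv2.warpPerspective(image_a, H, output_size)   # "warp A", aligned to image B
    return warp_a, H, inlier_mask
```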
When remote sensing images are used for change detection, surface disturbance detection or building extraction, the method of the embodiments of the present disclosure can be used to align two or more phases of input remote sensing images.
Those skilled in the art will understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
As shown in FIG. 3, the present disclosure further provides a remote sensing image registration apparatus, including: a first acquisition module 301 for acquiring a remote sensing image; a second acquisition module 302 for acquiring a first road network image based on the remote sensing image; and a registration module 303 for performing image registration on the remote sensing image based on the first road network image and a second road network image corresponding to a remote sensing image template.
In some embodiments, the registration module includes: an extraction unit for extracting road network feature points from the second road network image; and a first registration unit for performing image registration on the remote sensing image based on the first road network image and the road network feature points.
In some embodiments, the registration module includes: an acquisition unit for acquiring one or more second image areas in the second road network image that include the road network feature points; a matching unit for performing, for each second image area, template matching between the second image area and the corresponding first image area in the first road network image to obtain matching point pairs in the second image area and the first image area; and a second registration unit for performing image registration on the remote sensing image based on the matching point pairs in each second image area and the corresponding first image area.
In some embodiments, the matching point pairs in the second image area and the first image area are determined by the following modules: a computing module for sliding the second image area over the first image area and, after each slide, calculating the correlation between the sub-region of the first image area that overlaps the second image area and the second image area; and a determination module for determining the first feature point in the sub-region with the greatest correlation and the second feature point in the second image area corresponding to the first feature point as a matching point pair.
In some embodiments, there are multiple second image areas, each second image area corresponds one-to-one to a first image area, and template matching between a second image area and its corresponding first image area yields a set of matching point pairs; the apparatus further includes: a filtering module for filtering, based on a preset condition and before image registration is performed on the remote sensing image based on the matching point pairs in each second image area and the corresponding first image area, each set of matching point pairs obtained by template matching between a second image area and the corresponding first image area.
In some embodiments, for a set of matching point pairs obtained by template matching between a second image area and the corresponding first image area, the corresponding preset condition is determined based on at least one of the following: the value distribution of the correlations between the sub-regions of the first image area and the second image area, where each sub-region is the part of the first image area that overlaps the second image area when the corresponding correlation is calculated; and the correlation value of the sub-region of the first image area that has the greatest correlation with the second image area.
In some embodiments, the filtering module is configured to: filter out the matching point pairs of the first image area and the corresponding second image area when the number of first target sub-regions in the first image area is greater than a preset number, where a first target sub-region is a sub-region of the first image area whose correlation with the corresponding second image area is greater than a first preset value; and/or filter out the matching point pairs of the first image area and the corresponding second image area when the correlation values of all sub-regions of the first image area are smaller than a second preset value.
In some embodiments, for acquiring the first road network image based on the remote sensing image, the second acquisition module includes: an input unit for inputting the remote sensing image into a pre-trained neural network; and an acquisition unit for acquiring the first road network image corresponding to the remote sensing image output by the neural network.
In some embodiments, the functions of or modules included in the apparatus provided by the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments; for their specific implementation, reference may be made to the description of the above method embodiments, which is not repeated here for brevity.
The apparatus embodiments described above are merely illustrative. The modules described as separate components may or may not be physically separated, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solutions of the present disclosure. Those of ordinary skill in the art can understand and implement this without creative effort.
Correspondingly, an embodiment of the present disclosure further provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the method of any of the embodiments when executing the program.
FIG. 4 shows a more specific schematic diagram of the hardware structure of a computer device provided by an embodiment of the present disclosure. The device may include: a processor 401, a memory 402, an input/output interface 403, a communication interface 404 and a bus 405. The processor 401, the memory 402, the input/output interface 403 and the communication interface 404 are communicatively connected to each other within the device through the bus 405.
The processor 401 may be implemented as a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is used to execute related programs to implement the technical solutions provided by the embodiments of the present disclosure.
The memory 402 may be implemented in the form of a ROM (Read-Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, etc. The memory 402 may store an operating system and other application programs; when the technical solutions provided by the embodiments of the present disclosure are implemented in software or firmware, the relevant program code is stored in the memory 402 and called and executed by the processor 401.
The input/output interface 403 is used to connect an input/output module to realize information input and output. The input/output module may be configured in the device as a component (not shown in the figure) or externally connected to the device to provide corresponding functions. The input device may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output device may include a display, a speaker, a vibrator, an indicator light, etc.
The communication interface 404 is used to connect a communication module (not shown in the figure) to realize communication between this device and other devices. The communication module may communicate in a wired manner (for example, USB, network cable, etc.) or in a wireless manner (for example, mobile network, WIFI, Bluetooth, etc.).
The bus 405 includes a path for transferring information between the components of the device (for example, the processor 401, the memory 402, the input/output interface 403 and the communication interface 404).
It should be noted that, although only the processor 401, the memory 402, the input/output interface 403, the communication interface 404 and the bus 405 are shown for the above device, in a specific implementation the device may also include other components necessary for normal operation. In addition, those skilled in the art will understand that the above device may also include only the components necessary to implement the solutions of the embodiments of the present disclosure, rather than all the components shown in the figure.
An embodiment of the present disclosure further provides a remote sensing image registration system. As shown in FIG. 5, the system may include: an image acquisition device 501 for acquiring remote sensing images; and a processing device 502 for acquiring the remote sensing image, acquiring a first road network image based on the remote sensing image, and performing image registration on the remote sensing image based on the first road network image and a second road network image corresponding to a remote sensing image template.
The image acquisition device 501 may be mounted on a carrier such as an aircraft, a satellite or a vehicle, and directly or indirectly transmits the acquired first road network image to the processing device 502; the processing device 502 may perform the method described in any of the above embodiments, for which reference may be made to the above method embodiments and which is not repeated here.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored, the program implementing the method described in any of the foregoing embodiments when executed by a processor.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
From the description of the above implementations, those skilled in the art can clearly understand that the embodiments of the present disclosure can be implemented by means of software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the embodiments of the present disclosure, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product can be stored in a storage medium such as a ROM/RAM, a magnetic disk or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the various embodiments of the present disclosure or in certain parts of the embodiments.
The systems, apparatuses, modules or units described in the above embodiments may be implemented by computer chips or entities, or by products having certain functions. A typical implementation device is a computer, which may take the form of a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email transceiver, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
The embodiments of the present disclosure are described in a progressive manner; for the same or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the apparatus embodiments are basically similar to the method embodiments, their description is relatively simple, and reference may be made to the relevant parts of the description of the method embodiments. The apparatus embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separated, and when implementing the solutions of the embodiments of the present disclosure, the functions of the modules may be implemented in one or more pieces of software and/or hardware. Some or all of the modules may also be selected according to actual needs to achieve the purpose of the solution of the embodiments. Those of ordinary skill in the art can understand and implement this without creative effort.

Claims (12)

  1. A remote sensing image registration method, comprising:
    acquiring a remote sensing image;
    acquiring a first road network image based on the remote sensing image;
    performing image registration on the remote sensing image based on the first road network image and a second road network image corresponding to a remote sensing image template.
  2. The method according to claim 1, wherein performing image registration on the remote sensing image based on the first road network image and the second road network image corresponding to the remote sensing image template comprises:
    extracting road network feature points from the second road network image;
    performing image registration on the remote sensing image based on the first road network image and the road network feature points.
  3. The method according to claim 2, wherein performing image registration on the remote sensing image based on the first road network image and the road network feature points comprises:
    acquiring one or more second image areas in the second road network image that include the road network feature points;
    for each second image area, performing template matching between the second image area and the corresponding first image area in the first road network image to obtain matching point pairs in the second image area and the first image area;
    performing image registration on the remote sensing image based on the matching point pairs in each second image area and the corresponding first image area.
  4. The method according to claim 3, wherein obtaining the matching point pairs in the second image area and the first image area comprises:
    sliding the second image area over the first image area and, after each slide, calculating the correlation between the sub-region of the first image area that overlaps the second image area and the second image area;
    determining the first feature point in the sub-region with the greatest correlation and the second feature point in the second image area corresponding to the first feature point as a matching point pair.
  5. The method according to claim 3 or 4, wherein there are multiple second image areas, each second image area corresponds one-to-one to a first image area, and template matching between a second image area and the corresponding first image area yields a set of matching point pairs; the method further comprises:
    before performing image registration on the remote sensing image based on the matching point pairs in each second image area and the corresponding first image area, filtering, based on a preset condition, each set of matching point pairs obtained by template matching between a second image area and the corresponding first image area.
  6. The method according to claim 5, wherein, for a set of matching point pairs obtained by template matching between a second image area and the corresponding first image area, the corresponding preset condition is determined based on at least one of the following:
    the value distribution of the correlations between the sub-regions of the first image area and the second image area, where each sub-region is the part of the first image area that overlaps the second image area when the corresponding correlation is calculated;
    the correlation value of the sub-region of the first image area that has the greatest correlation with the second image area.
  7. The method according to claim 6, wherein filtering the set of matching point pairs based on the preset condition comprises at least one of the following:
    filtering out the matching point pairs of the first image area and the corresponding second image area when the number of first target sub-regions in the first image area is greater than a preset number, where a first target sub-region is a sub-region of the first image area whose correlation with the corresponding second image area is greater than a first preset value;
    filtering out the matching point pairs of the first image area and the corresponding second image area when the correlation values of all sub-regions of the first image area are smaller than a second preset value.
  8. The method according to any one of claims 1 to 7, wherein acquiring the first road network image based on the remote sensing image comprises:
    inputting the remote sensing image into a pre-trained neural network;
    acquiring the first road network image corresponding to the remote sensing image output by the neural network.
  9. A remote sensing image registration apparatus, comprising:
    a first acquisition module for acquiring a remote sensing image;
    a second acquisition module for acquiring a first road network image based on the remote sensing image;
    a registration module for performing image registration on the remote sensing image based on the first road network image and a second road network image corresponding to a remote sensing image template.
  10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 8.
  11. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1 to 8 when executing the program.
  12. A remote sensing image registration system, comprising:
    an image acquisition device for acquiring remote sensing images; and
    a processing device for: acquiring the remote sensing image; acquiring a first road network image based on the remote sensing image; and performing image registration on the remote sensing image based on the first road network image and a second road network image corresponding to a remote sensing image template.
PCT/CN2021/115513 2020-09-22 2021-08-31 Remote sensing image registration method, apparatus, device, storage medium and system WO2022062853A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011003115.8A CN112150522A (zh) 2020-09-22 2020-09-22 Remote sensing image registration method, apparatus, device, storage medium and system
CN202011003115.8 2020-09-22

Publications (1)

Publication Number Publication Date
WO2022062853A1 true WO2022062853A1 (zh) 2022-03-31

Family

ID=73896208

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/115513 WO2022062853A1 (zh) 2020-09-22 2021-08-31 Remote sensing image registration method, apparatus, device, storage medium and system

Country Status (2)

Country Link
CN (1) CN112150522A (zh)
WO (1) WO2022062853A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150522A (zh) * 2020-09-22 2020-12-29 上海商汤智能科技有限公司 Remote sensing image registration method, apparatus, device, storage medium and system
CN114419116B (zh) * 2022-01-11 2024-04-09 江苏省测绘研究所 Remote sensing image registration method and system based on network matching

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101364259A (zh) * 2008-04-09 2009-02-11 武汉大学 Multi-level knowledge-driven method for extracting road change information from panchromatic remote sensing images
US20170236284A1 (en) * 2016-02-13 2017-08-17 University Of Rochester Registration of aerial imagery to vector road maps with on-road vehicular detection and tracking
CN109035315A (zh) * 2018-08-28 2018-12-18 武汉大学 Remote sensing image registration method and system fusing SIFT features and CNN features
CN109447160A (zh) * 2018-10-31 2019-03-08 武汉大学 Method for automatically matching road intersections between images and vector road data
CN109493320A (zh) * 2018-10-11 2019-03-19 苏州中科天启遥感科技有限公司 Deep-learning-based road extraction method and system for remote sensing images, storage medium and electronic device
CN111539432A (zh) * 2020-03-11 2020-08-14 中南大学 Method for extracting urban roads from remote sensing images with the aid of crowdsourced data
CN112150522A (zh) * 2020-09-22 2020-12-29 上海商汤智能科技有限公司 Remote sensing image registration method, apparatus, device, storage medium and system

Also Published As

Publication number Publication date
CN112150522A (zh) 2020-12-29

Similar Documents

Publication Publication Date Title
US9378431B2 (en) Method of matching image features with reference features and integrated circuit therefor
US9135710B2 (en) Depth map stereo correspondence techniques
Yu et al. Universal SAR and optical image registration via a novel SIFT framework based on nonlinear diffusion and a polar spatial-frequency descriptor
WO2016062159A1 (zh) 图像匹配方法及手机应用测试平台
WO2022062853A1 (zh) 遥感图像的配准方法、装置、设备、存储介质及系统
CN101826157B (zh) 一种地面静止目标实时识别跟踪方法
US9396553B2 (en) Vehicle dimension estimation from vehicle images
US20180268522A1 (en) Electronic device with an upscaling processor and associated method
WO2019215819A1 (ja) 合成開口レーダ画像解析システム、合成開口レーダ画像解析方法および合成開口レーダ画像解析プログラム
Yuan et al. Combining maps and street level images for building height and facade estimation
Wu et al. Remote sensing image registration based on local structural information and global constraint
Chen et al. Scene segmentation of remotely sensed images with data augmentation using U-net++
Qi et al. Research of image matching based on improved SURF algorithm
Jin et al. Registration of UAV images using improved structural shape similarity based on mathematical morphology and phase congruency
Wan et al. The P2L method of mismatch detection for push broom high-resolution satellite images
CN115205558B (zh) 一种具有旋转和尺度不变性的多模态影像匹配方法及装置
CN113793370A (zh) 三维点云配准方法、装置、电子设备及可读介质
Jiao et al. A novel and fast corner detection method for sar imagery
CN105631849A (zh) 多边形目标的变化检测方法及装置
Li et al. Subpixel image registration algorithm based on pyramid phase correlation and upsampling
CN116091998A (zh) 图像处理方法、装置、计算机设备和存储介质
CN115830073A (zh) 地图要素重建方法、装置、计算机设备和存储介质
CN114463503A (zh) 三维模型和地理信息系统的融合方法及装置
Du et al. Block-and-octave constraint SIFT with multi-thread processing for VHR satellite image matching
Hou et al. Navigation landmark recognition and matching algorithm based on the improved SURF

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21871225

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21871225

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.09.2023)