CN117237684A - Pixel-level region matching method and device - Google Patents
- Publication number
- CN117237684A (application CN202311499467.0A)
- Authority
- CN
- China
- Prior art keywords
- target
- feature map
- pixel
- parameter information
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Processing (AREA)
Abstract
The application provides a pixel-level region matching method and device. The method comprises the following steps: acquiring first characteristic parameter information of a target protection area and second characteristic parameter information of a target candidate area; constructing a first multi-channel feature map according to the first characteristic parameter information, and constructing a second multi-channel feature map according to the second characteristic parameter information; calculating, on the second multi-channel feature map, the target pixel point matched with each pixel in the first multi-channel feature map, and obtaining a control feature map corresponding to the target candidate area according to the target pixel points; and saving the first multi-channel feature map and the control feature map as a control image group. By pixelating the features in the region, the method avoids the problem that quantitative analysis cannot be performed because of excessive dependence on prior knowledge; by constructing multi-channel feature maps, it realizes region matching at the pixel level, obtains a control image group with high comparability and accuracy, and thereby improves the evaluation accuracy for the target protection area.
Description
Technical Field
The application relates to the technical field of area monitoring, in particular to a pixel-level area matching method and device.
Background
The creation of natural protection zones has long been recognized as one of the most effective measures for protecting biodiversity. Natural protection areas around the world are numerous and cover ever larger areas, but how to scientifically evaluate the effect of establishing them has always been controversial. Evaluating the protective effect of a natural protection area depends largely on the choice of control: a control with strong comparability must be chosen to obtain reliable evaluation results.
In the prior art, the control is obtained after parameter adjustment based on a large amount of prior knowledge, so the obtained control is influenced by subjective factors and is therefore inaccurate, which in turn makes the evaluation result inaccurate.
Accordingly, the prior art has drawbacks and needs to be improved and developed.
Disclosure of Invention
The application provides a pixel-level region matching method and device, which are used to solve the technical problem that the control obtained in the related art is inaccurate.
In order to achieve the above purpose, the present application adopts the following technical scheme:
an embodiment of a first aspect of the present application provides a pixel-level region matching method, including the steps of:
acquiring first characteristic parameter information of a target protection area and second characteristic parameter information of a target candidate area;
constructing a first multi-channel feature map according to the first feature parameter information, and constructing a second multi-channel feature map according to the second feature parameter information;
calculating, on the second multi-channel feature map, the target pixel point matched with each pixel in the first multi-channel feature map, and obtaining a control feature map corresponding to the target candidate region according to the target pixel points;
and saving the first multi-channel feature map and the control feature map as a control image group.
Optionally, the target candidate area is obtained by taking the center point of the target protection area as the circle center, taking the circular area whose radius is a first preset distance as a first area, taking the circular area whose radius is a second preset distance as a second area, and removing the first area from the second area; the second preset distance is larger than the first preset distance, and the area of the first area is larger than the area of the target protection area.
Optionally, the acquiring the first feature parameter information of the target protection area and the second feature parameter information of the target candidate area includes:
acquiring first remote sensing image data corresponding to the target protection area, and acquiring first characteristic parameter information of the target protection area according to the first remote sensing image data;
acquiring second remote sensing image data corresponding to the target candidate region, and obtaining second characteristic parameter information of the target candidate region according to the second remote sensing image data;
the characteristic parameters in the first characteristic parameter information and the second characteristic parameter information comprise: slope, altitude, vegetation type, forest cut, first distance from road, second distance from water source, and third distance from human activity area.
Optionally, constructing a first multi-channel feature map according to the first feature parameter information, and constructing a second multi-channel feature map according to the second feature parameter information, including:
each characteristic parameter in the first characteristic parameter information is used as a channel characteristic, and a first multi-channel characteristic diagram is constructed;
and constructing a second multi-channel feature map by taking each feature parameter in the second feature parameter information as a channel feature.
Optionally, calculating the target pixel points on the second multi-channel feature map that match each pixel in the first multi-channel feature map, and obtaining a control feature map corresponding to the target candidate region according to the target pixel points, includes:
calculating the feature similarity between each pixel in the first multi-channel feature map and each pixel point on the second multi-channel feature map by using a nearest neighbor algorithm;
taking the pixel points whose feature similarity reaches a preset threshold as target pixel points;
and aggregating the target pixel points to obtain a control feature map corresponding to the target candidate region.
Optionally, after saving the first multi-channel feature map and the control feature map as a control image group, the method further includes:
and monitoring the target protection area and the target candidate area according to the control image group, and obtaining a protection effect evaluation result of the target protection area according to a monitoring result.
Optionally, monitoring the target protection area and the target candidate area according to the control image group, and obtaining a protection effect evaluation result of the target protection area according to the monitoring result, includes:
when the preset monitoring time is reached, acquiring third remote sensing image data of the current target protection area and fourth remote sensing image data of the target candidate area;
obtaining third characteristic parameter information of the target protection area according to the third remote sensing image data, and obtaining fourth characteristic parameter information of the target candidate area according to the fourth remote sensing image data;
constructing a third multi-channel feature map according to the third feature parameter information, and constructing a fourth multi-channel feature map according to the fourth feature parameter information;
calculating target pixel points matched with each pixel in the third multi-channel feature map on the fourth multi-channel feature map, and obtaining a change feature map corresponding to the target candidate region according to each target pixel point;
comparing the third multi-channel feature map with the first multi-channel feature map in the control image group to obtain protection area change information;
comparing the change feature map with the control feature map in the control image group to obtain candidate area change information;
and obtaining a protection effect evaluation result of the target protection area according to the protection area change information and the candidate area change information.
An embodiment of the second aspect of the present application provides a pixel-level region matching apparatus, including:
the acquisition module is used for acquiring first characteristic parameter information of the target protection area and second characteristic parameter information of the target candidate area;
the construction module is used for constructing a first multi-channel characteristic diagram according to the first characteristic parameter information and constructing a second multi-channel characteristic diagram according to the second characteristic parameter information;
the computing module is used for calculating, on the second multi-channel feature map, the target pixel point matched with each pixel in the first multi-channel feature map, and obtaining a control feature map corresponding to the target candidate region according to the target pixel points;
and the storage module is used for saving the first multi-channel feature map and the control feature map as a control image group.
An embodiment of the third aspect of the present application provides a terminal including a memory, a processor, and a pixel-level region matching program stored in the memory and executable on the processor, the processor implementing the steps of the pixel-level region matching method as described above when executing the pixel-level region matching program.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon a pixel-level region matching program which, when executed by a processor, implements the steps of the pixel-level region matching method as described above.
The application has the following beneficial effects: the embodiment of the application acquires first characteristic parameter information of a target protection area and second characteristic parameter information of a target candidate area; constructs a first multi-channel feature map according to the first characteristic parameter information and a second multi-channel feature map according to the second characteristic parameter information; calculates, on the second multi-channel feature map, the target pixel point matched with each pixel in the first multi-channel feature map, and obtains a control feature map corresponding to the target candidate region according to the target pixel points; and saves the first multi-channel feature map and the control feature map as a control image group. By pixelating the features in the region, the method avoids the problem that quantitative analysis cannot be performed because of excessive dependence on prior knowledge; by constructing multi-channel feature maps, it realizes region matching at the pixel level, obtains a control image group with high comparability and accuracy, and thereby improves the evaluation accuracy for the target protection area.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of a preferred embodiment of a pixel level region matching method according to the present application.
FIG. 2 is a diagram showing the relationship among the target protection area, the buffer area and the target candidate area according to the preferred embodiment of the pixel level region matching method of the present application.
FIG. 3 is a diagram of a multi-channel feature map in a preferred embodiment of a pixel level region matching method of the present application.
Fig. 4 is a functional block diagram of a preferred embodiment of the pixel-level region matching device of the present application.
Fig. 5 is a functional block diagram of a preferred embodiment of the terminal of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The main drawbacks of the prior art include: (1) the spatial heterogeneity and local variation of forest landscapes may not be adequately captured; (2) the computational cost is high, especially in large-scale projects; (3) a great deal of prior knowledge and parameter adjustment is required, which makes the outcome dependent on subjective factors; and (4) the selected control is generally a continuous region.
Referring to fig. 1, the pixel-level region matching method according to the embodiment of the application includes the following steps:
step S100, obtaining first characteristic parameter information of a target protection area and second characteristic parameter information of a target candidate area.
In one embodiment, the target candidate area is obtained by taking a center point of the target protection area as a center, taking a circular area with a first preset distance as a radius as a first area, taking a circular area with a second preset distance as a radius as a second area, and removing the first area in the second area; the second preset distance is larger than the first preset distance, and the area of the first area is larger than the area of the target protection area.
Specifically, a protection area to be monitored is taken as the target protection area, and a buffer area with a fixed radius is selected around it: the buffer area is the area obtained by removing the target protection area from the first area, and the target candidate area is the area obtained by removing the first area from the second area. For example, with the first preset distance set to 300 km and the second preset distance set to 700 km, as shown in fig. 2, the middle circular area represents the target protection area, the inner annular area surrounding it is the buffer area, and the outer annular area is the target candidate area.
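The candidate-area construction above can be sketched in raster coordinates. This is a minimal illustration, assuming the two preset distances have already been converted to pixel radii; the function and variable names are purely illustrative, not part of the patent:

```python
import numpy as np

def region_masks(shape, center, r_first_px, r_second_px):
    """Boolean masks for the first area (inner disc) and the target
    candidate area (the ring between the two preset radii)."""
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    d2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    first_area = d2 <= r_first_px ** 2
    # candidate area = second area with the first area removed
    candidate_area = (d2 <= r_second_px ** 2) & ~first_area
    return first_area, candidate_area

first_area, candidate_area = region_masks((200, 200), (100, 100), 60, 90)
```

By construction the two masks are disjoint, mirroring the requirement that the candidate area excludes the first area (and hence the protection area and buffer inside it).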
According to the embodiment of the application, the target candidate region corresponding to the target protection region is selected in the mode, so that the similar environment of the target protection region can be obtained, the problem that the comparison result is not obvious due to the fact that the protection region and the candidate region are too close to each other is avoided, and further, the comparison with strong comparability can be obtained.
In one implementation, the step S100 specifically includes: acquiring first remote sensing image data corresponding to the target protection area, and obtaining first characteristic parameter information of the target protection area according to the first remote sensing image data; acquiring second remote sensing image data corresponding to the target candidate region, and obtaining second characteristic parameter information of the target candidate region according to the second remote sensing image data; the characteristic parameters in the first characteristic parameter information and the second characteristic parameter information include: slope, altitude, vegetation type, forest cutting condition, a first distance from a road, a second distance from a water source, and a third distance from a human activity area.
Specifically, the embodiment of the application collects a plurality of characteristic parameters for the target protection area and the target candidate area respectively, including slope, altitude, vegetation type, forest cutting condition, a first distance from a road, a second distance from a water source, a third distance from a human activity area, and the like. These characteristic parameters can be obtained from remote sensing image data. A remote sensing (RS) image is a film or photo that records the electromagnetic radiation of various ground objects, and is mainly divided into aerial photos and satellite photos. The embodiment of the application converts the geographic matching problem into a target-pixel matching problem in the image, and acquires information about the protection area and the candidate area using remote sensing image processing technology. The first distance from a road, for example, can be obtained by acquiring road network data and then computing the distance from each position to the nearest road with an algorithm.
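As one hedged illustration of how a per-pixel "distance from road" channel might be rasterised, the brute-force sketch below computes, for every cell, the distance to the nearest road cell. This is an assumption about the implementation (the patent only says "using an algorithm"); production pipelines would typically use a distance transform instead:

```python
import numpy as np

def distance_to_nearest(feature_mask, cell_size_m=30.0):
    """Euclidean distance from every raster cell to the nearest True
    cell in feature_mask (e.g. a rasterised road network), in metres.
    Brute force: O(cells x feature cells), fine only for small grids."""
    ys, xs = np.nonzero(feature_mask)
    pts = np.stack([ys, xs], axis=1).astype(float)
    rows, cols = feature_mask.shape
    rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    grid = np.stack([rr, cc], axis=-1).reshape(-1, 2).astype(float)
    d = np.sqrt(((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return d.reshape(rows, cols) * cell_size_m

roads = np.zeros((5, 5), dtype=bool)
roads[2, 2] = True
dist = distance_to_nearest(roads, cell_size_m=1.0)
```

The same pattern applies to the distance-from-water-source and distance-from-human-activity channels, swapping in the corresponding rasterised mask.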
The embodiment of the application can also adopt other remote sensing data sources, such as optical remote sensing, synthetic aperture radar remote sensing and the like, so as to further enrich the characteristic information.
The embodiment of the application uses traditional remote sensing processing methods, such as minimum distance classification and support vector machines, for feature matching, and acquires a plurality of characteristic parameters of the region so as to construct a multi-channel feature map from each characteristic parameter, thereby realizing pixel-level region matching and finally improving the evaluation accuracy.
As shown in fig. 1, the pixel-level region matching method further includes the following steps:
step S200, a first multi-channel feature map is constructed according to the first feature parameter information, and a second multi-channel feature map is constructed according to the second feature parameter information.
In one embodiment, the step S200 specifically includes: each characteristic parameter in the first characteristic parameter information is used as a channel characteristic, and a first multi-channel characteristic diagram is constructed; and constructing a second multi-channel feature map by taking each feature parameter in the second feature parameter information as a channel feature.
In particular, this embodiment constructs the slope, altitude, vegetation type, forest cutting condition, first distance from a road, second distance from a water source, and third distance from a human activity area as the channel features of an image, as shown in fig. 3, with each channel of the image representing one feature. Each pixel point is then a 1×n feature vector composed of the n features. Each pixel of the target protection area and of the target candidate area is such a 1×n feature vector; the two areas differ only in their number of pixels.
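The channel stacking described above can be sketched as follows. The per-channel min-max normalisation is an added assumption (the patent does not specify any scaling), included so that no single parameter, such as altitude in metres, dominates later similarity computations:

```python
import numpy as np

def build_multichannel_map(channels):
    """Stack n per-parameter rasters (slope, altitude, ...) into an
    H x W x n feature map, so each pixel is a 1 x n feature vector."""
    fmap = np.stack(channels, axis=-1).astype(float)
    mn = fmap.min(axis=(0, 1), keepdims=True)
    mx = fmap.max(axis=(0, 1), keepdims=True)
    # constant channels map to 0 rather than dividing by zero
    return (fmap - mn) / np.where(mx > mn, mx - mn, 1.0)

slope = np.random.rand(64, 64)          # illustrative stand-ins for
altitude = np.random.rand(64, 64) * 3000.0  # real remote-sensing rasters
fmap = build_multichannel_map([slope, altitude])
```

The same function builds both the first multi-channel feature map (protection area) and the second (candidate area); they differ only in raster extent.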
According to the embodiment of the application, the candidate region matching method at the pixel level is realized by constructing the multichannel characteristic diagram, and the comparison with strong comparability is further obtained.
As shown in fig. 1, the pixel-level region matching method further includes the following steps:
and step S300, calculating target pixel points matched with each pixel in the first multi-channel feature map on the second multi-channel feature map, and obtaining a comparison feature map corresponding to the target candidate region according to each target pixel point.
In one embodiment, the step S300 specifically includes: calculating the feature similarity between each pixel in the first multi-channel feature map and each pixel point on the second multi-channel feature map by using a nearest neighbor algorithm; taking the pixel points whose feature similarity reaches a preset threshold as target pixel points; and aggregating the target pixel points to obtain a control feature map corresponding to the target candidate region.
Specifically, the pixel point in the target candidate area that best matches pixel i of the target protection area under KNN (the k-nearest-neighbor algorithm) is taken as a target pixel point, so that each pixel of the target protection area corresponds to one target pixel point in the target candidate area. Finally, all the matched points are combined into a complete control feature map for comparative analysis.
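A minimal sketch of this k=1 nearest-neighbour matching in plain NumPy is given below. Euclidean feature distance is an assumption (the patent does not name a metric), and the patent's "similarity reaching a preset threshold" is expressed here as a maximum allowed distance:

```python
import numpy as np

def match_pixels(protected_map, candidate_map, max_distance=None):
    """For each pixel vector of the protected-area feature map, find
    the nearest pixel vector in the candidate-area feature map.
    Returns the flat index of the best candidate per protected pixel
    (-1 where no candidate falls within max_distance) and the match
    distance itself."""
    P = protected_map.reshape(-1, protected_map.shape[-1])
    C = candidate_map.reshape(-1, candidate_map.shape[-1])
    d = np.sqrt(((P[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1))
    idx = d.argmin(axis=1)
    best = d[np.arange(len(P)), idx]
    if max_distance is not None:
        idx = np.where(best <= max_distance, idx, -1)
    return idx, best

protected = np.array([[[0.0, 0.0], [1.0, 1.0]]])  # 1 x 2 pixels, 2 channels
candidate = np.array([[[1.0, 1.0], [0.1, 0.0]]])
idx, best = match_pixels(protected, candidate)
```

For large rasters the dense distance matrix is impractical; a KD-tree or ball-tree nearest-neighbour index would be the usual replacement, but the brute-force form keeps the idea visible.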
This embodiment can fully capture spatial heterogeneity and local variation; by pixelating the geographic positions and treating each influence factor as an image channel, it avoids the problem that quantitative analysis cannot be performed because of excessive dependence on prior knowledge, and it realizes point-to-point comparative analysis instead of comparing one continuous area against another.
As shown in fig. 1, the pixel-level region matching method further includes the following steps:
And step S400, saving the first multi-channel feature map and the control feature map as a control image group.
The embodiment of the application improves the accuracy of the comparative analysis of the protection area and reduces the influence of subjective factors; it improves the efficiency of the comparative analysis through automatic processing; the matching effect based on the KNN algorithm is superior to that of the traditional distance-based matching method; and matching at the pixel level avoids mismatches and missed matches during region matching.
In one embodiment, after the step S400, the method further includes: step S500, monitoring the target protection area and the target candidate area according to the control image group, and obtaining a protection effect evaluation result of the target protection area according to the monitoring result.
Specifically, once a control image group with strong comparability has been obtained, subsequently monitored data are compared against it, which improves the accuracy of the comparative analysis of the protection area and reduces the influence of subjective factors.
In one implementation, the step S500 specifically includes:
step S510, when a preset monitoring time is reached, acquiring third remote sensing image data of the current target protection area and fourth remote sensing image data of the target candidate area;
step S520, obtaining third characteristic parameter information of the target protection area according to the third remote sensing image data, and obtaining fourth characteristic parameter information of the target candidate area according to the fourth remote sensing image data;
step S530, a third multi-channel feature map is constructed according to the third feature parameter information, and a fourth multi-channel feature map is constructed according to the fourth feature parameter information;
step S540, calculating target pixel points matched with each pixel in the third multi-channel feature map on the fourth multi-channel feature map, and obtaining a change feature map corresponding to the target candidate region according to each target pixel point;
step S550, comparing the third multi-channel feature map with the first multi-channel feature map in the control image group to obtain protection area change information;
step S560, comparing the change feature map with the control feature map in the control image group to obtain candidate area change information;
step S570, obtaining a protection effect evaluation result of the target protection area according to the protection area change information and the candidate area change information.
Specifically, the embodiment of the application improves the efficiency of the comparative analysis through automatic processing. Time-series data can also be introduced to realize dynamic comparative analysis, so as to evaluate the protection effect of the protection area over different time periods. Application scenarios include wetland protection areas, ocean protection areas, and the like, so the method has wide applicability.
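The comparison of steps S550 to S570 can be sketched as below. Summarising "change information" as the mean absolute per-channel difference, and the final evaluation as control change minus protected-area change, is an illustrative assumption rather than the patent's prescribed formula:

```python
import numpy as np

def change_info(before, after):
    """Mean absolute per-pixel, per-channel change between two maps."""
    return float(np.abs(after - before).mean())

def protection_effect(protected_before, protected_after,
                      control_before, control_after):
    """Positive score: the protected area changed less than its matched
    control, which suggests the protection is having an effect."""
    return (change_info(control_before, control_after)
            - change_info(protected_before, protected_after))
```

Running this at each preset monitoring time yields a time series of scores, matching the dynamic comparative analysis mentioned above.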
In an embodiment, as shown in fig. 4, based on the above-mentioned pixel-level region matching method, the present application further provides a pixel-level region matching device, which includes:
an obtaining module 100, configured to obtain first feature parameter information of a target protection area and second feature parameter information of a target candidate area;
a construction module 200, configured to construct a first multi-channel feature map according to the first feature parameter information, and construct a second multi-channel feature map according to the second feature parameter information;
the calculating module 300 is configured to calculate, on the second multi-channel feature map, the target pixel point matched with each pixel in the first multi-channel feature map, and obtain a control feature map corresponding to the target candidate region according to the target pixel points;
and the saving module 400 is configured to save the first multi-channel feature map and the control feature map as a control image group.
It should be noted that the foregoing explanation of the pixel-level region matching method embodiment is also applicable to the pixel-level region matching device of this embodiment, and will not be repeated here.
The application discloses a pixel-level region matching method and device. The method comprises the following steps: acquiring first characteristic parameter information of a target protection area and second characteristic parameter information of a target candidate area; constructing a first multi-channel feature map according to the first characteristic parameter information, and constructing a second multi-channel feature map according to the second characteristic parameter information; calculating, on the second multi-channel feature map, the target pixel point matched with each pixel in the first multi-channel feature map, and obtaining a control feature map corresponding to the target candidate region according to the target pixel points; and saving the first multi-channel feature map and the control feature map as a control image group. By pixelating the features in the region, the method avoids the problem that quantitative analysis cannot be performed because of excessive dependence on prior knowledge; by constructing multi-channel feature maps, it realizes region matching at the pixel level, obtains a control image group with high comparability and accuracy, and thereby improves the evaluation accuracy for the target protection area.
Fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application. The terminal may include:
memory 501, processor 502, and a computer program stored on memory 501 and executable on processor 502.
The processor 502 implements the pixel-level region matching method provided in the above-described embodiment when executing a program.
Further, the terminal further includes:
a communication interface 503 for communication between the memory 501 and the processor 502.
Memory 501 for storing a computer program executable on processor 502.
The memory 501 may include high-speed RAM memory and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
If the memory 501, the processor 502, and the communication interface 503 are implemented independently, they may be connected to each other via a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. Buses may be divided into address buses, data buses, control buses, etc. For ease of illustration, only one line is shown in the figures, but this does not mean there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 501, the processor 502, and the communication interface 503 are integrated on a chip, the memory 501, the processor 502, and the communication interface 503 may perform communication with each other through internal interfaces.
The processor 502 may be a central processing unit (Central Processing Unit, abbreviated as CPU) or an application specific integrated circuit (Application Specific Integrated Circuit, abbreviated as ASIC) or one or more integrated circuits configured to implement embodiments of the present application.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the pixel-level region matching method as above.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features thereof, provided they do not contradict each other.
Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "N" means at least two, for example, two, three, and so on, unless specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes additional implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example, an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch and execute instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or N wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be captured electronically, for example by optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the N steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one, or a combination, of the following techniques known in the art: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, and the program may be stored in a computer readable storage medium, where the program when executed includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented as software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like. While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.
Claims (10)
1. A pixel-level region matching method, comprising:
acquiring first feature parameter information of a target protection area and second feature parameter information of a target candidate area;
constructing a first multi-channel feature map according to the first feature parameter information, and constructing a second multi-channel feature map according to the second feature parameter information;
calculating target pixel points matched with each pixel in the first multi-channel feature map on the second multi-channel feature map, and obtaining a comparison feature map corresponding to the target candidate region according to each target pixel point;
and saving the first multi-channel feature map and the comparison feature map as a comparison image group.
2. The pixel-level region matching method according to claim 1, wherein the target candidate region is obtained by taking the center point of the target protection area as the center, taking the circular region whose radius is a first preset distance as a first region, taking the circular region whose radius is a second preset distance as a second region, and removing the first region from the second region; the second preset distance is greater than the first preset distance, and the area of the first region is greater than the area of the target protection area.
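Read as a raster operation, the annular candidate region of claim 2 amounts to subtracting an inner disc from an outer disc around the protection-area center. A minimal sketch in Python/NumPy (the function name, parameters, and Euclidean pixel-distance model are illustrative assumptions, not part of the claim):

```python
import numpy as np

def candidate_region_mask(shape, center, r1, r2):
    """Boolean mask of the annular candidate region: pixels farther than
    the first preset distance r1 but within the second preset distance r2
    of the protection-area center. Per claim 2, r2 must exceed r1."""
    if r2 <= r1:
        raise ValueError("second preset distance must exceed the first")
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    dist = np.hypot(rows - center[0], cols - center[1])
    # second (outer) circular region minus the first (inner) circular region
    return (dist > r1) & (dist <= r2)

mask = candidate_region_mask((100, 100), (50, 50), 10, 30)
```

The claim's further constraint, that the first region exceed the protection area in size, would be checked separately against the protection-area mask.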
3. The pixel-level region matching method according to claim 1, wherein the acquiring the first feature parameter information of the target protection area and the second feature parameter information of the target candidate area includes:
acquiring first remote sensing image data corresponding to the target protection area, and acquiring first characteristic parameter information of the target protection area according to the first remote sensing image data;
acquiring second remote sensing image data corresponding to the target candidate region, and obtaining second feature parameter information of the target candidate region according to the second remote sensing image data;
the feature parameters in the first feature parameter information and the second feature parameter information comprise: slope, altitude, vegetation type, forest cutting, a first distance from a road, a second distance from a water source, and a third distance from a human activity area.
4. A pixel level region matching method according to claim 3, wherein constructing a first multi-channel feature map from the first feature parameter information and constructing a second multi-channel feature map from the second feature parameter information comprises:
constructing a first multi-channel feature map by taking each feature parameter in the first feature parameter information as a channel feature;
and constructing a second multi-channel feature map by taking each feature parameter in the second feature parameter information as a channel feature.
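Claims 3 and 4 together describe stacking one raster per feature parameter (slope, altitude, vegetation type, and the distance features) into a multi-channel feature map. A hedged sketch of that construction (dictionary keys, shapes, and the channels-last layout are assumptions for illustration):

```python
import numpy as np

def build_feature_map(params):
    """Stack equally shaped 2-D parameter rasters into one multi-channel
    feature map, one channel per feature parameter (channels last)."""
    channels = [np.asarray(v, dtype=np.float32) for v in params.values()]
    return np.stack(channels, axis=-1)

fmap = build_feature_map({
    "slope": np.zeros((64, 64)),
    "altitude": np.ones((64, 64)),
    "dist_to_road": np.full((64, 64), 2.0),
})
```

In practice each raster would be derived from the corresponding remote sensing image data and resampled to a common grid before stacking.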
5. The pixel-level region matching method according to claim 3, wherein calculating a target pixel point on the second multi-channel feature map, which is matched with each pixel in the first multi-channel feature map, and obtaining a comparison feature map corresponding to the target candidate region according to each target pixel point comprises:
calculating the feature similarity between each pixel in the first multi-channel feature map and each pixel point on the second multi-channel feature map by using a nearest neighbor algorithm;
taking the pixel point with the feature similarity reaching a preset threshold as a target pixel point;
and aggregating the target pixel points to obtain a comparison feature map corresponding to the target candidate region.
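One way to realize the matching in claim 5 is a brute-force nearest-neighbour search over pixel feature vectors. The cosine similarity measure and the threshold handling below are assumptions; the claim only requires a nearest neighbour algorithm and a preset similarity threshold:

```python
import numpy as np

def match_pixels(ref_map, cand_map, threshold=0.9):
    """For each pixel vector of the reference (protection-area) feature
    map, find its nearest neighbour among the candidate-area pixel vectors
    and keep it as a target pixel when the cosine similarity reaches the
    threshold. Returns flat indices into the candidate map."""
    ref = ref_map.reshape(-1, ref_map.shape[-1]).astype(float)
    cand = cand_map.reshape(-1, cand_map.shape[-1]).astype(float)
    # normalise rows so a plain dot product equals cosine similarity
    ref_n = ref / (np.linalg.norm(ref, axis=1, keepdims=True) + 1e-12)
    cand_n = cand / (np.linalg.norm(cand, axis=1, keepdims=True) + 1e-12)
    sims = ref_n @ cand_n.T              # (n_ref, n_cand) similarity matrix
    best = sims.argmax(axis=1)           # nearest neighbour per reference pixel
    keep = sims[np.arange(len(ref)), best] >= threshold
    return best[keep]

# tiny 1x2-pixel, 2-channel example: each reference pixel has one exact match
matched = match_pixels(np.array([[[1.0, 0.0], [0.0, 1.0]]]),
                       np.array([[[0.0, 1.0], [1.0, 0.0]]]))
```

Aggregating the matched candidate pixels, for example gathering their feature vectors back into a raster, would then yield the comparison feature map of claim 5.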
6. The pixel-level region matching method according to claim 1, further comprising, after saving the first multi-channel feature map and the comparison feature map as a comparison image group:
monitoring the target protection area and the target candidate area according to the comparison image group, and obtaining a protection effect evaluation result of the target protection area according to a monitoring result.
7. The pixel-level region matching method according to claim 6, wherein monitoring the target protection area and the target candidate area according to the comparison image group, and obtaining a protection effect evaluation result of the target protection area according to a monitoring result, comprises:
when the preset monitoring time is reached, acquiring third remote sensing image data of the current target protection area and fourth remote sensing image data of the target candidate area;
obtaining third feature parameter information of the target protection area according to the third remote sensing image data, and obtaining fourth feature parameter information of the target candidate area according to the fourth remote sensing image data;
constructing a third multi-channel feature map according to the third feature parameter information, and constructing a fourth multi-channel feature map according to the fourth feature parameter information;
calculating target pixel points matched with each pixel in the third multi-channel feature map on the fourth multi-channel feature map, and obtaining a change feature map corresponding to the target candidate region according to each target pixel point;
comparing the third multi-channel feature map with the first multi-channel feature map in the comparison image group to obtain protection area change information;
comparing the change feature map with the comparison feature map in the comparison image group to obtain candidate area change information;
and obtaining a protection effect evaluation result of the target protection area according to the protection area change information and the candidate area change information.
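The evaluation in claim 7 compares baseline and follow-up feature maps for both areas. A toy scoring sketch follows; the absolute differencing and the scalar score are assumptions for illustration, since the patent does not disclose a concrete evaluation formula:

```python
import numpy as np

def protection_effect_score(first_map, third_map, comparison_map, change_map):
    """Mean absolute per-pixel change inside the protection area (first vs.
    third feature maps) against that of the matched candidate area
    (comparison vs. change feature maps). A positive score means the
    protected area changed less than its matched counterfactual area."""
    protection_change = np.abs(third_map - first_map).mean()
    candidate_change = np.abs(change_map - comparison_map).mean()
    return float(candidate_change - protection_change)

score = protection_effect_score(
    np.zeros((8, 8, 3)), np.zeros((8, 8, 3)),   # protection area unchanged
    np.zeros((8, 8, 3)), np.ones((8, 8, 3)),    # candidate area changed
)
```

This mirrors the matched-control logic of the propensity-score studies cited in the non-patent literature: the candidate area serves as the counterfactual for the protected one.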
8. A pixel-level region matching apparatus, comprising:
the acquisition module is used for acquiring first feature parameter information of a target protection area and second feature parameter information of a target candidate area;
the construction module is used for constructing a first multi-channel feature map according to the first feature parameter information, and constructing a second multi-channel feature map according to the second feature parameter information;
the computing module is used for computing target pixel points matched with each pixel in the first multi-channel feature map on the second multi-channel feature map, and obtaining a comparison feature map corresponding to the target candidate region according to each target pixel point;
and the storage module is used for saving the first multi-channel feature map and the comparison feature map as a comparison image group.
9. A terminal comprising a memory, a processor and a pixel-level region matching program stored in the memory and executable on the processor, the processor implementing the steps of the pixel-level region matching method of any one of claims 1-7 when the pixel-level region matching program is executed.
10. A computer readable storage medium, wherein a pixel level region matching program is stored on the computer readable storage medium, which when executed by a processor, implements the steps of the pixel level region matching method as claimed in any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311499467.0A CN117237684B (en) | 2023-11-13 | 2023-11-13 | Pixel-level region matching method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117237684A true CN117237684A (en) | 2023-12-15 |
CN117237684B CN117237684B (en) | 2024-01-16 |
Family
ID=89095203
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311499467.0A Active CN117237684B (en) | 2023-11-13 | 2023-11-13 | Pixel-level region matching method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117237684B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102496067A (en) * | 2011-12-05 | 2012-06-13 | 中国科学院地理科学与资源研究所 | Lake nutrient partition control technique |
US20140355892A1 (en) * | 2013-05-31 | 2014-12-04 | Samsung Electronics Co., Ltd. | Image processing method and apparatus |
US20180211401A1 (en) * | 2017-01-26 | 2018-07-26 | Samsung Electronics Co., Ltd. | Stereo matching method and apparatus, image processing apparatus, and training method therefor |
US20200226413A1 (en) * | 2017-08-31 | 2020-07-16 | Southwest Jiaotong University | Fast and robust multimodal remote sensing images matching method and system |
CN114494896A (en) * | 2021-12-22 | 2022-05-13 | 山东土地集团数字科技有限公司 | Cultivated land state monitoring method, equipment and medium for cultivated land protection |
CN115830354A (en) * | 2022-10-25 | 2023-03-21 | 北京旷视科技有限公司 | Binocular stereo matching method, device and medium |
CN116977674A (en) * | 2022-11-18 | 2023-10-31 | 腾讯科技(深圳)有限公司 | Image matching method, related device, storage medium and program product |
Non-Patent Citations (3)
Title |
---|
Liu Zigang et al.: "Analysis of the protection effect of wetland nature reserves based on PSM", Journal of Northwest University (Natural Science Edition), vol. 50, no. 2, pages 241 - 249 *
Wu Le et al.: "Does ecological compensation help reduce poverty? An empirical analysis of three counties in Guizhou Province based on propensity score matching", Rural Economy, no. 09, pages 48 - 55 *
Chen Bing et al.: "Evaluating the effectiveness of forest protection in the Cangshan Nature Reserve based on the propensity score matching method", Biodiversity Science, no. 09, pages 89 - 97 *
Also Published As
Publication number | Publication date |
---|---|
CN117237684B (en) | 2024-01-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110660066B (en) | Training method of network, image processing method, network, terminal equipment and medium | |
JP7221089B2 (en) | Stable simultaneous execution of location estimation and map generation by removing dynamic traffic participants | |
CN109117825B (en) | Lane line processing method and device | |
CN111179230B (en) | Remote sensing image contrast change detection method and device, storage medium and electronic equipment | |
US9489716B1 (en) | Street-level imagery acquisition and selection | |
CN112183395A (en) | Road scene recognition method and system based on multitask learning neural network | |
CN111476099B (en) | Target detection method, target detection device and terminal equipment | |
EA004910B1 (en) | Method and apparatus for determining regions of interest in images and for image transmission | |
CN111192239A (en) | Method and device for detecting change area of remote sensing image, storage medium and electronic equipment | |
CN113158773B (en) | Training method and training device for living body detection model | |
CN112991218B (en) | Image processing method, device, equipment and storage medium | |
CN110111382B (en) | Irregular area calculation method and device, computer equipment and storage medium | |
CN117237684B (en) | Pixel-level region matching method and device | |
CN116243273B (en) | Photon counting laser radar data filtering method for vegetation canopy extraction | |
CN111833341A (en) | Method and device for determining stripe noise in image | |
CN116740145A (en) | Multi-target tracking method, device, vehicle and storage medium | |
CN111126106B (en) | Lane line identification method and device | |
CN111199188A (en) | Pixel processing method and device for remote sensing image difference map, storage medium and equipment | |
CN118115530A (en) | Vehicle track generation method and device, electronic equipment and storage medium | |
CN115797310A (en) | Method for determining inclination angle of photovoltaic power station group string and electronic equipment | |
CN112950709B (en) | Pose prediction method, pose prediction device and robot | |
CN115424131A (en) | Remote sensing image cloud detection optimal threshold selection method based on absolute pixels, cloud detection method and system | |
CN114913105A (en) | Laser point cloud fusion method and device, server and computer readable storage medium | |
JP6997664B2 (en) | Status judgment device | |
CN112639868A (en) | Image processing method and device and movable platform |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||