CN112465886A - Model generation method, device, equipment and readable storage medium

Model generation method, device, equipment and readable storage medium

Info

Publication number
CN112465886A
Authority
CN
China
Prior art keywords
image
difference
processed
pixel point
registered
Prior art date
Legal status
Pending
Application number
CN202011427029.XA
Other languages
Chinese (zh)
Inventor
谭吉福
朱江
杨晓龙
黄鸿志
周明玥
Current Assignee
Kq Geo Technologies Co ltd
Original Assignee
Kq Geo Technologies Co ltd
Priority date: 2020-12-09
Filing date: 2020-12-09
Publication date: 2021-03-09
Application filed by Kq Geo Technologies Co ltd filed Critical Kq Geo Technologies Co ltd
Priority to CN202011427029.XA
Publication of CN112465886A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a model generation method, apparatus, device, and readable storage medium. The model generation method comprises the following steps: registering a first image to be processed and a second image to be processed to obtain a first registered image corresponding to the first image to be processed and a second registered image corresponding to the second image to be processed, the two images to be processed being two images of the same object; detecting the difference between the first registered image and the second registered image to obtain an image difference map; performing blocking processing on the image difference map to obtain at least two block images; and training a model to be trained according to the first registered image, the second registered image and the block images to obtain a trained difference detection model, where the difference detection model can detect a difference region between two different images. By detecting with the difference detection model, the application speeds up detection and improves detection accuracy.

Description

Model generation method, device, equipment and readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a model generation method, apparatus, device, and readable storage medium.
Background
With the development of the aerospace industry, remote sensing image change detection has become a hot research topic in remote sensing image processing, with important guiding significance for fields such as land planning, disaster detection, environmental management, and urban transformation. At present, high-resolution imaging platforms such as high-resolution satellites and high-definition unmanned aerial vehicles are widely adopted. Because the detail information presented by high-resolution remote sensing images is very rich, difference detection between two remote sensing images is easily affected by noise. In the prior art, ground features in the remote sensing images are therefore first classified, and difference detection is then performed based on the classification results.
Disclosure of Invention
The embodiments of the application provide a model generation method, apparatus, device, and readable storage medium to solve the problems in the related art. The technical solution is as follows:
in a first aspect, an embodiment of the present application provides a model generation method, including:
registering a first image to be processed and a second image to be processed to obtain a first registered image corresponding to the first image to be processed and a second registered image corresponding to the second image to be processed; the first image to be processed and the second image to be processed being two images of the same object;
detecting a difference between the first registered image and the second registered image to obtain an image difference map;
performing blocking processing on the image difference map to obtain at least two block images;
and training a model to be trained according to the first registered image, the second registered image and the block images to obtain a trained difference detection model, wherein the difference detection model can detect a difference region between two different images.
In a second aspect, an embodiment of the present application provides a model generation apparatus, including:
the registration module is used for registering a first image to be processed and a second image to be processed to obtain a first registered image corresponding to the first image to be processed and a second registered image corresponding to the second image to be processed; the first image to be processed and the second image to be processed being two images of the same object;
an image difference map obtaining module, configured to detect a difference between the first registered image and the second registered image to obtain an image difference map;
the block processing module is used for performing blocking processing on the image difference map to obtain at least two block images;
and the difference detection model obtaining module is used for training a model to be trained according to the first registered image, the second registered image and the block images to obtain a trained difference detection model, wherein the difference detection model can detect a difference region between two different images.
In a third aspect, an embodiment of the present application provides a model generation device, including: a memory and a processor. The memory and the processor communicate with each other via an internal connection path; the memory is configured to store instructions, and the processor is configured to execute the instructions stored in the memory so as to perform the method of any of the above aspects.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program which, when run on a computer, performs the method of any one of the above aspects.
The advantages or beneficial effects of the above technical solution include at least the following: the two images to be processed are registered, the difference map obtained from the two registered images is divided into blocks, and the two registered images together with the block images are then used to train the model to be trained. The difference detection model obtained by training can therefore directly learn and mine deep features of the images and detect difference regions between different images, without requiring ground feature classification, thereby avoiding the great effort that classification requires for tuning segmentation scales and improving detection efficiency. In addition, the difference map blocks used for model training have a finer granularity and contain more concentrated image information, which helps mine deep image features and improves the stability and accuracy of the difference detection model.
The foregoing summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present application will be readily apparent by reference to the drawings and following detailed description.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
FIG. 1 is a flow chart of a model generation method according to an embodiment of the present application;
FIG. 2 is an exemplary diagram of a model generation method according to an embodiment of the present application;
FIG. 3 is a block diagram of a model generation apparatus according to an embodiment of the present application;
FIG. 4 is a block diagram of a model generation device according to an embodiment of the present application.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
FIG. 1 shows a flow diagram of a model generation method according to an embodiment of the present application. As shown in fig. 1, the model generation method may include:
S101, registering a first image to be processed and a second image to be processed to obtain a first registered image corresponding to the first image to be processed and a second registered image corresponding to the second image to be processed; the first image to be processed and the second image to be processed being two images of the same object;
S102, detecting the difference between the first registered image and the second registered image to obtain an image difference map;
S103, performing blocking processing on the image difference map to obtain at least two block images;
S104, training the model to be trained according to the first registered image, the second registered image and the block images to obtain a trained difference detection model, where the difference detection model can detect a difference region between two different images.
The first image to be processed and the second image to be processed can be high-resolution remote sensing images. They may be two images acquired at different times, by different imaging devices, or under different shooting conditions (such as climate, illumination, shooting position, and/or shooting angle).
The same object captured in the first image to be processed and the second image to be processed may include: people, vehicles, roads, buildings, natural landscapes, regions, and the like.
In the above embodiment, the first image to be processed and the second image to be processed are registered, difference detection is performed on the two registered images, and the resulting difference map is divided into blocks. The first registered image and the second registered image are then used to determine the samples input to the model, the block images are used to determine the supervision information (i.e., label information), and the model to be trained is trained to obtain the difference detection model. The difference detection model can directly learn and mine deep features of images and detect difference regions between different images, omitting the ground feature classification step, avoiding the great effort that classification requires for tuning segmentation scales, and improving detection efficiency.
In addition, the difference map blocks have a finer granularity and contain more concentrated image details, which helps mine more image features and can therefore improve the detection accuracy of the difference detection model.
Furthermore, because the difference map is divided into blocks before model training, the difference detection model can be trained to detect, synchronously in multiple threads, the image regions of the first registered image and the second registered image that correspond to the block images, enabling multi-threaded detection and improving detection efficiency.
In one embodiment, in step S101, registering the first image to be processed and the second image to be processed includes:
determining position information of a first pixel point in a first image to be processed;
determining a second pixel point corresponding to the first pixel point in the second image to be processed, and determining the position information of the second pixel point;
and registering the first image to be processed and the second image to be processed according to the position information of the first pixel point and the position information of the second pixel point.
In the above embodiment, pixel-level image registration helps reduce registration errors and improve registration accuracy. Furthermore, the subsequent difference detection can be performed on the content represented by the pixel points at the same position in the two images, improving the accuracy of difference detection.
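To make the pixel-level pairing concrete, the following minimal Python sketch (an illustration only, not the patent's exact procedure; the matched positions and the affine transform model are assumptions) estimates a registration transform from corresponding pixel positions by least squares:

```python
import numpy as np

def estimate_affine(src_pts: np.ndarray, dst_pts: np.ndarray) -> np.ndarray:
    """src_pts, dst_pts: (N, 2) arrays of matched (row, col) positions,
    i.e. the first pixel points and their corresponding second pixel points."""
    n = src_pts.shape[0]
    # Homogeneous design matrix: one row [row, col, 1] per source point.
    a = np.hstack([src_pts, np.ones((n, 1))])
    # Solve a @ m ≈ dst_pts for the 3x2 affine parameter matrix m.
    m, *_ = np.linalg.lstsq(a, dst_pts, rcond=None)
    return m  # maps a source position [row, col, 1] to its destination

# Hypothetical matched positions for illustration.
src = np.array([[10, 12], [40, 8], [25, 30], [60, 55]], dtype=float)
dst = np.array([[12, 15], [42, 10], [27, 33], [62, 58]], dtype=float)
print(estimate_affine(src, dst))
```

Once the transform is estimated, the second image can be resampled onto the first image's grid so that pixel points at the same position correspond.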
Further, after determining a second pixel point corresponding to the first pixel point in the second image to be processed, the method further comprises the following steps:
determining a candidate region with a preset size in the second image to be processed by taking the second pixel point as a central point;
selecting a target pixel point with the minimum spectral difference with the first pixel point from all candidate pixel points in the candidate region;
and correcting the second pixel point according to the target pixel point so as to register the first image to be processed and the second image to be processed by using the corrected second pixel point.
Because the first image to be processed and the second image to be processed differ in shooting angle and illumination conditions, the same object can appear quite different in the two images, so the preliminary registration of the pixel points contains errors. Therefore, after the initial registration, the spectral information around the initially registered pixel points is taken into account to determine the final registered pixel point pairs, improving registration accuracy. Meanwhile, this registration correction step relaxes the hard requirement of traditional registration methods that sensor types and resolutions be unified, enlarging the range of source data suitable for image difference detection.
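A minimal sketch of this correction step is given below, assuming the Euclidean distance over spectral bands as the spectral difference (the patent does not fix the metric) and a hypothetical window radius:

```python
import numpy as np

def correct_match(img1, img2, p1, p2, half=5):
    """img1, img2: (H, W, bands) images; p1, p2: initially matched (row, col).
    Returns the pixel in the (2*half+1)-sized candidate region around p2
    whose spectrum is closest to the spectrum of p1."""
    r, c = p2
    h, w, _ = img2.shape
    r0, r1 = max(r - half, 0), min(r + half + 1, h)
    c0, c1 = max(c - half, 0), min(c + half + 1, w)
    window = img2[r0:r1, c0:c1]                        # candidate region
    diff = np.linalg.norm(window - img1[p1], axis=-1)  # spectral distance per pixel
    idx = np.unravel_index(np.argmin(diff), diff.shape)
    return (r0 + idx[0], c0 + idx[1])                  # corrected second pixel point
```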
In one embodiment, in step S102, detecting the difference between the first registered image and the second registered image to obtain an image difference map includes:
detecting the spectral difference between the first registered image and the second registered image to determine a spectral difference map;
detecting the texture difference between the first registered image and the second registered image to determine a texture difference map;
and merging the spectral difference map and the texture difference map to obtain the image difference map, where each pixel point of the image difference map represents the merged information of the spectral difference and the texture difference.
In the above embodiment, the image difference map contains both spectral difference information and texture difference information, making full use of the spectral and spatial information of the images.
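The patent leaves the exact fusion open; in the following minimal sketch, each pixel of the merged map simply keeps both values as a two-channel entry:

```python
import numpy as np

def merge_difference_maps(spectral_map, texture_map):
    """spectral_map, texture_map: (H, W) arrays; returns an (H, W, 2) image
    difference map carrying both differences per pixel point."""
    return np.stack([spectral_map, texture_map], axis=-1)
```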
Further, detecting the spectral difference between the first registered image and the second registered image to determine the spectral difference map includes: calculating the spectral difference value between each pixel point of the first registered image and the corresponding pixel point in the second registered image. Each position in the spectral difference map represents the spectral difference of a corresponding pixel point pair between the first registered image and the second registered image.
For a first registered image X and a second registered image Y, the spectral difference value between them is calculated as follows. First, a first region is determined in the first registered image X according to a first pixel point Xa, and a second region is determined in the second registered image Y according to a second pixel point Ya, where the first pixel point Xa corresponds to the second pixel point Ya. For example, a first region of size 10 × 10 is determined in the first registered image X with the first pixel point Xa as its center point. Then, the spectral difference values between the first pixel point Xa and each pixel point in the second region are calculated; these are called the first spectral difference values. Likewise, the spectral difference values between the second pixel point Ya and each pixel point in the first region are calculated; these are called the second spectral difference values. Finally, the minimum value among the first spectral difference values and the second spectral difference values is selected as the spectral difference value between the first pixel point Xa and the second pixel point Ya, and the two pixel points that produced this minimum are taken as the new first pixel point and the new second pixel point.
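The following Python sketch illustrates this bidirectional minimum, assuming Euclidean spectral distance and a square region of side `size` (the 10 × 10 example above); it returns only the minimum value, whereas in the method the two pixel points producing it also become the new pixel point pair:

```python
import numpy as np

def spectral_difference(x, y, xa, ya, size=10):
    """x, y: (H, W, bands) registered images; xa, ya: matched (row, col)."""
    half = size // 2

    def region(img, p):
        r, c = p
        h, w, _ = img.shape
        return img[max(r - half, 0):min(r + half, h),
                   max(c - half, 0):min(c + half, w)]

    # First spectral difference values: Xa against every pixel of the second region.
    d1 = np.linalg.norm(region(y, ya) - x[xa], axis=-1)
    # Second spectral difference values: Ya against every pixel of the first region.
    d2 = np.linalg.norm(region(x, xa) - y[ya], axis=-1)
    # The spectral difference value is the overall minimum.
    return min(d1.min(), d2.min())
```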
Further, detecting the texture difference between the first registered image and the second registered image to determine the texture difference map includes: calculating the texture difference value between each pixel point of the first registered image and the corresponding pixel point in the second registered image. Each position in the texture difference map represents the texture difference of a corresponding pixel point pair between the first registered image and the second registered image.
For a first pixel point in the first registered image and its corresponding second pixel point in the second registered image, the texture difference value between them is calculated as follows. First, a third region is determined in the first registered image according to the first pixel point, and a fourth region is determined in the second registered image according to the second pixel point. Then, the texture features of each band of the third region and the texture features of each band of the fourth region are calculated. Finally, the differences between the texture features of each band of the third region and those of the corresponding band of the fourth region are computed, and the square root of the sum of the squared per-band differences gives the texture difference value between the first pixel point and the second pixel point.
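A sketch of this per-band comparison is shown below, using the local standard deviation of each band as a stand-in texture feature, since the patent does not name a specific texture descriptor:

```python
import numpy as np

def texture_difference(x, y, xa, ya, half=5):
    """x, y: (H, W, bands) registered images; xa, ya: matched (row, col)."""
    def region(img, p):
        r, c = p
        h, w, _ = img.shape
        return img[max(r - half, 0):min(r + half + 1, h),
                   max(c - half, 0):min(c + half + 1, w)]

    # One texture feature per band for the third and fourth regions.
    f3 = region(x, xa).std(axis=(0, 1))
    f4 = region(y, ya).std(axis=(0, 1))
    # Square root of the sum of squared per-band feature differences.
    return float(np.sqrt(np.sum((f3 - f4) ** 2)))
```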
In the above manner, the spatial information around the pixel points is taken into account when calculating the spectral difference and the texture difference, improving the accuracy of difference detection.
In one embodiment, in step S104, training the model to be trained according to the first registered image, the second registered image and the block images includes:
selecting, from the difference values corresponding to the block images, target regions whose difference values meet preset conditions;
determining a label for each target region according to its difference value;
and training the model to be trained according to the region of the first registered image corresponding to the target region, the region of the second registered image corresponding to the target region, and the target region marked with the label.
The preset conditions may be a preset difference value interval and a preset non-difference value interval. From the block images, regions whose difference values fall in the difference interval and regions whose difference values fall in the non-difference interval are selected as target regions. Correspondingly, the labels include two types: difference and no difference. A target region whose difference value falls in the difference interval is labeled as a difference, and a target region whose difference value falls in the non-difference interval is labeled as no difference.
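A minimal sketch of this selection, assuming one scalar difference value per block image and hypothetical interval thresholds (blocks falling between the two intervals are simply discarded):

```python
def label_blocks(block_values, diff_low=0.6, no_diff_high=0.2):
    """block_values: iterable of per-block difference values.
    Returns (block_index, label) pairs: 1 = difference, 0 = no difference."""
    labeled = []
    for i, v in enumerate(block_values):
        if v >= diff_low:          # falls in the difference value interval
            labeled.append((i, 1))
        elif v <= no_diff_high:    # falls in the non-difference value interval
            labeled.append((i, 0))
    return labeled
```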
In one embodiment, after step S103, the method may further include: performing band inversion and/or angle rotation on the sample data of the model to be trained (i.e., the region of the first registered image corresponding to the target region and the region of the second registered image corresponding to the target region) to obtain new sample data and thereby increase the number of training samples. This trains the model to recognize image data with different spectral band orders and rotation angles, improving its robustness.
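A sketch of this augmentation, assuming band inversion means reversing the band order and rotation is by multiples of 90 degrees (the patent does not restrict the angles):

```python
import numpy as np

def augment(sample):
    """sample: (H, W, bands) training patch; yields augmented copies."""
    yield sample[..., ::-1]        # band inversion
    for k in (1, 2, 3):
        yield np.rot90(sample, k)  # 90/180/270-degree rotation
```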
In one embodiment, the model to be trained in step S104 may be a Gaussian-Bernoulli deep Boltzmann model.
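For reference, the energy function of the Gaussian-Bernoulli restricted Boltzmann machine, the usual building block of such a deep Boltzmann model, is commonly written as

$$E(\mathbf{v}, \mathbf{h}) = \sum_i \frac{(v_i - b_i)^2}{2\sigma_i^2} - \sum_j c_j h_j - \sum_{i,j} \frac{v_i}{\sigma_i} W_{ij} h_j,$$

where $\mathbf{v}$ are the real-valued visible units (suited to continuous pixel values), $\mathbf{h}$ are the binary hidden units, and $b_i$, $c_j$, $\sigma_i$ and $W_{ij}$ are the model parameters; the patent itself does not specify these details.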
In one implementation, the model generation method provided in the embodiments of the present application further includes:
acquiring a first image to be detected and a second image to be detected;
and inputting the first image to be detected and the second image to be detected into the difference detection model to obtain the image difference region result of the first image to be detected and the second image to be detected.
In the above embodiment, the difference detection model may be used to perform difference detection on the corresponding pixel points between the first image to be detected and the second image to be detected one by one, i.e., to classify each pixel pair as differing or not, finally generating a difference region detection map from which the regions where the two images differ can be determined. For example, for a first pixel point in the first image to be detected and the corresponding second pixel point in the second image to be detected, if the two pixel points differ, the value of the pixel point at the same position in the difference region detection map is 1; otherwise it is 0. Thus, in the difference region detection map, the regions with pixel value 1 correspond to the difference regions.
Further, the first image to be detected is a block of the first original image to be detected, and the second image to be detected is the corresponding block of the second original image to be detected.
That is, acquiring the first image to be detected and the second image to be detected includes: performing image registration on the first original image to be detected and the second original image to be detected to obtain a corresponding first registered image to be detected and second registered image to be detected; dividing the first registered image to be detected and the second registered image to be detected into blocks, and storing the blocking results in position order as a first image set to be detected and a second image set to be detected, respectively; and acquiring the first image to be detected from the first image set to be detected and the second image to be detected from the second image set to be detected.
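A sketch of this blocking step, assuming non-overlapping square tiles stored in row-major (position) order:

```python
def tile_image(img, tile=256):
    """img: (H, W, bands) array; returns a position-ordered list of tiles."""
    h, w = img.shape[:2]
    return [img[r:r + tile, c:c + tile]
            for r in range(0, h, tile)
            for c in range(0, w, tile)]
```

Applying `tile_image` to both registered images to be detected with the same tile size yields the two position-ordered image sets.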
On the basis of the above embodiment, multiple corresponding pairs of images to be detected from the first image set to be detected and the second image set to be detected can be input into the difference detection model to obtain the difference region results of the multiple image pairs. Combining the difference region results of the multiple image pairs yields the image difference region result.
Further, the difference detection of the multiple image pairs to be detected can be processed synchronously in multiple threads. Specifically, multiple difference detection models may perform detection synchronously, with each model detecting a preset number of image pairs; alternatively, a single difference detection model may detect multiple image pairs synchronously. The partial results are then fused into a complete output, accelerating detection.
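A minimal sketch of the multi-threaded variant, assuming a hypothetical `model.detect(a, b)` call that returns a binary difference map for one pair of blocks:

```python
from concurrent.futures import ThreadPoolExecutor

def detect_all(model, tiles_a, tiles_b, workers=4):
    """tiles_a, tiles_b: position-ordered tile lists of the two images."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Results come back in position order, ready to be fused into
        # the complete difference region detection map.
        return list(pool.map(model.detect, tiles_a, tiles_b))
```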
Fig. 2 shows an example based on the above embodiments of the present application.
As shown in fig. 2, in the training phase, the image to be processed 100 and the image to be processed 200 are first registered to obtain a registered image 101 and a registered image 201. Then, spectral difference detection and texture difference detection are performed on the registered image 101 and the registered image 201, and an image difference map is generated by combining the results of the two detections. The image difference map is then divided into blocks to obtain a plurality of block images. Using the difference values of the block images, a target region is selected in each block image and its label information is determined. Finally, the regions corresponding to the target region are selected from the registered image 101 and the registered image 201 as training samples, and the model to be trained is trained with the label of the target region as supervision information to obtain the difference detection model.
In the prediction stage, the image to be detected 102 and the image to be detected 202 are input into the difference detection model, and a difference area detection map is obtained according to the result output by the difference detection model.
Fig. 3 shows a block diagram of a model generation apparatus according to an embodiment of the present invention. As shown in fig. 3, the model generation apparatus 300 may include:
a registration module 301, configured to register the first image to be processed and the second image to be processed to obtain a first registered image corresponding to the first image to be processed and a second registered image corresponding to the second image to be processed; the first image to be processed and the second image to be processed being two images of the same object;
an image difference map obtaining module 302, configured to detect a difference between the first registered image and the second registered image to obtain an image difference map;
a block processing module 303, configured to perform blocking processing on the image difference map to obtain at least two block images;
a difference detection model obtaining module 304, configured to train the model to be trained according to the first registered image, the second registered image and the block images to obtain a trained difference detection model, where the difference detection model can detect a difference region between two different images.
In one embodiment, a registration module, comprising:
the first pixel point position determining submodule is used for determining position information of a first pixel point in the first image to be processed;
the second pixel point position determining submodule is used for determining a second pixel point corresponding to the first pixel point in the second image to be processed and determining the position information of the second pixel point;
and the registration submodule is used for respectively registering the first image to be processed and the second image to be processed according to the position information of the first pixel point and the position information of the second pixel point.
In one embodiment, the image difference map obtaining module includes:
the spectral difference map obtaining sub-module is used for detecting the spectral difference between the first registered image and the second registered image and determining a spectral difference map;
the texture difference map obtaining sub-module is used for detecting the texture difference between the first registered image and the second registered image and determining a texture difference map;
and the image difference map obtaining submodule is used for merging the spectral difference map and the texture difference map to obtain the image difference map, where each pixel point of the image difference map represents the merged information of the spectral difference and the texture difference.
In one embodiment, the difference detection model obtaining module includes:
the target area selection submodule is used for selecting a target area with a difference value meeting a preset condition from the difference values corresponding to the block images;
the tag determining submodule is used for determining a tag aiming at the target area according to the difference value of the target area;
and the training submodule is used for training the model to be trained according to the region of the first registered image corresponding to the target region, the region of the second registered image corresponding to the target region, and the target region marked with the label.
In one embodiment, the apparatus further includes:
the image acquisition module to be detected is used for acquiring a first image to be detected and a second image to be detected;
and the difference region detection module is used for inputting the first image to be detected and the second image to be detected into the difference detection model to obtain the image difference region result of the first image to be detected and the second image to be detected.
The functions of each module in each apparatus in the embodiments of the present invention may refer to the corresponding description in the above method, and are not described herein again.
Fig. 4 shows a block diagram of a model generation device according to an embodiment of the present invention. As shown in fig. 4, the model generation device includes: a memory 410 and a processor 420, the memory 410 storing a computer program operable on the processor 420. The processor 420, when executing the computer program, implements the model generation method of the above embodiments. There may be one or more memories 410 and one or more processors 420.
The model generation device further includes:
a communication interface 430, configured to communicate with an external device for interactive data transmission.
If the memory 410, the processor 420 and the communication interface 430 are implemented independently, the memory 410, the processor 420 and the communication interface 430 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 4, but this does not indicate only one bus or one type of bus.
Optionally, in an implementation, if the memory 410, the processor 420, and the communication interface 430 are integrated on a chip, the memory 410, the processor 420, and the communication interface 430 may complete communication with each other through an internal interface.
Embodiments of the present invention provide a computer-readable storage medium, which stores a computer program, and when the program is executed by a processor, the computer program implements the method provided in the embodiments of the present application.
The embodiment of the present application further provides a chip, where the chip includes a processor, and is configured to call and execute the instruction stored in the memory from the memory, so that the communication device in which the chip is installed executes the method provided in the embodiment of the present application.
An embodiment of the present application further provides a chip, including: the system comprises an input interface, an output interface, a processor and a memory, wherein the input interface, the output interface, the processor and the memory are connected through an internal connection path, the processor is used for executing codes in the memory, and when the codes are executed, the processor is used for executing the method provided by the embodiment of the application.
It should be understood that the processor may be a Central Processing Unit (CPU), other general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or any conventional processor or the like. It is noted that the processor may be an advanced reduced instruction set machine (ARM) architecture supported processor.
Further, optionally, the memory may include a read-only memory and a random access memory, and may further include a nonvolatile random access memory. The memory may be either volatile memory or nonvolatile memory, or may include both. The non-volatile memory may include a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of example, and not limitation, many forms of RAM are available, for example static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate synchronous SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct memory bus RAM (DR RAM).
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the present application are generated in whole or in part when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process. And the scope of the preferred embodiments of the present application includes other implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. All or part of the steps of the methods of the above embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module may also be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various changes or substitutions within the technical scope of the present application, and these should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A method of model generation, comprising:
registering a first image to be processed and a second image to be processed to obtain a first registered image corresponding to the first image to be processed and a second registered image corresponding to the second image to be processed; the first image to be processed and the second image to be processed being two images of the same object;
detecting a difference between the first registered image and the second registered image to obtain an image difference map;
performing blocking processing on the image difference map to obtain at least two block images;
and training a model to be trained according to the first registered image, the second registered image and the block images to obtain a trained difference detection model, wherein the difference detection model can detect a difference region between two different images.
2. The method of claim 1, wherein registering the first to-be-processed image and the second to-be-processed image comprises:
determining position information of a first pixel point in the first image to be processed;
determining a second pixel point corresponding to the first pixel point in the second image to be processed, and determining the position information of the second pixel point;
and performing preliminary registration of the first image to be processed and the second image to be processed according to the position information of the first pixel point and the position information of the second pixel point, respectively.
3. The method of claim 1, wherein said detecting a difference between said first and second registered images, resulting in an image difference map, comprises:
detecting spectral differences between the first and second registered images to determine a spectral difference map;
detecting texture differences between the first registered image and the second registered image, and determining a texture difference map;
and merging the spectral difference map and the texture difference map to obtain an image difference map, wherein each pixel point of the image difference map represents the merged information of the spectral difference and the texture difference.
4. The method of claim 1, wherein training a model to be trained according to the first registered image, the second registered image and the block images comprises:
selecting a target area with difference values meeting preset conditions from the difference values corresponding to the block images;
determining a label aiming at the target area according to the difference value of the target area;
and training a model to be trained according to the region of the first registered image corresponding to the target region, the region of the second registered image corresponding to the target region, and the target region marked with the label.
5. The method of any of claims 1 to 4, further comprising:
acquiring a first image to be detected and a second image to be detected;
and inputting the first image to be detected and the second image to be detected into the difference detection model to obtain the image difference region result of the first image to be detected and the second image to be detected.
6. A model generation apparatus, comprising:
the registration module is used for registering a first image to be processed and a second image to be processed to obtain a first registered image corresponding to the first image to be processed and a second registered image corresponding to the second image to be processed; the first image to be processed and the second image to be processed being two images of the same object;
an image difference map obtaining module, configured to detect a difference between the first registered image and the second registered image to obtain an image difference map;
the block processing module is used for performing blocking processing on the image difference map to obtain at least two block images;
and the difference detection model obtaining module is used for training a model to be trained according to the first registered image, the second registered image and the block images to obtain a trained difference detection model, wherein the difference detection model can detect a difference region between two different images.
7. The apparatus of claim 6, wherein the registration module comprises:
a first pixel point position determining submodule, configured to determine position information of a first pixel point in the first image to be processed;
the second pixel point position determining submodule is used for determining a second pixel point corresponding to the first pixel point in the second image to be processed and determining the position information of the second pixel point;
and the registration submodule is used for respectively registering the first image to be processed and the second image to be processed according to the position information of the first pixel point and the position information of the second pixel point.
8. The apparatus of claim 6, wherein the image difference map obtaining module comprises:
the spectral difference map obtaining sub-module is used for detecting the spectral difference between the first registered image and the second registered image and determining a spectral difference map;
a texture difference map obtaining sub-module, configured to detect a texture difference between the first registered image and the second registered image, and determine a texture difference map;
and the image difference map obtaining submodule is used for merging the spectral difference map and the texture difference map to obtain an image difference map, wherein each pixel point of the image difference map represents the merged information of the spectral difference and the texture difference.
9. The apparatus of claim 6, wherein the difference detection model obtaining module comprises:
the target area selection submodule is used for selecting a target area with a difference value meeting a preset condition from the difference values corresponding to the block images;
the label determining submodule is used for determining a label aiming at the target area according to the difference value of the target area;
and the training sub-module is used for training a model to be trained according to the region of the first registered image corresponding to the target region, the region of the second registered image corresponding to the target region, and the target region marked with the label.
10. The apparatus of any one of claims 6 to 9, further comprising:
the image acquisition module to be detected is used for acquiring a first image to be detected and a second image to be detected;
and the difference region detection module is used for inputting the first image to be detected and the second image to be detected into the difference detection model to obtain the image difference region result of the first image to be detected and the second image to be detected.
11. A model generation apparatus, comprising: a processor and a memory, the memory having stored therein instructions that are loaded and executed by the processor to implement the method of any of claims 1 to 5.
12. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 5.
CN202011427029.XA 2020-12-09 2020-12-09 Model generation method, device, equipment and readable storage medium Pending CN112465886A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011427029.XA CN112465886A (en) 2020-12-09 2020-12-09 Model generation method, device, equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN112465886A true CN112465886A (en) 2021-03-09

Family

ID=74800322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011427029.XA Pending CN112465886A (en) 2020-12-09 2020-12-09 Model generation method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN112465886A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08191440A (en) * 1995-01-10 1996-07-23 Fukuda Denshi Co Ltd Method and device for correcting endoscope image
US6766054B1 (en) * 2000-08-14 2004-07-20 International Business Machines Corporation Segmentation of an object from a background in digital photography
CN102254319A (en) * 2011-04-19 2011-11-23 中科九度(北京)空间信息技术有限责任公司 Method for carrying out change detection on multi-level segmented remote sensing image
US20130314437A1 (en) * 2012-05-22 2013-11-28 Sony Corporation Image processing apparatus, image processing method, and computer program
US20160307073A1 (en) * 2015-04-20 2016-10-20 Los Alamos National Security, Llc Change detection and change monitoring of natural and man-made features in multispectral and hyperspectral satellite imagery
US20170301093A1 (en) * 2016-04-13 2017-10-19 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
CN107248172A (en) * 2016-09-27 2017-10-13 中国交通通信信息中心 A kind of remote sensing image variation detection method based on CVA and samples selection
CN108573276A (en) * 2018-03-12 2018-09-25 浙江大学 A kind of change detecting method based on high-resolution remote sensing image
CN109636838A (en) * 2018-12-11 2019-04-16 北京市燃气集团有限责任公司 A kind of combustion gas Analysis of Potential method and device based on remote sensing image variation detection

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
HERNG-HUA CHANG et al.: "Remote Sensing Image Registration Based on Modified SIFT and Feature Slope Grouping", IEEE Geoscience and Remote Sensing Letters, vol. 16, no. 9, p. 1363, XP011743022, DOI: 10.1109/LGRS.2019.2899123 *
YANLONG CHEN et al.: "Automatic Extraction Method of Sargassum Based on Spectral-Texture Features of Remote Sensing Images", IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, pp. 3705-3707 *
吕可枫 et al.: "Superpixel-based random forest change detection for remote sensing images", Journal of Geomatics Science and Technology, vol. 37, no. 3, pp. 269-274 *
张鑫龙 et al.: "Deep learning change detection method for high-resolution remote sensing images", Acta Geodaetica et Cartographica Sinica, vol. 46, no. 8, pp. 999-1008 *
王昶 et al.: "Remote sensing image change detection method based on deep learning", Journal of Zhejiang University (Engineering Science), vol. 54, no. 11, pp. 2138-2148 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409485A (en) * 2021-08-03 2021-09-17 广东电网有限责任公司佛山供电局 Inspection data acquisition method and device, computer equipment and storage medium
CN113409485B (en) * 2021-08-03 2023-12-12 广东电网有限责任公司佛山供电局 Inspection data acquisition method and device, computer equipment and storage medium
CN113933294A (en) * 2021-11-08 2022-01-14 中国联合网络通信集团有限公司 Concentration detection method and device
CN114882079A (en) * 2022-04-12 2022-08-09 北京极感科技有限公司 Image registration detection method, electronic device and storage medium

Similar Documents

Publication Publication Date Title
CN112465886A (en) Model generation method, device, equipment and readable storage medium
CN111523459B (en) Remote sensing image bare area identification method and device, electronic equipment and storage medium
US10802485B2 (en) Apparatus, method and computer program product for facilitating navigation of a vehicle based upon a quality index of the map data
CN109285105A (en) Method of detecting watermarks, device, computer equipment and storage medium
EP4118576A1 (en) Systems and methods for image-based location determination
CN111429482A (en) Target tracking method and device, computer equipment and storage medium
US20230401691A1 (en) Image defect detection method, electronic device and readable storage medium
CN114240805B (en) Multi-angle SAR dynamic imaging detection method and device
CN115631397A (en) Target detection method and device based on bimodal image
CN114022523B (en) Low-overlapping point cloud data registration system and method
Drzewiecki Thorough statistical comparison of machine learning regression models and their ensembles for sub-pixel imperviousness and imperviousness change mapping
CN112101310B (en) Road extraction method and device based on context information and computer equipment
KR102488813B1 (en) Method and apparatus for generating disparity map using edge image
CN116844050A (en) Pegmatite type lithium ore prospecting method based on multi-source remote sensing data
CN112215304A (en) Gray level image matching method and device for geographic image splicing
CN116977671A (en) Target tracking method, device, equipment and storage medium based on image space positioning
US20240011792A1 (en) Method and apparatus for updating confidence of high-precision map
CN115797742A (en) Image fusion method and training method and system of detection model
CN113884188B (en) Temperature detection method and device and electronic equipment
CN113096129B (en) Method and device for detecting cloud cover in hyperspectral satellite image
US11391808B2 (en) Method for direction finding of at least one stationary and/or moving transmitter as well as system for direction finding
CN114563785A (en) Earth surface deformation detection method, device, equipment and medium based on phase gradient
CN114896134A (en) Metamorphic test method, device and equipment for target detection model
CN113203424A (en) Multi-sensor data fusion method and device and related equipment
CN113484879A (en) Positioning method and device of wearable equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination