CN115829943A - Image difference region detection method based on super-pixel segmentation - Google Patents

Image difference region detection method based on super-pixel segmentation

Info

Publication number
CN115829943A
Authority
CN
China
Prior art keywords
image
difference
information
pyramid
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211432673.5A
Other languages
Chinese (zh)
Inventor
魏富彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weicheng Intelligent Power Technology Hangzhou Co ltd
Original Assignee
Weicheng Intelligent Power Technology Hangzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weicheng Intelligent Power Technology Hangzhou Co ltd filed Critical Weicheng Intelligent Power Technology Hangzhou Co ltd
Priority to CN202211432673.5A priority Critical patent/CN115829943A/en
Publication of CN115829943A publication Critical patent/CN115829943A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an image difference region detection method based on superpixel segmentation, comprising the following steps: S1, register the images to be compared using the SIFT algorithm so that they lie in the same plane and the coordinates of corresponding points match one to one; S2, construct an image pyramid for the source image and for the target image using a Gaussian function; S3, extract the first image at the topmost level of the target image pyramid, apply SLIC to it, and divide the image into several sub-regions whose internal information is relatively consistent; S4, compare each sub-region for differences: compute the gradient magnitude, gradient direction and color information of the pixels of each sub-region and compare these three dimensions with the corresponding pixels. The method clusters the image into regions with the SLIC algorithm and uses the clustered regions as image blocks for the difference calculation, so that the information within each block is more consistent.

Description

Image difference region detection method based on super-pixel segmentation
Technical Field
The invention belongs to the field of image comparison and relates to an image difference region detection method based on superpixel segmentation.
Background
Image comparison computes the difference regions between images, provides useful information along the time dimension, and plays an important role in fields such as detection and tracking.
Some image comparison algorithms take individual pixels as the target and compute pixel-wise differences; others take image blocks as the target and compute block-wise differences. Each approach has advantages and disadvantages. Pixel-based computation is finer and reflects small changes better, while block-based computation is more robust and resists interference such as noise. At the same time, pixel-based information is noisier and is easily affected by noise points, offsets and similar factors, whereas block-based computation produces larger information differences because the content within a block is inconsistent, and screening that information filters out useful details, so the detected difference regions are not fine enough.
Disclosure of Invention
In order to solve the above problems, the present invention provides a method for detecting an image difference region based on superpixel segmentation, which comprises the following steps:
S1, register the images to be compared using the SIFT algorithm so that they lie in the same plane and the coordinates of corresponding points match one to one;
S2, construct an image pyramid for the source image and for the target image using a Gaussian function;
S3, extract the first image at the topmost level of the target image pyramid, apply SLIC to it, and divide the image into several sub-regions whose internal information is relatively consistent;
S4, compare each sub-region for differences: compute the gradient magnitude, gradient direction and color information of the pixels in each sub-region and compare these three dimensions with the corresponding pixels. When the difference in one dimension exceeds its preset threshold, the difference score of that point is increased by 1; when the differences in all three dimensions exceed their thresholds, the score is increased by a further 1, so the difference score of each pixel lies between 0 and 4.
Preferably, the image pyramid in S2 has two parameters, a layer height and a layer number: the layer number determines how many resolutions the pyramid contains, and the layer height determines how many filtered images are produced at each single resolution. The shape of the pyramid is adjusted through these two parameters so that, in the subsequent processing, the ratio of the detected area at true differences to the total detected area is as large as possible.
Preferably, after S3, the method further includes using an interpolation algorithm to expand the segmentation information of the first image at the top level of the pyramid onto each layer of the pyramid, so that all images of the pyramid obtain corresponding and consistent image area distribution.
Preferably, S4 comprises the following steps:
S41, calculate the gradient magnitude and direction of each pixel with the Sobel operator:
I_x = G_x * I    (1)
I_y = G_y * I    (2)
I_G = sqrt(I_x^2 + I_y^2)    (3)
I_θ = arctan(I_y / I_x)    (4)
where I is the input image, G_x and G_y are the Sobel operators in the x and y directions, I_x and I_y are the gradient maps in the corresponding directions, I_G is the gradient magnitude, and I_θ is the gradient direction;
S42, convert the image to the HSV color space and take the value of the H channel as the color information I_H of the pixel; after obtaining the three dimensions I_G, I_θ and I_H for each source image pixel and the corresponding target image pixel, compute the difference value in each of the three dimensions:
S_G = |I'_G - I''_G|    (5)
S_θ = |I'_θ - I''_θ|    (6)
S_H = |I'_H - I''_H|    (7)
where I'_G, I'_θ and I'_H are the three-dimensional information values of the source image and I''_G, I''_θ and I''_H are the corresponding values of the target image; when S_G, S_θ or S_H exceeds its threshold τ_G, τ_θ or τ_H respectively, the total difference score D_p of the pixel is increased by 1 for each exceeded dimension; in addition, when S_G, S_θ and S_H all exceed their thresholds, D_p is increased by a further 1;
S43, sum and normalize the difference scores of all pixels in a region to obtain the total difference degree D_t of the region:
D_t = (1/n) * Σ_{i=1..n} D_pi    (8)
where n is the number of pixels in the region and D_pi is the D_p value of the i-th pixel; when the total difference degree D_t of a region is greater than a threshold τ_t, the region is marked as 1, meaning a difference region may exist, and otherwise it is marked as 0; after all image regions in the pyramid have been marked, a region is judged to contain a difference when the number of images in which it is marked as 1 exceeds the fraction τ_r of the total number of images:
M_k = 1 if Σ_{i=1..m} M_k^(i) > τ_r * m, and M_k = 0 otherwise    (9)
in formula (9), M_k is the label value of the region indexed by k in the final difference map, m is the total number of pyramid images, and M_k^(i) is the label value of region k in the i-th image.
The beneficial effects of the invention at least comprise:
based on the detection algorithm of the image block, but different from the simple division of the image, the method and the device utilize the SLIC algorithm to perform area clustering on the image, and perform difference calculation by taking the clustered area as the image block, so that the information in the block is more consistent. Meanwhile, the information in the block is calculated pixel by pixel, the gradient module and angle information are combined, and the hue information is used for supplementing, compared with a histogram quantization mode, the pixel-by-pixel calculation method can not discard the spatial information of the pixels, so that the calculation result is more accurate and reliable.
Drawings
FIG. 1 is a flowchart illustrating the steps of a method for detecting a difference region in an image based on superpixel segmentation according to an embodiment of the present invention;
FIG. 2 shows an image pyramid used in the method for detecting image difference regions based on superpixel segmentation according to the embodiment of the present invention;
FIG. 3 is a clustering input diagram of an image difference region detection method based on superpixel segmentation according to an embodiment of the present invention;
FIG. 4 is a graph of the cluster segmentation effect of the super-pixel segmentation based image difference region detection method according to the embodiment of the present invention;
FIG. 5 is a diagram illustrating the difference detection effect of the method for detecting the difference region of an image based on superpixel segmentation according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
On the contrary, the invention is intended to cover the alternatives, modifications and equivalents that may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, certain specific details are set forth in order to provide a better understanding of the present invention. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details.
Referring to fig. 1, the method comprises the following steps:
S1, register the images to be compared using the SIFT algorithm so that they lie in the same plane and the coordinates of corresponding points match one to one, which removes the influence of different shooting poses;
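A minimal sketch of this registration step is given below, assuming OpenCV's SIFT detector, RANSAC homography estimation and perspective warping; the helper name register_to_source and the ratio-test value 0.75 are illustrative and not taken from the patent.

```python
import cv2
import numpy as np

def register_to_source(source_gray, target_gray):
    """Warp target_gray onto the image plane of source_gray using SIFT + homography."""
    sift = cv2.SIFT_create()
    kp_s, des_s = sift.detectAndCompute(source_gray, None)
    kp_t, des_t = sift.detectAndCompute(target_gray, None)

    # Ratio-test matching of SIFT descriptors (target as query, source as train)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_t, des_s, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src_pts = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Homography maps target coordinates onto the source plane
    H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    h, w = source_gray.shape[:2]
    return cv2.warpPerspective(target_gray, H, (w, h))
```

After this warp, a pixel at (x, y) in the registered target corresponds to the pixel at (x, y) in the source, as required by the later pixel-wise comparisons.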
S2, construct an image pyramid for the source image and for the target image using a Gaussian function, as shown in FIG. 2;
S3, extract the first image at the topmost level of the target image pyramid, apply SLIC (simple linear iterative clustering) to it, and divide the image into several sub-regions whose internal information is relatively consistent; SLIC improves on the k-means algorithm in both running speed and clustering quality;
referring to fig. 3 and 4, after SLIC processing, an input image is divided into image blocks with close image information, and then, the division information of the first image at the top of the pyramid is expanded to each layer of the pyramid by using an interpolation algorithm, so that all images of the pyramid acquire corresponding image area distribution.
S4, compare each sub-region for differences: compute the gradient magnitude, gradient direction and color information of every pixel in each sub-region and compare these three dimensions with the corresponding pixels. When the difference in one dimension exceeds its preset threshold, the difference score of that point is increased by 1; when the differences in all three dimensions exceed their thresholds, the score is increased by a further 1, so the difference score of each pixel lies between 0 and 4.
The image pyramid in step S2 has two parameters, a layer height and a layer number: the layer number determines how many resolutions the pyramid contains, and the layer height determines how many filtered images are produced at each single resolution. The shape of the pyramid is adjusted through these two parameters so that, in the subsequent processing, the ratio of the detected area at true differences to the total detected area is as large as possible, ideally close to 100%.
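One possible reading of these two parameters is sketched below, assuming that each additional layer halves the resolution with pyrDown and that the layer height counts repeated Gaussian filtering passes at a given resolution; the parameter names and default values are illustrative, not prescribed by the patent.

```python
import cv2

def build_pyramid(image, num_levels=3, level_height=2, sigma=1.6):
    """Build a Gaussian pyramid with num_levels resolutions and level_height
    progressively blurred images per resolution (fine to coarse; the topmost,
    coarsest images come last in the returned list)."""
    pyramid = []
    current = image
    for _ in range(num_levels):
        blurred = current
        for _ in range(level_height):
            # ksize=(0, 0) lets OpenCV derive the kernel size from sigma
            blurred = cv2.GaussianBlur(blurred, (0, 0), sigmaX=sigma)
            pyramid.append(blurred)
        current = cv2.pyrDown(current)   # move to the next, coarser resolution
    return pyramid
```

Increasing num_levels adds resolutions, increasing level_height adds filtered images per resolution; together they shape the pyramid as described above.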
S4 comprises the following steps:
S41, calculate the gradient magnitude and direction of each pixel with the Sobel operator:
I_x = G_x * I    (1)
I_y = G_y * I    (2)
I_G = sqrt(I_x^2 + I_y^2)    (3)
I_θ = arctan(I_y / I_x)    (4)
where I is the input image, G_x and G_y are the Sobel operators in the x and y directions, I_x and I_y are the gradient maps in the corresponding directions, I_G is the gradient magnitude, and I_θ is the gradient direction;
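Equations (1)-(4) can be computed as in the following sketch, assuming OpenCV's Sobel operator; arctan2 is used instead of arctan for numerical robustness, and the function name is illustrative.

```python
import cv2
import numpy as np

def gradient_features(gray):
    """Return gradient magnitude I_G and direction I_theta of a grayscale image."""
    gray = gray.astype(np.float32)
    i_x = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # G_x * I, eq. (1)
    i_y = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # G_y * I, eq. (2)
    i_g = np.sqrt(i_x ** 2 + i_y ** 2)                 # magnitude, eq. (3)
    i_theta = np.arctan2(i_y, i_x)                     # direction, eq. (4), robust form of arctan(I_y / I_x)
    return i_g, i_theta
```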
S42, convert the image to the HSV color space and take the value of the H channel as the color information I_H of the pixel; after obtaining the three dimensions I_G, I_θ and I_H for each source image pixel and the corresponding target image pixel, compute the difference value in each of the three dimensions:
S_G = |I'_G - I''_G|    (5)
S_θ = |I'_θ - I''_θ|    (6)
S_H = |I'_H - I''_H|    (7)
where I'_G, I'_θ and I'_H are the three-dimensional information values of the source image and I''_G, I''_θ and I''_H are the corresponding values of the target image; when S_G, S_θ or S_H exceeds its threshold τ_G, τ_θ or τ_H respectively, the total difference score D_p of the pixel is increased by 1 for each exceeded dimension; in addition, when S_G, S_θ and S_H all exceed their thresholds, D_p is increased by a further 1;
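A per-pixel sketch of this scoring, assuming the source and target feature maps are already registered and that hue wrap-around is ignored for simplicity; the threshold values are placeholders, not values from the patent.

```python
import cv2
import numpy as np

def pixel_difference(src_bgr, tgt_bgr, src_g, src_theta, tgt_g, tgt_theta,
                     tau_g=30.0, tau_theta=0.5, tau_h=15.0):
    """Per-pixel difference score D_p in {0,...,4} from gradient magnitude,
    gradient direction and hue, following equations (5)-(7)."""
    src_h = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.float32)
    tgt_h = cv2.cvtColor(tgt_bgr, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.float32)

    s_g = np.abs(src_g - tgt_g)              # eq. (5)
    s_theta = np.abs(src_theta - tgt_theta)  # eq. (6)
    s_h = np.abs(src_h - tgt_h)              # eq. (7), circular hue wrap ignored here

    exceeds = [s_g > tau_g, s_theta > tau_theta, s_h > tau_h]
    d_p = sum(e.astype(np.uint8) for e in exceeds)           # +1 per exceeded dimension
    d_p += np.logical_and.reduce(exceeds).astype(np.uint8)   # extra +1 when all three exceed
    return d_p
```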
S43, sum and normalize the difference scores of all pixels in a region to obtain the total difference degree D_t of the region:
D_t = (1/n) * Σ_{i=1..n} D_pi    (8)
where n is the number of pixels in the region and D_pi is the D_p value of the i-th pixel; when the total difference degree D_t of a region is greater than a threshold τ_t, the region is marked as 1, meaning a difference region may exist, and otherwise it is marked as 0; after all image regions in the pyramid have been marked, a region is judged to contain a difference when the number of images in which it is marked as 1 exceeds the fraction τ_r of the total number of images:
M_k = 1 if Σ_{i=1..m} M_k^(i) > τ_r * m, and M_k = 0 otherwise    (9)
in formula (9), M_k is the label value of the region indexed by k in the final difference map, m is the total number of pyramid images, and M_k^(i) is the label value of region k in the i-th image.
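Equations (8) and (9) can be aggregated as in the following sketch, which assumes the per-level label maps and D_p maps from the earlier steps and reads τ_r as a fraction of the pyramid images; the threshold values are placeholders.

```python
import numpy as np

def region_difference_mask(d_p_maps, label_maps, tau_t=1.0, tau_r=0.6):
    """Mark a superpixel region as different when, in more than tau_r * m of the
    m pyramid images, its normalised score D_t (eq. 8) exceeds tau_t (eq. 9)."""
    m = len(d_p_maps)
    n_regions = int(max(labels.max() for labels in label_maps)) + 1
    votes = np.zeros(n_regions, dtype=np.int32)

    for d_p, labels in zip(d_p_maps, label_maps):
        for k in range(n_regions):
            mask = labels == k
            if not mask.any():
                continue
            d_t = d_p[mask].mean()          # eq. (8): average per-pixel score over region k
            votes[k] += int(d_t > tau_t)    # region marked 1 in this pyramid image

    return votes > tau_r * m                # eq. (9): boolean mask indexed by region id k
```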
Referring to FIG. 5, which shows the result after the difference regions of the pyramid images are merged, the boundaries of the difference regions fit the objects more closely and accurately because the image blocks are obtained by clustering rather than by simple division, so the difference mask is more accurate.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (4)

1. An image difference region detection method based on super-pixel segmentation, characterized by comprising the following steps:
S1, registering the images to be compared using the SIFT algorithm so that they lie in the same plane and the coordinates of corresponding points correspond one to one;
S2, constructing an image pyramid for the source image and for the target image using a Gaussian function;
S3, extracting the first image at the topmost level of the target image pyramid, performing SLIC processing on it, and dividing the image into several sub-regions with relatively consistent information;
S4, comparing each sub-region for differences: computing the gradient magnitude, gradient direction and color information of the pixels of each sub-region and comparing these three dimensions with the corresponding pixels; when the difference in one dimension is greater than a preset threshold, the difference score of the point is increased by 1; when the differences in all three dimensions are greater than the thresholds, the difference score of the point is increased by a further 1, so that the difference score of each pixel lies between 0 and 4.
2. The method according to claim 1, wherein the image pyramid in S2 has two parameters, a layer height and a layer number: the layer number determines how many resolutions the pyramid contains, and the layer height determines how many filtered images are produced at each single resolution; the shape of the pyramid is adjusted through these two parameters so that, in the subsequent processing, the ratio of the detected area at true differences to the total detected area is as large as possible.
3. The method according to claim 2, further comprising, after S3, using an interpolation algorithm to expand the segmentation information of the first image at the top of the pyramid onto each layer of the pyramid, so that all images of the pyramid obtain a corresponding and consistent image region distribution.
4. The method for detecting the image difference region based on the super-pixel segmentation as claimed in claim 1, wherein the step S4 comprises the steps of:
S41, calculating the gradient magnitude and direction of each pixel with the Sobel operator:
I_x = G_x * I    (1)
I_y = G_y * I    (2)
I_G = sqrt(I_x^2 + I_y^2)    (3)
I_θ = arctan(I_y / I_x)    (4)
where I is the input image, G_x and G_y are the Sobel operators in the x and y directions, I_x and I_y are the gradient maps in the corresponding directions, I_G is the gradient magnitude, and I_θ is the gradient direction;
S42, converting the image to the HSV color space and taking the value of the H channel as the color information I_H of the pixel; after obtaining the three dimensions I_G, I_θ and I_H for each source image pixel and the corresponding target image pixel, computing the difference value in each of the three dimensions:
S_G = |I'_G - I''_G|    (5)
S_θ = |I'_θ - I''_θ|    (6)
S_H = |I'_H - I''_H|    (7)
where I'_G, I'_θ and I'_H are the three-dimensional information values of the source image and I''_G, I''_θ and I''_H are the corresponding values of the target image; when S_G, S_θ or S_H exceeds its threshold τ_G, τ_θ or τ_H respectively, the total difference score D_p of the pixel is increased by 1 for each exceeded dimension; in addition, when S_G, S_θ and S_H all exceed their thresholds, D_p is increased by a further 1;
S43, summing and normalizing the difference scores of all pixels in a region to obtain the total difference degree D_t of the region:
D_t = (1/n) * Σ_{i=1..n} D_pi    (8)
where n is the number of pixels in the region and D_pi is the D_p value of the i-th pixel; when the total difference degree D_t of a region is greater than a threshold τ_t, the region is marked as 1, meaning a difference region may exist, and otherwise it is marked as 0; after all image regions in the pyramid have been marked, a region is judged to contain a difference when the number of images in which it is marked as 1 exceeds the fraction τ_r of the total number of images:
M_k = 1 if Σ_{i=1..m} M_k^(i) > τ_r * m, and M_k = 0 otherwise    (9)
in formula (9), M_k is the label value of the region indexed by k in the final difference map, m is the total number of pyramid images, and M_k^(i) is the label value of region k in the i-th image.
CN202211432673.5A 2022-11-16 2022-11-16 Image difference region detection method based on super-pixel segmentation Pending CN115829943A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211432673.5A CN115829943A (en) 2022-11-16 2022-11-16 Image difference region detection method based on super-pixel segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211432673.5A CN115829943A (en) 2022-11-16 2022-11-16 Image difference region detection method based on super-pixel segmentation

Publications (1)

Publication Number Publication Date
CN115829943A true CN115829943A (en) 2023-03-21

Family

ID=85528366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211432673.5A Pending CN115829943A (en) 2022-11-16 2022-11-16 Image difference region detection method based on super-pixel segmentation

Country Status (1)

Country Link
CN (1) CN115829943A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117372437A (en) * 2023-12-08 2024-01-09 安徽农业大学 Intelligent detection and quantification method and system for facial paralysis
CN117372437B (en) * 2023-12-08 2024-02-23 安徽农业大学 Intelligent detection and quantification method and system for facial paralysis


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination