CN110619636A - Variable-scale image segmentation method based on RGB-D - Google Patents

Variable-scale image segmentation method based on RGB-D

Info

Publication number
CN110619636A
CN110619636A
Authority
CN
China
Prior art keywords
seed
point
points
queue
superpixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910754481.8A
Other languages
Chinese (zh)
Inventor
钟易潘
丁永良
宋迪
袁夏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Tech University
Original Assignee
Nanjing Tech University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Tech University filed Critical Nanjing Tech University
Priority to CN201910754481.8A priority Critical patent/CN110619636A/en
Publication of CN110619636A publication Critical patent/CN110619636A/en
Withdrawn legal-status Critical Current

Classifications

    • G06T7/11 Region-based segmentation (G PHYSICS; G06 Computing; G06T Image data processing or generation; G06T7/00 Image analysis; G06T7/10 Segmentation; Edge detection)
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T2207/10028 Range image; Depth image; 3D point clouds (G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/10 Image acquisition modality)


Abstract

The invention provides a variable-scale image segmentation method based on RGB-D, which comprises the following steps: performing Poisson disk sampling on the gradient map corresponding to the depth map to select seed points; growing initial superpixels based on gradient change; establishing a graph model from the adjacency relations of the superpixels; and fusing the superpixels with a graph-based method to obtain the final segmentation result. The invention makes full use of the gradient information of the depth map and is more convenient and efficient.

Description

Variable-scale image segmentation method based on RGB-D
Technical Field
The invention belongs to the field of image segmentation technology, and particularly relates to a variable-scale image segmentation method based on RGB-D.
Background
With the progress of sensor technology, the acquisition cost of RGB-D images has become lower and lower, and how to preprocess RGB-D images more effectively has been an important research topic in computer vision in recent years. To make full use of the three-dimensional geometric information in an RGB-D image, the image can be over-segmented into supervoxels, analogous to superpixel over-segmentation of two-dimensional images; this provides an effective preprocessing step and can markedly reduce the amount of data handled by subsequent algorithms. In currently common supervoxel segmentation methods, once the scale parameter is fixed, the resulting supervoxels are highly uniform in size. If multi-scale analysis is needed later, supervoxel sizes must be controlled by setting different scale factors, so a separate single-scale segmentation result has to be computed for every scale analyzed, which increases the computational cost.
Disclosure of Invention
The invention aims to provide a variable-scale image segmentation method based on RGB-D.
The technical solution of the invention is as follows: a variable-scale supervoxel segmentation method for RGB-D images, comprising the following steps:
Step 1, obtaining a gradient map of the depth image;
Step 2, performing Poisson disk sampling in the gradient map to obtain a seed point set;
Step 3, taking each seed point as an initial point, assigning to its superpixel those neighbouring points whose gradient change is smaller than a threshold, then expanding from the resulting boundary points in the same way, to obtain the initial segmented superpixels;
Step 4, taking the initial superpixels as vertices of a graph, and establishing an edge between two initial superpixels i and j whenever they are adjacent in the depth map, so as to build an undirected graph G = (V, E);
Step 5, based on the differences of the RGB color statistics inside the superpixels, fusing the initial superpixels under the principle of minimizing intra-class difference and maximizing inter-class difference, to obtain the variable-scale superpixels.
Preferably, the gradient map of the depth image is calculated by the formula:
Gx=f(x,y)-f(x-1,y)
Gy=f(x,y)-f(x,y-1)
wherein f(x, y) is the depth value at point (x, y) in the depth map, f(x-1, y) and f(x, y-1) are the depth values of the adjacent points (x-1, y) and (x, y-1) respectively, Gx and Gy are the difference values at (x, y) in the x and y directions respectively, and d(x, y) = √(Gx² + Gy²) is the gradient magnitude at point (x, y).
Preferably, the specific method of step 2, performing Poisson disk sampling in the gradient map to obtain the seed point set, is:
Step 2-1, randomly select a seed point seed from the gradient map of the depth image D obtained in step 1, initialize the queue to be sampled L1 and the seed queue L2 to empty, and add seed to L1;
Step 2-2, while the queue L1 is not empty, dequeue a seed point seed from L1 and, taking seed as the center, randomly sample a candidate point next_seed in the annulus between the concentric circles of radius R and 2R; if the distance from next_seed to every known seed point in the seed queue L2 is greater than R, add next_seed to both L1 and L2; if after K candidate attempts none satisfies the condition, remove seed from L1 and add it to L2; the points finally in L2 are the obtained seed points.
Preferably, the specific method of step 3, taking each seed point as an initial point, assigning to its superpixel the neighbouring points whose gradient change is smaller than the threshold, and repeatedly expanding from the resulting boundary points, to obtain the initial segmented superpixels, is:
Step 3-1, initialize a label map of the same size as the depth map and fill it with 0, meaning that initially no point belongs to any superpixel;
Step 3-2, dequeue a seed point from the seed queue L2, initialize an expansion queue L3 to empty, and add the dequeued seed point to L3;
Step 3-3, dequeue a point p from the expansion queue L3; if among the four neighbours of p there is a point q that does not yet belong to any superpixel and the gradient difference between p and q is smaller than the set threshold, record q as belonging to the current superpixel in the label map and add q to L3; repeat step 3-3 until L3 is empty;
Step 3-4, repeat steps 3-2 and 3-3 until the seed queue L2 is empty; the resulting label map is the initial superpixel segmentation result.
Preferably, the specific method of taking the initial superpixels as vertices of the graph and, whenever two initial superpixels i, j are adjacent in the depth map, establishing an edge between them, so as to build the undirected graph G = (V, E), is:
Step 4-1, calculate the RGB mean value inside the initial superpixel i, i.e. the arithmetic mean of the R, G and B channel values over the pixels of the superpixel;
Step 4-2, take the initial superpixels as nodes; if superpixels i and j are adjacent, establish an edge between the two superpixels, yielding the undirected graph G = (V, E).
Preferably, the specific method of step 5, fusing the initial superpixels based on the differences of the RGB color statistics inside the superpixels under the principle of minimizing intra-class difference and maximizing inter-class difference, to obtain the variable-scale superpixels, is:
Initialize each superpixel to a region Ai and calculate the internal difference of the region:
Int(Ai) = max(e(vj, vk)), vj, vk ∈ Ai
Calculate the external difference between two regions:
diff(Ai, Aj) = min(e(vi, vj)), vi ∈ Ai, vj ∈ Aj
Calculate the minimum internal difference of any two regions:
Mint(Ai, Aj) = min(Int(Ai) + τ(Ai), Int(Aj) + τ(Aj))
where τ(Ai) = k/|Ai|, |Ai| is the number of points contained in region Ai, k is a set constant, and Mint(Ai, Aj) is the minimum internal difference of the two regions Ai and Aj.
Merge two regions if they satisfy the following formula, and repeat until no regions can be merged, resulting in the variable-scale superpixels:
diff(Ai, Aj) ≤ Mint(Ai, Aj).
Compared with the prior art, the invention has the following remarkable advantages:
1) The method samples seed points on the gradient map of the depth image by Poisson disk sampling, which better matches the distribution characteristics of human visual cells and favours compact, uniform sampling in non-uniform three-dimensional data;
2) The three-dimensional structure of the depth information is combined with the color information, and the gradient information of the depth map is exploited, so the segmentation result is more accurate and more interpretable;
3) The invention obtains superpixels with large scale differences in a single computation; compared with traditional single-scale methods it is more flexible and more robust, and the large scale differences among the superpixels also make multi-scale analysis easier.
The present invention is described in further detail below with reference to the attached drawings.
Drawings
FIG. 1 is a diagram illustrating a segmentation result of the present invention.
Fig. 2 is a flow chart of the present invention.
Detailed Description
A variable-scale supervoxel segmentation method for RGB-D images comprises the following steps:
Step 1, obtain the gradient map of the depth image, where the gradient map is calculated as:
Gx=f(x,y)-f(x-1,y)
Gy=f(x,y)-f(x,y-1)
In the formula, f(x, y) is the depth value at point (x, y) in the depth map, f(x-1, y) and f(x, y-1) are the depth values of the adjacent points (x-1, y) and (x, y-1) respectively, Gx and Gy are the difference values of the point in the x and y directions respectively, and d(x, y) = √(Gx² + Gy²) is the gradient magnitude of the point.
Step 2, perform Poisson disk sampling in the gradient map to obtain the seed point set; the specific method is as follows:
Step 2-1, randomly select a seed point seed from the gradient map of the depth image D obtained in step 1, initialize the queue to be sampled L1 and the seed queue L2 to empty, and add seed to L1;
Step 2-2, while the queue L1 is not empty, dequeue a seed point seed from L1 and, taking seed as the center, randomly sample a candidate point next_seed in the annulus between the concentric circles of radius R and 2R; if the distance from next_seed to every known seed point in the seed queue L2 is greater than R, add next_seed to both L1 and L2; if after K candidate attempts none satisfies the condition, remove seed from L1 and add it to L2; the points finally in L2 are the obtained seed points.
Step 3, taking each seed point as an initial point, assign to its superpixel the neighbouring points whose gradient change is smaller than a threshold, then expand from the resulting boundary points in the same way, to obtain the initial segmented superpixels; the specific method is as follows:
Step 3-1, initialize a label map of the same size as the depth map and fill it with 0, meaning that initially no point belongs to any superpixel;
Step 3-2, dequeue a seed point from the seed queue L2, initialize an expansion queue L3 to empty, and add the dequeued seed point to L3;
Step 3-3, dequeue a point p from the expansion queue L3; if among the four neighbours of p there is a point q that does not yet belong to any superpixel and the gradient difference between p and q is smaller than the set threshold, record q as belonging to the current superpixel in the label map and add q to L3; repeat step 3-3 until L3 is empty;
Step 3-4, repeat steps 3-2 and 3-3 until the seed queue L2 is empty; the resulting label map is the initial superpixel segmentation result.
Step 4, take the initial superpixels as vertices of the graph; if two initial superpixels i, j are adjacent in the depth map, establish an edge between them, so as to build the undirected graph G = (V, E). Specifically:
Step 4-1, calculate the RGB mean value inside the initial superpixel i, i.e. the arithmetic mean of the R, G and B channel values over the pixels of the superpixel;
Step 4-2, take the initial superpixels as nodes; if superpixels i and j are adjacent, establish an edge between the two superpixels, yielding the undirected graph G = (V, E);
and 5, merging the super pixels based on the method with the minimum intra-class difference and the maximum inter-class difference to obtain a final segmentation map, wherein the specific method comprises the following steps of:
initializing each superpixel to the internal difference of one region calculation region:
Int(Ai)=max(e(vj,vk)) vj,vk∈Ai
the external difference between the two regions is calculated:
diff(Ai,Aj)=min(e(vi,vj)) vi∈Ai,vj∈Aj
the minimum internal difference of any two regions is calculated:
Mint(Ai,Aj)=min((Int(Ai)+τ(Ai)),(Int(Aj,τ(Aj)))
where τ (Ai) ═ k/| Ai |, and | Ai | is region aiThe number of the points is included, k is a set constant, and Mint (Ai, Aj) is the minimum internal difference of any two areas of Ai and Aj;
merging two regions if they satisfy the following formula until no regions can be merged, resulting in a scaled superpixel:
Mint(Ai,Aj)<diff(Ai,Aj)。
Examples
FIG. 1 shows the result of processing a frame of RGB-D image captured by a mynt_eye camera with the method of the present invention.
Step 1, obtain the gradient map of the depth image. Let a frame of RGB-D image data be P with a resolution of 480 rows and 640 columns, where each data point comprises 3 color channels (R, G, B) and a depth channel (D). Compute the gradient of the depth image with formula (1) to obtain the gradient map D of the depth image.
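As a minimal sketch (assuming NumPy, and assuming the Euclidean magnitude d(x, y) = √(Gx² + Gy²), which the patent names but does not spell out), step 1 can be written as:

```python
import numpy as np

def depth_gradient(depth):
    """Formula (1): backward differences Gx, Gy of the depth map; the
    magnitude d(x, y) is taken as the Euclidean norm sqrt(Gx^2 + Gy^2)
    (an assumption -- the patent names d(x, y) without spelling it out)."""
    depth = np.asarray(depth, dtype=float)
    gx = np.zeros_like(depth)
    gy = np.zeros_like(depth)
    gx[:, 1:] = depth[:, 1:] - depth[:, :-1]   # f(x, y) - f(x-1, y)
    gy[1:, :] = depth[1:, :] - depth[:-1, :]   # f(x, y) - f(x, y-1)
    return np.hypot(gx, gy)                    # d(x, y)
```

Border pixels, which have no left or upper neighbour, keep a zero difference in the corresponding direction.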
Step 2:
Seed point selection: randomly select a point p0 from D as a seed point, set the radius R as the minimum threshold distance between the seed points to be generated, and sample in D with the Poisson disk sampling algorithm to obtain the seed point set.
and 2-1, randomly selecting a seed point seed from the gradient map of the depth image D obtained in the first step, initializing the queue L1 to be sampled to be empty, initializing the seed queue L2 to be empty, and adding the seed point seed into L1.
Step 2-2, while L1 is not empty, dequeue a seed point seed from L1; taking seed as the center and setting the radius threshold R to 20 (in pixel coordinate units), randomly sample a candidate next_seed in the annulus between the concentric circles of radius R and 2R; if the distance from next_seed to every known seed point is greater than R, add next_seed to both L1 and L2. If 40 candidate attempts all fail to meet the condition, remove seed from L1 and add it to L2.
Step 2-3, the points finally remaining in L2 are the obtained seed points.
Step 3:
and (3) performing super-pixel pre-segmentation, namely classifying the points with small gradient change beside the points into a super-pixel by taking various sub-points as initial points, expanding boundary points, and expanding the points with small gradient change beside the boundary points so as to obtain the initial super-pixel based on the gradient change.
Step 3-1, while L2 is not empty, dequeue a seed point from the L2 queue, initialize the expansion queue L3 to empty, and add the dequeued seed point to L3.
Step 3-2, while L3 is not empty, dequeue an expandable point p from L3; if there is an unexpanded point among the four neighbours of p, compare its gradient with that of p, and if the gradient difference is smaller than gradient_thresh (taken as 1.0 here), add the point to L3; repeat step 3-2 until L3 is empty.
Step 3-3, the result obtained is the initial segmented superpixels.
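The expansion of steps 3-1 to 3-3 is a 4-neighbour breadth-first region growing; a sketch with illustrative names, seeds given as (row, col) positions, and gradient_thresh = 1.0 as above:

```python
from collections import deque

def grow_superpixels(grad, seeds, thresh=1.0):
    """Steps 3-1 to 3-3 as 4-neighbour BFS region growing: the label map
    starts at 0 (unassigned) and each seed grows while the gradient
    difference to the frontier point stays below thresh (gradient_thresh)."""
    h, w = len(grad), len(grad[0])
    label = [[0] * w for _ in range(h)]
    for idx, (sy, sx) in enumerate(seeds, start=1):   # superpixel labels are 1-based
        if label[sy][sx]:                             # seed already absorbed by another region
            continue
        label[sy][sx] = idx
        l3 = deque([(sy, sx)])                        # expansion queue L3
        while l3:
            y, x = l3.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < h and 0 <= nx < w and not label[ny][nx]
                        and abs(grad[ny][nx] - grad[y][x]) < thresh):
                    label[ny][nx] = idx
                    l3.append((ny, nx))
    return label
```

Points left at label 0 (none, when the seeds cover every gradient plateau) would remain unassigned; the patent does not describe a fallback for them, so none is sketched here.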
Step 4: establish a graph model from the adjacency relations of the superpixels. Take the current superpixel as vertex i of the graph; if a superpixel is adjacent to i, establish an edge e(i, j) weighted by the RGB statistics inside superpixels i and j.
Step 4-1, calculate the RGB mean value inside the initial superpixel i, i.e. formula (2), where p in the formula is the number of pixels of the current superpixel.
Step 4-2, take the initial superpixel as node i; for each superpixel adjacent to i, take it as neighbour node j and establish e(i, j) with the absolute value of the difference of the color mean values of the two superpixels as the weight.
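Steps 4-1 and 4-2 can be sketched as follows (names are illustrative; the per-channel absolute differences of the means are summed into a single edge weight, which is one plausible reading of the weight described above):

```python
def build_region_graph(label, rgb):
    """Steps 4-1/4-2: nodes are the initial superpixels; adjacent superpixels
    get an edge weighted by the absolute difference of their RGB mean values
    (summed over the three channels here -- an assumed reading)."""
    h, w = len(label), len(label[0])
    sums, counts = {}, {}
    for y in range(h):
        for x in range(w):
            i = label[y][x]
            s = sums.setdefault(i, [0.0, 0.0, 0.0])
            for c in range(3):
                s[c] += rgb[y][x][c]
            counts[i] = counts.get(i, 0) + 1
    means = {i: [s[c] / counts[i] for c in range(3)] for i, s in sums.items()}
    edges = {}
    for y in range(h):
        for x in range(w):
            for ny, nx in ((y, x + 1), (y + 1, x)):   # right and down neighbours
                if ny < h and nx < w and label[ny][nx] != label[y][x]:
                    i, j = sorted((label[y][x], label[ny][nx]))
                    edges[(i, j)] = sum(abs(means[i][c] - means[j][c]) for c in range(3))
    return means, edges
```

Scanning only right and down neighbours visits each 4-adjacency exactly once, so every adjacent pair of superpixels yields exactly one undirected edge.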
Step 5: merge the superpixels based on the principle of minimum intra-class difference and maximum inter-class difference to obtain the final segmentation map.
Step 5-1, merge the initial supervoxels. Before merging, each supervoxel is initialized to a region Ai; during merging, the internal difference of a region is calculated with formula (3):
Int(Ai) = max(e(vj, vk)), vj, vk ∈ Ai (3)
The external difference between two regions is calculated with formula (4):
diff(Ai, Aj) = min(e(vi, vj)), vi ∈ Ai, vj ∈ Aj (4)
Mint(Ai, Aj), the minimum internal difference of the two regions Ai and Aj, is then calculated with formula (5):
Mint(Ai, Aj) = min(Int(Ai) + τ(Ai), Int(Aj) + τ(Aj)) (5)
where τ(Ai) = k/|Ai|, |Ai| is the number of points contained in region Ai, and k is a set constant. Two regions are then merged if they satisfy formula (6), and otherwise are not merged:
diff(Ai, Aj) ≤ Mint(Ai, Aj) (6)
The merging process continues until no regions in P can be merged, yielding the variable-scale supervoxels.
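The merging of step 5-1 follows the graph-based criterion of Felzenszwalb and Huttenlocher: edges are scanned in ascending weight order, and two regions merge when the external difference (the edge weight) does not exceed Mint. A union-find sketch with illustrative names, where `edges` maps a pair of superpixel ids to the edge weight and `sizes` gives |Ai|:

```python
def merge_regions(edges, tau_k=1.0, sizes=None):
    """Step 5 in the Felzenszwalb-Huttenlocher style: merge two regions when
    the connecting edge weight w satisfies w <= Mint(Ai, Aj), with
    Mint = min(Int(Ai) + tau(Ai), Int(Aj) + tau(Aj)) and tau(A) = k/|A|.
    Returns a find() function mapping each region id to its merged root."""
    parent = {}
    internal = {}                      # Int(A): largest edge weight inside A
    size = dict(sizes or {})

    def find(i):
        while parent.get(i, i) != i:
            parent[i] = parent.get(parent[i], parent[i])   # path halving
            i = parent[i]
        return i

    for (i, j), w in sorted(edges.items(), key=lambda kv: kv[1]):
        ri, rj = find(i), find(j)
        if ri == rj:
            continue
        mint = min(internal.get(ri, 0.0) + tau_k / size.get(ri, 1),
                   internal.get(rj, 0.0) + tau_k / size.get(rj, 1))
        if w <= mint:                  # weak boundary evidence: merge
            parent[rj] = ri
            internal[ri] = w           # ascending order => w is the new Int
            size[ri] = size.get(ri, 1) + size.get(rj, 1)
    return find
```

Because edges are processed in ascending order, the weight of the merging edge is always the largest edge seen so far inside the merged region, so Int can be updated in constant time.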
The final result is shown in FIG. 1. It can be seen that points with consistent gradient change are well clustered into the same object class, in agreement with the real scene, so the expected effect is achieved.

Claims (6)

1. A variable-scale supervoxel segmentation method for RGB-D images, characterized by comprising the following steps:
Step 1, obtaining a gradient map of the depth image;
Step 2, performing Poisson disk sampling in the gradient map to obtain a seed point set;
Step 3, taking each seed point as an initial point, assigning to its superpixel those neighbouring points whose gradient change is smaller than a threshold, then expanding from the resulting boundary points in the same way, to obtain the initial segmented superpixels;
Step 4, taking the initial superpixels as vertices of a graph, and establishing an edge between two initial superpixels i and j whenever they are adjacent in the depth map, so as to build an undirected graph G = (V, E);
Step 5, based on the differences of the RGB color statistics inside the superpixels, fusing the initial superpixels under the principle of minimizing intra-class difference and maximizing inter-class difference, to obtain the variable-scale superpixels.
2. The variable-scale supervoxel segmentation method for RGB-D images according to claim 1, characterized in that the gradient map of the depth image is calculated by the formula:
Gx=f(x,y)-f(x-1,y)
Gy=f(x,y)-f(x,y-1)
wherein f(x, y) is the depth value at point (x, y) in the depth map, f(x-1, y) and f(x, y-1) are the depth values of the adjacent points (x-1, y) and (x, y-1) respectively, Gx and Gy are the difference values at (x, y) in the x and y directions respectively, and d(x, y) = √(Gx² + Gy²) is the gradient magnitude at point (x, y).
3. The variable-scale supervoxel segmentation method for RGB-D images according to claim 1, characterized in that the specific method of step 2, performing Poisson disk sampling in the gradient map to obtain the seed point set, is:
Step 2-1, randomly select a seed point seed from the gradient map of the depth image D obtained in step 1, initialize the queue to be sampled L1 and the seed queue L2 to empty, and add seed to L1;
Step 2-2, while the queue L1 is not empty, dequeue a seed point seed from L1 and, taking seed as the center, randomly sample a candidate point next_seed in the annulus between the concentric circles of radius R and 2R; if the distance from next_seed to every known seed point in the seed queue L2 is greater than R, add next_seed to both L1 and L2; if after K candidate attempts none satisfies the condition, remove seed from L1 and add it to L2; the points finally in L2 are the obtained seed points.
4. The variable-scale supervoxel segmentation method for RGB-D images according to claim 1, characterized in that the specific method of step 3, taking each seed point as an initial point, assigning to its superpixel the neighbouring points whose gradient change is smaller than the threshold, and repeatedly expanding from the resulting boundary points, to obtain the initial segmented superpixels, is:
Step 3-1, initialize a label map of the same size as the depth map and fill it with 0, meaning that initially no point belongs to any superpixel;
Step 3-2, dequeue a seed point from the seed queue L2, initialize an expansion queue L3 to empty, and add the dequeued seed point to L3;
Step 3-3, dequeue a point p from the expansion queue L3; if among the four neighbours of p there is a point q that does not yet belong to any superpixel and the gradient difference between p and q is smaller than the set threshold, record q as belonging to the current superpixel in the label map and add q to L3; repeat step 3-3 until L3 is empty;
Step 3-4, repeat steps 3-2 and 3-3 until the seed queue L2 is empty; the resulting label map is the initial superpixel segmentation result.
5. The variable-scale supervoxel segmentation method for RGB-D images according to claim 1, characterized in that the specific method of taking the initial superpixels as vertices of the graph and, whenever two initial superpixels i, j are adjacent in the depth map, establishing an edge between them, so as to build the undirected graph G = (V, E), is:
Step 4-1, calculate the RGB mean value inside the initial superpixel i, i.e. the arithmetic mean of the R, G and B channel values over the pixels of the superpixel;
Step 4-2, take the initial superpixels as nodes; if superpixels i and j are adjacent, establish an edge between the two superpixels, yielding the undirected graph G = (V, E).
6. The variable-scale supervoxel segmentation method for RGB-D images according to claim 1, characterized in that the specific method of step 5, fusing the initial superpixels based on the differences of the RGB color statistics inside the superpixels under the principle of minimizing intra-class difference and maximizing inter-class difference, to obtain the variable-scale superpixels, is:
Initialize each superpixel to a region Ai and calculate the internal difference of the region:
Int(Ai) = max(e(vj, vk)), vj, vk ∈ Ai
Calculate the external difference between two regions:
diff(Ai, Aj) = min(e(vi, vj)), vi ∈ Ai, vj ∈ Aj
Calculate the minimum internal difference of any two regions:
Mint(Ai, Aj) = min(Int(Ai) + τ(Ai), Int(Aj) + τ(Aj))
where τ(Ai) = k/|Ai|, |Ai| is the number of points contained in region Ai, k is a set constant, and Mint(Ai, Aj) is the minimum internal difference of the two regions Ai and Aj;
merge two regions if they satisfy the following formula, and repeat until no regions can be merged, resulting in the variable-scale superpixels:
diff(Ai, Aj) ≤ Mint(Ai, Aj).
CN201910754481.8A 2019-08-15 2019-08-15 Variable-scale image segmentation method based on RGB-D Withdrawn CN110619636A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910754481.8A CN110619636A (en) 2019-08-15 2019-08-15 Variable-scale image segmentation method based on RGB-D


Publications (1)

Publication Number Publication Date
CN110619636A true CN110619636A (en) 2019-12-27

Family

ID=68921851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910754481.8A Withdrawn CN110619636A (en) 2019-08-15 2019-08-15 Variable-scale image segmentation method based on RGB-D

Country Status (1)

Country Link
CN (1) CN110619636A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860153A (en) * 2020-01-09 2020-10-30 Jiujiang University Scale-adaptive hyperspectral image classification method and system
CN111860153B (en) * 2020-01-09 2023-10-13 Jiujiang University Scale-adaptive hyperspectral image classification method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20191227