CN108986103B - Image segmentation method based on superpixel and multi-hypergraph fusion - Google Patents
- Publication number: CN108986103B (application CN201810562839.2A)
- Authority: CN (China)
- Prior art keywords: hypergraph, matrix, superpixel, vertex, super
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/11 — Region-based segmentation (G06T image data processing; G06T7/00 image analysis; G06T7/10 segmentation; edge detection)
- G06F18/2321 — Non-hierarchical clustering techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06T3/4076 — Super-resolution scaling using the original low-resolution images to iteratively correct the high-resolution images
- G06T2207/10004 — Indexing scheme for image analysis: still image; photographic image
Abstract
The invention discloses an image segmentation method based on superpixels and multi-hypergraph fusion, comprising the following steps: step 1, performing superpixel segmentation on the image to be segmented with the mature SLIC model; step 2, extracting multiple features from each superpixel block; step 3, regarding each superpixel block as a vertex of the graph and using an INH model to construct one hypergraph per feature over the superpixel blocks; step 4, fusing the information of the several hypergraphs from the random-walk perspective to construct a multi-hypergraph Laplacian matrix; and step 5, constructing and solving a spectral clustering model based on the multi-hypergraph Laplacian matrix. The method addresses the modeling of high-order relations among pixels in image segmentation and effectively improves segmentation precision.
Description
Technical Field
The invention belongs to the field of image segmentation, and particularly relates to an image segmentation method based on superpixels and multi-hypergraph fusion.
Background
Computer vision has been one of the research hotspots of computer science since its birth. It is also a multidisciplinary subject, touching optical information technology, automation, computing, integrated circuits, biology, psychology and other fields. Scientists in many countries began studying computer vision in the early 1960s, but for a long time overall progress was modest; it was not until the 1980s that the field reached a turning point and a wealth of important research results appeared. Computer vision has since developed remarkable performance in algorithms, vision systems, pattern recognition, and feature detection and description.
Image segmentation is a very basic yet critical task in computer vision: the quality of a segmentation result directly influences subsequent visual processing such as tracking, recognition and analysis. To this day it remains a fundamental problem without a complete solution. Image engineering comprises three parts, image processing, image analysis and image understanding, and both processing and analysis must begin with segmentation. In image analysis, analyzing image structure through extracted features is meaningful only if the segmentation is correct, and segmentation is likewise a prerequisite for image understanding. Segmentation is involved in almost every kind of image, such as medical images, remote sensing images and images from video traffic monitoring, so its range of application is very wide, which motivates the present work.
In current image segmentation, the correlations between the pixels of the image to be segmented must be modeled. Many existing methods consider only second-order relations, yet in many scenes high-order relations exist among the pixels; how to depict these high-order relations effectively is a key factor influencing the segmentation result, and the hypergraph structure must be considered to depict them efficiently. With traditional feature extraction, a single feature rarely achieves a good result, so fusing several features must be considered to improve segmentation precision. Depicting high-order relations at the pixel level inevitably makes the method too complex to realize, so superpixel pre-segmentation must be considered: the original image is first divided into superpixel blocks, and the high-order relations among those blocks are then depicted. Based on these three points, the invention proposes an image segmentation method based on superpixels and multi-hypergraph fusion. The invention first generates superpixel blocks by superpixel segmentation, effectively reducing computational complexity. Through multi-hypergraph fusion, information loss is greatly reduced while the high-order relations among superpixels are effectively described. The method markedly improves segmentation accuracy in both subjective visual terms and objective evaluation indices, and has high practical value.
Disclosure of Invention
The invention aims to provide an image segmentation method based on superpixels and multi-hypergraph fusion, which addresses the modeling of high-order relations among pixels in image segmentation and effectively improves segmentation precision.
In order to achieve the above purpose, the solution of the invention is:
an image segmentation method based on superpixels and multi-hypergraph fusion comprises the following steps:
step 1, performing superpixel segmentation on the image to be segmented;
step 2, extracting multiple features from each superpixel block;
step 3, regarding each superpixel block as a vertex and constructing one hypergraph for each of the multiple features;
step 4, fusing the information of the several hypergraphs from the random-walk perspective to construct a multi-hypergraph Laplacian matrix;
step 5, constructing and solving a spectral clustering model based on the multi-hypergraph Laplacian matrix.
In step 1, the SLIC model is used to perform the superpixel segmentation on the image to be segmented.
In step 2, the extracted features include color, gradient, and texture.
In step 3, an INH model is adopted to construct the hypergraphs: each superpixel block is regarded as a vertex of the hypergraph, and the similarity between superpixel blocks serves as the weight of the edges between vertices.
In step 4, the construction of the multi-hypergraph Laplacian matrix comprises the following steps:
step 41, a transition probability matrix of the hypergraph random walk is represented by P, and each element of P is expressed as

p(u,v) = Σ_{e∈E} ω(e)·(h(u,e)/d(u))·(h(v,e)/δ(e))

where ω(e) is the weight of hyperedge e; h(u,e) = 1 denotes that vertex u is on hyperedge e and h(u,e) = 0 that it is not, and likewise h(v,e) for vertex v; d(u) is the degree of vertex u; and δ(e) is the degree of hyperedge e;
the steady-state distribution of the random walk at vertices u and v is expressed as

π(u) = d(u)/vol(V),  π(v) = d(v)/vol(V)

where d(u) and d(v) are the degrees of vertices u and v, vol(V) = Σ_{v∈V} d(v) is the sum of the degrees of all vertices, and V is the vertex set of the hypergraph;
interpreting two-hypergraph fusion from the random-walk perspective, β_i(u) is the weighting factor of the ith hypergraph at vertex u, and α is used to balance the weights between the hypergraphs:

β_1(u) = α·π_1(u)/π(u),  β_2(u) = (1−α)·π_2(u)/π(u)

where π_1(u) and π_2(u) denote the steady-state distributions of the random walk at vertex u in the 1st and 2nd hypergraphs, respectively;
the transition probability matrix between superpixels is expressed as

p(u,v) = β_1(u)·p_1(u,v) + β_2(u)·p_2(u,v)

where p_1(u,v) and p_2(u,v) denote the corresponding entries of the transition probability matrices of the 1st and 2nd hypergraphs, respectively;

the steady-state distribution at vertex v is expressed as

π(v) = α·π_1(v) + (1−α)·π_2(v)
step 42, generalizing the above method to N hypergraphs:

p(u,v) = Σ_{i=1}^{N} (α_i·π_i(u)/π(u))·p_i(u,v),  π(v) = Σ_{i=1}^{N} α_i·π_i(v),  i.e. ΠP = Σ_{i=1}^{N} α_i·Π_i·P_i

where Π_i is the matrix (diagonal) form of the steady-state distribution of the ith hypergraph, α_i is the weight of the ith hypergraph, P_i is the transition probability matrix of the ith hypergraph random walk, and N is the number of hypergraphs;
obtaining the Laplacian matrix after multi-hypergraph fusion:

L = Π − (ΠP + (ΠP)^T)/2 = Π − (1/2)·Σ_{i=1}^{N} α_i·(Π_i·P_i + (Π_i·P_i)^T)

where Π = diag(π) and the superscript T denotes the matrix transpose.
The specific content of the step 5 is as follows:
step 51, the basic spectral clustering model is expressed as

min_X Tr(X^T·L·X),  s.t. X^T·X = I

where Tr denotes the trace of a matrix, X is the relaxed cluster-indicator matrix whose rows correspond to the superpixels, L is the Laplacian matrix, and D is the diagonal degree matrix from which L is formed;
the spectral clustering model based on the multi-hypergraph Laplacian matrix is expressed as

min_{X,α} Tr(X^T·(Π − (1/2)·Σ_{i=1}^{N} α_i·(Π_i·P_i + (Π_i·P_i)^T))·X) + λ·‖α‖²,  s.t. X^T·X = I, Σ_{i=1}^{N} α_i = 1, α_i ≥ 0

where Tr denotes the trace of a matrix, N the number of hypergraphs, α_i the weight of the ith hypergraph, Π the matrix form of the steady-state distribution, P_i the transition probability matrix of the ith hypergraph random walk, and λ a balance factor;
step 52, solve by alternating iterations; first fix α and update X:

min_X Tr(X^T·L·X),  s.t. X^T·X = I

whose solution is given by the k eigenvectors of L associated with the smallest eigenvalues;
step 53, fix X and update α:

min_α Σ_{i=1}^{N} α_i·c_i + λ·‖α‖²,  s.t. Σ_{i=1}^{N} α_i = 1

where c_i = Tr(X^T·Π_i·X) − Tr(M_i) and M_i = X^T·(Π_i·P_i)·X; solving this optimization model with a Lagrange multiplier yields

α_i = 1/N + (1/(2λ))·((1/N)·Σ_{j=1}^{N} c_j − c_i);
obtaining X ∈ R^{n×k}, a matrix of k column vectors whose n row vectors are regarded as n different samples, one per superpixel block; k-means clustering of these samples finally partitions the superpixel blocks and gives the final segmentation result.
With this scheme, superpixel blocks are first generated by superpixel segmentation, effectively reducing computational complexity. Through multi-hypergraph fusion, information loss is greatly reduced while the high-order relations among superpixels are effectively described. The method markedly improves segmentation accuracy in both subjective visual terms and objective evaluation indices, and has high practical value.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The technical solution and the advantages of the present invention will be described in detail with reference to the accompanying drawings.
As shown in FIG. 1, the invention provides an image segmentation method based on superpixels and multi-hypergraph fusion, comprising the following steps:
Step 1, perform superpixel segmentation on the image to be segmented using the mature SLIC model: SLIC effectively segments the original image into uniform, orderly superpixel blocks; the pixels within a block are highly consistent, so each superpixel block carries semantic features to a certain extent.
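As an illustrative sketch (not part of the patent), the core idea of SLIC — clustering pixels on joint color-and-position features with a compactness trade-off — can be approximated in a few lines of NumPy. The function name `slic_like_superpixels` and its parameters are my own; a real system would use the SLIC routine of an image-processing library:

```python
import numpy as np

def slic_like_superpixels(image, n_segments=16, compactness=0.1, n_iter=10, seed=0):
    """Cluster pixels on (color, position) features, as SLIC does in essence.

    image: (H, W, C) float array in [0, 1].
    Returns an (H, W) integer label map with values in [0, n_segments).
    """
    h, w, c = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Stack color and (scaled) spatial coordinates into one feature per pixel;
    # `compactness` weights position against color (a simplified trade-off).
    feats = np.concatenate(
        [image.reshape(-1, c),
         compactness * np.stack([ys.ravel() / h, xs.ravel() / w], axis=1)],
        axis=1)
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), n_segments, replace=False)]
    for _ in range(n_iter):  # plain Lloyd iterations
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_segments):
            if (labels == k).any():
                centers[k] = feats[labels == k].mean(axis=0)
    return labels.reshape(h, w)
```

Real SLIC additionally restricts each cluster's search window and enforces connectivity; this sketch keeps only the feature-space clustering.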
Step 2, extract multiple features from each superpixel block: feature extraction uses traditional methods, and the extracted features include color, gradient, texture and the like; extracting several features ensures that each superpixel block is well described and expressed.
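A minimal sketch of per-superpixel feature extraction (illustrative, not the patent's exact feature set): mean color for the color cue, mean gradient magnitude for the gradient cue, and intensity standard deviation as a crude texture proxy. All names are assumptions:

```python
import numpy as np

def superpixel_features(image, labels):
    """One feature vector per superpixel: mean color, mean gradient
    magnitude, and intensity standard deviation as a crude texture cue.

    image: (H, W, 3) float array; labels: (H, W) int label map.
    Returns an (n_superpixels, 5) array ordered by label id.
    """
    gray = image.mean(axis=2)
    gy, gx = np.gradient(gray)                # finite-difference gradients
    grad_mag = np.hypot(gx, gy)
    feats = []
    for k in np.unique(labels):
        mask = labels == k
        feats.append(np.concatenate([
            image[mask].mean(axis=0),         # color: mean RGB
            [grad_mag[mask].mean()],          # gradient strength
            [gray[mask].std()],               # texture proxy
        ]))
    return np.asarray(feats)
```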
Step 3, regard each superpixel block as a vertex and construct one hypergraph per feature: for every feature extracted in step 2, an INH model builds a hypergraph whose vertices are the superpixel blocks and whose edge weights are the similarities between blocks.
Step 4, fuse the information of the several hypergraphs from the random-walk perspective to construct the multi-hypergraph Laplacian matrix: the transition probabilities and steady-state distributions obtained from the individual feature hypergraphs are combined into a single Laplacian matrix; the weights of the different features appear as matrix parameters and are learned in the subsequent steps.
It should be noted that the core steps of the invention are the construction of the multi-hypergraph Laplacian matrix and of the spectral clustering model based on it; the detailed description therefore focuses on steps 4 and 5, while steps 1, 2 and 3 can be implemented with existing techniques.
Notation: the hypergraph corresponding to the image is G = (V, E), where V is the vertex set (one vertex per superpixel) and E is the hyperedge set; H is the corresponding incidence matrix, L the Laplacian matrix, D a diagonal matrix, and N the number of hypergraphs. For a hyperedge e and two different vertices u and v of the hypergraph, d(u) is the degree of vertex u, δ(e) the degree of hyperedge e, ω(e) the weight of hyperedge e, vol(V) the sum of the degrees of the vertices in V, and π(v) the steady-state distribution of the random walk; h(u,e) = 1 indicates that vertex u is on hyperedge e, and h(u,e) = 0 that it is not.
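The incidence matrix H, vertex degrees d(u) and hyperedge degrees δ(e) defined above can be materialized as follows. This sketch assumes an INH-style rule in which every vertex spawns one hyperedge containing itself and its k nearest neighbours in feature space; the function name and the Gaussian weighting are assumptions, not the patent's exact construction:

```python
import numpy as np

def build_hypergraph(features, k=3, sigma=1.0):
    """INH-style construction: one hyperedge per vertex, containing the
    vertex and its k nearest neighbours in feature space.

    Returns the incidence matrix H (n x n, H[v, e] = 1 iff vertex v is on
    hyperedge e), hyperedge weights w, vertex degrees d, edge degrees delta.
    """
    n = len(features)
    dist = np.linalg.norm(features[:, None] - features[None, :], axis=2)
    H = np.zeros((n, n))
    w = np.zeros(n)
    for e in range(n):
        members = np.argsort(dist[e])[:k + 1]   # the vertex itself + k NNs
        H[members, e] = 1.0
        # Weight each hyperedge by the mean distance of its members
        # to the centroid vertex, mapped through a Gaussian kernel.
        w[e] = np.exp(-dist[e, members].mean() ** 2 / sigma ** 2)
    d = H @ w              # d(u): sum of weights of hyperedges containing u
    delta = H.sum(axis=0)  # delta(e): number of vertices on hyperedge e
    return H, w, d, delta
```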
In step 4, the construction of the multi-hypergraph Laplacian matrix comprises the following steps:
(1) let P denote the transition probability matrix of the hypergraph random walk; each element of P can be expressed as

p(u,v) = Σ_{e∈E} ω(e)·(h(u,e)/d(u))·(h(v,e)/δ(e))

and the steady-state distribution of the random walk as

π(v) = d(v)/vol(V)
interpreting two-hypergraph fusion from the random-walk perspective, β_i(u) is the weighting factor of the ith hypergraph at vertex u and α balances the weight between the two hypergraphs:

β_1(u) = α·π_1(u)/π(u),  β_2(u) = (1−α)·π_2(u)/π(u)

the transition probability matrix between superpixels can then be expressed as

p(u,v) = β_1(u)·p_1(u,v) + β_2(u)·p_2(u,v)

and the steady-state distribution as

π(v) = α·π_1(v) + (1−α)·π_2(v)
(2) the fusion above combines only two hypergraphs, whereas the method must fuse several; generalizing to N hypergraphs, with Π_i the matrix (diagonal) form of the steady-state distribution of the ith hypergraph, α_i the weight of the ith hypergraph, and P_i the transition probability matrix of the ith hypergraph random walk:

ΠP = Σ_{i=1}^{N} α_i·Π_i·P_i,  π(v) = Σ_{i=1}^{N} α_i·π_i(v)

from which the Laplacian matrix after multi-hypergraph fusion can be obtained:

L = Π − (ΠP + (ΠP)^T)/2 = Π − (1/2)·Σ_{i=1}^{N} α_i·(Π_i·P_i + (Π_i·P_i)^T)
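The random-walk quantities of this step can be sketched numerically. The code below assumes the standard hypergraph random walk, P = Dv⁻¹·H·W·De⁻¹·Hᵀ with stationary distribution π(v) = d(v)/vol(V), and the fused Laplacian L = Π − (ΠP + (ΠP)ᵀ)/2; the function names are mine:

```python
import numpy as np

def hypergraph_walk(H, w):
    """Transition matrix P and steady state pi of the hypergraph random
    walk: P = Dv^-1 H W De^-1 H^T, pi(v) = d(v) / vol(V)."""
    d = H @ w                  # vertex degrees
    delta = H.sum(axis=0)      # hyperedge degrees
    P = (H * w / delta) @ H.T / d[:, None]
    pi = d / d.sum()           # vol(V) = sum of vertex degrees
    return P, pi

def fused_laplacian(graphs, alpha):
    """graphs: list of (P_i, pi_i); alpha: hypergraph weights summing to 1.
    Returns L = Pi - (Pi P + (Pi P)^T) / 2, with Pi P = sum_i alpha_i Pi_i P_i."""
    n = graphs[0][0].shape[0]
    Pi = np.zeros(n)
    PiP = np.zeros((n, n))
    for a, (P, pi) in zip(alpha, graphs):
        Pi += a * pi
        PiP += a * pi[:, None] * P     # diag(pi_i) @ P_i, weighted
    return np.diag(Pi) - (PiP + PiP.T) / 2
```

The resulting L is symmetric with zero row sums, as a Laplacian should be.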
in the step 5, the construction and solution of the hypergraph spectral clustering model comprises the following steps:
(1) the basic model of spectral clustering is expressed as follows:
s.t.XTX=I
the spectral clustering model based on the multiple hypergraph laplacian matrix can be expressed as follows (λ is a balance factor):
XTX=I
(2) solve by alternating iterations; first fix α and update X:

min_X Tr(X^T·L·X),  s.t. X^T·X = I

which reduces to an eigendecomposition problem: X is formed from the k eigenvectors of L with the smallest eigenvalues.
(3) fix X and update α:

min_α Σ_{i=1}^{N} α_i·c_i + λ·‖α‖²,  s.t. Σ_{i=1}^{N} α_i = 1

where c_i = Tr(X^T·Π_i·X) − Tr(M_i) and M_i = X^T·(Π_i·P_i)·X; the optimization model can be solved with a Lagrange multiplier to obtain

α_i = 1/N + (1/(2λ))·((1/N)·Σ_{j=1}^{N} c_j − c_i)
This yields X ∈ R^{n×k}, a matrix of k column vectors whose n row vectors are regarded as n different samples, one per superpixel block; k-means clustering of these samples finally partitions the superpixel blocks, and after certain post-processing steps the final segmentation result is obtained.
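A sketch of this final solve, under the assumption that the fixed-α subproblem is the usual trace minimization: take the k eigenvectors of L with smallest eigenvalues, then run k-means on the rows of the embedding (the function name and the plain Lloyd loop are illustrative):

```python
import numpy as np

def spectral_segment(L, k, n_iter=20, seed=0):
    """Fixed-alpha X update plus clustering: the k eigenvectors of L with
    smallest eigenvalues solve min Tr(X^T L X) s.t. X^T X = I; k-means on
    the rows of X then assigns each superpixel to a segment."""
    vals, vecs = np.linalg.eigh(L)        # L symmetric -> ascending eigenvalues
    X = vecs[:, :k]                       # n x k spectral embedding
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):               # plain k-means on the embedding
        d2 = ((X[:, None] - centers[None]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

For a graph with k disconnected components, the embedding rows are constant within each component, so the clustering recovers the components exactly.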
The above embodiments are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modifications made on the basis of the technical scheme according to the technical idea of the present invention fall within the protection scope of the present invention.
Claims (4)
1. An image segmentation method based on superpixels and multi-hypergraph fusion is characterized by comprising the following steps of:
step 1, performing superpixel segmentation on an image to be segmented;
step 2, extracting various characteristics of each super pixel block;
step 3, carrying out hypergraph construction based on superpixel blocks for each of the multiple features;
step 4, fusing the information of a plurality of hypergraphs from the angle of random walk to construct a multi-hypergraph Laplace matrix;
in the step 4, the multi-hypergraph laplacian matrix constructing part comprises the following steps:
step 41, a transition probability matrix of the hypergraph random walk is represented by P, and each element of P is expressed as

p(u,v) = Σ_{e∈E} ω(e)·(h(u,e)/d(u))·(h(v,e)/δ(e))

where ω(e) is the weight of hyperedge e; h(u,e) = 1 denotes that vertex u is on hyperedge e and h(u,e) = 0 that it is not, and likewise h(v,e) for vertex v; d(u) is the degree of vertex u; and δ(e) is the degree of hyperedge e;
the steady-state distribution of the random walk at vertices u and v is expressed as

π(u) = d(u)/vol(V),  π(v) = d(v)/vol(V)

where d(v) is the degree of vertex v, vol(V) is the sum of the degrees of the vertices in V, and V is the vertex set of the hypergraph;
interpreting two-hypergraph fusion from the random-walk perspective, β_i(u) is the weighting factor of the ith hypergraph at vertex u, and α is used to balance the weights between the hypergraphs:

β_1(u) = α·π_1(u)/π(u),  β_2(u) = (1−α)·π_2(u)/π(u)

where π_1(u) and π_2(u) denote the steady-state distributions of the random walk at vertex u in the 1st and 2nd hypergraphs, respectively;
the transition probability matrix between superpixels is expressed as

p(u,v) = β_1(u)·p_1(u,v) + β_2(u)·p_2(u,v)

where p_1(u,v) and p_2(u,v) denote the corresponding entries of the transition probability matrices of the 1st and 2nd hypergraphs, respectively;

the steady-state distribution at vertex v is expressed as

π(v) = α·π_1(v) + (1−α)·π_2(v)
step 42, generalizing the above method to N hypergraphs:

p(u,v) = Σ_{i=1}^{N} (α_i·π_i(u)/π(u))·p_i(u,v),  π(v) = Σ_{i=1}^{N} α_i·π_i(v),  i.e. ΠP = Σ_{i=1}^{N} α_i·Π_i·P_i

where Π_i is the matrix (diagonal) form of the steady-state distribution of the ith hypergraph, α_i is the weight of the ith hypergraph, P_i is the transition probability matrix of the ith hypergraph random walk, and N is the number of hypergraphs;
obtaining the Laplacian matrix after multi-hypergraph fusion:

L = Π − (ΠP + (ΠP)^T)/2 = Π − (1/2)·Σ_{i=1}^{N} α_i·(Π_i·P_i + (Π_i·P_i)^T)

where Π = diag(π) and the superscript T denotes the matrix transpose;
step 5, constructing a spectral clustering model based on the multi-hypergraph Laplace matrix and solving: constructing a spectral clustering model based on the obtained multi-hypergraph Laplace matrix, and solving by using a cross iteration method;
the specific content of the step 5 is as follows:
step 51, the basic spectral clustering model is expressed as

min_X Tr(X^T·L·X),  s.t. X^T·X = I

where Tr denotes the trace of a matrix, X is the relaxed cluster-indicator matrix whose rows correspond to the superpixels, L is the Laplacian matrix, and D is the diagonal degree matrix from which L is formed;
the spectral clustering model based on the multi-hypergraph Laplacian matrix is expressed as

min_{X,α} Tr(X^T·(Π − (1/2)·Σ_{i=1}^{N} α_i·(Π_i·P_i + (Π_i·P_i)^T))·X) + λ·‖α‖²,  s.t. X^T·X = I, Σ_{i=1}^{N} α_i = 1, α_i ≥ 0

where Π represents the matrix form of the steady-state distribution and λ a balance factor;
step 52, solve by alternating iterations; first fix α and update X:

min_X Tr(X^T·L·X),  s.t. X^T·X = I

whose solution is given by the k eigenvectors of L associated with the smallest eigenvalues;
step 53, fix X and update α:

min_α Σ_{i=1}^{N} α_i·c_i + λ·‖α‖²,  s.t. Σ_{i=1}^{N} α_i = 1

where c_i = Tr(X^T·Π_i·X) − Tr(M_i) and M_i = X^T·(Π_i·P_i)·X; solving this constrained optimization with a Lagrange multiplier yields

α_i = 1/N + (1/(2λ))·((1/N)·Σ_{j=1}^{N} c_j − c_i);
obtaining X ∈ R^{n×k}, a matrix of k column vectors whose n row vectors are regarded as n different samples, one per superpixel block; k-means clustering of these samples finally partitions the superpixel blocks and gives the final segmentation result.
2. The image segmentation method based on superpixel and multi-hypergraph fusion as claimed in claim 1, characterized in that: in the step 1, the SLIC model is used for carrying out super-pixel segmentation on the image to be segmented.
3. The image segmentation method based on superpixel and multi-hypergraph fusion as claimed in claim 1, characterized in that: in step 2, the extracted features include color, gradient, and texture.
4. The image segmentation method based on superpixel and multi-hypergraph fusion as claimed in claim 1, characterized in that: in the step 3, an INH model is adopted to construct the hypergraph, each superpixel block is regarded as a vertex of the graph, and the similarity between the superpixel blocks is used as the weight of edges between the vertices.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810562839.2A | 2018-06-04 | 2018-06-04 | Image segmentation method based on superpixel and multi-hypergraph fusion |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN108986103A | 2018-12-11 |
| CN108986103B | 2022-06-07 |
Family: ID=64539970

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810562839.2A | Image segmentation method based on superpixel and multi-hypergraph fusion | 2018-06-04 | 2018-06-04 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN108986103B (en) |
Families Citing this family (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109741358B | 2018-12-29 | 2020-11-06 | 北京工业大学 | Superpixel segmentation method based on adaptive hypergraph learning |
| CN111967485B | 2020-04-26 | 2024-01-05 | 中国人民解放军火箭军工程大学 | Air-ground infrared target tracking method based on probability hypergraph learning |
| CN112446417B | 2020-10-16 | 2022-04-12 | 山东大学 | Spindle-shaped fruit image segmentation method and system based on multilayer superpixel segmentation |
Citations (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101976348A | 2010-10-21 | 2011-02-16 | 中国科学院深圳先进技术研究院 | Image clustering method and system |
| CN103544697A | 2013-09-30 | 2014-01-29 | 南京信息工程大学 | Hypergraph spectrum analysis based image segmentation method |
| CN104899253A | 2015-05-13 | 2015-09-09 | 复旦大学 | Cross-modality image-label relevance learning method facing social image |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |