CN108986103B - Image segmentation method based on superpixel and multi-hypergraph fusion - Google Patents


Info

Publication number: CN108986103B
Application number: CN201810562839.2A
Authority: CN (China)
Prior art keywords: hypergraph, matrix, superpixel, vertex, super
Legal status: Active (granted)
Inventors: 杨明 (Yang Ming), 王凯翔 (Wang Kaixiang)
Current and original assignee: Nanjing Normal University
Other versions: CN108986103A (Chinese)
Application filed by Nanjing Normal University; published as CN108986103A, granted as CN108986103B

Classifications

    • G06T 7/11 Region-based segmentation (G06T 7/00 Image analysis; G06T 7/10 Segmentation; edge detection)
    • G06F 18/2321 Non-hierarchical clustering techniques using statistics or function optimisation, e.g. modelling of probability density functions (G06F 18/00 Pattern recognition; G06F 18/23 Clustering techniques)
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06T 3/4076 Scaling of whole images or parts thereof based on super-resolution, using the original low-resolution images to iteratively correct the high-resolution images
    • G06T 2207/10004 Still image; photographic image (indexing scheme for image analysis or enhancement; image acquisition modality)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image segmentation method based on superpixels and multi-hypergraph fusion, comprising the following steps: step 1, performing superpixel segmentation on the image to be segmented with the mature SLIC model; step 2, extracting multiple features from each superpixel block; step 3, regarding each superpixel block as a vertex of a graph and, for each of the multiple features, constructing a hypergraph over the superpixel blocks with the INH model; step 4, fusing the information of the multiple hypergraphs from the random walk perspective to construct a multi-hypergraph Laplacian matrix; and step 5, constructing and solving a spectral clustering model based on the multi-hypergraph Laplacian matrix. The method addresses the problem of characterizing high-order relations among pixels in image segmentation and effectively improves segmentation accuracy.

Description

Image segmentation method based on super-pixel and multi-hypergraph fusion
Technical Field
The invention belongs to the field of image segmentation, and particularly relates to an image segmentation method based on superpixels and multi-hypergraph fusion.
Background
Computer vision has been one of the research hotspots in computer science since the field's inception. It is also a multidisciplinary subject, touching on optical information technology, automation, computer technology, integrated circuits, biology, psychology, and other directions. Scientists in many countries began studying computer vision in the early 1960s, but for a long time overall progress was limited; the turning point came in the 1980s, when a large number of important research results were obtained. Computer vision has since achieved remarkable performance in algorithms, vision systems, pattern recognition, and feature detection and description.
Image segmentation is a basic yet critical task in computer vision: the quality of a segmentation result directly influences subsequent visual processing, including tracking, recognition, and analysis. To this day it remains a fundamental problem that is not completely solved. Image engineering mainly comprises image processing, image analysis, and image understanding, and both processing and analysis must begin with segmentation. In image analysis, analyzing image structure from extracted features is only meaningful when the segmentation is correct; segmentation is likewise a prerequisite for image understanding. Segmentation is involved in almost every kind of image, such as medical images, remote sensing images, and images in video traffic monitoring, so its range of application is very wide; this is the motivation for the work presented here.
In current image segmentation, the correlations among the pixels of the image to be segmented must be considered. Some existing methods attend only to second-order relations, yet in many scenes high-order relations exist among the pixels; how to characterize these high-order relations effectively is a key factor influencing the segmentation result, and the hypergraph structure should be considered to characterize them efficiently. With traditional feature extraction, a single feature rarely achieves a good effect, and fusing multiple features should be considered to improve segmentation accuracy. Characterizing high-order relations at the pixel level would make the method prohibitively complex and hard to realize, so superpixel pre-segmentation should be considered: the original image is first partitioned into superpixel blocks, and the high-order relations among the blocks are then characterized. Based on these three points, the invention provides an image segmentation method based on superpixels and multi-hypergraph fusion. The method first generates superpixel blocks with a superpixel segmentation technique, effectively reducing computational complexity. Through multi-hypergraph fusion, information loss is greatly reduced while the high-order relations among superpixels are effectively characterized. The method significantly improves segmentation accuracy in both subjective visual quality and objective evaluation indexes, and has high practical value.
Disclosure of Invention
The invention aims to provide an image segmentation method based on superpixels and multi-hypergraph fusion, which can solve the problem of characterizing high-order relations among pixels in image segmentation and effectively improve segmentation accuracy.
In order to achieve the above purpose, the solution of the invention is:
an image segmentation method based on superpixels and multi-hypergraph fusion comprises the following steps:
step 1, performing superpixel segmentation on the image to be segmented;
step 2, extracting various characteristics of each super pixel block;
step 3, carrying out hypergraph construction based on superpixel blocks for each of the multiple features;
step 4, fusing the information of a plurality of hypergraphs from the angle of random walk to construct a multi-hypergraph Laplace matrix;
step 5, constructing a spectral clustering model based on the multi-hypergraph Laplace matrix and solving: and constructing a spectral clustering model based on the obtained multi-hypergraph Laplace matrix, and solving by using a cross iteration method.
In step 1, the SLIC model is used to perform superpixel segmentation on the image to be segmented.
In step 2, the extracted features include color, gradient, and texture.
In the step 3, an INH model is adopted to construct the hypergraph, each superpixel block is regarded as a vertex of the graph, and the similarity between the superpixel blocks is used as the weight of edges between the vertices.
In step 4, constructing the multi-hypergraph Laplacian matrix comprises the following steps:
step 41, a transition probability matrix of the hypergraph random walk is represented by P, and each element of P is expressed as follows:
$$p(u,v)=\sum_{e\in E}\omega(e)\,\frac{h(u,e)}{d(u)}\,\frac{h(v,e)}{\delta(e)}$$
where ω(e) is the weight of hyperedge e, h(u,e)=1 denotes that vertex u is on hyperedge e and h(u,e)=0 that it is not, h(v,e)=1 denotes that vertex v is on hyperedge e and h(v,e)=0 that it is not, d(u) is the degree of vertex u, and δ(e) is the degree of hyperedge e;
the steady state distributions of the random walks of vertices u and v are expressed as follows:
$$\pi(u)=\frac{d(u)}{\mathrm{vol}(V)},\qquad\pi(v)=\frac{d(v)}{\mathrm{vol}(V)}$$
where d(u) and d(v) are the degrees of vertices u and v, vol(V) is the sum of the degrees of the vertices in V, and V is the vertex set of the hypergraph;
Interpreting multi-hypergraph segmentation from the random walk perspective, β_i(u) is the weighting factor of the ith hypergraph and α is used to balance the weights between the hypergraphs:
$$\beta_1(u)=\frac{\alpha\,\pi_1(u)}{\alpha\,\pi_1(u)+(1-\alpha)\,\pi_2(u)}$$

$$\beta_2(u)=\frac{(1-\alpha)\,\pi_2(u)}{\alpha\,\pi_1(u)+(1-\alpha)\,\pi_2(u)}$$
where π_1(u) and π_2(u) denote the steady-state distributions of the random walk of vertex u in the 1st and 2nd hypergraphs, respectively;
the transition probability matrix between superpixels is expressed as:
$$p(u,v)=\beta_1(u)\,p_1(u,v)+\beta_2(u)\,p_2(u,v)$$
where p_1(u,v) and p_2(u,v) denote the corresponding elements of the transition probability matrices of the 1st and 2nd hypergraphs, respectively;
the steady state distribution of the vertex v is expressed as:
$$\pi(v)=\alpha\,\pi_1(v)+(1-\alpha)\,\pi_2(v)$$
step 42, generalizing the above method to a plurality of hypergraphs:
$$\Pi=\sum_{i=1}^{N}\alpha_i\,\Pi_i$$

$$P=\Pi^{-1}\sum_{i=1}^{N}\alpha_i\,\Pi_i P_i$$
where Π_i denotes the steady-state distribution of the ith hypergraph in diagonal-matrix form, α_i the weight of the ith hypergraph, P_i the transition probability matrix of the random walk on the ith hypergraph, and N the number of hypergraphs;
obtaining a Laplace matrix after multi-hypergraph fusion:
$$L=I-\frac{1}{2}\left(\Pi^{1/2}P\,\Pi^{-1/2}+\Pi^{-1/2}P^{T}\,\Pi^{1/2}\right)$$
where the superscript T denotes the transpose.
The specific content of the step 5 is as follows:
step 51, the basic model expression of spectral clustering is as follows:
$$\min_{X}\ \mathrm{Tr}\!\left(X^{T}LX\right)\qquad\text{s.t.}\ X^{T}X=I$$
where Tr denotes the trace of a matrix, X is the relaxed cluster-indicator matrix whose rows index the superpixels, D is a diagonal matrix, and L is the Laplacian matrix;
the spectral clustering model based on the multi-hypergraph laplacian matrix is expressed as follows:
$$\min_{X,\alpha}\ \mathrm{Tr}\!\left(X^{T}\Big(I-\sum_{i=1}^{N}\alpha_i\,\Pi_i P_i\Big)X\right)+\lambda\lVert\alpha\rVert^{2}$$

$$\text{s.t.}\quad\sum_{i=1}^{N}\alpha_i=1,\ \alpha_i\ge 0,\ X^{T}X=I$$
where Tr denotes the trace of the matrix, N the number of hypergraphs, α_i the weight of the ith hypergraph, Π_i the steady-state distribution of the ith hypergraph in diagonal-matrix form, P_i the transition probability matrix of the random walk on the ith hypergraph, and λ a balance factor;
step 52, solving by using a cross iteration method, firstly fixing alpha, updating X:
$$\min_{X}\ \mathrm{Tr}\!\left(X^{T}\Big(I-\sum_{i=1}^{N}\alpha_i\,\Pi_i P_i\Big)X\right)\qquad\text{s.t.}\ X^{T}X=I$$
step 53, fix X, update α:
$$\min_{\alpha}\ -\sum_{i=1}^{N}\alpha_i\,\mathrm{Tr}(M_i)+\lambda\lVert\alpha\rVert^{2}\qquad\text{s.t.}\ \sum_{i=1}^{N}\alpha_i=1,\ \alpha_i\ge 0$$
where M_i = X^T (Π_i P_i) X; solving this optimization model yields:
$$\alpha_i=\frac{1}{N}+\frac{1}{2\lambda}\left(\mathrm{Tr}(M_i)-\frac{1}{N}\sum_{j=1}^{N}\mathrm{Tr}(M_j)\right)$$
This yields X ∈ R^(n×k): X is a matrix of k column vectors whose n row vectors are regarded as n different samples, each representing one of the n superpixel blocks. k-means clustering is performed on these samples, finally partitioning the superpixel blocks to obtain the segmentation result.
After the scheme is adopted, the superpixel blocks are first generated with a superpixel segmentation technique, effectively reducing computational complexity. Through multi-hypergraph fusion, information loss is greatly reduced while the high-order relations among superpixels are effectively characterized. The method significantly improves segmentation accuracy in both subjective visual quality and objective evaluation indexes, and has high practical value.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The technical solution and the advantages of the present invention will be described in detail with reference to the accompanying drawings.
As shown in FIG. 1, the invention provides an image segmentation method based on superpixels and multi-hypergraph fusion, comprising the following steps:
Step 1, performing superpixel segmentation on the image to be segmented with the mature SLIC model: the SLIC model effectively segments the original image into uniform, orderly superpixel blocks; the pixels within a block are strongly consistent, so each block carries semantic features to a certain extent.
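As an illustrative sketch outside the patent text, step 1 corresponds to running an off-the-shelf SLIC implementation; the scikit-image call below uses assumed parameter values (`n_segments`, `compactness`), not values prescribed by the invention:

```python
from skimage.data import astronaut
from skimage.segmentation import slic

# Run SLIC on a sample RGB image; the parameter values are illustrative.
image = astronaut()                       # (512, 512, 3) uint8
labels = slic(image, n_segments=200,      # target number of superpixels (assumed)
              compactness=10,             # trades color vs. spatial proximity
              start_label=0)

n_superpixels = labels.max() + 1          # one integer label per superpixel block
print(n_superpixels)
```

The returned label map assigns every pixel to a superpixel block, which is the input of step 2.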
Step 2, extracting multiple features from each superpixel block: feature extraction is performed on each block with traditional methods; the extracted features include color, gradient, texture, and so on, and extracting several kinds of features ensures that each superpixel block is better described and expressed.
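A minimal sketch of step 2, assuming the label map from step 1: mean color and mean gradient magnitude stand in for the patent's color/gradient/texture features (the concrete feature extractors are not specified in this text):

```python
import numpy as np

def superpixel_features(image, labels):
    """Toy per-superpixel features: mean RGB color and mean gradient
    magnitude, as stand-ins for the color/gradient/texture features."""
    image = image.astype(float)
    gray = image.mean(axis=2)
    gy, gx = np.gradient(gray)
    grad_mag = np.hypot(gx, gy)

    n = labels.max() + 1
    color = np.zeros((n, 3))
    grad = np.zeros((n, 1))
    for s in range(n):
        mask = labels == s
        color[s] = image[mask].mean(axis=0)   # mean color of the block
        grad[s] = grad_mag[mask].mean()       # mean gradient magnitude
    return color, grad

# Tiny synthetic example: a 4x4 image split into two "superpixels".
img = np.zeros((4, 4, 3))
img[:, 2:] = 255.0
lab = np.zeros((4, 4), dtype=int)
lab[:, 2:] = 1
c, g = superpixel_features(img, lab)
print(c[0], c[1])   # mean colors [0. 0. 0.] and [255. 255. 255.]
```

Each superpixel block thus gets one feature vector per feature type, which is what the hypergraph construction of step 3 consumes.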
Step 3, constructing a hypergraph over the superpixel blocks for each of the multiple features with the mature INH model: each superpixel block is regarded as a vertex of the graph, and the similarity between superpixel blocks is used as the weight of the edges between vertices. INH is a mature hypergraph construction model with strong power to express high-order relations, and it can effectively characterize the internal structure among the superpixel blocks.
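The INH construction itself is not detailed in this text, so the sketch below substitutes a plain k-nearest-neighbor hyperedge rule as an assumed stand-in: each superpixel (vertex) spawns one hyperedge containing itself and its k most similar blocks in feature space:

```python
import numpy as np

def knn_hypergraph(features, k=3):
    """Build an incidence matrix H (vertices x hyperedges): hyperedge j
    contains vertex j and its k nearest neighbors. A simple k-NN rule,
    used here only as a stand-in for the INH model."""
    n = len(features)
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    H = np.zeros((n, n))
    for j in range(n):
        nbrs = np.argsort(d2[j])[:k + 1]   # vertex j itself plus k neighbors
        H[nbrs, j] = 1
    return H

feats = np.array([[0.0], [0.1], [0.2], [5.0], [5.1]])
H = knn_hypergraph(feats, k=1)
print(H.sum(axis=0))    # every hyperedge contains exactly 2 vertices
```

One such incidence matrix is built per feature type, giving the multiple hypergraphs that step 4 fuses.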
Step 4, fusing the information of the multiple hypergraphs from the random walk perspective to construct a multi-hypergraph Laplacian matrix: multiple features are extracted from the superpixel blocks, and the extracted features need to be fused together. From the random walk perspective, the transition probabilities and steady-state distributions of the individual features are combined to construct the multi-hypergraph Laplacian matrix; the weights of the different features appear as matrix parameters and are obtained by learning in the subsequent steps.
Step 5, constructing a spectral clustering model based on the multi-hypergraph Laplace matrix and solving: and constructing a spectral clustering model based on the obtained multi-hypergraph Laplace matrix, and solving by using a cross iteration method.
It should be noted that the core steps of the invention are the construction of the multi-hypergraph Laplacian matrix and of the spectral clustering model based on it, so this embodiment focuses mainly on steps 4 and 5; steps 1, 2 and 3 can be implemented with existing techniques.
Notation: the hypergraph corresponding to the image is H = (X, E), where X is the superpixel set, V is the vertex set of the hypergraph, E is the hyperedge set, L is the corresponding Laplacian matrix, D is a diagonal matrix, and N denotes the number of hypergraphs. e is a hyperedge, u and v are two different vertices, d(u) is the degree of vertex u, δ(e) is the degree of hyperedge e, ω(e) is the weight of hyperedge e, vol(V) is the sum of the degrees of the vertices in V, π(v) is the steady-state distribution of the random walk, h(u,e)=1 denotes that vertex u is on hyperedge e, and h(u,e)=0 that it is not.
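The degree quantities in this notation follow mechanically from the incidence matrix H and the hyperedge weights ω; a small numerical sketch (the matrix and weights are made-up values):

```python
import numpy as np

# Incidence matrix H (vertices x hyperedges) and hyperedge weights w.
H = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
w = np.array([1.0, 2.0, 0.5])

d = H @ w               # vertex degrees: d(u) = sum_e w(e) h(u,e)
delta = H.sum(axis=0)   # hyperedge degrees: delta(e) = #vertices on e
vol = d.sum()           # vol(V): total vertex degree
pi = d / vol            # steady-state distribution pi(v) = d(v)/vol(V)
print(d, delta, vol)    # [3.  1.5 2.5 0.5] [2. 2. 3.] 7.5
```

Note that pi sums to 1, as a steady-state distribution must.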
In step 4, the construction of the multi-hypergraph Laplacian matrix comprises the following steps:
(1) let P denote the transition probability matrix of the random walk of the hypergraph, and each element of P can be expressed as follows:
$$p(u,v)=\sum_{e\in E}\omega(e)\,\frac{h(u,e)}{d(u)}\,\frac{h(v,e)}{\delta(e)}$$
the steady state distribution of random walks is expressed as follows:
$$\pi(v)=\frac{d(v)}{\mathrm{vol}(V)}$$
Interpreting multi-hypergraph segmentation from the random walk perspective, β_i(u) is the weighting factor of the ith hypergraph and α is used to balance the weight between the two hypergraphs:
$$\beta_1(u)=\frac{\alpha\,\pi_1(u)}{\alpha\,\pi_1(u)+(1-\alpha)\,\pi_2(u)}$$

$$\beta_2(u)=\frac{(1-\alpha)\,\pi_2(u)}{\alpha\,\pi_1(u)+(1-\alpha)\,\pi_2(u)}$$
the transition probability matrix between superpixels can be expressed as:
$$p(u,v)=\beta_1(u)\,p_1(u,v)+\beta_2(u)\,p_2(u,v)$$
the steady state distribution can be expressed as:
$$\pi(v)=\alpha\,\pi_1(v)+(1-\alpha)\,\pi_2(v)$$
(2) The above fusion method applies only between two hypergraphs, but this method needs to fuse several, so it is generalized to N hypergraphs, where Π_i denotes the steady-state distribution of the ith hypergraph in diagonal-matrix form, α_i the weight of the ith hypergraph, and P_i the transition probability matrix of the random walk on the ith hypergraph:
$$\Pi=\sum_{i=1}^{N}\alpha_i\,\Pi_i$$

$$P=\Pi^{-1}\sum_{i=1}^{N}\alpha_i\,\Pi_i P_i$$
here, the laplacian matrix after the multi-hypergraph fusion can be obtained:
$$L=I-\frac{1}{2}\left(\Pi^{1/2}P\,\Pi^{-1/2}+\Pi^{-1/2}P^{T}\,\Pi^{1/2}\right)$$
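The formulas of this step can be checked numerically; the sketch below (illustrative, with made-up incidence matrices and uniform hyperedge weights) computes each hypergraph's transition matrix, fuses the steady-state distributions and transition matrices, and forms the fused Laplacian:

```python
import numpy as np

def transition_matrix(H, w):
    """p(u,v) = sum_e w(e) * h(u,e)/d(u) * h(v,e)/delta(e)."""
    d = H @ w                                  # vertex degrees
    delta = H.sum(axis=0)                      # hyperedge degrees
    return (H * w / d[:, None]) @ (H / delta).T

def fused_laplacian(Hs, ws, alphas):
    """Fuse N hypergraphs: Pi = sum_i a_i Pi_i,
    P = Pi^-1 sum_i a_i Pi_i P_i,
    L = I - (Pi^1/2 P Pi^-1/2 + Pi^-1/2 P^T Pi^1/2) / 2."""
    n = Hs[0].shape[0]
    pi_i = [H @ w / (H @ w).sum() for H, w in zip(Hs, ws)]
    P_i = [transition_matrix(H, w) for H, w in zip(Hs, ws)]
    pi = sum(a * p for a, p in zip(alphas, pi_i))          # fused steady state
    PiP = sum(a * p[:, None] * P for a, p, P in zip(alphas, pi_i, P_i))
    P = PiP / pi[:, None]                                  # Pi^-1 (sum a_i Pi_i P_i)
    s = np.sqrt(pi)
    A = s[:, None] * P / s[None, :]                        # Pi^1/2 P Pi^-1/2
    return np.eye(n) - (A + A.T) / 2

H1 = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [0, 0, 1]], dtype=float)
H2 = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1], [1, 0, 0]], dtype=float)
L = fused_laplacian([H1, H2], [np.ones(3), np.ones(3)], [0.6, 0.4])
print(np.allclose(L, L.T))   # True: the fused Laplacian is symmetric
```

Each row of a transition matrix sums to 1, and the fused L comes out symmetric positive semdefinite with smallest eigenvalue 0, consistent with the fused π being stationary for P.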
in the step 5, the construction and solution of the hypergraph spectral clustering model comprises the following steps:
(1) the basic model of spectral clustering is expressed as follows:
$$\min_{X}\ \mathrm{Tr}\!\left(X^{T}LX\right)\qquad\text{s.t.}\ X^{T}X=I$$
the spectral clustering model based on the multiple hypergraph laplacian matrix can be expressed as follows (λ is a balance factor):
$$\min_{X,\alpha}\ \mathrm{Tr}\!\left(X^{T}\Big(I-\sum_{i=1}^{N}\alpha_i\,\Pi_i P_i\Big)X\right)+\lambda\lVert\alpha\rVert^{2}$$

$$\text{s.t.}\quad\sum_{i=1}^{N}\alpha_i=1,\ \alpha_i\ge 0,\ X^{T}X=I$$
(2) Solve with a cross-iteration method: first fix α and update X:
$$\min_{X}\ \mathrm{Tr}\!\left(X^{T}\Big(I-\sum_{i=1}^{N}\alpha_i\,\Pi_i P_i\Big)X\right)\qquad\text{s.t.}\ X^{T}X=I$$
This can be transformed into an eigendecomposition problem and solved.
(3) Fix X, update α:
$$\min_{\alpha}\ -\sum_{i=1}^{N}\alpha_i\,\mathrm{Tr}(M_i)+\lambda\lVert\alpha\rVert^{2}\qquad\text{s.t.}\ \sum_{i=1}^{N}\alpha_i=1,\ \alpha_i\ge 0$$
where M_i = X^T (Π_i P_i) X; solving the optimization model gives:
$$\alpha_i=\frac{1}{N}+\frac{1}{2\lambda}\left(\mathrm{Tr}(M_i)-\frac{1}{N}\sum_{j=1}^{N}\mathrm{Tr}(M_j)\right)$$
Here we obtain X ∈ R^(n×k): X is a matrix of k column vectors, and its n row vectors are regarded as n different samples, representing the n superpixel blocks. k-means clustering is performed on these samples, which finally partitions the superpixel blocks; after certain post-processing steps, the final segmentation result is obtained.
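Putting step 5 together, the sketch below alternates the X-update (eigendecomposition) and the α-update (closed form, followed by an assumed clipping step to keep α on the simplex), then clusters the rows of X with a small deterministic k-means. `Thetas[i]` stands for the precomputed product Π_i P_i; everything here is an illustrative reconstruction, not the patent's reference implementation:

```python
import numpy as np

def multi_hypergraph_spectral_clustering(Thetas, k, lam=1.0, n_iter=10):
    """Alternating-optimization sketch of step 5. Thetas[i] = Pi_i @ P_i.
    Clipping alpha back onto the simplex is an assumption of this sketch."""
    N, n = len(Thetas), Thetas[0].shape[0]
    alpha = np.full(N, 1.0 / N)
    for _ in range(n_iter):
        # Fix alpha, update X: k smallest eigenvectors of I - sum_i a_i Theta_i.
        A = np.eye(n) - sum(a * T for a, T in zip(alpha, Thetas))
        A = (A + A.T) / 2
        _, vecs = np.linalg.eigh(A)
        X = vecs[:, :k]                                  # satisfies X^T X = I
        # Fix X, update alpha with the closed form, then project to the simplex.
        tr = np.array([np.trace(X.T @ T @ X) for T in Thetas])
        alpha = 1.0 / N + (tr - tr.mean()) / (2 * lam)
        alpha = np.clip(alpha, 0, None)
        alpha /= alpha.sum()
    # Cluster the n rows of X: small deterministic k-means
    # (farthest-point initialization, a few Lloyd iterations).
    centers = [X[0]]
    for _ in range(1, k):
        d2 = ((X[:, None] - np.array(centers)[None]) ** 2).sum(-1).min(axis=1)
        centers.append(X[d2.argmax()])
    centers = np.array(centers)
    for _ in range(20):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, alpha

# Toy example: two block-constant "hypergraph" matrices over 6 superpixels.
Theta = np.kron(np.eye(2), np.ones((3, 3)) / 3)
labels, alpha = multi_hypergraph_spectral_clustering([Theta, Theta.copy()], k=2)
print(labels)   # the first three rows share one label, the last three the other
```

On this toy input the two identical hypergraphs keep α at (0.5, 0.5) and the clustering recovers the two blocks exactly.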
The above embodiments are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modifications made on the basis of the technical scheme according to the technical idea of the present invention fall within the protection scope of the present invention.

Claims (4)

1. An image segmentation method based on superpixels and multi-hypergraph fusion is characterized by comprising the following steps of:
step 1, performing superpixel segmentation on an image to be segmented;
step 2, extracting various characteristics of each super pixel block;
step 3, carrying out hypergraph construction based on superpixel blocks for each of the multiple features;
step 4, fusing the information of a plurality of hypergraphs from the angle of random walk to construct a multi-hypergraph Laplace matrix;
in step 4, constructing the multi-hypergraph Laplacian matrix comprises the following steps:
step 41, a transition probability matrix of the hypergraph random walk is represented by P, and each element of P is expressed as follows:
$$p(u,v)=\sum_{e\in E}\omega(e)\,\frac{h(u,e)}{d(u)}\,\frac{h(v,e)}{\delta(e)}$$
where ω(e) is the weight of hyperedge e, h(u,e)=1 denotes that vertex u is on hyperedge e and h(u,e)=0 that it is not, h(v,e)=1 denotes that vertex v is on hyperedge e and h(v,e)=0 that it is not, d(u) is the degree of vertex u, and δ(e) is the degree of hyperedge e;
the steady state distributions of the random walks of vertices u and v are expressed as follows:
$$\pi(u)=\frac{d(u)}{\mathrm{vol}(V)},\qquad\pi(v)=\frac{d(v)}{\mathrm{vol}(V)}$$
where d(u) and d(v) are the degrees of vertices u and v, vol(V) is the sum of the degrees of the vertices in V, and V is the vertex set of the hypergraph;
Interpreting multi-hypergraph segmentation from the random walk perspective, β_i(u) is the weighting factor of the ith hypergraph, and α is used for balancing the weight between the hypergraphs:
$$\beta_1(u)=\frac{\alpha\,\pi_1(u)}{\alpha\,\pi_1(u)+(1-\alpha)\,\pi_2(u)}$$

$$\beta_2(u)=\frac{(1-\alpha)\,\pi_2(u)}{\alpha\,\pi_1(u)+(1-\alpha)\,\pi_2(u)}$$
where π_1(u) and π_2(u) denote the steady-state distributions of the random walk of vertex u in the 1st and 2nd hypergraphs, respectively;
the transition probability matrix between superpixels is expressed as:
$$p(u,v)=\beta_1(u)\,p_1(u,v)+\beta_2(u)\,p_2(u,v)$$
where p_1(u,v) and p_2(u,v) denote the corresponding elements of the transition probability matrices of the 1st and 2nd hypergraphs, respectively;
the steady state distribution of the vertex v is expressed as:
$$\pi(v)=\alpha\,\pi_1(v)+(1-\alpha)\,\pi_2(v)$$
step 42, generalizing the above method to a plurality of hypergraphs:
$$\Pi=\sum_{i=1}^{N}\alpha_i\,\Pi_i$$

$$P=\Pi^{-1}\sum_{i=1}^{N}\alpha_i\,\Pi_i P_i$$
where Π_i denotes the steady-state distribution of the ith hypergraph in diagonal-matrix form, α_i the weight of the ith hypergraph, P_i the transition probability matrix of the random walk on the ith hypergraph, and N the number of hypergraphs;
obtaining a Laplace matrix after multi-hypergraph fusion:
$$L=I-\frac{1}{2}\left(\Pi^{1/2}P\,\Pi^{-1/2}+\Pi^{-1/2}P^{T}\,\Pi^{1/2}\right)$$
where the superscript T denotes the transpose;
step 5, constructing a spectral clustering model based on the multi-hypergraph Laplace matrix and solving: constructing a spectral clustering model based on the obtained multi-hypergraph Laplace matrix, and solving by using a cross iteration method;
the specific content of the step 5 is as follows:
step 51, the basic model expression of spectral clustering is as follows:
$$\min_{X}\ \mathrm{Tr}\!\left(X^{T}LX\right)\qquad\text{s.t.}\ X^{T}X=I$$
where Tr denotes the trace of a matrix, X is the relaxed cluster-indicator matrix whose rows index the superpixels, D is a diagonal matrix, and L is the Laplacian matrix;
the spectral clustering model based on the multi-hypergraph laplacian matrix is expressed as follows:
$$\min_{X,\alpha}\ \mathrm{Tr}\!\left(X^{T}\Big(I-\sum_{i=1}^{N}\alpha_i\,\Pi_i P_i\Big)X\right)+\lambda\lVert\alpha\rVert^{2}$$

$$\text{s.t.}\quad\sum_{i=1}^{N}\alpha_i=1,\ \alpha_i\ge 0,\ X^{T}X=I$$
where Π_i denotes the steady-state distribution of the ith hypergraph in diagonal-matrix form, and λ denotes a balance factor;
step 52, solving by using a cross iteration method, firstly fixing alpha, updating X:
$$\min_{X}\ \mathrm{Tr}\!\left(X^{T}\Big(I-\sum_{i=1}^{N}\alpha_i\,\Pi_i P_i\Big)X\right)\qquad\text{s.t.}\ X^{T}X=I$$
step 53, fix X, update α:
$$\min_{\alpha}\ -\sum_{i=1}^{N}\alpha_i\,\mathrm{Tr}(M_i)+\lambda\lVert\alpha\rVert^{2}\qquad\text{s.t.}\ \sum_{i=1}^{N}\alpha_i=1,\ \alpha_i\ge 0$$
where M_i = X^T (Π_i P_i) X; solving this constrained optimization problem gives:
$$\alpha_i=\frac{1}{N}+\frac{1}{2\lambda}\left(\mathrm{Tr}(M_i)-\frac{1}{N}\sum_{j=1}^{N}\mathrm{Tr}(M_j)\right)$$
This yields X ∈ R^(n×k): X is a matrix of k column vectors whose n row vectors are regarded as n different samples, each representing one of the n superpixel blocks; k-means clustering is performed on these samples, finally partitioning the superpixel blocks to obtain the segmentation result.
2. The image segmentation method based on superpixel and multi-hypergraph fusion as claimed in claim 1, characterized in that: in the step 1, the SLIC model is used for carrying out super-pixel segmentation on the image to be segmented.
3. The image segmentation method based on superpixel and multi-hypergraph fusion as claimed in claim 1, characterized in that: in step 2, the extracted features include color, gradient, and texture.
4. The image segmentation method based on superpixel and multi-hypergraph fusion as claimed in claim 1, characterized in that: in the step 3, an INH model is adopted to construct the hypergraph, each superpixel block is regarded as a vertex of the graph, and the similarity between the superpixel blocks is used as the weight of edges between the vertices.
CN201810562839.2A 2018-06-04 2018-06-04 Image segmentation method based on superpixel and multi-hypergraph fusion Active CN108986103B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810562839.2A CN108986103B (en) 2018-06-04 2018-06-04 Image segmentation method based on superpixel and multi-hypergraph fusion

Publications (2)

Publication Number Publication Date
CN108986103A CN108986103A (en) 2018-12-11
CN108986103B true CN108986103B (en) 2022-06-07

Family

ID=64539970


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741358B (en) * 2018-12-29 2020-11-06 北京工业大学 Superpixel segmentation method based on adaptive hypergraph learning
CN111967485B (en) * 2020-04-26 2024-01-05 中国人民解放军火箭军工程大学 Air-ground infrared target tracking method based on probability hypergraph learning
CN112446417B (en) * 2020-10-16 2022-04-12 山东大学 Spindle-shaped fruit image segmentation method and system based on multilayer superpixel segmentation

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101976348A (en) * 2010-10-21 2011-02-16 中国科学院深圳先进技术研究院 Image clustering method and system
CN103544697A (en) * 2013-09-30 2014-01-29 南京信息工程大学 Hypergraph spectrum analysis based image segmentation method
CN104899253A (en) * 2015-05-13 2015-09-09 复旦大学 Cross-modality image-label relevance learning method facing social image




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant