CN111814894A - Multi-view semi-supervised classification method for rapid seed random walk - Google Patents
Multi-view semi-supervised classification method for rapid seed random walk
- Publication number
- CN111814894A (application CN202010695386.8A)
- Authority
- CN
- China
- Prior art keywords
- view
- data
- matrix
- angle
- visual angle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F18/24—Classification techniques
- G06F17/15—Correlation function computation including computation of convolution operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Mathematical Physics (AREA)
- Pure & Applied Mathematics (AREA)
- Mathematical Optimization (AREA)
- Mathematical Analysis (AREA)
- Computational Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Computing Systems (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Algebra (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a multi-view semi-supervised classification method for fast seed random walk. First, a Gaussian kernel function is used to compute a similarity matrix and a transition probability matrix for each view of the input multi-view data. Then, an initial distribution state is established for each view according to the category labels of the multi-view data used for semi-supervised learning, and the arrival probability matrix of the first transition state of each view is computed. Finally, the arrival probability matrices of the subsequent transition states of each view are computed iteratively, and the arrival probability matrices of all transition states of each view are weighted and summed to obtain a reward matrix for each view, from which the category labels of the multi-view test data are generated. Using only a small amount of supervision information, the invention can accurately and effectively classify many types of data, such as images, text and video, and therefore has practical value.
Description
Technical Field
The invention relates to the fields of multi-view learning and semi-supervised learning, and in particular to a multi-view semi-supervised classification method for fast seed random walk.
Background
Multi-view data is very common in practical applications, for example in multi-camera image collection and multi-modal information acquisition. Data collected from heterogeneous sources often contains a large amount of redundant and irrelevant information, which can degrade the performance of learning algorithms. In multi-view data, each view captures only partial rather than complete information, while the complete representation is potentially redundant, which makes it difficult to extract information that is useful for the learning task. On the other hand, single-view learning, or the simple concatenation of all multi-view features, is generally ineffective. It is therefore important to learn sufficiently discriminative features and to mine valid information from such data with multi-view-driven algorithms.
In recent years, random walk theory has made breakthrough progress. The classical PageRank algorithm plays a crucial role in Web search, where it ranks the importance of all web pages using the web link structure. Since then, many personalized PageRank approaches have been proposed to address various learning tasks. Random walk with restart (RWR) is a representative scheme that provides a good relevance score between any two nodes of an undirected weighted graph. Depending on whether category label information is available, random walk methods can be broadly divided into unsupervised and supervised methods: the former focus on data clustering and on solving unsupervised learning tasks such as image segmentation, while the latter aim at classification tasks and their corresponding specific applications. These studies show that random walk has wide applicability and high compatibility across practical applications.
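To make the relevance scoring provided by RWR concrete, the following toy sketch (a generic illustration of the RWR scheme, not of the invention itself; the graph, restart probability and iteration count are arbitrary assumptions) iterates the restart recursion on a small undirected weighted graph:

```python
import numpy as np

# Toy undirected weighted graph on 4 nodes; W is its symmetric affinity matrix.
W = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 0.8, 0.1],
              [0.5, 0.8, 0.0, 1.0],
              [0.0, 0.1, 1.0, 0.0]])
P = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

alpha = 0.15                           # restart probability
e = np.array([1.0, 0.0, 0.0, 0.0])     # the walker always restarts at node 0
q = e.copy()
for _ in range(100):                   # power iteration; converges geometrically
    q = (1 - alpha) * q @ P + alpha * e
print(q)  # stationary arrival probabilities = relevance of every node to node 0
```

The larger an entry of q, the more relevant the corresponding node is to the seed node, which is the property a seeded random walk exploits for propagating label information.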
Multi-view learning aims at mining useful patterns from multi-view data. A large body of prior research has shown that multi-view learning can make full use of the similarity and complementarity information in multi-view data, and offers better generalization ability and superior performance compared with single-view learning. As one of the earliest representative paradigms of multi-view learning, co-training maximizes the mutual consistency between two different views in an unsupervised manner. Later, common unsupervised multi-view learning methods received increasing attention, including unsupervised multi-view feature representation and multi-view clustering. Correspondingly, supervised multi-view learning approaches have also attracted growing research interest in different areas, such as image classification, gesture recognition and image annotation. However, labeling enough multi-view training data is difficult and time consuming, and typically only a small fraction of labeled samples is readily available; such a small amount of labeled data is insufficient for fully supervised learning and is under-exploited by unsupervised learning. As a compromise between the two learning paradigms, multi-view semi-supervised classification can improve performance with a limited proportion of labeled samples. For this reason, the academic community has proposed multi-view semi-supervised learning to exploit, to the maximum extent, the effective information carried by a small proportion of labeled data points. However, the work on multi-view semi-supervised classification so far is still limited, and more research on this problem is needed.
Current multi-view semi-supervised learning algorithms fall mainly into two categories. The first is the subspace-based approach, in which a given label matrix is usually embedded in the objective function as a common regression target and a latent low-dimensional subspace is learned to project the input data in order to mine the commonality between views. For example, Nie et al. model a convex problem to avoid local minima and propose a new adaptive weight learning strategy to learn the projection matrix. Xue et al. introduce label information into deep matrix factorization and learn a relevant prediction subspace for incomplete multi-view data classification. The second category is the graph-based model, which treats each data point as a vertex of a joint graph fused from multiple views and propagates label information from labeled to unlabeled samples through weighted edges. As an early graph-based method, Karasuyama proposed a multi-graph integration method in a label propagation setting, which linearly combines multiple graphs by learning sparse multi-view weights through weight regularization. In contrast, Nie et al. learn the fusion weights from a priori graph structures rather than through weight regularization. A variety of multi-view semi-supervised learning methods have now been developed, and their effectiveness in specific applications has been demonstrated. Nevertheless, multi-view semi-supervised classification remains largely under-studied, and these models still have some drawbacks. High computational complexity is one of the major limitations that most algorithms need to overcome when facing large-scale learning problems, and therefore more work is required. Furthermore, limited learning performance requires algorithms to mine more effective patterns when only a small amount of labeled data is available for model training. A more effective and efficient multi-view semi-supervised learning method therefore still needs to be developed.
Disclosure of Invention
In view of the above, the invention provides a multi-view semi-supervised classification method for fast seed random walk, which can accurately and effectively classify various types of data sets, such as text, image and video data sets, and achieves high classification performance on each data set while using only a small amount of supervision information, and thus has practical value.
The invention is realized by the following scheme: a multi-view semi-supervised classification method for fast seed random walk, comprising the following steps:
step S1: calculating a similarity matrix and a transition probability matrix for each view of the input multi-view data using a Gaussian kernel function;
step S2: establishing an initial distribution state for each view of the multi-view data according to the category labels of the multi-view data used for semi-supervised learning;
step S3: calculating an arrival probability matrix of the first transition state of each view of the multi-view data from the initial distribution state of each view established in step S2;
step S4: iteratively calculating the arrival probability matrices of the subsequent transition states of each view of the multi-view data, and weighting and summing the arrival probability matrices of all transition states of each view to obtain a reward matrix for each view;
step S5: predicting the category label of each multi-view data point used for testing according to the reward matrix of each view of the multi-view data calculated in step S4.
Further, the step S1 specifically includes the following steps:
step S11: the similarity matrix of each view of the input multi-view data is calculated with a Gaussian kernel function, the calculation formula being as follows:
[W_t]_ij = exp(−‖x_i^t − x_j^t‖^2 / (2σ^2)),
where [W_t]_ij is the similarity between the ith and jth data points under the tth view, x_i^t is the ith data point under the tth view, and σ is the bandwidth, which controls the local action range of the Gaussian kernel function;
step S12: calculating the transition probability matrix of each view of the multi-view data, the calculation formula being as follows:
P_t = D_t^(-1) W_t,
where D_t is a diagonal matrix with [D_t]_ii = Σ_j [W_t]_ij, and P_t is the transition probability matrix of the tth view of the multi-view data.
Further, in step S2 the initial distribution state of each view of the multi-view data is established according to the category labels of the multi-view data used for semi-supervised learning, by the following formula:
[Q_t]^(0) = [Q_0, O],
where Q_0 is a c × n matrix, O is a c × (N − n) all-zero matrix, c is the number of categories of the multi-view data, N is the total number of multi-view data points, n is the number of multi-view data points used for semi-supervised learning (the labeled points), [Q_t]^(0) is the initial distribution of the tth view of the multi-view data, and Q_0 is calculated as follows:
[Q_0]_ij = 1 if the jth data point x_j of the multi-view data belongs to the ith class C_i, and [Q_0]_ij = 0 otherwise, where C_i denotes the ith category.
Further, in step S3 the arrival probability matrix of the first transition state of each view of the multi-view data is calculated from the initial distribution state of each view established in step S2, the calculation formula being:
[Q_t]^(1) = (1 − α)[Q_t]^(0) P_t + α[Q_t]^(0),
where α is the restart probability determined from a priori knowledge, and [Q_t]^(1) is the arrival probability matrix of the first transition state of the tth view of the multi-view data, starting from the initial distribution state.
Further, the step S4 specifically includes the following steps:
step S41: iteratively calculating the arrival probability matrices of the subsequent transition states of each view of the multi-view data, the calculation formula being as follows:
[Q_t]^(k) = (1 − α)[Q_t]^(k−1) P_t + α[Q_t]^(0) for k ≥ 2,
where [Q_t]^(k) is the arrival probability matrix of the kth transition state of the tth view of the multi-view data;
step S42: weighting and summing the arrival probability matrices of all transition states of each view of the multi-view data to obtain the reward matrix of each view, the calculation formula being as follows:
where s is the number of transition steps determined from a priori knowledge, [R_t]^(s) is the reward matrix of the tth view of the multi-view data after s transition steps, and γ is a decay factor determined from a priori knowledge.
Further, in step S5 the category label of each multi-view data point used for testing is predicted according to the reward matrix of each view of the multi-view data calculated in step S4, the specific calculation formula being as follows:
where V is the number of views of the multi-view data and ŷ_j is the predicted label of the jth data point.
Compared with the prior art, the invention has the following beneficial effects:
the algorithm adopted by the invention is based on random walk with restart probability, can effectively discover the correlation among data points and capture the global structure information of a data set, and has larger performance improvement compared with other similar algorithms. The invention can accurately and effectively classify data of various types such as images, texts, videos and the like, and in experiments, eight public data sets in practical application are classified, wherein the data sets comprise data sets of the text type, the image type and the video type, and under the condition of only using a small amount of monitoring information, the invention obtains higher classification performance on each data set, thereby having certain practical value.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Fig. 2 is a flowchart illustrating an implementation of the overall method according to an embodiment of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
As shown in Fig. 1 and Fig. 2, the present embodiment provides a multi-view semi-supervised classification method for fast seed random walk, which comprises the following steps:
step S1: calculating a similarity matrix and a transition probability matrix for each view of the input multi-view data using a Gaussian kernel function;
step S2: establishing an initial distribution state for each view of the multi-view data according to the category labels of the multi-view data used for semi-supervised learning;
step S3: calculating an arrival probability matrix of the first transition state of each view of the multi-view data from the initial distribution state of each view established in step S2;
step S4: iteratively calculating the arrival probability matrices of the subsequent transition states of each view of the multi-view data, and weighting and summing the arrival probability matrices of all transition states of each view to obtain a reward matrix for each view;
step S5: predicting the category label of each multi-view data point used for testing according to the reward matrix of each view of the multi-view data calculated in step S4.
In this embodiment, the step S1 specifically includes the following steps:
step S11: the similarity matrix of each view of the input multi-view data is calculated with a Gaussian kernel function, the calculation formula being as follows:
[W_t]_ij = exp(−‖x_i^t − x_j^t‖^2 / (2σ^2)),
where [W_t]_ij is the similarity between the ith and jth data points under the tth view, x_i^t is the ith data point under the tth view, and σ is the bandwidth, which controls the local action range of the Gaussian kernel function;
step S12: calculating the transition probability matrix of each view of the multi-view data, the calculation formula being as follows:
P_t = D_t^(-1) W_t,
where D_t is a diagonal matrix with [D_t]_ii = Σ_j [W_t]_ij, and P_t is the transition probability matrix of the tth view of the multi-view data.
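A minimal numpy sketch of steps S11 and S12 is given below; it assumes, for illustration, a single global bandwidth σ shared by all points of a view and a zeroed diagonal, and the function name is ours:

```python
import numpy as np

def similarity_and_transition(X_t, sigma=1.0):
    """X_t: (N, d) feature matrix of the t-th view.

    Returns the Gaussian-kernel similarity matrix W_t and the row-normalized
    transition probability matrix P_t = D_t^(-1) W_t, as in steps S11-S12.
    """
    sq_dists = np.sum((X_t[:, None, :] - X_t[None, :, :]) ** 2, axis=-1)
    W_t = np.exp(-sq_dists / (2.0 * sigma ** 2))   # [W_t]_ij similarity
    np.fill_diagonal(W_t, 0.0)                     # assumption: no self-similarity
    D_inv = 1.0 / W_t.sum(axis=1)                  # diagonal entries of D_t^(-1)
    P_t = W_t * D_inv[:, None]                     # each row of P_t sums to 1
    return W_t, P_t
```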
In this embodiment, in step S2 the initial distribution state of each view of the multi-view data is established according to the category labels of the multi-view data used for semi-supervised learning, by the following formula:
[Q_t]^(0) = [Q_0, O],
where Q_0 is a c × n matrix, O is a c × (N − n) all-zero matrix, c is the number of categories of the multi-view data, N is the total number of multi-view data points, n is the number of multi-view data points used for semi-supervised learning (the labeled points), [Q_t]^(0) is the initial distribution of the tth view of the multi-view data, and Q_0 is calculated as follows:
[Q_0]_ij = 1 if the jth data point x_j of the multi-view data belongs to the ith class C_i, and [Q_0]_ij = 0 otherwise, where C_i denotes the ith category.
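A minimal sketch of step S2 follows, assuming (for illustration only) that the n labeled points come first in the data ordering and that their labels are given as integers 0..c−1:

```python
import numpy as np

def initial_distribution(labels, n, N, c):
    """labels: (n,) integer class labels of the n labeled data points.

    Builds Q_0 (c x n, one-hot by class) and pads it with the c x (N - n)
    all-zero block O, giving the initial state [Q_t]^(0) of shape (c, N).
    """
    Q0 = np.zeros((c, n))
    Q0[labels, np.arange(n)] = 1.0   # [Q_0]_ij = 1 iff x_j belongs to class C_i
    O = np.zeros((c, N - n))         # all-zero block for the unlabeled points
    return np.hstack([Q0, O])
```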
In this embodiment, in step S3 the arrival probability matrix of the first transition state of each view of the multi-view data is calculated from the initial distribution state of each view established in step S2, the calculation formula being:
[Q_t]^(1) = (1 − α)[Q_t]^(0) P_t + α[Q_t]^(0),
where α is the restart probability determined from a priori knowledge, and [Q_t]^(1) is the arrival probability matrix of the first transition state of the tth view of the multi-view data, starting from the initial distribution state.
In this embodiment, the step S4 specifically includes the following steps:
step S41: iteratively calculating the arrival probability matrices of the subsequent transition states of each view of the multi-view data, the calculation formula being as follows:
[Q_t]^(k) = (1 − α)[Q_t]^(k−1) P_t + α[Q_t]^(0) for k ≥ 2,
where [Q_t]^(k) is the arrival probability matrix of the kth transition state of the tth view of the multi-view data;
step S42: weighting and summing the arrival probability matrices of all transition states of each view of the multi-view data to obtain the reward matrix of each view, the calculation formula being as follows:
where s is the number of transition steps determined from a priori knowledge, [R_t]^(s) is the reward matrix of the tth view of the multi-view data after s transition steps, and γ is a decay factor determined from a priori knowledge.
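Combining steps S3, S41 and S42 of this embodiment, the sketch below iterates the restart recursion and accumulates the arrival probability matrices into the per-view reward matrix; weighting the kth transition state by γ^(k−1) is our assumption about how the decay factor enters the summation:

```python
import numpy as np

def reward_matrix(Q0_full, P_t, alpha=0.1, s=10, gamma=0.9):
    """Q0_full: (c, N) initial state [Q_t]^(0); P_t: (N, N) transition matrix.

    Runs the seeded random walk with restart for s transition steps and returns
    the reward matrix [R_t]^(s) as a decay-weighted sum of [Q_t]^(1)..[Q_t]^(s).
    """
    Q_k = Q0_full
    R_t = np.zeros_like(Q0_full)
    for k in range(1, s + 1):
        Q_k = (1 - alpha) * Q_k @ P_t + alpha * Q0_full   # [Q_t]^(k)
        R_t += (gamma ** (k - 1)) * Q_k                    # decay-weighted accumulation
    return R_t
```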
In this embodiment, in step S5 the category label of each multi-view data point used for testing is predicted according to the reward matrix of each view of the multi-view data calculated in step S4, the specific calculation formula being as follows:
where V is the number of views of the multi-view data and ŷ_j is the predicted label of the jth data point.
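A minimal sketch of step S5 follows, assuming the views are fused by an unweighted sum of their reward matrices and each test point is assigned the class with the largest fused reward (our reading of the prediction rule, since the formula itself is not reproduced above):

```python
import numpy as np

def predict_labels(reward_matrices):
    """reward_matrices: list of V arrays, each of shape (c, N) — one [R_t]^(s) per view.

    Fuses the V views by summation and returns, for every data point, the index
    of the class with the highest fused reward.
    """
    R = np.sum(reward_matrices, axis=0)   # (c, N) fused reward across all views
    return np.argmax(R, axis=0)           # predicted class index for every point
```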
From the viewpoint of practical application, this embodiment first computes the similarity matrix and the transition probability matrix of each view of the input multi-view data with a Gaussian kernel function; it then establishes the initial distribution state of each view according to the category labels of the multi-view data used for semi-supervised learning, and computes the arrival probability matrix of the first transition state of each view; finally, it iteratively computes the arrival probability matrices of the subsequent transition states of each view and weights and sums the arrival probability matrices of all transition states of each view to obtain a reward matrix per view, from which the category labels of the multi-view test data are generated. Because the embodiment is based on random walk with a restart probability, it can effectively explore the correlations among data points and capture the global structural information of a data set, and can therefore classify many types of data, such as images, text and audio, accurately and effectively; it thus has application value.
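Putting the sketches above together, a hypothetical end-to-end run on synthetic two-view data could look as follows; the data, the parameter values and the convention that the first n points are the labeled ones are illustrative assumptions rather than values prescribed by the invention:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, c = 100, 20, 2                      # 100 points, 20 labeled, 2 classes
views = [rng.normal(size=(N, 8)),         # two synthetic views with different
         rng.normal(size=(N, 12))]        # feature dimensionalities
labels = rng.integers(0, c, size=n)       # labels of the first n (labeled) points

rewards = []
for X_t in views:
    _, P_t = similarity_and_transition(X_t, sigma=1.0)   # step S1
    Q0_full = initial_distribution(labels, n, N, c)      # step S2
    rewards.append(reward_matrix(Q0_full, P_t,           # steps S3-S4
                                 alpha=0.1, s=10, gamma=0.9))

pred = predict_labels(rewards)            # step S5
print(pred[n:])                           # predicted classes of the test points
```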
The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention should be covered by the present invention.
Claims (6)
1. A multi-view semi-supervised classification method for fast seed random walk, characterized by comprising the following steps:
step S1: calculating a similarity matrix and a transition probability matrix for each view of the input multi-view data using a Gaussian kernel function;
step S2: establishing an initial distribution state for each view of the multi-view data according to the category labels of the multi-view data used for semi-supervised learning;
step S3: calculating an arrival probability matrix of the first transition state of each view of the multi-view data from the initial distribution state of each view established in step S2;
step S4: iteratively calculating the arrival probability matrices of the subsequent transition states of each view of the multi-view data, and weighting and summing the arrival probability matrices of all transition states of each view to obtain a reward matrix for each view;
step S5: predicting the category label of each multi-view data point used for testing according to the reward matrix of each view of the multi-view data calculated in step S4.
2. The multi-view semi-supervised classification method for fast seed random walk according to claim 1, characterized in that: the step S1 specifically includes the following steps:
step S11: the similarity matrix of each view of the input multi-view data is calculated with a Gaussian kernel function, the calculation formula being as follows:
[W_t]_ij = exp(−‖x_i^t − x_j^t‖^2 / (2σ^2)),
where [W_t]_ij is the similarity between the ith and jth data points under the tth view, x_i^t is the ith data point under the tth view, and σ is the bandwidth, which controls the local action range of the Gaussian kernel function;
step S12: calculating the transition probability matrix of each view of the multi-view data, the calculation formula being as follows:
P_t = D_t^(-1) W_t, where D_t is a diagonal matrix with [D_t]_ii = Σ_j [W_t]_ij.
3. The multi-view semi-supervised classification method for fast seed random walk according to claim 1, characterized in that: in step S2, the initial distribution state of each view of the multi-view data is established according to the category label of the multi-view data for semi-supervised learning, and the calculation formula is as follows:
[Q_t]^(0) = [Q_0, O],
where Q_0 is a c × n matrix, O is a c × (N − n) all-zero matrix, c is the number of categories of the multi-view data, N is the total number of multi-view data points, n is the number of multi-view data points used for semi-supervised learning, [Q_t]^(0) is the initial distribution of the tth view of the multi-view data, and Q_0 is calculated as follows:
[Q_0]_ij = 1 if the jth data point x_j of the multi-view data belongs to the ith class C_i, and [Q_0]_ij = 0 otherwise, where C_i denotes the ith category.
4. The multi-view semi-supervised classification method for fast seed random walk according to claim 1, characterized in that: in step S3, the arrival probability matrix of the first transition state of each view of the multi-view data is calculated from the initial distribution state of each view established in step S2, the calculation formula being:
[Q_t]^(1) = (1 − α)[Q_t]^(0) P_t + α[Q_t]^(0),
where α is the restart probability determined from a priori knowledge.
5. The multi-view semi-supervised classification method for fast seed random walk according to claim 1, characterized in that: the step S4 specifically includes the following steps:
step S41: iteratively calculating the arrival probability matrices of the subsequent transition states of each view of the multi-view data, the calculation formula being as follows:
[Q_t]^(k) = (1 − α)[Q_t]^(k−1) P_t + α[Q_t]^(0) for k ≥ 2,
where [Q_t]^(k) is the arrival probability matrix of the kth transition state of the tth view of the multi-view data;
step S42: weighting and summing the arrival probability matrices of all transition states of each view of the multi-view data to obtain the reward matrix of each view, the calculation formula being as follows:
where s is the number of transition steps determined from a priori knowledge, [R_t]^(s) is the reward matrix of the tth view of the multi-view data after s transition steps, and γ is a decay factor determined from a priori knowledge.
6. The multi-view semi-supervised classification method for fast seed random walk according to claim 1, characterized in that: in step S5, the category label of each multi-view data point used for testing is predicted according to the reward matrix of each view of the multi-view data calculated in step S4, the specific calculation formula being as follows:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010695386.8A CN111814894B (en) | 2020-07-17 | 2020-07-17 | Multi-view semi-supervised classification method for rapid seed random walk |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010695386.8A CN111814894B (en) | 2020-07-17 | 2020-07-17 | Multi-view semi-supervised classification method for rapid seed random walk |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111814894A (en) | 2020-10-23
CN111814894B (en) | 2022-09-09
Family
ID=72866453
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010695386.8A Active CN111814894B (en) | 2020-07-17 | 2020-07-17 | Multi-view semi-supervised classification method for rapid seed random walk |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111814894B (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200125897A1 (en) * | 2018-10-18 | 2020-04-23 | Deepnorth Inc. | Semi-Supervised Person Re-Identification Using Multi-View Clustering |
CN110781788A (en) * | 2019-10-18 | 2020-02-11 | 中国科学技术大学 | Method and system for field robot ground classification based on small amount of labels |
Non-Patent Citations (2)
Title |
---|
CHARU C. AGGARWAL: "Social Network Data Analytics" (《社会网络数据分析》, Chinese edition), Wuhan University Press, 31 December 2016 *
JINJIANG TANG et al.: "Multi-view non-negative matrix factorization for scene recognition", Journal of Visual Communication and Image Representation *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113111721A (en) * | 2021-03-17 | 2021-07-13 | 同济大学 | Human behavior intelligent identification method based on multi-unmanned aerial vehicle visual angle image data driving |
Also Published As
Publication number | Publication date |
---|---|
CN111814894B (en) | 2022-09-09 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |