CN102945370B - Classification method based on multi-label two-view support vector machine - Google Patents

Classification method based on multi-label two-view support vector machine

Info

Publication number: CN102945370B
Application number: CN201210396612.8A
Authority: CN (China)
Inventors: 祁仲昂, 杨名, 张仲非, 张正友
Assignee: Zhejiang University ZJU
Legal status: Expired - Fee Related
Other versions: CN102945370A (Chinese)

Landscapes

  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a classification method based on a multi-label two-view support vector machine, comprising the following steps: first, a novel distance metric is defined in the multi-label space for measuring the distance between points in the multi-label space under a specific classification target; then, two sets of features of the training set are extracted from two mutually conditionally independent views, and the complementary information of the two feature sets contained in the two views is exploited jointly; finally, combining the information in the multi-label space and the two-view space, a newly defined multi-label two-view support vector machine classifier is used for multi-label classification training. The invention handles the multi-label classification problem with a discriminative classifier that jointly exploits the information contained in the label space and the information in multiple views, obtaining a more accurate classification method while denoising the training-set labels.

Description

Classification method based on multi-label two-view support vector machine
Technical Field
The invention belongs to the technical field of label classification, and particularly relates to a classification method based on a multi-label two-view support vector machine.
Background
With the advent of the information age, multimedia data has grown explosively. Tags, as one content form of multimedia, can help solve many important practical problems in data mining, and play a particularly important role in the cross-media field. For example, with appropriate tags as part of image annotation, powerful image annotation and image retrieval techniques can be developed; with appropriate tags as part of movie reviews, an effective movie recommendation system can be developed; with appropriate tags as part of web page markup, a more efficient search engine can be developed.
Tags are diverse in kind, and with data volumes exploding day by day it is impractical to rely solely on data-processing personnel to tag all data manually. Under this premise, the social tag emerged. The social tag, also called collaborative tagging or social classification, is a method by which ordinary users associate online digital resources with tags they provide; it is a bottom-up, user-generated classification system for organizing and sharing network content. Here, members of the public can, through the online environment, add whatever tags they find suitable to the digital resources that interest them. Because of these characteristics, the results of social tagging are often inaccurate and contain much noise, since the ordinary users who participate cannot eliminate the subjectivity, carelessness, or even impatience that keep them from providing perfect tags.
In order to better use social tags for further data processing and analysis, the accuracy of tag classification must be improved as much as possible and the influence of noise on tag classification must be reduced. Meanwhile, since tags are of many kinds, the multi-label noise-resistant classifier has emerged accordingly, with a very broad application prospect and very important practical value. When a traditional discriminative classifier is applied to the multi-label classification problem, the multi-label problem is generally converted into a one-vs-all (One Vs All) classification mode, that is, the multi-label classification problem is converted into several binary classification problems. Conventional discriminative classifiers do not use the information contained in the multi-label space in this conversion process. In practice, the more tags a data point carries, the more information is contained in the label space, and the more this information can be exploited. When judging whether a data point should be marked with a certain label, the other labels already attached to the data point can help the judgment. For example, when an image containing an animal has the tags sky, clouds, grass, and trees, the animal is more likely a bird than a fish; when an image containing an animal has the tags water, waterweed, sea, and coral, it is more likely a fish than a bird. The information contained in the multi-label space can, to some extent, help us classify better and reduce the influence of noise on classification.
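As a concrete illustration of this one-vs-all conversion, the sketch below (illustrative only; the function and array names are not from the patent) splits a binary label matrix into one binary classification task per tag, using the $y_{i,r} = 2 d_{i,r} - 1$ encoding given later in the description:

```python
import numpy as np

def one_vs_all_tasks(D):
    """Split an n-by-S binary label matrix D into S binary classification
    tasks: task r predicts whether label T_r is present (+1) or absent (-1)."""
    n, S = D.shape
    return [2 * D[:, r] - 1 for r in range(S)]  # y_{i,r} = 2*d_{i,r} - 1

# Toy example: 3 data points, label dictionary of 4 tags.
D = np.array([[1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 0]])
tasks = one_vs_all_tasks(D)
print(tasks[0])  # class labels for tag T_1: [ 1 -1  1]
```

Each resulting vector is the target of one binary sub-problem; the patent's contribution lies in how these sub-problems additionally exploit the remaining labels.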
With the diversification of data-acquisition terminals, data generally has a multi-view character; in the multimedia field in particular, one event can be recorded and described through multiple views such as text, images, sound, and video. Even with only one medium, several mutually conditionally independent features of that medium can be regarded as multi-view features. For example, images may be analyzed from multiple views such as texture, color, and region shape. Multiple views are like the records of several independent historians of the same historical event: although the records overlap somewhat, the non-overlapping portions are the most valuable, helping later generations recover the whole event as systematically as possible, and even correcting a single historian's subjectively wrong description of some occurrences. Similarly, multi-view learning methods can help us classify better, reduce the influence of noise on classification, and improve the accuracy of multi-label classification. The discriminative classifier provided by the invention can effectively combine the information contained in the label space with the information in the multi-view space to improve the accuracy of multi-label classification. Therefore, discriminative classifiers that handle the multi-label classification problem have become a very important research direction in the current data-mining field.
Disclosure of Invention
In order to solve the above problems, an object of the present invention is to provide a classification method based on a multi-label two-view support vector machine, which handles the multi-label classification problem with a discriminative classifier that jointly exploits the information contained in the label space and the information in multiple views, obtaining a more accurate classification method while denoising the training-set labels.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a classification method based on a multi-label two-view support vector machine comprises the following steps:
firstly, defining a novel distance metric in the multi-label space for measuring the distance between points in the multi-label space under a specific classification target, wherein the multi-label training set is denoted $I = \{I_i\}_{i=1}^{n}$, and its two mutually conditionally independent view spaces are denoted $X^{(a)}$ and $X^{(b)}$, respectively; each point $I_i$ in the multi-label training set is marked with various labels, so the label dictionary of the multi-label training set forms an S-dimensional multi-label space; each point $I_i$ is represented in the view spaces $X^{(a)}$ and $X^{(b)}$ as $x_i^{(a)}$ and $x_i^{(b)}$, respectively, and its label vector in the label dictionary is denoted $d_i = (d_{i,1}, d_{i,2}, \ldots, d_{i,S})'$, where $d_{i,r} \in \{0,1\}$, $1 \le r \le S$, indicates whether the r-th label $T_r$ of the label dictionary appears in $I_i$; $y_{i,r}$ denotes the class label of $I_i$, $y_{i,r} = 2 \cdot d_{i,r} - 1$; in the multi-label one-vs-all (One Vs All) classification mode, when one label $T_r$ is taken as the classification target, the remaining labels in the label dictionary form an (S-1)-dimensional label feature space; $t_{i,r}$ denotes the feature vector of $I_i$ in the label feature space, where $t_{i,r} = (d_{i,1}, \ldots, d_{i,r-1}, d_{i,r+1}, \ldots, d_{i,S})'$,
Define $P_{00} \triangleq P(d_{i,r}=0 \mid d_{i,k}=0)$ and $P_{11} \triangleq P(d_{i,r}=1 \mid d_{i,k}=1)$; when given $d_{i,k} = 0$ or $1$, the conditional probabilities of $d_{i,r} = 0$ or $1$ are as follows:

$$P_{10} \triangleq P(d_{i,r}=1 \mid d_{i,k}=0) = 1 - P_{00}$$

$$P_{01} \triangleq P(d_{i,r}=0 \mid d_{i,k}=1) = 1 - P_{11}$$
The association-degree vector of each tag $T_r$ is denoted $g_r$:

$$g_r = (g_{r,1}, \ldots, g_{r,r-1}, g_{r,r+1}, \ldots, g_{r,S})',$$

each element of the vector representing the degree of association between tag $T_r$ and another tag; the association element $g_{r,k}$ ($k \in \{1, \ldots, r-1, r+1, \ldots, S\}$) is defined as follows: $g_{r,k} = P_{00} \cdot P_{11} + P_{10} \cdot P_{01}$. Combining the feature vector of a sample point in the label feature space with the association-degree vector of each tag $T_r$, the novel distance metric in the multi-label space is defined by the following formula: $dis_r(I_i, I_j) = \|(t_{i,r} - t_{j,r}) \odot g_r\|_p$, where $\odot$ denotes the Hadamard product between vectors;
then, two sets of features of the training set are extracted from two mutually conditionally independent views, and the complementary information of the two feature sets contained in the two views is exploited jointly;
and finally, combining the information in the multi-label space and the two-view space, multi-label classification training is performed with the newly defined multi-label two-view support vector machine classifier, which is established as follows: the neighborhood of $I_i$ in the label feature space, excluding $I_i$ itself, is denoted $N_u(I_i)$; the classification results of $I_i$ and the data points in its neighborhood $N_u(I_i)$ have high similarity, while their similarity with the classification results of non-neighborhood data points is low; the size $u$ of the neighborhood $N_u(I_i)$ represents the number of nearest neighbor points of $I_i$ in the label feature space; the representations of the points of $N_u(I_i)$ in the two view spaces are denoted $N_u^{(a)}(I_i)$ and $N_u^{(b)}(I_i)$, respectively;
adding a two-view constraint by maximizing the classification similarity of the same sample point under the two views; the two-view constraint is as follows:
$$\forall_{i=1}^{n}:\ \left| w^{(a)T} x_i^{(a)} + \hat{b}^{(a)} - w^{(b)T} x_i^{(b)} - \hat{b}^{(b)} \right| \le \eta_i,\quad \eta_i \ge 0$$
where $w^{(z)}$ and $\hat{b}^{(z)}$, $z \in \{a, b\}$, are the coefficients and offsets of the multi-label two-view support vector machine classifier MSVM-2K in views $a$ and $b$, respectively,
multi-label constraints are added by minimizing the difference between the classification result of each point and that of its nearest neighbor points in the multi-label space, both within the same view and across different views; the multi-label constraints are as follows:
for each point $I_i$ and each nearest neighbor point $I_j$ of $I_i$ in the multi-label space:
$$\left| w^{(a)T} x_i^{(a)} - w^{(a)T} x_j^{(a)} \right| \le \eta_{ij}^{(aa)},\quad \eta_{ij}^{(aa)} \ge 0 \tag{1}$$

$$\left| w^{(b)T} x_i^{(b)} - w^{(b)T} x_j^{(b)} \right| \le \eta_{ij}^{(bb)},\quad \eta_{ij}^{(bb)} \ge 0 \tag{2}$$

$$\left| w^{(a)T} x_i^{(a)} + \hat{b}^{(a)} - w^{(b)T} x_j^{(b)} - \hat{b}^{(b)} \right| \le \eta_{ij}^{(ab)},\quad \eta_{ij}^{(ab)} \ge 0 \tag{3}$$

$$\left| w^{(b)T} x_i^{(b)} + \hat{b}^{(b)} - w^{(a)T} x_j^{(a)} - \hat{b}^{(a)} \right| \le \eta_{ij}^{(ba)},\quad \eta_{ij}^{(ba)} \ge 0 \tag{4}$$
The multi-label constraints (1) and (2) within the same view are replaced by flexible classification labels; meanwhile, only one of the multi-label constraints (3) and (4) across different views is kept in order to reduce computational complexity. For each point $I_i$, the flexible classification label $l_{i,r}$ depends not only on the class label $y_{i,r}$ of $I_i$ but also on the class labels of the nearest neighbor points of $I_i$ in the multi-label space; $l_{i,r}$ is defined as follows:
where $D$ is a constant, $0 \le D < 1$; the optimization formula of the multi-label two-view support vector machine is as follows:
$$C_{ij} = \begin{cases} C^{(ab)} & i = j \\ C^{(ab)*} / e^{\,dis_r(I_i, I_j)} & i \ne j \end{cases}$$
$$\mathrm{s.t.}\ \forall_{i=1}^{n}:\ l_{i,r}\left( w^{(a)T} x_i^{(a)} + \hat{b}^{(a)} \right) \ge |l_{i,r}|^2 - |l_{i,r}|\,\xi_i^{(a)},\quad \xi_i^{(a)} \ge 0$$

$$l_{i,r}\left( w^{(b)T} x_i^{(b)} + \hat{b}^{(b)} \right) \ge |l_{i,r}|^2 - |l_{i,r}|\,\xi_i^{(b)},\quad \xi_i^{(b)} \ge 0$$
for each point $I_i$ and each nearest neighbor point $I_j$ of $I_i$ in the multi-label space:
$$\left| w^{(a)T} x_i^{(a)} + \hat{b}^{(a)} - w^{(b)T} x_j^{(b)} - \hat{b}^{(b)} \right| \le \eta_{ij}^{(ab)},\quad \eta_{ij}^{(ab)} \ge 0$$
where $C^{(a)}$, $C^{(b)}$, $C^{(ab)}$, $C^{(ab)*}$ and $D$ are constants, with $C^{(ab)*} < C^{(ab)}$ and $0 \le D < 1$.
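As an illustration of the piecewise weight $C_{ij}$ defined above, the following minimal sketch (the function name and calling convention are illustrative assumptions, not from the patent) evaluates the weight given a precomputed multi-label distance:

```python
import math

def constraint_weight(i, j, dis_r_ij, C_ab, C_ab_star):
    """Weight for the cross-view constraint between points I_i and I_j:
    the full weight C^(ab) on the diagonal (i == j), and a weight that
    decays exponentially with the multi-label distance otherwise."""
    assert C_ab_star < C_ab  # required by the patent: C^(ab)* < C^(ab)
    if i == j:
        return C_ab
    return C_ab_star / math.exp(dis_r_ij)

print(constraint_weight(0, 0, 0.0, 1.0, 0.5))  # 1.0
print(constraint_weight(0, 1, 2.0, 1.0, 0.5))  # ≈ 0.0677
```

The exponential decay means cross-view agreement is enforced strongly only between points that are close in the multi-label space.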
Compared with the prior art, the invention has the following beneficial effects:
(1) In the multi-label one-vs-all (One Vs All) classification mode, the information in the multi-label space is fully combined with the information in the two-view space, reducing the influence of noise on the classification training process and improving the classification accuracy of the multi-label discriminative classifier.
(2) A novel distance metric is defined in the multi-label space for measuring the distance between points in the multi-label space under a specific classification target. This distance metric fully considers the interrelation and degree of dependence between labels.
(3) A multi-label two-view support vector machine (MSVM-2K) is provided; through flexible classification labels, multi-label constraint terms, and two-view constraint terms, the information in the multi-label space and the complementary information in the two-view space can be applied to the classification training process.
Drawings
Fig. 1 is a flowchart of a classification method based on a multi-label two-view support vector machine according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
On the contrary, the invention is intended to cover alternatives, modifications and equivalents which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, certain specific details are set forth in order to provide a better understanding of the present invention. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details.
Referring to fig. 1, a flowchart of a classification method based on a multi-label two-view support vector machine according to an embodiment of the present invention is shown, which includes the following steps:
S01, defining a novel distance metric in the multi-label space for measuring the distance between points in the multi-label space under a specific classification target;
S02, extracting two sets of features of the training set from two mutually conditionally independent views, and jointly exploiting the complementary information of the two feature sets contained in the two views;
and S03, combining the information in the multi-label space and the two-view space, and performing multi-label classification training by using a novel multi-label two-view support vector machine classifier.
The embodiment of the invention provides a Multi-label Two-view Support Vector Machine (MSVM-2K). The multi-label training set is denoted $I = \{I_i\}_{i=1}^{n}$, and its two mutually conditionally independent view spaces are denoted $X^{(a)}$ and $X^{(b)}$, respectively. Each point $I_i$ in the multi-label training set is marked with various labels, and the label dictionary of the whole multi-label training set forms an S-dimensional multi-label space. When any one tag $T_r$ ($1 \le r \le S$) is taken as the target of binary classification, the remaining labels form an (S-1)-dimensional label feature space. Each point $I_i$ is represented in the view spaces $X^{(a)}$ and $X^{(b)}$ as $x_i^{(a)}$ and $x_i^{(b)}$, respectively, and its label vector in the label dictionary is denoted $d_i = (d_{i,1}, d_{i,2}, \ldots, d_{i,S})'$, where $d_{i,r} \in \{0,1\}$, $1 \le r \le S$, indicates whether the r-th label $T_r$ of the dictionary appears in $I_i$. For each tag $T_r$ and each point $I_i$, $y_{i,r}$ denotes the class label of $I_i$, $y_{i,r} = 2 \cdot d_{i,r} - 1$.
The invention defines a novel distance metric in the multi-label space for measuring the distance between points in the multi-label space under a specific classification target. In the multi-label one-vs-all (One Vs All) classification mode, when a label $T_r$ is taken as the classification target, the remaining labels in the label dictionary form an (S-1)-dimensional label feature space, in which a closer distance means a higher classification similarity. $t_{i,r}$ denotes the feature vector of $I_i$ in this space, $t_{i,r} = (d_{i,1}, \ldots, d_{i,r-1}, d_{i,r+1}, \ldots, d_{i,S})'$. However, directly measuring the distance between $I_i$ and $I_j$ with the formula $\|t_{i,r} - t_{j,r}\|_p$ is in most cases unreasonable, because this assumes that the tags are mutually independent and ignores the possible interrelationships between them. In real situations there are various relationships between tags: some often occur together, and some never occur simultaneously.
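For illustration, the feature vector $t_{i,r}$ can be extracted from a label vector as follows (a minimal sketch; the helper name is an assumption):

```python
import numpy as np

def label_feature(d_i, r):
    """Feature vector t_{i,r}: the label vector d_i with the r-th entry
    (the current classification target T_r) removed. r is 1-based, as in
    the patent's notation."""
    return np.delete(d_i, r - 1)

d = np.array([1, 0, 1, 1])
print(label_feature(d, 3))  # t_{i,3} = [1 0 1]
```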
The degree of association between the tags $T_r$ and $T_k$ ($k \in \{1, \ldots, r-1, r+1, \ldots, S\}$) is discussed by evaluating the influence of $|d_{i,k} - d_{j,k}|$ on the distance between $I_i$ and $I_j$ in the label feature space. When $|d_{i,k} - d_{j,k}| = 0$, its influence on the distance between $I_i$ and $I_j$ in the label feature space is also 0; when $|d_{i,k} - d_{j,k}| = 1$, its influence on that distance depends on the degree of association between the tags $T_r$ and $T_k$. The relationship between $|d_{i,k} - d_{j,k}| = 1$ and the value of $|d_{i,r} - d_{j,r}|$ is described as follows:
when in useAnd is <math> <mrow> <msub> <mi>d</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>=</mo> <mn>1</mn> <mo>&DoubleRightArrow;</mo> <msub> <mi>d</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mo>=</mo> <mn>1</mn> </mrow> </math>
OrAnd is <math> <mrow> <msub> <mi>d</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>=</mo> <mn>1</mn> <mo>&DoubleRightArrow;</mo> <msub> <mi>d</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mo>=</mo> <mn>0</mn> </mrow> </math> Time of flight
(5)
When in useAnd is <math> <mrow> <msub> <mi>d</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>=</mo> <mn>1</mn> <mo>&DoubleRightArrow;</mo> <msub> <mi>d</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mo>=</mo> <mn>0</mn> </mrow> </math>
OrAnd is <math> <mrow> <msub> <mi>d</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>=</mo> <mn>1</mn> <mo>&DoubleRightArrow;</mo> <msub> <mi>d</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mo>=</mo> <mn>1</mn> </mrow> </math> Time of flight
Define $P_{00} \triangleq P(d_{i,r}=0 \mid d_{i,k}=0)$ and $P_{11} \triangleq P(d_{i,r}=1 \mid d_{i,k}=1)$. Equations (5) and (6) describe four special relationships between the labels $T_r$ and $T_k$. In practice, when the tag $T_r$ is distributed uniformly between the points with $d_{i,k}=0$ and those with $d_{i,k}=1$, $T_k$ is not a distinctive label for $T_r$; when the distribution of $T_r$ between these two groups is not uniform, $T_k$ is a distinctive label for $T_r$. When given $d_{i,k} = 0$ or $1$, the conditional probabilities of $d_{i,r} = 0$ or $1$ are as follows:
$$P_{10} \triangleq P(d_{i,r}=1 \mid d_{i,k}=0) = 1 - P_{00}$$

$$P_{01} \triangleq P(d_{i,r}=0 \mid d_{i,k}=1) = 1 - P_{11}$$
As can be seen from equations (5) and (6), the larger the value of $P_{00} \cdot P_{11}$ or $P_{10} \cdot P_{01}$, the greater the probability that $|d_{i,k} - d_{j,k}| = 1$ implies $|d_{i,r} - d_{j,r}| = 1$, and the greater the influence of $|d_{i,k} - d_{j,k}|$ on the distance between $I_i$ and $I_j$ in the label feature space. The larger the value of $P_{00} \cdot P_{01}$ or $P_{10} \cdot P_{11}$, the greater the probability that $|d_{i,k} - d_{j,k}| = 1$ implies $|d_{i,r} - d_{j,r}| = 0$, and the smaller the influence of $|d_{i,k} - d_{j,k}|$ on the distance between $I_i$ and $I_j$ in the label feature space.
Since $P_{10} = 1 - P_{00}$ and $P_{01} = 1 - P_{11}$, we have $P_{00} \cdot P_{11} + P_{10} \cdot P_{01} + P_{00} \cdot P_{01} + P_{10} \cdot P_{11} = 1$. The association-degree vector of each tag $T_r$ is denoted $g_r$, $g_r = (g_{r,1}, \ldots, g_{r,r-1}, g_{r,r+1}, \ldots, g_{r,S})'$. Each element of the vector represents the degree of association between tag $T_r$ and another tag. The association element $g_{r,k}$ ($k \in \{1, \ldots, r-1, r+1, \ldots, S\}$) is defined by: $g_{r,k} = P_{00} \cdot P_{11} + P_{10} \cdot P_{01}$.
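The conditional probabilities and association degrees can be estimated empirically from a binary label matrix. The sketch below uses relative-frequency estimation, which is an assumption on our part — the patent defines the probabilities but does not prescribe an estimator — and falls back to 0.5 when a conditioning event never occurs:

```python
import numpy as np

def association_vector(D, r):
    """Association degrees g_{r,k} = P00*P11 + P10*P01 for every tag k != r,
    with the conditional probabilities estimated by relative frequencies
    over the n training points. r and k are 0-based here."""
    n, S = D.shape
    g = []
    for k in range(S):
        if k == r:
            continue
        mask0 = D[:, k] == 0
        mask1 = ~mask0
        # P(d_r = 0 | d_k = 0) and P(d_r = 1 | d_k = 1)
        P00 = np.mean(D[mask0, r] == 0) if mask0.any() else 0.5
        P11 = np.mean(D[mask1, r] == 1) if mask1.any() else 0.5
        P10, P01 = 1.0 - P00, 1.0 - P11
        g.append(P00 * P11 + P10 * P01)
    return np.array(g)

D = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 0, 1],
              [0, 0, 0]])
print(association_vector(D, 0))  # tag 2 (perfectly co-occurring) gets g = 1
```

A perfectly co-occurring (or perfectly anti-occurring) tag yields $g_{r,k} = 1$, while an uninformative tag yields $g_{r,k} = 0.5$.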
Combining the feature vector of the sample points in the label feature space with the association-degree vector of each label $T_r$, the novel distance metric in the multi-label space is defined by the following formula: $dis_r(I_i, I_j) = \|(t_{i,r} - t_{j,r}) \odot g_r\|_p$, where $\odot$ denotes the Hadamard product between vectors. The neighborhood of $I_i$ in the label feature space defined by this novel distance metric, excluding $I_i$ itself, is denoted $N_u(I_i)$. The classification results of $I_i$ and the data points in its neighborhood $N_u(I_i)$ have high similarity, while their similarity with the classification results of non-neighborhood data points is low. The size $u$ of the neighborhood $N_u(I_i)$ represents the number of nearest neighbor points of $I_i$ in the label feature space.
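The distance metric and the $u$-nearest-neighbor neighborhood described above can be sketched as follows (function names are illustrative; a full implementation would cache the pairwise distances):

```python
import numpy as np

def dis_r(t_i, t_j, g, p=2):
    """Multi-label distance: || (t_i - t_j) ⊙ g ||_p  (⊙ = Hadamard product)."""
    return np.linalg.norm((t_i - t_j) * g, ord=p)

def neighborhood(T, g, i, u, p=2):
    """Indices of the u nearest neighbors of point i under dis_r,
    excluding i itself. T holds one feature vector t_{j,r} per row."""
    d = np.array([dis_r(T[i], T[j], g, p) for j in range(len(T))])
    order = [j for j in np.argsort(d) if j != i]
    return order[:u]

T = np.array([[1, 0, 1],
              [1, 0, 0],
              [0, 1, 1]])
g = np.array([1.0, 0.5, 0.5])
print(neighborhood(T, g, 0, u=1))  # point 1 is the nearest neighbor of point 0
```

Tags with a high association degree thus dominate the distance, so two points differing only in weakly associated tags still count as neighbors.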
In order to combine and utilize the information contained in the multi-label space and the information in the two-view space, the embodiment of the invention provides a novel multi-label two-view support vector machine (MSVM-2K). The embodiment adds a two-view constraint by maximizing the classification similarity of the same sample point under the two views; the two-view constraint is as follows:
<math> <mrow> <msubsup> <mo>&ForAll;</mo> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>n</mi> </msubsup> <mo>:</mo> <mo>|</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>+</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msup> <mo>-</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>-</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msup> <mo>|</mo> <mo>&le;</mo> <msub> <mi>&eta;</mi> <mi>i</mi> </msub> <mo>,</mo> <msub> <mi>&eta;</mi> <mi>i</mi> </msub> <mo>&GreaterEqual;</mo> <mn>0</mn> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>7</mn> <mo>)</mo> </mrow> </mrow> </math>
where w^{(z)} and b̂^{(z)} are the coefficients and offsets of the classifier MSVM-2K at view z, z ∈ {a, b}. The information in the multi-label space can be exploited by adding multi-label constraints. The multi-label constraints minimize the difference between the classification result of each point and those of its nearest neighbor points in the multi-label space, both within the same view and between different views; the constraints are as follows:
and, for each point I_i and each of its nearest neighbor points I_j:
<math> <mrow> <mo>|</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>-</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>j</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>|</mo> <mo>&le;</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>aa</mi> <mo>)</mo> </mrow> </msubsup> <mo>,</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>aa</mi> <mo>)</mo> </mrow> </msubsup> <mo>&GreaterEqual;</mo> <mn>0</mn> </mrow> </math> (1)
<math> <mrow> <mo>|</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>-</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>j</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>|</mo> <mo>&le;</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>bb</mi> <mo>)</mo> </mrow> </msubsup> <mo>,</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>bb</mi> <mo>)</mo> </mrow> </msubsup> <mo>&GreaterEqual;</mo> <mn>0</mn> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>2</mn> <mo>)</mo> </mrow> </mrow> </math>
<math> <mrow> <mo>|</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>+</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msup> <mo>-</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>j</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>-</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msup> <mo>|</mo> <mo>&le;</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>ab</mi> <mo>)</mo> </mrow> </msubsup> <mo>,</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>ab</mi> <mo>)</mo> </mrow> </msubsup> <mo>&GreaterEqual;</mo> <mn>0</mn> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>3</mn> <mo>)</mo> </mrow> </mrow> </math>
<math> <mrow> <mo>|</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>+</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msup> <mo>-</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>j</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>-</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msup> <mo>|</mo> <mo>&le;</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>ba</mi> <mo>)</mo> </mrow> </msubsup> <mo>,</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>ba</mi> <mo>)</mo> </mrow> </msubsup> <mo>&GreaterEqual;</mo> <mn>0</mn> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>4</mn> <mo>)</mo> </mrow> </mrow> </math>
Here, (1) and (2) are multi-label constraints within the same view, while (3) and (4) are multi-label constraints between different views. Adding all of these multi-label constraints (1)-(4) increases computational complexity, and the patent demonstrates that, given any one of the above multi-label constraints, the other three constraints can be approximated. For example, when the multi-label constraint (3) is given in combination with the two-view constraint (7), we can derive the following constraints:
<math> <mrow> <mo>|</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>-</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>j</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>|</mo> <mo>&le;</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>ab</mi> <mo>)</mo> </mrow> </msubsup> <mo>+</mo> <msub> <mi>&eta;</mi> <mi>j</mi> </msub> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>8</mn> <mo>)</mo> </mrow> </mrow> </math>
<math> <mrow> <mo>|</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>-</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>j</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>|</mo> <mo>&le;</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>ab</mi> <mo>)</mo> </mrow> </msubsup> <mo>+</mo> <msub> <mi>&eta;</mi> <mi>i</mi> </msub> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>9</mn> <mo>)</mo> </mrow> </mrow> </math>
Constraints (8) and (9) approximately match constraints (1) and (2), up to slightly larger slack variables. A further constraint (10) can be derived in the same way; under one condition on the slack variables it strictly conforms to constraint (4), and otherwise it approximately conforms to constraint (4) with a slightly larger slack variable.
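The derivation above rests only on the triangle inequality: the same-view difference |w^{(a)T}x_i^{(a)} − w^{(a)T}x_j^{(a)}| of constraint (8) is bounded by the sum of the cross-view slack of constraint (3) and the two-view slack of constraint (7) at point j. A small numeric sketch (all names and values hypothetical) confirms this:

```python
import numpy as np

rng = np.random.default_rng(0)
w_a, w_b = rng.normal(size=3), rng.normal(size=3)
b_a, b_b = 0.1, -0.2
x_i_a, x_j_a = rng.normal(size=3), rng.normal(size=3)
x_i_b, x_j_b = rng.normal(size=3), rng.normal(size=3)

f_a = lambda x: w_a @ x + b_a  # decision value in view a
f_b = lambda x: w_b @ x + b_b  # decision value in view b

# slacks that make constraint (3) and two-view constraint (7) tight at j
eta_ij_ab = abs(f_a(x_i_a) - f_b(x_j_b))  # cross-view constraint (3)
eta_j = abs(f_a(x_j_a) - f_b(x_j_b))      # two-view constraint (7) at j

# triangle inequality: same-view difference (8) is bounded by their sum
lhs = abs(w_a @ x_i_a - w_a @ x_j_a)
assert lhs <= eta_ij_ab + eta_j + 1e-12
```

The bound holds for any choice of weights and points, because f_a(x_i) − f_a(x_j) = (f_a(x_i) − f_b(x_j)) + (f_b(x_j) − f_a(x_j)) and the offsets cancel on the left-hand side.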
To reduce computational complexity, the embodiment of the invention adopts flexible classification labels to replace the same-view multi-label constraints (1) and (2); meanwhile, only one of (3) and (4) is selected as the multi-label constraint between different views. The flexible classification label l_{i,r} of each point I_i depends not only on the class label y_{i,r} of I_i but also on the class labels of the nearest neighbor points of I_i in the label feature space. l_{i,r} is defined as follows:
where D is a constant and 0 ≤ D < 1. The optimization formula of the multi-label two-view support vector machine is then as follows:
<math> <mrow> <msub> <mi>C</mi> <mi>ij</mi> </msub> <mo>=</mo> <mfenced open='{' close=''> <mtable> <mtr> <mtd> <msup> <mi>C</mi> <mrow> <mo>(</mo> <mi>ab</mi> <mo>)</mo> </mrow> </msup> </mtd> <mtd> <mi>i</mi> <mo>=</mo> <mi>j</mi> </mtd> </mtr> <mtr> <mtd> <msup> <mi>C</mi> <mrow> <mrow> <mo>(</mo> <mi>ab</mi> <mo>)</mo> </mrow> <mo>*</mo> </mrow> </msup> <mo>/</mo> <msup> <mi>e</mi> <mrow> <msub> <mi>dis</mi> <mi>r</mi> </msub> <mrow> <mo>(</mo> <msub> <mi>I</mi> <mi>i</mi> </msub> <mo>,</mo> <msub> <mi>I</mi> <mi>j</mi> </msub> <mo>)</mo> </mrow> </mrow> </msup> </mtd> <mtd> <mi>i</mi> <mo>&NotEqual;</mo> <mi>j</mi> </mtd> </mtr> </mtable> </mfenced> </mrow> </math>
<math> <mrow> <mi>s</mi> <mo>.</mo> <mi>t</mi> <mo>.</mo> <msubsup> <mo>&ForAll;</mo> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>n</mi> </msubsup> <mo>:</mo> <msub> <mi>l</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mrow> <mo>(</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>+</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msup> <mo>)</mo> </mrow> <mo>&GreaterEqual;</mo> <msup> <mrow> <mo>|</mo> <msub> <mi>l</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mo>|</mo> </mrow> <mn>2</mn> </msup> <mo>-</mo> <mo>|</mo> <msub> <mi>l</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mo>|</mo> <msubsup> <mi>&xi;</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>,</mo> <msubsup> <mi>&xi;</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>&GreaterEqual;</mo> <mn>0</mn> </mrow> </math>
<math> <mrow> <msub> <mi>l</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mrow> <mo>(</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>+</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msup> <mo>)</mo> </mrow> <mo>&GreaterEqual;</mo> <msup> <mrow> <mo>|</mo> <msub> <mi>l</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mo>|</mo> </mrow> <mn>2</mn> </msup> <mo>-</mo> <mo>|</mo> <msub> <mi>l</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mo>|</mo> <msubsup> <mi>&xi;</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>,</mo> <msubsup> <mi>&xi;</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>&GreaterEqual;</mo> <mn>0</mn> </mrow> </math>
and, for each point I_i and each of its nearest neighbor points I_j:
<math> <mrow> <mo>|</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>+</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msup> <mo>-</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>j</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>-</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msup> <mo>|</mo> <mo>&le;</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>ab</mi> <mo>)</mo> </mrow> </msubsup> <mo>,</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>ab</mi> <mo>)</mo> </mrow> </msubsup> <mo>&GreaterEqual;</mo> <mn>0</mn> </mrow> </math>
where C^{(a)}, C^{(b)}, C^{(ab)}, C^{(ab)*} and D are constants, with C^{(ab)*} < C^{(ab)} and 0 ≤ D < 1. The dual of this problem can be obtained by the Lagrange multiplier method:
<math> <mrow> <msubsup> <mo>&ForAll;</mo> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>n</mi> </msubsup> <mo>:</mo> <mn>0</mn> <mo>&le;</mo> <msubsup> <mi>&lambda;</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>&le;</mo> <msup> <mi>C</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msup> <mo>,</mo> <mn>0</mn> <mo>&le;</mo> <msubsup> <mi>&lambda;</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>&le;</mo> <msup> <mi>C</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msup> </mrow> </math>
and, for each pair (i, j):
<math> <mrow> <msubsup> <mi>&beta;</mi> <mi>ij</mi> <mrow> <mrow> <mo>(</mo> <mi>ab</mi> <mo>)</mo> </mrow> <mo>+</mo> </mrow> </msubsup> <mo>+</mo> <msubsup> <mi>&beta;</mi> <mi>ij</mi> <mrow> <mrow> <mo>(</mo> <mi>ab</mi> <mo>)</mo> </mrow> <mo>-</mo> </mrow> </msubsup> <mo>&le;</mo> <mo>|</mo> <msub> <mi>l</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mo>|</mo> <msub> <mi>C</mi> <mi>ij</mi> </msub> </mrow> </math>
where λ_i^{(a)}, λ_i^{(b)}, β_{ij}^{(ab)+} and β_{ij}^{(ab)−} are all Lagrange multipliers.
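The neighborhood-dependent penalty C_ij that appears in the optimization and its dual can be sketched as follows (a hedged sketch; the function name and the example constants C^{(ab)} = 1.0, C^{(ab)*} = 0.5 are illustrative assumptions, chosen so that C^{(ab)*} < C^{(ab)}):

```python
import numpy as np

def penalty_C(i, j, dis, C_ab=1.0, C_ab_star=0.5):
    """Neighborhood-dependent penalty: C^(ab) on the diagonal (i == j),
    and C^(ab)* / exp(dis_r(I_i, I_j)) otherwise, so that neighbor pairs
    that are far apart in the multi-label space are penalized less."""
    return C_ab if i == j else C_ab_star / np.exp(dis)
```

The exponential decay means the cross-view multi-label constraint is enforced strongly only for pairs that are genuinely close under the novel distance metric dis_r.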
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (1)

1. A classification method based on a multi-label two-view support vector machine is characterized by comprising the following steps:
firstly, defining a novel distance metric in the multi-label space for measuring the distance between points in the multi-label space under a specific classification target, wherein the multi-label training set has two mutually condition-independent view spaces, denoted view a and view b respectively; each point in the multi-label training set is annotated with various labels, and the label dictionary of the multi-label training set forms an S-dimensional multi-label space, where S is a positive integer; the representations of each point I_i of the multi-label training set in the two view spaces are denoted x_i^{(a)} and x_i^{(b)} respectively; the label vector of I_i over the label dictionary is denoted d_i = (d_{i,1}, d_{i,2}, ..., d_{i,S})', where d_{i,r} ∈ {0, 1}, 1 ≤ r ≤ S, indicates whether the r-th label T_r of the label dictionary is present in I_i, T denoting a label in the label dictionary and T_r the r-th label therein; the class label of I_i is denoted y_{i,r}, with y_{i,r} = 2·d_{i,r} − 1; in the multi-label one-versus-rest classification mode, when a label T_r is taken as the classification target, the remaining labels in the label dictionary form an (S−1)-dimensional label feature space; the feature vector of I_i in that space is denoted t_{i,r}, where t_{i,r} = (d_{i,1}, ..., d_{i,r−1}, d_{i,r+1}, ..., d_{i,S})',
defining P_{00} ≜ P(d_{i,r} = 0 | d_{i,k} = 0) and P_{11} ≜ P(d_{i,r} = 1 | d_{i,k} = 1); given d_{i,k} = 0 or 1, the conditional probabilities that d_{i,r} = 0 or 1 are then as follows:
<math> <mrow> <msub> <mi>P</mi> <mn>10</mn> </msub> <mover> <mo>=</mo> <mi>&Delta;</mi> </mover> <mi>P</mi> <mrow> <mo>(</mo> <msub> <mi>d</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mo>=</mo> <mn>1</mn> <mo>|</mo> <msub> <mi>d</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>=</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>1</mn> <mo>-</mo> <msub> <mi>P</mi> <mn>00</mn> </msub> </mrow> </math>
<math> <mrow> <msub> <mi>P</mi> <mn>01</mn> </msub> <mover> <mo>=</mo> <mi>&Delta;</mi> </mover> <mi>P</mi> <mrow> <mo>(</mo> <msub> <mi>d</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mo>=</mo> <mn>0</mn> <mo>|</mo> <msub> <mi>d</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>=</mo> <mn>1</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>1</mn> <mo>-</mo> <msub> <mi>P</mi> <mn>11</mn> </msub> </mrow> </math>
wherein P represents a probability;
the association degree vector of each label T_r is denoted g_r, g_r = (g_{r,1}, ..., g_{r,r−1}, g_{r,r+1}, ..., g_{r,S})', and each element of the vector represents the degree of association between label T_r and another label;
the association element g_{r,k} (k ∈ {1, ..., r−1, r+1, ..., S}) is defined as follows: g_{r,k} = P_{00}·P_{11} + P_{10}·P_{01}; combining the feature vector of a sample point in the label feature space with the association degree vector of each label T_r, the novel distance metric in the multi-label space is defined as shown in the following formula: dis_r(I_i, I_j) = ||(t_{i,r} − t_{j,r}) ⊙ g_r||_p, where ⊙ denotes the Hadamard product between vectors;
then, extracting two groups of features of the training set from the two mutually condition-independent views, and jointly utilizing the complementary information of the two groups of features contained in the two views;
and finally, combining the information in the multi-label space and the two-view space, performing multi-label classification training by using the newly defined multi-label two-view support vector machine classifier, wherein the method for establishing the new multi-label two-view support vector machine classifier comprises: defining the neighborhood of I_i in the label feature space as the set of the nearest neighbor points of I_i in the label feature space, excluding I_i itself; the classification results of I_i and of the data points in its neighborhood have high similarity, while the classification results of non-neighborhood data points have low similarity; the size u of the neighborhood represents the number of nearest neighbor points of I_i in the label feature space;
adding a two-view constraint by maximizing the classification similarity of the same sample point under the two views, the two-view constraint being as follows:
<math> <mrow> <msubsup> <mo>&ForAll;</mo> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>n</mi> </msubsup> <mo>:</mo> <mo>|</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>+</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msup> <mo>-</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>-</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msup> <mo>|</mo> <mo>&le;</mo> <msub> <mi>&eta;</mi> <mi>i</mi> </msub> <mo>,</mo> <msub> <mi>&eta;</mi> <mi>i</mi> </msub> <mo>&GreaterEqual;</mo> <mn>0</mn> </mrow> </math>
where w^{(z)} and b̂^{(z)} are the coefficients and offsets of the multi-label two-view support vector machine classifier MSVM-2K at view z, z ∈ {a, b}, and η_i is a parameter greater than or equal to 0 that is adjustable in the training process;
adding multi-label constraints by minimizing the difference between the classification result of each point and those of its nearest neighbor points in the multi-label space, within the same view and between different views, the multi-label constraints being as follows:
and, for each point I_i and each of its nearest neighbor points I_j:
<math> <mrow> <mo>|</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>-</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>j</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>|</mo> <mo>&le;</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>aa</mi> <mo>)</mo> </mrow> </msubsup> <mo>,</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>aa</mi> <mo>)</mo> </mrow> </msubsup> <mo>&GreaterEqual;</mo> <mn>0</mn> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow> </math>
<math> <mrow> <mo>|</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>-</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>j</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>|</mo> <mo>&le;</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>bb</mi> <mo>)</mo> </mrow> </msubsup> <mo>,</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>bb</mi> <mo>)</mo> </mrow> </msubsup> <mo>&GreaterEqual;</mo> <mn>0</mn> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>2</mn> <mo>)</mo> </mrow> </mrow> </math>
<math> <mrow> <mo>|</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>+</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msup> <mo>-</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>j</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>-</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msup> <mo>|</mo> <mo>&le;</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>ab</mi> <mo>)</mo> </mrow> </msubsup> <mo>,</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>ab</mi> <mo>)</mo> </mrow> </msubsup> <mo>&GreaterEqual;</mo> <mn>0</mn> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>3</mn> <mo>)</mo> </mrow> </mrow> </math>
<math> <mrow> <mo>|</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>+</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msup> <mo>-</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>j</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>-</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msup> <mo>|</mo> <mo>&le;</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>ba</mi> <mo>)</mo> </mrow> </msubsup> <mo>,</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>ba</mi> <mo>)</mo> </mrow> </msubsup> <mo>&GreaterEqual;</mo> <mn>0</mn> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>4</mn> <mo>)</mo> </mrow> </mrow> </math>
where η_{ij}^{(aa)}, η_{ij}^{(bb)}, η_{ij}^{(ab)} and η_{ij}^{(ba)} are all parameters greater than or equal to 0 that are adjustable in the training process;
replacing the same-view multi-label constraints (1) and (2) with flexible classification labels, and meanwhile selecting only one of the different-view multi-label constraints (3) and (4), so as to reduce computational complexity; the flexible classification label l_{i,r} of each point I_i depends not only on the class label y_{i,r} of I_i but also on the class labels of the nearest neighbor points of I_i in the label feature space; l_{i,r} is defined as follows:
where D is a constant, 0 ≤ D < 1, and the optimization formula of the multi-label two-view support vector machine is as follows:
<math> <mrow> <msub> <mi>C</mi> <mi>ij</mi> </msub> <mo>=</mo> <mfenced open='{' close=''> <mtable> <mtr> <mtd> <msup> <mi>C</mi> <mrow> <mo>(</mo> <mi>ab</mi> <mo>)</mo> </mrow> </msup> </mtd> <mtd> <mi>i</mi> <mo>=</mo> <mi>j</mi> </mtd> </mtr> <mtr> <mtd> <msup> <mi>C</mi> <mrow> <mo>(</mo> <mi>ab</mi> <mo>)</mo> </mrow> </msup> <mo>*</mo> <mo>/</mo> <msup> <mi>e</mi> <mrow> <msub> <mi>dis</mi> <mi>r</mi> </msub> <mrow> <mo>(</mo> <msub> <mi>I</mi> <mi>i</mi> </msub> <mo>,</mo> <msub> <mi>I</mi> <mi>j</mi> </msub> <mo>)</mo> </mrow> </mrow> </msup> </mtd> <mtd> <mi>i</mi> <mo>&NotEqual;</mo> <mi>j</mi> </mtd> </mtr> </mtable> </mfenced> </mrow> </math>
<math> <mfenced open='' close=''> <mtable> <mtr> <mtd> <mi>s</mi> <mo>.</mo> <mi>t</mi> <mo>.</mo> <msubsup> <mo>&ForAll;</mo> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>n</mi> </msubsup> <mo>:</mo> <msub> <mi>l</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mrow> <mo>(</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>+</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msup> <mo>)</mo> </mrow> <mo>&GreaterEqual;</mo> <msup> <mrow> <mo>|</mo> <msub> <mi>l</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mo>|</mo> </mrow> <mn>2</mn> </msup> <mo>-</mo> <mo>|</mo> <msub> <mi>l</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mo>|</mo> <msubsup> <mi>&xi;</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>,</mo> <msubsup> <mi>&xi;</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>&GreaterEqual;</mo> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <msub> <mi>l</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mrow> <mo>(</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>+</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msup> <mo>)</mo> </mrow> <mo>&GreaterEqual;</mo> <msup> <mrow> <mo>|</mo> <msub> <mi>l</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mo>|</mo> </mrow> <mn>2</mn> </msup> <mo>-</mo> <mo>|</mo> <msub> <mi>l</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> <mo>|</mo> <msubsup> <mi>&xi;</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>,</mo> <msubsup> <mi>&xi;</mi> <mi>i</mi> <mrow> <mo>(</mo> 
<mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>&GreaterEqual;</mo> <mn>0</mn> </mtd> </mtr> </mtable> </mfenced> </math>
and, for each point I_i and each of its nearest neighbor points I_j:
<math> <mrow> <mo>|</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msubsup> <mo>+</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> </msup> <mo>-</mo> <msup> <mi>w</mi> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mi>T</mi> </mrow> </msup> <msubsup> <mi>x</mi> <mi>j</mi> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msubsup> <mo>-</mo> <msup> <mover> <mi>b</mi> <mo>^</mo> </mover> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> </msup> <mo>|</mo> <mo>&le;</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>ab</mi> <mo>)</mo> </mrow> </msubsup> <mo>,</mo> <msubsup> <mi>&eta;</mi> <mi>ij</mi> <mrow> <mo>(</mo> <mi>ab</mi> <mo>)</mo> </mrow> </msubsup> <mo>&GreaterEqual;</mo> <mn>0</mn> </mrow> </math>
where C^{(a)}, C^{(b)}, C^{(ab)}, C^{(ab)*} and D are constants, C^{(ab)*} < C^{(ab)}, 0 ≤ D < 1, and ξ_i^{(a)}, ξ_i^{(b)} and η_{ij}^{(ab)} are parameters greater than or equal to 0 that are adjustable in the training process.
CN201210396612.8A 2012-10-18 2012-10-18 Based on the sorting technique of many label two visual angles support vector machine Expired - Fee Related CN102945370B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210396612.8A CN102945370B (en) 2012-10-18 2012-10-18 Based on the sorting technique of many label two visual angles support vector machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210396612.8A CN102945370B (en) 2012-10-18 2012-10-18 Based on the sorting technique of many label two visual angles support vector machine

Publications (2)

Publication Number Publication Date
CN102945370A CN102945370A (en) 2013-02-27
CN102945370B true CN102945370B (en) 2015-10-28

Family

ID=47728309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210396612.8A Expired - Fee Related CN102945370B (en) 2012-10-18 2012-10-18 Based on the sorting technique of many label two visual angles support vector machine

Country Status (1)

Country Link
CN (1) CN102945370B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263151B (en) * 2019-05-06 2023-02-07 广东工业大学 Latent semantic learning method for multi-view multi-label data
CN112926675B (en) * 2021-03-22 2023-08-18 哈尔滨工业大学(深圳) Depth incomplete multi-view multi-label classification method under double visual angle and label missing

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102129470A (en) * 2011-03-28 2011-07-20 中国科学技术大学 Tag clustering method and system
CN102253996A (en) * 2011-07-08 2011-11-23 北京航空航天大学 Multi-visual angle stagewise image clustering method

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN102129470A (en) * 2011-03-28 2011-07-20 中国科学技术大学 Tag clustering method and system
CN102253996A (en) * 2011-07-08 2011-11-23 北京航空航天大学 Multi-visual angle stagewise image clustering method

Non-Patent Citations (1)

Title
Multi-label classification based on multi-view two-dimensional active learning; Zhang Xiaoyu; High Technology Letters; 2011-12-31; Vol. 21, No. 12, pp. 1312-1317 *

Also Published As

Publication number Publication date
CN102945370A (en) 2013-02-27

Similar Documents

Publication Publication Date Title
Li et al. Videolstm convolves, attends and flows for action recognition
Kodirov et al. Semantic autoencoder for zero-shot learning
Feng et al. A cluster sampling method for image matting via sparse coding
CN104731962B (en) Friend recommendation method and system based on similar corporations in a kind of social networks
Ravichandran et al. View-invariant dynamic texture recognition using a bag of dynamical systems
Fergus et al. Semi-supervised learning in gigantic image collections
WO2019015246A1 (en) Image feature acquisition
CN111950528B (en) Graph recognition model training method and device
CN104112018A (en) Large-scale image retrieval method
CN111680579B (en) Remote sensing image classification method for self-adaptive weight multi-view measurement learning
CN111368254A (en) Multi-view data missing completion method for multi-manifold regularization non-negative matrix factorization
US20180285670A1 (en) Systems and Methods for Identifying Objects in Media Contents
Lu et al. An EL-LDA based general color harmony model for photo aesthetics assessment
CN104966075A (en) Face recognition method and system based on two-dimensional discriminant features
CN103136309B (en) Social intensity is modeled by kernel-based learning algorithms
CN102945372A (en) Classifying method based on multi-label constraint support vector machine
Celona et al. A genetic algorithm to combine deep features for the aesthetic assessment of images containing faces
CN102945370B (en) Based on the sorting technique of many label two visual angles support vector machine
Gao et al. LiCa: Label-indicate-conditional-alignment domain generalization for pixel-wise hyperspectral imagery classification
Kılıçarslan et al. A comparative study of bread wheat varieties identification on feature extraction, feature selection and machine learning algorithms
Memon et al. A novel luminance-based algorithm for classification of semi-dark images
Khalil et al. A Comprehensive Study of Vision Transformers in Image Classification Tasks
Scaramuzzino et al. Attribute disentanglement with gradient reversal for interactive fashion retrieval
Shi et al. A global-local affinity matrix model via EigenGap for graph-based subspace clustering
CN102945371A (en) Classifying method based on multi-label flexible support vector machine

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20151028

Termination date: 20201018
