CN110070136B - Image representation classification method and electronic equipment thereof - Google Patents

Image representation classification method and electronic equipment thereof

Info

Publication number
CN110070136B
CN110070136B
Authority
CN
China
Prior art keywords
image
calculating
training sample
test sample
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910344169.1A
Other languages
Chinese (zh)
Other versions
CN110070136A (en)
Inventor
卢桂馥
王勇
唐肝翌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Polytechnic University
Original Assignee
Anhui Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Polytechnic University filed Critical Anhui Polytechnic University
Priority to CN201910344169.1A priority Critical patent/CN110070136B/en
Publication of CN110070136A publication Critical patent/CN110070136A/en
Application granted granted Critical
Publication of CN110070136B publication Critical patent/CN110070136B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image representation classification method and an electronic device thereof, relating to the technical field of pattern recognition and machine learning systems. The method comprises the following steps: acquiring an image training sample X and an image test sample y; calculating the weights w_i of the local geometric structure; calculating the weighted image training sample data and the weighted image test sample data; calculating β_t; calculating the reconstruction error r_i(y) of the image test sample y relative to the training samples of each class, comparing all r_i(y), and assigning y to the class with the smallest r_i(y). The maximum correntropy is a local metric; after it is introduced into CRC, the algorithm becomes more robust, which improves its classification performance. At the same time, the local manifold structure among the samples is introduced into the CRC algorithm, so that the local geometric structure among the samples is taken into account when the reconstruction coefficients are calculated, which further improves classification performance.

Description

Image representation classification method and electronic equipment thereof
Technical Field
The present invention relates to the field of pattern recognition and machine learning systems, and more particularly to an image representation classification method and an electronic device thereof.
Background
Sparse Representation based Classification (SRC) is an image classification method proposed by J. Wright et al. Its basic idea is that a test image lies in the subspace spanned by the training samples, that is, the test image can be linearly reconstructed from the training images. The sparse reconstruction coefficients of the test image over the training images are found with the L1 norm, the sample is then reconstructed with these coefficients, and classification is finally performed by comparing the reconstruction errors. Although SRC achieves good recognition results, it has to solve an L1-norm optimization problem, so its complexity is high.
To address this problem, the Collaborative Representation based Classification (CRC) algorithm was proposed. CRC replaces the L1 norm in SRC with the L2 norm, so it runs much faster than SRC and its complexity is greatly reduced, while experiments show that CRC and SRC have similar recognition ability. However, the CRC algorithm is sensitive to noise and not robust enough.
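For reference, the standard CRC decision rule just described admits a closed-form solution, β = (XᵀX + λI)⁻¹Xᵀy, followed by class-wise residual comparison. The minimal NumPy sketch below illustrates it; the regularization parameter and the toy data are illustrative assumptions, not values taken from the invention.

```python
import numpy as np

def crc_classify(X, labels, y, lam=0.01):
    """Plain CRC: one ridge solve over all training samples, then assign y to
    the class with the smallest reconstruction error."""
    n = X.shape[1]
    # Collaborative representation: beta = (X^T X + lam * I)^-1 X^T y
    beta = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - X[:, labels == c] @ beta[labels == c]) for c in classes]
    return classes[int(np.argmin(residuals))]

# Toy usage with random data (purely illustrative).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))        # 20 training samples of dimension 50
labels = np.repeat(np.arange(4), 5)      # 4 classes, 5 samples per class
y = X[:, 7] + 0.05 * rng.standard_normal(50)
print(crc_classify(X, labels, y))        # expected: class 1 (the class of column 7)
```

Because the solution is a single linear solve, CRC avoids the iterative L1 optimization required by SRC, which is the speed advantage noted above.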
Disclosure of Invention
In view of this, the present invention provides an image representation classification method and an electronic device thereof, which address the problem described in the background that existing image classification algorithms are sensitive to noise and not robust enough; the proposed algorithm is more robust, which improves its classification performance.
The invention provides an image representation classification method, which comprises the following steps:
S1: acquiring an image training sample X and an image test sample y;
S2: letting t = 1, p_t = [1, 1, ..., 1] ∈ R^d and β_t = [1, 1, ..., 1] ∈ R^n, where t is the iteration number (t = 1 denotes the first loop), p_t is an initial value and β is the reconstruction coefficient vector to be solved;
S3: calculating the weights w_i of the local geometric structure based on the image training sample X and the image test sample y; all w_i form a diagonal matrix W;
S4: calculating the weighted image training sample data and the weighted image test sample data based on the image training sample X and the image test sample y;
S5: calculating β_t based on the weighted image training sample data, the weighted image test sample data and the weights w_i;
S6: defining a value ε and calculating ||β_t − β_{t−1}||; if ||β_t − β_{t−1}|| < ε, going to step S7; otherwise calculating p_j = g(y_j − Σ_i x_ji β_i), where p_j is the jth component of the vector p, y_j is the jth component of the test sample y, x_ji is the jth component of the training sample x_i, g(·) is a Gaussian function, i.e. the maximum correntropy kernel, and σ is the width of the Gaussian kernel; then setting t = t + 1 and returning to step S4;
S7: calculating the reconstruction error r_i(y) of the image test sample y relative to the training samples of each class, comparing all r_i(y), and assigning the image test sample y to the class with the smallest r_i(y).
Optionally, step S1 further comprises the following steps:
acquiring a plurality of image training samples;
acquiring an image test sample;
and performing a dimensionality reduction operation on the image training samples and the image test sample respectively, to obtain the dimensionality-reduced image training sample X and image test sample y.
Optionally, after the plurality of image training samples are acquired, each image training sample is stretched column-wise into a d-dimensional column vector x_i ∈ R^d, i = 1, 2, ..., n, and all image training samples form a data matrix X = {x_1, x_2, ..., x_n} = [X_1, ..., X_c] ∈ R^(d×n), where c is the number of classes of the image training samples.
Optionally, after the image test sample is acquired, it is stretched column-wise into a d-dimensional column vector y ∈ R^d.
Optionally, in step S3 the weight w_i of the local geometric structure is calculated by the formula given as an equation image; when y and x_i are relatively close, w_i is small, and otherwise it is large, where x_i is the ith image training sample in X.
Optionally, in step S4 the weighted image training sample data and the weighted image test sample data are calculated by the formulas given as equation images, where diag(·) denotes changing a column vector into a diagonal matrix.
Optionally, in step S5 β_t is calculated by the formula given as an equation image.
Optionally, in step S7 the reconstruction error is calculated by the formula r_i(y) = ||y − X_i δ_i(β)||_2, i = 1, 2, ..., c, where X_i denotes the training samples of the ith class, δ_i(β) denotes the coefficients in β associated with the ith class, and c is the total number of classes of the image training samples.
Based on the same inventive concept, the invention also provides an electronic device for the collaborative representation classification method, comprising at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 8.
The invention has the following advantages:
the invention is a local measurement method through the maximum correlation entropy, and after the maximum correlation entropy is introduced into the CRC, the algorithm is more robust, so that the classification performance of the algorithm is improved.
At the same time, the local manifold structure among the samples is introduced into the CRC algorithm, so that the local geometric structure among the samples is taken into account when the reconstruction coefficients are calculated, which also improves the classification performance of the algorithm.
Drawings
FIG. 1 is a flow chart of an image representation classification method according to an embodiment of the present invention;
FIG. 2 is a schematic hardware structure diagram of an embodiment of an electronic device for the collaborative representation classification method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
The inventors found that neither the SRC algorithm nor the CRC algorithm takes advantage of the local geometry between samples.
Maximum correntropy is a local metric; when the noise increases rapidly, the maximum correntropy changes much more slowly, and it is therefore more robust to noise.
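To illustrate this robustness, the sketch below assumes the usual Gaussian correntropy kernel g(e) = exp(−e²/(2σ²)) (the invention specifies its kernel only in an equation image, so this exact form is an assumption): as the residual grows, the squared loss grows without bound, while the correntropy-induced loss saturates, so large outliers have a bounded influence.

```python
import numpy as np

def gaussian_kernel(e, sigma=1.0):
    # Gaussian kernel commonly used in maximum-correntropy criteria (assumed form).
    return np.exp(-(e ** 2) / (2.0 * sigma ** 2))

residuals = np.array([0.1, 1.0, 5.0, 50.0])           # increasingly large errors (outliers)
squared_loss = residuals ** 2                         # grows without bound: 0.01, 1, 25, 2500
correntropy_loss = 1.0 - gaussian_kernel(residuals)   # saturates near 1 for large residuals
print(squared_loss)
print(correntropy_loss)
```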
In order to overcome these problems and improve the robustness and classification performance of the CRC algorithm, the maximum correntropy and the local manifold structure are introduced into the CRC algorithm, and a collaborative image representation classification method based on maximum correntropy and locality constraints is designed.
The technical model of the invention is the optimization problem given as an equation image, wherein β ∈ R^n is the reconstruction coefficient vector to be solved, n is the number of training samples, d is the dimension of the samples, and β_i is the ith component of β; y_j is the jth component of the test sample y, and x_ji is the jth component of the training sample x_i; g(·) is a Gaussian function, i.e. the maximum correntropy kernel, and σ is the width of the Gaussian kernel; W is a diagonal matrix, W = diag(w_1, w_2, ..., w_n), whose entries w_i are the locality weights defined by the formula given as an equation image.
As shown in FIG. 1, in some embodiments, the method for solving the technical model comprises the following steps:
S1: acquiring an image training sample X and an image test sample y;
S2: letting t = 1, p_t = [1, 1, ..., 1] ∈ R^d and β_t = [1, 1, ..., 1] ∈ R^n, where t is the iteration number (t = 1 denotes the first loop), p_t is an initial value and β is the reconstruction coefficient vector to be solved;
S3: calculating the weights w_i of the local geometric structure based on the image training sample X and the image test sample y; all w_i form a diagonal matrix W;
S4: calculating the weighted image training sample data and the weighted image test sample data based on the image training sample X and the image test sample y;
S5: calculating β_t based on the weighted image training sample data, the weighted image test sample data and the weights w_i;
S6: defining a value ε and calculating ||β_t − β_{t−1}||; if ||β_t − β_{t−1}|| < ε, going to step S7; otherwise calculating p_j = g(y_j − Σ_i x_ji β_i), where p_j is the jth component of the vector p, y_j is the jth component of the test sample y, x_ji is the jth component of the training sample x_i, g(·) is a Gaussian function, i.e. the maximum correntropy kernel, and σ is the width of the Gaussian kernel; then setting t = t + 1 and returning to step S4;
S7: calculating the reconstruction error r_i(y) of the image test sample y relative to the training samples of each class, comparing all r_i(y), and assigning the image test sample y to the class with the smallest r_i(y).
In this method, the maximum correntropy is a local metric; after it is introduced into CRC, the algorithm becomes more robust, which improves its classification performance. At the same time, the local manifold structure among the samples is introduced into the CRC algorithm, so that the local geometric structure among the samples is taken into account when the reconstruction coefficients are calculated, which also improves classification performance.
In some embodiments, step S1 further comprises the following steps:
acquiring a plurality of image training samples;
acquiring an image test sample;
and performing a dimensionality reduction operation on the image training samples and the image test sample respectively, specifically by a Principal Component Analysis (PCA) algorithm, to obtain the dimensionality-reduced image training sample X and image test sample y.
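A possible realization of this dimensionality-reduction step is sketched below using scikit-learn's PCA; the image size, number of samples and target dimension are illustrative assumptions, since the invention only names PCA.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
train_images = rng.random((200, 32, 32))     # 200 training face images of size 32x32 (illustrative)
test_image = rng.random((32, 32))

# Flatten each image, fit PCA on the training set only, then project both sets.
pca = PCA(n_components=100)                  # the reduced dimension d is an assumed value
X_reduced = pca.fit_transform(train_images.reshape(200, -1))   # shape (200, 100)
y_reduced = pca.transform(test_image.reshape(1, -1))           # shape (1, 100)

# The method works with column vectors, so arrange the data as d x n.
X = X_reduced.T                              # shape (100, 200)
y = y_reduced.ravel()                        # shape (100,)
print(X.shape, y.shape)
```

Fitting PCA on the training samples only and then projecting the test sample keeps the reduction consistent between X and y.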
In some embodiments, after the plurality of image training samples are acquired, each image training sample is stretched column-wise into a d-dimensional column vector x_i ∈ R^d, i = 1, 2, ..., n, and all image training samples form a data matrix X = {x_1, x_2, ..., x_n} = [X_1, ..., X_c] ∈ R^(d×n), where c is the number of classes of the image training samples.
After the image test sample is acquired, it is stretched column-wise into a d-dimensional column vector y ∈ R^d.
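The column-wise stretching and stacking described here can be sketched as follows; the image size and class layout are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((12, 8, 8))              # n = 12 tiny 8x8 images (3 classes x 4 samples)
labels = np.repeat(np.arange(3), 4)

# "Stretching by columns": Fortran order reads each image column by column.
columns = [img.flatten(order="F") for img in images]   # each x_i lies in R^64
X = np.stack(columns, axis=1)                          # data matrix of shape (64, 12) = d x n

test_image = rng.random((8, 8))
y = test_image.flatten(order="F")                      # test vector y in R^64
print(X.shape, y.shape)
```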
In some embodiments, in step S3 the weight w_i of the local geometric structure is calculated by the formula given as an equation image; when y and x_i are relatively close, w_i is small, and otherwise it is large, where x_i is the ith image training sample in X.
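The weight formula itself is given only as an equation image; a plausible stand-in with the stated behaviour (w_i small when y and x_i are close, large otherwise) is the Euclidean distance between y and each training sample, which is what the following sketch assumes.

```python
import numpy as np

def locality_weights(X, y):
    """Assumed form w_i = ||y - x_i||_2: training samples close to y get small weights."""
    return np.linalg.norm(X - y[:, None], axis=0)

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 12))            # d x n training matrix
y = X[:, 3] + 0.01 * rng.standard_normal(64) # test sample close to training sample 3

w = locality_weights(X, y)                   # one weight per training sample, shape (n,)
W = np.diag(w)                               # the diagonal matrix built from all w_i
print(int(np.argmin(w)))                     # 3: the nearest sample gets the smallest weight
```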
In some embodiments, in step S4 the weighted image training sample data and the weighted image test sample data are calculated by the formulas given as equation images, where diag(·) denotes changing a column vector into a diagonal matrix.
In some embodiments, in step S5 β_t is calculated by the formula given as an equation image.
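Because the β_t formula is available only as an equation image, the following sketch should be read as an assumed reconstruction of steps S4 to S6 rather than the patented formula: it solves a locality-regularized ridge problem on the re-weighted data, checks the convergence criterion of step S6, and updates p through a Gaussian kernel; λ, σ and ε are assumed values.

```python
import numpy as np

def gaussian_kernel(e, sigma):
    return np.exp(-(e ** 2) / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
d, n = 64, 12
X = rng.standard_normal((d, n))
y = X @ (0.1 * rng.random(n)) + 0.05 * rng.standard_normal(d)

w = np.linalg.norm(X - y[:, None], axis=0)    # locality weights (assumed Euclidean form)
W = np.diag(w)
lam, sigma, eps = 0.01, 1.0, 1e-6             # assumed hyper-parameters

p = np.ones(d)                                # step S2: initial p and beta
beta = np.ones(n)
for t in range(100):
    P_half = np.diag(np.sqrt(p))
    X_t, y_t = P_half @ X, P_half @ y         # step S4: re-weighted data (assumed form)
    # step S5 (assumed): locality-regularised ridge solve on the re-weighted data
    beta_new = np.linalg.solve(X_t.T @ X_t + lam * (W.T @ W), X_t.T @ y_t)
    if np.linalg.norm(beta_new - beta) < eps: # step S6: convergence test
        beta = beta_new
        break
    p = gaussian_kernel(y - X @ beta_new, sigma)  # step S6: p_j = g(y_j - sum_i x_ji beta_i)
    beta = beta_new
print(np.round(beta[:4], 3))
```

When the residual on a feature dimension is large, g(·) drives the corresponding entry of p towards zero, so that dimension contributes less to the next solve, which is where the robustness to noise comes from.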
In some embodiments, in step S7 the reconstruction error is calculated by the formula r_i(y) = ||y − X_i δ_i(β)||_2, i = 1, 2, ..., c, where X_i denotes the training samples of the ith class, δ_i(β) denotes the coefficients in β associated with the ith class, and c is the total number of classes of the image training samples.
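A sketch of this final decision rule follows: δ_i(β) keeps only the coefficients of β that belong to class i, a class-wise reconstruction error is computed, and the test sample is assigned to the class with the smallest error. The coefficients here come from a plain ridge solve purely so that the example runs end to end; in the method of the invention they would be the β_t produced by the iteration above.

```python
import numpy as np

def classify_by_residual(X, labels, y, beta):
    """r_i(y) = ||y - X_i delta_i(beta)||_2 for each class i; return the arg-min class."""
    classes = np.unique(labels)
    residuals = np.array([
        np.linalg.norm(y - X[:, labels == c] @ beta[labels == c]) for c in classes
    ])
    return classes[int(np.argmin(residuals))], residuals

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 12))
labels = np.repeat(np.arange(3), 4)              # c = 3 classes with 4 samples each
y = X[:, 5] + 0.05 * rng.standard_normal(64)     # y is close to a sample of class 1

beta = np.linalg.solve(X.T @ X + 0.01 * np.eye(12), X.T @ y)   # placeholder coefficients
pred, r = classify_by_residual(X, labels, y, beta)
print(pred, np.round(r, 3))                      # expected prediction: class 1
```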
The superiority of the algorithm over other algorithms is verified by experiments on the ORL and Yale face image libraries.
Table 1 shows the recognition rates of SRC, CRC and the algorithm designed by the present invention (CRC/MCCLC) on the ORL library (with 4 and 5 training samples per class, respectively).
Table 1. Recognition rates of different algorithms on the ORL library.
[table given as an image]
As the table shows, the recognition rate of the proposed algorithm on the ORL library is higher than that of the SRC and CRC algorithms.
Table 2 shows the recognition rates of SRC, CRC and the algorithm designed by the present invention on the Yale library (with 4 and 5 training samples per class, respectively).
Table 2. Recognition rates of different algorithms on the Yale library.
[table given as an image]
As the table shows, the recognition rate of the proposed algorithm on the Yale library is higher than that of the SRC and CRC algorithms.
In conclusion, after the maximum correntropy is introduced into CRC, the algorithm becomes more robust, which improves its classification performance; at the same time, the local manifold structure among the samples is introduced into the CRC algorithm, so that the local geometric structure among the samples is taken into account when the reconstruction coefficients are calculated, which also improves classification performance.
An electronic device for the collaborative representation classification method comprises at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the methods described above.
Taking the electronic device shown in FIG. 2 as an example, the electronic device comprises a processor and a memory, and may further comprise an input device and an output device.
The processor, memory, input device and output device may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 2.
The memory, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the image representation classification method in the embodiments of the present application. The processor executes the various functional applications and data processing of the server by running the non-volatile software programs, instructions and modules stored in the memory, that is, it implements the image representation classification method of the above method embodiments.
The memory may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the image representation classification apparatus, and the like. Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memory may be connected to the device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device may receive input numeric or character information and generate key signal inputs related to user settings and function control of the image representation classification apparatus. The output device may include a display device such as a display screen.
The one or more modules are stored in the memory and, when executed by the processor, perform the image representation classification method of any of the above method embodiments.
Any embodiment of the electronic device executing the image representation classification method may achieve the same or similar effects as any corresponding method embodiment described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-only memory (ROM), a Random Access Memory (RAM), or the like. Embodiments of the computer program may achieve the same or similar effects as any of the preceding method embodiments to which it corresponds.
Furthermore, the method according to the present disclosure may also be implemented as a computer program executed by a CPU, which may be stored in a computer-readable storage medium. The computer program, when executed by the CPU, performs the above-described functions defined in the method of the present disclosure.
Further, the above method steps and system elements may also be implemented using a controller and a computer readable storage medium for storing a computer program for causing the controller to implement the functions of the above steps or elements.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with the following components designed to perform the functions described herein: a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
Those of ordinary skill in the art will understand that the discussion of any embodiment above is meant to be exemplary only and is not intended to imply that the scope of the disclosure, including the claims, is limited to these examples; within the spirit of the invention, features of the above embodiments or of different embodiments may also be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above which are not provided in detail for the sake of brevity.

Claims (9)

1. A method of classifying an image representation, comprising the steps of:
S1: acquiring an image training sample X and an image test sample y;
S2: letting t = 1, p_t = [1, 1, ..., 1] ∈ R^d and β_t = [1, 1, ..., 1] ∈ R^n, where t is the iteration number (t = 1 denotes the first loop), p_t is an initial value and β is the reconstruction coefficient vector to be solved;
S3: calculating the weights w_i of the local geometric structure based on the image training sample X and the image test sample y, all w_i forming a diagonal matrix W;
S4: calculating the weighted image training sample data and the weighted image test sample data based on the image training sample X and the image test sample y;
S5: calculating β_t based on the weighted image training sample data, the weighted image test sample data and the weights w_i;
S6: defining a value ε and calculating ||β_t − β_{t−1}||; if ||β_t − β_{t−1}|| < ε, going to step S7; otherwise calculating p_j = g(y_j − Σ_i x_ji β_i), where p_j is the jth component of the vector p, y_j is the jth component of the test sample y, x_ji is the jth component of the training sample x_i, g(·) is a Gaussian function, i.e. the maximum correntropy kernel, and σ is the width of the Gaussian kernel; then setting t = t + 1 and returning to step S4;
S7: calculating the reconstruction error r_i(y) of the image test sample y relative to the training samples of each class, comparing all r_i(y), and assigning the image test sample y to the class with the smallest r_i(y).
2. The method according to claim 1, wherein step S1 further comprises the following steps:
acquiring a plurality of image training samples;
acquiring an image test sample;
and performing a dimensionality reduction operation on the image training samples and the image test sample respectively, to obtain the dimensionality-reduced image training sample X and image test sample y.
3. The image representation classification method as claimed in claim 2, wherein after the plurality of image training samples are acquired, each image training sample is stretched column-wise into a d-dimensional column vector x_i ∈ R^d, i = 1, 2, ..., n, and all image training samples form a data matrix X = {x_1, x_2, ..., x_n} = [X_1, ..., X_c] ∈ R^(d×n), where c is the number of classes of the image training samples.
4. The image representation classification method as claimed in claim 2, wherein after the image test sample is acquired, it is stretched column-wise into a d-dimensional column vector y ∈ R^d.
5. The image representation classification method according to claim 1, wherein in step S3 the weight w_i of the local geometric structure is calculated by the formula given as an equation image; when y and x_i are relatively close, w_i is small, and otherwise it is large, where x_i is the ith image training sample in X.
6. The image representation classification method according to claim 1, wherein in step S4 the weighted image training sample data and the weighted image test sample data are calculated by the formulas given as equation images, where diag(·) denotes changing a column vector into a diagonal matrix.
7. The image representation classification method according to claim 1, wherein in step S5 β_t is calculated by the formula given as an equation image.
8. The image representation classification method according to claim 1, wherein in step S7 the reconstruction error is calculated by the formula r_i(y) = ||y − X_i δ_i(β)||_2, i = 1, 2, ..., c, where X_i denotes the training samples of the ith class, δ_i(β) denotes the coefficients in β associated with the ith class, and c is the total number of classes of the image training samples.
9. An electronic device for the collaborative representation classification method, comprising at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 8.
CN201910344169.1A 2019-04-26 2019-04-26 Image representation classification method and electronic equipment thereof Active CN110070136B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910344169.1A CN110070136B (en) 2019-04-26 2019-04-26 Image representation classification method and electronic equipment thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910344169.1A CN110070136B (en) 2019-04-26 2019-04-26 Image representation classification method and electronic equipment thereof

Publications (2)

Publication Number Publication Date
CN110070136A CN110070136A (en) 2019-07-30
CN110070136B true CN110070136B (en) 2022-09-09

Family

ID=67369168

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910344169.1A Active CN110070136B (en) 2019-04-26 2019-04-26 Image representation classification method and electronic equipment thereof

Country Status (1)

Country Link
CN (1) CN110070136B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011100491A2 (en) * 2010-02-12 2011-08-18 University Of Florida Research Foundation Inc. Adaptive systems using correntropy
US11636329B2 (en) * 2017-08-28 2023-04-25 University Of Florida Research Foundation, Inc. Real time implementation of recurrent network detectors

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930301A (en) * 2012-10-16 2013-02-13 西安电子科技大学 Image classification method based on characteristic weight learning and nuclear sparse representation
CN106291449A (en) * 2016-08-04 2017-01-04 大连大学 Direction of arrival angular estimation new method under symmetric-stable distribution noise
CN107066964A (en) * 2017-04-11 2017-08-18 宋佳颖 Rapid collaborative representation face classification method
CN108875459A (en) * 2017-05-08 2018-11-23 武汉科技大学 One kind being based on the similar weighting sparse representation face identification method of sparse coefficient and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Robust maximum margin criterion algorithm based on complex kernels; 卢桂馥 (Lu Guifu) et al.; Opto-Electronic Engineering (光电工程); 2015-09-15 (No. 09); full text *

Also Published As

Publication number Publication date
CN110070136A (en) 2019-07-30

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant