CN111860529A - Image preprocessing method, system, device and medium - Google Patents

Image preprocessing method, system, device and medium

Info

Publication number
CN111860529A
Authority
CN
China
Prior art keywords
image
processing
dimension
features
gradient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010744567.5A
Other languages
Chinese (zh)
Inventor
刘卓
刘毅枫
梁记斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Chaoyue CNC Electronics Co Ltd
Original Assignee
Shandong Chaoyue CNC Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Chaoyue CNC Electronics Co Ltd
Priority to CN202010744567.5A
Publication of CN111860529A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for preprocessing an image, which comprises the following steps: acquiring an image and performing graying processing on it; performing color-space normalization on the grayed image; performing feature extraction on the normalized image with the HOG algorithm to obtain image features of a first dimension; and performing dimension-reduction processing on the image features of the first dimension with the LDA algorithm. The invention also discloses a system, a computer device and a readable storage medium. In the scheme provided by the invention, preprocessing the image sample data with the HOG algorithm removes the uncertainty that objective factors introduce into the original appearance of the image and extracts the "features" that represent that original appearance, improving the precision of the sample data; performing dimension reduction on the high-dimensional HOG features with the LDA algorithm then removes redundant data-set features, which improves model precision, strengthens model robustness and accelerates model training.

Description

Image preprocessing method, system, device and medium
Technical Field
The present invention relates to the field of image processing, and in particular to a method, a system, a device and a storage medium for preprocessing an image.
Background
Image recognition and verification play a very important role in the daily life of modern society; in fields such as face recognition, autonomous driving, object detection and object classification, image recognition is an indispensable technology.
Image acquisition is also convenient: low-end, inexpensive cameras can serve as acquisition equipment, which makes the technology highly practical.
However, most images are high-dimensional, the imaged objects are non-rigid bodies with many possible variations, and the images collected by a camera are affected by illumination, imaging angle, imaging distance and so on. The image signal therefore carries great uncertainty, and relative to the original appearance of the object this uncertainty is equivalent to noise: it reduces the accuracy of object detection, increases the risk of overfitting and weakens the robustness of the model.
Disclosure of Invention
In view of the above, in order to overcome at least one aspect of the above problems, an embodiment of the present invention provides a method for preprocessing an image, including:
acquiring an image and performing graying processing on it;
performing color-space normalization on the grayed image;
performing feature extraction on the normalized image with the HOG algorithm to obtain image features of a first dimension;
and performing dimension-reduction processing on the image features of the first dimension with the LDA algorithm.
In some embodiments, performing feature extraction on the normalized image with the HOG algorithm to obtain the image features of the first dimension further comprises:
calculating a gradient for each pixel of the image;
dividing the image into a plurality of cells, and computing a histogram of gradients within each cell to obtain the features of each cell;
grouping a plurality of cells into a block, and concatenating the features of all the cells in the block to obtain the features of the block;
and concatenating the features of all blocks to obtain the final image features.
In some embodiments, calculating a gradient for each pixel of the image further comprises:
calculating the magnitude and direction of the gradient at each pixel.
In some embodiments, computing the histogram of gradients within each cell further comprises:
counting the numbers of the different gradients.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides an image preprocessing system, including:
a graying module configured to acquire an image and perform graying processing on it;
a normalization module configured to perform color-space normalization on the grayed image;
an extraction module configured to perform feature extraction on the normalized image with the HOG algorithm and obtain image features of a first dimension;
and a dimension-reduction module configured to perform dimension-reduction processing on the image features of the first dimension with the LDA algorithm.
In some embodiments, the extraction module is further configured to:
calculate a gradient for each pixel of the image;
divide the image into a plurality of cells, and compute a histogram of gradients within each cell to obtain the features of each cell;
group a plurality of cells into a block, and concatenate the features of all the cells in the block to obtain the features of the block;
and concatenate the features of all blocks to obtain the final image features.
In some embodiments, the extraction module is further configured to:
calculate the magnitude and direction of the gradient at each pixel.
In some embodiments, the extraction module is further configured to count the numbers of the different gradients.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a computer apparatus, including:
at least one processor; and
a memory storing a computer program operable on the processor, wherein the processor executes the program to perform the steps of any of the image pre-processing methods described above.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of any of the image preprocessing methods described above.
The invention has at least the following beneficial technical effect: in the scheme provided by the invention, preprocessing the image sample data with the HOG algorithm removes the uncertainty that objective factors introduce into the original appearance of the image and extracts the "features" that represent that original appearance, improving the precision of the sample data and thus the recognition precision of the model discriminator; performing dimension reduction on the high-dimensional HOG features with the LDA algorithm then removes redundant data-set features, which improves model precision, strengthens model robustness and accelerates model training and recognition.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions in the prior art, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can derive other embodiments from these drawings without creative effort.
FIG. 1 is a schematic flow chart of a method for preprocessing an image according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an image preprocessing system according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a computer device provided in an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish two entities or parameters that share the same name but are not identical. "First" and "second" are used merely for convenience of description, should not be construed as limiting the embodiments of the present invention, and are not explained again in the subsequent embodiments.
According to an aspect of the present invention, an embodiment of the present invention provides a method for preprocessing an image which, as shown in Fig. 1, may include the following steps:
S1, acquiring an image and performing graying processing on it;
S2, performing color-space normalization on the grayed image;
S3, performing feature extraction on the normalized image with the HOG algorithm to obtain image features of a first dimension;
and S4, performing dimension-reduction processing on the image features of the first dimension with the LDA algorithm.
In the scheme provided by the invention, preprocessing the image sample data with the HOG algorithm removes the uncertainty that objective factors introduce into the original appearance of the image and extracts the "features" that represent that original appearance, improving the precision of the sample data and thus the recognition precision of the model discriminator; performing dimension reduction on the high-dimensional HOG features with the LDA algorithm then removes redundant data-set features, which improves model precision, strengthens model robustness and accelerates model training and recognition.
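For illustration, the following Python sketch strings steps S1 to S4 together with scikit-image and scikit-learn. The choice of libraries and all parameter values (the gamma value, the 6 × 6 cell, the 3 × 3 block and the 9 orientation bins) are assumptions made for this sketch and are not prescribed by the embodiment.

```python
# Minimal sketch of the S1-S4 pipeline (assumed libraries and parameters).
import numpy as np
from skimage.color import rgb2gray
from skimage.exposure import adjust_gamma
from skimage.feature import hog
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def hog_features(rgb_image):
    """S1-S3: graying, Gamma (color-space) normalization, HOG extraction."""
    gray = rgb2gray(rgb_image)            # S1: graying processing
    norm = adjust_gamma(gray, gamma=0.5)  # S2: Gamma correction of the color space
    return hog(norm,                      # S3: first-dimension HOG feature vector
               orientations=9,
               pixels_per_cell=(6, 6),
               cells_per_block=(3, 3),
               block_norm='L2-Hys')


def reduce_dimension(feature_matrix, labels, n_components=None):
    """S4: supervised dimension reduction of the HOG features with LDA."""
    lda = LinearDiscriminantAnalysis(n_components=n_components)
    return lda.fit_transform(feature_matrix, labels)


# Usage sketch: stack one HOG vector per same-sized image, then project.
# X = np.stack([hog_features(im) for im in images]); X_low = reduce_dimension(X, y)
```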
In some embodiments, performing feature extraction on the normalized image with the HOG algorithm to obtain the image features of the first dimension further comprises:
calculating a gradient for each pixel of the image;
dividing the image into a plurality of cells, and computing a histogram of gradients within each cell to obtain the features of each cell;
grouping a plurality of cells into a block, and concatenating the features of all the cells in the block to obtain the features of the block;
and concatenating the features of all blocks to obtain the final image features.
In some embodiments, calculating a gradient for each pixel of the image further comprises:
calculating the magnitude and direction of the gradient at each pixel.
In some embodiments, computing the histogram of gradients within each cell further comprises:
counting the numbers of the different gradients.
Specifically, the image is first grayed (the image is regarded as a three-dimensional function of x, y and z (gray level)). The input image is then normalized in color space using Gamma correction; the purpose is to adjust the contrast of the image, reduce the influence of local shadows and illumination changes, and suppress noise interference. Next, the gradient (magnitude and direction) of each pixel of the image is calculated, mainly to capture contour information while further weakening the interference of illumination. The image is then divided into small cells (for example 6 × 6 pixels per cell), and a gradient histogram (the numbers of the different gradients) is computed for each cell, forming the descriptor of each cell. Several cells are then grouped into a block (for example 3 × 3 cells per block), and the feature descriptors of all the cells in a block are concatenated to obtain the HOG feature descriptor of that block. Finally, the HOG feature descriptors of all blocks in the image (the target to be detected) are concatenated to obtain the HOG feature descriptor of the image. This is the final feature vector available for classification.
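For concreteness, the NumPy sketch below implements the per-pixel gradient and the per-cell gradient histogram of that procedure. The central-difference gradient, the unsigned 0-180 degree direction range and the magnitude weighting of the histogram are common HOG conventions assumed here rather than details fixed by the text, and the block grouping and normalization steps are omitted.

```python
# Simplified per-pixel gradient and per-cell histogram (assumed conventions).
import numpy as np


def cell_histograms(gray, cell=6, bins=9):
    # Gradient of every pixel by central differences: magnitude and direction.
    gx = np.zeros_like(gray, dtype=float)
    gy = np.zeros_like(gray, dtype=float)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    magnitude = np.hypot(gx, gy)
    direction = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned gradient

    # Divide the image into cells and count a gradient histogram per cell,
    # accumulating each pixel's direction bin weighted by its magnitude.
    h, w = gray.shape
    rows, cols = h // cell, w // cell
    hists = np.zeros((rows, cols, bins))
    bin_idx = np.minimum((direction / (180.0 / bins)).astype(int), bins - 1)
    for r in range(rows):
        for c in range(cols):
            sl = np.s_[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
            hists[r, c] = np.bincount(bin_idx[sl].ravel(),
                                      weights=magnitude[sl].ravel(),
                                      minlength=bins)
    return hists  # descriptor of each cell; blocks concatenate neighbouring cells
```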
Linear Discriminant Analysis (LDA) makes full use of the known class labels of the training samples to find the projection subspace that is most helpful for discriminative classification; it is a supervised learning method. Its aim is to extract, from the high-dimensional feature space, the low-dimensional features with the strongest discriminative power: features that gather the samples of the same class together while separating samples of different classes as far as possible, i.e. features that maximize the ratio of the between-class scatter S_B to the within-class scatter S_W. The within-class scatter matrix is obtained by taking, for every training sample, the difference between the sample and the mean of its own class and averaging the resulting scatter within each class. The between-class scatter matrix S_B and the within-class scatter matrix S_W are defined by the following two formulas:
S_B = Σ_{i=1}^{C} P_i (μ_i − μ)(μ_i − μ)^T

S_W = Σ_{i=1}^{C} P_i (1/N_i) Σ_{x_k ∈ C_i} (x_k − μ_i)(x_k − μ_i)^T

where C is the number of classes, P_i is the prior probability of class i, μ_i is the mean of the samples of class C_i, μ is the mean of all samples, x_k is the k-th sample of the i-th class, C_i is the set of samples belonging to the i-th class, and N_i is the number of samples in class i.
After projection, the samples of different classes should be separated as well as possible in the low-dimensional space, while the samples of each class should be as compact as possible; that is, the larger the between-class scatter and the smaller the within-class scatter, the better. Therefore, if S_W is a non-singular matrix, the optimal projection W consists of the orthogonal eigenvectors that maximize the ratio of the determinants of the between-class scatter matrix and the within-class scatter matrix. The optimal mapping function is thus defined as:

W_opt = argmax_W ( |W^T S_B W| / |W^T S_W W| )
from linear algebraic theory, W is a solution that satisfies the following equation:
SBWi=λiSWWi(i=1,2,...,m)
i.e. to a matrix
Figure BDA0002607902230000064
Larger eigenvalue λiThe feature vector of (2).
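A minimal NumPy sketch of these formulas is given below: it computes S_B and S_W and takes the leading eigenvectors of S_W^{-1} S_B as the projection W. Estimating the priors P_i from class frequencies and using a pseudo-inverse to guard against a singular S_W are assumptions of this sketch, not requirements stated in the text.

```python
# Sketch of the LDA scatter matrices and projection (assumed conventions).
import numpy as np


def lda_projection(X, y, n_components):
    classes = np.unique(y)
    n, d = X.shape
    mu = X.mean(axis=0)                       # mean of all samples
    S_B = np.zeros((d, d))
    S_W = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        p_i = len(Xc) / n                     # prior probability P_i (estimated)
        mu_i = Xc.mean(axis=0)                # class mean
        diff = (mu_i - mu)[:, None]
        S_B += p_i * (diff @ diff.T)          # between-class scatter
        centered = Xc - mu_i
        S_W += p_i * (centered.T @ centered) / len(Xc)  # within-class scatter

    # S_B w = lambda * S_W w: eigenvectors of pinv(S_W) @ S_B, largest first.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(S_W) @ S_B)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:n_components]].real  # columns W_1 ... W_m
```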
The invention adopts the HOG algorithm, in which features are formed by computing and accumulating histograms of gradient directions over local regions of the image. Within an image, the appearance and shape of a local object can be described well by the density distribution of gradient or edge directions (in essence, statistics of the gradient, and the gradient is mainly concentrated at edges). The texture features that characterize the image are therefore extracted, which improves the precision of the sample data and in turn the recognition precision of the final model discriminator.
However, this processing enlarges the sample data set, makes storage difficult, and introduces feature redundancy and the curse of dimensionality, which in turn slows down model training and leads to overfitting and poor robustness. The invention therefore adopts the LDA algorithm, whose basic idea is to project the high-dimensional data samples onto the optimal discriminant vector space, thereby extracting classification information and compressing the dimensionality of the feature space. After projection, the data samples have the maximum between-class distance and the minimum within-class distance in the new subspace, i.e. the sample data has the best separability in that space. This completes the dimension-reduction processing of the features and accelerates model training and recognition.
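In practice this projection step could be delegated to an off-the-shelf LDA implementation; the sketch below assumes scikit-learn's LinearDiscriminantAnalysis and placeholder names for the training and test HOG feature matrices. Fitting once on labelled training features and reusing the same projection on new samples is what lets both training and recognition work in the reduced subspace.

```python
# Hedged sketch: fit LDA on training HOG features, reuse it at recognition time.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def fit_and_project(train_hogs, train_labels, test_hogs):
    lda = LinearDiscriminantAnalysis()       # n_components defaults to at most C - 1
    train_low = lda.fit_transform(train_hogs, train_labels)  # best-separability subspace
    test_low = lda.transform(test_hogs)      # same projection for unseen images
    return train_low, test_low, lda
```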
Therefore, in the scheme provided by the invention, preprocessing the image sample data with the HOG algorithm removes the uncertainty that objective factors introduce into the original appearance of the image and extracts the "features" that represent that original appearance, which helps improve the precision of the sample data and further improves the recognition precision of the model discriminator; performing dimension reduction on the high-dimensional HOG features with the LDA algorithm then removes redundant data-set features, which improves model precision, strengthens model robustness and accelerates model training and recognition.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides an image preprocessing system 400, as shown in fig. 2, including:
a graying module 401, where the graying module 401 is configured to acquire an image and perform graying processing on it;
a normalization module 402, where the normalization module 402 is configured to perform color-space normalization on the grayed image;
an extraction module 403, where the extraction module 403 is configured to perform feature extraction on the normalized image with the HOG algorithm and obtain image features of a first dimension;
and a dimension-reduction module 404, where the dimension-reduction module 404 is configured to perform dimension-reduction processing on the image features of the first dimension with the LDA algorithm.
In some embodiments, the extraction module 403 is further configured to:
calculate a gradient for each pixel of the image;
divide the image into a plurality of cells, and compute a histogram of gradients within each cell to obtain the features of each cell;
group a plurality of cells into a block, and concatenate the features of all the cells in the block to obtain the features of the block;
and concatenate the features of all blocks to obtain the final image features.
In some embodiments, the extraction module 403 is further configured to:
calculate the magnitude and direction of the gradient at each pixel.
In some embodiments, the extraction module 403 is further configured to count the numbers of the different gradients.
Based on the same inventive concept, according to another aspect of the present invention, as shown in fig. 3, an embodiment of the present invention further provides a computer apparatus 501, comprising:
at least one processor 520; and
the memory 510, the memory 510 stores a computer program 511 that can be run on the processor, and the processor 520 executes the program to perform the steps of any of the above image preprocessing methods.
Based on the same inventive concept, according to another aspect of the present invention, as shown in fig. 4, an embodiment of the present invention further provides a computer-readable storage medium 601, where the computer-readable storage medium 601 stores computer program instructions 610, and the computer program instructions 610, when executed by a processor, perform the steps of any one of the above image preprocessing methods.
Finally, it should be noted that, as will be understood by those skilled in the art, all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing related hardware.
Further, it should be appreciated that the computer-readable storage media (e.g., memory) herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments of the present invention.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The numbers of the embodiments disclosed in the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, of embodiments of the invention is limited to these examples; within the idea of an embodiment of the invention, also technical features in the above embodiment or in different embodiments may be combined and there are many other variations of the different aspects of the embodiments of the invention as described above, which are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the embodiments of the present invention.

Claims (10)

1. A method for preprocessing an image, comprising the steps of:
acquiring an image and performing graying processing on it;
performing color-space normalization on the grayed image;
performing feature extraction on the normalized image with the HOG algorithm to obtain image features of a first dimension;
and performing dimension-reduction processing on the image features of the first dimension with the LDA algorithm.
2. The method of claim 1, wherein performing feature extraction on the normalized image with the HOG algorithm to obtain the image features of the first dimension further comprises:
calculating a gradient for each pixel of the image;
dividing the image into a plurality of cells, and computing a histogram of gradients within each cell to obtain the features of each cell;
grouping a plurality of cells into a block, and concatenating the features of all the cells in the block to obtain the features of the block;
and concatenating the features of all blocks to obtain the final image features.
3. The method of claim 2, wherein calculating a gradient for each pixel of the image further comprises:
calculating the magnitude and direction of the gradient at each pixel.
4. The method of claim 2, wherein computing the histogram of gradients within each cell further comprises:
counting the numbers of the different gradients.
5. A system for preprocessing an image, comprising:
a graying module configured to acquire an image and perform graying processing on it;
a normalization module configured to perform color-space normalization on the grayed image;
an extraction module configured to perform feature extraction on the normalized image with the HOG algorithm and obtain image features of a first dimension;
and a dimension-reduction module configured to perform dimension-reduction processing on the image features of the first dimension with the LDA algorithm.
6. The system of claim 5, wherein the extraction module is further configured to:
calculate a gradient for each pixel of the image;
divide the image into a plurality of cells, and compute a histogram of gradients within each cell to obtain the features of each cell;
group a plurality of cells into a block, and concatenate the features of all the cells in the block to obtain the features of the block;
and concatenate the features of all blocks to obtain the final image features.
7. The system of claim 6, wherein the extraction module is further configured to:
calculate the magnitude and direction of the gradient at each pixel.
8. The system of claim 6, wherein the extraction module is further configured to:
count the numbers of the different gradients.
9. A computer device, comprising:
at least one processor; and
memory storing a computer program operable on the processor, characterized in that the processor executes the program to perform the steps of the method according to any of claims 1-4.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, is adapted to carry out the steps of the method according to any one of claims 1-4.
CN202010744567.5A (filed 2020-07-29, priority date 2020-07-29) Image preprocessing method, system, device and medium; status: Pending; published as CN111860529A (en)

Priority Applications (1)

CN202010744567.5A (filed 2020-07-29, priority date 2020-07-29): CN111860529A (en), Image preprocessing method, system, device and medium

Applications Claiming Priority (1)

CN202010744567.5A (filed 2020-07-29, priority date 2020-07-29): CN111860529A (en), Image preprocessing method, system, device and medium

Publications (1)

Publication Number: CN111860529A (en); Publication Date: 2020-10-30

Family

ID=72945963

Family Applications (1)

Application CN202010744567.5A (Pending): CN111860529A (en), Image preprocessing method, system, device and medium

Country Status (1)

Country Link
CN (1) CN111860529A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114897852A (en) * 2022-05-20 2022-08-12 华能宁南风力发电有限公司 Electric field metering and collecting method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650667A (en) * 2016-12-26 2017-05-10 北京交通大学 Pedestrian detection method and system based on support vector machine
CN108427966A (en) * 2018-03-12 2018-08-21 成都信息工程大学 A kind of magic magiscan and method based on PCA-LDA
CN109086687A (en) * 2018-07-13 2018-12-25 东北大学 The traffic sign recognition method of HOG-MBLBP fusion feature based on PCA dimensionality reduction
CN109190590A (en) * 2018-09-19 2019-01-11 深圳市美侨医疗科技有限公司 A kind of arena crystallization recognition methods, device, computer equipment and storage medium
CN109325507A (en) * 2018-10-11 2019-02-12 湖北工业大学 A kind of image classification algorithms and system of combination super-pixel significant characteristics and HOG feature
CN111274883A (en) * 2020-01-10 2020-06-12 杭州电子科技大学 Synthetic sketch face recognition method based on multi-scale HOG (histogram of oriented gradient) features and deep features



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20201030)