CN106548180A - Method for obtaining a blur-invariant image feature descriptor - Google Patents
Method for obtaining a blur-invariant image feature descriptor
- Publication number
- CN106548180A CN106548180A CN201610922185.0A CN201610922185A CN106548180A CN 106548180 A CN106548180 A CN 106548180A CN 201610922185 A CN201610922185 A CN 201610922185A CN 106548180 A CN106548180 A CN 106548180A
- Authority
- CN
- China
- Prior art keywords
- image
- feature
- lpq
- feature descriptor
- obtains
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/469—Contour-based spatial representations, e.g. vector-coding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/469—Contour-based spatial representations, e.g. vector-coding
- G06V10/473—Contour-based spatial representations, e.g. vector-coding using gradient analysis
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method for obtaining a blur-invariant image feature descriptor. The image is first divided into multiple local blocks, and the sub-blocks of each local block are densely sampled to extract LPQ+. The several LPQ+ of each local block are merged into that block's feature descriptor; the descriptors of all local blocks are then encoded with the FV (Fisher Vector) and regularized accordingly to obtain a higher-dimensional feature descriptor of the blurred image. The method for obtaining a blur-invariant image feature descriptor provided by the invention proposes LPQ+, a more efficient and more accurate blurred-image-recognition feature descriptor built on LPQ, with the advantages of high discriminability and a comparatively low feature dimension; FV encoding improves the overall performance of the traditional descriptor; and the combination of FV encoding with LPQ+ achieves better recognition accuracy and recognition efficiency than the combination of FV encoding with traditional descriptors.
Description
Technical field
The invention belongs to the technical field of digital image understanding, and more particularly relates to a method for obtaining a blur-invariant image feature descriptor.
Background technology
During image acquisition, environmental interference, camera shake, or defocus easily causes image blur, which poses a practical challenge to image recognition tasks such as face recognition, texture classification, and object detection.
For the problem of blurred-image recognition, the prior art offers two kinds of solutions. One is to first deblur the image and then perform object recognition with a traditional descriptor (such as LBP, SIFT, or HOG); this approach suits the case where the blur function is known, so that the image can be deblurred by non-blind deconvolution. The other is to directly extract features with blur invariance from the image for recognition; LPQ (Local Phase Quantization) is the representative of this class: it uses the phase information of the image, decorrelates and quantizes the phase, projects it into an eight-dimensional space, and builds a histogram over the whole image, yielding a feature descriptor that is invariant to centrally symmetric blur.
The above methods have the following defects. For the first method, the blur function is usually unknown, so deblurring carries a large time overhead, and the difficulty of the corresponding PSF (Point Spread Function) estimation problem lies in selecting the type of blur. For the second method, LPQ cannot balance efficiency and accuracy well when faced with complex visual recognition tasks. The prior art also combines LPQ with FV (Fisher Vector) encoding to form a higher-dimensional descriptor that characterizes the image and improves the overall performance of LPQ, but the increase in dimensionality not only raises the experimental cost but also increases the number of training samples needed to prevent the model from overfitting.
The content of the invention
In view of the above defects or improvement needs of the prior art, the invention provides a method for obtaining a blur-invariant image feature descriptor, whose purpose is to densely sample the sub-blocks of local image blocks to extract LPQ+ and then FV-encode the result into a higher-dimensional blurred-image feature descriptor, thereby improving both the accuracy and the efficiency of blurred-image recognition.
To achieve the above object, according to one aspect of the present invention, a method for obtaining a blur-invariant image feature descriptor is provided, comprising the following steps:
(1) blurring the samples in the source image dataset and converting the blurred images to grayscale;
(2) densely sampling the grayscale image with a sliding window to obtain multiple local image blocks, and dividing each local image block into multiple unit cells;
(3) extracting LPQ+ from each unit cell, then merging the groups of extracted LPQ+ into the feature descriptor of the local block; here LPQ+ is a feature descriptor based on LPQ, obtained by quantizing the phase itself rather than the real and imaginary parts (essentially the STFT coefficients) that LPQ quantizes;
(4) applying Fisher Vector encoding, power normalization, and ℓ2 normalization to the feature descriptors of all local blocks to obtain the feature descriptor of the blurred image.
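The sketch below ties steps (2) to (4) together as a hypothetical outline; dense_patches, lpq_plus_block, and fisher_vector are placeholder names illustrated in the embodiment section further below.

```python
import numpy as np

def describe_blurred_image(gray, gmm):
    """Hypothetical outline of steps (2)-(4) for one grayscale blurred image."""
    blocks = dense_patches(gray, patch=16, stride=8)            # step (2): local blocks, 4 cells each
    local_descs = [lpq_plus_block(cells) for cells in blocks]   # step (3): LPQ+ per local block
    return fisher_vector(np.vstack(local_descs), gmm)           # step (4): FV encoding + normalization
```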
Preferably, the above method for obtaining a blur-invariant image feature descriptor further includes a recognition step after step (4):
(5) training a linear SVM classifier on the training dataset and its corresponding feature descriptors with a one-versus-one multi-class classification strategy; the output of the classifier is the recognition result.
Preferably, in the above method for obtaining a blur-invariant image feature descriptor, step (2) includes the following sub-steps:
(2-1) densely sampling the grayscale image with a sliding window of fixed size and stride to obtain K local image blocks;
(2-2) dividing each local image block into multiple unit cells, so as to preserve the spatial structure information of the local block.
Preferably, in the above method for obtaining a blur-invariant image feature descriptor, step (3) includes the following sub-steps:
(3-1) using the STFT (Short-Term Fourier Transform), with the M × M neighborhood around the center pixel as the computation range, extracting the phase information of every pixel in the unit cell, where (x, y) and (u, v) denote the coordinates in the blurred image and in the corresponding Fourier transform, respectively; G(x, y) denotes the blurred image and G(u, v) its corresponding Fourier transform; Nx and Ny denote the neighborhood range of pixel (x, y); four low-frequency points u1 = (a, 0), u2 = (0, a), u3 = (a, a), u4 = (a, -a) are taken, where a is a constant, and applying the STFT to them yields four phase values ∠G(u1), ∠G(u2), ∠G(u3), ∠G(u4);
(3-2) dividing the range of phase values into I different angular regions, assigning the above four phase values to their corresponding angular regions, and quantizing them according to the following voting values, where α is an adjustable parameter and the voting value refers to the value obtained by ∠G(uj) in the corresponding angular region i, with j = 1, 2, 3, 4;
(3-3) representing the distribution of the quantized values of the four phase values as four histograms and merging the histograms to obtain LPQ+;
(3-4) merging the LPQ+ of the different unit cells of the same block to obtain the feature descriptor of the local block.
Preferably, in the above method for obtaining a blur-invariant image feature descriptor, step (4) includes the following sub-steps:
(4-1) denoting the K extracted local-block feature descriptors as X = {xk, k = 1, 2, ..., K};
(4-2) modeling the generation of X with an N-component Gaussian mixture model uλ(x), whose parameters are λ = {ωl, μl, σl, l = 1, ..., N}; ωl, μl and σl denote the mixture weight, mean vector, and standard deviation of the Gaussian component ul, respectively; λ is estimated with the EM (expectation-maximization) criterion on the local feature descriptors generated from a large training set;
(4-3) taking the partial derivatives of the Gaussian mixture model uλ(x) with respect to μl and σl to obtain two gradient vectors, where γk(l) denotes the probability that feature xk is generated by the l-th Gaussian component;
(4-4) merging the gradient vectors of all N Gaussian components to obtain the Fisher Vector encoding;
(4-5) applying the following power normalization to each dimension m of the encoding;
(4-6) applying ℓ2 normalization to the power-normalized encoding to obtain the feature descriptor of the blurred image.
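For reference, the standard (improved) Fisher Vector formulation consistent with sub-steps (4-2) to (4-6) is reproduced below; the patent's own formula images are not shown on this page, so the exact normalization constants are assumptions based on the usual formulation rather than the patent text.

```latex
\gamma_k(l) = \frac{\omega_l\, u_l(x_k)}{\sum_{n=1}^{N} \omega_n\, u_n(x_k)}, \qquad
\mathcal{G}_{\mu,l}^{X} = \frac{1}{K\sqrt{\omega_l}} \sum_{k=1}^{K} \gamma_k(l)\, \frac{x_k - \mu_l}{\sigma_l}, \qquad
\mathcal{G}_{\sigma,l}^{X} = \frac{1}{K\sqrt{2\,\omega_l}} \sum_{k=1}^{K} \gamma_k(l) \left[ \frac{(x_k - \mu_l)^2}{\sigma_l^2} - 1 \right]
```

The N pairs of gradient vectors are concatenated into the encoding, each dimension m is power-normalized as f(z) = sign(z)|z|^0.5, and the result is ℓ2-normalized to give the final blurred-image descriptor.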
In general, compared with the prior art, the technical solution conceived above by the present invention can achieve the following beneficial effects:
(1) The method for obtaining a blur-invariant image feature descriptor provided by the invention is based on LPQ but quantizes the phase directly, rather than the real and imaginary parts (essentially the STFT coefficients) that LPQ quantizes, to obtain LPQ+. Since LPQ essentially uses only the sign information of the STFT coefficients and ignores their concrete values, whereas LPQ+ compensates by quantizing the specific phase values, LPQ+ can better exploit the discriminability of LPQ; at the same time, because the feature dimension of LPQ+ is lower than that of LPQ, processing efficiency is improved.
(2) The method provided by the invention applies Fisher Vector encoding to the local feature descriptors, projecting the low-dimensional descriptors into a high-dimensional feature space; combined with a linear SVM classifier, this greatly improves the descriptive power of the descriptor and the recognition accuracy on blurred images.
Thus, the invention provides a method for obtaining a blur-invariant image feature descriptor with higher recognition accuracy and speed.
Description of the drawings
Fig. 1 is a flow diagram of the method for obtaining a blur-invariant image feature descriptor provided by an embodiment of the present invention;
Fig. 2 is a flow diagram of extracting the feature descriptor LPQ+ in an embodiment of the present invention;
Fig. 3 is a flow diagram of densely extracting LPQ+ to obtain the local feature descriptors in an embodiment of the present invention;
Fig. 4 is a flow diagram of encoding the local-block descriptors with the Fisher Vector to form the image feature descriptor in an embodiment of the present invention.
Specific embodiment
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described herein only serve to explain the present invention and are not intended to limit it. In addition, the technical features involved in the embodiments of the invention described below can be combined with each other as long as they do not conflict.
The invention provides a blur-invariant image feature description method whose flow, as shown in Fig. 1, includes obtaining blurred images, densely sampling the images, extracting LPQ+, encoding with the Fisher Vector, training a linear SVM classifier, and obtaining the recognition result. The method for obtaining a blur-invariant image feature descriptor provided by the invention is described in detail below with reference to an embodiment; the specific steps are as follows:
Step 1: obtain blurred image samples, including the following sub-steps:
(1-1) obtaining source image datasets; five datasets of different types are used in this embodiment: the Yale face dataset with 15 classes, the KTH texture dataset with 10 classes, the land-use scene dataset with 21 classes, the HUST cloud dataset with 6 classes, and the Oxford Flower dataset with 17 classes;
(1-2) obtaining blurred images; three blurring modes are used in the embodiment: Gaussian blur, motion blur, and circular (out-of-focus) blur; specifically, for Gaussian blur the neighborhood window size is set to 3 × 3 and the standard deviation is set to 1, 2, and 3, respectively; for motion blur the linear motion parameter of the camera is set to 8 and 9, respectively; for circular blur the radius is set to 2 and 3, respectively;
(1-3) converting all color images to grayscale according to the following formula:
K = 0.2989 × R + 0.5870 × G + 0.1140 × B;
where K is the grayscale image and R, G, B are the three channels of the color image.
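A minimal sketch of sub-steps (1-2) and (1-3) under the embodiment's parameters follows; the horizontal orientation of the motion kernel and the reflect boundary handling are assumptions not stated in the text.

```python
import numpy as np
from scipy.ndimage import convolve

def to_gray(rgb):
    """Luminance conversion of sub-step (1-3)."""
    return 0.2989 * rgb[..., 0] + 0.5870 * rgb[..., 1] + 0.1140 * rgb[..., 2]

def gaussian_kernel(size=3, sigma=1.0):
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def motion_kernel(length=8):
    """Horizontal linear-motion kernel of the given length (orientation assumed)."""
    k = np.zeros((length, length))
    k[length // 2, :] = 1.0
    return k / k.sum()

def disk_kernel(radius=2):
    """Circular (out-of-focus) averaging kernel."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    mask = (xx**2 + yy**2 <= radius**2).astype(np.float64)
    return mask / mask.sum()

def blur(gray, kernel):
    return convolve(gray, kernel, mode='reflect')

# example: blurred = blur(to_gray(rgb_image), gaussian_kernel(3, sigma=2))
```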
Step 2: densely sample the image, including the following sub-steps:
(2-1) densely sampling the grayscale image with a 16 × 16 sliding window whose horizontal and vertical strides are both 8, obtaining local image blocks;
(2-2) dividing each obtained local image block into 4 unit cells, in order to preserve the spatial structure information of the local block.
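A sketch of step 2 with the embodiment's 16 × 16 window and stride 8; border pixels that do not fill a complete window are simply dropped, which is an assumption.

```python
import numpy as np

def dense_patches(gray, patch=16, stride=8):
    """Slide a patch x patch window with the given stride (sub-step 2-1) and
    split each local block into four unit cells (sub-step 2-2)."""
    H, W = gray.shape
    half = patch // 2
    blocks = []
    for r in range(0, H - patch + 1, stride):
        for c in range(0, W - patch + 1, stride):
            b = gray[r:r + patch, c:c + patch]
            cells = [b[:half, :half], b[:half, half:], b[half:, :half], b[half:, half:]]
            blocks.append(cells)
    return blocks          # K local blocks, each a list of 4 unit cells
```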
Step 3: extract LPQ+ from the different unit cells, then merge them into the feature descriptor of the local block; the flow charts of local-block feature descriptor extraction and dense sampling are shown in Fig. 2 and Fig. 3, respectively; the specific steps are as follows:
(3-1) extracting the phase information of every pixel in the unit cell with the STFT (Short-Term Fourier Transform), whose computation range is the 13 × 13 neighborhood around the center pixel, where (x, y) and (u, v) denote the coordinates in the blurred image and in the corresponding Fourier transform, respectively; G(x, y) denotes the blurred image and G(u, v) its corresponding Fourier transform; Nx and Ny denote the neighborhood range of pixel (x, y); four low-frequency points u1 = (a, 0), u2 = (0, a), u3 = (a, a), u4 = (a, -a) are taken, where a is a constant, and applying the STFT to them yields four phase values ∠G(u1), ∠G(u2), ∠G(u3), ∠G(u4);
(3-2) dividing the phase values into 8 different angular regions, assigning the above four phase values to their corresponding angular regions, and quantizing them according to the voting values, where the voting value refers to the value obtained by ∠G(uj) in the corresponding angular region i, with j = 1, 2, 3, 4;
(3-3) representing the distribution of the quantized values of the four phases as four histograms and merging the histograms to obtain LPQ+.
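A sketch of LPQ+ for one unit cell under the embodiment's 13 × 13 neighborhood and 8 angular regions. The patent's soft voting values (with adjustable parameter α) are not reproduced on this page, so the sketch substitutes a hard assignment of each phase to its angular region, and the low-frequency constant a = 1/13 is an assumption.

```python
import numpy as np
from scipy.signal import convolve2d

def lpq_plus_cell(cell, win=13, bins=8):
    """Per-pixel phases at the four low-frequency points, histogrammed over
    `bins` angular regions and concatenated (sub-steps 3-1 to 3-3)."""
    cell = cell.astype(np.float64)
    a = 1.0 / win                                        # assumed value of the constant a
    x = np.arange(win) - (win - 1) / 2.0
    w0 = np.ones_like(x)
    w1 = np.exp(-2j * np.pi * a * x)

    def stft2d(kr, kc):
        return convolve2d(convolve2d(cell, kr[:, None], mode='same'),
                          kc[None, :], mode='same')

    # STFT coefficients at u1=(a,0), u2=(0,a), u3=(a,a), u4=(a,-a)
    F = [stft2d(w0, w1), stft2d(w1, w0), stft2d(w1, w1), stft2d(w1, np.conj(w1))]
    hists = []
    for f in F:
        phase = np.mod(np.angle(f), 2 * np.pi)           # phases in [0, 2*pi)
        idx = np.minimum((phase / (2 * np.pi / bins)).astype(int), bins - 1)
        h = np.bincount(idx.ravel(), minlength=bins).astype(np.float64)
        hists.append(h / h.sum())
    return np.concatenate(hists)                         # 4 x bins = 32 values

def lpq_plus_block(cells, **kw):
    """Sub-step (3-4): concatenate the LPQ+ of the four unit cells."""
    return np.concatenate([lpq_plus_cell(c, **kw) for c in cells])
```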
Step 4: apply Fisher Vector encoding, power normalization, and ℓ2 normalization to the feature descriptors of the local blocks to obtain the feature descriptor of the blurred image; in the embodiment, the Fisher Vector is implemented with VLFeat, and its flow, shown in Fig. 4, includes the following sub-steps:
(4-1) denoting the K extracted local-block feature descriptors as X = {xk, k = 1, 2, ..., K};
(4-2) modeling the generation of X with a 50-component Gaussian mixture model uλ(x), whose parameters are λ = {ωl, μl, σl, l = 1, ..., 50}; ωl, μl and σl denote the mixture weight, mean vector, and standard deviation of the Gaussian component ul, respectively; λ is estimated with the EM criterion on the local feature descriptors generated from a large training set;
(4-3) taking the partial derivatives of the Gaussian mixture model uλ(x) with respect to μl and σl to obtain two gradient vectors, where γk(l) denotes the probability that feature xk is generated by the l-th Gaussian component;
(4-4) merging the gradient vectors of all 50 Gaussian components to obtain the Fisher Vector encoding;
(4-5) applying the following power normalization, with exponent 0.5, to each dimension m of the encoding;
(4-6) applying ℓ2 normalization to the power-normalized encoding to obtain the feature descriptor of the blurred image.
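The embodiment relies on VLFeat; the sketch below instead uses scikit-learn's GaussianMixture and the standard Fisher Vector gradients with respect to the GMM means and standard deviations, followed by power normalization (exponent 0.5) and ℓ2 normalization. Both the library substitution and the normalization constants are assumptions based on the usual formulation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(train_local_descriptors, n_components=50, seed=0):
    """Sub-step (4-2): fit the 50-component diagonal GMM with EM on local
    descriptors collected from the training set."""
    gmm = GaussianMixture(n_components=n_components, covariance_type='diag',
                          random_state=seed)
    gmm.fit(train_local_descriptors)
    return gmm

def fisher_vector(X, gmm):
    """Sub-steps (4-3) to (4-6): FV gradients, power and l2 normalization."""
    X = np.atleast_2d(X)                       # K local descriptors, shape (K, D)
    K = X.shape[0]
    gamma = gmm.predict_proba(X)               # posteriors gamma_k(l), shape (K, N)
    w, mu = gmm.weights_, gmm.means_
    sigma = np.sqrt(gmm.covariances_)          # per-dimension standard deviations
    G_mu, G_sigma = [], []
    for l in range(gmm.n_components):
        g = gamma[:, l][:, None]
        d = (X - mu[l]) / sigma[l]
        G_mu.append((g * d).sum(axis=0) / (K * np.sqrt(w[l])))
        G_sigma.append((g * (d ** 2 - 1)).sum(axis=0) / (K * np.sqrt(2 * w[l])))
    fv = np.concatenate(G_mu + G_sigma)        # length 2 * N * D
    fv = np.sign(fv) * np.sqrt(np.abs(fv))     # power normalization, exponent 0.5
    return fv / (np.linalg.norm(fv) + 1e-12)   # l2 normalization
```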
Step 5: train a linear SVM classifier on the training dataset and its corresponding feature descriptors with a one-versus-one multi-class classification strategy; in this embodiment, the open-source LIBSVM is used to train and build the SVM classifier; the output of the classifier is the final recognition result.
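A sketch of step 5, assuming the Fisher Vectors and labels of the training images are already computed; scikit-learn's SVC with a linear kernel is used here as a stand-in for LIBSVM (which it wraps) and trains one-versus-one multi-class classifiers by default.

```python
import numpy as np
from sklearn.svm import SVC

def train_classifier(train_fvs, train_labels, C=1.0):
    """Step 5: linear SVM with a one-versus-one multi-class strategy."""
    clf = SVC(kernel='linear', C=C, decision_function_shape='ovo')
    clf.fit(np.asarray(train_fvs), np.asarray(train_labels))
    return clf

# recognition result for test images:
# predictions = train_classifier(train_fvs, train_labels).predict(test_fvs)
```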
It will be readily understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention and is not intended to limit it; any modification, equivalent replacement, and improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (5)
1. A method for obtaining a blur-invariant image feature descriptor, characterized by comprising the following steps:
(1) blurring the samples in the source images and converting the blurred images to grayscale;
(2) densely sampling the grayscale image with a sliding window to obtain multiple local image blocks, and dividing each local image block into multiple unit cells;
(3) extracting LPQ+ from each unit cell, then merging the groups of extracted LPQ+ into the feature descriptor of the local block;
(4) applying Fisher Vector encoding, power normalization, and ℓ2 normalization to the feature descriptors of all local blocks to obtain the feature descriptor of the blurred image.
2. The method for obtaining a blur-invariant image feature descriptor according to claim 1, characterized in that an image recognition step follows step (4):
(5) training a linear support vector machine classifier on the training dataset and its corresponding feature descriptors with a one-versus-one multi-class classification strategy; the output of the classifier is the image recognition result.
3. The method for obtaining a blur-invariant image feature descriptor according to claim 1 or 2, characterized in that step (2) includes the following sub-steps:
(2-1) densely sampling the grayscale image with a sliding window of fixed size and stride to obtain multiple local image blocks;
(2-2) dividing each local image block into multiple unit cells, so as to preserve the spatial structure information of the local block.
4. The method for obtaining a blur-invariant image feature descriptor according to claim 1 or 2, characterized in that step (3) includes the following sub-steps:
(3-1) extracting the phase information of every pixel in the unit cell with the STFT, using the M × M neighborhood around the center pixel as the computation range, where (x, y) and (u, v) denote the coordinates in the blurred image and in the corresponding Fourier transform, respectively; G(x, y) denotes the blurred image; Nx and Ny denote the neighborhood range of pixel (x, y); four low-frequency points u1 = (a, 0), u2 = (0, a), u3 = (a, a), u4 = (a, -a) yield four phase values ∠G(u1), ∠G(u2), ∠G(u3), ∠G(u4), where a is a constant;
(3-2) dividing the phase values into I different angular regions, i = 1, 2, ..., I, assigning the above four phase values to their corresponding angular regions, and quantizing them according to the following voting values, where α is an adjustable parameter and the voting value refers to the value obtained by ∠G(uj) in the corresponding angular region i, with j = 1, 2, 3, 4;
(3-3) obtaining four histograms from the quantized values of the four phase values and merging the histograms to obtain LPQ+;
(3-4) merging the LPQ+ of the unit cells of the same local image block to obtain the feature descriptor of the local block.
5. The method for obtaining a blur-invariant image feature descriptor according to claim 1 or 2, characterized in that step (4) includes the following sub-steps:
(4-1) denoting the K extracted local-block feature descriptors as X = {xk, k = 1, 2, ..., K};
(4-2) modeling the generation of X with an N-component Gaussian mixture model uλ(x), whose parameters are λ = {ωl, μl, σl, l = 1, ..., N}; ωl, μl and σl denote the mixture weight, mean vector, and standard deviation of the Gaussian component ul, respectively; λ is estimated with the EM criterion on the local feature descriptors generated from a large training set;
(4-3) taking the partial derivatives of the Gaussian mixture model uλ(x) with respect to μl and σl to obtain two gradient vectors, where γk(l) denotes the probability that feature xk is generated by the l-th Gaussian component;
(4-4) merging the gradient vectors of all N Gaussian components to obtain the Fisher Vector encoding;
(4-5) applying the following power normalization to each dimension m of the encoding;
(4-6) applying ℓ2 normalization to the power-normalized encoding to obtain the feature descriptor of the blurred image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610922185.0A CN106548180B (en) | 2016-10-21 | 2016-10-21 | Method for obtaining a blur-invariant image feature descriptor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610922185.0A CN106548180B (en) | 2016-10-21 | 2016-10-21 | Method for obtaining a blur-invariant image feature descriptor |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106548180A true CN106548180A (en) | 2017-03-29 |
CN106548180B CN106548180B (en) | 2019-04-12 |
Family
ID=58392232
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610922185.0A Active CN106548180B (en) | 2016-10-21 | 2016-10-21 | Method for obtaining a blur-invariant image feature descriptor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106548180B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110807464A (en) * | 2019-10-21 | 2020-02-18 | 华中科技大学 | Method and system for obtaining image fuzzy invariant texture feature descriptor |
CN111553893A (en) * | 2020-04-24 | 2020-08-18 | 成都飞机工业(集团)有限责任公司 | Method for identifying automatic wiring and cutting identifier of airplane wire harness |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104123560A (en) * | 2014-07-03 | 2014-10-29 | 中山大学 | Phase encoding characteristic and multi-metric learning based vague facial image verification method |
CN104517104A (en) * | 2015-01-09 | 2015-04-15 | 苏州科达科技股份有限公司 | Face recognition method and face recognition system based on monitoring scene |
CN104537381A (en) * | 2014-12-30 | 2015-04-22 | 华中科技大学 | Blurred image identification method based on blurred invariant feature |
CN105893916A (en) * | 2014-12-11 | 2016-08-24 | 深圳市阿图姆科技有限公司 | New method for detection of face pretreatment, feature extraction and dimensionality reduction description |
-
2016
- 2016-10-21 CN CN201610922185.0A patent/CN106548180B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104123560A (en) * | 2014-07-03 | 2014-10-29 | 中山大学 | Phase encoding characteristic and multi-metric learning based vague facial image verification method |
CN105893916A (en) * | 2014-12-11 | 2016-08-24 | 深圳市阿图姆科技有限公司 | New method for detection of face pretreatment, feature extraction and dimensionality reduction description |
CN104537381A (en) * | 2014-12-30 | 2015-04-22 | 华中科技大学 | Blurred image identification method based on blurred invariant feature |
CN104517104A (en) * | 2015-01-09 | 2015-04-15 | 苏州科达科技股份有限公司 | Face recognition method and face recognition system based on monitoring scene |
Non-Patent Citations (3)
Title |
---|
MENGYU ZHU ET AL: "《BEYOND LOCAL PHASE QUANTIZATION: MID-LEVEL BLURRED IMAGE REPRESENTATION USING FISHER VECTOR》", 《IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING》 * |
CHU JIULIANG ET AL: "Blurred Face Recognition Based on LPQ and Fisherfaces", Journal of Henan Polytechnic University (Natural Science Edition) *
ZHU MENGYU: "Research on Blur-Invariant Image Feature Extraction and Recognition", Wanfang Data *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110807464A (en) * | 2019-10-21 | 2020-02-18 | 华中科技大学 | Method and system for obtaining image fuzzy invariant texture feature descriptor |
CN110807464B (en) * | 2019-10-21 | 2022-09-20 | 华中科技大学 | Method and system for obtaining image fuzzy invariant texture feature descriptor |
CN111553893A (en) * | 2020-04-24 | 2020-08-18 | 成都飞机工业(集团)有限责任公司 | Method for identifying automatic wiring and cutting identifier of airplane wire harness |
Also Published As
Publication number | Publication date |
---|---|
CN106548180B (en) | 2019-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104966085B (en) | A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features | |
CN105678231A (en) | Pedestrian image detection method based on sparse coding and neural network | |
CN104835175B (en) | Object detection method in a nuclear environment based on a visual attention mechanism | |
CN103020992B (en) | A kind of video image conspicuousness detection method based on motion color-associations | |
CN106780582B (en) | The image significance detection method merged based on textural characteristics and color characteristic | |
CN106296695A (en) | Adaptive threshold natural target image based on significance segmentation extraction algorithm | |
CN107944428A (en) | A kind of indoor scene semanteme marking method based on super-pixel collection | |
CN108629783A (en) | Image partition method, system and medium based on the search of characteristics of image density peaks | |
CN108021869A (en) | A kind of convolutional neural networks tracking of combination gaussian kernel function | |
CN107862680B (en) | Target tracking optimization method based on correlation filter | |
CN102147867A (en) | Method for identifying traditional Chinese painting images and calligraphy images based on subject | |
CN116824485A (en) | Deep learning-based small target detection method for camouflage personnel in open scene | |
CN105426924A (en) | Scene classification method based on middle level features of images | |
CN107392211B (en) | Salient target detection method based on visual sparse cognition | |
Alsanad et al. | Real-time fuel truck detection algorithm based on deep convolutional neural network | |
Yanagisawa et al. | Face detection for comic images with deformable part model | |
CN104050674B (en) | Salient region detection method and device | |
CN112668662B (en) | Outdoor mountain forest environment target detection method based on improved YOLOv3 network | |
CN106548180A (en) | Method for obtaining a blur-invariant image feature descriptor | |
CN110490210B (en) | Color texture classification method based on t sampling difference between compact channels | |
CN106778504A (en) | A kind of pedestrian detection method | |
CN104050486B (en) | Polarimetric SAR image classification method based on maps and Wishart distance | |
Niu et al. | Real-time recognition and location of indoor objects | |
CN114266713A (en) | NonshadowGAN-based unmanned aerial vehicle railway fastener image shadow removing method and system | |
Qiao et al. | Lung nodule classification using curvelet transform, LDA algorithm and BAT-SVM algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |