CN106503696A - An enhanced coding method for vision-mapping target values - Google Patents

An enhanced coding method for vision-mapping target values

Info

Publication number
CN106503696A
CN106503696A (application CN201611102813.7A)
Authority
CN
China
Prior art keywords
coding
target value
model
vector
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611102813.7A
Other languages
Chinese (zh)
Other versions
CN106503696B (en
Inventor
Pan Lili (潘力立)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu Electric Technology Shandong Scientific And Technological Achievement Transformation Co ltd
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201611102813.7A priority Critical patent/CN106503696B/en
Publication of CN106503696A publication Critical patent/CN106503696A/en
Application granted granted Critical
Publication of CN106503696B publication Critical patent/CN106503696B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/231 Hierarchical techniques, i.e. dividing or merging pattern sets so as to obtain a dendrogram
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/446 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering using Haar-like filters, e.g. using integral image techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178 Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention proposes an enhanced coding method for vision-mapping target values, belonging to the technical field of computer vision and relating to vision mapping techniques. Images are collected, features are extracted, and the corresponding target values are recorded; the target values are then converted to an enhanced code, every element of which is a 0/1 binary variable; next, the mapping between the original input-image features and the binary codes is established; all input images are then mapped to binary codes according to this mapping; finally, a random forest is used to establish the mapping between the binary codes and the target values. For a new test picture, image features are extracted, the learned model estimates its binary code, and the binary code is converted back to a target value. When the samples are sparse and unevenly distributed, the invention improves the recognition rate and the recognition accuracy.

Description

An enhanced coding method for vision-mapping target values
Technical field
The invention belongs to the technical field of computer vision and relates to vision mapping techniques. It is mainly applied to vision estimation problems such as pose estimation, gaze tracking, and age estimation.
Background technology
In computer vision, vision mapping refers to the process of learning the mapping function between input-image features and output variables, so that when a new image is input, its corresponding target output value can be estimated. Specifically, vision mapping includes human-body pose estimation, head-pose estimation, gaze estimation, object tracking, and so on. See: O. Williams, A. Blake, and R. Cipolla, "Sparse and Semi-Supervised Visual Mapping with the S3GP," in IEEE Conference on Computer Vision and Pattern Recognition, pp. 230-237, 2006.
As an important branch of computer vision, vision mapping replaces, in many settings, a person estimating the target output from image content: a computer predicts the output from the input image through a learned vision mapping function, so that a camera and a computer, rather than the human eye and brain, analyze the image and produce the estimate automatically. The technology has begun to be applied in many industries closely related to daily life: head-pose estimation in automotive safe driving, gaze estimation and body-pose estimation in intelligent human-machine interfaces and gaming, object tracking in intelligent transportation, and body-pose estimation in human-computer interaction. With the continuing improvement of computer processing power and the step-by-step solution of the key technical problems in vision mapping, its application prospects will become still broader.
Among models for vision-mapping problems, regression models of various kinds have proved to be the best suited. When a regression model is built, the input-image features usually need to be mapped to target values (for example: head pose, age, body pose, and gaze direction). In some problems the target range is fixed and evenly spaced, for example the age, the angles corresponding to gaze direction, and the angles corresponding to pose. For such target values, a mapping built directly from the original features to the target value suffers from a sparse and uneven distribution of target values. To solve this problem, and at the same time to improve the performance of the algorithm, we propose an enhanced coding method that encodes the target values.
Content of the invention
The invention provides an enhanced coding method for vision-mapping target values; after the target values are encoded, the mapping between them and the input is easier to establish. First, images are collected and features are extracted (raw gray levels, HOG, SIFT, Haar, etc.), and the corresponding target values are recorded (age, pose angle, gaze direction, etc.). The target values are then converted to an enhanced code, every element of which is a 0/1 binary variable. Next, the mapping between the original input-image features and the binary codes is established; all input images are then mapped to binary codes according to this mapping; finally, a random forest is used to establish the mapping between the binary codes and the target values. For a new test picture, image features are extracted, the learned model estimates its binary code, and the binary code is converted back to a target value. The invention solves the problem that existing vision-mapping methods estimate poorly when the samples are sparse and unevenly distributed.
To describe the invention conveniently, some terms are first defined.
Definition 1: Vision mapping. Mapping visual features to target values.
Definition 2: Input feature. In vision estimation problems, visual features usually need to be extracted from the original image, such as gradient orientation histogram features and local binary features.
Definition 3: Target value. In vision estimation problems, the corresponding output value usually needs to be estimated from the input features; for example, the age is estimated from a face image, or the head deflection angle is estimated from a head image. The age and the head deflection angle here are the target values.
Definition 4: Gradient orientation histogram (HOG). A visual feature extraction method that describes the appearance and shape of objects in an image by the distribution of intensity gradients or edge directions. The image is first divided into small connected regions called cells; the gradient-direction or edge-direction histogram of the pixels in each cell is then accumulated; finally, these histograms are concatenated to form the feature descriptor. To improve accuracy, the local histograms can be contrast-normalized over larger regions of the image (blocks): a density measure is first computed over the block, and each cell in the block is then normalized by this value. The normalization gives better robustness to illumination changes and shadows.
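The cell-and-block construction just described can be sketched briefly. The following is a simplified illustration in Python/NumPy, not the patent's implementation: the cell size and bin count are arbitrary choices, and block-level contrast normalization is reduced to a single global L2 normalization.

```python
import numpy as np

def hog_sketch(img, cell=8, bins=9):
    """Simplified gradient-orientation-histogram descriptor.

    img: 2-D grayscale array whose sides are multiples of `cell`.
    Returns the concatenated, L2-normalized per-cell histograms.
    """
    gy, gx = np.gradient(img.astype(float))               # intensity gradients
    mag = np.hypot(gx, gy)                                # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0          # unsigned orientation
    H, W = img.shape
    feats = []
    for i in range(0, H, cell):                           # one histogram per cell
        for j in range(0, W, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    f = np.concatenate(feats).astype(float)
    return f / (np.linalg.norm(f) + 1e-12)                # contrast normalization
```

A 16x16 image with the default settings yields 4 cells of 9 bins each, i.e. a 36-dimensional descriptor.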
Definition 5: Shallow regression model. The estimate is obtained directly from one layer of weighted combination of the input features.
Definition 6: Deep regression model. A weighted combination of the input features produces the next layer's hidden features; a weighted combination of those hidden features produces the following layer's hidden features; and so on, until the final target value is estimated.
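The distinction between the two model classes can be illustrated with a minimal forward pass (a NumPy sketch; the weights are random placeholders, purely illustrative, not learned parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                    # input feature vector

# Shallow model: one weighted combination straight to the estimate.
w = rng.normal(size=4)
y_shallow = w @ x

# Deep model: weighted combinations produce hidden features layer by layer,
# and the final layer produces the target estimate.
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=3)
h = np.tanh(W1 @ x)                       # hidden feature of the next layer
y_deep = W2 @ h                           # final estimate
```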
Definition 7: Random forest. In machine learning, a random forest is a classifier or regressor comprising multiple decision trees; its output class or value is determined by the mode or mean of the outputs of the individual trees.
The detailed technical scheme of the invention is an enhanced coding method for vision-mapping target values; the method includes:
Step 1: Collect N input images, and record the target value of each image as it is collected;
Step 2: Extract visual features from the images obtained in Step 1, and denote the visual feature vector of the n-th image by x_n;
Step 3: Arrange the feature vectors of all N images in order to obtain the input data matrix X, i.e., X = [x_1, x_2, ..., x_N];
Step 4: Arrange the target-value vectors y_n of the N images in order into the data matrix Y, i.e., Y = [y_1, y_2, ..., y_N];
Step 5: Apply enhanced coding to the output target-value vectors;
Each dimension y_nj of y_n is binary-coded as follows. Suppose y_nj takes values in [-M_1+1, M_2] (the range is set according to the actual problem). First shift y_nj into the range [1, M_1+M_2] and let M = M_1+M_2:

ŷ_nj = y_nj + M_1.

Then code according to the value of ŷ_nj. The code is a vector a_n of length Q = 2M + [(M+1)/2] + [(M+9)/10], where [·] denotes the rounding-down (floor) operator. The first M dimensions of a_n are coded as

a_nk = 1 for 1 ≤ k ≤ ŷ_nj, and a_nk = 0 for ŷ_nj+1 ≤ k ≤ M,

where k denotes the dimension index of the coding vector a_n.
Dimensions M+1 to 2M of a_n are coded as

a_nk = 1 for k = M + ŷ_nj, and a_nk = 0 for all other k with M+1 ≤ k ≤ 2M.

Dimensions 2M+1 to 2M+[(M+1)/2] of a_n are coded as

a_nk = 1 for k = 2M + [(ŷ_nj+1)/2], and a_nk = 0 for all other k with 2M+1 ≤ k ≤ 2M+[(M+1)/2].

Dimensions 2M+[(M+1)/2]+1 to Q of a_n are coded as

a_nk = 1 for k = 2M + [(M+1)/2] + [(ŷ_nj+9)/10], and a_nk = 0 for all other k with 2M+[(M+1)/2]+1 ≤ k ≤ Q.
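Concretely, the code concatenates a thermometer (unary) segment with three one-hot segments at full, half, and one-tenth resolution. A minimal sketch of these formulas (plain Python; the function name and the choices M_1 = 0, M_2 = 10 in the example are illustrative only):

```python
def enhanced_code(y, M1, M2):
    """Enhanced binary coding of one target dimension y in [-M1+1, M2]."""
    M = M1 + M2
    yh = y + M1                                                  # shift into [1, M]
    seg1 = [1 if k <= yh else 0 for k in range(1, M + 1)]        # thermometer code
    seg2 = [1 if k == yh else 0 for k in range(1, M + 1)]        # one-hot, full resolution
    seg3 = [1 if k == (yh + 1) // 2 else 0
            for k in range(1, (M + 1) // 2 + 1)]                 # one-hot, half resolution
    seg4 = [1 if k == (yh + 9) // 10 else 0
            for k in range(1, (M + 9) // 10 + 1)]                # one-hot, 1/10 resolution
    return seg1 + seg2 + seg3 + seg4                             # length Q
```

For y = 7 with M_1 = 0 and M_2 = 10, the code has length 10 + 10 + 5 + 1 = 26: the first seven bits are set, and the three one-hot segments fire at positions 7, 4, and 1 respectively.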
Step 6: Establish a regression model from the input features x_n to the codes a_n, solve the model, and obtain all model parameters;
Step 7: Using the model parameters obtained in Step 6, map the features x_n into the enhanced coding space, obtaining the estimated codes;
Step 8: To finally map the enhanced codes back to target values, establish the mapping between the codes a_n and the output target values y_n with a random forest model; the number of random trees and the feature dimensionality of each tree are chosen according to the length of the enhanced code and the number of training samples;
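The random forest of Step 8 can be sketched with scikit-learn's RandomForestRegressor, used here as an assumed off-the-shelf stand-in (the patent does not name a specific implementation, and the toy data below, scalar targets 1..10 with their codes, is purely illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def code(y, M=10):
    """Enhanced code of a scalar target y in [1, M] (M1 = 0 assumed)."""
    seg1 = [int(k <= y) for k in range(1, M + 1)]                 # thermometer
    seg2 = [int(k == y) for k in range(1, M + 1)]                 # one-hot
    seg3 = [int(k == (y + 1) // 2) for k in range(1, (M + 1) // 2 + 1)]
    seg4 = [int(k == (y + 9) // 10) for k in range(1, (M + 9) // 10 + 1)]
    return seg1 + seg2 + seg3 + seg4

ys = np.arange(1, 11)                      # toy target values
A = np.array([code(int(y)) for y in ys])   # their enhanced codes as inputs

# Forest from codes back to target values; tree count is a free parameter.
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(A, ys)
pred = rf.predict(np.array([code(4)]))     # decode the code of target 4
```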
Step 9: Given a sample to be estimated, first map its input features to an enhanced code with the model established in Step 6, then map the enhanced code to a target value with the random forest model of Step 8.
Further, the regression model in Step 6 is a shallow model or a deep model.
The present invention first collects images, performs feature extraction, and records the corresponding target values. The target values are then converted to an enhanced code, every element of which is a 0/1 binary variable. Next, the mapping between the original input-image features and the binary codes is established; all input images are then mapped to binary codes according to this mapping; finally, a random forest is used to establish the mapping between the binary codes and the target values. For a new test picture, image features are extracted, the learned model estimates its binary code, and the binary code is converted back to a target value. When the samples are sparse and unevenly distributed, the invention improves the recognition rate and the recognition accuracy.
Description of the drawings
Fig. 1 is a schematic diagram of vision mapping (head-pose estimation, body-pose estimation, and gaze estimation).
Fig. 2 is a schematic diagram of the coding.
Specific embodiment
Implementation languages: Matlab, C/C++
Hardware platform: Intel Core 2 E7400 + 4 GB DDR RAM
Software platform: Matlab 2012a, Visual Studio 2010
According to the method of the invention, the vision-mapping problem to be solved is first specified, and the relevant images (head images, body images, face images, etc.) are collected together with calibrated target values (head-pose angles, body-pose angles, and ages). Following this invention patent, the mapping model from images to enhanced codes and the random forest model from enhanced codes to target values are first learned with code written in Matlab or C; vision mapping is then performed on the input image to be estimated, producing the estimated target value. The method of the invention can be used for vision-mapping problems throughout computer vision and clearly improves on the performance of direct mapping (from input features to target values).
The technical scheme of the invention is described in further detail below with reference to the accompanying drawings: an enhanced coding method for vision-mapping target values; the method includes:
Step 1: Collect N input images (see Fig. 1), and record the target value of each image as it is collected. Taking head-pose estimation as an example, the N input images are N head images and the calibrated value is the head pose y_n, whose first dimension is the pitch angle, second dimension the tilt angle, and third dimension the rotation angle; the subscript n indicates the pose of the n-th image. In practice, for a body-pose estimation problem the input images are body images and the target values are the angles between body parts; for a gaze estimation problem the input images are eye images and the target value is the gaze direction (horizontal and vertical angles);
Step 2: Extract visual features from the images obtained in Step 1, and denote the visual feature vector of the n-th image by x_n. Again taking head pose as an example, the visual feature usually extracted is the gradient orientation histogram, so x_n is the gradient orientation histogram feature of the n-th image;
Step 3: Arrange the feature vectors of all N images in order to obtain the input data matrix X, i.e., X = [x_1, x_2, ..., x_N];
Step 4: Arrange the target-value vectors of the N images in order into the data matrix Y, i.e., Y = [y_1, y_2, ..., y_N];
Step 5: Apply enhanced coding to the output target-value vectors (see Fig. 2);
Each dimension y_nj of y_n is binary-coded as follows. Suppose y_nj takes values in [-M_1+1, M_2] (the range is set according to the actual problem). First shift y_nj into the range [1, M_1+M_2] and let M = M_1+M_2:

ŷ_nj = y_nj + M_1.

Then code according to the value of ŷ_nj. The code is a vector a_n of length Q = 2M + [(M+1)/2] + [(M+9)/10], where [·] denotes the rounding-down (floor) operator. The first M dimensions of a_n are coded as

a_nk = 1 for 1 ≤ k ≤ ŷ_nj, and a_nk = 0 for ŷ_nj+1 ≤ k ≤ M,

where k denotes the dimension index of the coding vector a_n.
Dimensions M+1 to 2M of a_n are coded as

a_nk = 1 for k = M + ŷ_nj, and a_nk = 0 for all other k with M+1 ≤ k ≤ 2M.

Dimensions 2M+1 to 2M+[(M+1)/2] of a_n are coded as

a_nk = 1 for k = 2M + [(ŷ_nj+1)/2], and a_nk = 0 for all other k with 2M+1 ≤ k ≤ 2M+[(M+1)/2].

Dimensions 2M+[(M+1)/2]+1 to Q of a_n are coded as

a_nk = 1 for k = 2M + [(M+1)/2] + [(ŷ_nj+9)/10], and a_nk = 0 for all other k with 2M+[(M+1)/2]+1 ≤ k ≤ Q.
Step 6: Establish a regression model from the input features x_n to the codes a_n, solve the model, and obtain all model parameters; the model is a shallow model or a deep model;
Step 7: Using the model parameters obtained in Step 6, map the features x_n into the enhanced coding space, obtaining the estimated codes;
Step 8: To finally map the enhanced codes back to target values, establish the mapping between the codes a_n and the output target values y_n with a random forest model; the number of random trees and the feature dimensionality of each tree are chosen according to the length of the enhanced code and the number of training samples;
Step 9: Given a sample to be estimated, first map its input features to an enhanced code with the model established in Step 6, then map the enhanced code to a target value with the random forest model of Step 8. Taking head-pose estimation as an example, the input feature is the gradient orientation histogram, which is mapped to the enhanced code, and the enhanced code is then mapped to the head pose.
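One plausible instantiation of the shallow model of Step 6 is an independent linear predictor per code bit, fitted by ridge-regularized least squares and thresholded at 0.5. This is a sketch under that assumption, not the patent's exact solver:

```python
import numpy as np

def fit_shallow(X, A, lam=1e-3):
    """Fit a linear map W from features X (N x d) to code bits A (N x Q).

    Ridge-regularized normal equations: W = (X^T X + lam*I)^{-1} X^T A.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ A)

def predict_bits(X, W):
    """Threshold the linear scores into 0/1 code bits."""
    return (X @ W > 0.5).astype(int)
```

On a tiny separable example (identity features), the fitted map recovers the training codes exactly after thresholding.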

Claims (2)

1. An enhanced coding method for vision-mapping target values; the method includes:
Step 1: Collect N input images, and record the target value of each image as it is collected;
Step 2: Extract visual features from the images obtained in Step 1, and denote the visual feature vector of the n-th image by x_n;
Step 3: Arrange the feature vectors of all N images in order to obtain the input data matrix X, i.e., X = [x_1, x_2, ..., x_N];
Step 4: Arrange the target-value vectors of the N images in order into the data matrix Y, i.e., Y = [y_1, y_2, ..., y_N];
Step 5: Apply enhanced coding to the output target-value vectors;
each dimension y_nj of y_n is binary-coded as follows: suppose y_nj takes values in [-M_1+1, M_2], the range being set according to the actual problem; first shift y_nj into the range [1, M_1+M_2] and let M = M_1+M_2:

ŷ_nj = y_nj + M_1;

then code according to the value of ŷ_nj; the code is a vector a_n of length Q = 2M + [(M+1)/2] + [(M+9)/10], where [·] denotes the rounding-down (floor) operator; the first M dimensions of a_n are coded as

a_nk = 1, 1 ≤ k ≤ ŷ_nj; a_nk = 0, ŷ_nj+1 ≤ k ≤ M,

where k denotes the dimension index of the coding vector a_n;
dimensions M+1 to 2M of a_n are coded as

a_nk = 1, k = M + ŷ_nj; a_nk = 0, M+1 ≤ k ≤ 2M, k ≠ M + ŷ_nj;

dimensions 2M+1 to 2M+[(M+1)/2] of a_n are coded as

a_nk = 1, k = 2M + [(ŷ_nj+1)/2]; a_nk = 0, 2M+1 ≤ k ≤ 2M+[(M+1)/2], k ≠ 2M + [(ŷ_nj+1)/2];

dimensions 2M+[(M+1)/2]+1 to Q of a_n are coded as

a_nk = 1, k = 2M + [(M+1)/2] + [(ŷ_nj+9)/10]; a_nk = 0, 2M+[(M+1)/2]+1 ≤ k ≤ Q, k ≠ 2M + [(M+1)/2] + [(ŷ_nj+9)/10];

Step 6: Establish a regression model from the input features x_n to the codes a_n, solve the model, and obtain all model parameters;
Step 7: Using the model parameters obtained in Step 6, map the features x_n into the enhanced coding space, obtaining the estimated codes;
Step 8: To finally map the enhanced codes back to target values, establish the mapping between the codes a_n and the output target values y_n with a random forest model; the number of random trees and the feature dimensionality of each tree are chosen according to the length of the enhanced code and the number of training samples;
Step 9: Given a sample to be estimated, first map its input features to an enhanced code with the model established in Step 6, then map the enhanced code to a target value with the random forest model of Step 8.
2. The enhanced coding method for vision-mapping target values of claim 1, characterised in that the regression model in Step 6 is a shallow model or a deep model.
CN201611102813.7A 2016-12-05 2016-12-05 An enhanced coding method for vision-mapping target values Expired - Fee Related CN106503696B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611102813.7A CN106503696B (en) 2016-12-05 2016-12-05 An enhanced coding method for vision-mapping target values

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611102813.7A CN106503696B (en) 2016-12-05 2016-12-05 An enhanced coding method for vision-mapping target values

Publications (2)

Publication Number Publication Date
CN106503696A true CN106503696A (en) 2017-03-15
CN106503696B CN106503696B (en) 2019-08-13

Family

ID=58330455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611102813.7A Expired - Fee Related CN106503696B (en) 2016-12-05 2016-12-05 An enhanced coding method for vision-mapping target values

Country Status (1)

Country Link
CN (1) CN106503696B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288673A (en) * 2019-05-08 2019-09-27 深圳大学 Instruction sequence is mapped to the method and system of image

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS575182A (en) * 1980-06-13 1982-01-11 Fujitsu Ltd Character recognition processing system
CN101621306A (en) * 2008-06-30 2010-01-06 中兴通讯股份有限公司 Mapping method and device for multiple-input multiple-output system precoding matrix
CN104036293A (en) * 2014-06-13 2014-09-10 武汉大学 Rapid binary encoding based high resolution remote sensing image scene classification method
CN105069400A (en) * 2015-07-16 2015-11-18 北京工业大学 Face image gender recognition system based on stack type sparse self-coding
CN105469096A (en) * 2015-11-18 2016-04-06 南京大学 Feature bag image retrieval method based on Hash binary code
CN105760898A (en) * 2016-03-22 2016-07-13 电子科技大学 Vision mapping method based on mixed group regression method
CN105930834A (en) * 2016-07-01 2016-09-07 北京邮电大学 Face identification method and apparatus based on spherical hashing binary coding

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS575182A (en) * 1980-06-13 1982-01-11 Fujitsu Ltd Character recognition processing system
CN101621306A (en) * 2008-06-30 2010-01-06 中兴通讯股份有限公司 Mapping method and device for multiple-input multiple-output system precoding matrix
CN104036293A (en) * 2014-06-13 2014-09-10 武汉大学 Rapid binary encoding based high resolution remote sensing image scene classification method
CN105069400A (en) * 2015-07-16 2015-11-18 北京工业大学 Face image gender recognition system based on stack type sparse self-coding
CN105469096A (en) * 2015-11-18 2016-04-06 南京大学 Feature bag image retrieval method based on Hash binary code
CN105760898A (en) * 2016-03-22 2016-07-13 电子科技大学 Vision mapping method based on mixed group regression method
CN105930834A (en) * 2016-07-01 2016-09-07 北京邮电大学 Face identification method and apparatus based on spherical hashing binary coding

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JAE-PIL HEO et al.: "Spherical Hashing: Binary Code Embedding with Hyperspheres", IEEE Transactions on Pattern Analysis and Machine Intelligence *
RYOZO KIYOHARA et al.: "Study on binary code synchronization in consumer devices", IEEE Transactions on Consumer Electronics *
FU Wei et al.: "Object Detection in Surveillance Video Based on Deep Learning", Signal and Information Processing *
NIE Xiushan et al.: "A Video Hashing Learning Method Based on Feature Fusion and Manhattan Quantization", Journal of Nanjing University (Natural Science) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288673A (en) * 2019-05-08 2019-09-27 深圳大学 Instruction sequence is mapped to the method and system of image

Also Published As

Publication number Publication date
CN106503696B (en) 2019-08-13

Similar Documents

Publication Publication Date Title
CN104834922B (en) Gesture identification method based on hybrid neural networks
CN104268593B (en) The face identification method of many rarefaction representations under a kind of Small Sample Size
CN105205449B (en) Sign Language Recognition Method based on deep learning
CN105373777B (en) A kind of method and device for recognition of face
CN105335732B (en) Based on piecemeal and differentiate that Non-negative Matrix Factorization blocks face identification method
CN105825183B (en) Facial expression recognizing method based on partial occlusion image
CN104680127A (en) Gesture identification method and gesture identification system
CN108182397B (en) Multi-pose multi-scale human face verification method
CN104392246B (en) It is a kind of based between class in class changes in faces dictionary single sample face recognition method
CN104463209A (en) Method for recognizing digital code on PCB based on BP neural network
CN105701495B (en) Image texture feature extraction method
CN104036293B (en) Rapid binary encoding based high resolution remote sensing image scene classification method
Bouchaffra et al. Structural hidden Markov models for biometrics: Fusion of face and fingerprint
CN101615245A (en) Expression recognition method based on AVR and enhancing LBP
CN106778474A (en) 3D human body recognition methods and equipment
CN106021330A (en) A three-dimensional model retrieval method used for mixed contour line views
CN105574475A (en) Common vector dictionary based sparse representation classification method
CN113989890A (en) Face expression recognition method based on multi-channel fusion and lightweight neural network
CN107944428A (en) A kind of indoor scene semanteme marking method based on super-pixel collection
CN112069891A (en) Deep fake face identification method based on illumination characteristics
CN110363099A (en) A kind of expression recognition method based on local parallel deep neural network
Kalaiselvi et al. Face recognition system under varying lighting conditions
CN109902692A (en) A kind of image classification method based on regional area depth characteristic coding
CN105404883B (en) A kind of heterogeneous three-dimensional face identification method
Karamizadeh et al. Race classification using gaussian-based weight K-nn algorithm for face recognition.

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210414

Address after: 250013 Room 102, West unit, building 13, 81 Qianfo Shandong Road, Lixia District, Jinan City, Shandong Province

Patentee after: Jinan Century advantage Information Technology Co.,Ltd.

Address before: 611731, No. 2006, West Avenue, Chengdu hi tech Zone (West District, Sichuan)

Patentee before: University of Electronic Science and Technology of China

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20230406

Address after: 701, Qilu Transportation Information Industrial Park, Building 2, United Fortune Plaza, 2177 Tianchen Road, Jinan Area, China (Shandong) Free Trade Pilot Zone, Jinan City, Shandong Province, 250000

Patentee after: Qilu Electric Technology (Shandong) scientific and technological achievement transformation Co.,Ltd.

Address before: 250013 Room 102, West unit, building 13, 81 Qianfo Shandong Road, Lixia District, Jinan City, Shandong Province

Patentee before: Jinan Century advantage Information Technology Co.,Ltd.

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190813

CF01 Termination of patent right due to non-payment of annual fee