CN106548130A - Video image extraction and recognition method and system - Google Patents

Video image extraction and recognition method and system

Info

Publication number
CN106548130A
CN106548130A (application CN201610892089.6A)
Authority
CN
China
Prior art keywords
video
image
video image
identification
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610892089.6A
Other languages
Chinese (zh)
Inventor
袁真
李首峰
陈放
王亚博
孟欣欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guozhengtong Polytron Technologies Inc
Original Assignee
Guozhengtong Polytron Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guozhengtong Polytron Technologies Inc filed Critical Guozhengtong Polytron Technologies Inc
Priority to CN201610892089.6A priority Critical patent/CN106548130A/en
Publication of CN106548130A publication Critical patent/CN106548130A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention proposes a video image extraction and recognition method and system, comprising: a database for storing a video image training set; a training set processing module for obtaining the video image training set from the database, vectorizing the images, obtaining a group of orthogonal eigenvectors from the overall complex scatter matrix by the method of singular value decomposition, and selecting the 100 largest eigenvalues and their corresponding eigenvectors; a video image background removal module for removing irrelevant background from the video image by an image-background defogging method, so that the processed image retains only the video image that needs recognition, which is recorded as the first image to be recognized and sent to the video image detection module; a video image detection module for processing the first image to be recognized received from the video image background removal module, determining the transform-space projection feature values of the first image to be recognized, and further extracting its features; and a video recognition module for performing video recognition with a Euclidean-distance nearest-neighbor classifier and outputting the recognition result.

Description

Video image extraction and recognition method and system
Technical field
The present invention relates to the field of video, and in particular to a video image extraction and recognition method and system using computer graphics/image processing and pattern recognition technology.
Background technology
Video recognition is an identification technology based on video image feature information. In recent years it has found application in several fields; for example, video recognition can be applied to access control systems, attendance systems, smartphones, and so on.
Video recognition technology mainly involves two steps: extracting a feature vector from the video image to be recognized, and comparing that feature vector with the feature vectors of images in a database to obtain a recognition result. The first step directly affects the accuracy of the recognition result. Many video recognition algorithms exist in the prior art, but none of them can be guaranteed to suit all samples, which limits the accuracy of video recognition.
Local Binary Patterns (LBP), proposed by Ojala, measure the relative pixel values in a local image neighborhood and extract texture information, and are robust to illumination variation. LBP is easy to compute, resistant to lighting interference, and highly discriminative, and has been widely used for face recognition under illumination variation. However, under drastic illumination change LBP cannot represent the magnitude of the change, so its reliability drops sharply. On this basis, Tan et al. proposed the Local Ternary Pattern (LTP).
The LTP operator improves on the LBP operator by using three-valued encoding to increase the classification capacity of the whole feature space. In a 3 × 3 window, each neighborhood pixel is compared with the center pixel gc using a user-defined threshold t: the pixel difference is quantized to 0 within the interval [−t, +t], encoded as +1 when the difference exceeds the interval, and encoded as −1 when it falls below the interval. This produces an 8-digit signed ternary number in the neighborhood; each position is then given a different weight and the weighted values are summed to obtain the Local Ternary Pattern (LTP) feature value of the window, which describes the texture of the region.
Through research on and improvement of LBP, LTP solves the recognition problem under drastic illumination change and is robust to severe imaging conditions (such as noise). However, LTP relies on a user-defined threshold, which must be found and set from prior knowledge; this hurts timeliness, and a fixed threshold cannot account for differences between samples, so universality is also a problem. New operators are therefore needed to improve the recognition rate of video images, and threshold optimization is a desirable direction.
In scenes such as government affairs, people's livelihood, environment, public safety, urban services, commercial activity, shopping malls, banks, customs, and military restricted zones, dynamic recognition of persons or backgrounds is an inherent demand of smart-city construction.
Video recognition technology is a composite of multiple technical fields, including computer image processing, graphics, pattern recognition, computer visualization, and cognitive science. Owing to the complexity of its data and the difficulty of acquisition and processing, video recognition technology still falls far short of application requirements in many respects.
Content of the invention
The purpose of the present invention is achieved through the following technical solutions.
The present invention proposes a video image extraction and recognition system, which includes the following parts:
a database for storing a video image training set;
a training set processing module for obtaining the video image training set from the database, vectorizing the images, obtaining a group of orthogonal eigenvectors from the overall complex scatter matrix by the method of singular value decomposition, and selecting the 100 largest eigenvalues and their corresponding eigenvectors;
a video image background removal module for removing irrelevant background from the video image by an image-background defogging method, so that the processed image retains only the video image that needs recognition, which is recorded as the first image to be recognized and sent to the video image detection module;
a video image detection module for processing the first image to be recognized received from the video image background removal module, determining the transform-space projection feature values of the first image to be recognized, and further extracting its features;
a video recognition module for performing video recognition with a Euclidean-distance nearest-neighbor classifier and outputting the recognition result.
According to one aspect of the present invention, the video image detection module is further configured to extract, for the first image to be recognized It, its features by the formula yk = E^T It.
According to one aspect of the present invention, the video recognition system is applied to the recognition of objects in scenes such as shopping malls, banks, customs, and military restricted zones.
The present invention also proposes a method for performing video recognition using the above video image extraction and recognition system, comprising the following steps:
Step 1: obtain the video image training set from the database and vectorize the images;
Step 2: from the feature matrix of the video image training set determined in Step 1, obtain a group of orthogonal eigenvectors from the overall complex scatter matrix by the method of singular value decomposition;
Step 3: select the 100 largest eigenvalues and their corresponding eigenvectors;
Step 4: obtain the video image to be recognized from video captured in real time by a high-definition camera, and obtain the transform-space projection feature values of the video image to be recognized;
Step 5: for the video image to be recognized It obtained in Step 4, extract its features by the formula yk = E^T It;
Step 6: perform video recognition with the Euclidean-distance nearest-neighbor classifier; if the recognition result equals the minimum value, the video image to be recognized It and the training image Ir belong to the same class of object.
According to one aspect of the present invention, in Step 5 the representation of the image in the high-dimensional space is converted into its feature data in the corresponding low-dimensional space, realizing the extraction of image features.
According to one aspect of the present invention, the video recognition method is applied to the recognition of objects in scenes such as shopping malls, banks, customs, and military restricted zones.
Description of the drawings
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The accompanying drawings serve only to illustrate the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, identical parts are denoted by identical reference numerals. In the drawings:
Fig. 1 shows a schematic diagram of the video image extraction and recognition system according to an embodiment of the present invention.
Fig. 2 shows a schematic diagram of the video recognition method according to an embodiment of the present invention.
Specific embodiments
Illustrative embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show illustrative embodiments of the disclosure, it should be understood that the disclosure may be realized in various forms and should not be limited by the embodiments set forth herein. On the contrary, these embodiments are provided so that the disclosure can be understood more thoroughly and its scope can be fully conveyed to those skilled in the art.
Video recognition is a biometric identification technology that uses the visual feature information of video images to identify persons. Compared with other traditional biometric identification technologies, video recognition is easy to collect, convenient and fast, and friendly in interaction, and is gradually being accepted by the public.
Several algorithms for video recognition include:
1. Template matching algorithm (correlation algorithm): measures whether video images are similar by directly computing the distance between image position vectors. Simply put, it obtains the most basic and intuitive features of the video (such as ears, nose, and face shape) to compare similarity, and serves as the benchmark algorithm of video recognition. This algorithm is fast in recognition and uses little system memory, but its accuracy is low, so it is unsuitable for systems with high recognition requirements.
2. Eigenface algorithm: optimizing on the basis of eigenfaces with a method based on principal component analysis (PCA) can make the algorithm more effective; it is the benchmark recognition algorithm in video image comparison tests.
3. Fisherface algorithm: linear discriminant analysis extracts the most discriminative low-dimensional features from the high-dimensional space. After projection into the low-dimensional space, samples of different classes are separated as much as possible while samples of the same class are kept as compact as possible; that is, the between-class scatter should be as large as possible and the within-class scatter as small as possible.
4. Gabor-feature-based algorithms: the eigenface and Fisherface algorithms analyze features using image grayscale, whereas Gabor-feature-based algorithms can analyze image grayscale from multiple angles, simulating the receptive-field profiles of mammalian cortical cells, and adapt to illumination better than the eigenface and Fisherface algorithms. The choice of a video recognition algorithm must consider objective conditions such as the video data acquisition environment, the acquisition equipment, and image data processing optimization; not every algorithm suits the system to be built.
To facilitate the description of the video recognition method and device provided by the embodiments of the present invention, the video recognition algorithms and technical reserves involved in various scenes are first briefly introduced.
Image acquisition may use a camera as the image sensor: a face image is captured by the camera or selected directly from the hard disk, and the face image data source is then stored in the database.
Image preprocessing may include grayscale transformation, binarization, and noise reduction; an Adaboost-based method is then used for video detection and positioning, and any valid video detected is stored in the database.
Grayscale transformation represents each point with a gray level of 0–255, where 0 is black and 255 is white. The RGB color space is changed linearly: the R, G, and B components are processed in turn, and RGB is converted to a grayscale map by the following formula: Gray = 0.299*R + 0.587*G + 0.114*B.
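As a sketch, this weighted conversion can be written in NumPy (the library choice is an assumption; the patent does not name an implementation):

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an H x W x 3 uint8 RGB image to grayscale using the
    weighted formula from the text: Gray = 0.299R + 0.587G + 0.114B."""
    rgb = np.asarray(rgb, dtype=np.float64)
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return np.clip(np.rint(gray), 0, 255).astype(np.uint8)

# Pure red, green, and blue pixels map to their luminance weights:
px = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]]], dtype=np.uint8)
print(rgb_to_gray(px))  # [[ 76 150  29]]
```

Because the three weights sum to 1.0, a white pixel stays at 255 and the gray range matches the 0–255 range stated above.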
Binarization transforms a 0–1 image sequence into a 255–0 image sequence.
The LBP operator describes the texture characteristics of an image within its gray range and is mainly used to assist in extracting the contrast features of local image regions. The LBP operator takes the gray value of the central pixel as a threshold and samples within the neighborhood of the central pixel. For example, with a 3 × 3 neighborhood, the gray values of the 8 pixels adjacent to the central pixel are compared with the threshold: if a neighboring pixel's gray value is greater than the threshold (i.e., the central pixel's gray value), that pixel position is marked 1, otherwise 0. This produces an 8-bit binary number, which is converted to a decimal number as the LBP feature value of the central pixel; since an 8-bit binary number converts to a decimal number in the range 0–255, the feature value range is 0–255. As a concrete example of computing an LBP feature value: the central pixel's gray value is 9; comparing the neighborhood gray values with the central gray value yields the 8-bit binary number 01000111, which converts to the decimal number 71 as the LBP feature value.
However, the LBP operator only compares the magnitudes of gray values and ignores the contrast between pixels: when the pixel gray values in a neighborhood change while their ordering is preserved, the LBP coding result stays the same. The LBP operator therefore cannot describe the difference before and after a nonlinear change, and some important texture features may be discarded.
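The worked LBP example in the text (center gray value 9, bit string 01000111, feature value 71) can be reproduced with a short sketch; the clockwise scan order starting from the top-left neighbor is an assumption, since the patent does not fix one:

```python
import numpy as np

def lbp_value(patch):
    """LBP feature value of the center pixel of a 3x3 patch: neighbors
    are thresholded against the center gray value and read as an 8-bit
    number, here clockwise from the top-left corner (assumed order)."""
    center = patch[1, 1]
    # Clockwise neighbor positions starting at the top-left corner.
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = ''.join('1' if patch[r, c] > center else '0' for r, c in order)
    return bits, int(bits, 2)

# A patch whose neighbors reproduce the bit string from the text
# (center gray value 9, code 01000111, feature value 71):
patch = np.array([[5, 12, 3],
                  [20, 9, 4],
                  [15, 11, 7]])
bits, val = lbp_value(patch)
print(bits, val)  # 01000111 71
```

The specific neighbor gray values are hypothetical; only the center value 9 and the resulting code 01000111 → 71 come from the text.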
The LTP operator improves on the LBP operator by using three-valued encoding to increase the classification capacity of the whole feature space. A user-defined threshold t greatly improves sensitivity to noise and, to a certain extent, balances the highlight and low-light regions caused by drastic illumination. The specific LTP operation is: when the difference between a neighborhood pixel's gray value and the central pixel's gray value is greater than or equal to t, the pixel position is marked 1; when the difference is less than −t, it is marked −1; otherwise it is marked 0. To simplify computation, the LTP coding process can be decomposed into a positive part and a negative part, each coded with the LBP method. The decomposition is shown in Fig. 2: extracting the "+1" codes as "1" and the rest as "0" yields the upper pattern feature by LBP-style coding; extracting the "−1" codes as "1" and the rest as "0" yields the lower pattern feature by LBP-style coding. After the LTP feature-extraction transform, the characterization and classification performance of the whole feature-space sample is further strengthened and improved.
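A minimal sketch of fixed-threshold LTP coding with the positive/negative decomposition described above (same assumed neighbor order as the LBP example; NumPy is an implementation choice):

```python
import numpy as np

def ltp_codes(patch, t):
    """Three-valued LTP coding of a 3x3 patch with fixed threshold t,
    decomposed into an 'upper' LBP-style code (the +1 positions) and a
    'lower' code (the -1 positions), as described in the text."""
    center = int(patch[1, 1])
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    diffs = [int(patch[r, c]) - center for r, c in order]
    # >= t -> +1, < -t -> -1, within [-t, +t) -> 0
    ternary = [1 if d >= t else (-1 if d < -t else 0) for d in diffs]
    upper = int(''.join('1' if v == 1 else '0' for v in ternary), 2)
    lower = int(''.join('1' if v == -1 else '0' for v in ternary), 2)
    return ternary, upper, lower

patch = np.array([[5, 12, 3],
                  [20, 9, 4],
                  [15, 11, 7]])
tern, up, lo = ltp_codes(patch, t=3)
print(tern, up, lo)  # [-1, 1, -1, -1, 0, 0, 1, 1] 67 176
```

The patch values and t = 3 are hypothetical; the code only illustrates the ternary split into two binary codes.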
A video recognition system requires complete software and hardware. According to an embodiment of the present invention, it includes the following parts:
Video capture device: captures appearance images of the acquisition target. The target generally should not wear accessories (such as glasses or a hat), to ensure the completeness of the captured image. The captured video must meet objective conditions required by the system, such as illumination, shooting angle, and background.
Video image positioning device: after the image is obtained, positions the video by modeling from face to contour, and ensures that the position of the acquisition target matches the image position to be compared.
Image preprocessing module: after the video position is determined, preprocesses the image data and adjusts it to optimize the comparison.
Image feature extraction module: extracts the required data from the preprocessed image according to the algorithm.
Search database: used to obtain the video image training set and to compare the extracted data with the data in the video image training set that requires authentication.
Result display module: feeds back the system result, which is processed further according to the result.
On the basis of the above video recognition system, an embodiment of the present invention proposes a video image extraction and recognition system, configured as shown in Fig. 1, including the following parts:
a database for storing a video image training set;
a training set processing module that obtains the video image training set from the database, vectorizes the images, obtains a group of orthogonal eigenvectors from the overall complex scatter matrix by the method of singular value decomposition, and selects the 100 largest eigenvalues and their corresponding eigenvectors;
a video image background removal module that obtains real-time video images from the video capture device and removes irrelevant background from the video image by an image-background defogging method, so that the processed image retains only the video image that needs recognition, which is recorded as the first image to be recognized and sent to the video image detection module;
a video image detection module that processes the first image to be recognized received from the video image background removal module, determines the transform-space projection feature values of the first image to be recognized, and further extracts its features;
a video recognition module that performs video recognition with a Euclidean-distance nearest-neighbor classifier and outputs the recognition result.
Assume the training set of video images is C. C has m video objects, and each object has n video images. Each image contains depth data (denoted depth) and grayscale data (denoted intn). With i denoting the imaginary unit, the k-th (1 ≤ k ≤ n × m) high-definition video image Ik can be expressed as:
Ik = depthk + intnk × i (1)
According to an embodiment of the present invention, a video detection method with dynamic recognition is proposed.
First, the mean image of the whole training set in the complex field can be expressed as:
Ī = (1/(n × m)) Σp=1..m Σq=1..n Ip_q (2)
where Ip_q denotes the q-th image of the p-th object in the training set.
The overall complex scatter matrix S of the video training set C is:
S = (1/(n × m)) Σk=1..n×m (Ik − Ī)(Ik − Ī)^H (3)
where Ik is the k-th training image, Ī is the mean value of the training samples, and n × m is the scale of the training set.
From the overall complex scatter matrix, a group of orthogonal eigenvectors u1, u2, …, ut and their corresponding eigenvalues λ1, λ2, …, λt (with λ1 ≥ λ2 ≥ … ≥ λt) is obtained by the method of singular value decomposition. The eigenvectors corresponding to the first d (d < t) nonzero eigenvalues are taken as an orthogonal basis; d is called the feature dimension N. Arranging the orthogonal basis as image matrices yields images called eigenfaces. In the eigenface subspace E, a video sample Ik can be projected as yk. By this method, the representation of an image in the high-dimensional space is converted into its feature data in the corresponding low-dimensional space, realizing the extraction of image features:
yk = E^T Ik (4)
After the above feature extraction, each training video image corresponds to a d × 1 column vector that preserves its feature information. The training set has m × n images, so the matrix Y = {y1, y2, …, ym×n} is finally obtained, preserving the feature information of all training images. The features of any video image It to be recognized can likewise be extracted by formula (4) and saved as yt. For the Euclidean-distance nearest-neighbor classifier, define:
Dist(yt, yc) = ‖yt − yc‖ (5)
If the following is satisfied:
Dist(yt, yr) = min[Dist(yt, yc)], yc ∈ Y (6)
then yt and yr belong to the same class of object; that is, the video image It to be recognized and the training image Ir belong to the same class of object.
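On illustrative toy data, the projection of formula (4) and the nearest-neighbor decision of formula (6) can be sketched as follows (the basis E, image sizes, and class labels are all hypothetical):

```python
import numpy as np

def project(E, image_vec):
    """Formula (4): y = E^T * I, mapping an image vector into the
    low-dimensional eigen-subspace spanned by the columns of E."""
    return E.T @ image_vec

def nearest_neighbor(y_t, Y, labels):
    """Formula (6): return the label of the stored training feature
    closest to y_t in Euclidean distance."""
    dists = np.linalg.norm(Y - y_t, axis=1)
    return labels[int(np.argmin(dists))]

rng = np.random.default_rng(1)
E = rng.normal(size=(64, 5))             # toy 5-vector projection basis
train_imgs = rng.normal(size=(6, 64))    # six toy training image vectors
labels = ["A", "A", "B", "B", "C", "C"]
Y = np.stack([project(E, v) for v in train_imgs])

# A probe near training image 3 should be classified as class "B".
probe = train_imgs[3] + 0.01 * rng.normal(size=64)
print(nearest_neighbor(project(E, probe), Y, labels))  # B
```

The sketch only shows the decision rule; in the patent the basis E comes from the eigenface training stage rather than random numbers.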
On the basis of the above video detection method with dynamic recognition, an embodiment of the present invention proposes a method for performing video recognition using the above video image extraction and recognition system; as shown in Fig. 2, it can include the following steps:
Step 1: obtain the video image training set from the database and vectorize the images.
The gray matrix of the i-th m × n-sized image is transformed into a 1 × (m × n) row vector and stored in train_data(i, :). These vectors are arranged in order and averaged by row; this operation obtains the mean of the video images in the training set. The difference between each video image in the training set and the mean is computed, and normalization by the maximum controls the range of values of the feature matrix train_xd. The matrix R = train_xd * train_xd' is computed, and its eigenvalues λi and corresponding orthonormal eigenvectors νi are calculated, where train_xd denotes the feature matrix.
Step 2: from the feature matrix of the video image training set determined in Step 1, obtain a group of orthogonal eigenvectors from the overall complex scatter matrix by the method of singular value decomposition.
The orthogonal eigenvectors u1, u2, …, ut and their corresponding eigenvalues λ1, λ2, …, λt satisfy λ1 ≥ λ2 ≥ … ≥ λt. The eigenvectors corresponding to the first d (d < t) nonzero eigenvalues are taken as an orthogonal basis; d is called the feature dimension N.
Step 3: select the 100 largest eigenvalues and their corresponding eigenvectors.
The orthonormal eigenvectors of the covariance matrix are obtained according to the singular value decomposition theorem by the following formula:
Ui = (1/√λi) · train_xd′ · νi (16)
The eigenface space can be expressed as:
U = (U1, U2, …, Up) (17)
The difference matrix train_xd between the training set images and the mean is projected into the eigenface space and stored in the matrix train_Y:
train_Y = train_xd * U (18)
At this point the training stage is complete.
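The training stage of Steps 1–3 can be sketched end to end on toy data; the variable names follow the text's train_data, train_xd, and train_Y, while the image sizes and the retained dimension d = 5 (the patent keeps 100) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 12 images of 8x8 pixels, vectorized into rows
# (train_data(i, :) in the text).
train_data = rng.integers(0, 256, size=(12, 64)).astype(np.float64)

# Step 1: subtract the per-pixel mean and normalize by the maximum
# to control the value range of the feature matrix train_xd.
mean_img = train_data.mean(axis=0)
diff = train_data - mean_img
train_xd = diff / np.abs(diff).max()

# Step 2: eigen-decompose the small matrix R = train_xd @ train_xd.T;
# the SVD trick recovers pixel-space eigenvectors as
# U_i = train_xd.T @ v_i / sqrt(lambda_i)  (formula (16)).
R = train_xd @ train_xd.T
lams, vecs = np.linalg.eigh(R)
idx = np.argsort(lams)[::-1]          # largest eigenvalues first
lams, vecs = lams[idx], vecs[:, idx]

# Step 3: keep the d largest eigenvalues and form the eigenface
# basis U (formula (17)); the toy set only supports a small d.
d = 5
U = train_xd.T @ vecs[:, :d] / np.sqrt(lams[:d])

# Project the training differences into eigenface space (formula (18)).
train_Y = train_xd @ U
print(train_Y.shape)  # (12, 5)
```

The division by √λi makes each basis column unit-length, which is why the small-matrix eigenvectors can stand in for those of the much larger pixel-space covariance matrix.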
Step 4: obtain the video image to be recognized from video captured in real time by a high-definition camera, and obtain the transform-space projection feature values of the video image to be recognized.
The transform-space projection feature value corresponding to each image is determined by the gray difference between each pixel in the image and the pixels in its neighborhood.
Based on the transform-space projection feature values of the video image to be recognized, the accuracy of feature extraction can be continuously corrected and improved during the feature-extraction process.
The video image to be recognized uses a grayscale image. First, the adaptive threshold of each pixel is determined from the gray values of that pixel and of the pixels in its neighborhood; the LTP operator then uses this adaptive threshold as its threshold when computing the pixel's feature value. That is, the LTP adaptive-threshold feature value of each pixel in the video image to be recognized is determined with an adaptive-threshold LTP operator.
In some embodiments of the present invention, determining the LTP adaptive-threshold feature value of each pixel in the video image to be recognized may specifically include:
traversing each pixel in the video image to be recognized, and determining the gray difference between each pixel in a preset neighborhood of the current pixel and the current pixel itself;
computing the standard deviation of these gray differences as the adaptive threshold of the current pixel;
using the current pixel's adaptive threshold as the threshold of the LTP operator, and determining the current pixel's LTP feature value with the adaptive-threshold LTP operator; this LTP feature value is the current pixel's LTP adaptive-threshold feature value.
The preset neighborhood is generally a 3 × 3 block, which leaves 8 neighboring pixels after the central pixel is removed. The gray difference between each of these 8 neighbors and the central pixel is computed; the standard deviation of this group of 8 gray differences is taken as the current pixel's adaptive threshold, and the LTP operator then computes the current pixel's LTP feature value.
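A sketch of this adaptive-threshold variant, using the standard deviation of the eight neighbor-minus-center differences as t (the neighbor scan order and the use of the population standard deviation are assumptions):

```python
import numpy as np

def adaptive_ltp(patch):
    """Adaptive-threshold LTP for the center of a 3x3 patch: t is the
    standard deviation of the 8 neighbor-minus-center gray differences,
    and the +1/-1 positions give LBP-style upper/lower codes."""
    center = int(patch[1, 1])
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    diffs = np.array([int(patch[r, c]) - center for r, c in order], float)
    t = diffs.std()                       # per-pixel adaptive threshold
    ternary = np.where(diffs >= t, 1, np.where(diffs < -t, -1, 0))
    upper = int(''.join('1' if v == 1 else '0' for v in ternary), 2)
    lower = int(''.join('1' if v == -1 else '0' for v in ternary), 2)
    return t, upper, lower

patch = np.array([[5, 12, 3],
                  [20, 9, 4],
                  [15, 11, 7]])
t, up, lo = adaptive_ltp(patch)
print(up, lo)  # 3 32
```

With the same hypothetical patch as the earlier examples, the data-driven threshold (≈ 5.57 here) marks fewer positions than the fixed t = 3, illustrating how the threshold adapts to local contrast.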
In some embodiments of the present invention, the video recognition method provided by the embodiments may further include: preprocessing the LTP image to be recognized and dividing it into equal blocks, so that the adaptive-threshold Local Ternary Pattern (LTP) operator computes each pixel's LTP adaptive-threshold feature value block by block. An upper pattern feature face and a lower pattern feature face are then determined: the upper pattern feature face consists of the upper pattern feature values of the pixels, and the lower pattern feature face consists of their lower pattern feature values. Both the upper and the lower pattern feature values of a pixel range over 0–255, so replacing each pixel's gray value with its upper or lower pattern feature value determines the upper pattern feature face image and the lower pattern feature face image respectively. One video image to be recognized can thus be converted into an upper pattern feature face and a lower pattern feature face.
Step 5, for the video image to be identified I_t obtained in step 4, extract its features by the formula y_k = E^T I_t;
Step 6, carry out video identification using the nearest-neighbor classifier with Euclidean distance; if the identification result equals the minimum value, the video image to be identified I_t and the training image I_r belong to the same class of object.
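As an informal illustration (not part of the disclosure), steps 5 and 6, together with the eigenbasis training they rely on, could be sketched in NumPy as follows; the mean-centering and all helper names are assumptions, since the text specifies only the projection y_k = E^T I_t and the Euclidean nearest-neighbor rule:

```python
import numpy as np

def train_eigenbasis(train_vecs, k=100):
    """SVD of the mean-centered training matrix (columns = vectorized
    images) yields orthogonal eigenvectors of the overall scatter
    matrix; keep the k leading ones as the projection matrix E."""
    X = train_vecs - train_vecs.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]  # eigenvectors for the k largest eigenvalues

def identify(E, train_vecs, labels, I_t):
    """Project with y = E^T I and label the probe image by its
    Euclidean nearest neighbor among the projected training images."""
    Y = E.T @ train_vecs               # projected training set
    y_t = E.T @ I_t                    # projected probe image
    d = np.linalg.norm(Y - y_t[:, None], axis=0)
    return labels[int(np.argmin(d))]
```

With 100 retained eigenvectors, each vectorized frame reduces to a 100-dimensional feature before the distance comparison.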
Most current video identification is static, that is, a person must stand at a fixed position to be identified. Such identification technology suffers from slow recognition speed and a narrow range of use, and cannot meet society's requirements in many important settings. The dynamic video identification according to embodiments of the present invention achieves the technical effect that, while a person walks along, the camera can capture video images at random and identify them quickly.
According to the video identification method of the present invention, the object to be recognized can be quickly identified among multiple targets in a dynamic video image.
The above are only preferred specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can be readily conceived by those familiar with the art within the technical scope disclosed by the invention shall be included within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the protection scope of the claims.

Claims (6)

1. A video image extraction and identification system, characterized in that the video image extraction and identification system comprises the following parts:
a database, for storing the video image training set;
a training set processing module, for obtaining the video image training set from the database and vectorizing the images, obtaining a group of orthogonal eigenvectors from the overall scatter matrix by the method of singular value decomposition, and selecting the first 100 largest eigenvalues and their corresponding eigenvectors;
a video image background removal module, for processing the irrelevant background in the video image by means of image background defogging, the processed image retaining the video image that needs to be identified, which is denoted the first image to be identified and sent to the video image detection module;
a video image detection module, for processing the first image to be identified received from the video image background removal module, determining the transform-space projection feature value of the first image to be identified, and further extracting its features;
a video identification module, for carrying out video identification using the nearest-neighbor classifier with Euclidean distance and outputting the identification result.
2. The video identification system as claimed in claim 1, characterized in that:
the video image detection module is further used for extracting the features of the first image to be identified I_t by the formula y_k = E^T I_t.
3. The video identification system as claimed in claim 1, characterized in that:
the video identification system is applicable to the identification of objects in scenes such as shopping malls, banks, customs, and military restricted zones.
4. A method for carrying out video identification using the video image extraction and identification system as claimed in claim 1, characterized in that it comprises the following steps:
Step 1, obtaining the video image training set from the database and vectorizing the images;
Step 2, according to the feature matrix of the video image training set determined in step 1 and the overall scatter matrix, obtaining a group of orthogonal eigenvectors by the method of singular value decomposition;
Step 3, selecting the first 100 largest eigenvalues and their corresponding eigenvectors;
Step 4, obtaining the video image to be identified from video captured in real time by a high-definition camera, and obtaining the transform-space projection feature value of the video image to be identified;
Step 5, for the video image to be identified I_t obtained in step 4, extracting its features by the formula y_k = E^T I_t;
Step 6, carrying out video identification using the nearest-neighbor classifier with Euclidean distance; if the identification result equals the minimum value, the video image to be identified I_t and the training image I_r belong to the same class of object.
5. The video identification method as claimed in claim 4, characterized in that:
in step 5, the representation of the image in high-dimensional space is converted into its characteristics in the corresponding low-dimensional space, realizing the extraction of image features.
6. The video identification method as claimed in claim 4 or 5, characterized in that:
the video identification method is applicable to the identification of objects in scenes such as shopping malls, banks, customs, and military restricted zones.
CN201610892089.6A 2016-10-12 2016-10-12 Video image extraction and identification method and system Pending CN106548130A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610892089.6A CN106548130A (en) 2016-10-12 2016-10-12 Video image extraction and identification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610892089.6A CN106548130A (en) 2016-10-12 2016-10-12 Video image extraction and identification method and system

Publications (1)

Publication Number Publication Date
CN106548130A true CN106548130A (en) 2017-03-29

Family

ID=58368691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610892089.6A Pending CN106548130A (en) Video image extraction and identification method and system

Country Status (1)

Country Link
CN (1) CN106548130A (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104112116A (en) * 2011-06-30 2014-10-22 深圳市君盛惠创科技有限公司 Cloud server
CN104318219A (en) * 2014-10-31 2015-01-28 上海交通大学 Face recognition method based on combination of local features and global features

Non-Patent Citations (2)

Title
Pang Cheng: "Research on Subspace Dimensionality Reduction Methods in Face Recognition", China Master's Theses Full-text Database, Information Science and Technology *
Li Jin: "Research on Face Recognition Based on Algebraic Features", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN107992839A (en) * 2017-12-12 2018-05-04 北京小米移动软件有限公司 Person tracking method, device and readable storage medium storing program for executing
CN111694979A (en) * 2020-06-11 2020-09-22 重庆中科云从科技有限公司 Archive management method, system, equipment and medium based on image
CN112101058A (en) * 2020-08-17 2020-12-18 武汉诺必答科技有限公司 Method and device for automatically identifying test paper bar code
CN112101058B (en) * 2020-08-17 2023-05-09 武汉诺必答科技有限公司 Automatic identification method and device for test paper bar code

Similar Documents

Publication Publication Date Title
CN109583342B (en) Human face living body detection method based on transfer learning
Darlow et al. Fingerprint minutiae extraction using deep learning
KR101185525B1 (en) Automatic biometric identification based on face recognition and support vector machines
CN106845328B (en) A kind of Intelligent human-face recognition methods and system based on dual camera
CN108268859A (en) A kind of facial expression recognizing method based on deep learning
CN111340824B (en) Image feature segmentation method based on data mining
CN102982322A (en) Face recognition method based on PCA (principal component analysis) image reconstruction and LDA (linear discriminant analysis)
CN111126240B (en) Three-channel feature fusion face recognition method
CN109255289B (en) Cross-aging face recognition method based on unified generation model
CN107784263B (en) Planar rotation face detection method based on improved accelerated robust features
CN109325472B (en) Face living body detection method based on depth information
CN112464885A (en) Image processing system for future change of facial color spots based on machine learning
Hebbale et al. Real time COVID-19 facemask detection using deep learning
CN110598574A (en) Intelligent face monitoring and identifying method and system
Monwar et al. Pain recognition using artificial neural network
CN111832405A (en) Face recognition method based on HOG and depth residual error network
CN113591747A (en) Multi-scene iris recognition method based on deep learning
CN113312965A (en) Method and system for detecting unknown face spoofing attack living body
CN106548130A (en) A kind of video image is extracted and recognition methods and system
Adeyanju et al. Development of an american sign language recognition system using canny edge and histogram of oriented gradient
CN103942545A (en) Method and device for identifying faces based on bidirectional compressed data space dimension reduction
Shaban et al. A Novel Fusion System Based on Iris and Ear Biometrics for E-exams.
CN110135362A (en) A kind of fast face recognition method based under infrared camera
CN106529412A (en) Intelligent video recognition method and system
KR100880256B1 (en) System and method for recognition of face using the real face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170329