CN110210432A - Face recognition method for an intelligent security robot under unconstrained conditions - Google Patents

Face recognition method for an intelligent security robot under unconstrained conditions

Info

Publication number
CN110210432A
Authority
CN
China
Prior art keywords
image
pixel
face
formula
neural networks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910492676.XA
Other languages
Chinese (zh)
Inventor
王耀南
黄亨斌
毛建旭
朱青
周士琪
张思远
钟杭
袁小芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University
Priority to CN201910492676.XA
Publication of CN110210432A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/164 Detection; Localisation; Normalisation using holistic features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face recognition method for an intelligent security robot under unconstrained conditions. First, faces are detected in the image using HOG features, and the face regions are cropped for subsequent processing. Next, a CNN is trained as a binary classifier that separates sharp images from blurred ones; the images classified as blurred are then deblurred, while sharp images skip this step. The Lucy-Richardson algorithm is used to deblur the motion-blurred image frames. SRCNN image super-resolution then normalizes the images to the same size. The images processed by the above steps are fed into a trained CNN to extract feature vectors. Finally, the extracted feature vectors are classified by an SVM, completing face recognition. The invention has the advantage of effectively improving the efficiency and accuracy with which an intelligent security robot recognizes faces while on patrol.

Description

Face recognition method for an intelligent security robot under unconstrained conditions
Technical field
The present invention relates to the field of computer vision, and in particular to a face recognition method for an intelligent security robot under unconstrained conditions.
Background technique
Existing security systems rely mainly on video surveillance, which requires dedicated security staff to monitor and patrol around the clock. This is expensive in human resources, and staff who monitor and patrol at night are prone to fatigue and lapses of attention. Moreover, when a large-area power failure occurs, most traditional security systems are paralyzed. An intelligent security robot can take on indoor and outdoor patrol missions and collect environmental data, forming a flexible monitoring system. Security robots can patrol continuously in relatively harsh environments, greatly saving human resources, and therefore hold a clear advantage for security work. However, the image quality obtained by an intelligent security robot on patrol varies widely, and the persons to be identified cannot be required to cooperate closely, whereas most traditional face recognition methods reach high recognition accuracy only under such close cooperation. In view of this, a high-accuracy face recognition technique for unconstrained conditions is a technical problem that those skilled in the art need to solve.
Summary of the invention
In view of this, the present invention proposes a face recognition method for an intelligent security robot under unconstrained conditions. Through image deblurring, illumination correction of the image frame sequence, and super-resolution processing, the method solves the problems of reduced recognition accuracy or failed recognition caused by blurred images, complex and changeable illumination, varying face poses, and low-resolution face images obtained by existing intelligent security robots on patrol.
The face recognition method for an intelligent security robot under unconstrained conditions of the present invention comprises the following steps:
S1: detect faces in the image using HOG features, and crop the corresponding face images;
S2: judge whether the cropped face image is blurred with a pre-trained binary classifier based on a CNN; if it is, go to step S3; otherwise, go to step S4;
S3: deblur the face image classified as blurred in step S2 using the Lucy-Richardson algorithm, then go to step S4;
S4: process the face image output by step S2 or S3 with SRCNN image super-resolution, and normalize the resulting high-resolution image to a uniform size;
S5: extract a feature vector with a pre-trained CNN;
S6: classify the feature vector obtained in step S5 with a pre-trained SVM classifier to complete face recognition.
Further, the concrete implementation of HOG-based face detection in step S1 is as follows:
S11: start and load the color face image;
S12: convert the color face image to a grayscale image;
S13: divide the grayscale image into small cells of 8*8 pixels;
S14: compute the histogram of oriented gradients of each cell;
S15: choose the direction with the largest accumulated gradient magnitude in the histogram as the general direction of the cell;
S16: replace each cell with a directional arrow, obtaining a face HOG feature map composed of many arrows;
S17: detect the face regions in the image by comparing its portions with the face HOG feature map.
Further, the histogram of oriented gradients of each cell in step S14 is obtained as follows:
S141: convolve the cell with a gradient operator and compute the gradient direction and magnitude at each pixel:
G(x,y) = \sqrt{I_x^2 + I_y^2}, \quad \theta(x,y) = \arctan\left(I_y / I_x\right) \quad (1)
where I_x is the gradient in the horizontal direction, I_y the gradient in the vertical direction, G(x, y) the gradient magnitude, and θ(x, y) the gradient direction;
S142: divide the 360-degree range into 12 regions of 30 degrees each, so the whole histogram has 12 dimensions;
S143: using trilinear interpolation, add each pixel's magnitude into the histogram according to its gradient direction, obtaining the histogram of oriented gradients of the cell.
Further, the CNN-based binary classifier in step S2 is trained and verified as follows:
S21: prepare the training data set, containing sharp images and blurred images placed in two separate folders;
S22: build the CNN to be trained;
S23: solve with a mini-batch stochastic gradient descent optimization algorithm;
S24: train iteratively with the data set prepared in step S21;
S25: after training, use the CNN to identify blurred images.
Further, the CNN in step S22 consists of one input layer, four convolutional layers, four pooling layers, two fully connected layers, and one output layer, where each convolutional layer uses the ReLU activation function and the output layer uses softmax regression to output the probabilities of the two classes.
Further, the concrete implementation of the deblurring processing performed in step S3 with the Lucy-Richardson algorithm on the blurred images classified in step S2 is as follows:
S31: establish the basic model
Y = \gamma * P \quad (2)
where Y is the degraded image, γ the original image, P the point spread function, and * denotes convolution; in discrete form,
y_j = \sum_{i=1}^{n} p(i,j)\,\lambda_i, \quad j = 1,\dots,d \quad (3)
where i and j index pixels, n is the number of pixels of the original image, d the number of pixels of the degraded image, p(i, j) the point spread function, λ_i the true value of pixel i, and y_j the observed value of pixel j;
S32: assume the observed image and the true image contain the same number of pixels;
S33: y_j follows a Poisson distribution, i.e.
y_j \sim \mathrm{Poisson}(\mu_j) \quad (4)
where the Poisson parameter μ_j is given by
\mu_j = \sum_{i=1}^{n} \lambda_i\, p(i,j) \quad (5)
S34: introduce the probability function of the Poisson distribution and a Gaussian PSF:
P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!}, \qquad p(i,j) = \exp\left(-\frac{d(i,j)^2}{2\delta^2}\right) \quad (6)
where λ is the expectation and variance of the Poisson distribution, k the value of the variable, d(i, j) the distance between pixels i and j, and δ the variance;
S35: convert formula (4) into
z(i,j) \sim \mathrm{Poisson}(\lambda_i\, p(i,j)) \quad (7)
where z(i, j) is the contribution of pixel i mapped onto pixel j, so y_j can be expressed as
y_j = \sum_{k=1}^{n} z(k,j) \quad (8)
where λ_k is the true value of pixel k and p(k, j) the value of the point spread function at (k, j);
S36: assuming the image γ is known, z(i, j) can be estimated by
\hat{z}(i,j) = \frac{\lambda_i\, p(i,j)\, y_j}{\sum_{k=1}^{n} \lambda_k\, p(k,j)} \quad (9)
and λ can then be estimated by
\lambda_i = \frac{\sum_j \hat{z}(i,j)}{\sum_j p(i,j)} \quad (10)
S37: finally obtain the iteration
\lambda_i^{(t+1)} = \frac{\lambda_i^{(t)}}{\sum_j p(i,j)} \sum_j \frac{p(i,j)\, y_j}{\sum_k \lambda_k^{(t)}\, p(k,j)} \quad (11)
where t is the iteration index, λ_k^{(t)} the estimated true value of pixel k at iteration t, λ_i^{(t)} that of pixel i at iteration t, and λ_i^{(t+1)} that of pixel i at iteration t+1.
Further, in step S4 the face image is processed by SRCNN image super-resolution, and the resulting high-resolution image is normalized to a uniform size of 128*128, realized as follows:
S41: construct the BiCubic interpolation function using the bicubic interpolation method:
W(x) = \begin{cases} (a+2)|x|^3 - (a+3)|x|^2 + 1, & |x| \le 1 \\ a|x|^3 - 5a|x|^2 + 8a|x| - 4a, & 1 < |x| < 2 \\ 0, & \text{otherwise} \end{cases} \quad (12)
where a is the coefficient, x the distance from the pixel to the point P, and W(x) the weight of that pixel;
S42: for the pixel (x, y) to be interpolated, take the 4x4 neighborhood points (x_i, y_j) around it, where x and y may be floating-point values and i, j = 0, 1, 2, 3;
S43: interpolate by the following formula and convert the image to the 128*128 size:
f(x,y) = \sum_{i=0}^{3}\sum_{j=0}^{3} f(x_i, y_j)\, W(x - x_i)\, W(y - y_j) \quad (13)
where f(x, y) is the converted image function, f(x_i, y_j) the original image function, W(x - x_i) the weight for x_i, and W(y - y_j) the weight for y_j.
Further, the SRCNN image super-resolution used in step S4 is based on a three-layer convolutional network, where the first convolutional layer has 9 × 9 kernels, 64 of them; the second has 1 × 1 kernels, 32 of them; and the third has 5 × 5 kernels, 1 of them.
Further, the concrete implementation of training the feature-extraction CNN used in step S5 is as follows:
S51: train the CNN with the Triplet Loss function, using the Euclidean distance as the similarity measure:
L = \sum_{i}^{N} \left[ \| f(x_i^a) - f(x_i^p) \|_2^2 - \| f(x_i^a) - f(x_i^n) \|_2^2 + a \right]_+ \quad (14)
where x_i^a is the anchor example, x_i^p the positive example, x_i^n the negative example, \| f(x_i^a) - f(x_i^p) \|_2^2 the Euclidean distance between the anchor and the positive example, \| f(x_i^a) - f(x_i^n) \|_2^2 that between the anchor and the negative example, and a the minimum margin between the anchor-positive distance and the anchor-negative distance;
S52: the network of step S51 comprises four convolutional layers, four max-pooling layers, and one fully connected layer; the model input is a triplet of size 128*128*3. First convolutional layer: 11 × 11 kernels, 48 of them, max pooling with stride 2; second convolutional layer: 5 × 5 kernels, 128 of them, max pooling with stride 2; third convolutional layer: 3 × 3 kernels, 192 of them, max pooling with stride 2; fourth convolutional layer: 3 × 3 kernels, 128 of them, max pooling with stride 2; the fully connected layer outputs a 128-dimensional feature vector.
Further, the concrete implementation of pre-training the SVM classifier used in step S6 is as follows:
S61: establish a face database in advance;
S62: extract a 128-dimensional feature vector from every image in the database with the CNN trained in step S5;
S63: design one SVM between every pair of classes and perform one-versus-one classification training; when an unknown sample is classified, the class that receives the most votes is the class of that sample.
In the face recognition method for an intelligent security robot under unconstrained conditions provided by the invention: (1) since face recognition only needs to process the face region, and processing the remaining image areas would waste computing resources, the first step detects faces with HOG features and then crops the face regions for subsequent processing; (2) to overcome the motion blur caused by the robot's shaking, the Lucy-Richardson algorithm deblurs the motion image frame sequence; and considering that deblurring already-sharp images would itself waste computing resources, a CNN is first trained as a binary sharp/blurred classifier, after which only the images classified as blurred are deblurred, while sharp images skip this step; (3) the SRCNN image super-resolution algorithm normalizes the images to the same size; (4) the images processed by the above steps are fed into a trained CNN to extract feature vectors; (5) the feature vectors are classified by an SVM, completing face recognition. The face recognition method for an intelligent security robot under unconstrained conditions of the invention therefore has the advantage of effectively improving the efficiency and accuracy with which the robot recognizes faces on patrol.
Detailed description of the invention
The accompanying drawings, which form a part of the invention, provide a further understanding of it; the schematic embodiments of the invention and their descriptions explain the invention and do not unduly limit it. In the drawings:
Fig. 1 is the flow chart of the face recognition method for an intelligent security robot under unconstrained conditions provided by an embodiment of the present invention;
Fig. 2 is the flow chart of the HOG-feature face detection used in the invention;
Fig. 3 is the structure of the CNN used in the invention for the binary classification of sharp and blurred images;
Fig. 4 is the structure of the SRCNN image super-resolution convolutional network used in the invention;
Fig. 5 is the structure of the CNN used in the invention to extract feature vectors;
Fig. 6 is a comparison of the face recognition method of the invention with two traditional algorithms;
Fig. 7 shows the accuracy-versus-distance curves of the face recognition method of the invention and two traditional algorithms (curve ① is the recognition accuracy of the method of the invention at different distances, curve ② that of the VGGFace algorithm, and curve ③ that of the FaceNet algorithm).
Specific embodiment
It should be noted that, where no conflict arises, the embodiments of the invention and the features in them may be combined with one another. The invention is described in detail below with reference to the drawings and the embodiments.
For a better explanation of the invention, the following terms are used herein:
HOG (Histogram of Oriented Gradients): histogram of oriented gradients;
CNN (Convolutional Neural Network): convolutional neural network;
Lucy-Richardson algorithm (also called the LR algorithm): a nonlinear deconvolution method;
SRCNN (Super-Resolution Convolutional Neural Network): image reconstruction based on a convolutional neural network;
SVM (Support Vector Machine): support vector machine;
ReLU (Rectified Linear Units): activation function;
Softmax regression: used for multi-class problems;
Bicubic interpolation: bicubic interpolation method;
Triplet Loss: triplet loss function.
Fig. 1 is the flow chart of the face recognition method for an intelligent security robot under unconstrained conditions provided by an embodiment of the present invention. As shown in Fig. 1, the face recognition method of the invention comprises the following steps:
S1: detect faces in the image using HOG features, and crop the corresponding face images;
S2: judge whether the cropped face image is blurred with a pre-trained binary classifier based on a CNN; if it is, go to step S3; otherwise, go to step S4;
S3: deblur the face image classified as blurred in step S2 using the Lucy-Richardson algorithm, then go to step S4;
S4: process the face image output by step S2 or S3 with SRCNN image super-resolution, and normalize the resulting high-resolution image to a uniform size;
S5: extract a feature vector with a pre-trained CNN;
S6: classify the feature vector obtained in step S5 with a pre-trained SVM classifier to complete face recognition.
It should be noted that a further step is included before step S4: the deblurred image is illumination-corrected using a gamma correction algorithm.
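As an illustration, a minimal Python sketch of such a gamma correction follows; the exponent value is an assumption, since the patent does not specify the gamma used:

```python
import numpy as np

def gamma_correct(img, gamma=0.6):
    # Normalize to [0, 1], apply the power-law transform, and rescale.
    # gamma < 1 brightens dark patrol footage; gamma > 1 darkens it.
    img = img.astype(np.float32) / 255.0
    return np.uint8(np.clip(img ** gamma, 0.0, 1.0) * 255.0)
```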
Meanwhile as shown in Fig. 2, carrying out the concrete methods of realizing of Face datection in step S1 to image using HOG feature are as follows:
S11 starts and is loaded into colorized face images;
Colorized face images are converted to grayscale image by S12;
Grayscale image is divided into the small cube of several 8*8 pixels by S13;
S14 calculates the direction gradient distribution histogram of each small cube;
Specifically, the direction gradient distribution histogram of small cube obtains as follows in step S14:
S141 carries out convolution to small cube using gradient operator, gradient direction and width at each pixel is calculated Value, specific formula is as follows:
In formula, IxFor the gradient value in horizontal direction, IyFor the gradient value in vertical direction, G (x, y) represents the width of gradient Value, θ (x, y) represent the direction of gradient;
360 degree of angles are divided into 12 regions by S142 as needed, and each region includes 30 degree, and entire histogram includes 12 dimensions;
Its amplitude is added in histogram according to the gradient direction of each pixel using Tri linear interpolation method by S143, Obtain the histograms of oriented gradients of each small cube.X i.e. by the gradient direction size of current pixel, pixel in small cube is sat Mark and these three values of y-coordinate are as interpolation weights, and the value for being used to insertion is the gradient magnitude of pixel, so that it may obtain every The histograms of oriented gradients of a small cube.
It should be noted that aforementioned gradient operator can be sobel operator (Sobel Operator), laplacian operator (La Pu Laplacian operater) etc. any one.
S15 chooses general direction of the maximum direction of gradient distribution quantity as the small cube in histogram;
S16 replaces small cube with direction arrow, obtains the face HOG characteristic pattern indicated by many arrows;
S17 detects the human face region in image with face HOG characteristic pattern similar portion by comparing.
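As an illustration of steps S141-S143 and S15, the following Python sketch computes the 12-bin oriented-gradient histogram of one 8*8 cell with a Sobel operator; it uses simple hard binning rather than the trilinear interpolation of S143, and the bin width of 30 degrees follows S142:

```python
import numpy as np
import cv2

def cell_orientation_histogram(cell):
    # cell: an 8x8 grayscale patch (uint8 or float32).
    gx = cv2.Sobel(cell, cv2.CV_32F, 1, 0)        # I_x, horizontal gradient
    gy = cv2.Sobel(cell, cv2.CV_32F, 0, 1)        # I_y, vertical gradient
    mag = np.sqrt(gx ** 2 + gy ** 2)              # G(x, y), formula (1)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0  # theta(x, y) in [0, 360)
    hist = np.zeros(12)                           # 12 bins of 30 degrees, S142
    bins = (ang // 30.0).astype(int) % 12
    np.add.at(hist, bins.ravel(), mag.ravel())    # magnitude-weighted votes
    return hist

def dominant_direction(hist):
    # S15: the bin with the largest accumulated magnitude gives the
    # cell's general direction (returned as the bin centre in degrees).
    return int(np.argmax(hist)) * 30 + 15
```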
In a further technical scheme, the CNN-based binary classifier in step S2 is trained and verified as follows:
S21: prepare the training data set, containing sharp images and blurred images placed in two separate folders; here, 500 sharp and 500 blurred images are used;
S22: build the CNN to be trained.
It should be noted that, since only a binary classification task is handled, the network does not need to be very deep. As shown in Fig. 3, the CNN consists of one input layer, four convolutional layers, four pooling layers, two fully connected layers, and one output layer, where each convolutional layer uses the ReLU activation function and the output layer uses softmax regression to output the probabilities of the two classes.
S23: solve with a mini-batch stochastic gradient descent optimization algorithm.
Since mini-batch stochastic gradient descent (MSGD) is used, its elements are defined first, mainly including: the cost function; the training, validation, and test models; and the parameter update rule (i.e., gradient descent).
S24: train iteratively with the data set prepared in step S21.
The training process sets the number of epochs and iterations; each pass traverses all the training data, in this example 1000 images, and one iteration traverses all the samples in the data set.
S25: after training, use the CNN to identify blurred images.
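A minimal Keras sketch of such a sharp/blurred binary classifier follows; the filter counts, input size, and learning settings are illustrative assumptions, since the patent fixes only the layer counts, the ReLU activations, the softmax output, and the use of mini-batch SGD:

```python
from tensorflow.keras import layers, models

def build_blur_classifier(input_shape=(128, 128, 3)):
    m = models.Sequential([layers.Input(shape=input_shape)])
    for filters in (32, 64, 128, 128):                # four conv + pooling stages
        m.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        m.add(layers.MaxPooling2D(2))
    m.add(layers.Flatten())
    m.add(layers.Dense(256, activation="relu"))       # first fully connected layer
    m.add(layers.Dense(2, activation="softmax"))      # softmax over {sharp, blurred}
    m.compile(optimizer="sgd",                        # mini-batch SGD, as in S23
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    return m
```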
Preferably, the concrete implementation of the deblurring processing performed in step S3 with the Lucy-Richardson algorithm on the blurred images classified in step S2 is as follows:
S31: establish the basic model
Y = \gamma * P \quad (2)
where Y is the degraded image, γ the original image, P the point spread function, and * denotes convolution; in discrete form,
y_j = \sum_{i=1}^{n} p(i,j)\,\lambda_i, \quad j = 1,\dots,d \quad (3)
where i and j index pixels, n is the number of pixels of the original image, d the number of pixels of the degraded image, p(i, j) the point spread function, λ_i the true value of pixel i, and y_j the observed value of pixel j;
S32: assume the observed image and the true image contain the same number of pixels;
S33: y_j follows a Poisson distribution, i.e.
y_j \sim \mathrm{Poisson}(\mu_j) \quad (4)
where the Poisson parameter μ_j is given by
\mu_j = \sum_{i=1}^{n} \lambda_i\, p(i,j) \quad (5)
S34: introduce the probability function of the Poisson distribution and a Gaussian PSF:
P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!}, \qquad p(i,j) = \exp\left(-\frac{d(i,j)^2}{2\delta^2}\right) \quad (6)
where λ, the expectation and variance of the Poisson distribution, is the average rate of occurrence of the random event per unit time (or unit area), k is the value of the variable, d(i, j) the distance between pixels i and j, and δ the variance;
S35: convert formula (4) into
z(i,j) \sim \mathrm{Poisson}(\lambda_i\, p(i,j)) \quad (7)
where z(i, j) is the contribution of pixel i mapped onto pixel j, so y_j can be expressed as
y_j = \sum_{k=1}^{n} z(k,j) \quad (8)
where λ_k is the true value of pixel k and p(k, j) the value of the point spread function at (k, j);
S36: assuming the image γ is known, z(i, j) can be estimated by
\hat{z}(i,j) = \frac{\lambda_i\, p(i,j)\, y_j}{\sum_{k=1}^{n} \lambda_k\, p(k,j)} \quad (9)
and λ can then be estimated by
\lambda_i = \frac{\sum_j \hat{z}(i,j)}{\sum_j p(i,j)} \quad (10)
S37: finally obtain the iteration
\lambda_i^{(t+1)} = \frac{\lambda_i^{(t)}}{\sum_j p(i,j)} \sum_j \frac{p(i,j)\, y_j}{\sum_k \lambda_k^{(t)}\, p(k,j)} \quad (11)
where t is the iteration index, λ_k^{(t)} the estimated true value of pixel k at iteration t, λ_i^{(t)} that of pixel i at iteration t, and λ_i^{(t+1)} that of pixel i at iteration t+1.
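The iteration of formula (11) can be written compactly with convolutions, since summing p(i, j) against an image amounts to a correlation with the PSF. The following NumPy/SciPy sketch assumes a spatially invariant PSF normalized to sum to one (so the denominator Σ_j p(i, j) in (11) equals one), for example the Gaussian PSF of formula (6); the iteration count, flat initial guess, and epsilon guard are illustrative choices:

```python
import numpy as np
from scipy.signal import fftconvolve

def lucy_richardson(observed, psf, iterations=30):
    observed = observed.astype(np.float64)
    estimate = np.full_like(observed, observed.mean())   # flat initial guess
    psf_mirror = psf[::-1, ::-1]                         # correlation kernel
    for _ in range(iterations):
        predicted = fftconvolve(estimate, psf, mode="same")      # sum_k lambda_k p(k, j)
        ratio = observed / (predicted + 1e-12)                   # y_j / prediction
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")  # update of formula (11)
    return estimate
```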
Furthermore, it is worth mentioning that in step S4 the face image is processed by SRCNN image super-resolution and the resulting high-resolution image is normalized to a uniform size of 128*128, realized as follows:
S41: construct the BiCubic interpolation function using the bicubic interpolation method:
W(x) = \begin{cases} (a+2)|x|^3 - (a+3)|x|^2 + 1, & |x| \le 1 \\ a|x|^3 - 5a|x|^2 + 8a|x| - 4a, & 1 < |x| < 2 \\ 0, & \text{otherwise} \end{cases} \quad (12)
where a is the coefficient, x the distance from the pixel to the point P, and W(x) the weight of that pixel;
S42: for the pixel (x, y) to be interpolated, take the 4x4 neighborhood points (x_i, y_j) around it, where x and y may be floating-point values and i, j = 0, 1, 2, 3;
S43: interpolate by the following formula and convert the image to the 128*128 size:
f(x,y) = \sum_{i=0}^{3}\sum_{j=0}^{3} f(x_i, y_j)\, W(x - x_i)\, W(y - y_j) \quad (13)
where f(x, y) is the converted image function, f(x_i, y_j) the original image function, W(x - x_i) the weight for x_i, and W(y - y_j) the weight for y_j.
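A direct Python transcription of the kernel of formula (12) follows; a = -0.5 is an assumed, conventional value, since the patent does not fix the coefficient. In practice, the whole of S41-S43 is equivalent to a bicubic resize, e.g. cv2.resize(img, (128, 128), interpolation=cv2.INTER_CUBIC):

```python
def bicubic_weight(x, a=-0.5):
    # W(x) of formula (12); a = -0.5 is the common default choice.
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0
```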
The low-resolution image is fed into a three-layer convolutional network that reconstructs the Y channel of the YCrCb color space (YCrCb, i.e., YUV, is mainly used for optimizing the transmission of color video signals: Y carries the luminance, that is, the gray level, and is a baseband signal, while U and V carry the chrominance, describing the hue and saturation of the image and specifying the color of a pixel; they are not baseband signals but are quadrature-modulated). The network form is (conv1+relu1)-(conv2+relu2)-(conv3). The SRCNN image super-resolution is based on this three-layer convolutional network; as shown in Fig. 4, the first convolutional layer has 9 × 9 kernels, 64 of them, and outputs 64 feature maps; the second has 1 × 1 kernels, 32 of them, and outputs 32 feature maps; the third has 5 × 5 kernels, 1 of them, and its single output feature map is the final reconstructed high-resolution image.
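A Keras sketch of this three-layer SRCNN and of its application to the bicubic-upscaled Y channel follows; the model is shown untrained, and the fixed 128*128 working size mirrors the normalization described above:

```python
import cv2
import numpy as np
from tensorflow.keras import layers, models

def build_srcnn():
    return models.Sequential([
        layers.Input(shape=(128, 128, 1)),                        # Y channel only
        layers.Conv2D(64, 9, padding="same", activation="relu"),  # conv1 + relu1
        layers.Conv2D(32, 1, padding="same", activation="relu"),  # conv2 + relu2
        layers.Conv2D(1, 5, padding="same"),                      # conv3: rebuilt Y
    ])

def super_resolve(bgr, model):
    up = cv2.resize(bgr, (128, 128), interpolation=cv2.INTER_CUBIC)  # S41-S43
    ycrcb = cv2.cvtColor(up, cv2.COLOR_BGR2YCrCb)
    y = ycrcb[..., :1].astype(np.float32) / 255.0
    y_sr = model.predict(y[None])[0, ..., 0]                         # reconstruct Y
    ycrcb[..., 0] = np.uint8(np.clip(y_sr, 0.0, 1.0) * 255.0)
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```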
Further, the concrete implementation of training the feature-extraction CNN used in step S5 is as follows:
S51: train the CNN with the Triplet Loss function, using the Euclidean distance as the similarity measure:
L = \sum_{i}^{N} \left[ \| f(x_i^a) - f(x_i^p) \|_2^2 - \| f(x_i^a) - f(x_i^n) \|_2^2 + a \right]_+ \quad (14)
where x_i^a is the anchor example, x_i^p the positive example, x_i^n the negative example, \| f(x_i^a) - f(x_i^p) \|_2^2 the Euclidean distance between the anchor and the positive example, \| f(x_i^a) - f(x_i^n) \|_2^2 that between the anchor and the negative example, and a the minimum margin between the anchor-positive distance and the anchor-negative distance;
S52: the network of step S51 comprises four convolutional layers, four max-pooling layers, and one fully connected stage; the model input is a triplet of size 128*128*3. First convolutional layer: 11 × 11 kernels, 48 of them, max pooling with stride 2; second convolutional layer: 5 × 5 kernels, 128 of them, max pooling with stride 2; third convolutional layer: 3 × 3 kernels, 192 of them, max pooling with stride 2; fourth convolutional layer: 3 × 3 kernels, 128 of them, max pooling with stride 2; the fully connected stage outputs a 128-dimensional feature vector. It should be noted that the network form is (conv1+relu1+max_pool1)-(conv2+relu2+max_pool2)-(conv3+relu3+max_pool3)-(conv4+relu4+max_pool4)-(dropout1)-(flatten1)-(flatten2)-(dense1); the fully connected stage comprises two flatten layers and one dense layer, where the flatten1 layer outputs 2048 dimensions, the flatten2 layer outputs 1024 dimensions, and the dense layer outputs the 128-dimensional feature vector; see Fig. 5 for details.
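The loss of formula (14) can be written directly in TensorFlow; the sketch below takes batches of embeddings already produced by the network, and the margin value is illustrative, since the patent leaves a unspecified:

```python
import tensorflow as tf

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Squared Euclidean distances of formula (14).
    d_pos = tf.reduce_sum(tf.square(anchor - positive), axis=-1)  # anchor-positive
    d_neg = tf.reduce_sum(tf.square(anchor - negative), axis=-1)  # anchor-negative
    # Hinge: penalize triplets where d_pos does not beat d_neg by the margin.
    return tf.reduce_mean(tf.maximum(d_pos - d_neg + margin, 0.0))
```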
Meanwhile the concrete methods of realizing that SVM classifier employed in step S6 is trained in advance in the present invention are as follows:
S61 establishes face database in advance;
Database specifically include intelligent security guard robot execute task place in it is all enter personnel everyone one just Face clear image, every image possess separate label;
Images all in database are extracted 128 dimensions by the CNN convolutional neural networks of step S5 training by S62 respectively Feature vector;
S63 designs a SVM between any two classes sample, one-to-one method classification based training is carried out, when to a unknown sample When being classified, last who gets the most votes's classification is the classification of the unknown sample.
By taking k sample as an example, it is assumed that there is the sample of k classification just to need to design k (k-1)/2 SVM, whenever database increases When adding image, all SVM of re -training are not needed, it is only necessary to re -training classifier relevant with image pattern is increased.
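A scikit-learn sketch of this one-versus-one stage follows; SVC builds the k(k-1)/2 pairwise machines internally and predicts by majority vote. The random gallery stands in for the 128-dimensional embeddings of S62 purely for illustration:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
gallery = rng.normal(size=(50, 128))     # 50 gallery images, 128-d embeddings
labels = np.repeat(np.arange(10), 5)     # 10 identities (classes), 5 images each

clf = SVC(kernel="linear", decision_function_shape="ovo")
clf.fit(gallery, labels)                 # trains 10*9/2 = 45 pairwise SVMs

probe = rng.normal(size=(1, 128))        # embedding of the face to identify
identity = clf.predict(probe)[0]         # who-gets-the-most-votes class
```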
Fig. 6 compares the face recognition method of the invention with two traditional algorithms. Fig. 7 shows the accuracy-versus-distance curves of the method and the two traditional algorithms (curve ① is the recognition accuracy of the method of the invention at different distances, curve ② that of the VGGFace algorithm, and curve ③ that of the FaceNet algorithm). As Fig. 6 and Fig. 7 show, the face recognition method for an intelligent security robot under unconstrained conditions of the invention effectively improves the accuracy with which the robot recognizes faces on patrol.
In conclusion, the present invention has the following advantages:
(1) blurred images are deblurred with the Lucy-Richardson algorithm, improving the recognition rate on images blurred by shaking while the intelligent security robot patrols;
(2) bicubic interpolation and SRCNN image super-resolution raise the image resolution, improving the recognition rate on distant, low-resolution images;
(3) training the CNN with the Triplet Loss function effectively increases the accuracy of face recognition;
(4) the multi-class SVM classifier is trained one-versus-one (OVO), so whenever images are added to the database it is not necessary to retrain all the SVMs, only the classifiers related to the added image samples.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (10)

1. A face recognition method for an intelligent security robot under unconstrained conditions, characterized by comprising the following steps:
S1: detecting faces in the image using HOG features, and cropping the corresponding face images;
S2: judging whether the cropped face image is blurred with a pre-trained binary classifier based on a CNN; if it is, going to step S3; otherwise, going to step S4;
S3: deblurring the face image classified as blurred in step S2 using the Lucy-Richardson algorithm, then going to step S4;
S4: processing the face image output by step S2 or S3 with SRCNN image super-resolution, and normalizing the resulting high-resolution image to a uniform size;
S5: extracting a feature vector with a pre-trained CNN;
S6: classifying the feature vector obtained in step S5 with a pre-trained SVM classifier to complete face recognition.
2. The face recognition method for an intelligent security robot under unconstrained conditions according to claim 1, characterized in that the concrete implementation of HOG-based face detection in step S1 is as follows:
S11: starting and loading the color face image;
S12: converting the color face image to a grayscale image;
S13: dividing the grayscale image into small cells of 8*8 pixels;
S14: computing the histogram of oriented gradients of each cell;
S15: choosing the direction with the largest accumulated gradient magnitude in the histogram as the general direction of the cell;
S16: replacing each cell with a directional arrow, obtaining a face HOG feature map composed of many arrows;
S17: detecting the face regions in the image by comparing its portions with the face HOG feature map.
3. The face recognition method for an intelligent security robot under unconstrained conditions according to claim 2, characterized in that the histogram of oriented gradients of each cell in step S14 is obtained as follows:
S141: convolving the cell with a gradient operator and computing the gradient direction and magnitude at each pixel:
G(x,y) = \sqrt{I_x^2 + I_y^2}, \quad \theta(x,y) = \arctan\left(I_y / I_x\right) \quad (1)
where I_x is the gradient in the horizontal direction, I_y the gradient in the vertical direction, G(x, y) the gradient magnitude, and θ(x, y) the gradient direction;
S142: dividing the 360-degree range into 12 regions of 30 degrees each, so the whole histogram has 12 dimensions;
S143: using trilinear interpolation, adding each pixel's magnitude into the histogram according to its gradient direction, obtaining the histogram of oriented gradients of the cell.
4. The face recognition method for an intelligent security robot under unconstrained conditions according to claim 1, characterized in that the CNN-based binary classifier in step S2 is trained and verified as follows:
S21: preparing the training data set, containing sharp images and blurred images placed in two separate folders;
S22: building the CNN to be trained;
S23: solving with a mini-batch stochastic gradient descent optimization algorithm;
S24: training iteratively with the data set prepared in step S21;
S25: after training, using the CNN to identify blurred images.
5. The face recognition method for an intelligent security robot under unconstrained conditions according to claim 4, characterized in that the CNN in step S22 consists of one input layer, four convolutional layers, four pooling layers, two fully connected layers, and one output layer, wherein each convolutional layer uses the ReLU activation function and the output layer uses softmax regression to output the probabilities of the two classes.
6. The face recognition method for an intelligent security robot under unconstrained conditions according to claim 1, characterized in that the concrete implementation of the deblurring processing performed in step S3 with the Lucy-Richardson algorithm on the blurred images classified in step S2 is as follows:
S31: establishing the basic model
Y = \gamma * P \quad (2)
where Y is the degraded image, γ the original image, P the point spread function, and * denotes convolution; in discrete form,
y_j = \sum_{i=1}^{n} p(i,j)\,\lambda_i, \quad j = 1,\dots,d \quad (3)
where i and j index pixels, n is the number of pixels of the original image, d the number of pixels of the degraded image, p(i, j) the point spread function, λ_i the true value of pixel i, and y_j the observed value of pixel j;
S32: assuming the observed image and the true image contain the same number of pixels;
S33: y_j follows a Poisson distribution, i.e.
y_j \sim \mathrm{Poisson}(\mu_j) \quad (4)
where the Poisson parameter μ_j is given by
\mu_j = \sum_{i=1}^{n} \lambda_i\, p(i,j) \quad (5)
S34: introducing the probability function of the Poisson distribution and a Gaussian PSF:
P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!}, \qquad p(i,j) = \exp\left(-\frac{d(i,j)^2}{2\delta^2}\right) \quad (6)
where λ is the expectation and variance of the Poisson distribution, k the value of the variable, d(i, j) the distance between pixels i and j, and δ the variance;
S35: converting formula (4) into
z(i,j) \sim \mathrm{Poisson}(\lambda_i\, p(i,j)) \quad (7)
where z(i, j) is the contribution of pixel i mapped onto pixel j, so y_j can be expressed as
y_j = \sum_{k=1}^{n} z(k,j) \quad (8)
where λ_k is the true value of pixel k and p(k, j) the value of the point spread function at (k, j);
S36: assuming the image γ is known, estimating z(i, j) by
\hat{z}(i,j) = \frac{\lambda_i\, p(i,j)\, y_j}{\sum_{k=1}^{n} \lambda_k\, p(k,j)} \quad (9)
and then estimating λ by
\lambda_i = \frac{\sum_j \hat{z}(i,j)}{\sum_j p(i,j)} \quad (10)
S37: finally obtaining the iteration
\lambda_i^{(t+1)} = \frac{\lambda_i^{(t)}}{\sum_j p(i,j)} \sum_j \frac{p(i,j)\, y_j}{\sum_k \lambda_k^{(t)}\, p(k,j)} \quad (11)
where t is the iteration index, λ_k^{(t)} is the estimated true value of pixel k at iteration t, λ_i^{(t)} that of pixel i at iteration t, and λ_i^{(t+1)} that of pixel i at iteration t+1.
7. The face recognition method for an intelligent security robot under unconstrained conditions according to claim 1, characterized in that in step S4 the face image is processed by SRCNN image super-resolution and the resulting high-resolution image is normalized to a uniform size of 128*128, realized as follows:
S41: constructing the BiCubic interpolation function using the bicubic interpolation method:
W(x) = \begin{cases} (a+2)|x|^3 - (a+3)|x|^2 + 1, & |x| \le 1 \\ a|x|^3 - 5a|x|^2 + 8a|x| - 4a, & 1 < |x| < 2 \\ 0, & \text{otherwise} \end{cases} \quad (12)
where a is the coefficient, x the distance from the pixel to the point P, and W(x) the weight of that pixel;
S42: for the pixel (x, y) to be interpolated, taking the 4x4 neighborhood points (x_i, y_j) around it, where x and y may be floating-point values and i, j = 0, 1, 2, 3;
S43: interpolating by the following formula and converting the image to the 128*128 size:
f(x,y) = \sum_{i=0}^{3}\sum_{j=0}^{3} f(x_i, y_j)\, W(x - x_i)\, W(y - y_j) \quad (13)
where f(x, y) is the converted image function, f(x_i, y_j) the original image function, W(x - x_i) the weight for x_i, and W(y - y_j) the weight for y_j.
8. The face recognition method for an intelligent security robot under unconstrained conditions according to claim 7, characterized in that the SRCNN image super-resolution used in step S4 is based on a three-layer convolutional network, wherein the first convolutional layer has 9 × 9 kernels, 64 of them; the second has 1 × 1 kernels, 32 of them; and the third has 5 × 5 kernels, 1 of them.
9. The face recognition method for an intelligent security robot under unconstrained conditions according to claim 1, characterized in that the concrete implementation of training the feature-extraction CNN used in step S5 is as follows:
S51: training the CNN with the Triplet Loss function, using the Euclidean distance as the similarity measure:
L = \sum_{i}^{N} \left[ \| f(x_i^a) - f(x_i^p) \|_2^2 - \| f(x_i^a) - f(x_i^n) \|_2^2 + a \right]_+ \quad (14)
where x_i^a is the anchor example, x_i^p the positive example, x_i^n the negative example, \| f(x_i^a) - f(x_i^p) \|_2^2 the Euclidean distance between the anchor and the positive example, \| f(x_i^a) - f(x_i^n) \|_2^2 that between the anchor and the negative example, and a the minimum margin between the anchor-positive distance and the anchor-negative distance;
S52: the network of step S51 comprises four convolutional layers, four max-pooling layers, and one fully connected layer; the model input is a triplet of size 128*128*3; first convolutional layer: 11 × 11 kernels, 48 of them, max pooling with stride 2; second convolutional layer: 5 × 5 kernels, 128 of them, max pooling with stride 2; third convolutional layer: 3 × 3 kernels, 192 of them, max pooling with stride 2; fourth convolutional layer: 3 × 3 kernels, 128 of them, max pooling with stride 2; the fully connected layer outputs a 128-dimensional feature vector.
10. The face recognition method for an intelligent security robot under unconstrained conditions according to claim 1, characterized in that the concrete implementation of pre-training the SVM classifier used in step S6 is as follows:
S61: establishing a face database in advance;
S62: extracting a 128-dimensional feature vector from every image in the database with the CNN trained in step S5;
S63: designing one SVM between every pair of classes and performing one-versus-one classification training; when an unknown sample is classified, the class that receives the most votes is the class of that sample.
CN201910492676.XA 2019-06-06 2019-06-06 Face recognition method for an intelligent security robot under unconstrained conditions Pending CN110210432A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910492676.XA CN110210432A (en) 2019-06-06 2019-06-06 Face recognition method for an intelligent security robot under unconstrained conditions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910492676.XA CN110210432A (en) 2019-06-06 2019-06-06 Face recognition method for an intelligent security robot under unconstrained conditions

Publications (1)

Publication Number Publication Date
CN110210432A 2019-09-06

Family

ID=67791373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910492676.XA Face recognition method for an intelligent security robot under unconstrained conditions 2019-06-06 2019-06-06

Country Status (1)

Country Link
CN (1) CN110210432A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060104504A1 (en) * 2004-11-16 2006-05-18 Samsung Electronics Co., Ltd. Face recognition method and apparatus
CN106845330A (en) * 2016-11-17 2017-06-13 北京品恩科技股份有限公司 A kind of training method of the two-dimension human face identification model based on depth convolutional neural networks
CN106600538A (en) * 2016-12-15 2017-04-26 武汉工程大学 Human face super-resolution algorithm based on regional depth convolution neural network
CN106909882A (en) * 2017-01-16 2017-06-30 广东工业大学 A kind of face identification system and method for being applied to security robot
CN108830262A (en) * 2018-07-25 2018-11-16 上海电力学院 Multi-angle human face expression recognition method under natural conditions

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
306247770: "SVM实现多分类常用的两种方法以及一对一法的代码" [Two common multi-class SVM methods and code for the one-versus-one method], CSDN, 5 December 2018, page 2 *
BICELOVE: "图像放大并进行BiCubic插值" [Image enlargement with BiCubic interpolation], CSDN, 23 April 2014, pages 1-8 *
CHAO DONG ET AL.: "Learning a Deep Convolutional Network for Image Super-Resolution", Computer Vision - ECCV 2014, pages 184-199 *
LI QIUZHEN ET AL. (李秋珍等): "基于卷积神经网络的人脸图像质量评价" [Face image quality assessment based on convolutional neural networks], Journal of Computer Applications (计算机应用), 31 March 2019, pages 2-4 *
TIAN LEI (田雷): "基于特征学习的无约束环境下的人脸识别研究" [Research on face recognition in unconstrained environments based on feature learning], China Doctoral Dissertations Full-text Database, Information Science and Technology, 15 January 2019 *
TAN HENGLIANG (谭恒良): "基于融合全局与局部HOG特征的人脸识别方法" [Face recognition method based on fusing global and local HOG features], China Masters' Theses Full-text Database, Information Science and Technology, 15 May 2012, page 2 *
雪入红尘: "图像去模糊(二)——Richardson–Lucy算法" [Image deblurring (II): the Richardson-Lucy algorithm], CSDN, 17 October 2016, pages 1-5 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111209845A (en) * 2020-01-03 2020-05-29 平安科技(深圳)有限公司 Face recognition method and device, computer equipment and storage medium
CN111310631A (en) * 2020-02-10 2020-06-19 湖南大学 Target tracking method and system for rotor operation flying robot
CN111310631B (en) * 2020-02-10 2021-05-07 湖南大学 Target tracking method and system for rotor operation flying robot
WO2021184894A1 (en) * 2020-03-20 2021-09-23 深圳市优必选科技股份有限公司 Deblurred face recognition method and system and inspection robot
CN111460939A (en) * 2020-03-20 2020-07-28 深圳市优必选科技股份有限公司 Deblurring face recognition method and system and inspection robot
CN111444899A (en) * 2020-05-14 2020-07-24 聚好看科技股份有限公司 Remote examination control method, server and terminal
CN111444899B (en) * 2020-05-14 2023-10-31 聚好看科技股份有限公司 Remote examination control method, server and terminal
CN111798414A (en) * 2020-06-12 2020-10-20 北京阅视智能技术有限责任公司 Method, device and equipment for determining definition of microscopic image and storage medium
CN112037406A (en) * 2020-08-27 2020-12-04 江门明浩电力工程监理有限公司 Intelligent construction site access control method, system and equipment
CN112561879A (en) * 2020-12-15 2021-03-26 北京百度网讯科技有限公司 Ambiguity evaluation model training method, image ambiguity evaluation method and device
CN112561879B (en) * 2020-12-15 2024-01-09 北京百度网讯科技有限公司 Ambiguity evaluation model training method, image ambiguity evaluation method and image ambiguity evaluation device
CN112669207A (en) * 2020-12-21 2021-04-16 四川长虹电器股份有限公司 Method for enhancing resolution of face image based on television camera
KR102303002B1 (en) * 2021-03-31 2021-09-16 인하대학교 산학협력단 Method and Apparatus for Deblurring of Human and Scene Motion using Pseudo-blur Synthesizer

Similar Documents

Publication Publication Date Title
CN110210432A (en) Face recognition method for an intelligent security robot under unconstrained conditions
Amato et al. Deep learning for decentralized parking lot occupancy detection
CN109255364B (en) Scene recognition method for generating countermeasure network based on deep convolution
CN108460356B (en) Face image automatic processing system based on monitoring system
Rachmadi et al. Vehicle color recognition using convolutional neural network
CN112380952A (en) Power equipment infrared image real-time detection and identification method based on artificial intelligence
JP6192271B2 (en) Image processing apparatus, image processing method, and program
CN104091341B (en) Image blur detection method based on saliency detection
CN109961003A (en) FPGA-based embedded airborne auxiliary inspection device for power transmission lines
CN104036468B (en) Single-frame image super-resolution reconstruction method based on pre-amplified non-negative neighborhood embedding
CN109657715B (en) Semantic segmentation method, device, equipment and medium
CN107273894A (en) License plate recognition method, apparatus, storage medium and processor
CN112132145B (en) Image classification method and system based on model extended convolutional neural network
CN112307853A (en) Detection method of aerial image, storage medium and electronic device
CN110399908A (en) Classification method and device based on event mode camera, storage medium, electronic device
CN110164139A (en) Roadside parking detection and recognition system and method
CN111079511A (en) Document automatic classification and optical character recognition method and system based on deep learning
Mu et al. Single image super resolution with high resolution dictionary
CN110516731A (en) Deep-learning-based visual odometry feature point detection method and system
CN108764289B (en) Method and system for classifying UI (user interface) abnormal pictures based on convolutional neural network
CN116580324A (en) YOLOv5-based unmanned aerial vehicle ground target detection method
US7502524B2 (en) Method and apparatus of processing a skin print image
CN116246158A (en) Self-supervision pre-training method suitable for remote sensing target detection task
CN115984133A (en) Image enhancement method, vehicle snapshot method, device and medium
CN114519799A (en) Real-time detection method and system for multi-feature seat state

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination