CN111062399B - Monitoring video face recognition method based on color dithering and image mixing

Info

Publication number: CN111062399B
Application number: CN201911278114.1A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN111062399A (application publication)
Prior art keywords: training, image, steps, face, class
Inventors: 廖志梁, 王道宁, 陶亮, 郭宝珠
Current and original assignee: Yicheng Gaoke Dalian Technology Co., Ltd.
Priority and filing date: 2019-12-12 (CN201911278114.1A)
Application publication: CN111062399A, 2020-04-24
Grant publication: CN111062399B, 2023-04-25
Legal status: Active (granted)

Classifications

    • G06V 10/20 — Image preprocessing (under G06V 10/00, Arrangements for image or video recognition or understanding)
    • G06F 18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting (under G06F 18/21, Design or setup of recognition systems or techniques)
    • G06F 18/217 — Validation; performance evaluation; active pattern learning techniques (under G06F 18/21)
    • G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions (under G06V 40/10, Human or animal bodies)


Abstract

A monitoring video face recognition method based on color dithering and image mixing comprises the following steps: 1) preparing a face dataset; 2) training in stages; 3) training with images enhanced by color dithering; 4) training with images enhanced by image mixing; 5) judging whether the performance of the final face recognition model obtained by training meets the use requirements. The invention enhances images by color dithering, effectively reducing inaccurate or failed recognition caused by color changes in face images; it enhances images by image mixing, effectively reducing inaccurate or failed recognition caused by blurred face images; the color-dithered and image-mixed images are used alternately, in stages, during training, which effectively aids network convergence; and whether the face recognition model is ready for practical use is judged from the overlap area of the probability density functions combined with the accuracy, which is more reliable.

Description

Monitoring video face recognition method based on color dithering and image mixing
Technical Field
The invention relates to the technical field of image processing.
Background
Using deep learning, researchers train a network on a large number of face images with corresponding identity labels, extracting face features that can distinguish different people, to obtain a model. When a face image without an identity label is input, the trained model can accurately identify the label of the image.
At present, most face recognition research performs model training and evaluation directly on public datasets; when the accuracy of the trained model on a public test set exceeds 99%, training is considered finished and the model is considered good.
Public datasets, however, are constrained by how they are collected and produced: their image conditions (such as color, illumination, angle, and sharpness) are relatively ideal, leaving a large gap between the public data and real surveillance-video application scenarios. The face regions in surveillance video often suffer strong noise interference and motion blur, along with large illumination changes and low resolution. As a result, models trained directly on public datasets often perform poorly in actual use. Moreover, the capability of a trained model cannot be objectively reflected by the single number of the accuracy alone.
Disclosure of Invention
The invention provides a monitoring video face recognition method based on color dithering and image mixing.
The technical solution adopted by the invention to achieve this purpose is as follows: a monitoring video face recognition method based on color dithering and image mixing comprises the following steps:
1) Face dataset preparation:
a) Combining multiple public face datasets into a training dataset;
b) Using a monitoring video face dataset that did not participate in the construction of the training dataset as a test dataset, the test dataset comprising intra-class image pairs and inter-class image pairs;
2) Training is carried out in stages, and the specific method is as follows:
a) Selecting a network;
b) Selecting a loss function;
c) Starting training, recording the number of training epochs, and computing the accuracy, recall, and loss values on the training dataset after each epoch's weight update;
d) When the training model converges normally, the loss value decreases steadily; once the accuracy reaches a certain value, proceeding to step 3);
3) Training with color-dithered enhanced images, the specific training method being as follows:
a) Each training epoch generates a random number α, α ∈ (0, 1); the image color dithering coefficients are then: α_light = 1 + α, α_dark = 1 − α;
b) When the training epoch number is odd, multiplying the channels of all images of the training dataset by α_light, the color-dithered image being I_new = min(I_old · α_light, 255); when the epoch number is even, multiplying all image channels by α_dark, the color-dithered image being I_new = max(I_old · α_dark, 0);
c) The labels of the color-dithered images remaining unchanged, inputting the images into the network for continued training, and computing the accuracy, recall, and loss values on the training dataset after each epoch's weight update;
d) Proceeding to step 4) after the accuracy reaches a certain value;
4) Training with image-mixed enhanced images, the specific training method being as follows:
a) For any image I_0 with label L_0 in the training set, generating a random number β, β ∈ (0, 1), and randomly selecting another image I_1 with label L_1 from the training set; the mixed image being I_new = β · I_0 + (1 − β) · I_1 and its corresponding new label being L_new = β · L_0 + (1 − β) · L_1; performing this operation on all images in the training set;
b) Inputting the label-mixed images into the network for continued training, and computing the accuracy, recall, and loss values on the training dataset after each epoch's weight update;
c) After the accuracy reaches a certain value, reusing the images from step 1) that underwent neither color dithering nor image mixing to continue training; when the accuracy is larger than a certain value and the loss value is smaller than a certain value, saving the model to finish training;
5) Judging whether the performance of the final face recognition model obtained by training meets the use requirement by using the image pairs in the test set, the specific judging method being as follows:
a) Calculating the accuracy A of the final model saved in step 4) on the test set;
b) Extracting features of the inter-class and intra-class image pairs of the test dataset obtained in steps 1)-b) using the final model saved in step 4), and computing the distance t_dis between the two images in each pair;
c) Fitting all distance values obtained from the intra-class image pairs to obtain the intra-class probability density function TI(t_dis), and fitting all distance values obtained from the inter-class image pairs to obtain the inter-class probability density function TC(t_dis);
d) Drawing the probability density curves of TI(t_dis) and TC(t_dis); if the area of the overlapping region of the two curves is smaller than a certain value and the accuracy A is larger than a certain value, the model meets the conditions and is ready for practical use; otherwise, the face recognition model does not reach the standard and needs to be retrained.
In steps 1)-a), the number of categories in the training dataset exceeds 10000, each category contains at least 10 images, and the face images of each distinct person form a single category.
In steps 1)-b), the intra-class image pairs are formed by 3 draws without replacement within each category, randomly collecting 2 images per draw; the inter-class image pairs are formed by randomly selecting 10% of all categories and randomly drawing 1 image without replacement from each, combining these images pairwise to obtain one part of the pairs, then again randomly selecting 10% of all categories, drawing 1 image without replacement from each, and combining those pairwise to obtain the other part; the two parts together form the inter-class image pair set.
In steps 2)-a), the network structure is ResNet34, ResNet50, ResNet101, ResNet152, or MobileNet_V3.
In steps 2)-b), the loss function is the Triplet Loss, or a margin-based classification loss such as Large-Margin Softmax Loss or SphereFace.
In steps 2)-d), the certain value is 20%.
In steps 3)-d), the accuracy drops at the start of training on the new data, and the network must re-converge through further training; once the training model re-converges, the loss decreases steadily, and training proceeds to step 4) when the accuracy reaches 50%.
In steps 4)-c), the accuracy likewise drops at the start of training on the new data, and the network must re-converge through further training; once the model re-converges and the loss decreases steadily, training continues until the accuracy reaches 90%, after which the images from step 1) that underwent neither color dithering nor image mixing are used again for continued training; when the accuracy exceeds 98% and the loss falls below 0.1, the model is saved and training is finished.
In steps 5)-c), all distance values obtained from the intra-class image pairs are fitted with a Gaussian mixture model, and all distance values obtained from the inter-class image pairs are fitted with a Gaussian mixture model.
In steps 5)-d), if the area of the overlapping region of the two curves is smaller than x ∈ [0.1, 0.3] and A ≥ 85%, the model is considered to meet the conditions and is ready for practical use.
According to the monitoring video face recognition method based on color dithering and image mixing, enhancing the images by color dithering adapts the trained model to the large color differences among face images captured by different cameras at different times, effectively reducing inaccurate or failed recognition caused by color changes in face images. Enhancing the images by image mixing adapts the trained model to the face blur caused by capture timing and subject motion in surveillance video, effectively reducing inaccurate or failed recognition caused by blurred face images. Using the color-dithered and image-mixed images alternately, in stages, during training effectively aids network convergence and avoids the divergence that can occur when all enhanced images participate in training from the very beginning. Judging whether the face recognition model is ready for practical use from the overlap area of the intra-class and inter-class probability density functions, combined with the accuracy, is more reliable; it also exposes more intuitively where the model's feature extraction falls short, guiding the next round of training.
Drawings
FIG. 1 is a schematic diagram of probability curves and overlapping regions according to the present invention.
Detailed Description
The monitoring video face recognition method based on color dithering and image mixing mainly comprises the following 5 steps:
1) The preparation of the face data set comprises the following specific preparation processes:
a) Several public face datasets are combined as our training dataset, ensuring that the number of categories exceeds 10000 and that each category contains at least 10 images. All face images of one person form a single class.
b) A monitoring video face dataset that did not participate in the construction of the training dataset is used as the test dataset, defined as follows:
Intra-class image pairs: 3 draws without replacement are performed within each category, randomly collecting 2 images per draw, to build the intra-class image pair set (e.g., 10000 categories ultimately yield an intra-class set of 30000 pairs). Inter-class image pairs: 10% of all categories are randomly selected, and 1 image is randomly drawn without replacement from each; these images are combined pairwise to obtain one part of the pairs. Again 10% of all categories are randomly selected, 1 image is drawn without replacement from each, and these are combined pairwise to obtain the other part; the two parts together form the inter-class image pair set (e.g., 10000 categories ultimately yield an inter-class set containing 99000 pairs). A minimal sketch of this pair construction follows.
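The pair construction above can be sketched in a few lines. This is a minimal illustration under assumptions, not the patent's code: the test set is taken to be a dict mapping a class id to a list of image paths, the function names are ours, and since the exact pairing rule behind the 99000-pair example is not fully specified, the inter-class sketch simply pairs the sampled singles within each round.

    import random
    from itertools import combinations

    def build_intra_pairs(dataset, draws=3):
        """3 draws without replacement per class, 2 random images per draw."""
        pairs = []
        for images in dataset.values():       # each class has >= 10 images
            pool = list(images)
            for _ in range(draws):
                pair = random.sample(pool, 2)  # 2 images from the same class
                for img in pair:
                    pool.remove(img)           # without replacement
                pairs.append(tuple(pair))
        return pairs

    def build_inter_pairs(dataset, fraction=0.1, rounds=2):
        """Each round: sample 10% of classes, 1 image per class, pair them up."""
        pairs = []
        for _ in range(rounds):
            chosen = random.sample(list(dataset), int(len(dataset) * fraction))
            singles = [random.choice(dataset[c]) for c in chosen]
            pairs.extend(combinations(singles, 2))  # every pair crosses classes
        return pairs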
2) Training is carried out in stages, and the specific method is as follows:
a) For the network architecture, select a mature and widely used structure, such as ResNet34, ResNet50, ResNet101, ResNet152, or MobileNet_V3;
b) For the loss function, use a common metric-learning loss such as the Triplet Loss, or a margin-based classification loss such as Large-Margin Softmax Loss or SphereFace loss;
c) Start training, record the number of training epochs, and compute the accuracy, recall, and loss on the training dataset after each epoch's weight update;
d) When the training model converges normally, the loss value decreases steadily; proceed to step 3 once the accuracy reaches a certain value (20% is taken as the example here). A sketch of the full staged schedule of steps 2) to 4) follows.
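The staged schedule of steps 2) to 4) can be condensed into the following sketch. Everything here is a generic assumed interface — train_one_epoch, evaluate, jitter_fn, and mixup_fn are placeholders of our own — while the 20%/50%/90%/98% thresholds and the 0.1 loss bound are the example values given in the text.

    def staged_training(model, base_data, jitter_fn, mixup_fn,
                        train_one_epoch, evaluate):
        stage, epoch = "plain", 0
        while True:
            epoch += 1
            if stage in ("plain", "final"):
                data = base_data                    # unaugmented images
            elif stage == "jitter":
                data = jitter_fn(base_data, epoch)  # step 3): color dithering
            else:
                data = mixup_fn(base_data)          # step 4): image mixing
            train_one_epoch(model, data)
            acc, loss = evaluate(model, base_data)  # accuracy/loss per epoch
            if stage == "plain" and acc >= 0.20:
                stage = "jitter"
            elif stage == "jitter" and acc >= 0.50:
                stage = "mixup"
            elif stage == "mixup" and acc >= 0.90:
                stage = "final"                     # back to unaugmented images
            elif stage == "final" and acc > 0.98 and loss < 0.1:
                return model                        # save the model; training done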
3) Training with color-dithered enhanced images proceeds as follows:
a) Each training epoch generates a random number α, α ∈ (0, 1); the image color dithering coefficients are then: α_light = 1 + α, α_dark = 1 − α;
b) When the epoch number is odd, multiply the 3 channels of every image in the training dataset by α_light, giving the color-dithered image I_new = min(I_old · α_light, 255). When the epoch number is even, multiply the 3 channels of every image by α_dark, giving I_new = max(I_old · α_dark, 0);
c) The labels of the color-dithered images remain unchanged; the images are fed into the network for continued training, and the accuracy, recall, and loss on the training dataset are computed after each epoch's weight update;
d) In the initial stage of training with the new data the accuracy drops, and several epochs are needed for the network to re-converge. Once the training model re-converges, the loss decreases steadily; proceed to step 4 when the accuracy reaches a certain value (50% is taken as the example here). A numpy sketch of the dithering operation follows;
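A minimal numpy sketch of this per-epoch dithering, assuming 8-bit RGB images stacked in an (N, H, W, 3) array; the min/max clamps keep pixel values inside the valid [0, 255] range.

    import numpy as np

    def color_dither(images, epoch, rng=None):
        """images: uint8 array of shape (N, H, W, 3); labels stay unchanged."""
        if rng is None:
            rng = np.random.default_rng()
        alpha = rng.uniform(0.0, 1.0)   # one random alpha per epoch
        x = images.astype(np.float32)
        if epoch % 2 == 1:              # odd epoch: brighten with 1 + alpha
            x = np.minimum(x * (1.0 + alpha), 255.0)
        else:                           # even epoch: darken with 1 - alpha
            x = np.maximum(x * (1.0 - alpha), 0.0)
        return x.astype(np.uint8)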
4) Training with image-mixed enhanced images proceeds as follows:
a) For each image I_0 with label L_0 in the training set, generate a random number β, β ∈ (0, 1), and randomly select another image I_1 with label L_1 from the training set; the mixed image is I_new = β · I_0 + (1 − β) · I_1 and its corresponding new label is L_new = β · L_0 + (1 − β) · L_1. Apply this operation to every image in the training set;
b) The label-mixed images are fed into the network for continued training, and the accuracy, recall, and loss on the training dataset are computed after each epoch's weight update;
c) In the initial stage of training with the new data the accuracy drops, and several epochs are needed for the network to re-converge. Once the training model re-converges, the loss decreases steadily; after the accuracy reaches a certain value (90% is taken as the example here), the images from step 1) that underwent neither color dithering nor image mixing are used again for continued training. When the accuracy exceeds 98% and the loss falls below 0.1, the model is saved and training ends. A sketch of the mixing operation follows;
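A minimal sketch of the mixing step, under the assumption that images are float arrays of shape (N, H, W, 3) and labels are one-hot arrays of shape (N, C), so that labels can be mixed with the same weight β as the images; the random permutation plays the role of "randomly selecting another image I_1".

    import numpy as np

    def mix_images(images, labels, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        n = len(images)
        partner = rng.permutation(n)            # a random I_1 for each I_0
        beta = rng.uniform(0.0, 1.0, size=n)    # one beta per image
        bx = beta.reshape(n, 1, 1, 1)           # broadcast over H, W, channels
        by = beta.reshape(n, 1)                 # broadcast over classes
        mixed_x = bx * images + (1.0 - bx) * images[partner]
        mixed_y = by * labels + (1.0 - by) * labels[partner]
        return mixed_x, mixed_y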
5) Whether the performance of the final trained face recognition model meets the use requirements is judged with the image pairs in the test set, as follows:
a) Calculating the accuracy A of the final model stored in the step 4) on the test set;
b) Using the final model saved in step 4), extract features for the inter-class and intra-class image pairs of the test dataset from steps 1)-b), and compute the distance t_dis between the two images of each pair (Euclidean distance is taken as the example here).
c) Fit all distance values obtained from the intra-class image pairs with a Gaussian mixture model to obtain the intra-class probability density function TI(t_dis), and fit all distance values obtained from the inter-class image pairs with a Gaussian mixture model to obtain the inter-class probability density function TC(t_dis).
d) Plot the probability density curves of TI(t_dis) and TC(t_dis). As shown in FIG. 1, if the area of the overlapping region of the two curves is smaller than x ∈ [0.1, 0.3] and A ≥ 85%, the model is considered to meet the conditions and can be put to practical use. Otherwise, the face recognition model is considered below standard and is retrained with an adjusted strategy. A sketch of this acceptance criterion follows.
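The acceptance criterion can be sketched as follows: fit Gaussian mixture models to the intra-class and inter-class distance samples and integrate the pointwise minimum of the two fitted densities to estimate the overlap area. The sketch uses scikit-learn's GaussianMixture; the component count of 2 and the grid resolution are our assumptions, not values from the patent.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def density_overlap(intra_dists, inter_dists, n_components=2, grid_size=2000):
        intra = np.asarray(intra_dists, dtype=float).reshape(-1, 1)
        inter = np.asarray(inter_dists, dtype=float).reshape(-1, 1)
        ti = GaussianMixture(n_components).fit(intra)  # TI(t_dis)
        tc = GaussianMixture(n_components).fit(inter)  # TC(t_dis)
        grid = np.linspace(min(intra.min(), inter.min()),
                           max(intra.max(), inter.max()),
                           grid_size).reshape(-1, 1)
        pdf_ti = np.exp(ti.score_samples(grid))  # score_samples is log-density
        pdf_tc = np.exp(tc.score_samples(grid))
        # overlap area = integral of the pointwise minimum of the two curves
        return np.trapz(np.minimum(pdf_ti, pdf_tc), grid.ravel())

The model is accepted when density_overlap(...) is below the chosen x ∈ [0.1, 0.3] and the test-set accuracy A is at least 85%.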
For the surveillance-video scenario, the method augments the public datasets so that the conditions of the face images (color, illumination, angle, sharpness, and the like) vary, bringing the training data closer to faces in real surveillance video and making the trained model generalize better. Features are then extracted from the test set with the model produced by the face recognition network, the probability density functions of the feature distances are computed, and from these it is judged whether the model can be used in practice; if not, the analysis indicates where the problem lies and hence the direction for optimizing the model.
The present invention has been described in terms of embodiments, and it will be appreciated by those of skill in the art that various changes can be made to the features and embodiments, or equivalents can be substituted, without departing from the spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (10)

1. A monitoring video face recognition method based on color dithering and image mixing is characterized in that: the method comprises the following steps:
1) Face dataset preparation:
a) Combining multiple public face datasets into a training dataset;
b) Using a monitoring video face dataset that did not participate in the construction of the training dataset as a test dataset, the test dataset comprising intra-class image pairs and inter-class image pairs;
2) Training is carried out in stages, and the specific method is as follows:
a) Selecting a network;
b) Selecting a loss function;
c) Starting training, recording the number of training epochs, and computing the accuracy, recall, and loss values on the training dataset after each epoch's weight update;
d) When the training model converges normally, the loss value decreases steadily; once the accuracy reaches a certain value, proceeding to step 3);
3) Training with color-dithered enhanced images, the specific training method being as follows:
a) Each training epoch generates a random number α, α ∈ (0, 1); the image color dithering coefficients are then: α_light = 1 + α, α_dark = 1 − α;
b) When the training epoch number is odd, multiplying the channels of all images of the training dataset by α_light, the color-dithered image being I_new = min(I_old · α_light, 255); when the epoch number is even, multiplying all image channels of the training dataset by α_dark, the color-dithered image being I_new = max(I_old · α_dark, 0);
c) The labels of the color-dithered images remaining unchanged, inputting the images into the network for continued training, and computing the accuracy, recall, and loss values on the training dataset after each epoch's weight update;
d) Entering step 4) after the accuracy reaches a certain value;
4) Training with image-mixed enhanced images, the specific training method being as follows:
a) For any image I_0 with label L_0 in the training set, generating a random number β, β ∈ (0, 1), and randomly selecting another image I_1 with label L_1 from the training set; the mixed image being I_new = β · I_0 + (1 − β) · I_1 and its corresponding new label being L_new = β · L_0 + (1 − β) · L_1; performing this operation on all images in the training set;
b) Inputting the label-mixed images into the network for continued training, and computing the accuracy, recall, and loss values on the training dataset after each epoch's weight update;
c) After the accuracy reaches a certain value, reusing the images from step 1) that underwent neither color dithering nor image mixing to continue training; when the accuracy is larger than a certain value and the loss value is smaller than a certain value, saving the model to finish training;
5) Judging whether the performance of the final face recognition model obtained by training meets the use requirement by using the image pairs in the test set, the specific judging method being as follows:
a) Calculating the accuracy A of the final model stored in the step 4) on the test set;
b) Extracting features of the inter-class and intra-class image pairs on the test data set obtained in the steps 1) to b) by utilizing the final model stored in the step 4), and calculating the distance t_dis between two images in each pair;
c) Fitting all distance values obtained from the intra-class image pairs to obtain the intra-class probability density function TI(t_dis), and fitting all distance values obtained from the inter-class image pairs to obtain the inter-class probability density function TC(t_dis);
d) Drawing the probability density curves of TI(t_dis) and TC(t_dis); if the area of the overlapping region of the two curves is smaller than a certain value and the accuracy A is larger than a certain value, the model meets the conditions and can be used; otherwise, the face recognition model does not reach the standard and needs to be retrained.
2. The monitoring video face recognition method based on color dithering and image mixing according to claim 1, wherein: in steps 1)-a), the number of categories in the training dataset exceeds 10000, each category contains at least 10 images, and the face images of each distinct person form a single category.
3. The monitoring video face recognition method based on color dithering and image mixing according to claim 1, wherein: in steps 1)-b), the intra-class image pairs are formed by 3 draws without replacement within each category, randomly collecting 2 images per draw, to form the intra-class image pair set; the inter-class image pairs are formed by randomly selecting 10% of all categories and randomly drawing 1 image without replacement from each, combining these images pairwise to obtain one part of the pairs, then again randomly selecting 10% of all categories, drawing 1 image without replacement from each, and combining those pairwise to obtain the other part; the two parts together form the inter-class image pair set.
4. The monitoring video face recognition method based on color dithering and image mixing according to claim 1, wherein: in steps 2)-a), the network structure is ResNet34, ResNet50, ResNet101, ResNet152, or MobileNet_V3.
5. The monitoring video face recognition method based on color dithering and image mixing according to claim 1, wherein: in steps 2)-b), the loss function is the Triplet Loss, or a margin-based classification loss such as Large-Margin Softmax Loss or SphereFace.
6. The monitoring video face recognition method based on color dithering and image mixing according to claim 1, wherein: in steps 2)-d), the certain value is 20%.
7. The monitoring video face recognition method based on color dithering and image mixing according to claim 1, wherein: in steps 3)-d), the accuracy drops at the start of training on the new data and the network must re-converge through further training; once the training model re-converges, the loss decreases steadily, and training proceeds to step 4) when the accuracy reaches 50%.
8. The monitoring video face recognition method based on color dithering and image mixing according to claim 1, wherein: in steps 4)-c), the accuracy drops at the start of training on the new data and the network must re-converge through further training; once the model re-converges and the loss decreases steadily, training continues until the accuracy reaches 90%, after which the images from step 1) that underwent neither color dithering nor image mixing are used again for continued training; when the accuracy exceeds 98% and the loss falls below 0.1, the model is saved and training is finished.
9. The monitoring video face recognition method based on color dithering and image mixing according to claim 1, wherein: in steps 5)-c), all distance values obtained from the intra-class image pairs are fitted with a Gaussian mixture model, and all distance values obtained from the inter-class image pairs are fitted with a Gaussian mixture model.
10. The monitoring video face recognition method based on color dithering and image mixing according to claim 1, wherein: in steps 5)-d), if the area of the overlapping region of the two curves is smaller than x ∈ [0.1, 0.3] and A ≥ 85%, the model is considered to meet the conditions and can be used.
CN201911278114.1A — filed 2019-12-12, priority 2019-12-12 — Monitoring video face recognition method based on color dithering and image mixing — granted as CN111062399B, status Active

Priority Applications (1)

• CN201911278114.1A — priority date 2019-12-12, filing date 2019-12-12 — Monitoring video face recognition method based on color dithering and image mixing

Publications (2)

• CN111062399A — published 2020-04-24
• CN111062399B — published 2023-04-25

Family

ID: 70300775

Family Applications (1)

• CN201911278114.1A (Active) — priority date 2019-12-12, filing date 2019-12-12 — Monitoring video face recognition method based on color dithering and image mixing, granted as CN111062399B

Country Status (1)

• CN: CN111062399B

Citations (2)

* Cited by examiner, † Cited by third party

• CN109767440A * — priority 2019-01-11, published 2019-05-17, 南京信息工程大学 — An image data augmentation method for deep learning model training and learning
• CN110532878A * — priority 2019-07-26, published 2019-12-03, 中山大学 — A driving behavior recognition method based on a lightweight convolutional neural network

Family Cites Families (2)

* Cited by examiner, † Cited by third party

• US 2017/0277955 A1 * — published 2017-09-28, Le Holdings (Beijing) Co., Ltd. — Video identification method and system
• US 10185880 B2 * — granted 2019-01-22, Here Global B.V. — Method and apparatus for augmenting a training data set


Also Published As

• CN111062399A — published 2020-04-24


Legal Events

• PB01 — Publication
• SE01 — Entry into force of request for substantive examination
• GR01 — Patent grant
• PE01 — Entry into force of the registration of the contract for pledge of patent right
  Denomination of invention: A Face Recognition Method for Surveillance Video Based on Color Jitter and Image Mixing
  Effective date of registration: 2023-07-26
  Granted publication date: 2023-04-25
  Pledgee: Dalian Branch of Shanghai Pudong Development Bank Co., Ltd.
  Pledgor: YICHENG GAOKE (DALIAN) TECHNOLOGY Co., Ltd.
  Registration number: Y2023980049989