CN111079715B - Occlusion robustness face alignment method based on double dictionary learning - Google Patents
- Publication number
- CN111079715B (application CN202010000354.1A)
- Authority
- CN
- China
- Prior art keywords
- face
- dictionary
- alignment
- shape
- learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/28—Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/192—Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
- G06V30/194—References adjustable by an adaptive method, e.g. learning
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses an occlusion-robust face alignment method based on double dictionary learning, comprising a training process and an alignment process. When collecting facial features, the method comprehensively considers both the global shape features and the local appearance features of the face, so the trained model has stronger resistance to noise during test-time fitting. Meanwhile, to handle the mutation of face keypoints under occlusion, the alignment error is further analysed: a keypoint mutation parameter and an alignment-error weight attenuation parameter are introduced, and an alignment-error coding matrix is constructed, so that the dictionary learning process pays more attention to non-occluded parts. This reduces the influence of occlusion positions and occlusion patterns, improves the accuracy of face alignment, and better suits occluded face alignment under natural conditions.
Description
Technical Field
The invention relates to the technical field of computer vision and pattern recognition, and in particular to an occlusion-robust face alignment method based on double dictionary learning.
Background
Face alignment, i.e. automatically locating face keypoints such as the eyes, nose tip, mouth corners, and contour in an input face image, is an indispensable part of applications such as automatic face recognition, face tracking, and expression analysis. Macroscopically, facial keypoints are a sparse representation of the face, so constructing a simple, effective, and general dictionary to represent facial keypoints has been a research hotspot in computer vision and pattern recognition in recent years.
At present, face alignment methods based on single dictionary learning are relatively mature. Most of them learn the global shape features of the face in a coarse-to-fine manner to obtain the final face shape, but shape alone can hardly capture the intrinsic features of the face, and under local occlusion, illumination changes, and similar variations the complete face structure cannot be obtained, which severely degrades face alignment accuracy. Studies have shown that constructing occlusion subspaces or more complex dictionaries cannot completely exclude the effect of occlusion, owing to the variability of occlusion positions and occlusion patterns; face alignment should instead focus more on the non-occluded parts.
In view of the above problems, an occlusion-robust face alignment method needs to be proposed.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art by providing an occlusion-robust face alignment method based on double dictionary learning. The method comprehensively considers the global shape features and local appearance features of the face during dictionary learning, further analyses the alignment error, introduces a keypoint mutation parameter and an alignment-error weight attenuation parameter, and constructs an error coding matrix. It improves the alignment accuracy of occluded faces under natural conditions and facilitates subsequent face recognition, tracking, and synthesis.
The aim of the invention can be achieved by adopting the following technical scheme:
An occlusion-robust face alignment method based on double dictionary learning, the face alignment method comprising:
s1, training process:
inputting a training set and initializing a face key point, a shape dictionary and an appearance dictionary;
training comprises L learning processes, wherein in the ith learning process, i=0, 1, … and L-1, according to the shape characteristics and appearance characteristics of the key points of the current face, analyzing alignment errors, constructing an error coding matrix, and solving an optimized shape dictionary and an optimized appearance dictionary;
updating the key points of the face; judging whether the maximum learning times L are reached, if not, carrying out i+1st learning according to the updated face key points, and obtaining a model M formed by L shape dictionaries and L appearance dictionaries when the maximum learning times are reached;
s2, an alignment process: inputting a test face, initializing face key points, and updating face key point coordinates by using the model M obtained by training in the step S1; and after updating for L times, outputting a final face alignment result.
Further, in step S1, after the training set is input, the face keypoints, the shape dictionary, and the appearance dictionary are initialized as follows:
performing face detection on the training pictures and normalizing the sizes of all faces; calculating the average face shape vector as the initial face keypoints; randomly selecting m samples from the training set as the initial values D_s^(0) and D_a^(0) of the shape dictionary and the appearance dictionary.
Further, in step S1, the training process analyses the alignment error, introduces a keypoint mutation parameter and an alignment-error weight attenuation parameter, solves for an optimized alignment-error coding matrix, and optimizes the shape dictionary and the appearance dictionary under that matrix.
Further, in the step S1, a batch processing mode is adopted in the training process, the training process comprises L learning processes, an optimized shape dictionary and an optimized appearance dictionary are stored in each learning process, and when the maximum learning times are reached, a model M formed by L shape dictionaries and L appearance dictionaries is obtained.
Further, in step S2, face detection and face-scale normalization are first performed on the test image, and the average face shape vector of the training set is used as the initial face keypoints; the alignment process comprises L updates, after which the final face alignment result is output.
Further, the training process in step S1 comprises the following steps:
S11. Input the training set, perform face detection on the face pictures, and normalize the sizes of all faces; calculate the average face shape vector as the initial face keypoints P_0; randomly select m samples from the training set as the initial values D_s^(0) and D_a^(0) of the shape dictionary and the appearance dictionary. Training adopts a batch-processing mode and comprises L learning processes;
S12. In the i-th learning, extract the shape feature S_i and the appearance feature A_i of the currently detected keypoints P_i; S_i and A_i respectively represent the position information of the face keypoints and the face appearance information at those keypoints;
S13. In the i-th learning, analyse the alignment error according to the shape feature S_i and appearance feature A_i extracted in step S12, and solve for the optimized alignment-error coding matrix, the shape dictionary D_s^(i) and appearance dictionary D_a^(i) optimized under that matrix, and the sparse code C_i of the keypoint features;
S14. In the i-th learning, update the face keypoint coordinates P_{i+1} according to the optimized shape dictionary D_s^(i) and the sparse code C_i;
And S15, judging whether the maximum learning times L are reached, if the maximum learning times are not reached, repeating the steps S12 to S14 to perform the i+1st learning until the maximum learning times are reached, and obtaining a model M consisting of L shape dictionaries and L appearance dictionaries.
Further, the alignment process in step S2 comprises the following steps:
S21. Input a test picture f, perform face detection and normalize the face size, and take the average face shape vector from step S11 as the initial face keypoints p_0; the alignment process comprises L updates in total;
S22. In the i-th update, extract Gabor features of the currently detected keypoints in the test picture as the appearance feature a_i; take D_s^(i) and D_a^(i) from the model M, and according to D_a^(i) and a_i obtain the sparse code c_i of the test picture:

c_i = argmin_c ||a_i − D_a^(i) c||_2^2 + λ ||c||_1

where ||·||_2 is the vector l_2 norm, ||·||_1 is the vector l_1 norm, and λ is the regularization parameter; then, according to D_s^(i) and c_i, update the face keypoint coordinates:

p_{i+1} = p_i + D_s^(i) c_i
S23. Judge whether the maximum number of updates L is reached; if not, take D_s^(i+1) and D_a^(i+1) from the model M and perform the (i+1)-th update, until the maximum number of updates is reached and the final face alignment result p_L is output.
Compared with the prior art, the invention has the following advantages and effects:
In the occlusion-robust face alignment method based on double dictionary learning, for face alignment affected by occlusion under natural conditions, the model training process collects not only the global shape features of the face but also its local appearance features, i.e. the face texture information at the keypoints, achieving a better alignment effect than existing single dictionary learning. In addition, by further analysing the alignment error, a keypoint mutation parameter and an alignment-error weight attenuation parameter are introduced to construct an error coding matrix; when the learning direction of a keypoint deviates severely from the true keypoint, the face shape relies more on the last updated result or the average face, which gives the method occlusion robustness.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings that are used in the description of the embodiments will be briefly described.
Fig. 1 is a flowchart of an occlusion robustness face alignment method based on dual dictionary learning.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Examples
This embodiment discloses an occlusion-robust face alignment method based on double dictionary learning; the flowchart is shown in Fig. 1. As can be seen from Fig. 1, the face alignment method comprises a training process and an alignment process, specifically as follows:
s1, training process:
inputting a training set and initializing a face key point, a shape dictionary and an appearance dictionary;
training comprises L learning processes, wherein in the ith learning process (i=0, 1, …, L-1), according to the shape characteristics and appearance characteristics of the key points of the current face, the alignment errors are further analyzed, an error coding matrix is constructed, and an optimized shape dictionary and an optimized appearance dictionary are solved;
updating the key points of the face; and judging whether the maximum learning times L are reached, if not, carrying out i+1st learning according to the updated face key points, and obtaining a model M consisting of L shape dictionaries and L appearance dictionaries when the maximum learning times are reached.
S2, an alignment process: inputting a test face, initializing face key points, and updating face key point coordinates by using the model M obtained by training in the step S1; and after updating for L times, outputting a final face alignment result.
In this embodiment, the training process in S1 includes the following steps:
s11, inputting a training set and initializing a face key point, a shape dictionary and an appearance dictionary, wherein the specific explanation is as follows:
training adopts a batch processing mode, face detection is carried out on N face pictures, all face scales are normalized, and a training set T= { F, P } is formed by a normalized face sample F and a real key point coordinate P thereof;
calculating the average face shape vector of the N faces as the initial face keypoint coordinates P_0; specifically, P_0 can be defined as:

P_0 = (1/N) Σ_{i=1}^{N} (X_i, Y_i)

where X_i = (x_1, x_2, ..., x_K) and Y_i = (y_1, y_2, ..., y_K) respectively denote the abscissas and ordinates of the K true keypoints of the i-th face sample;
randomly selecting m samples from the training set as the initial values D_s^(0) ∈ R^(n_s × m) and D_a^(0) ∈ R^(n_a × m) of the shape dictionary and the appearance dictionary, where R denotes the set of real numbers and R^(n_s × m) denotes an n_s × m matrix whose elements are all real; n_s and n_a are the sizes of the shape feature and the appearance feature, respectively; in this embodiment n_s depends on the number K of keypoints to be detected, and n_a depends on the appearance feature extraction method;
The training process includes L learnings; L is generally 3 to 4, depending on the size of the data set.
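The initialization in step S11 can be sketched in NumPy as below. Face detection and size normalization are omitted, and the sample counts and array shapes are toy assumptions; keypoints are stored as 2K-dimensional vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: N training faces, K keypoints each, stored as 2K-vectors.
N, K, m = 8, 5, 4
P = rng.uniform(0, 200, size=(N, 2 * K))   # true keypoint coordinates per face

# Initial keypoints P_0: the average face shape over the training set.
P0 = P.mean(axis=0)

# Shape dictionary D_s^(0): m randomly chosen training shapes as columns (n_s x m).
idx = rng.choice(N, size=m, replace=False)
D_s0 = P[idx].T

assert P0.shape == (2 * K,)
assert D_s0.shape == (2 * K, m)
```

The appearance dictionary D_a^(0) would be initialized the same way from the m samples' appearance features once those are extracted.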
S12. In the i-th learning (i = 0, 1, ..., L−1), calculate the difference between the currently detected keypoint coordinates P_i and the true keypoint coordinates P as the shape feature S_i, and extract Gabor features from the face image F at the currently detected keypoint coordinates P_i as the appearance feature A_i, specifically:

S_i^(n) = P^(n) − P_i^(n),  A_i^(n) = G(F^(n), P_i^(n))

where S_i^(n) and A_i^(n) respectively denote the shape feature and appearance feature of the n-th face in the i-th learning, and G(·) is the Gabor feature extraction function; usually a Gabor kernel (filter) bank with multiple scales, orientations, and frequencies is selected and convolved with the face image to obtain the Gabor features of the region near the keypoints. If the dimension of the extracted appearance feature is too high, A_i can be reduced by PCA to keep the important features and lower the amount of computation.
S13. In the i-th learning (i = 0, 1, ..., L−1), analyse the alignment error according to the keypoint shape feature S_i and appearance feature A_i extracted in step S12, and solve for the optimized alignment-error coding matrix W_i, the shape dictionary D_s^(i) and appearance dictionary D_a^(i) optimized under W_i, and the sparse code C_i of the keypoint features; the solved objective function is defined as:

(W_i, D_s^(i), D_a^(i), C_i) = argmin ||W_i ⊙ (S_i − D_s C_i)||_F^2 + ||A_i − D_a C_i||_F^2 + λ ||C_i||_1

where ||·||_F is the Frobenius norm of a matrix, ||·||_1 is the matrix l_1 norm, and λ is a regularization parameter. For convenience of computation, the objective function can be solved in two steps:
First, fix the appearance dictionary D_a^(i) and, according to the shape feature S_i, alternately and iteratively solve for the optimized D_s^(i) and C_i:

(D_s^(i), C_i) = argmin_{D_s, C_i} ||W_i ⊙ (S_i − D_s C_i)||_F^2 + λ ||C_i||_1
Since training batch-processes N face samples, the alignment-error weights act element-wise on the fidelity term; the symbol ⊙ denotes the element-wise (Hadamard) product of matrices. Specifically, the element w_i^(j,k) of the alignment-error weight matrix W_i is a value that decays with the alignment error.

Here the alignment error e_i^(j,k) of the k-th keypoint of the j-th face is defined as the difference between the shape feature s_i^(j,k) of the i-th learning and the reconstructed shape feature (D_s^(i) C_i)^(j,k), i.e. e_i^(j,k) = ||s_i^(j,k) − (D_s^(i) C_i)^(j,k)||_2; μ_i denotes the attenuation rate of the alignment error within the weight value range, and δ_i is a parameter for coping with keypoint mutation under occlusion. Specifically, δ_i can be matched to the L learning processes, taking values from coarse to fine with reference to the normalized face size. When the alignment error e_i^(j,k) exceeds δ_i, the current learning direction of that keypoint has deviated severely from the true keypoint and its error weight falls below 0.5, so that the updated face keypoint depends more on the last update result or the initial face shape, which gives occlusion robustness.
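The weight scheme can be sketched as follows. The patent specifies only the decay rate μ and the mutation threshold δ (weight below 0.5 past the threshold); the logistic form used here is an assumption chosen to satisfy exactly those properties:

```python
import numpy as np

def alignment_error_weights(errors, mu=1.0, delta=10.0):
    """Per-keypoint weights that decay logistically in the alignment error.
    w = 0.5 exactly at error = delta; larger errors (possible occlusion or
    keypoint mutation) fall below 0.5, so the fidelity term down-weights them.
    The logistic form is an assumption; the patent only names mu and delta."""
    errors = np.asarray(errors, dtype=float)
    return 1.0 / (1.0 + np.exp(mu * (errors - delta)))

w = alignment_error_weights([0.0, 10.0, 25.0], mu=0.5, delta=10.0)
assert abs(w[1] - 0.5) < 1e-12     # at the threshold the weight is exactly 0.5
assert w[0] > 0.5 > w[2]           # small errors kept, large errors suppressed
```

Any monotone decay with w(δ) = 0.5 would serve equally; the choice mainly affects how sharply occluded keypoints are discounted.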
Second, fix the shape dictionary D_s^(i) and, according to the appearance feature A_i, alternately and iteratively solve for the optimized D_a^(i) and C_i:

(D_a^(i), C_i) = argmin_{D_a, C_i} ||A_i − D_a C_i||_F^2 + λ ||C_i||_1
After the two optimizations, the sparse code C_i links the shape feature S_i and the appearance feature A_i, so the learned dictionaries have stronger resistance to noise during test-time fitting.
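The two-step alternating optimization of S13 can be sketched with a plain ISTA solver for the sparse codes and least-squares dictionary refits. The solver choice and the omission of the per-keypoint weights W_i are simplifying assumptions (with W_i, the weights would enter the shape fidelity term element-wise):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, X, lam, n_iter=200):
    """C = argmin ||X - D C||_F^2 + lam ||C||_1, solved by ISTA."""
    L = np.linalg.norm(D, 2) ** 2 + 1e-8          # Lipschitz constant of gradient
    C = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        C = soft_threshold(C - (D.T @ (D @ C - X)) / L, lam / (2 * L))
    return C

def dual_dict_step(S, A, D_s, D_a, lam=0.1, n_alt=5):
    """One learning round, unweighted sketch of the two-step scheme."""
    # Step 1: fix D_a; alternately solve codes and shape dictionary from S.
    for _ in range(n_alt):
        C = ista(D_s, S, lam)
        D_s = S @ np.linalg.pinv(C)               # least-squares dictionary refit
    # Step 2: fix D_s; alternately solve codes and appearance dictionary from A.
    for _ in range(n_alt):
        C = ista(D_a, A, lam)
        D_a = A @ np.linalg.pinv(C)
    return D_s, D_a, C

rng = np.random.default_rng(2)
S = rng.normal(size=(10, 20))                     # shape features, 20 samples
A = rng.normal(size=(40, 20))                     # appearance features
D_s, D_a, C = dual_dict_step(S, A, rng.normal(size=(10, 6)), rng.normal(size=(40, 6)))
assert D_s.shape == (10, 6) and D_a.shape == (40, 6) and C.shape == (6, 20)
```

Column normalization of the dictionaries after each refit, and the weighted fidelity term, would be added in a faithful implementation.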
S14. According to the optimized D_s^(i) and C_i obtained in step S13, update the face keypoint coordinates:

P_{i+1} = P_i + D_s^(i) C_i
S15. Judge whether training has reached the maximum number of learnings L; if not, repeat steps S12 to S14 for the (i+1)-th learning, until the maximum is reached and the L pairs (D_s^(i), D_a^(i)) are stored as the model M.
In this embodiment, the alignment process in step S2 is as follows:
S21. Input a test picture f, perform face detection and normalize the face size, and take the average face shape vector from step S11 as the initial face keypoints p_0; the alignment process comprises L updates in total;
S22. In the i-th update (i = 0, 1, ..., L−1), extract Gabor features of the currently detected keypoints in the test picture as the appearance feature a_i; take D_s^(i) and D_a^(i) from the model M, and according to D_a^(i) and a_i obtain the sparse code c_i of the test picture:

c_i = argmin_c ||a_i − D_a^(i) c||_2^2 + λ ||c||_1

then, according to D_s^(i) and c_i, update the face keypoint coordinates: p_{i+1} = p_i + D_s^(i) c_i.
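The per-stage test-time update of S22 and S23 can be sketched as follows. The ISTA solver and the toy random model and feature extractor are assumptions; the additive update p + D_s c follows the shape-feature definition of step S12 (shape feature = offset to the true keypoints):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(D, a, lam=0.1, n_iter=300):
    """c = argmin ||a - D c||_2^2 + lam ||c||_1, solved by ISTA."""
    L = np.linalg.norm(D, 2) ** 2 + 1e-8
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        c = soft_threshold(c - D.T @ (D @ c - a) / L, lam / (2 * L))
    return c

def align(p0, model, extract_appearance, lam=0.1):
    """Run the L cascade stages: code the appearance feature against D_a^(i),
    then move the keypoints by the shape reconstruction D_s^(i) c."""
    p = p0.copy()
    for D_s, D_a in model:                 # model M = [(D_s^(0), D_a^(0)), ...]
        a = extract_appearance(p)          # Gabor features at current keypoints
        c = sparse_code(D_a, a, lam)
        p = p + D_s @ c                    # additive keypoint update
    return p

rng = np.random.default_rng(3)
model = [(rng.normal(size=(10, 6)), rng.normal(size=(40, 6))) for _ in range(3)]
p0 = np.zeros(10)                          # initial (average) face shape
p = align(p0, model, lambda p: rng.normal(size=40))
assert p.shape == (10,)
```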
S23. Judge whether the maximum number of updates L is reached; if not, take D_s^(i+1) and D_a^(i+1) from the model M and perform the (i+1)-th update, until the maximum number of updates is reached and the final face alignment result p_L is output.
As described above, when collecting face features the method collects not only the global shape features of the face but also its local appearance features, i.e. the face texture information at the keypoints, and learns a shape dictionary and an appearance dictionary for the two kinds of features respectively, so the trained model has stronger resistance to noise during test-time fitting.
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited to the above examples, and any other changes, modifications, substitutions, combinations, and simplifications that do not depart from the spirit and principle of the present invention should be made in the equivalent manner, and the embodiments are included in the protection scope of the present invention.
Claims (5)
1. The occlusion robustness face alignment method based on the double dictionary learning is characterized by comprising the following steps of:
s1, training process:
inputting a training set and initializing a face key point, a shape dictionary and an appearance dictionary;
training comprises L learning processes, wherein in the ith learning process, i=0, 1..and L-1, according to the shape characteristics and the appearance characteristics of the key points of the current face, analyzing alignment errors, constructing an error coding matrix, and solving an optimized shape dictionary and an optimized appearance dictionary;
updating the key points of the face; judging whether the maximum learning times L are reached, if not, carrying out i+1st learning according to the updated face key points, and obtaining a model M formed by L shape dictionaries and L appearance dictionaries when the maximum learning times are reached;
wherein the training process comprises the following steps:
S11. Input the training set, perform face detection on the face pictures, and normalize the sizes of all faces; calculate the average face shape vector as the initial face keypoints P_0; randomly select m samples from the training set as the initial values D_s^(0) and D_a^(0) of the shape dictionary and the appearance dictionary. Training adopts a batch-processing mode and comprises L learning processes;
S12. In the i-th learning, extract the shape feature S_i and the appearance feature A_i of the currently detected keypoints P_i; S_i and A_i respectively represent the position information of the face keypoints and the face appearance information at those keypoints;
S13. In the i-th learning, analyse the alignment error according to the shape feature S_i and appearance feature A_i extracted in step S12, and solve for the optimized alignment-error coding matrix, the shape dictionary D_s^(i) and appearance dictionary D_a^(i) optimized under that matrix, and the sparse code C_i of the keypoint features;
S14. In the i-th learning, update the face keypoint coordinates P_{i+1} according to the optimized shape dictionary D_s^(i) and the sparse code C_i;
S15, judging whether the maximum learning times L are reached, if the maximum learning times are not reached, repeating the steps S12 to S14 for i+1st learning until the maximum learning times are reached, and obtaining a model M formed by L shape dictionaries and L appearance dictionaries;
s2, an alignment process: inputting a test face, initializing face key points, and updating face key point coordinates by using the model M obtained by training in the step S1; after updating for L times, outputting a final face alignment result;
wherein the alignment process comprises the steps of:
S21. Input a test picture f, perform face detection and normalize the face size, and take the average face shape vector from step S11 as the initial face keypoints p_0; the alignment process includes L updates in total;
S22. In the i-th update, extract Gabor features of the currently detected keypoints in the test picture as the appearance feature a_i; take D_s^(i) and D_a^(i) from the model M, and according to D_a^(i) and a_i obtain the sparse code c_i of the test picture:

c_i = argmin_c ||a_i − D_a^(i) c||_2^2 + λ ||c||_1

where ||·||_2 is the vector l_2 norm, ||·||_1 is the vector l_1 norm, and λ is a regularization parameter; then, according to D_s^(i) and c_i, the face keypoint coordinates are updated:

p_{i+1} = p_i + D_s^(i) c_i
2. The occlusion robustness face alignment method based on double dictionary learning according to claim 1, wherein after the training set is input in the training process, the face keypoints, the shape dictionary, and the appearance dictionary are initialized as follows: performing face detection on the training pictures and normalizing the sizes of all faces; calculating the average face shape vector as the initial face keypoints; and randomly selecting m samples from the training set as the initial values of the shape dictionary and the appearance dictionary.
3. The occlusion robustness face alignment method based on dual dictionary learning according to claim 1, wherein in the training process, alignment errors are analyzed, key point mutation parameters and alignment error weight attenuation parameters are introduced, an optimized alignment error coding matrix is solved, and a shape dictionary and an appearance dictionary are optimized under the alignment error coding matrix.
4. The occlusion robustness face alignment method based on double-dictionary learning according to claim 1, wherein the step S1 is characterized in that a batch processing mode is adopted in the training process, the method comprises L learning processes, each time an optimized shape dictionary and an optimized appearance dictionary are learned and stored, and when the maximum learning times are reached, a model M formed by L shape dictionaries and L appearance dictionaries is obtained.
5. The occlusion robustness face alignment method based on double dictionary learning according to claim 1, wherein in the alignment process, face detection and face-scale normalization are performed on the test image and the average face shape vector of the training set is used as the initial face keypoints; the alignment comprises L updating processes, and the final face alignment result is output after the L updates.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010000354.1A CN111079715B (en) | 2020-01-02 | 2020-01-02 | Occlusion robustness face alignment method based on double dictionary learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111079715A CN111079715A (en) | 2020-04-28 |
CN111079715B true CN111079715B (en) | 2023-04-25 |
Family
ID=70321644
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010000354.1A Active CN111079715B (en) | 2020-01-02 | 2020-01-02 | Occlusion robustness face alignment method based on double dictionary learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111079715B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116935471A (en) * | 2023-07-24 | 2023-10-24 | 山东睿芯半导体科技有限公司 | Face recognition method, device, chip and terminal based on double dictionary learning |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005004454A (en) * | 2003-06-11 | 2005-01-06 | National Institute Of Advanced Industrial & Technology | Method for classifying and registering face image |
CN106570464A (en) * | 2016-10-31 | 2017-04-19 | 华南理工大学 | Human face recognition method and device for quickly processing human face shading |
CN109711283A (en) * | 2018-12-10 | 2019-05-03 | 广东工业大学 | A kind of joint double dictionary and error matrix block expression recognition algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||