CN116266347A - Modeling method and system for food recognition model, and food recognition method and system - Google Patents

Modeling method and system for food recognition model, and food recognition method and system

Info

Publication number
CN116266347A
CN116266347A (application CN202111535416.XA)
Authority
CN
China
Prior art keywords
food
features
recognition model
dataset
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111535416.XA
Other languages
Chinese (zh)
Inventor
Ye Xiangpeng (叶翔鹏)
Wu Hongyan (吴红艳)
Wang Ruxin (王如心)
Lin Yue (林越)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202111535416.XA priority Critical patent/CN116266347A/en
Publication of CN116266347A publication Critical patent/CN116266347A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30128Food products
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a modeling method and system for a food recognition model, and a food recognition method and system. The modeling method for the food recognition model comprises the following steps: acquiring and preprocessing a food image recognition dataset, and dividing the preprocessed dataset into a training set, a validation set and a test set according to a set ratio; constructing a graph dataset based on the preprocessed food image recognition dataset; mining local features and global features of the graph dataset; fusing the local features and the global features to obtain fused features; and training the food recognition model with the fused features corresponding to the training set as input features. The method uses the deep features of the image, reconstructs a graph from the node attribute relationships within the image, aggregates information from the graph structure by means of a graph convolution network to further mine the latent spatial and semantic relationships inside the image, and fuses local and global features, thereby effectively improving the accuracy of food recognition.

Description

Modeling method and system for food recognition model, and food recognition method and system
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a food recognition model modeling method and system, and a food recognition method and system.
Background
Food is a daily necessity, and food-related research is increasingly popular because of its importance in everyday life. Nowadays, with the rapid development of social networks and of portable smart devices, people routinely record, upload and share food pictures, so the application value of recognizing food images keeps growing: it has a positive influence on personal diet management, food recommendation, catering, social interaction and other fields, and has attracted wide attention.
Early food recognition research focused on manually extracted image features, using conventional image features to solve the food image classification problem. For example, Chinese patent application CN201610136517.2 discloses a fast food recognition method based on Markov random fields, which mainly addresses the difficulty of recognizing food caused by the lack of feature points and the irregular shapes of food in food images; however, it uses only traditional hand-crafted features, so its recognition accuracy is not high.
With the development of convolutional networks, simple, automatic food recognition methods using deep networks generally outperform conventional methods. The paper "Multi-Scale Multi-View Deep Feature Aggregation for Food Recognition" (Jiang S, Min W, Liu L, et al. IEEE Transactions on Image Processing, 2019, 29: 265-276) proposes a multi-scale multi-view feature aggregation method for food recognition that aggregates high-level semantic features, mid-level attribute features and deep visual features into a unified representation, making the fused features more robust and comprehensive. The paper adopts a CNN convolutional neural network framework and fuses multi-scale, multi-view features, but the fine details of an image remain difficult to capture and discriminate, and the local-perception limitation of convolution remains.
Food of the same class may show large intra-class variability, while foods of different classes may show very high inter-class similarity. Food recognition is therefore a fine-grained recognition problem with many subtle details. Existing convolutional neural networks suffer from the limitation of local perception and cannot combine local and global features. Existing methods neither fully mine semantic information by fusing the local and global relations of a food image nor exploit a graph neural network to mine the latent spatial and semantic relationships inside the image.
Disclosure of Invention
The invention aims to provide a food recognition model modeling method and system, and a food recognition method and system aiming at the defects of the prior art.
In order to solve the technical problems, the invention adopts the following technical scheme:
a modeling method of a food recognition model is characterized by comprising the following steps:
acquiring and preprocessing a food image recognition data set, and dividing the preprocessed food image recognition data set into a training set, a verification set and a test set according to a set proportion;
constructing a graph dataset based on the preprocessed food image recognition dataset;
mining to obtain local features and global features of the graph dataset;
fusing the local features and the global features to obtain fused features;
and taking the fusion characteristics corresponding to the training set as input characteristics, and training to obtain the food recognition model.
Further, the method also comprises the step of optimally adjusting parameters of the food identification model by using the verification set.
Further, the method also comprises the step of evaluating the accuracy of the food recognition model by using the test set.
As a preferred way, a CNN convolutional network is used to obtain the local features of the graph dataset.
As a preferred approach, a GCN graph convolution network is utilized to obtain the global features of the graph dataset.
Preferably, the Food image identification data set comprises a Food image public data set including Food-101 and/or Vireo Food-172.
Based on the same inventive concept, the invention also provides a modeling system of the food recognition model, which is characterized by comprising:
and a data processing module: the food image recognition data set is used for acquiring and preprocessing the food image recognition data set, and dividing the preprocessed food image recognition data set into a training set, a verification set and a test set according to a set proportion;
the diagram dataset construction module: for constructing a map dataset based on the preprocessed food image recognition dataset;
the feature mining module: the method comprises the steps of mining local features and global features of a graph dataset;
and a feature fusion module: the local features and the global features are fused to obtain fusion features;
training module: the method is used for training and obtaining the food recognition model by taking the fusion characteristics corresponding to the training set as input characteristics.
Further, the method further comprises the following steps:
parameter adjustment module: for optimal adjustment of parameters of the food identification model using the validation set.
Based on the same inventive concept, the invention also provides a food identification method which is used for identifying food based on the food identification model obtained by the modeling method.
Based on the same inventive concept, the invention also provides a food recognition system comprising the food recognition model obtained by the modeling method.
Compared with the prior art, the invention uses the deep features of the image, reconstructs a graph from the node attribute relationships within the image, aggregates information from the graph structure by means of a graph convolution network to further mine the latent spatial and semantic relationships inside the image, and fuses local and global features, thereby effectively improving the accuracy of food recognition. By mining features further and fusing local and global features, the invention provides a new food recognition method: it overcomes the prior art's failure to capture and discriminate fine-grained food regions and its limitation of local perception, overcomes the prior art's inability to fuse local and global features, and provides a new way to improve the accuracy of food recognition.
Drawings
FIG. 1 is a flow chart of the whole technology of the invention.
Fig. 2 is a flowchart of the whole embodiment of the present invention.
Fig. 3 is a schematic diagram of local feature acquisition.
FIG. 4 is a schematic diagram of the acquisition of global features and the construction of a graph dataset.
Fig. 5 is a frame diagram of a food recognition model.
Detailed Description
The main characteristic of the method is that a graph is reconstructed from the node attribute relationships within the image, while information is aggregated from the graph structure by means of a graph convolution network, so that the latent spatial and semantic relationships inside the image are further mined. This addresses the problems in food recognition that fine image details are hard to capture, that convolutional neural networks perceive only locally, and that local and global features cannot be fused; it can further improve the accuracy of food recognition, which is of significance for personal diet management, food recommendation, catering, social interaction and other fields.
The basic content of the technical scheme of the invention is as follows:
1. Local and global semantic information of an image is mined based on deep learning. The image features are given a graph characterization, the image information is converted into graph data to obtain a graph representation, and further global information is then extracted based on the graph.
2. Global features describe the overall appearance of an image, while local features describe its local characteristics. The local and global features extracted from the image are therefore integrated into a single feature vector representing the whole image, which fully and efficiently exploits the mined latent spatial and semantic relationships inside the image and further improves the accuracy of food recognition.
The modeling method of the food recognition model comprises the following steps:
acquiring and preprocessing a food image recognition data set, and dividing the preprocessed food image recognition data set into a training set, a verification set and a test set according to a set proportion;
constructing a graph dataset based on the preprocessed food image recognition dataset;
mining to obtain local features and global features of the graph dataset;
fusing the local features and the global features to obtain fused features;
and taking the fusion characteristics corresponding to the training set as input characteristics, and training to obtain the food recognition model.
Preferably further comprising making parameter-optimal adjustments to the food recognition model using the validation set.
Preferably further comprising evaluating the accuracy of the food recognition model using the test set.
Preferably, the local features of the graph dataset are acquired using a CNN convolutional network.
Preferably, the global features of the graph dataset are obtained using a GCN graph convolution network.
Preferably, the Food image identification dataset comprises a Food image public dataset of Food-101 and/or Vireo Food-172.
The invention also provides a modeling system of the food recognition model, which comprises:
and a data processing module: the food image recognition data set is used for acquiring and preprocessing the food image recognition data set, and dividing the preprocessed food image recognition data set into a training set, a verification set and a test set according to a set proportion;
the diagram dataset construction module: for constructing a map dataset based on the preprocessed food image recognition dataset;
the feature mining module: the method comprises the steps of mining local features and global features of a graph dataset;
and a feature fusion module: the local features and the global features are fused to obtain fusion features;
training module: the method is used for training and obtaining the food recognition model by taking the fusion characteristics corresponding to the training set as input characteristics.
Preferably further comprising a parameter adjustment module: for optimal adjustment of parameters of the food identification model using the validation set.
The invention also provides a food identification method which is used for identifying food based on the food identification model obtained by the modeling method.
The invention also provides a food recognition system comprising a food recognition model obtained via the modeling method.
The method for modeling a food recognition model according to the present invention will be further described with reference to fig. 1 to 5.
The modeling method of the food recognition model comprises the following specific implementation steps:
s1, data acquisition and processing
There are many public datasets for food recognition, such as Food-101 and Vireo Food-172. These food image public datasets are therefore first downloaded, and the image datasets are then analyzed and given preliminary processing.
S1-1 data acquisition
Public datasets such as Food-101 and Vireo Food-172 are collected using mainstream search engines and database websites, and the food image datasets are downloaded and stored.
S1-2: data preprocessing
First, the downloaded data are analyzed to determine the composition of these food image datasets, and all images are cropped to 224x224. The whole food image dataset is then split in a 7:2:1 ratio into a training set, a validation set and a test set, and saved for further processing.
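As a concrete illustration (not part of the patent text), the 7:2:1 split can be sketched in Python; the function name and the fixed seed are illustrative assumptions:

```python
import random

def split_dataset(items, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle items reproducibly and split them 7:2:1 into train/val/test."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(1000))
```

With 1000 images this yields 700 training, 200 validation and 100 test samples, with no overlap between the splits.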
S2: building a graph dataset
The acquired food image dataset contains food images stored as pixel data. The invention is a food recognition method based on graph convolution, which mines the node attributes and spatial relations that graph structures represent well; the images therefore need a graph characterization, completing the construction of the graph dataset.
S2-1: selecting a data set
Select any one of the datasets downloaded in step S1, containing the pixel data of food images, e.g. the Food-101 dataset.
S2-2: acquiring local features
A CNN convolutional network is used to acquire the local features of the food image, and the constructed neural network completes the representation of the image. The neural network for acquiring local features is constructed as follows:
(1) The first layer consists mainly of 3 Bottleneck modules, where a Bottleneck module is a residual module with a special structure: a 1×1 convolution, then a 3×3 convolution, then a 1×1 convolution. After the image is input, a first feature map of size 56×56×256 is obtained.
(2) The second layer consists mainly of 8 Bottleneck modules; features are further extracted from the first feature map to obtain a second feature map of size 28×28×512.
(3) The third layer consists mainly of 36 Bottleneck modules; features are further extracted from the second feature map to obtain a third feature map of size 14×14×1024.
(4) The fourth layer consists mainly of 3 Bottleneck modules; features are further extracted from the third feature map to obtain a fourth feature map of size 7×7×2048.
And extracting and storing the fourth feature map to obtain local features.
The local feature acquisition in S2-2 may use CNN convolutional networks other than the ResNet-152 structure; for example, SENet, SKNet or ResNeSt may be used instead.
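The stage layout described above (3, 8, 36 and 3 Bottleneck modules) matches the standard ResNet-152 backbone. As a sanity check on the quoted feature-map sizes, a small sketch (the function name is illustrative):

```python
def stage_shapes(input_size=224):
    """Feature-map shapes after each of the four bottleneck stages
    for a 224x224 input, following the sizes quoted in the text."""
    shapes = []
    size = input_size // 4        # stem (7x7 conv, stride 2) + max-pool halve twice
    for channels in (256, 512, 1024, 2048):
        shapes.append((size, size, channels))
        size //= 2                # each subsequent stage downsamples by 2
    return shapes

print(stage_shapes())  # [(56, 56, 256), (28, 28, 512), (14, 14, 1024), (7, 7, 2048)]
```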
S2-3: construction of adjacency matrix and processing of node characteristics
Take out the fourth feature map, treat each small local feature as a node, and flatten the map to obtain 49 feature vectors of 2048 dimensions, i.e. a 49×2048 node feature matrix F. The correlation similarity matrix R is then computed, normalized with a softmax, and binarized with a threshold function to obtain a sparse matrix, which is the constructed adjacency matrix A:

R = F × F^T

f(x) = 1 if x ≥ δ, otherwise 0

A = f(softmax(R))

Here F is the node feature matrix and R the correlation similarity matrix; computing the correlation between every pair of nodes mines the spatial relationship between local features. f(x) is the threshold function with threshold δ, which is obtained by parameter tuning and may be set to 0.6; the binarization yields a sparse matrix, and the resulting matrix A is the constructed adjacency matrix.
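The adjacency construction above can be sketched in NumPy. This is a sketch under stated assumptions: the row-wise direction of the softmax is an assumption (the text does not specify it), and the random features stand in for the fourth feature map:

```python
import numpy as np

def build_adjacency(F, delta=0.6):
    """Build adjacency matrix A from node features F (num_nodes x dim):
    correlation similarity R = F F^T, row-wise softmax, threshold at delta."""
    R = F @ F.T                                            # correlation similarity matrix
    R = R - R.max(axis=1, keepdims=True)                   # shift for numerical stability
    S = np.exp(R) / np.exp(R).sum(axis=1, keepdims=True)   # softmax normalization
    return (S >= delta).astype(float)                      # binarize: sparse adjacency A

F = np.random.default_rng(0).normal(size=(49, 2048))  # 49 nodes, 2048-dim features
A = build_adjacency(F)                                 # 49 x 49 binary adjacency
```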
S3: mining global features
A GCN graph convolution network is used to further mine internal relations and acquire global features; the constructed graph convolutional network captures the internal latent spatial and semantic relationships. The neural network for acquiring global features is constructed as follows:
(1) The first layer consists of a graph convolution layer and a relu layer, the input feature dimension is 2048 dimensions, and the output feature dimension is 256 dimensions.
(2) The second layer consists of a graph convolution layer and a softmax layer; the input feature dimension is 256 and the output feature dimension is 101.

The network computes

Z = softmax(A × ReLU(A × F × W(0)) × W(1))

where Z is the global feature mined with the graph convolution network, F is the node feature matrix, A is the adjacency matrix obtained above, and W(0), W(1) are learnable, updatable weight matrices.
Finally, the forward propagation of the network is GCN1 → ReLU → GCN2 → softmax; by continuously aggregating graph structure information through the graph convolution network, the global feature Z of size 49×101 is output.
The mining of global features in S3 may replace the simple two-layer graph convolution network with a more advanced or complex graph neural network, for example a Transformer framework.
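The GCN1 → ReLU → GCN2 → softmax forward pass can be sketched as follows. The unnormalized use of A follows the equation above; the identity adjacency and the weight initialization are illustrative assumptions:

```python
import numpy as np

def gcn_forward(F, A, W0, W1):
    """Two-layer graph convolution: Z = softmax(A ReLU(A F W0) W1)."""
    H = np.maximum(A @ F @ W0, 0.0)               # GCN layer 1 + ReLU
    Z = A @ H @ W1                                # GCN layer 2
    Z = Z - Z.max(axis=1, keepdims=True)          # stable row-wise softmax
    return np.exp(Z) / np.exp(Z).sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
F = rng.normal(size=(49, 2048))                   # node feature matrix
A = np.eye(49)                                    # illustrative adjacency matrix
W0 = rng.normal(size=(2048, 256)) * 0.01          # layer 1: 2048 -> 256
W1 = rng.normal(size=(256, 101)) * 0.01           # layer 2: 256 -> 101
Z = gcn_forward(F, A, W0, W1)                     # global feature, 49 x 101
```

Each row of Z is a probability-like vector over the 101 output dimensions, matching the quoted 49×101 global feature size.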
S4: fusion of local and global features
The global feature obtained in S3 is re-expanded to size 7x7x101 so that its first two dimensions match those of the local feature; the local and global features are then spliced and fused through a concat operation, giving a feature with more discriminative power than the original features.
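A sketch of this reshape-and-concatenate step (the shapes follow the text; the channel-last layout is an assumption):

```python
import numpy as np

def fuse_features(local_feat, global_feat):
    """Concatenate a 7x7x2048 local feature with a 49x101 global feature
    reshaped to 7x7x101, giving a fused 7x7x2149 feature."""
    g = global_feat.reshape(7, 7, 101)            # re-expand to match local dims
    return np.concatenate([local_feat, g], axis=-1)

local_feat = np.zeros((7, 7, 2048))               # stand-in for the fourth feature map
global_feat = np.zeros((49, 101))                 # stand-in for the GCN output Z
fused = fuse_features(local_feat, global_feat)    # shape (7, 7, 2149)
```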
S5: constructing and training a food recognition model
The fused feature obtained above is only a more discriminative feature and cannot by itself complete the food recognition task; the fused feature is therefore used as the input feature to construct the food recognition model and complete its training.
The invention has the following advantages:
(1) The image dataset is converted into a graph dataset, providing a new approach to image feature extraction and further exploring graph characterization methods.
(2) A deep learning method based on graph convolution networks is introduced into the field of food recognition, addressing the difficulty of mining the internal relations among the local features of an image.
(3) A new model architecture is designed, internal space relation and semantic relation are mined by using a graph convolution neural network, and local-global feature fusion is completed, so that algorithm performance is improved.
Theoretical derivation shows that the technical scheme is feasible, and code experiments confirm that the method indeed improves the accuracy of food recognition.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, which are merely illustrative and not restrictive. Those of ordinary skill in the art may make many variants without departing from the spirit of the present invention and the scope of the claims, all of which fall within the protection of the present invention.

Claims (10)

1. A method of modeling a food recognition model, comprising:
acquiring and preprocessing a food image recognition data set, and dividing the preprocessed food image recognition data set into a training set, a verification set and a test set according to a set proportion;
constructing a graph dataset based on the preprocessed food image recognition dataset;
mining to obtain local features and global features of the graph dataset;
fusing the local features and the global features to obtain fused features;
and taking the fusion characteristics corresponding to the training set as input characteristics, and training to obtain the food recognition model.
2. The method of modeling a food recognition model of claim 1, further comprising optimally adjusting parameters of the food recognition model using the validation set.
3. The method of modeling a food recognition model of claim 1 or 2, further comprising evaluating the accuracy of the food recognition model using the test set.
4. The method of modeling a food recognition model of claim 1, wherein the local features of the graph dataset are obtained using a CNN convolutional network.
5. The method of modeling a food recognition model of claim 1, wherein global features of the graph dataset are obtained using a GCN graph convolution network.
6. The Food recognition model modeling method of claim 1, wherein the Food image recognition dataset comprises a Food image common dataset of Food-101 and/or Vireo Food-172.
7. A food recognition model modeling system, comprising:
and a data processing module: the food image recognition data set is used for acquiring and preprocessing the food image recognition data set, and dividing the preprocessed food image recognition data set into a training set, a verification set and a test set according to a set proportion;
the diagram dataset construction module: for constructing a map dataset based on the preprocessed food image recognition dataset;
the feature mining module: the method comprises the steps of mining local features and global features of a graph dataset;
and a feature fusion module: the local features and the global features are fused to obtain fusion features;
training module: the method is used for training and obtaining the food recognition model by taking the fusion characteristics corresponding to the training set as input characteristics.
8. The food recognition model modeling system of claim 7, comprising:
parameter adjustment module: for optimal adjustment of parameters of the food identification model using the validation set.
9. A food recognition method for recognizing food based on the food recognition model obtained by the modeling method according to any one of claims 1 to 6.
10. A food recognition system, characterized in that it comprises a food recognition model obtained via the modeling method of any one of claims 1 to 6.
CN202111535416.XA 2021-12-15 2021-12-15 Modeling method and system for food recognition model, and food recognition method and system Pending CN116266347A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111535416.XA CN116266347A (en) 2021-12-15 2021-12-15 Modeling method and system for food recognition model, and food recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111535416.XA CN116266347A (en) 2021-12-15 2021-12-15 Modeling method and system for food recognition model, and food recognition method and system

Publications (1)

Publication Number Publication Date
CN116266347A 2023-06-20

Family

ID=86742939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111535416.XA Pending CN116266347A (en) 2021-12-15 2021-12-15 Modeling method and system for food recognition model, and food recognition method and system

Country Status (1)

Country Link
CN (1) CN116266347A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117911795A (en) * 2024-03-18 2024-04-19 杭州食方科技有限公司 Food image recognition method, apparatus, electronic device, and computer-readable medium
CN117911795B (en) * 2024-03-18 2024-06-11 杭州食方科技有限公司 Food image recognition method, apparatus, electronic device, and computer-readable medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination