CN112651468A - Multi-scale lightweight image classification method and storage medium thereof - Google Patents
- Publication number
- CN112651468A (application CN202110062780.2A)
- Authority
- CN
- China
- Prior art keywords
- sample set
- training
- image classification
- classification method
- loss function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Abstract
The invention discloses a multi-scale lightweight image classification method and a storage medium therefor. The method mainly comprises the following steps: establishing a training sample set, a test sample set and a verification sample set; establishing a depthwise separable convolutional neural network; introducing an SKNet module into an intermediate convolutional layer of the depthwise separable convolutional neural network to obtain an initial model; establishing a loss function for the initial model; training the initial model with the training sample set and the loss function to obtain a trained model; and classifying the test sample set with the trained model and comparing the classification results with the class labels of the verification sample set to obtain the classification accuracy on the test sample set. The depthwise separable convolutional network reduces the training parameters of the initial model and improves the efficiency of feature extraction, while the SKNet module lets the trained model adaptively adjust its receptive field to the image and select the receptive-field size autonomously, improving the classification of images at different scales.
Description
Technical Field
The invention relates to the technical field of image classification, in particular to a multi-scale lightweight image classification method and a storage medium thereof.
Background
For a computer to interpret the content of an image, it must cope with viewpoint changes, scale changes, deformation, occlusion, illumination changes, background clutter and intra-class variation, all of which influence the classification result. Understanding image content therefore requires image classification, the task of extracting meaning from an image using computer vision and machine learning methods. Conventional image classification mainly relies on convolutional neural network models, but as these models have grown, they have become increasingly difficult to deploy on embedded systems and mobile devices, and existing methods classify images of different sizes poorly.
Improvements to the prior art are therefore needed.
Disclosure of Invention
In view of the above shortcomings of the prior art, the invention aims to provide a multi-scale lightweight image classification method that reduces the model's training parameters, improves feature-extraction efficiency, accelerates neural-network convergence, and classifies targets of different sizes effectively.
In order to achieve the purpose, the invention adopts the following technical scheme:
a multi-scale lightweight image classification method specifically comprises the following steps:
establishing a training sample set, a test sample set and a verification sample set;
establishing a depthwise separable convolutional neural network;
introducing an SKNet module into an intermediate convolutional layer of the depthwise separable convolutional neural network to obtain an initial model;
establishing a loss function for the initial model;
training the initial model according to the training sample set and the loss function to obtain a trained model; and
classifying the test sample set with the trained model, and comparing the classification result with the class labels of the verification sample set to obtain the classification accuracy of the test sample set.
In the multi-scale lightweight image classification method, the establishing of the training sample set, the testing sample set and the verifying sample set comprises the step of carrying out expansion preprocessing on the image data set of the training sample set.
In the multi-scale lightweight image classification method, the expansion preprocessing comprises image cutting, image turning, image scaling, brightness adjustment, contrast adjustment, hue adjustment, saturation adjustment and gray level adjustment.
In the multi-scale lightweight image classification method, the depthwise separable convolutional neural network is composed of several depthwise separable convolution modules.
In the multi-scale lightweight image classification method, introducing an SKNet module into an intermediate convolutional layer of the depthwise separable convolutional neural network to obtain an initial model comprises: convolving the feature maps within the same convolutional layer with several depthwise separable networks respectively, adding the convolved feature maps pixel by pixel, and recalibrating the feature-map weights through a gating mechanism.
In the multi-scale lightweight image classification method, the loss function is expressed as:
Lfl = −α·y·(1−y′)^γ·log(y′) − (1−α)·(1−y)·y′^γ·log(1−y′)
wherein Lfl is the loss function value, α is a balance factor, γ is a selection factor, y is the label value of a sample of the training sample set, and y′ is the predicted output label value of the sigmoid activation function.
In the multi-scale lightweight image classification method, the training of the initial model is performed according to the training sample set and the loss function to obtain the training model, and the training is completed and the training model is obtained when the loss value of the loss function is smaller than a preset threshold value or the training times are equal to preset iteration times.
The present application also provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the multi-scale lightweight image classification method as described above when executing the computer program.
The present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the multi-scale lightweight image classification method as described above.
Advantageous effects:
The invention provides a multi-scale lightweight image classification method. The depthwise separable convolutional network reduces the training parameters of the initial model, improves feature-extraction efficiency and accelerates neural-network convergence, while the introduced SKNet module lets the trained model adaptively adjust its receptive field to the image and select the receptive-field size autonomously, improving the classification of images at different scales.
Drawings
Fig. 1 is a control flow chart of a multi-scale lightweight image classification method provided by the present invention.
Detailed Description
The present invention provides a multi-scale lightweight image classification method and a storage medium therefor. To make the objects, technical solutions and effects of the invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein merely illustrate the invention and do not limit its scope.
In the description of the present invention, the terms "first", "second" and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or an implicit number of technical features; their specific meanings will be understood by those of ordinary skill in the art according to the circumstances.
Referring to fig. 1, the present invention provides a multi-scale lightweight image classification method, which specifically includes the following steps:
s100, establishing a training sample set, a testing sample set and a verification sample set; the training sample set is used for training a model, the testing sample set is an image set to be tested, the testing sample set is a sample to be tested, and the verifying sample set is used for verifying the classification result of the testing sample set.
S200, establishing a depthwise separable convolutional neural network; the depthwise separable convolution greatly reduces the number of network parameters. The parameter cost is determined by the following quantities: D_K, the standard convolution kernel size; M, the number of input feature-map channels; D_F, the output feature-map size; and N, the number of output feature-map channels.
In one embodiment, D_K = 3, M = 2, D_F = 5 and N = 3. The reduction in network parameters is governed mainly by D_K; when D_K = 3, the network parameters and computational load are reduced by approximately a factor of 8 to 9.
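The parameter formula itself appears only as a figure in the original; the standard accounting for depthwise separable convolutions, which is consistent with the 8-to-9-fold figure for 3x3 kernels when N is large, can be sketched as follows (the channel counts 128 and 256 are illustrative):

```python
def conv_params(dk, m, n):
    """Weight counts for a standard conv vs. a depthwise separable conv."""
    standard = dk * dk * m * n        # one DKxDK filter per (input, output) channel pair
    separable = dk * dk * m + m * n   # depthwise DKxDK filters plus pointwise 1x1 mixing
    return standard, separable

# The ratio separable/standard equals 1/N + 1/DK^2; for DK = 3 and a large
# output channel count N it approaches 1/9, i.e. an 8- to 9-fold reduction.
std, sep = conv_params(dk=3, m=128, n=256)
reduction = std / sep
```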
S300, introducing an SKNet module into an intermediate convolutional layer of the depthwise separable convolutional neural network to obtain an initial model; the SKNet module is a lightweight embeddable module that allows the depthwise separable convolutional neural network to adaptively adjust its receptive-field size according to the multiple scales of the image data; by introducing the SKNet module, the initial model can adaptively adjust its receptive field and select the receptive-field size autonomously.
S400, establishing a loss function of the initial model; the loss function is used to iteratively train an initial model.
S500, training the initial model according to the training sample set and the loss function to obtain the trained model; the training sample set is fed into the initial model for forward propagation, and the network weights are adjusted iteratively through backpropagation (BP) of the loss function, yielding a trained model for image classification.
S600, classifying the test sample set by the training model, and comparing the classification result with the class label of the verification sample set to obtain the classification accuracy of the test sample set.
Further, referring to fig. 1, establishing the training, test and verification sample sets includes step S01, performing expansion preprocessing on the image data set of the training sample set; expanding the training images reduces the influence of factors such as color cast and lighting on the training result.
Further, referring to fig. 1, the expansion preprocessing includes image cropping, image flipping, image scaling, brightness adjustment, contrast adjustment, hue adjustment, saturation adjustment, and grayscale adjustment; these operations enlarge the training sample set and reduce the influence of color and lighting variations on the training result.
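Two of the listed expansion operations, horizontal flipping and brightness adjustment, can be sketched on a grayscale image stored as a nested list. This is a minimal illustration; a real pipeline would use an image-processing library:

```python
def hflip(img):
    """Horizontally flip a grayscale image given as a list of pixel rows."""
    return [row[::-1] for row in img]

def adjust_brightness(img, delta):
    """Shift every pixel value by delta, clamping to the 0-255 range."""
    return [[max(0, min(255, p + delta)) for p in row] for row in img]

img = [[10, 20, 30],
       [40, 50, 60]]
# An expanded set keeps the original plus augmented copies.
expanded = [img, hflip(img), adjust_brightness(img, 40)]
```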
Further, the depthwise separable convolutional neural network is composed of several depthwise separable convolution modules; each module connects, in sequence, a depthwise convolutional layer with a 3x3 kernel, a normalization layer, a rectified linear unit (ReLU) layer, a pointwise convolutional layer with a 1x1 kernel, another normalization layer and another ReLU layer, which greatly reduces the parameter and computation cost of the network.
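The depthwise-then-pointwise structure of one module can be sketched numerically. For brevity this shows only the two convolutions with "valid" padding, omitting the normalization and ReLU layers:

```python
def depthwise_conv(x, kernels):
    """Apply one k x k filter per channel. x: [M][H][W], kernels: [M][k][k]."""
    m, k = len(x), len(kernels[0])
    h, w = len(x[0]), len(x[0][0])
    oh, ow = h - k + 1, w - k + 1
    return [[[sum(x[c][i + a][j + b] * kernels[c][a][b]
                  for a in range(k) for b in range(k))
              for j in range(ow)] for i in range(oh)] for c in range(m)]

def pointwise_conv(x, weights):
    """1x1 convolution mixing M input channels into N outputs. weights: [N][M]."""
    m = len(x)
    oh, ow = len(x[0]), len(x[0][0])
    return [[[sum(weights[n][c] * x[c][i][j] for c in range(m))
              for j in range(ow)] for i in range(oh)] for n in range(len(weights))]

# Two 3x3 input channels of ones, all-ones 3x3 depthwise filters:
x = [[[1] * 3 for _ in range(3)] for _ in range(2)]
dw = depthwise_conv(x, [[[1] * 3 for _ in range(3)] for _ in range(2)])
out = pointwise_conv(dw, [[1, 1]])   # one output channel summing both branches
```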
Further, introducing the SKNet module into an intermediate convolutional layer of the depthwise separable convolutional neural network to obtain the initial model comprises: convolving the feature maps within the same convolutional layer with several depthwise separable networks respectively, adding the convolved feature maps pixel by pixel, and recalibrating the feature-map weights through a gating mechanism; in this way the initial model can select the size of its receptive field autonomously.
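The fuse-and-select step above can be sketched for two branches. The softmax gate below stands in for SKNet's fully connected attention layers, which the patent does not detail, so the gate logits here are assumed inputs rather than the module's real derivation:

```python
import math

def sk_select(u1, u2, z1, z2):
    """Blend two branch feature values per channel using softmax gate weights.

    u1, u2: per-channel branch outputs (after pixel-wise fusion and pooling in
    the real module); z1, z2: per-channel gate logits. Real SKNet computes the
    logits from the fused features via fully connected layers.
    """
    selected, weights = [], []
    for a, b, za, zb in zip(u1, u2, z1, z2):
        e1, e2 = math.exp(za), math.exp(zb)
        w1, w2 = e1 / (e1 + e2), e2 / (e1 + e2)   # softmax over the two branches
        weights.append((w1, w2))
        selected.append(w1 * a + w2 * b)          # recalibrated feature value
    return selected, weights

selected, weights = sk_select([1.0, 2.0], [3.0, 4.0], [0.0, 2.0], [0.0, 0.0])
```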
Further, the loss function is expressed as:
Lfl = −α·y·(1−y′)^γ·log(y′) − (1−α)·(1−y)·y′^γ·log(1−y′)
where Lfl is the loss function value, α is the balance factor, γ is the selection factor, y is the label value of a sample of the training sample set, and y′ is the predicted output label value produced by the sigmoid activation function.
When the selection factor γ is greater than 0, then for positive samples, the closer the predicted output label value y′ is to 1 (i.e., the sample is easy), the smaller the loss value Lfl, and the closer y′ is to 0 (i.e., the sample is hard), the larger Lfl. For negative samples, the closer y′ is to 0 (easy sample), the smaller Lfl, and the closer y′ is to 1 (hard sample), the larger Lfl. The loss therefore down-weights easy samples and focuses training on hard ones.
In one embodiment, the balance factor α is 0.25 and the selection factor γ is 2.
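With the stated values α = 0.25 and γ = 2, the loss (a binary focal loss, with the symbols following the definitions above and the superscripts restored from the garbled printing) can be sketched as:

```python
import math

def focal_loss(y, y_pred, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss: y is the 0/1 label, y_pred the sigmoid output."""
    y_pred = min(max(y_pred, eps), 1 - eps)   # clamp to avoid log(0)
    pos = -alpha * y * (1 - y_pred) ** gamma * math.log(y_pred)
    neg = -(1 - alpha) * (1 - y) * y_pred ** gamma * math.log(1 - y_pred)
    return pos + neg

easy_pos = focal_loss(1, 0.9)   # confident correct positive: small loss
hard_pos = focal_loss(1, 0.1)   # confident wrong positive: large loss
```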
Further, training the initial model according to the training sample set and the loss function to obtain a training model, wherein when the loss value of the loss function is smaller than a preset threshold value or the training times are equal to preset iteration times, training is completed and the training model is obtained; in one embodiment, the preset threshold is 0.5, and the preset number of iterations is 1000.
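The stopping rule of this step (loss below the preset threshold of 0.5, or the preset cap of 1000 iterations reached) can be sketched with a toy gradient-descent loop; the quadratic objective is purely illustrative and stands in for the network's focal loss:

```python
def train(initial_w, lr=0.1, threshold=0.5, max_iters=1000):
    """Gradient descent on a toy loss, stopping per the patent's two criteria."""
    w = initial_w
    for step in range(1, max_iters + 1):
        loss = (w - 3.0) ** 2          # stand-in for the network's loss value
        if loss < threshold:           # criterion 1: loss below preset threshold
            return w, loss, step
        grad = 2.0 * (w - 3.0)
        w -= lr * grad                 # backpropagation-style weight update
    return w, loss, max_iters          # criterion 2: preset iteration count reached

w, final_loss, steps = train(initial_w=10.0)
```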
In conclusion, the depthwise separable convolutional network reduces the training parameters of the initial model, improves feature-extraction efficiency and accelerates neural-network convergence, while the introduced SKNet module lets the trained model adaptively adjust its receptive field to the image and select the receptive-field size autonomously, improving the classification of images at different scales.
The present application also provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the multi-scale lightweight image classification method as described above when executing the computer program.
The present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the multi-scale lightweight image classification method as described above.
It should be understood that equivalents and modifications of the technical solution and its inventive concept may occur to those skilled in the art, and all such modifications and alterations should fall within the protection scope of the present invention.
Claims (9)
1. A multi-scale lightweight image classification method, characterized by comprising the following steps:
establishing a training sample set, a test sample set and a verification sample set;
establishing a depthwise separable convolutional neural network;
introducing an SKNet module into an intermediate convolutional layer of the depthwise separable convolutional neural network to obtain an initial model;
establishing a loss function for the initial model;
training the initial model according to the training sample set and the loss function to obtain a trained model; and
classifying the test sample set with the trained model, and comparing the classification result with the class labels of the verification sample set to obtain the classification accuracy of the test sample set.
2. The multi-scale lightweight image classification method according to claim 1, wherein establishing the training sample set, the test sample set and the verification sample set comprises performing expansion preprocessing on the image data set of the training sample set.
3. The multi-scale lightweight image classification method according to claim 2, wherein the expansion preprocessing comprises image cropping, image flipping, image scaling, brightness adjustment, contrast adjustment, hue adjustment, saturation adjustment, and grayscale adjustment.
4. The multi-scale lightweight image classification method according to claim 1, characterized in that the depthwise separable convolutional neural network is composed of several depthwise separable convolution modules.
5. The multi-scale lightweight image classification method according to claim 1, wherein introducing an SKNet module into an intermediate convolutional layer of the depthwise separable convolutional neural network to obtain an initial model comprises: convolving the feature maps within the same convolutional layer with several depthwise separable networks respectively, adding the convolved feature maps pixel by pixel, and recalibrating the feature-map weights through a gating mechanism.
6. The multi-scale lightweight image classification method according to claim 1, characterized in that the loss function is expressed as:
Lfl = −α·y·(1−y′)^γ·log(y′) − (1−α)·(1−y)·y′^γ·log(1−y′)
wherein Lfl is the loss function value, α is a balance factor, γ is a selection factor, y is the label value of a sample of the training sample set, and y′ is the predicted output label value of the sigmoid activation function.
7. The multi-scale lightweight image classification method according to claim 1, wherein the training of the initial model according to the training sample set and the loss function to obtain the training model comprises completing the training and obtaining the training model when a loss value of the loss function is less than a preset threshold or the training number is equal to a preset iteration number.
8. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the multi-scale lightweight image classification method of any of claims 1-7.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the multi-scale lightweight image classification method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110062780.2A CN112651468B (en) | 2021-01-18 | 2021-01-18 | Multi-scale lightweight image classification method and storage medium thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112651468A true CN112651468A (en) | 2021-04-13 |
CN112651468B CN112651468B (en) | 2024-06-04 |
Family
ID=75368280
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110062780.2A Active CN112651468B (en) | 2021-01-18 | 2021-01-18 | Multi-scale lightweight image classification method and storage medium thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112651468B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113205534A (en) * | 2021-05-17 | 2021-08-03 | 广州大学 | Retinal vessel segmentation method and device based on U-Net + |
CN113435409A (en) * | 2021-07-23 | 2021-09-24 | 北京地平线信息技术有限公司 | Training method and device of image recognition model, storage medium and electronic equipment |
CN113469249A (en) * | 2021-06-30 | 2021-10-01 | 阿波罗智联(北京)科技有限公司 | Image classification model training method, classification method, road side equipment and cloud control platform |
CN114926657A (en) * | 2022-06-09 | 2022-08-19 | 山东财经大学 | Method and system for detecting saliency target |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108898579A (en) * | 2018-05-30 | 2018-11-27 | 腾讯科技(深圳)有限公司 | A kind of image definition recognition methods, device and storage medium |
US20190205643A1 (en) * | 2017-12-29 | 2019-07-04 | RetailNext, Inc. | Simultaneous Object Localization And Attribute Classification Using Multitask Deep Neural Networks |
CN110633738A (en) * | 2019-08-30 | 2019-12-31 | 杭州电子科技大学 | Rapid classification method for industrial part images |
CN111191736A (en) * | 2020-01-05 | 2020-05-22 | 西安电子科技大学 | Hyperspectral image classification method based on depth feature cross fusion |
CN111311538A (en) * | 2019-12-28 | 2020-06-19 | 北京工业大学 | Multi-scale lightweight road pavement detection method based on convolutional neural network |
WO2020156028A1 (en) * | 2019-01-28 | 2020-08-06 | 南京航空航天大学 | Outdoor non-fixed scene weather identification method based on deep learning |
CN111816156A (en) * | 2020-06-02 | 2020-10-23 | 南京邮电大学 | Many-to-many voice conversion method and system based on speaker style feature modeling |
CN111882554A (en) * | 2020-08-06 | 2020-11-03 | 桂林电子科技大学 | SK-YOLOv 3-based intelligent power line fault detection method |
CN111914797A (en) * | 2020-08-17 | 2020-11-10 | 四川大学 | Traffic sign identification method based on multi-scale lightweight convolutional neural network |
WO2020232840A1 (en) * | 2019-05-23 | 2020-11-26 | 厦门市美亚柏科信息股份有限公司 | Vehicle multi-attribute identification method and device employing neural network structure search, and medium |
CN112101190A (en) * | 2020-09-11 | 2020-12-18 | 西安电子科技大学 | Remote sensing image classification method, storage medium and computing device |
CN112164065A (en) * | 2020-09-27 | 2021-01-01 | 华南理工大学 | Real-time image semantic segmentation method based on lightweight convolutional neural network |
CN114120036A (en) * | 2021-11-23 | 2022-03-01 | 中科南京人工智能创新研究院 | Lightweight remote sensing image cloud detection method |
Non-Patent Citations (1)
Title |
---|
Yang Xiuqin; Zhang Huaxiong: "Art image classification with a dual-kernel squeeze-and-excitation neural network", Journal of Image and Graphics (中国图象图形学报), no. 05, 16 May 2020 (2020-05-16), pages 967-976 *
Also Published As
Publication number | Publication date |
---|---|
CN112651468B (en) | 2024-06-04 |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||