CN111222530A - Fine-grained image classification method, system, device and storage medium - Google Patents

Fine-grained image classification method, system, device and storage medium

Info

Publication number
CN111222530A
CN111222530A (application CN201910972670.2A)
Authority
CN
China
Prior art keywords
image
local
fine
grained
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910972670.2A
Other languages
Chinese (zh)
Inventor
许广廷
张朝婷
张洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Smart Connections Co ltd
Original Assignee
Smart Connections Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Smart Connections Co ltd filed Critical Smart Connections Co ltd
Priority to CN201910972670.2A priority Critical patent/CN111222530A/en
Publication of CN111222530A publication Critical patent/CN111222530A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Abstract

The invention discloses a fine-grained image classification method, system, device and storage medium, wherein the method comprises the following steps: acquiring image information to be classified; inputting the image information into a recognition model trained with local regions for fine-grained recognition, and outputting a recognition and classification result. Because the invention uses local regions for enhanced model training, the model is better suited to fine-grained recognition; when two commodities are highly similar, the recognition model can more effectively capture the key distinguishing information in the commodity images, so the commodities can be recognized and classified quickly and accurately. The invention can be widely applied in the field of image data processing.

Description

Fine-grained image classification method, system, device and storage medium
Technical Field
The present invention relates to the field of image data processing, and in particular, to a method, a system, an apparatus, and a storage medium for classifying fine-grained images.
Background
Commodity recognition technology based on image recognition and object detection algorithms has great application potential in the fields of new retail and unmanned stores. For example, with commodity recognition, items can be recorded automatically as a customer selects them; combined with mobile payment, customers need not queue at the exit to pay, which improves the operating efficiency of the merchant. However, general object detection algorithms rely mainly on content such as shape, texture and color. Common commodity packages within the same category have very regular shapes (often rectangular, cylindrical, and so on) and can be distinguished only by local content differences, so such algorithms often perform poorly on them.
Disclosure of Invention
In order to solve one of the above technical problems, an object of the present invention is to provide a fine-grained image classification method, system, device and storage medium with better recognition and classification effects.
The first technical scheme adopted by the invention is as follows:
a fine-grained image classification method comprises the following steps:
acquiring image information to be classified;
and inputting the image information into an identification model trained by adopting a local area for fine-grained identification, and outputting an identification classification result.
Further, the method also comprises a step of establishing the identification model, wherein the step of establishing the identification model specifically comprises the following steps:
acquiring an input image, and extracting an image local area of the input image;
carrying out overlapping area detection on the local areas of the images, and carrying out merging processing on the overlapping local areas of the images;
calculating the image complexity of the combined image local area, and screening a plurality of image local areas according to the calculation result;
grouping the image local areas obtained by screening according to a preset mode to obtain a plurality of local image sets;
and training the neural network by combining the input image, the local image set and the preset loss function to obtain a recognition model.
Further, the step of extracting the image local area of the input image specifically includes:
and extracting an image local area of the input image by adopting a selective search algorithm.
Further, the step of detecting an overlapping area of the local areas of the images and merging the overlapping local areas of the images includes:
and calculating the degree of overlap of the image local regions using intersection over union (IoU), and merging the corresponding image local regions when the detected degree of overlap is greater than a preset value.
Further, the step of calculating the image complexity of the merged image local regions and screening the plurality of image local regions according to the calculation result specifically includes the following steps:
calculating the image complexity of the local region of the merged image by adopting an image entropy algorithm;
and sorting the image local areas according to the calculation result, and acquiring a plurality of image local areas according to the sorting sequence.
Further, the step of obtaining the recognition model after training the neural network by combining the input image, the local image set and the preset loss function specifically includes the following steps:
respectively inputting the input image and the local image set into a preset neural network for feature vector extraction to obtain a plurality of feature vectors;
and splicing the obtained feature vectors, and training the neural network by combining a loss function to obtain a recognition model.
Further, the loss function adopts a cross entropy loss function.
The second technical scheme adopted by the invention is as follows:
a fine-grained image classification system comprising:
the input module is used for acquiring image information to be classified;
and the recognition and classification module is used for inputting the image information into a recognition model trained by adopting a local area to perform fine-grained recognition and then outputting a recognition and classification result.
The third technical scheme adopted by the invention is as follows:
a fine-grained image classification apparatus comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
The fourth technical scheme adopted by the invention is as follows:
a storage medium having stored therein processor-executable instructions for performing the method as described above when executed by a processor.
The invention has the beneficial effects that: according to the invention, the local area is adopted for enhanced model training, so that the model is more suitable for fine-grained identification, and when two commodities are highly similar, the identification model can more effectively grasp key distinguishing information in the commodity image, thereby accurately identifying and classifying the commodities.
Drawings
FIG. 1 is a flow chart of the steps of a fine-grained image classification method of the present invention;
FIG. 2 is a flowchart illustrating the steps of building a recognition model in an exemplary embodiment;
fig. 3 is a block diagram of a fine-grained image classification system according to the present invention.
Detailed Description
As shown in fig. 1, the present embodiment provides a fine-grained image classification method, including the following steps:
and S1, establishing a recognition model.
And S2, acquiring the image information to be classified.
And S3, inputting the image information into an identification model which is trained by adopting a local area for fine-grained identification, and outputting an identification classification result.
This embodiment specifically adopts fine-grained image recognition technology, which strengthens attention to the detailed parts of an image, so that sub-categories under the same broad commodity category can be distinguished, for example the variants of a single brand of shampoo (oil-control, refreshing, and so on), whose package images are highly similar. Existing common fine-grained image recognition methods, however, require manually labeling the local image regions of an object that are easily recognized by the algorithm before training the recognition model. In contrast, this embodiment uses local regions for enhanced model training, which makes the model better suited to fine-grained recognition: when two commodities are highly similar, the recognition model can more effectively capture the key distinguishing information in the commodity image, so the commodities can be recognized and classified quickly and accurately. The image information includes commodity pictures, advertisement pictures, and the like.
Referring to fig. 2, step S1 specifically includes steps S11 to S15:
s11, acquiring an input image, and extracting an image local area of the input image;
s12, detecting the overlapping area of the local image areas and merging the overlapping local image areas;
s13, calculating the image complexity of the combined image local areas, and screening a plurality of image local areas according to the calculation result;
s14, grouping the image local areas obtained by screening according to a preset mode to obtain a plurality of local image sets;
and S15, training the neural network by combining the input image, the local image set and the preset loss function to obtain a recognition model.
Wherein the step S11 specifically includes: the method comprises the steps of obtaining an input image, and extracting an image local area of the input image by adopting a selective search algorithm.
In existing common fine-grained image recognition methods, the local image regions of an object that are easily recognized by the algorithm must be labeled manually before the recognition model is trained. This approach requires considerable manpower for fairly complex sample annotation, and the annotators must also have some understanding of the content the algorithm is sensitive to; both factors greatly increase labor cost. In this embodiment, image local regions are extracted automatically with a selective search algorithm, effectively avoiding the cost increase caused by manual labeling.
Wherein the step S12 specifically includes: calculating the degree of overlap of the image local regions using intersection over union (IoU), and merging the corresponding image local regions when the detected degree of overlap is greater than a preset value.
The step S13 specifically includes steps A1 to A2:
a1, calculating the image complexity of the local area of the merged image by adopting an image entropy algorithm;
and A2, sorting the image local areas according to the calculation result, and acquiring a plurality of image local areas according to the sorting order.
The step S15 specifically includes steps B1 to B2:
b1, respectively inputting the input image and the local image set into a preset neural network for feature vector extraction to obtain a plurality of feature vectors;
and B2, splicing the obtained feature vectors, and training the neural network by combining the loss function to obtain a recognition model. The loss function adopts a cross entropy loss function.
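As an illustration, the model-building steps S11 to S15 can be sketched as the following pipeline. This is a minimal sketch with hypothetical, simplified helpers (a fixed grid stands in for selective search, the entropy ranking is a pass-through, and no actual network training is performed); it is not the patent's implementation.

```python
# Minimal runnable sketch of model-building steps S11-S14. Every helper is a
# simplified, hypothetical stand-in, not the patent's actual implementation.

def extract_regions(image):
    # S11 stand-in: a fixed 3x3 grid of (x, y, w, h) boxes instead of selective search
    h, w = len(image), len(image[0])
    bh, bw = h // 3, w // 3
    return [(c * bw, r * bh, bw, bh) for r in range(3) for c in range(3)]

def merge_overlapping(regions, iou_threshold=0.6):
    # S12 stand-in: grid boxes never overlap, so there is nothing to merge here
    return regions

def top_by_entropy(regions, image, n_keep):
    # S13 stand-in: real code would rank regions by two-dimensional image entropy
    return regions[:n_keep]

def group_by_position(regions, n_sets):
    # S14: split into n_sets chunks; real code orders regions by position first
    size = -(-len(regions) // n_sets)  # ceiling division
    return [regions[i:i + size] for i in range(0, len(regions), size)]

def build_local_sets(image, n_keep=9, n_sets=3):
    # S11-S14: produce the local image sets that, together with the input
    # image, would feed the CNN training of S15
    regions = extract_regions(image)
    regions = merge_overlapping(regions)
    regions = top_by_entropy(regions, image, n_keep)
    return group_by_position(regions, n_sets)
```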
Detailed Description of Embodiment(s) of the Invention
The above method is explained in detail below with reference to an example of shampoo product identification.
The first step is as follows:
after a shampoo image (namely an input image) is obtained and input, a selective search method (namely a selective search algorithm) is used for extracting the image, and a plurality of relevant areas are extracted according to comprehensive similarity on the basis of 3 parameters of color, texture and spatial overlapping. The similarity calculation uses the following formula 1:
s(ri, rj) = a1·scolor(ri, rj) + a2·stexture(ri, rj) + a3·sfill(ri, rj) (1)
wherein ri and rj respectively denote candidate regions computed over a 25-pixel image grid; scolor denotes the color similarity calculation function, stexture the texture similarity calculation function, and sfill the spatial-overlap similarity calculation function; a1, a2 and a3 denote the assigned weights, and in this embodiment a1 = a2 = a3 = 1 is set.
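The similarity combination of Equation (1) can be transcribed directly. In this sketch the three component scores are assumed to be precomputed numbers; selective search derives them from color histograms, texture gradients and bounding-box fill, which are omitted here.

```python
# Weighted region similarity of Equation (1):
#   s(ri, rj) = a1*s_color + a2*s_texture + a3*s_fill
# The component scores are assumed precomputed; the embodiment uses equal weights.

def combined_similarity(s_color, s_texture, s_fill, a1=1.0, a2=1.0, a3=1.0):
    return a1 * s_color + a2 * s_texture + a3 * s_fill

# With the embodiment's equal weights a1 = a2 = a3 = 1:
s = combined_similarity(0.8, 0.5, 0.3)  # 0.8 + 0.5 + 0.3 = 1.6
```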
The second step:
The remaining image regions are merged based on their overlap. Since the regions extracted by the selective search method are all rectangular, in this embodiment the IoU (intersection over union) between regions can be calculated using OpenCV's Rect, and regions whose IoU is greater than a preset value are merged. Based on practical experience, 0.6 is chosen as the preset value in this embodiment. After merging, the number of image regions is reduced.
The third step:
in the obtained image partial region, there are some regions where the image partial region is blank, that is, the image region does not include corresponding character information or icon information, and therefore, such an image partial region has no information amount and is not valuable for identifying a product. A reasonable threshold value, namely a sample part with larger information content, can be adopted, and is more meaningful for subsequent calculation. Therefore, the complexity of the images in the reserved area (namely the images in the local area of the images) is calculated by using an image entropy algorithm, sorting and filtering are carried out based on the calculation result, only the N sample parts with the highest information content are reserved, and the specific numerical value of N can be adjusted in specific application. In this embodiment, N is set to 9, that is, the 9 local regions with the highest information amount are reserved, and the next round of processing is performed. In the image entropy calculation, two-dimensional image gray scale entropy value calculation is adopted, and specifically: h ═ Σ255 i=0Pi,jlogPiWherein i identifies the gray value of the current pixel (i is greater than or equal to 0 and less than or equal to 255), and j represents the gray value of the adjacent domain (j is greater than or equal to 0 and less than or equal to 255). Pi,j=f(I,j)/N2
The fourth step:
the sample integrity information may be lost when the characteristic value extraction is carried out on the local area of the single image, and in order to solve the problem, the distribution geometrical relationship of each characteristic area in the commodity image is relatively fixed, so that a plurality of areas are arrangedThe domains are combined to form a larger partitioned domain. In this embodiment, a principle based on the closest proximity of positions is adopted, N images are locally collected into 3 local sets, and a geometric relationship between local images in each set is retained. In the most immediate principle, we take the upper left corner of the complete image as the coordinate (0, 0), and then calculate the upper left corner coordinate (X) of each image areai,Yi) (1. ltoreq. i. ltoreq.N) based on (X)i+Yi) And (5) taking the values as region sequencing, and splicing adjacent regions in the sequencing to combine into 3 local image sets.
The fifth step:
and respectively inputting the input image and the 3 groups of local area set images into 4 independent CNN networks for feature vector extraction, splicing the extracted 4 feature vectors, introducing the spliced 4 feature vectors into a full-connection layer, and obtaining an evaluation model through cross entropy loss. In the embodiment, a VGG16 network is adopted for feature vector extraction. In order to meet the requirement of the VGG network, the complete image and the local region set image need to be adjusted to 224 × 3 images, the original proportion of the images is maintained in the normalization process, and the insufficient regions need to be filled. In the feature vector extraction, a convolution kernel of 3x3 is selected, the step is set to be 1, and all pooling layer parameters are 2 x. After the second fully connected layer, we can obtain a feature vector of 1 x 4096. After obtaining the feature vectors of the whole image and 3 groups of local feature set vectors, 4 feature vectors are spliced, and cross entropy loss is adopted to evaluate the spliced feature vectors. In this example, we use the cross entropy loss function as: l ═ log (1+ e)-s)。
In specific practice, with this method two kinds of Head & Shoulders shampoo (one the smoothing type, the other the refreshing oil-control type; their package appearance is identical and most of the texture is similar) can be rapidly and accurately recognized, classified and displayed.
In summary, the method of the present embodiment has at least the following advantages over the existing methods:
(1) By introducing local regions for enhanced model training, the method of this embodiment can perform fine-grained recognition.
(2) Local features are selected automatically with the selective search method, greatly reducing labor cost.
(3) Feature regions are screened by information richness, and the geometric relationships between them are preserved as constraints, greatly improving recognition accuracy.
(4) Enhanced local feature extraction is implemented with a lightweight VGG16 network, enabling fast and accurate commodity recognition.
As shown in fig. 3, the present embodiment further provides a fine-grained image classification system, including:
the input module is used for acquiring image information to be classified;
and the recognition and classification module is used for inputting the image information into a recognition model trained by adopting a local area to perform fine-grained recognition and then outputting a recognition and classification result.
The fine-grained image classification system of the embodiment can execute the fine-grained image classification method provided by the embodiment of the method of the invention, can execute any combination implementation steps of the embodiment of the method, and has corresponding functions and beneficial effects of the method.
The present embodiment further provides a fine-grained image classification device, including:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
The fine-grained image classification device of the embodiment can execute the fine-grained image classification method provided by the method embodiment of the invention, can execute any combination implementation steps of the method embodiment, and has corresponding functions and beneficial effects of the method.
The present embodiments also provide a storage medium having stored therein processor-executable instructions, which when executed by a processor, are configured to perform the method as described above.
The storage medium of this embodiment may execute the fine-grained image classification method provided by the method embodiment of the present invention, may execute any combination of the implementation steps of the method embodiment, and has corresponding functions and advantages of the method.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A fine-grained image classification method is characterized by comprising the following steps:
acquiring image information to be classified;
and inputting the image information into an identification model trained by adopting a local area for fine-grained identification, and outputting an identification classification result.
2. The fine-grained image classification method according to claim 1, further comprising a step of establishing a recognition model, wherein the step of establishing a recognition model specifically comprises the steps of:
acquiring an input image, and extracting an image local area of the input image;
carrying out overlapping area detection on the local areas of the images, and carrying out merging processing on the overlapping local areas of the images;
calculating the image complexity of the combined image local area, and screening a plurality of image local areas according to the calculation result;
grouping the image local areas obtained by screening according to a preset mode to obtain a plurality of local image sets;
and training the neural network by combining the input image, the local image set and the preset loss function to obtain a recognition model.
3. The fine-grained image classification method according to claim 2, wherein the step of extracting the image local region of the input image specifically comprises:
and extracting an image local area of the input image by adopting a selective search algorithm.
4. The fine-grained image classification method according to claim 2, wherein the step of detecting overlapping regions of the local regions of the images and merging the overlapping local regions of the images includes:
and calculating the degree of overlap of the image local regions using intersection over union (IoU), and merging the corresponding image local regions when the detected degree of overlap is greater than a preset value.
5. The fine-grained image classification method according to claim 2, wherein the step of calculating the image complexity of the combined image local regions and screening the plurality of image local regions according to the calculation result specifically comprises the following steps:
calculating the image complexity of the local region of the merged image by adopting an image entropy algorithm;
and sorting the image local areas according to the calculation result, and acquiring a plurality of image local areas according to the sorting sequence.
6. The fine-grained image classification method according to claim 2, wherein the step of obtaining the recognition model after training the neural network by combining the input image, the local image set, and the preset loss function specifically comprises the steps of:
respectively inputting the input image and the local image set into a preset neural network for feature vector extraction to obtain a plurality of feature vectors;
and splicing the obtained feature vectors, and training the neural network by combining a loss function to obtain a recognition model.
7. A fine-grained image classification method according to claim 6, characterized in that the loss function is a cross-entropy loss function.
8. A fine-grained image classification system, comprising:
the input module is used for acquiring image information to be classified;
and the recognition and classification module is used for inputting the image information into a recognition model trained by adopting a local area to perform fine-grained recognition and then outputting a recognition and classification result.
9. A fine-grained image classification device characterized by comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement a fine-grained image classification method as recited in any one of claims 1-7.
10. A storage medium having stored therein processor-executable instructions, which when executed by a processor, are configured to perform the method of any one of claims 1-7.
CN201910972670.2A 2019-10-14 2019-10-14 Fine-grained image classification method, system, device and storage medium Pending CN111222530A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910972670.2A CN111222530A (en) 2019-10-14 2019-10-14 Fine-grained image classification method, system, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910972670.2A CN111222530A (en) 2019-10-14 2019-10-14 Fine-grained image classification method, system, device and storage medium

Publications (1)

Publication Number Publication Date
CN111222530A true CN111222530A (en) 2020-06-02

Family

ID=70828956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910972670.2A Pending CN111222530A (en) 2019-10-14 2019-10-14 Fine-grained image classification method, system, device and storage medium

Country Status (1)

Country Link
CN (1) CN111222530A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111524137A (en) * 2020-06-19 2020-08-11 平安科技(深圳)有限公司 Cell identification counting method and device based on image identification and computer equipment
CN115100509A (en) * 2022-07-15 2022-09-23 山东建筑大学 Image identification method and system based on multi-branch block-level attention enhancement network
CN115620052A (en) * 2022-10-08 2023-01-17 广州市玄武无线科技股份有限公司 Fine-grained commodity detection method, system, terminal equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170287170A1 (en) * 2016-04-01 2017-10-05 California Institute Of Technology System and Method for Locating and Performing Fine Grained Classification from Multi-View Image Data
CN109086792A (en) * 2018-06-26 2018-12-25 上海理工大学 Based on the fine granularity image classification method for detecting and identifying the network architecture
CN109145979A (en) * 2018-08-15 2019-01-04 上海嵩恒网络科技股份有限公司 sensitive image identification method and terminal system
CN109359684A (en) * 2018-10-17 2019-02-19 苏州大学 Fine granularity model recognizing method based on Weakly supervised positioning and subclass similarity measurement
CN109522967A (en) * 2018-11-28 2019-03-26 广州逗号智能零售有限公司 A kind of commodity attribute recognition methods, device, equipment and storage medium
CN109711448A (en) * 2018-12-19 2019-05-03 华东理工大学 Based on the plant image fine grit classification method for differentiating key field and deep learning
CN110097067A (en) * 2018-12-25 2019-08-06 西北工业大学 It is a kind of based on layer into the Weakly supervised fine granularity image classification method of formula eigentransformation
US20190266451A1 (en) * 2015-01-19 2019-08-29 Ebay Inc. Fine-grained categorization

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190266451A1 (en) * 2015-01-19 2019-08-29 Ebay Inc. Fine-grained categorization
US20170287170A1 (en) * 2016-04-01 2017-10-05 California Institute Of Technology System and Method for Locating and Performing Fine Grained Classification from Multi-View Image Data
CN109086792A (en) * 2018-06-26 2018-12-25 上海理工大学 Based on the fine granularity image classification method for detecting and identifying the network architecture
CN109145979A (en) * 2018-08-15 2019-01-04 上海嵩恒网络科技股份有限公司 sensitive image identification method and terminal system
CN109359684A (en) * 2018-10-17 2019-02-19 苏州大学 Fine granularity model recognizing method based on Weakly supervised positioning and subclass similarity measurement
CN109522967A (en) * 2018-11-28 2019-03-26 广州逗号智能零售有限公司 A kind of commodity attribute recognition methods, device, equipment and storage medium
CN109711448A (en) * 2018-12-19 2019-05-03 华东理工大学 Based on the plant image fine grit classification method for differentiating key field and deep learning
CN110097067A (en) * 2018-12-25 2019-08-06 西北工业大学 It is a kind of based on layer into the Weakly supervised fine granularity image classification method of formula eigentransformation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张阳 (Zhang Yang): "Research on Fine-grained Image Classification Algorithms" *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111524137A (en) * 2020-06-19 2020-08-11 平安科技(深圳)有限公司 Cell identification counting method and device based on image identification and computer equipment
CN111524137B (en) * 2020-06-19 2024-04-05 平安科技(深圳)有限公司 Cell identification counting method and device based on image identification and computer equipment
CN115100509A (en) * 2022-07-15 2022-09-23 山东建筑大学 Image identification method and system based on multi-branch block-level attention enhancement network
CN115100509B (en) * 2022-07-15 2022-11-29 山东建筑大学 Image identification method and system based on multi-branch block-level attention enhancement network
CN115620052A (en) * 2022-10-08 2023-01-17 广州市玄武无线科技股份有限公司 Fine-grained commodity detection method, system, terminal equipment and storage medium
CN115620052B (en) * 2022-10-08 2023-07-04 广州市玄武无线科技股份有限公司 Fine granularity commodity detection method, system, terminal equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
US10860879B2 (en) Deep convolutional neural networks for crack detection from image data
CN111052126A (en) Pedestrian attribute identification and positioning method and convolutional neural network system
CN109165645A (en) A kind of image processing method, device and relevant device
CN111222530A (en) Fine-grained image classification method, system, device and storage medium
CN104715023A (en) Commodity recommendation method and system based on video content
CN110766095B (en) Defect detection method based on image gray level features
CN109740572A (en) A kind of human face in-vivo detection method based on partial color textural characteristics
CN112926652B (en) Fish fine granularity image recognition method based on deep learning
US11354549B2 (en) Method and system for region proposal based object recognition for estimating planogram compliance
CN112883926B (en) Identification method and device for form medical images
CN113221987A (en) Small sample target detection method based on cross attention mechanism
CN113627411A (en) Super-resolution-based commodity identification and price matching method and system
CN108647703B (en) Saliency-based classification image library type judgment method
CN108280469A (en) A kind of supermarket's commodity image recognition methods based on rarefaction representation
CN114862845A (en) Defect detection method, device and equipment for mobile phone touch screen and storage medium
CN111160225A (en) Human body analysis method and device based on deep learning
CN111340782B (en) Image marking method, device and system
JP2023123387A (en) Defect detection method and system
CN110956157A (en) Deep learning remote sensing image target detection method and device based on candidate frame selection
CN112070079B (en) X-ray contraband package detection method and device based on feature map weighting
CN108694398B (en) Image analysis method and device
CN111401415A (en) Training method, device, equipment and storage medium of computer vision task model
CN114612709A (en) Multi-scale target detection method guided by image pyramid characteristics
CN114241317A (en) Adaptive feature fusion detection method based on similar pest images under lamp

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination