CN112308688A - Size meter detection method suitable for e-commerce platform - Google Patents


Info

Publication number
CN112308688A
CN112308688A (application CN202011393485.7A)
Authority
CN
China
Prior art keywords
neural network
convolutional neural
network model
detection
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011393485.7A
Other languages
Chinese (zh)
Inventor
张翼翔
余卓
卞龙鹏
徐桦
石克阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Tanyu Technology Co ltd
Original Assignee
Hangzhou Weier Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Weier Network Technology Co ltd filed Critical Hangzhou Weier Network Technology Co ltd
Priority claimed from CN202011393485.7A
Publication of CN112308688A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/06 Buying, selling or leasing transactions
    • G06Q 30/0601 Electronic shopping [e-shopping]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Development Economics (AREA)
  • Evolutionary Biology (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a size-table detection method suitable for e-commerce platforms, comprising the following steps: training a convolutional neural network model for detection; inputting the picture to be examined into the convolutional neural network model; having the model output the position data of each detection box together with a confidence for that position data; and removing detection boxes whose confidence falls below a preset confidence threshold while merging detection boxes whose IOU exceeds a preset merging threshold. The benefit of the method is that it automatically identifies, among a product's pictures, those that contain a size table, replacing manual inspection with model-based judgement.

Description

Size meter detection method suitable for e-commerce platform
Technical Field
The application relates to methods for detecting size tables (size charts), and in particular to a size-table detection method suitable for e-commerce platforms.
Background
E-commerce platforms keep growing, and the number of stores keeps increasing. In categories such as clothing, shoes and underwear, consumers rely on the size table to shop without assistance, so enquiries about commodity sizes are frequent. The invention mainly targets merchants who use customer-service chatbots: to support intelligent question answering, the merchant's back-office staff must locate the size-table image among the detail-page images of every commodity and configure it manually in the question-answering system. From finding the size table to completing the configuration, this is a time-consuming, tedious and error-prone task, with high labour cost and low efficiency.
Disclosure of Invention
To remedy the deficiencies of the prior art, the application provides a size-table detection method suitable for e-commerce platforms, comprising the following steps: training a convolutional neural network model for detection; inputting the picture to be detected into the convolutional neural network model; having the model output the position data of each detection box together with a confidence for that position data; and removing detection boxes whose confidence is below a preset confidence threshold while merging detection boxes whose IOU exceeds a preset merging threshold.
Further, training the convolutional neural network model comprises: annotating detection boxes on training images that contain a size table.
Further, training the convolutional neural network model comprises: clustering the detection boxes of the training images.
Further, training the convolutional neural network model comprises: performing convolution operations on the training images.
Further, training the convolutional neural network model comprises: computing the loss between predicted boxes and ground-truth boxes.
Further, training the convolutional neural network model comprises: updating the parameters by back-propagating the loss.
Further, training the convolutional neural network model comprises: judging whether the model has converged and stopping training once it has.
Further, a rectangular box is used to mark the position and extent of the size table in each training picture.
Furthermore, the clustering algorithm applied to the detection boxes of the training images is K-means; once the number of clusters n is specified, n candidate boxes are generated automatically, yielding the width and height of each candidate box.
Further, the convolutional neural network model is a fully convolutional neural network model.
The advantage of the application is that the method, suited to e-commerce platforms, detects the pictures containing a size table among the product pictures by judging size tables automatically rather than manually.
Drawings
The accompanying drawings, which form a part of this application, provide a further understanding of the application and make its other features, objects and advantages more apparent. The drawings and their description illustrate embodiments of the invention without limiting it. In the drawings:
FIG. 1 is a block diagram illustrating the steps of convolutional neural network model training in a method for size table detection for an e-commerce platform according to an embodiment of the present application;
FIG. 2 is a block diagram illustrating the steps of a method for size table detection for an e-commerce platform according to an embodiment of the present application;
FIG. 3 is an architecture diagram of a convolutional neural network in a method of size table detection suitable for an e-commerce platform according to an embodiment of the present application;
FIG. 4 shows an example picture before and after processing by the size-table detection method suitable for an e-commerce platform according to an embodiment of the present application.
Detailed Description
To make the technical solutions better understood by those skilled in the art, the solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the application. All other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description, claims and drawings of this application distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. Data so used may be interchanged where appropriate, so that the embodiments described herein are capable of operation in sequences other than those illustrated or described. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, cover a non-exclusive inclusion: a process, method, system, article or apparatus that comprises a list of steps or elements is not necessarily limited to those expressly listed, but may include other steps or elements not expressly listed or inherent to it.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Referring to figs. 1 to 3, the size-table detection method of the present application, suitable for e-commerce platforms, comprises the following steps: training a convolutional neural network model for detection; inputting the picture to be detected into the model; having the model output the position data of each detection box and a confidence for that position data; and removing detection boxes whose confidence is below a preset confidence threshold while merging detection boxes whose IOU exceeds a preset merging threshold.
Specifically, training the convolutional neural network model comprises: annotating detection boxes on training images containing a size table; clustering those detection boxes; performing convolution operations on the training images; computing the loss between predicted and ground-truth boxes; updating the parameters by back-propagating the loss; and judging whether the model has converged, stopping training once it has.
As a specific scheme, when annotating a training image containing a size table, a rectangular box is used to mark the position and extent of the size table in the image.
In addition, as a specific scheme, the clustering algorithm applied to the detection boxes of the training images is K-means: once the number of clusters n is specified, n candidate boxes are generated automatically, yielding the width and height of each candidate box.
Preferably, the convolutional neural network model is a fully convolutional neural network model.
More specifically, as shown in fig. 1, the basic flow for building the convolutional neural network model is as follows. Step S110: annotate detection boxes on training images containing a size table, i.e. mark the true position and extent of the size table in each picture, using a rectangular box in an open-source tool such as labelimg. Step S120: cluster the detection boxes of the training images; with a cluster count of 12, for example, this yields width and height parameters for 12 candidate boxes. Step S130: perform convolution operations on the input picture. Step S140: after the convolutions, compute the loss between the predicted boxes and the ground-truth boxes and update the parameters by back-propagation. Step S150: stop training once the model converges, giving the detection model.
Specifically, in step S120, the aspect ratio and absolute size of the candidate boxes affect the accuracy of box regression during training, so the candidate boxes should cover the ground-truth detection boxes as closely as possible. The clustering algorithm used is therefore K-means: once the number of clusters n is specified, n candidate boxes are generated automatically, yielding the width and height of each candidate box.
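The K-means anchor clustering can be sketched in a few lines. The sketch below uses the 1 − IoU distance that is standard for YOLO-style anchor clustering; the patent does not spell out the distance, so treat that choice and the names `wh_iou` and `kmeans_anchors` as illustrative assumptions, not the patented procedure:

```python
import numpy as np

def wh_iou(boxes, anchors):
    """IoU between (w, h) pairs, assuming boxes share a common corner."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] \
          + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, n, iters=100, seed=0):
    """Cluster annotated-box (w, h) pairs into n anchor shapes,
    assigning each box to the anchor with the highest IoU (i.e. 1 - IoU distance)."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), n, replace=False)]
    for _ in range(iters):
        assign = np.argmax(wh_iou(boxes, anchors), axis=1)
        new = np.array([boxes[assign == k].mean(axis=0) if np.any(assign == k)
                        else anchors[k] for k in range(n)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors
```

With the patent's n = 12, `kmeans_anchors(boxes, 12)` would return twelve (width, height) pairs to use as candidate-box shapes.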
Specifically, in step S130, the input image size is 416 × 416, and the basic building block of the network is the residual structure, which allows the model to be made deeper. The backbone consists entirely of convolutional layers, i.e. it is a fully convolutional network, using two kernel sizes: 3 × 3 and 1 × 1. The 3 × 3 convolutions have stride 2 and thus downsample the feature map, while the 1 × 1 convolutions mainly increase or decrease the number of channels.
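The stride-2 downsampling halves the spatial size at every step, so the scale progression from a 416 × 416 input can be checked directly (a sketch; the helper name is ours, not the patent's):

```python
def downsample_sizes(input_size, n_steps):
    """Spatial size after each successive stride-2 downsampling convolution."""
    sizes = []
    s = input_size
    for _ in range(n_steps):
        s //= 2  # a stride-2 convolution halves height and width
        sizes.append(s)
    return sizes
```

`downsample_sizes(416, 5)` gives `[208, 104, 52, 26, 13]`; the last four values match the 104/52/26/13 feature-map scales discussed below.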
The backbone convolutions produce feature maps at several scales. Each scale has a different receptive field and therefore carries different information. A feature map with a small receptive field, generated by a shallower layer, contains less semantic but more structural information and suits small targets; conversely, a feature map with a large receptive field, generated by a deeper layer, contains more semantic but less structural information and suits large targets. Because the size of a size table is not fixed, and size tables are easily confused with other textual content, the application fuses feature maps of four different scales. As fig. 3 shows, the backbone extracts feature maps at four scales: 13 × 13, 26 × 26, 52 × 52 and 104 × 104, the 13 × 13 map having the largest receptive field.
As shown in fig. 3, four branches hang below the backbone, and features are fused with a feature-pyramid method; the 13 × 13 and 26 × 26 feature maps illustrate the scheme. The 13 × 13 × 1024 map from the backbone splits into two branches. The first branch passes through a 3 × 3 convolution and a 1 × 1 convolution to give a 13 × 13 × 75 map, the output at that scale. The second branch reduces and upsamples the 13 × 13 × 1024 map to 26 × 26 × 256 and fuses it with the backbone's 26 × 26 × 512 map to give a 26 × 26 × 768 result, the fusion being a channel-wise concatenation followed by convolution, which produces a 26 × 26 × 75 map as the output at the 26 × 26 scale. Continuing in this way yields four output feature maps of different scales.
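The shape bookkeeping of this fusion example can be verified with plain arrays. This is a sketch assuming nearest-neighbour 2× upsampling and channel concatenation; the patent gives the shapes and the channel concatenation but not the interpolation mode:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of an (H, W, C) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# Dummy tensors with the shapes from the fusion example in the text.
coarse = np.zeros((13, 13, 256))   # 13x13x1024 map after channel reduction
mid    = np.zeros((26, 26, 512))   # backbone feature map at the 26x26 scale
fused  = np.concatenate([upsample2x(coarse), mid], axis=-1)
assert fused.shape == (26, 26, 768)  # matches the 26x26x768 fusion result
```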
The detection method used in the present application is based on anchor points.
The specific scheme is as follows: after the feature maps are obtained, each is divided into a grid, and 3 candidate boxes of predetermined width and height are generated at the centre of every cell. During learning, the network judges whether an object is present in each candidate box of each scale's feature map; if so, box regression is applied to that candidate box to obtain a predicted detection box, otherwise the box is discarded. The candidate boxes come from step S120; since four feature maps of different scales are used and each carries three box shapes of different widths and heights, n = 12 in step S120.
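The grid scheme above fixes the candidate-box budget: each S × S feature map contributes S × S × 3 candidate boxes. A quick tally (illustrative arithmetic only, using the four scales from fig. 3):

```python
scales = [13, 26, 52, 104]   # the four feature-map scales
boxes_per_cell = 3           # three candidate shapes per grid cell
per_scale = [s * s * boxes_per_cell for s in scales]
total = sum(per_scale)
# Four scales x three shapes per scale is also why n = 12 in step S120.
n_anchors = len(scales) * boxes_per_cell
```

This gives `per_scale = [507, 2028, 8112, 32448]`, 43,095 candidate boxes in total, and `n_anchors = 12`.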
Detection boxes with small areas are assigned to the feature maps with small receptive fields, i.e. the shallow maps; conversely, detection boxes with large areas are assigned to the feature maps with large receptive fields.
Specifically, in step S140, the difference between the ground-truth and predicted detection boxes is computed with a loss function and the network parameters are updated accordingly, yielding more accurate predictions. The loss function is given by the formula shown in the original as an image (Figure 1). In that formula, K × K is the number of grid cells and M is the number of candidate boxes generated per cell, so K × K × M detection boxes are obtained in total. The factor (2 − wi · hi) increases the loss contributed by small detection boxes, making the detection of small size tables more robust; in practice this parameter can be tuned to the data. The four parameters x, y, w, h describe the position and size of a detection box.
Size-table detection, the subject of the invention, is a key link in automatic size-table configuration: it frees manual labour and improves efficiency. The existing solution, by contrast, uses an image classification algorithm to sort all commodity pictures into two categories, size-table and non-size-table.
Finally, the network parameters are updated by gradient descent, and the final model is obtained after convergence.
As a more specific scheme, any image may be input in step S210; it is convolved by the model obtained in step S150. In step S230, the outputs are the position, size and confidence score of each size-table detection box. In step S240, a confidence threshold and a merging threshold are set according to the circumstances: a detection box whose confidence falls below the confidence threshold is removed, and detection boxes whose mutual IOU exceeds the merging threshold are merged.
The IOU is computed as IOU = intersection(bbx1, bbx2) / union(bbx1, bbx2), i.e. the area of the intersection of the detection boxes divided by the area of their union.
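The IOU computation and the step-S240 post-processing can be sketched as follows. The patent says overlapping boxes are "merged" without specifying how; this sketch keeps the higher-confidence box of each overlapping pair (NMS-style), which is one plausible reading, and the function names are illustrative:

```python
def iou(b1, b2):
    """IOU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter)

def postprocess(boxes, conf_thresh=0.5, merge_thresh=0.6):
    """Boxes are (x1, y1, x2, y2, confidence). Drop low-confidence boxes,
    then resolve pairs whose IOU exceeds merge_thresh by keeping the
    higher-confidence box of the pair."""
    kept = sorted((b for b in boxes if b[4] >= conf_thresh),
                  key=lambda b: -b[4])
    out = []
    for b in kept:
        if all(iou(b[:4], o[:4]) < merge_thresh for o in out):
            out.append(b)
    return out
```

For example, two near-coincident boxes with confidences 0.9 and 0.8 collapse to the 0.9 box, while a box with confidence 0.3 is dropped outright.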
In the prior art, a size-table image is not a picture containing only a size table; it is mixed with other content such as photographs and text, which creates two difficulties. (1) For the classification task: size tables come in many styles, appear at no fixed position, and occupy no fixed proportion of the picture, so intra-class variation is large and inter-class variation is small, which makes classification hard. (2) For the subsequent configuration task: because the pictures classified as size tables are not pure, the size-table content is easily confused with other irrelevant text during parsing, so parsing is inaccurate.
In the application, locating the size table is a key link of the automatic size-table configuration function, and the image classification approach faces the obstacles above, namely small inter-class and large intra-class differences. Detecting on multi-scale feature maps with the size-table detection method improves both precision and recall. Detection also yields the size-table region directly, stripping away irrelevant interference and easing subsequent information extraction. Finally, the 1 × 1 convolution kernels in the network cut the parameter count substantially, and because the method is a one-stage detector it needs fewer steps than a two-stage detector; detection is therefore fast.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A size-table detection method suitable for an e-commerce platform, characterized by comprising the following steps:
training a convolutional neural network model for detection;
inputting the picture to be detected into the convolutional neural network model;
causing the convolutional neural network model to output the position data of a detection box and a confidence for that position data;
and removing the detection boxes whose confidence is lower than a preset confidence threshold and merging the detection boxes whose IOU is higher than a preset merging threshold.
2. The size-table detection method suitable for an e-commerce platform of claim 1, wherein:
training the convolutional neural network model comprises:
annotating a detection box on each training image containing a size table.
3. The size-table detection method suitable for an e-commerce platform of claim 2, wherein:
training the convolutional neural network model comprises:
clustering the detection boxes of the training images.
4. The size-table detection method suitable for an e-commerce platform of claim 3, wherein:
training the convolutional neural network model comprises:
performing convolution operations on the training images.
5. The size-table detection method suitable for an e-commerce platform of claim 4, wherein:
training the convolutional neural network model comprises:
computing the loss between predicted boxes and ground-truth boxes.
6. The size-table detection method suitable for an e-commerce platform of claim 5, wherein:
training the convolutional neural network model comprises:
updating the parameters by back-propagating the loss.
7. The size-table detection method suitable for an e-commerce platform of claim 6, wherein:
training the convolutional neural network model comprises:
judging whether the trained convolutional neural network model has converged and stopping training once it has.
8. The size-table detection method suitable for an e-commerce platform of claim 7, wherein:
a rectangular box is used to select the position and extent of the size table in each training picture.
9. The size-table detection method suitable for an e-commerce platform of claim 8, wherein:
the clustering algorithm used for clustering the detection boxes of the training images is K-means; after the number of clusters n is specified, n candidate boxes are generated automatically, yielding the width and height of each candidate box.
10. The size-table detection method suitable for an e-commerce platform of claim 9, wherein:
the convolutional neural network model is a fully convolutional neural network model.
CN202011393485.7A 2020-12-02 2020-12-02 Size meter detection method suitable for e-commerce platform Pending CN112308688A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011393485.7A CN112308688A (en) 2020-12-02 2020-12-02 Size meter detection method suitable for e-commerce platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011393485.7A CN112308688A (en) 2020-12-02 2020-12-02 Size meter detection method suitable for e-commerce platform

Publications (1)

Publication Number Publication Date
CN112308688A (en) 2021-02-02

Family

ID=74487284

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011393485.7A Pending CN112308688A (en) 2020-12-02 2020-12-02 Size meter detection method suitable for e-commerce platform

Country Status (1)

Country Link
CN (1) CN112308688A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117549314A (en) * 2024-01-09 2024-02-13 承德石油高等专科学校 Industrial robot intelligent control system based on image recognition

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170147905A1 (en) * 2015-11-25 2017-05-25 Baidu Usa Llc Systems and methods for end-to-end object detection
CN110298266A (en) * 2019-06-10 2019-10-01 天津大学 Deep neural network object detection method based on multiple dimensioned receptive field Fusion Features
CN110807422A (en) * 2019-10-31 2020-02-18 华南理工大学 Natural scene text detection method based on deep learning
CN110991435A (en) * 2019-11-27 2020-04-10 南京邮电大学 Express waybill key information positioning method and device based on deep learning
CN111199248A (en) * 2019-12-26 2020-05-26 东北林业大学 Clothing attribute detection method based on deep learning target detection algorithm
CN111310861A (en) * 2020-03-27 2020-06-19 西安电子科技大学 License plate recognition and positioning method based on deep neural network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170147905A1 (en) * 2015-11-25 2017-05-25 Baidu Usa Llc Systems and methods for end-to-end object detection
CN110298266A (en) * 2019-06-10 2019-10-01 天津大学 Deep neural network object detection method based on multiple dimensioned receptive field Fusion Features
CN110807422A (en) * 2019-10-31 2020-02-18 华南理工大学 Natural scene text detection method based on deep learning
CN110991435A (en) * 2019-11-27 2020-04-10 南京邮电大学 Express waybill key information positioning method and device based on deep learning
CN111199248A (en) * 2019-12-26 2020-05-26 东北林业大学 Clothing attribute detection method based on deep learning target detection algorithm
CN111310861A (en) * 2020-03-27 2020-06-19 西安电子科技大学 License plate recognition and positioning method based on deep neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张传伟; 李妞妞; 岳向阳; 杨满芝; 王睿; 丁宇鹏: "Traffic Sign Detection Based on the Improved YOLOv2 Algorithm" (基于改进YOLOv2算法的交通标志检测), Computer Systems & Applications (计算机系统应用), no. 06 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117549314A (en) * 2024-01-09 2024-02-13 承德石油高等专科学校 Industrial robot intelligent control system based on image recognition
CN117549314B (en) * 2024-01-09 2024-03-19 承德石油高等专科学校 Industrial robot intelligent control system based on image recognition

Similar Documents

Publication Publication Date Title
CN109523520B (en) Chromosome automatic counting method based on deep learning
US8965116B2 (en) Computer-aided assignment of ratings to digital samples of a manufactured web product
CN109711288A (en) Remote sensing ship detecting method based on feature pyramid and distance restraint FCN
CN113378686B (en) Two-stage remote sensing target detection method based on target center point estimation
CN110163208B (en) Scene character detection method and system based on deep learning
US20040213459A1 (en) Multispectral photographed image analyzing apparatus
CN106557778A (en) Generic object detection method and device, data processing equipment and terminal device
CN111860510B (en) X-ray image target detection method and device
CN115731164A (en) Insulator defect detection method based on improved YOLOv7
CN104346370A (en) Method and device for image searching and image text information acquiring
CN115272652A (en) Dense object image detection method based on multiple regression and adaptive focus loss
CN113033516A (en) Object identification statistical method and device, electronic equipment and storage medium
CN110363812A (en) A kind of image-recognizing method
CN117152484B (en) Small target cloth flaw detection method based on improved YOLOv5s
CN114565548A (en) Industrial defect identification method, system, computing device and storage medium
CN111242899A (en) Image-based flaw detection method and computer-readable storage medium
CN114998840B (en) Mouse target detection method based on deep cascade supervised learning
CN107392254A (en) A kind of semantic segmentation method by combining the embedded structural map picture from pixel
CN112308688A (en) Size meter detection method suitable for e-commerce platform
CN114332473A (en) Object detection method, object detection device, computer equipment, storage medium and program product
CN113762257A (en) Identification method and device for marks in makeup brand images
JP3064334B2 (en) Drawing processing method and apparatus
Busch et al. Automated verification of a topographic reference dataset: System design and practical results
CN114743045A (en) Small sample target detection method based on double-branch area suggestion network
CN114638989A (en) Fault classification visualization method based on target detection and fine-grained identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230530

Address after: 104058, No. 2-10, No. 311 Huangpu Avenue Middle, Tianhe District, Guangzhou City, Guangdong Province, 510000

Applicant after: Guangzhou Tanyu Technology Co.,Ltd.

Address before: 601-5, 1382 Wenyi West Road, Cangqian street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant before: Hangzhou Weier Network Technology Co.,Ltd.