CN110956174A - A method of identifying the part number - Google Patents

A method of identifying the part number

Info

Publication number
CN110956174A
Authority
CN
China
Prior art keywords
picture
numbering
area
region
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910454457.2A
Other languages
Chinese (zh)
Inventor
高越
邵蕾
马占宇
桂冠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN201910454457.2A priority Critical patent/CN110956174A/en
Publication of CN110956174A publication Critical patent/CN110956174A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition
    • G06V 30/14 Image acquisition
    • G06V 30/148 Segmentation of character regions
    • G06V 30/153 Segmentation of character regions using recognition of characters or words
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V 20/63 Scene text, e.g. street names

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for identifying a device number. A device picture containing a numbering region is acquired; a numbering region positioning network model locates the numbering region in the device picture; the located numbering region is extracted to obtain a numbering region picture; and the numbering region picture is input into a trained number recognition network model to obtain the device number. The invention solves the problem that numbers cannot be recognized region by region, and provides a device number recognition method with high accuracy and high recognition efficiency. The method does not need to run in a networked environment, which solves the problem that device numbers cannot be identified where no network is available.

Description

Device number identification method
Technical Field
The invention relates to a device number identification method, and belongs to the technical field of computer vision and image processing.
Background
In the prior art, device numbers are usually recorded by manual marking and counting, a process that wastes a large amount of manpower and material resources. With the continuous development of deep learning and big data, the field of image recognition has made breakthrough progress, and intelligent systems can now take over much of the manual work that such tasks used to require.
At present, various OCR character recognition systems are generally used to recognize characters in images. Although their recognition rate is high, they cannot accurately locate the characters; moreover, OCR character recognition systems need network access, and network speed generally cannot be guaranteed in a factory environment, which makes character recognition slow.
Disclosure of Invention
The present invention is directed to a method for identifying a device number, so as to address the drawbacks and defects of the prior art described above.
To achieve this purpose, the invention adopts the following technical solution:
acquiring a picture of the device to be tested containing the number area;
inputting the picture of the device to be tested into the trained numbering area positioning network model, and positioning the numbering area of the device; extracting the located numbering region, and acquiring a numbering region picture of the device; the numbering region positioning network model comprises a YOLO v3 network to which a numbering region category and a number category have been added;
inputting the picture of the numbering region into the trained number recognition network model to obtain the device number; the number recognition network model comprises a YOLO v3 network to which a numbering region category and a number category have been added.
The method for training the numbering area positioning network model comprises the following steps:
acquiring a device picture containing a numbering region;
marking the number area of the picture containing the number area;
and inputting the picture subjected to numbering region labeling into a numbering region positioning network model, and training the numbering region positioning network model.
Further, the method comprises labeling the numbering area of the picture with a bounding box using the labelImg labeling tool, so as to obtain the picture with the numbering area labeled.
The method for training the number recognition network model comprises the following steps:
extracting a numbering region part in the picture subjected to numbering region labeling, and acquiring a numbering region picture;
marking each number in the picture of the numbering area;
and inputting the picture of the numbering area subjected to digital labeling into a numbering recognition network model, and training the numbering recognition network model.
Further, the method comprises labeling each number in the numbering area picture with a bounding box using the labelImg labeling tool, wherein the category of each bounding box is the number inside it.
Further, the method comprises labeling the bounding box of each number in the numbering area picture on the condition that the bounding boxes do not overlap.
According to the device number identification method provided by the invention, the numbering area in the device picture is located by the numbering area positioning network model, the located numbering area picture is extracted, and the numbering area picture is input into the trained number recognition network model to obtain the device number. The invention solves the problem that numbers cannot be recognized region by region, provides a device number identification method with high accuracy and high recognition efficiency, and, because the method does not need to be carried out in a networked environment, solves the problem that device numbers cannot be identified where no network is available.
Drawings
Fig. 1 is a flowchart of a method for identifying a device number according to an embodiment of the present invention;
Fig. 2 is a diagram of a YOLOv3 network architecture according to an embodiment of the present invention;
Fig. 3 is an effect diagram of numbering regions of steel devices according to an embodiment of the present invention;
Fig. 4 is a diagram illustrating an effect of labeling each number in a picture of a numbered region according to an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
During localization, the YOLOv3 neural network obtains prior (anchor) boxes through a clustering method and, using these priors, predicts four coordinate values for each bounding box: t_x, t_y, t_w and t_h. A picture is divided into S × S cells; if the cell containing the target is offset from the upper-left corner of the image by (c_x, c_y), and its corresponding prior box has width p_w and height p_h, then the network predictions are:
b_x = σ(t_x) + c_x    (1)
b_y = σ(t_y) + c_y    (2)
b_w = p_w · e^(t_w)    (3)
b_h = p_h · e^(t_h)    (4)
where b_x and b_y are the coordinates of the center point of the predicted bounding box, b_w is the width of the predicted bounding box, b_h is its height, σ(·) is the sigmoid function applied to the predicted offsets, t_x and t_y denote the normalized center coordinates of the bounding box, and t_w and t_h denote its normalized width and height respectively.
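For concreteness, the following minimal Python sketch decodes one raw prediction into a bounding box according to equations (1) to (4); the function name, argument layout and example values are illustrative assumptions rather than part of the patent.

```python
import numpy as np

def decode_box(t_x, t_y, t_w, t_h, c_x, c_y, p_w, p_h):
    """Decode one YOLOv3 prediction into box center and size, equations (1)-(4).

    t_x, t_y, t_w, t_h : raw network outputs for this anchor
    c_x, c_y           : offset of the grid cell from the image's upper-left corner
    p_w, p_h           : width and height of the prior (anchor) box
    """
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    b_x = sigmoid(t_x) + c_x       # equation (1)
    b_y = sigmoid(t_y) + c_y       # equation (2)
    b_w = p_w * np.exp(t_w)        # equation (3)
    b_h = p_h * np.exp(t_h)        # equation (4)
    return b_x, b_y, b_w, b_h

# Example: cell (3, 5) of a 13 x 13 grid, prior box of size 2.1 x 3.4 (grid units)
print(decode_box(0.2, -0.1, 0.5, 0.3, c_x=3, c_y=5, p_w=2.1, p_h=3.4))
```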
The Darknet-53 network is the feature-extraction network used within the YOLOv3 network. Starting from a feature map produced by Darknet-53, a first feature map is obtained through 7 convolutions, and the first prediction is made on it. Then the output of the third-from-last convolutional layer is taken, passed through one convolution and one 2× upsampling, and concatenated with the 43rd convolutional feature; 7 further convolutions yield a second feature map, on which the second prediction is made. The output of the third-from-last convolutional layer of that branch is then taken, passed through one convolution and one 2× upsampling, and concatenated with the 26th convolutional feature; 7 further convolutions yield a third feature map, on which the third prediction is made.
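The upsample-and-concatenate pattern described above can be sketched in PyTorch as follows; this is a simplified illustration under assumed channel sizes and layer counts, not the exact network of the embodiment.

```python
import torch
import torch.nn as nn

def conv_bn_leaky(in_ch, out_ch, k):
    # Convolution followed by batch normalization and LeakyReLU, the basic YOLOv3 building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1),
    )

class DetectionBranch(nn.Module):
    """Simplified second-scale branch: 1x1 convolution, 2x upsampling, concatenation
    with an earlier backbone feature map, then a further convolution before prediction."""
    def __init__(self, deep_ch, skip_ch, out_ch):
        super().__init__()
        self.reduce = conv_bn_leaky(deep_ch, deep_ch // 2, 1)
        self.upsample = nn.Upsample(scale_factor=2, mode="nearest")
        self.refine = conv_bn_leaky(deep_ch // 2 + skip_ch, out_ch, 3)

    def forward(self, deep_feat, skip_feat):
        x = self.upsample(self.reduce(deep_feat))
        x = torch.cat([x, skip_feat], dim=1)   # channel-wise concatenation
        return self.refine(x)

# Example: a 13x13 deep feature map fused with a 26x26 backbone feature map
branch = DetectionBranch(deep_ch=1024, skip_ch=512, out_ch=256)
out = branch(torch.randn(1, 1024, 13, 13), torch.randn(1, 512, 26, 26))
print(out.shape)  # torch.Size([1, 256, 26, 26])
```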
The YOLOv3 network uses the newer Darknet-53 network to extract features; it combines ideas from Darknet-19 (used in YOLOv2) with modern residual networks, and consists of successive 3 × 3 and 1 × 1 convolutional layers, each followed by a BN layer and a LeakyReLU layer. In total it has 53 convolutional layers, hence the name Darknet-53. Experiments on ImageNet show that Darknet-53 indeed works well: compared with ResNet-152 and ResNet-101, it achieves comparable classification accuracy with much faster computation and fewer layers. Darknet-53 adopts the skip-connection scheme of ResNet and uses a large number of residual skip structures, which brings three advantages. First, a key concern for current deep learning models is whether the network will eventually converge, and this structure allows the network to converge even at great depth. Second, the deeper the network, the richer the object features it can express, and the better the detection and classification results. Third, the 1 × 1 convolutions in the residual structure greatly reduce the number of channels in each convolution, which not only reduces the parameters and the size of the stored model but also reduces the amount of computation.
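A minimal sketch of such a Darknet-53-style residual block is given below, assuming the usual pairing of a channel-halving 1 × 1 convolution with a channel-restoring 3 × 3 convolution; the channel count in the example is illustrative.

```python
import torch
import torch.nn as nn

class DarknetResidual(nn.Module):
    """Darknet-53-style residual block: a 1x1 convolution that halves the channel
    count, a 3x3 convolution that restores it, and a skip (shortcut) connection."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.block = nn.Sequential(
            nn.Conv2d(channels, half, 1, bias=False),
            nn.BatchNorm2d(half),
            nn.LeakyReLU(0.1),
            nn.Conv2d(half, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.1),
        )

    def forward(self, x):
        return x + self.block(x)   # the residual skip connection

res = DarknetResidual(256)
print(res(torch.randn(1, 256, 52, 52)).shape)  # torch.Size([1, 256, 52, 52])
```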
This embodiment provides a method for identifying a device number, implemented on a Python platform. Referring to fig. 1, the method for identifying a device number includes the following steps:
step 1: training a numbering area positioning network model;
using a mobile phone to shoot a plurality of device pictures containing numbering areas, wherein each picture contains clear numbering information of the steel device, and one picture contains one number;
The numbering area in each picture is labeled with a bounding box using the labelImg labeling tool, which locates the position of the number in each picture (see fig. 3). The bounding boxes labeling the numbering areas in all pictures share the same category, and the category name can be customized. After successful annotation, an XML file is generated, and the XML file is then converted into a TXT file suitable for YOLOv3 training.
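A minimal sketch of such an XML-to-TXT conversion is shown below, assuming labelImg's default Pascal VOC XML output; the file names and class list used in the example are illustrative assumptions.

```python
import xml.etree.ElementTree as ET

def voc_xml_to_yolo_txt(xml_path, txt_path, class_names):
    """Convert one labelImg (Pascal VOC) XML annotation into YOLO TXT format:
    one line per box, 'class_id x_center y_center width height', all normalized."""
    root = ET.parse(xml_path).getroot()
    img_w = float(root.find("size/width").text)
    img_h = float(root.find("size/height").text)
    lines = []
    for obj in root.findall("object"):
        cls_id = class_names.index(obj.find("name").text)
        box = obj.find("bndbox")
        xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
        xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
        x_c = (xmin + xmax) / 2.0 / img_w
        y_c = (ymin + ymax) / 2.0 / img_h
        w = (xmax - xmin) / img_w
        h = (ymax - ymin) / img_h
        lines.append(f"{cls_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}")
    with open(txt_path, "w") as f:
        f.write("\n".join(lines))

# Example: the numbering-region localization labels use a single custom class name
voc_xml_to_yolo_txt("device_001.xml", "device_001.txt", ["numbering_region"])
```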
A numbering area category and number categories are added to the YOLO v3 network. The numbering area category refers to the name of the numbering area to be located, and this name can be customized; the number categories refer to each digit actually contained in the numbering area. The YOLOv3 network is shown in fig. 2;
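Assuming a standard Darknet-style training setup (which the patent does not describe in detail), the added categories could be declared as in the following hypothetical sketch; the file names, paths and helper function are illustrative assumptions.

```python
# Hypothetical class list matching the description above: one numbering-region
# category (its name is customizable) plus the ten digit categories "0" to "9".
CLASS_NAMES = ["numbering_region"] + [str(d) for d in range(10)]

def write_darknet_files(names, prefix):
    # Write the .names file (one class per line) and a minimal .data file,
    # following the usual Darknet conventions.
    with open(f"{prefix}.names", "w") as f:
        f.write("\n".join(names) + "\n")
    with open(f"{prefix}.data", "w") as f:
        f.write(f"classes = {len(names)}\n")
        f.write(f"train  = {prefix}_train.txt\n")
        f.write(f"valid  = {prefix}_valid.txt\n")
        f.write(f"names  = {prefix}.names\n")
        f.write("backup = backup/\n")

write_darknet_files(CLASS_NAMES, "device_number")
# In yolov3.cfg, 'classes' in each [yolo] layer and 'filters' in the preceding
# convolutional layer ((classes + 5) * 3) would also have to match this count.
```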
extracting the numbering area marked by the bounding box, and acquiring the numbering area picture;
Taking the numbering area as the feature, the YOLOv3 network with the numbering area category and number categories added is trained on the pictures labeled with numbering areas; this YOLOv3 network, which locates the numbering area in a picture, forms the numbering area positioning network model of the picture.
In the picture numbering region positioning network model, the numbering region in the picture is positioned through a Darknet-53 network, and the positioning characteristics of the whole numbering region are extracted.
Step 2: training the number to identify the network model;
extracting a numbering region part in the picture subjected to numbering region labeling and obtained according to the step 1, and obtaining a numbering region picture;
Each number in the numbering area picture is labeled using the labelImg labeling tool to obtain a digit-labeled numbering area picture. The category of each bounding box in the digit-labeled picture is the digit inside that box, that is, the label name of each bounding box is the printed digit it contains. For example, if the number is "321024", the positions of the six digits "3", "2", "1", "0", "2" and "4" are each labeled with a bounding box, and the boxes are named "3", "2", "1", "0", "2" and "4" respectively;
the labeling is performed with as little overlap between bounding boxes as possible, so that the network can better learn the characteristics of each digit and achieve the desired recognition effect; an effect diagram of the digit labeling is shown in fig. 4.
Taking the digit labels as features, the digit-labeled numbering area pictures are input into the YOLOv3 network with the numbering area category and number categories added for training; this YOLOv3 network, which recognizes each digit in the numbering area, forms the number recognition network model of the picture.
Step 3: positioning the numbering area of the picture sample to be tested;
when testing is carried out, a mobile phone is used for shooting a picture of a device containing a numbering region, the picture contains clear numbering information of the steel device, one picture contains one number, and the picture of the device to be tested containing the numbering region is obtained;
inputting a picture of the device to be tested into the trained numbering area positioning network model obtained in the step 1, and positioning the numbering area of the device;
and extracting the number area for positioning, and acquiring the number area picture of the device.
Step 4: identifying the number of the picture sample to be tested;
The numbering area picture acquired in step 3 is input into the trained number recognition network model obtained in step 2; each digit in the numbering area is recognized, and the device number is output.
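A minimal sketch of this two-stage inference pipeline is given below. The helper functions locate_numbering_region and detect_digits stand in for the two trained YOLOv3 models and are hypothetical; sorting detections from left to right to assemble the final number is a plausible detail that the patent does not spell out.

```python
import cv2  # OpenCV, assumed available for image loading and cropping

def recognize_device_number(image_path, locate_numbering_region, detect_digits):
    """Two-stage recognition: locate the numbering region, crop it, then read digits."""
    image = cv2.imread(image_path)

    # Step 3: locate the numbering region and crop it out of the device picture
    x, y, w, h = locate_numbering_region(image)      # one box in pixel coordinates
    region = image[y:y + h, x:x + w]

    # Step 4: detect each digit inside the cropped region; each detection is
    # (digit_string, x_center), so sorting by x reads the number left to right
    detections = detect_digits(region)
    digits = [d for d, _ in sorted(detections, key=lambda item: item[1])]
    return "".join(digits)
```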
According to the method for identifying a device number provided by this embodiment, the improved YOLOv3 algorithm is used to locate the numbering area of the device and to recognize the digits within the located numbering area. This solves the problems that existing OCR character recognition systems cannot perform region-wise recognition, recognize slowly, and need to rely on a network. The method provided by the embodiment of the invention has high accuracy and good robustness, and avoids the waste of manpower and material resources caused by traditional manual counting of device numbers.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (6)

1. A method for identifying a device number, the method comprising the steps of:
acquiring a picture of the device to be tested containing the number area;
inputting the picture of the device to be tested into the trained numbering area positioning network model, and positioning the numbering area of the device; extracting the located numbering region, and acquiring a numbering region picture of the device; the numbering region positioning network model comprises a YOLO v3 network to which a numbering region category and a number category have been added;
inputting the picture of the numbering region into the trained number recognition network model to obtain the device number; the number recognition network model comprises a YOLO v3 network to which a numbering region category and a number category have been added.
2. The device number identification method according to claim 1, wherein the method for training the numbering area positioning network model comprises the following steps:
acquiring a device picture containing a numbering region;
marking the number area of the picture containing the number area;
and inputting the picture subjected to numbering region labeling into a numbering region positioning network model, and training the numbering region positioning network model.
3. The method for identifying the device number according to claim 2, wherein the labelImg labeling tool is used for labeling the bounding box of the number area of the picture, so as to obtain the picture with the number area labeled.
4. The device number recognition method according to claim 2, wherein the method of training the number recognition network model comprises the steps of:
extracting a numbering region part in the picture subjected to numbering region labeling, and acquiring a numbering region picture;
marking each number in the picture of the numbering area;
and inputting the picture of the numbering area subjected to digital labeling into a numbering recognition network model, and training the numbering recognition network model.
5. The device number identification method according to claim 4, wherein bounding box labeling is performed on each number in the numbered region picture by a labelImg labeling tool, and the category of each bounding box is a name of the number in the bounding box.
6. The method of identifying a device number according to claim 5, wherein the bounding box labeling is performed on each number in the numbered region picture on the premise that the bounding boxes do not overlap.
CN201910454457.2A 2019-05-29 2019-05-29 A method of identifying the part number Withdrawn CN110956174A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910454457.2A CN110956174A (en) 2019-05-29 2019-05-29 A method of identifying the part number

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910454457.2A CN110956174A (en) 2019-05-29 2019-05-29 A method of identifying the part number

Publications (1)

Publication Number Publication Date
CN110956174A true CN110956174A (en) 2020-04-03

Family

ID=69975488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910454457.2A Withdrawn CN110956174A (en) 2019-05-29 2019-05-29 A method of identifying the part number

Country Status (1)

Country Link
CN (1) CN110956174A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326838A (en) * 2021-06-24 2021-08-31 浙江理工大学 Mobile phone light guide plate model number identification method based on deep learning network
CN118038433A (en) * 2024-01-31 2024-05-14 北汽利戴工业技术服务(北京)有限公司 Cylinder body model detecting system based on visual identification

Similar Documents

Publication Publication Date Title
CN108846835B (en) Image change detection method based on depthwise separable convolutional network
CN110349122A (en) A kind of pavement crack recognition methods based on depth convolution fused neural network
CN111160352A (en) Workpiece metal surface character recognition method and system based on image segmentation
CN110610166A (en) Text region detection model training method and device, electronic equipment and storage medium
CN111191611B (en) Traffic sign identification method based on deep learning
CN110766002A (en) A deep learning-based method for detecting the character region of ship names
CN112541922A (en) Test paper layout segmentation method based on digital image, electronic equipment and storage medium
CN107301414A (en) Chinese positioning, segmentation and recognition methods in a kind of natural scene image
CN108765349A (en) A kind of image repair method and system with watermark
CN114782770A (en) A method and system for license plate detection and license plate recognition based on deep learning
CN112528845A (en) Physical circuit diagram identification method based on deep learning and application thereof
CN110245583A (en) An intelligent identification method for vehicle exhaust inspection report
CN113674216A (en) Subway tunnel disease detection method based on deep learning
CN114419667A (en) Character detection method and system based on transfer learning
CN111008576A (en) Pedestrian detection and model training and updating method, device and readable storage medium thereof
CN116740528A (en) A method and system for target detection in side scan sonar images based on shadow features
CN111666954A (en) Method and system for extracting joint learning of salient region
CN112164040A (en) Steel surface defect identification method based on semi-supervised deep learning algorithm
CN111739029A (en) Detection method of tooth loss of electric bucket based on deep learning convolutional neural network
CN112825141A (en) Method and device for recognizing text, recognition equipment and storage medium
CN111950556A (en) A detection method of number plate printing quality based on deep learning
CN111814576A (en) A deep learning-based image recognition method for shopping receipts
CN109325487B (en) Full-category license plate recognition method based on target detection
CN111414855B (en) Telegraph pole sign target detection and identification method based on end-to-end regression model
CN116758545A (en) Paper medicine packaging steel seal character recognition method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200403