CN110245663A - Method for identifying steel coil information - Google Patents

Method for identifying steel coil information (Download PDF)

Info

Publication number
CN110245663A
CN110245663A (application CN201910559952.XA; granted publication CN110245663B)
Authority
CN
China
Prior art keywords: coil, strip, indicate, image, layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910559952.XA
Other languages
Chinese (zh)
Other versions
CN110245663B (en)
Inventor
陈旭
杜鹏飞
张建安
王磊
邢晨
阎阅
於枫
邬知衡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Electrical Apparatus Research Institute Group Co Ltd
Original Assignee
Shanghai Electrical Apparatus Research Institute Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Shanghai Electrical Apparatus Research Institute Group Co Ltd
Priority to CN201910559952.XA
Publication of CN110245663A
Application granted
Publication of CN110245663B
Legal status: Active

Classifications

    • G06F 16/5846 Retrieval of still image data characterised by using metadata automatically derived from the content, using extracted text
    • G06Q 10/0875 Itemisation or classification of parts, supplies or services, e.g. bill of materials
    • G06T 7/0006 Industrial image inspection using a design-rule based approach
    • G06T 7/13 Edge detection
    • G06T 7/64 Analysis of geometric attributes of convexity or concavity
    • G06V 10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G06V 20/63 Scene text, e.g. street names
    • G06T 2207/30136 Metal (indexing scheme for industrial image inspection)
    • G06V 2201/09 Recognition of logos
    • G06V 30/10 Character recognition


Abstract

The present invention relates to a method for identifying steel coil information, characterised by comprising the following steps: the position coordinates of the coil number in an acquired steel coil image are located with a minimum enclosing rectangle, and a digit classifier based on a convolutional neural network then converts the image features at those coordinates into character values, which constitute the number of the current coil. The temperature of the coil is measured by a temperature detection module, and at the same time a camera captures a side-view image of the coil; from this image an upper computer judges whether the roundness of the coil is up to standard and transmits the result to the back end. The present invention achieves fast identification of steel coils, shortens the scanning period and improves logistics efficiency; the identification method provided by the invention not only detects coil information accurately but also detects quickly, and thus has real-time capability.

Description

Method for identifying steel coil information
Technical field
The present invention relates to the field of image processing, and in particular to a method for identifying steel coil information.
Background art
Automatic identification and positioning are effective paths toward intelligent, unmanned finished-goods warehouses: with intelligent detection and positioning, crane retrofitting and automatic control, manual assistance can be greatly reduced and operations carrying high risk of human intervention avoided. The finished-goods warehouse is an important logistics and storage department of an iron and steel company, and the loading and unloading of steel coils is the link that most affects logistics efficiency and safety. At present the vast majority of steel warehouses rely mainly on manual operation and monitoring during coil transport. Workers face safety risks under this mode, and manual operation depends chiefly on the crane driver's naked-eye observation, which introduces a degree of randomness; the resulting unnecessary crane starts and stops lower working efficiency. A small number of more automated steel warehouses use lasers as sensors to assist automatic identification and grasping of coils, but the laser scanning period is long, so production efficiency remains low. There is therefore an urgent need for a method that can identify steel coil information accurately and shorten the scanning period, so as to better complete the identification and positioning of steel coils.
Utility model patent 201621247572.0 discloses an automatic steel coil information identification system comprising a QR code generating device, an image acquisition device and an information processing device. The QR code generating device, located in the temper mill operating area, encodes the coil information as a paper QR code and pastes it onto the coil; the image acquisition device, located in the recoiling unit operating area, captures the QR code pasted on the coil; the information processing device receives the QR code data collected by the image acquisition device and recovers the coil information. The image acquisition device uses at least two cameras arranged one above the other. Its advantage is that operators no longer need to check coils and enter data by hand: the coil information is transmitted through the system's automatic identification rather than manually, preventing transcription errors, and fewer operators need to enter the operating area, greatly reducing safety risks.
Invention patent 201811297283.5 relates to a digital image recognition system and method for hot-metal ladle numbers, in the field of metallurgical automation. The system comprises an image acquisition module, a motion detection module, an object detection module, a module for static and dynamic character representation and description of colour ladle images, a digit classification module based on convolutional neural networks, and an upper-computer edge computing module. The system and method address the facts that the ladle works in a high-temperature environment during molten-iron transport, where RFID and similar identification devices cannot be used, and that conventional digital image processing cannot adapt well to complex environments. By analysing and extracting features from the digit images printed on the ladle surface, a feature-based recognition model is built that identifies ladle numbers online and in real time, so that ladle logistics information can be tracked automatically, improving production efficiency and saving labour cost.
Invention patent 201811112046.7 discloses a steel coil recognition and positioning method based on stereo vision, mainly comprising the following steps: (1) build a binocular stereo vision model and acquire image pairs; (2) calibrate the cameras with Zhang Zhengyou's calibration method; (3) obtain a disparity map with a stereo matching algorithm; (4) segment the target coil with a depth histogram and compute the world coordinates of the coil on the X and Y axes; (5) compute the coil's three-dimensional point cloud from the re-projection matrix, then smooth and denoise it; (6) fit the denoised point cloud to obtain the coil's world coordinate on the Z axis. The method can identify and position coils accurately, shorten the laser scanning period and improve the logistics efficiency of the finished-goods warehouse.
Summary of the invention
The object of the present invention is to achieve fast identification of steel coils in a warehouse.
To achieve the above object, the technical solution of the present invention provides a method for identifying steel coil information, characterised by comprising the following steps:
Step 1: a positioning sensor detects that a truck carrying steel coils has parked at the designated position and feeds the coordinate information back to the upper computer; the upper computer moves the camera according to the received coordinates so that the camera's principal optical axis is perpendicular to the current truck and the camera's height matches the centre of the coils carried on the truck;
Step 2: acquire an image of the coil with the camera;
Step 3: judge whether the coil in the acquired image is complete; if complete, keep the current image and go to step 4; if incomplete, discard the current image and return to step 3;
Step 4: the upper computer locates the position coordinates of the coil number in the image from the previous step using a minimum enclosing rectangle, then converts the image features at those coordinates into character values with a digit classifier based on a convolutional neural network; these character values are the number of the current coil;
Step 5: compare the number of the current coil with the numbers stored in the MES database; on a successful match, take the current coil number as the final recognition result and transmit it to the back end, which manages the inventory accordingly, then go to step 6; if matching fails, return to step 2;
Step 6: the crane lifts the coil from the truck into the conveying track, which transports it to the target position for storage. During transport, the start-stop intervals of the track are exploited: while the track is temporarily stopped, a temperature detection module measures the coil temperature and transmits the value to the back end; at the same time the camera captures a side-view image of the coil, from which the upper computer judges whether the coil's roundness is up to standard and transmits the result to the back end, comprising the following steps:
Step 601: obtain the outer-circle edge points and inner-circle edge points of the coil in the side-view image;
Step 602: fit the outer edge points to obtain an ideal outer circle and the inner edge points to obtain an ideal inner circle;
Step 603: obtain the centres of the ideal outer circle and the ideal inner circle; if the two centres coincide, define the coincident point as the computing centre and go to step 604; if they do not coincide, return to step 601;
Step 604: compute the distance from each outer edge point to the computing centre and take the maximum value Rmax; compute the distance from each inner edge point to the computing centre and take the minimum value Rmin; compute the value δ = Rmax - Rmin. If δ is less than a preset threshold, the roundness of the current coil is considered up to standard; otherwise it is considered below standard.
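The roundness check of steps 601-604 can be sketched as follows, assuming the edge points have already been extracted (e.g. by an edge detector). The least-squares (Kasa) circle fit, the centre-coincidence tolerance of one pixel and the threshold value are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit; returns (cx, cy, r)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    rhs = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

def roundness_ok(outer_pts, inner_pts, threshold, centre_tol=1.0):
    """Steps 601-604: fit both circles, require coincident centres, check delta."""
    ocx, ocy, _ = fit_circle(outer_pts)
    icx, icy, _ = fit_circle(inner_pts)
    if np.hypot(ocx - icx, ocy - icy) > centre_tol:
        return None                      # centres differ: re-acquire edge points
    cx, cy = (ocx + icx) / 2, (ocy + icy) / 2   # the "computing centre"
    r_max = np.hypot(outer_pts[:, 0] - cx, outer_pts[:, 1] - cy).max()
    r_min = np.hypot(inner_pts[:, 0] - cx, inner_pts[:, 1] - cy).min()
    return bool(r_max - r_min < threshold)      # delta = Rmax - Rmin

# Synthetic side view: concentric edges of radius 100 (outer) and 40 (inner)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
outer = np.column_stack([100 * np.cos(t), 100 * np.sin(t)])
inner = np.column_stack([40 * np.cos(t), 40 * np.sin(t)])
print(roundness_ok(outer, inner, threshold=65.0))   # delta = 60 -> True
```

In practice the edge points would come from the side-view image itself; the synthetic points above merely exercise the geometry of step 604.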
Preferably, the camera is calibrated to remove the distortion the lens introduces into the acquired image, ensuring that the coil is not affected by distortion.
Preferably, in step 4 the character region is extracted from the position coordinates of the coil number. To identify the coil number more accurately, the extracted image features comprise a brightness feature, a chrominance feature and a texture feature, in which:
The brightness feature is obtained as follows:

Let I(i, j) be a pixel of the segmented character region; then the brightness B = (\sum I(i, j)) / Count, where Count is the number of pixels in the character region;
The chrominance feature is obtained as follows:

The third-order colour moment of each component in RGB space effectively reflects the character information. The third moment under the i-th colour channel component of the character region is

s_i = \left( \frac{1}{N} \sum_{j=1}^{N} (p_{i,j} - \mu_i)^3 \right)^{1/3}

where p_{i,j} is the probability of occurrence of pixels with grey level j in the i-th colour channel component, \mu_i is the first-order moment (mean) of that component, and N is the number of grey levels;
The texture feature is obtained as follows:

Static texture complexity is described with grey-level difference statistics. Let (x, y) and (x+\Delta x, y+\Delta y) be two pixels in the character region; the grey-level difference between them is \Delta g(x, y) = g(x, y) - g(x+\Delta x, y+\Delta y), where g(x, y) and g(x+\Delta x, y+\Delta y) are the grey values of the two pixels. Suppose the difference can take m levels. Move the pixel pair over the character region and accumulate the number of times \Delta g takes each value to build the histogram of \Delta g; from the histogram, the probability that \Delta g takes value i is p(i). The angular second moment ASM = \sum_i p(i)^2 is extracted to reflect the uniformity of the grey-level distribution; when the grey-value differences between neighbouring pixels are larger, the ASM value is larger, indicating a coarser texture.
Preferably, in step 4 the digits are recognised with a network architecture improved from LeNet-5, which converts the image features at the position coordinates into character values. The architecture comprises: an input layer for the digit image; convolutional layer C1, whose convolution windows perform convolution on the digit image to extract its internal features; sampling layer S2, which down-samples the feature maps of C1 by max pooling to obtain new feature maps; convolutional layer C3, connected to S2 with convolution filters of a set size; sampling layer S4, obtained by down-sampling the feature maps of C3; and convolutional layer C5, obtained by convolving the feature maps of S4 and fully connected to S4, i.e. every convolution filter of C5 convolves all feature maps of S4. Through these stages the image is reduced to single-pixel feature maps for classification. C5 is connected to the final output layer in a fully connected manner; the output is a one-dimensional vector, and the position of its largest component is the final classification result of the network model.
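The reduction to single-pixel feature maps can be checked with a small shape trace. The 32x32 input size, 5x5 kernels and 2x2 pooling are assumptions borrowed from classical LeNet-5 (the text does not state them); under those assumptions C5 does come out as single-pixel maps:

```python
def trace_shapes(size=32, layers=(("conv", 5), ("pool", 2), ("conv", 5),
                                  ("pool", 2), ("conv", 5))):
    """Spatial size after each valid convolution (size-k+1) and pooling (size//k)."""
    sizes = [size]
    for kind, k in layers:
        size = size - k + 1 if kind == "conv" else size // k
        sizes.append(size)
    return sizes

print(trace_shapes())  # [32, 28, 14, 10, 5, 1]: the C5 maps are single pixels
```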
Preferably, during training of the improved LeNet-5 architecture, sparse cross-entropy is used as the loss function and an adaptive gradient descent algorithm trains the network by back-propagation. For a single sample, the cost function is as follows:
J(W,b;x,y) = \tfrac{1}{2} \| h_{W,b}(x) - y \|^2    (1)

Formula (1) is a squared-error cost function; in formula (1), W denotes the weight parameters, b the bias terms, (x, y) a single sample, and h_{W,b}(x) the complex non-linear hypothesis model;
For a data set containing m samples, the overall cost function is defined as:
J(W,b) = \frac{1}{m} \sum_{i=1}^{m} J(W,b;x^{(i)},y^{(i)}) + \frac{\lambda}{2} \sum_{l=1}^{n_l-1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} (W_{ji}^{(l)})^2    (2)

In formula (2), the first term is the mean squared error of the cost function J(W, b) and the second is a weight attenuation (weight decay) term that reduces the weight magnitudes to prevent over-fitting; (x^{(i)}, y^{(i)}) denotes the sample set, n_l the total number of network layers, l a network layer, λ the weight-decay coefficient, W_{ji}^{(l)} the connection weight between the j-th node of layer l and the i-th node of layer l+1, and s_l the number of nodes in layer l;
In each iteration of network training, the cost function is minimised by gradient descent and the parameters W and b are fine-tuned according to:
W_{ij}^{(l)} := W_{ij}^{(l)} - \alpha \frac{\partial J(W,b)}{\partial W_{ij}^{(l)}}    (3)

b_{i}^{(l)} := b_{i}^{(l)} - \alpha \frac{\partial J(W,b)}{\partial b_{i}^{(l)}}    (4)

In formulas (3) and (4), α is the learning rate and b_i^{(l)} is the bias term associated with the i-th node of layer l+1;
For each node i of the output layer n_l, the residual is solved with the following formula:
\delta_i^{(n_l)} = -(y_i - a_i^{(n_l)}) \, f'(z_i^{(n_l)})    (5)

In formula (5), \delta_i^{(n_l)} denotes the residual of the i-th node of output layer n_l, a_i^{(n_l)} its activation value, y_i the i-th target output, z_i^{(n_l)} its weighted input, and f' the derivative of the activation function;
The residual of node i in layer l is computed as:
\delta_i^{(l)} = \Big( \sum_{j=1}^{s_{l+1}} W_{ji}^{(l)} \delta_j^{(l+1)} \Big) f'(z_i^{(l)})    (6)

In formula (6), \delta_i^{(l)} denotes the residual of node i in layer l, s_{l+1} the number of nodes in layer l+1, \delta_j^{(l+1)} the residual of node j in layer l+1, and f'(z_i^{(l)}) the derivative of the activation function at the weighted input of node i in layer l;
Each layer residual error formula indicates are as follows:
Partial derivative calculation formula indicates are as follows:
Formula (7), (8), in (9),Indicate the output valve of l layers of jth node;
The partial derivatives of the overall-sample cost function can then be solved:
\frac{\partial J(W,b)}{\partial W_{ij}^{(l)}} = \frac{1}{m} \sum_{k=1}^{m} \frac{\partial J(W,b;x^{(k)},y^{(k)})}{\partial W_{ij}^{(l)}} + \lambda W_{ij}^{(l)}    (10)

\frac{\partial J(W,b)}{\partial b_i^{(l)}} = \frac{1}{m} \sum_{k=1}^{m} \frac{\partial J(W,b;x^{(k)},y^{(k)})}{\partial b_i^{(l)}}    (11)

In formulas (10) and (11), λ denotes the weight-decay coefficient;
All parameters W_{ij}^{(l)} and b_i^{(l)} are initialised to values close to zero, where W_{ij}^{(l)} denotes the connection weight between the j-th node of layer l and the i-th node of layer l+1 and b_i^{(l)} the bias term of the i-th node of layer l+1. The parameters W and b that minimise the cost function are thereby obtained, and training with these parameters reduces the error caused by the weights.
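A minimal sketch of the update rules (3)-(4) with the weight-decay gradients (10)-(11), applied to a 1-D linear model rather than the full network; the learning rate, decay coefficient, step count and near-zero initial values are illustrative:

```python
import numpy as np

def train(x, y, alpha=0.1, lam=0.01, steps=200):
    """Gradient descent on J = mean((W*x + b - y)^2) / 2 + (lam / 2) * W^2."""
    W, b = 0.01, 0.01                       # initialise close to zero, as in the text
    for _ in range(steps):
        err = W * x + b - y
        dW = (err * x).mean() + lam * W     # data term plus weight decay, cf. (10)
        db = err.mean()                     # cf. (11)
        W -= alpha * dW                     # update rule (3)
        b -= alpha * db                     # update rule (4)
    return W, b

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                            # true parameters: W = 2, b = 1
W, b = train(x, y)
print(abs(W - 2.0) < 0.1, abs(b - 1.0) < 0.1)  # True True (slightly shrunk by decay)
```

The decay term pulls W slightly below the noise-free optimum of 2, which is exactly the weight-magnitude reduction the text attributes to the second term of formula (2).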
The present invention achieves fast identification of steel coils, shortens the scanning period and improves logistics efficiency. The steel coil information identification method provided by the invention not only detects coil information accurately but also detects quickly, and thus has real-time capability.
Brief description of the drawings
Fig. 1 is the flow chart of steel coil information identification according to the present invention;
Fig. 2 is the flow chart of establishing the identification model in the present invention;
Fig. 3 is the flow chart of coil number identification according to the present invention;
Fig. 4 is the roundness detection flow chart of the present invention;
Fig. 5 is the overall architecture diagram of the convolutional neural network used by the present invention;
Fig. 6 is the model architecture diagram of digit recognition based on the improved LeNet-5 architecture according to the present invention.
Specific embodiment
The present invention will be further described below with reference to specific embodiments. It should be understood that these embodiments are only intended to illustrate the invention, not to limit its scope. In addition, it should be understood that after reading the teachings of the present invention, those skilled in the art may make various changes or modifications to the invention, and such equivalent forms likewise fall within the scope defined by the claims appended to this application.
As shown in Figures 1 and 2, the present invention provides a method for identifying steel coil information, comprising the following steps:
Step 1: a positioning sensor detects that a truck carrying steel coils has parked at the designated position and feeds the coordinate information back to the upper computer; the upper computer moves the calibrated camera according to the received coordinates so that its principal optical axis is perpendicular to the current truck and its height matches the centre of the coils carried on the truck. The purpose of calibrating the camera is to remove the distortion the lens introduces into the acquired image, ensuring that the results of the whole computation are not affected by distortion;
Step 2: acquire an image of the coil with the camera;
Step 3: judge whether the coil in the acquired image is complete; if complete, keep the current image and go to step 4; if incomplete, discard the current image and return to step 3;
Step 4: the upper computer locates the position coordinates of the coil number in the image from the previous step using a minimum enclosing rectangle, extracts the character region from those coordinates, then converts the image features of the character region into character values with a digit classifier based on a convolutional neural network; these character values are the number of the current coil.
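A rough sketch of a minimum enclosing (minimum-area rotated) rectangle, found here by brute-force angle scanning rather than an exact rotating-calipers algorithm; in practice an implementation such as OpenCV's minAreaRect would be applied to the detected contour. The candidate point set is made up for illustration:

```python
import numpy as np

def min_area_rect(points, n_angles=180):
    """Approximate minimum-area enclosing rectangle by scanning rotation angles."""
    best = None
    for theta in np.linspace(0.0, np.pi / 2, n_angles, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        rot = points @ np.array([[c, -s], [s, c]])       # rotate the point set
        w = rot[:, 0].max() - rot[:, 0].min()
        h = rot[:, 1].max() - rot[:, 1].min()
        if best is None or w * h < best[0]:
            best = (w * h, float(theta), w, h)
    return best  # (area, angle, width, height)

# Corner points of an axis-aligned 40 x 10 region (e.g. a coil-number label)
pts = np.array([[0, 0], [40, 0], [40, 10], [0, 10]], dtype=float)
area, angle, w, h = min_area_rect(pts)
print(round(area, 1), angle)  # 400.0 0.0: the box itself is the minimum rectangle
```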
The present invention builds an identification model with a deep-learning algorithm based on convolutional neural networks to recognise the coil number in the image. As shown in Fig. 5, the overall architecture of the model alternates convolutional layers with max-pooling sampling layers. The higher, fully connected layers correspond to the hidden layers and logistic-regression classifier of a conventional multilayer perceptron. The input of the first fully connected layer is the feature maps produced by feature extraction in the convolutional and sub-sampling layers, and the final output layer is a classifier, for which logistic regression can be used.
Specifically, as shown in Fig. 6, the identification model recognises the digits with a network architecture improved from LeNet-5, comprising: an input layer for the digit image; convolutional layer C1, whose convolution windows perform convolution on the digit image to extract its internal features (in this embodiment C1 produces 6 convolution feature maps); sampling layer S2, which down-samples the 6 feature maps of C1 by max pooling to obtain new feature maps; convolutional layer C3, connected to S2 with convolution filters of a set size; sampling layer S4, obtained by down-sampling the feature maps of C3; and convolutional layer C5, obtained by convolving the feature maps of S4 and fully connected to S4, i.e. every convolution filter of C5 convolves all feature maps of S4 (in this embodiment C5 produces 120 single-pixel feature maps). Through these stages the image is reduced to single-pixel feature maps for classification. C5 is connected to the final output layer in a fully connected manner; the ten nodes of the output layer represent the ten possible classes of handwritten digit images.
The output is a one-dimensional vector of length 10, and the position of its largest component is the final classification result of the network model. The same coding is also used for the labels of the training sample set. During training, sparse cross-entropy is used as the loss function and an adaptive gradient descent algorithm trains the network by back-propagation.
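The length-10 output vector, the largest-component classification rule and the sparse cross-entropy loss can be sketched as follows; the logit values below are made up for illustration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())                  # shift for numerical stability
    return e / e.sum()

def sparse_cross_entropy(logits, label):
    """Cross-entropy against an integer class label (no one-hot vector needed)."""
    return float(-np.log(softmax(logits)[label]))

# Hypothetical network output for one digit image; component 2 dominates
logits = np.array([0.1, 0.2, 5.0, 0.0, -1.0, 0.3, 0.1, 0.0, 0.2, 0.4])
print(int(np.argmax(logits)))                 # 2: the digit read from the coil number
print(sparse_cross_entropy(logits, 2) < 0.1)  # confident correct prediction: True
```

"Sparse" here means the label is an integer class index rather than a one-hot vector, which is the usual sense of the term in deep-learning libraries.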
For a single sample, its cost function is as follows:
J(W,b;x,y) = \tfrac{1}{2} \| h_{W,b}(x) - y \|^2    (1)

Formula (1) is a squared-error cost function; in formula (1), W denotes the weight parameters, b the bias terms, (x, y) a single sample, and h_{W,b}(x) the complex non-linear hypothesis model. For a data set containing m samples, the overall cost function can be defined as:
J(W,b) = \frac{1}{m} \sum_{i=1}^{m} J(W,b;x^{(i)},y^{(i)}) + \frac{\lambda}{2} \sum_{l=1}^{n_l-1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} (W_{ji}^{(l)})^2    (2)

In formula (2), the first term is the mean squared error of the cost function J(W, b) and the second is a weight attenuation (weight decay) term that reduces the weight magnitudes to prevent over-fitting; (x^{(i)}, y^{(i)}) denotes the sample set, n_l the total number of network layers, l a network layer, λ the weight-decay coefficient, W_{ji}^{(l)} the connection weight between the j-th node of layer l and the i-th node of layer l+1, and s_l the number of nodes in layer l;
In each iteration of network training, the cost function is minimised by gradient descent and the parameters W and b are fine-tuned according to:
W_{ij}^{(l)} := W_{ij}^{(l)} - \alpha \frac{\partial J(W,b)}{\partial W_{ij}^{(l)}}    (3)

b_{i}^{(l)} := b_{i}^{(l)} - \alpha \frac{\partial J(W,b)}{\partial b_{i}^{(l)}}    (4)

In formulas (3) and (4), α is the learning rate and b_i^{(l)} is the bias term associated with the i-th node of layer l+1. The main computational step of the formulas above is to solve the partial derivatives of the cost function.
For each node i of the output layer n_l, the residual is solved with the following formula:
\delta_i^{(n_l)} = -(y_i - a_i^{(n_l)}) \, f'(z_i^{(n_l)})    (5)

In formula (5), \delta_i^{(n_l)} denotes the residual of the i-th node of output layer n_l, a_i^{(n_l)} its activation value, y_i the i-th target output, z_i^{(n_l)} its weighted input, and f' the derivative of the activation function. The derivation of formula (5) is as follows:
\delta_i^{(n_l)} = \frac{\partial}{\partial z_i^{(n_l)}} \, \frac{1}{2} \sum_{j=1}^{s_{n_l}} (y_j - a_j^{(n_l)})^2 = -(y_i - a_i^{(n_l)}) \, f'(z_i^{(n_l)})

In the formula above, s_{n_l} denotes the number of nodes in output layer n_l, y_j the j-th target output, and a_j^{(n_l)} = f(z_j^{(n_l)}) the activation value of the j-th node of the output layer.
The residual of node i in layer l is computed as:
\delta_i^{(l)} = \Big( \sum_{j=1}^{s_{l+1}} W_{ji}^{(l)} \delta_j^{(l+1)} \Big) f'(z_i^{(l)})    (6)

In formula (6), \delta_i^{(l)} denotes the residual of node i in layer l, s_{l+1} the number of nodes in layer l+1, W_{ji}^{(l)} the connection weight between node i of layer l and node j of layer l+1, \delta_j^{(l+1)} the residual of node j in layer l+1, and f'(z_i^{(l)}) the derivative of the activation function at the weighted input of node i in layer l. The residual is propagated backwards layer by layer from the output layer: the residual of each node in layer l is the weighted sum of the residuals of the nodes of layer l+1 it connects to, multiplied by the derivative of its activation function.
The residual of each layer may therefore be expressed as:

δ^(l) = ((W^(l))^T δ^(l+1)) ∘ f′(z^(l))   (8)

where ∘ denotes element-wise multiplication. The partial derivatives for a single sample may be expressed as:

∂J(W, b; x, y) / ∂W_{ij}^(l) = a_j^(l) · δ_i^(l+1)   (9)
∂J(W, b; x, y) / ∂b_i^(l) = δ_i^(l+1)   (10)

The partial derivatives of the whole-sample cost function can be solved accordingly:

∂J(W, b) / ∂W_{ij}^(l) = (1/m) Σ_{k=1}^{m} ∂J(W, b; x^(k), y^(k)) / ∂W_{ij}^(l) + λ · W_{ij}^(l)   (11)
∂J(W, b) / ∂b_i^(l) = (1/m) Σ_{k=1}^{m} ∂J(W, b; x^(k), y^(k)) / ∂b_i^(l)   (12)
In formulas (11) and (12), λ denotes the weight decay coefficient.
All parameters W_{ji}^(l) and b_i^(l) are initialized to values close to zero, where W_{ji}^(l) denotes the connection weight between the j-th node of layer l and the i-th node of layer l+1 and b_i^(l) denotes the intercept term of the i-th node of layer l+1. In this way the parameters W and b that minimize the cost function are obtained, and training with these parameters reduces the error caused by the weights.
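The training procedure described above (near-zero initialization, residual back-propagation, and gradient descent with weight decay) can be sketched as follows. This is a minimal illustration on a tiny two-layer network with a sigmoid activation; the layer sizes, learning rate, weight decay value, and toy data are illustrative assumptions, not values given in the patent.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, Y, hidden=8, alpha=1.0, lam=1e-4, epochs=5000, seed=0):
    """Gradient-descent training of a two-layer network with the
    weight-decay cost J(W, b) = MSE term + (lam/2) * sum of squared weights."""
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    # Initialize all weight parameters close to zero, as described above.
    W1 = rng.normal(0.0, 0.01, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.01, (hidden, Y.shape[1])); b2 = np.zeros(Y.shape[1])

    def cost():
        a = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
        return 0.5 * float(np.mean(np.sum((a - Y) ** 2, axis=1)))

    initial = cost()
    for _ in range(epochs):
        # Forward pass: z = a W + b, a = f(z).
        a1 = sigmoid(X @ W1 + b1)
        a2 = sigmoid(a1 @ W2 + b2)
        # Output-layer residual: -(y - a) * f'(z).
        d2 = -(Y - a2) * a2 * (1.0 - a2)
        # Hidden-layer residual: (W d) * f'(z).
        d1 = (d2 @ W2.T) * a1 * (1.0 - a1)
        # Whole-sample partial derivatives, with the weight decay term on W.
        gW2 = a1.T @ d2 / m + lam * W2
        gW1 = X.T @ d1 / m + lam * W1
        # Gradient-descent fine-tuning of W and b.
        W2 -= alpha * gW2; b2 -= alpha * d2.mean(axis=0)
        W1 -= alpha * gW1; b1 -= alpha * d1.mean(axis=0)
    return initial, cost()

# Toy data (logical OR), used only to show the cost decreasing.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
Y = np.array([[0.0], [1.0], [1.0], [1.0]])
initial_cost, final_cost = train(X, Y)
```

The sketch only demonstrates the update rule and residual propagation; the patent applies the same scheme within a convolutional network rather than a fully connected one.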
Through comparison of extensive experimental results, the present invention designs a network with 3 convolutional layers; each convolutional layer is followed by a ReLU activation function for nonlinear conversion, which increases the feature representation ability of the network. Before entering the convolutional layers, the feature data is first subjected to BatchNorm normalization, which accelerates the convergence of the network. MaxPool pooling layers are used after the third and fifth layers of the network. The last two layers are fully connected layers, and Dropout is applied after the first fully connected layer to avoid over-fitting during training. During training, sparse cross entropy is used as the loss function, and an adaptive gradient descent algorithm is used to train the network by back-propagation.
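The building blocks named above can be illustrated as follows: a minimal sketch of ReLU activation, a BatchNorm-style per-channel normalization, and 2×2 MaxPool down-sampling on an (N, C, H, W) feature map. The array shapes are illustrative assumptions; the actual filter sizes and channel counts of the 3-convolutional-layer network are not reproduced here.

```python
import numpy as np

def relu(x):
    # ReLU activation, applied after each convolutional layer.
    return np.maximum(x, 0.0)

def batch_norm(x, eps=1e-5):
    # BatchNorm-style normalization: zero mean / unit variance per channel,
    # computed over the batch and spatial dimensions.
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def max_pool_2x2(x):
    # 2x2 MaxPool with stride 2 on an (N, C, H, W) feature map.
    n, c, h, w = x.shape
    return x[:, :, :h - h % 2, :w - w % 2].reshape(
        n, c, h // 2, 2, w // 2, 2).max(axis=(3, 5))

x = np.random.default_rng(0).normal(size=(4, 3, 8, 8))  # batch of feature maps
y = max_pool_2x2(relu(batch_norm(x)))  # shape (4, 3, 4, 4)
```

In practice these operations would be composed with convolution, Dropout, and fully connected layers in a deep-learning framework; the sketch only shows the mechanics of the three operations the text singles out.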
The steel coil image state features in this step include a brightness feature, a chromaticity feature and texture features: the color third moments of the RGB color space are extracted as color information, and gray-level difference statistics are calculated to represent the texture features:
(1) brightness
Let I(i, j) be a pixel of the segmented character region; then the brightness feature B = (Σ I(i, j)) / Count, where Count is the number of pixels in the character region;
(2) chromaticity
It has been verified by extensive experiments that the color third moment of each component in RGB space can effectively reflect the character information. The third moment under the i-th color channel component of the character region is s_i = (Σ_j (j − μ_i)³ · p_{i,j})^(1/3), where p_{i,j} is the probability of occurrence of pixels with gray level j in the i-th color channel component and μ_i is the mean gray level of that component.
(3) textural characteristics
The static texture complexity is described using the gray-level difference statistical method. Let (x, y) and (x+Δx, y+Δy) be two pixels in the character region; the gray-level difference between them is Δg(x, y) = g(x, y) − g(x+Δx, y+Δy), where g(x, y) is the gray level of pixel (x, y) and g(x+Δx, y+Δy) is the gray level of pixel (x+Δx, y+Δy). Suppose the gray-level difference can take m possible values. The pixel pair is moved over the character region, the number of times Δg(x, y) takes each value is accumulated, and a histogram of Δg(x, y) is constructed; from the histogram, the probability that Δg(x, y) takes value i is p(i). The present invention extracts the angular second moment (ASM) to reflect the uniformity of the image gray-level distribution; if the gray values of nearby pixels differ greatly, the ASM value is larger, indicating a coarser texture.
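The three feature computations above can be sketched as follows. The displacement (Δx, Δy) = (1, 0), the 16-level quantization of the difference histogram, and the cube-root normalization of the third moment are illustrative assumptions where the text leaves the values unspecified.

```python
import numpy as np

def brightness(region):
    # Brightness feature: B = (sum of I(i, j)) / Count over the character region.
    return float(region.sum()) / region.size

def color_third_moment(channel):
    # Third-order moment of one RGB channel; the cube root keeps the sign
    # and returns the value to gray-level units (assumed normalization).
    mu = channel.mean()
    return float(np.cbrt(((channel - mu) ** 3).mean()))

def gray_diff_asm(gray, levels=16):
    # Gray-level difference statistics with assumed displacement (1, 0):
    # histogram of |g(x, y) - g(x+1, y)|, then the angular second moment
    # ASM = sum of p(i)^2 over the difference histogram.
    diff = np.abs(gray[:, 1:].astype(int) - gray[:, :-1].astype(int))
    hist, _ = np.histogram(diff, bins=levels, range=(0, 256))
    p = hist / hist.sum()
    return float((p ** 2).sum())

region = np.full((10, 10), 128, dtype=np.uint8)  # a flat test patch
```

On a perfectly flat patch all differences fall into one histogram bin, so the ASM reaches its maximum of 1.0; textured patches spread the histogram and lower it.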
In step 5, the reel number of the current steel coil is compared with the reel numbers stored in the MES database. If the match succeeds, the reel number of the current steel coil is obtained as the final recognition result, the final recognition result is transmitted to the back end, the back end manages the inventory according to the final recognition result, and the method proceeds to step 6; if the match fails, the method returns to step 2;
In step 6, an overhead crane lifts the steel coil from the lorry onto the transport track, and the steel coil is transported along the track to the target position for storage. During transportation, taking advantage of the start-stop intervals of the track, the temperature of the steel coil is detected by a temperature detection module whenever the track temporarily stops, and the detected temperature value is transmitted to the back end. Meanwhile, a side image of the steel coil is captured by the camera, the host computer uses the side image to judge whether the roundness of the steel coil is up to standard, and the judgment result is transmitted to the back end. This comprises the following steps:
Step 601: obtain the outer-circle edge points and the inner-circle edge points of the steel coil in the side image;
Step 602: fit an ideal outer circle from the outer-circle edge points, and fit an ideal inner circle from the inner-circle edge points;
Step 603: obtain the center of the ideal outer circle and the center of the ideal inner circle; if the outer-circle center and the inner-circle center coincide, define the coincident center as the calculation center and proceed to step 604; if the outer-circle center and the inner-circle center do not coincide, return to step 601;
Step 604: calculate the distance from each outer-circle edge point to the calculation center and take the maximum distance value Rmax; calculate the distance from each inner-circle edge point to the calculation center and take the minimum distance value Rmin; calculate the difference value δ = Rmax − Rmin; if δ is less than a preset threshold, the roundness of the current steel coil is considered up to standard, otherwise the roundness of the current steel coil is considered below standard.

Claims (5)

1. A method for identifying steel coil information, characterized by comprising the following steps:
Step 1: a positioning sensor detects that the lorry transporting the steel coil has parked at a designated position and feeds the coordinate information back to a host computer; the host computer controls the movement of a camera according to the obtained coordinate information, so that the primary optical axis of the camera is perpendicular to the current lorry and the height of the camera is level with the center of the steel coil carried on the lorry;
Step 2: acquire an image of the steel coil with the camera;
Step 3: judge whether the steel coil in the obtained image is complete; if it is complete, retain the current image and proceed to step 4; if it is incomplete, discard the current image and return to step 3;
Step 4: the host computer determines the position coordinates of the reel number in the image obtained in the previous step using the minimum enclosing rectangle method, and then converts the steel coil image state features at the position coordinates into character values by means of numerical character classification based on a convolutional neural network; the character values constitute the reel number of the current steel coil;
Step 5: compare the reel number of the current steel coil with the reel numbers stored in the MES database; if the match succeeds, obtain the reel number of the current steel coil as the final recognition result, transmit the final recognition result to the back end, where the inventory is managed according to the final recognition result, and proceed to step 6; if the match fails, return to step 2;
Step 6: an overhead crane lifts the steel coil from the lorry onto the transport track, and the steel coil is transported along the track to the target position for storage; during transportation, taking advantage of the start-stop intervals of the track, the temperature of the steel coil is detected by a temperature detection module whenever the track temporarily stops, and the detected temperature value is transmitted to the back end; meanwhile, a side image of the steel coil is captured by the camera, the host computer uses the side image to judge whether the roundness of the steel coil is up to standard, and the judgment result is transmitted to the back end, comprising the following steps:
Step 601: obtain the outer-circle edge points and the inner-circle edge points of the steel coil in the side image;
Step 602: fit an ideal outer circle from the outer-circle edge points, and fit an ideal inner circle from the inner-circle edge points;
Step 603: obtain the center of the ideal outer circle and the center of the ideal inner circle; if the outer-circle center and the inner-circle center coincide, define the coincident center as the calculation center and proceed to step 604; if the outer-circle center and the inner-circle center do not coincide, return to step 601;
Step 604: calculate the distance from each outer-circle edge point to the calculation center and take the maximum distance value Rmax; calculate the distance from each inner-circle edge point to the calculation center and take the minimum distance value Rmin; calculate the difference value δ = Rmax − Rmin; if δ is less than a preset threshold, the roundness of the current steel coil is considered up to standard, otherwise the roundness of the current steel coil is considered below standard.
2. The method for identifying steel coil information according to claim 1, characterized in that, in step 1, the camera is calibrated to remove distortion of the obtained image caused by the lens, so as to ensure that the steel coil image is not affected by distortion.
3. The method for identifying steel coil information according to claim 1, characterized in that, in step 4, in order to identify the steel coil number more accurately, the extracted steel coil image state features include a brightness feature, a chromaticity feature and texture features, wherein:
The brightness feature is obtained as follows:
Let I(i, j) be a pixel of the segmented character region; then the brightness feature B = (Σ I(i, j)) / Count, where Count is the number of pixels in the character region;
The chromaticity feature is obtained as follows:
The color third moment of each component in RGB space is taken to effectively reflect the character information; the third moment under the i-th color channel component of the character region is s_i = (Σ_j (j − μ_i)³ · p_{i,j})^(1/3), where p_{i,j} is the probability of occurrence of pixels with gray level j in the i-th color channel component and μ_i is the mean gray level of that component;
The texture features are obtained as follows:
The static texture complexity is described using the gray-level difference statistical method. Let (x, y) and (x+Δx, y+Δy) be two pixels in the character region; the gray-level difference between them is Δg(x, y) = g(x, y) − g(x+Δx, y+Δy), where g(x, y) is the gray level of pixel (x, y) and g(x+Δx, y+Δy) is the gray level of pixel (x+Δx, y+Δy). Suppose the gray-level difference can take m possible values; the pixel pair is moved over the character region, the number of times Δg(x, y) takes each value is accumulated, and a histogram of Δg(x, y) is constructed; from the histogram, the probability that Δg(x, y) takes value i is p(i). The angular second moment (ASM) is extracted to reflect the uniformity of the image gray-level distribution; if the gray values of nearby pixels differ greatly, the ASM value is larger, indicating a coarser texture.
4. The method for identifying steel coil information according to claim 1, characterized in that, in step 4, the digit recognition that converts the steel coil image state features at the position coordinates into character values is performed by a network architecture improved from LeNet-5, which comprises: an input layer, for inputting the digit image; a first convolutional layer, which extracts the internal features of the digit image supplied by the input layer by performing convolution operations on it through a convolution window; a first sampling layer S2, which obtains feature images by down-sampling the feature images of the first convolutional layer using max-pooling; a second convolutional layer, whose convolution filter size is set such that it is connected to the first sampling layer; a second sampling layer, obtained by down-sampling the feature images of the second convolutional layer; and a third convolutional layer, obtained by convolving the feature images of the second sampling layer, the third convolutional layer being fully connected to the second sampling layer, i.e., each convolution filter of the third convolutional layer performs convolution on the feature images of the second sampling layer. After the above steps, the image is reduced to single-pixel feature images on which the classification operation is carried out; the third convolutional layer is connected to the final output layer in a fully connected manner, the output is a one-dimensional vector, and the position of the largest component in the vector is the final classification result output by the network model.
5. The method for identifying steel coil information according to claim 4, characterized in that, during training of the network architecture improved from LeNet-5, sparse cross entropy is used as the loss function and an adaptive gradient descent algorithm is used to train the network by back-propagation; for a single sample, the cost function is as follows:

J(W, b; x, y) = ½ · ‖h_{W,b}(x) − y‖²   (1)

Formula (1) is a squared-error cost function. In formula (1), W denotes the weight parameters, b denotes the intercept terms, (x, y) denotes a single sample, and h_{W,b}(x) denotes the complex nonlinear hypothesis model;
For a data set containing m samples, the overall cost function is defined as:

J(W, b) = (1/m) Σ_{i=1}^{m} ½ · ‖h_{W,b}(x^(i)) − y^(i)‖²  +  (λ/2) Σ_{l=1}^{n_l−1} Σ_{i=1}^{s_l} Σ_{j=1}^{s_{l+1}} (W_{ji}^(l))²   (2)

In formula (2), the first term of the cost function J(W, b) is a mean squared error term, and the second is a weight decay term whose purpose is to reduce the magnitude of the weights and prevent over-fitting. In formula (2), (x^(i), y^(i)) denotes the i-th training sample, n_l denotes the total number of layers of the neural network, l indexes a layer of the network, λ denotes the weight decay coefficient, W_{ji}^(l) denotes the connection weight between the j-th node of layer l and the i-th node of layer l+1, s_i denotes the number of nodes in layer i, and s_j denotes the number of nodes in layer j;
In each iteration of network training, the gradient descent method is used to minimize the cost function and to fine-tune the parameters W and b:

W_{ij}^(l) = W_{ij}^(l) − α · ∂J(W, b) / ∂W_{ij}^(l)   (3)
b_i^(l) = b_i^(l) − α · ∂J(W, b) / ∂b_i^(l)   (4)

In formulas (3) and (4), α is the learning rate and b_i^(l) denotes the intercept term of the i-th node of layer l+1;
For each node i of the output layer n_l, the residual of the node is solved using the following formula:

δ_i^(n_l) = −(y_i − a_i^(n_l)) · f′(z_i^(n_l))   (5)

In formula (5), δ_i^(n_l) denotes the residual of the i-th node of layer n_l, a_i^(n_l) denotes the activation value of the i-th node of layer n_l, y_i denotes the i-th component of the target output, z_i^(n_l) denotes the weighted input of the i-th node of layer n_l, and f′ denotes the derivative of the activation function;
The residual of node i in layer l is computed as follows:

δ_i^(l) = (Σ_{j=1}^{s_{l+1}} W_{ji}^(l) · δ_j^(l+1)) · f′(z_i^(l))   (6)

In formula (6), δ_i^(l) denotes the residual of node i in layer l, s_{l+1} denotes the number of nodes in layer l+1, δ_j^(l+1) denotes the residual of node j in layer l+1, z_i^(l) denotes the weighted input of node i in layer l, and f′ denotes the derivative of the activation function;
The residual of each layer is expressed as:

δ^(l) = ((W^(l))^T δ^(l+1)) ∘ f′(z^(l))   (7)

where ∘ denotes element-wise multiplication. The partial derivatives for a single sample are expressed as:

∂J(W, b; x, y) / ∂W_{ij}^(l) = a_j^(l) · δ_i^(l+1)   (8)
∂J(W, b; x, y) / ∂b_i^(l) = δ_i^(l+1)   (9)

In formulas (7), (8) and (9), a_j^(l) denotes the activation value (output) of the j-th node of layer l;
The partial derivatives of the whole-sample cost function can be solved accordingly:

∂J(W, b) / ∂W_{ij}^(l) = (1/m) Σ_{k=1}^{m} ∂J(W, b; x^(k), y^(k)) / ∂W_{ij}^(l) + λ · W_{ij}^(l)   (10)
∂J(W, b) / ∂b_i^(l) = (1/m) Σ_{k=1}^{m} ∂J(W, b; x^(k), y^(k)) / ∂b_i^(l)   (11)

In formulas (10) and (11), λ denotes the weight decay coefficient;
All parameters W_{ji}^(l) and b_i^(l) are initialized to values close to zero, where W_{ji}^(l) denotes the connection weight between the j-th node of layer l and the i-th node of layer l+1 and b_i^(l) denotes the intercept term of the i-th node of layer l+1; the parameters W and b that minimize the cost function are thereby obtained, and training with these parameters reduces the error caused by the weights.
CN201910559952.XA 2019-06-26 2019-06-26 Method for identifying steel coil information Active CN110245663B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910559952.XA CN110245663B (en) 2019-06-26 2019-06-26 Method for identifying steel coil information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910559952.XA CN110245663B (en) 2019-06-26 2019-06-26 Method for identifying steel coil information

Publications (2)

Publication Number Publication Date
CN110245663A true CN110245663A (en) 2019-09-17
CN110245663B CN110245663B (en) 2024-02-02

Family

ID=67889512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910559952.XA Active CN110245663B (en) 2019-06-26 2019-06-26 Method for identifying steel coil information

Country Status (1)

Country Link
CN (1) CN110245663B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110598958A (en) * 2019-10-10 2019-12-20 武汉科技大学 Steel ladle grading management analysis method and system
CN110763692A (en) * 2019-10-29 2020-02-07 复旦大学 Belted steel burr detecting system
CN111443666A (en) * 2020-03-25 2020-07-24 唐山钢铁集团有限责任公司 Intelligent tracking method for steel coil quality judgment parameters based on database model
CN111898716A (en) * 2020-07-31 2020-11-06 广东昆仑信息科技有限公司 Method and system for automatically matching and tracking iron frame number and ladle number based on RFID (radio frequency identification) technology
CN111968103A (en) * 2020-08-27 2020-11-20 中冶赛迪重庆信息技术有限公司 Steel coil spacing detection method, system, medium and electronic terminal
CN113269759A (en) * 2021-05-28 2021-08-17 中冶赛迪重庆信息技术有限公司 Steel coil information detection method, system, medium and terminal based on image recognition
CN113280730A (en) * 2020-02-19 2021-08-20 宝钢日铁汽车板有限公司 System and method for efficiently detecting strip head of steel coil
CN113701652A (en) * 2021-09-23 2021-11-26 安徽工业大学 Intelligent high-precision detection and defect diagnosis system for inner diameter of steel coil
CN113920116A (en) * 2021-12-13 2022-01-11 武汉市什仔伟业勇进印务有限公司 Intelligent control method and system for color box facial tissue attaching process based on artificial intelligence
CN114486913A (en) * 2022-01-20 2022-05-13 宝钢湛江钢铁有限公司 Method for detecting geometric characteristics of edge of steel coil
CN114581911A (en) * 2022-03-07 2022-06-03 柳州钢铁股份有限公司 Steel coil label identification method and system
EP4350539A1 (en) * 2022-10-04 2024-04-10 Primetals Technologies Germany GmbH Method and system for automatic image-based recognition of identification information on an object

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4796209A (en) * 1986-06-26 1989-01-03 Allegheny Ludlum Corporation Random inventory system
JPH0724528A (en) * 1993-07-12 1995-01-27 Nippon Steel Corp Steel belt coil with longitudinal position mark
KR101482466B1 (en) * 2013-12-24 2015-01-13 주식회사 포스코 Strip winding apparatus for hot rolling line and method of the same
CN204680056U (en) * 2015-05-22 2015-09-30 宝鸡石油钢管有限责任公司 A kind of coil of strip information identification and positioning system
KR20170074306A (en) * 2015-12-21 2017-06-30 주식회사 포스코 Strip winding apparatus
CN206340020U (en) * 2016-11-22 2017-07-18 柳州钢铁股份有限公司 Coil of strip information automatic recognition system
CN109344825A (en) * 2018-09-14 2019-02-15 广州麦仑信息科技有限公司 A kind of licence plate recognition method based on convolutional neural networks
CN109447908A (en) * 2018-09-25 2019-03-08 上海大学 A kind of coil of strip recognition positioning method based on stereoscopic vision
CN109635797A (en) * 2018-12-01 2019-04-16 北京首钢自动化信息技术有限公司 Coil of strip sequence precise positioning method based on multichip carrier identification technology


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
郑庆元; 周思跃; 陈金波; 林万誉: "Steel coil detection technology based on stereo vision" (基于立体视觉的钢卷检测技术), 计量与测试技术 (Metrology & Measurement Technique), no. 05 *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110598958B (en) * 2019-10-10 2023-09-08 武汉科技大学 Ladle hierarchical management analysis method and system
CN110598958A (en) * 2019-10-10 2019-12-20 武汉科技大学 Steel ladle grading management analysis method and system
CN110763692A (en) * 2019-10-29 2020-02-07 复旦大学 Belted steel burr detecting system
CN110763692B (en) * 2019-10-29 2022-04-12 复旦大学 Belted steel burr detecting system
CN113280730A (en) * 2020-02-19 2021-08-20 宝钢日铁汽车板有限公司 System and method for efficiently detecting strip head of steel coil
CN111443666A (en) * 2020-03-25 2020-07-24 唐山钢铁集团有限责任公司 Intelligent tracking method for steel coil quality judgment parameters based on database model
CN111898716A (en) * 2020-07-31 2020-11-06 广东昆仑信息科技有限公司 Method and system for automatically matching and tracking iron frame number and ladle number based on RFID (radio frequency identification) technology
CN111898716B (en) * 2020-07-31 2023-05-23 广东昆仑信息科技有限公司 Method and system for automatically matching and tracking iron frame number and ladle number based on RFID (radio frequency identification) identification technology
CN111968103A (en) * 2020-08-27 2020-11-20 中冶赛迪重庆信息技术有限公司 Steel coil spacing detection method, system, medium and electronic terminal
CN111968103B (en) * 2020-08-27 2023-05-09 中冶赛迪信息技术(重庆)有限公司 Steel coil interval detection method, system, medium and electronic terminal
CN113269759A (en) * 2021-05-28 2021-08-17 中冶赛迪重庆信息技术有限公司 Steel coil information detection method, system, medium and terminal based on image recognition
CN113701652A (en) * 2021-09-23 2021-11-26 安徽工业大学 Intelligent high-precision detection and defect diagnosis system for inner diameter of steel coil
CN113701652B (en) * 2021-09-23 2024-06-07 安徽工业大学 Intelligent high-precision detection and defect diagnosis system for inner diameter of steel coil
CN113920116A (en) * 2021-12-13 2022-01-11 武汉市什仔伟业勇进印务有限公司 Intelligent control method and system for color box facial tissue attaching process based on artificial intelligence
CN113920116B (en) * 2021-12-13 2022-03-15 武汉市什仔伟业勇进印务有限公司 Intelligent control method and system for color box facial tissue attaching process based on artificial intelligence
CN114486913A (en) * 2022-01-20 2022-05-13 宝钢湛江钢铁有限公司 Method for detecting geometric characteristics of edge of steel coil
CN114581911A (en) * 2022-03-07 2022-06-03 柳州钢铁股份有限公司 Steel coil label identification method and system
EP4350539A1 (en) * 2022-10-04 2024-04-10 Primetals Technologies Germany GmbH Method and system for automatic image-based recognition of identification information on an object

Also Published As

Publication number Publication date
CN110245663B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN110245663A Method for identifying steel coil information
CN113160192B (en) Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background
CN106548182B (en) Pavement crack detection method and device based on deep learning and main cause analysis
De Charette et al. Real time visual traffic lights recognition based on spot light detection and adaptive traffic lights templates
CN104008370B (en) A kind of video face identification method
CN103824066B (en) A kind of licence plate recognition method based on video flowing
WO2017190574A1 (en) Fast pedestrian detection method based on aggregation channel features
CN110728225B (en) High-speed face searching method for attendance checking
CN107403168A (en) A kind of facial-recognition security systems
CN104992449A (en) Information identification and surface defect on-line detection method based on machine visual sense
CN103593670A (en) Copper sheet and strip surface defect detection method based on-line sequential extreme learning machine
CN109506628A (en) Object distance measuring method under a kind of truck environment based on deep learning
CN101715111B (en) Method for automatically searching abandoned object in video monitoring
CN110097596A (en) A kind of object detection system based on opencv
CN108918532A (en) A kind of through street traffic sign breakage detection system and its detection method
CN110197166A (en) A kind of car body loading condition identification device and method based on image recognition
CN111914761A (en) Thermal infrared face recognition method and system
CN113449606B (en) Target object identification method and device, computer equipment and storage medium
CN110619336B (en) Goods identification algorithm based on image processing
CN108205649A (en) Driver drives to take the state identification method and device of phone
CN114004814A (en) Coal gangue identification method and system based on deep learning and gray scale third moment analysis
CN112686248A (en) Certificate increase and decrease type detection method and device, readable storage medium and terminal
CN106169086B (en) High-resolution optical image under navigation data auxiliary damages method for extracting roads
CN111950556A (en) License plate printing quality detection method based on deep learning
CN115082509B (en) Method for tracking non-feature target

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant