CN116935363A - Cutter identification method, cutter identification device, electronic equipment and readable storage medium


Info

Publication number: CN116935363A
Application number: CN202310815863.3A
Authority: CN (China)
Prior art keywords: cutter, tool, image, training, value
Legal status: Granted; Active
Other versions: CN116935363B (granted publication)
Language: Chinese (zh)
Inventor: 李安平 (Li Anping)
Original and current assignee: Dongguan Weizhen Technology Co ltd
Application filed by Dongguan Weizhen Technology Co ltd; priority to CN202310815863.3A; published as CN116935363A and, upon grant, as CN116935363B.

Classifications

    • G06V 20/60 — Scenes; scene-specific elements; type of objects
    • G06N 3/0464 — Neural networks; architecture; convolutional networks [CNN, ConvNet]
    • G06N 3/047 — Neural networks; architecture; probabilistic or stochastic networks
    • G06N 3/048 — Neural networks; architecture; activation functions
    • G06N 3/084 — Neural networks; learning methods; backpropagation, e.g. using gradient descent
    • G06N 3/0985 — Neural networks; learning methods; hyperparameter optimisation; meta-learning; learning-to-learn
    • G06V 10/764 — Image or video recognition using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/82 — Image or video recognition using pattern recognition or machine learning; neural networks
    • G06V 2201/06 — Recognition of objects for industrial automation
    • Y02P 90/30 — Computing systems specially adapted for manufacturing

Landscapes

Engineering & Computer Science; Theoretical Computer Science; Physics & Mathematics; Evolutionary Computation; General Physics & Mathematics; Health & Medical Sciences; Artificial Intelligence; Software Systems; Computing Systems; General Health & Medical Sciences; Data Mining & Analysis; Computational Linguistics; Mathematical Physics; General Engineering & Computer Science; Life Sciences & Earth Sciences; Biomedical Technology; Biophysics; Molecular Biology; Multimedia; Databases & Information Systems; Computer Vision & Pattern Recognition; Medical Informatics; Probability & Statistics with Applications; Image Analysis

Abstract

The application discloses a tool identification method, a tool identification device, an electronic device, and a computer-readable storage medium. The method first acquires a tool image of a target tool and normalizes it; the normalized image is then input into a pre-trained convolutional neural network to obtain a tool classification vector. The classification vector is passed through a normalized exponential function to obtain a normalized array. The maximum value of the normalized array is taken as the identification confidence and, when this confidence is greater than a preset threshold, the index value corresponding to the maximum value is taken as the mark number of the target tool; the mark number characterizes the type of the target tool. The tool identification method of the present application can thereby determine the tool type in the tool image.

Description

Cutter identification method, cutter identification device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of tool identification technologies, and in particular to a tool identification method, a tool identification device, an electronic device, and a computer-readable storage medium.
Background
With the development of manufacturing towards automation and intelligence, numerical control machining centers are widely used, and more and more machining center auxiliary systems have appeared, such as machine tool anti-collision alarm systems and tool wear monitoring systems. These auxiliary systems greatly improve machining efficiency, save machining costs, and improve economic benefits for enterprises.
However, the auxiliary systems in the related art cannot identify the tool type, so they handle all tool types the same way when operating. In a machine tool anti-collision alarm system, for example, different types of tools use the same anti-collision alarm threshold, which causes missed or false alarms. There is therefore a need for a method that can identify the tool type.
Disclosure of Invention
The present application aims to solve at least one of the technical problems in the related art. To this end, it proposes a tool identification method, apparatus, electronic device, and computer-readable storage medium that input a tool image into a pre-trained convolutional neural network and thereby determine the tool type in the image.
An embodiment of a first aspect of the present application provides a tool recognition method, including:
acquiring a tool image of a target tool;
normalizing the cutter image;
inputting the cutter image subjected to the normalization processing into a convolutional neural network trained in advance to obtain a cutter classification vector;
inputting the cutter classification vector into a normalized exponential function to obtain a normalized array;
taking the maximum value in the normalized array as an identification confidence, and taking an index value corresponding to the maximum value in the normalized array as a mark number of the target tool when the identification confidence is larger than a preset threshold; the marking number characterizes the type of the target tool.
The tool identification method provided by the embodiments of the application has at least the following beneficial effects. A tool image of a target tool is first acquired and normalized; the normalized image is input into a pre-trained convolutional neural network to obtain a tool classification vector; the classification vector is input into a normalized exponential function to obtain a normalized array. The maximum value of the normalized array is taken as the identification confidence and, when the confidence is greater than a preset threshold, the index value corresponding to the maximum value is taken as the mark number of the target tool, which characterizes the type of the target tool. The tool identification method of the present application can thereby determine the tool type in the tool image.
According to some embodiments of the application, the training step of the convolutional neural network comprises:
acquiring a cutter training image, classifying the cutter training image, and distributing the mark number to each cutter type; wherein the cutter types are in one-to-one correspondence with the mark numbers;
performing enhancement processing on the cutter training image;
carrying out normalization processing on the cutter training image subjected to the enhancement processing to obtain a training set;
inputting the training set into the initial convolutional neural network to obtain a cutter training classification vector;
inputting the cutter training classification vector into the normalized exponential function to obtain a training normalized array;
calculating to obtain a loss value according to the real distribution of the mark numbers corresponding to the cutter training image, the training normalization array and the loss function;
and updating parameters of the convolutional neural network according to the loss value.
According to some embodiments of the application, the loss function is the cross-entropy loss:

$$\mathrm{Loss} = -\sum_{i=0}^{c-1} y_i \log(\hat{y}_i)$$

where $\mathrm{Loss}$ is the loss value, $y$ is the true distribution of the mark numbers, $\hat{y}$ is the training normalized array, $c$ is the total number of mark numbers, and $i = 0, 1, \ldots, c-1$.
According to some embodiments of the application, the inputting the training set into the initial convolutional neural network to obtain a cutter training classification vector includes:
inputting the training set into a backbone network to obtain an image training feature vector;
and inputting the image training feature vector to a convolutional network classification layer to obtain the cutter training classification vector.
According to some embodiments of the application, the normalized exponential function is:

$$\hat{y}_k = P(C = k) = \frac{e^{Z_k}}{\sum_{i=0}^{c-1} e^{Z_i}}$$

where $\hat{y}$ is the training normalized array, $P(C = k)$ is the probability that the mark number of the tool training image is $k$, $Z_i$ is the value of the $i$-th bit of the tool training classification vector, $i = 0, 1, \ldots, c-1$, $k = 0, 1, \ldots, c-1$, and $c$ is the total number of mark numbers.
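As a minimal sketch (not part of the patent text), the normalized exponential function can be implemented as follows; subtracting the maximum first is a standard numerical-stability trick and does not change the result:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Normalized exponential function: P(C = k) = exp(Z_k) / sum_i exp(Z_i).

    Subtracting max(z) avoids overflow for large classification-vector values
    while leaving the probabilities unchanged.
    """
    e = np.exp(z - np.max(z))
    return e / e.sum()
```

The output is a normalized array: non-negative entries summing to one, preserving the ordering of the classification-vector values.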
According to some embodiments of the application, the normalizing the tool image includes:
calculating a green channel brightness average value and a green channel brightness standard deviation of the cutter image according to the cutter image;
calculating a red channel brightness average value and a red channel brightness standard deviation of the cutter image according to the cutter image;
calculating a blue channel brightness average value and a blue channel brightness standard deviation of the cutter image according to the cutter image;
subtracting the red channel brightness average from the red channel brightness value of each tool image, and dividing the result by the red channel brightness standard deviation;
subtracting the green channel brightness average from the green channel brightness value of each tool image, and dividing the result by the green channel brightness standard deviation;
and subtracting the blue channel brightness average from the blue channel brightness value of each tool image, and dividing the result by the blue channel brightness standard deviation.
According to some embodiments of the application, the updating the parameters of the convolutional neural network according to the loss value includes:
and updating parameters of the convolutional neural network by adopting a random gradient descent algorithm.
An embodiment of a second aspect of the present application provides a tool recognition apparatus, including:
the acquisition module is used for acquiring a tool image of the target tool;
the normalization module is used for carrying out normalization processing on the cutter image;
the recognition module is used for inputting the cutter image subjected to the normalization processing into a convolutional neural network trained in advance to obtain a cutter classification vector;
the vector normalization module is used for inputting the tool classification vector into a normalization exponential function to obtain a normalization array;
the mark number determining module is used for taking the maximum value in the normalized array as the identification confidence, and taking the index value corresponding to the maximum value in the normalized array as the mark number of the target tool when the identification confidence is larger than a preset threshold.
An embodiment of a third aspect of the present application provides an electronic device, including a memory storing a computer program and a processor implementing the tool recognition method according to any one of the embodiments of the first aspect when the processor executes the computer program.
An embodiment of a fourth aspect of the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, performs the tool recognition method according to any one of the embodiments of the first aspect.
Additional aspects and advantages of the application will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application.
Drawings
The application is further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flow chart of a method for identifying a tool according to some embodiments of the application;
FIG. 2 is a flow chart of normalization of a tool recognition method according to some embodiments of the present application;
FIG. 3 is a schematic diagram of a training process of a convolutional neural network of a tool recognition method according to some embodiments of the present application;
FIG. 4 is a schematic diagram of a training sub-process of a convolutional neural network of a tool recognition method according to further embodiments of the present application;
FIG. 5 is a schematic view of a tool recognition device according to some embodiments of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application.
In the description of the present application, it should be understood that references to orientation descriptions such as upper, lower, front, rear, left, right, etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of description of the present application and to simplify the description, and do not indicate or imply that the apparatus or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the present application.
In the description of the present application, "several" means one or more, "a plurality of" means two or more, and terms such as greater than, less than, and exceeding are understood to exclude the stated number, while terms such as above, below, and within are understood to include it. The terms "first" and "second" are used only to distinguish technical features and should not be construed as indicating or implying relative importance, the number of features indicated, or the precedence of the features indicated.
In the description of the present application, unless explicitly defined otherwise, terms such as arrangement, installation, connection, etc. should be construed broadly and the specific meaning of the terms in the present application can be reasonably determined by a person skilled in the art in combination with the specific contents of the technical scheme.
In the description of the present application, the descriptions of the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Referring to fig. 1, an embodiment of a first aspect of the present application provides a tool recognition method, including, but not limited to, the steps of:
step S110, acquiring a tool image of a target tool;
step S120, carrying out normalization processing on the cutter image;
step S130, inputting the normalized cutter image into a pre-trained convolutional neural network to obtain a cutter classification vector;
step S140, inputting the tool classification vector into a normalized exponential function to obtain a normalized array;
step S150, taking the maximum value in the normalized array as the identification confidence and, when the identification confidence is greater than a preset threshold, taking the index value corresponding to the maximum value as the mark number of the target tool; the mark number characterizes the type of the target tool.
The tool identification method of the embodiments of the application comprises steps S110 to S150. A tool image of a target tool is first acquired and normalized; the normalized image is input into a pre-trained convolutional neural network to obtain a tool classification vector; the classification vector is input into a normalized exponential function to obtain a normalized array. The maximum value of the normalized array is taken as the identification confidence and, when the confidence is greater than a preset threshold, the index value corresponding to the maximum value is taken as the mark number of the target tool, which characterizes the type of the target tool. The method can therefore determine the tool type in the tool image without manual confirmation of the tool type.
It should be noted that, before step S110 (for example, when training the convolutional neural network), a tool information database needs to be constructed. The database contains the tool types and the specification information corresponding to each tool type, and a mark number is assigned to each tool type, with mark numbers corresponding one-to-one to tool types. After the mark number is determined in step S150, the tool type and tool specification information corresponding to the mark number can be looked up in the tool information database and sent to the machining center auxiliary system, so that the auxiliary system can perform the corresponding operations according to that information. For example, if the tool specification information includes tool size information and the auxiliary system is a machine tool anti-collision alarm system, the anti-collision system can act on the tool size information to avoid damage caused by a collision between the machine tool and the tool. The anti-collision alarm system itself can be a system of the prior art and is not described further here.
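As an illustration only, the tool information database mapping mark numbers to tool type and specification information could be as simple as a keyed table. The patent does not specify a storage format, and the entries below are hypothetical:

```python
# Hypothetical tool information database: mark number -> type and specification.
TOOL_DB = {
    0: {"type": "flat end mill", "diameter_mm": 6.0, "length_mm": 50.0},
    1: {"type": "ball end mill", "diameter_mm": 8.0, "length_mm": 60.0},
    2: {"type": "twist drill", "diameter_mm": 5.0, "length_mm": 52.0},
}

def lookup_tool(mark_number: int):
    """Return the tool type and specification for a mark number, or None."""
    return TOOL_DB.get(mark_number)
```

In a deployment, the record returned by such a lookup is what would be forwarded to the machining center auxiliary system.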
It will be appreciated that in step S110, a tool image is acquired first, and the tool image may be acquired by configuring the machine tool with a camera. In other embodiments, a video of the tool is acquired by a camera and the tool image is extracted from the video.
When the tool identification method is applied, only vision equipment needs to be added to the existing machine tool system to acquire the tool image; no modification of the machine tool is required, the installation is non-destructive, and there is no impact on the machine tool PLC system or NC system. The tool identification method therefore has a low deployment cost and can be deployed on a machine tool system without a PLC professional.
It will be appreciated that referring to fig. 2, step S120 may include, but is not limited to, the following steps:
step S210, calculating the average value and standard deviation of the green channel brightness of the cutter image according to the cutter image;
step S220, calculating the average value of the brightness of the red channel and the standard deviation of the brightness of the red channel of the cutter image according to the cutter image;
step S230, calculating the average value of the brightness of the blue channel and the standard deviation of the brightness of the blue channel of the cutter image according to the cutter image;
step S240, subtracting the red channel brightness average from the red channel brightness value of each tool image, and dividing the result by the red channel brightness standard deviation;
step S250, subtracting the green channel brightness average from the green channel brightness value of each tool image, and dividing the result by the green channel brightness standard deviation;
step S260, subtracting the blue channel brightness average from the blue channel brightness value of each tool image, and dividing the result by the blue channel brightness standard deviation.
It is noted that there are typically multiple acquired tool images; the application does not limit the specific number. Through steps S210 to S230, the brightness average and brightness standard deviation of each RGB channel of the tool images are calculated:

$$\mathrm{Mean}_R = \frac{1}{n}\sum_{i=1}^{n} X_{R(i)}, \qquad \mathrm{Std}_R = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(X_{R(i)} - \mathrm{Mean}_R\bigr)^2}$$

and analogously for the green and blue channels, where $n$ is the number of tool images and $i = 1, \ldots, n$; $\mathrm{Mean}_R$, $\mathrm{Mean}_G$, and $\mathrm{Mean}_B$ are the red, green, and blue channel brightness averages over all tool images; $X_{R(i)}$, $X_{G(i)}$, and $X_{B(i)}$ are the red, green, and blue channel brightness values of the $i$-th tool image; and $\mathrm{Std}_R$, $\mathrm{Std}_G$, and $\mathrm{Std}_B$ are the red, green, and blue channel brightness standard deviations over all tool images.
After step S230, the RGB brightness values of each tool image are normalized through steps S240 to S260 to obtain the normalized tool image:

$$X_{R\_new} = \frac{X_R - \mathrm{Mean}_R}{\mathrm{Std}_R}, \qquad X_{G\_new} = \frac{X_G - \mathrm{Mean}_G}{\mathrm{Std}_G}, \qquad X_{B\_new} = \frac{X_B - \mathrm{Mean}_B}{\mathrm{Std}_B}$$

where $X_{R\_new}$, $X_{G\_new}$, and $X_{B\_new}$ are the red, green, and blue channel brightness values of the normalized tool image.
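Steps S210 to S260 can be sketched with NumPy as follows; stacking the images as an (n, H, W, 3) float array is an assumption of this sketch, not something the patent specifies:

```python
import numpy as np

def normalize_tool_images(images: np.ndarray) -> np.ndarray:
    """Channel-wise normalization over a batch of RGB tool images.

    images: float array of shape (n, H, W, 3).
    Computes the brightness mean and standard deviation per channel over
    all n images (S210-S230), then shifts and scales each channel (S240-S260).
    """
    mean = images.mean(axis=(0, 1, 2))  # Mean_R, Mean_G, Mean_B
    std = images.std(axis=(0, 1, 2))    # Std_R, Std_G, Std_B
    return (images - mean) / std
```

After this transform, each channel of the batch has zero mean and unit standard deviation, which is the property the convolutional neural network input relies on.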
It may be appreciated that, in step S150, the maximum value in the normalized array is taken as the identification confidence; when the confidence is greater than a preset threshold, the index value corresponding to that maximum is taken as the mark number of the target tool. If the confidence is less than or equal to the preset threshold, steps S110 to S140 are repeated until the confidence exceeds the threshold; only when the identification confidence is greater than the preset threshold is the obtained mark number of the target tool reliable. The preset threshold is not specifically limited in the embodiments of the application and can be set by a person skilled in the art according to actual needs.
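Step S150 and the retry logic can be sketched as follows; the 0.8 threshold is an arbitrary illustration, since the patent leaves the preset threshold to the practitioner:

```python
import numpy as np

def identify_tool(normalized_array: np.ndarray, threshold: float = 0.8):
    """Return (mark_number, confidence) when confident, else None.

    A None result signals the caller to re-run steps S110-S140 on a
    freshly acquired tool image.
    """
    confidence = float(np.max(normalized_array))
    if confidence > threshold:
        return int(np.argmax(normalized_array)), confidence
    return None
```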
It will be appreciated that with reference to fig. 3, the training step of the convolutional neural network includes, but is not limited to, the following steps:
step S310, acquiring a cutter training image, classifying the cutter training image, and distributing a mark number to each cutter type; wherein the cutter types are in one-to-one correspondence with the mark numbers;
step S320, carrying out enhancement processing on the cutter training image;
step S330, carrying out normalization processing on the cutter training image subjected to enhancement processing to obtain a training set;
step S340, inputting the training set into an initial convolutional neural network to obtain a cutter training classification vector;
step S350, inputting the cutter training classification vector into a normalization exponential function to obtain a training normalization array;
step S360, calculating a loss value according to the real distribution of the mark numbers corresponding to the cutter training image, the training normalization array and the loss function;
and step S370, updating parameters of the convolutional neural network according to the loss value.
The parameters of the convolutional neural network are iteratively updated by cycling through steps S310 to S370 until a training stop condition is reached, yielding the trained convolutional neural network. The parameters may refer to the weights or bias matrices of all convolutional layers in the network, and the stop condition may be that the number of cycles reaches a preset cycle threshold or that the loss value falls below a preset loss value. Neither the preset loss value nor the preset cycle threshold is specifically limited in the embodiments of the application.
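Step S370 can be sketched on the classification layer alone; for softmax followed by cross-entropy, the gradient of the loss with respect to the pre-softmax scores is simply ŷ − y, which the snippet below exploits. This is a simplification: the patent updates the weights of all convolutional layers, not just one linear layer.

```python
import numpy as np

def sgd_step_linear(W: np.ndarray, x: np.ndarray, y_true: np.ndarray,
                    lr: float = 0.01) -> np.ndarray:
    """One stochastic-gradient-descent update of a linear classification
    layer under softmax + cross-entropy loss, for a single sample.

    W: (c, N) weights, x: (N,) feature vector, y_true: (c,) one-hot target.
    Uses dLoss/dW = outer(y_hat - y_true, x).
    """
    z = W @ x
    e = np.exp(z - z.max())
    y_hat = e / e.sum()
    return W - lr * np.outer(y_hat - y_true, x)
```

With a sufficiently small learning rate, one such step lowers the cross-entropy loss on the sample, which is the behaviour the training loop of steps S310 to S370 repeats until the stop condition holds.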
Notably, in step S310, tool training images are acquired and classified, and a mark number is assigned to each tool type. Various types of tools can be photographed with a camera to obtain the tool training images; images are acquired from different viewing angles, under different illumination, and at different focal lengths, so that images of the tools in different states are available. A tool information database is then constructed, containing the tool types and the specification information corresponding to each tool type, and a mark number is assigned to each tool type, with mark numbers corresponding one-to-one to tool types.
It is noted that, in step S320, the tool training images are enhanced: operations such as colour changes, rotations, and random cropping are applied to augment the tool image samples.
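The enhancement of step S320 can be sketched as follows; the specific transforms and their parameters are illustrative stand-ins for the colour-change and rotation operations the patent mentions (random cropping is omitted for brevity):

```python
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Illustrative augmentation of a square (H, H, 3) image: random
    90-degree rotation, optional horizontal flip, and brightness jitter."""
    img = np.rot90(img, k=int(rng.integers(0, 4)))
    if rng.random() < 0.5:
        img = img[:, ::-1]             # horizontal flip
    factor = rng.uniform(0.8, 1.2)     # simple colour/brightness change
    return np.clip(img * factor, 0, 255)
```

Each call produces a differently transformed copy of the same tool, enlarging the effective training sample.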
Note that, in step S330, the enhanced tool training images are normalized to obtain the training set. There are usually multiple tool training images; the application does not limit the specific number. For each RGB channel, the brightness average and brightness standard deviation over the tool training images are calculated; then, for the red, green, and blue channels in turn, the channel brightness average is subtracted from the channel brightness value of each tool training image and the result is divided by the channel brightness standard deviation. Of the normalized tool training images, one part is used as the training set and another part as the verification set.
In an embodiment, when the training images are normalized, they are also cropped to a preset size; for example, all training images are cropped to 418×418.
In an embodiment, each time steps S310 to S360 are performed, the verification set is input to the convolutional neural network and its loss value is calculated, and the parameters of the convolutional neural network are updated with this loss value until the loss value of the verification set falls below a preset loss value.
It can be appreciated that the loss function is:

Loss = -Σ_{i=0}^{c} y_i · log(ŷ_i)

where Loss is the loss value, y is the true distribution of the mark numbers, ŷ is the training normalized array, c is the total number of index numbers, and i is 0, 1, …, c.
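Assuming the loss function is the cross-entropy between the true mark-number distribution y and the training normalized array ŷ (consistent with the symbols above), a minimal sketch is:

```python
import numpy as np

def cross_entropy_loss(y_true, y_pred, eps=1e-12):
    """Loss = -sum_i y_i * log(yhat_i), summed over the index-number classes."""
    y_pred = np.clip(y_pred, eps, 1.0)  # guard against log(0)
    return -np.sum(y_true * np.log(y_pred))

# One-hot true distribution for mark number 2, and a training normalized array
y = np.array([0.0, 0.0, 1.0, 0.0])
y_hat = np.array([0.1, 0.1, 0.7, 0.1])
loss = cross_entropy_loss(y, y_hat)
print(round(loss, 4))  # 0.3567, i.e. -log(0.7)
```

For a one-hot y, the sum reduces to the negative log-probability assigned to the true mark number, so the loss falls as that probability approaches 1.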
It will be appreciated that referring to fig. 4, step S340, inputting the training set into the initial convolutional neural network to obtain the cutter training classification vector may include, but is not limited to, the following steps:
step S410, inputting a training set into a backbone network to obtain an image training feature vector;
step S420, inputting the image training feature vector into a convolutional network classification layer to obtain a cutter training classification vector.
The convolutional neural network comprises a backbone network and a convolutional network classification layer. During training, the training set is input to the backbone network to obtain image training feature vectors, which are then input to the convolutional network classification layer to obtain the cutter training classification vectors. In some embodiments, the backbone network is a resnet108 network and the convolutional network classification layer is a fully connected network whose input dimension is M×N and output dimension is M×C, where M is the number of input samples, N is the size of an image feature converted into a one-dimensional vector, and C is the total number of index numbers of the tools. Correspondingly, in step S130, the normalized tool image is input to the backbone network of the convolutional neural network, the features of the tool image are extracted to obtain a tool image feature vector, and the tool image feature vector is input to the convolutional network classification layer of the convolutional neural network to obtain a tool classification vector.
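A minimal sketch of the backbone/classification-layer split might look like the following; the flattening "backbone" is only a stand-in for the resnet108 network named in the text, and the dimensions M, N, C are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, C = 4, 16, 10  # M samples, N-dim flattened features, C index numbers

def backbone(images):
    """Stand-in for the ResNet backbone: flatten each image to an N-dim feature vector."""
    return images.reshape(images.shape[0], -1)

def classification_layer(features, weights, bias):
    """Fully connected layer: maps M x N features to M x C classification vectors."""
    return features @ weights + bias

images = rng.normal(size=(M, 4, 4))   # already-normalized inputs
features = backbone(images)           # (M, N)
W = rng.normal(size=(N, C)) * 0.01
b = np.zeros(C)
logits = classification_layer(features, W, b)
print(logits.shape)  # (4, 10), one classification vector per sample
```

A real implementation would use a deep-learning framework, where the fully connected layer is a learned module and the backbone produces the N-dimensional feature through convolution and pooling stages.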
It can be appreciated that the normalized exponential function softmax is:

P(c=k) = exp(Z_k) / Σ_{i=0}^{c} exp(Z_i)

where ŷ is the training normalized array, P(c=k) is the probability that the index number of the tool training image is k, Z_i is the value of the i-th bit of the cutter training classification vector, Z_k is the value of the k-th bit of the image training feature vector, i is 0, 1, …, c; k is 0, 1, …, c; and c is the total number of index numbers. In step S130, the normalized tool image is input to the backbone network of the convolutional neural network, the features of the tool image are extracted to obtain a tool image feature vector, the tool image feature vector is input to the convolutional network classification layer of the convolutional neural network to obtain a tool classification vector, and the tool classification vector is input to the same normalized exponential function, where ŷ is then the normalized array, P(c=k) is the probability that the index number of the tool image is k, Z_i is the value of the i-th bit of the tool classification vector, Z_k is the value of the k-th bit of the tool image feature vector, i is 0, 1, …, c; k is 0, 1, …, c; and c is the total number of index numbers.
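The normalized exponential function can be sketched as follows; the max-shift before exponentiation is a standard numerical-stability trick and is not part of the patent text:

```python
import numpy as np

def softmax(z):
    """Normalized exponential: P(c=k) = exp(Z_k) / sum_i exp(Z_i)."""
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

z = np.array([2.0, 1.0, 0.1])  # a toy classification vector
p = softmax(z)
print(round(p.sum(), 6))  # 1.0: the output is a probability distribution
```

The resulting array is non-negative and sums to one, so its maximum entry can serve directly as the recognition confidence described in the method.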
It is understood that, in an embodiment, step S370, updating the parameters of the convolutional neural network according to the loss value may include the following steps:
the parameters of the convolutional neural network are updated by adopting a stochastic gradient descent algorithm.
Specifically, the parameters of the convolutional neural network are updated by the following formula:

θ_{i+1} = θ_i - α · ∂J(θ, ε, …)/∂θ_i

where θ_{i+1} represents the parameters of the (i+1)-th iteration of training and θ_i the parameters of the i-th iteration; α represents the learning rate and is a constant; ∂/∂θ_i denotes the partial derivative; and J(θ, ε, …) is a functional expression of all parameter components of the convolutional neural network. θ and ε each represent a parameter in the parameter set of the convolutional neural network, for example the weights or bias matrices of the convolutional layers; the application does not specifically limit which parameters are used. According to the embodiment of the application, the parameters of the convolutional neural network are updated with the stochastic gradient descent algorithm until the loss value of the verification set input to the convolutional neural network becomes stable, so that the pre-trained convolutional neural network is obtained.
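The update rule can be sketched on a toy objective; this illustrates the θ_{i+1} = θ_i - α·∂J/∂θ_i step, using a full (non-stochastic) gradient of an assumed J(θ) = θ² for simplicity:

```python
def sgd_step(theta, grad, lr=0.1):
    """One parameter update: theta_{i+1} = theta_i - alpha * dJ/dtheta_i."""
    return theta - lr * grad

# Minimize the toy objective J(theta) = theta^2, whose gradient is 2*theta
theta = 5.0
for _ in range(100):
    theta = sgd_step(theta, 2 * theta)
print(theta)  # close to 0, the minimizer of J
```

In the stochastic variant used for network training, the gradient at each step is computed on a random mini-batch of the training set rather than on the full objective.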
Referring to fig. 5, a second aspect of the embodiment of the present application provides a tool recognition apparatus, comprising:
the acquisition module 510, configured to acquire a tool image of a target tool;
the image normalization module 520, configured to normalize the tool image;
the recognition module 530, configured to input the normalized tool image into a pre-trained convolutional neural network to obtain a tool classification vector;
the vector normalization module 540, configured to input the tool classification vector into the normalized exponential function to obtain a normalized array;
the mark number determining module 550, configured to take the maximum value in the normalized array as the recognition confidence and, when the recognition confidence is greater than a preset threshold, to take the index value corresponding to that maximum value as the mark number of the target tool.
The tool recognition device first acquires a tool image of the target tool through the acquisition module 510; the image normalization module 520 then normalizes the tool image, and the recognition module 530 inputs the normalized tool image into a pre-trained convolutional neural network to obtain a tool classification vector. The vector normalization module 540 inputs the tool classification vector into the normalized exponential function to obtain a normalized array. The mark number determining module 550 takes the maximum value in the normalized array as the recognition confidence and, when the recognition confidence is greater than a preset threshold, takes the index value corresponding to that maximum value as the mark number of the target tool; the mark number can represent the type information of the target tool. The tool recognition device can thus determine the tool type in the tool image.
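The mark number determining step (maximum as confidence, index as mark number when above the threshold) can be sketched as follows; the function name and the example threshold of 0.5 are assumptions for illustration:

```python
import numpy as np

def identify_tool(normalized_array, threshold=0.5):
    """Return (mark_number, confidence), with mark_number None below threshold."""
    confidence = float(normalized_array.max())  # recognition confidence
    if confidence > threshold:
        # Index of the maximum entry serves as the mark number of the target tool
        return int(normalized_array.argmax()), confidence
    return None, confidence

probs = np.array([0.05, 0.80, 0.10, 0.05])  # a normalized array from softmax
mark, conf = identify_tool(probs)
print(mark, conf)  # 1 0.8
```

Returning None when the confidence is at or below the threshold mirrors the method's behavior of only assigning a mark number for sufficiently confident recognitions.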
It should be noted that, the specific implementation of the tool recognition device is substantially the same as the specific embodiment of the tool recognition method described above, and will not be described herein again.
In a third aspect, an embodiment of the present application further provides an electronic device comprising a memory and a processor, where the memory stores a computer program and the processor implements the tool recognition method when executing the computer program. The electronic device can be any intelligent terminal, including a tablet computer, a vehicle-mounted computer, and the like.
Referring to fig. 6, fig. 6 illustrates a hardware structure of an electronic device according to another embodiment, the electronic device includes:
the processor 601, which may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solution provided by the embodiments of the present application;
the memory 602, which may be implemented in the form of Read Only Memory (ROM), static storage, dynamic storage, or Random Access Memory (RAM). The memory 602 may store an operating system and other application programs; when the technical solutions provided in the embodiments of the present disclosure are implemented in software or firmware, the relevant program code is stored in the memory 602 and invoked by the processor 601 to execute the tool recognition method of the embodiments of the present disclosure;
an input/output interface 603 for implementing information input and output;
the communication interface 604 is configured to implement communication interaction between the device and other devices, and may implement communication in a wired manner (e.g. USB, network cable, etc.), or may implement communication in a wireless manner (e.g. mobile network, WIFI, bluetooth, etc.);
a bus 605 for transferring information between the various components of the device (e.g., the processor 601, memory 602, input/output interface 603, and communication interface 604);
wherein the processor 601, the memory 602, the input/output interface 603 and the communication interface 604 are communicatively coupled to each other within the device via a bus 605.
In a fourth aspect, embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the above-described tool recognition method.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments of the present application have been described in detail with reference to the accompanying drawings, but the present application is not limited to the above embodiments, and various changes can be made within the knowledge of one of ordinary skill in the art without departing from the spirit of the present application. Furthermore, embodiments of the application and features of the embodiments may be combined with each other without conflict.

Claims (10)

1. A method of identifying a tool, comprising:
acquiring a tool image of a target tool;
normalizing the cutter image;
inputting the cutter image subjected to the normalization processing into a convolutional neural network trained in advance to obtain a cutter classification vector;
inputting the cutter classification vector into a normalized exponential function to obtain a normalized array;
taking the maximum value in the normalized array as an identification confidence, and taking an index value corresponding to the maximum value in the normalized array as a mark number of the target tool when the identification confidence is larger than a preset threshold; the marking number characterizes the type of the target tool.
2. The tool recognition method according to claim 1, wherein the training step of the convolutional neural network comprises:
acquiring a cutter training image, classifying the cutter training image, and distributing the mark number to each cutter type; wherein the cutter types are in one-to-one correspondence with the mark numbers;
performing enhancement processing on the cutter training image;
carrying out normalization processing on the cutter training image subjected to the enhancement processing to obtain a training set;
inputting the training set into the initial convolutional neural network to obtain a cutter training classification vector;
inputting the cutter training classification vector into the normalized exponential function to obtain a training normalized array;
calculating to obtain a loss value according to the real distribution of the mark numbers corresponding to the cutter training image, the training normalization array and the loss function;
and updating parameters of the convolutional neural network according to the loss value.
3. The tool recognition method according to claim 2, wherein the loss function is:

Loss = -Σ_{i=0}^{c} y_i · log(ŷ_i)

wherein Loss is the loss value, y is the true distribution of the mark numbers, ŷ is the training normalized array, c is the total number of the index numbers, and i is 0, 1, …, c.
4. The tool recognition method of claim 2, wherein the inputting the training set into the initial convolutional neural network to obtain a tool training classification vector comprises:
inputting the training set into a backbone network to obtain an image training feature vector;
and inputting the image training feature vector to a convolutional network classification layer to obtain the cutter training classification vector.
5. The tool recognition method of claim 4, wherein the normalized exponential function is:

P(c=k) = exp(Z_k) / Σ_{i=0}^{c} exp(Z_i)

wherein ŷ is the training normalized array, P(c=k) is the probability of the index number of the tool training image being k, Z_i is the value of the i-th bit of the cutter training classification vector, Z_k is the value of the k-th bit of the image training feature vector, i is 0, 1, …, c; k is 0, 1, …, c; and c is the total number of the index numbers.
6. The tool recognition method according to claim 1, wherein the normalizing the tool image includes:
calculating a green channel brightness average value and a green channel brightness standard deviation of the cutter image according to the cutter image;
calculating a red channel brightness average value and a red channel brightness standard deviation of the cutter image according to the cutter image;
calculating a blue channel brightness average value and a blue channel brightness standard deviation of the cutter image according to the cutter image;
subtracting the average red channel brightness value from the red channel brightness value of each cutter image, and dividing the average red channel brightness value by the standard deviation of the red channel brightness value;
subtracting the green channel brightness average value from the green channel brightness value of each cutter image, and dividing the green channel brightness average value by the green channel brightness standard deviation;
and subtracting the average value of the brightness of the blue channel from the brightness value of the blue channel of each cutter image, and dividing the average value by the standard deviation of the brightness of the blue channel.
7. The tool recognition method according to claim 2, wherein updating the parameters of the convolutional neural network according to the loss value includes:
updating the parameters of the convolutional neural network by adopting a stochastic gradient descent algorithm.
8. A tool recognition device, comprising:
the acquisition module is used for acquiring a tool image of the target tool;
the normalization module is used for carrying out normalization processing on the cutter image;
the recognition module is used for inputting the cutter image subjected to the normalization processing into a convolutional neural network trained in advance to obtain a cutter classification vector;
the vector normalization module is used for inputting the tool classification vector into a normalization exponential function to obtain a normalization array;
the mark number determining module is used for taking the maximum value in the normalized array as the identification confidence, and taking the index value corresponding to the maximum value in the normalized array as the mark number of the target tool when the identification confidence is larger than a preset threshold.
9. An electronic device comprising a memory storing a computer program and a processor implementing the tool recognition method according to any one of claims 1 to 7 when the computer program is executed by the processor.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the tool recognition method of any one of claims 1 to 7.
CN202310815863.3A 2023-07-04 2023-07-04 Cutter identification method, cutter identification device, electronic equipment and readable storage medium Active CN116935363B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310815863.3A CN116935363B (en) 2023-07-04 2023-07-04 Cutter identification method, cutter identification device, electronic equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN116935363A true CN116935363A (en) 2023-10-24
CN116935363B CN116935363B (en) 2024-02-23

Family

ID=88376648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310815863.3A Active CN116935363B (en) 2023-07-04 2023-07-04 Cutter identification method, cutter identification device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116935363B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110598709A (en) * 2019-08-12 2019-12-20 北京智芯原动科技有限公司 Convolutional neural network training method and license plate recognition method and device
CN110796161A (en) * 2019-09-18 2020-02-14 平安科技(深圳)有限公司 Recognition model training method, recognition device, recognition equipment and recognition medium for eye ground characteristics
CN111368893A (en) * 2020-02-27 2020-07-03 Oppo广东移动通信有限公司 Image recognition method and device, electronic equipment and storage medium
WO2020221278A1 (en) * 2019-04-29 2020-11-05 北京金山云网络技术有限公司 Video classification method and model training method and apparatus thereof, and electronic device
WO2020248581A1 (en) * 2019-06-11 2020-12-17 中国科学院自动化研究所 Graph data identification method and apparatus, computer device, and storage medium
CN112990432A (en) * 2021-03-04 2021-06-18 北京金山云网络技术有限公司 Target recognition model training method and device and electronic equipment
CN114462479A (en) * 2021-12-23 2022-05-10 浙江大华技术股份有限公司 Model training method, model searching method, model, device and medium
CN115170926A (en) * 2022-09-08 2022-10-11 南京邮电大学 Lightweight target image recognition method, device and storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020221278A1 (en) * 2019-04-29 2020-11-05 北京金山云网络技术有限公司 Video classification method and model training method and apparatus thereof, and electronic device
WO2020248581A1 (en) * 2019-06-11 2020-12-17 中国科学院自动化研究所 Graph data identification method and apparatus, computer device, and storage medium
CN110598709A (en) * 2019-08-12 2019-12-20 北京智芯原动科技有限公司 Convolutional neural network training method and license plate recognition method and device
CN110796161A (en) * 2019-09-18 2020-02-14 平安科技(深圳)有限公司 Recognition model training method, recognition device, recognition equipment and recognition medium for eye ground characteristics
WO2021051519A1 (en) * 2019-09-18 2021-03-25 平安科技(深圳)有限公司 Recognition model training method and apparatus, fundus feature recognition method and apparatus, device and medium
CN111368893A (en) * 2020-02-27 2020-07-03 Oppo广东移动通信有限公司 Image recognition method and device, electronic equipment and storage medium
CN112990432A (en) * 2021-03-04 2021-06-18 北京金山云网络技术有限公司 Target recognition model training method and device and electronic equipment
CN114462479A (en) * 2021-12-23 2022-05-10 浙江大华技术股份有限公司 Model training method, model searching method, model, device and medium
CN115170926A (en) * 2022-09-08 2022-10-11 南京邮电大学 Lightweight target image recognition method, device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王晓斌;黄金杰;刘文举;: "基于优化卷积神经网络结构的交通标志识别", 计算机应用, no. 02, 10 February 2017 (2017-02-10) *
贾宇霞;樊帅昌;易晓梅;: "基于显著性增强和迁移学习的鱼类识别研究", 渔业现代化, no. 01, 15 February 2020 (2020-02-15) *

Also Published As

Publication number Publication date
CN116935363B (en) 2024-02-23

Similar Documents

Publication Publication Date Title
CN110930353B (en) Method and device for detecting state of hole site protection door, computer equipment and storage medium
CN107622274B (en) Neural network training method and device for image processing and computer equipment
CN106845890B (en) Storage monitoring method and device based on video monitoring
CN107832780B (en) Artificial intelligence-based wood board sorting low-confidence sample processing method and system
CN113095438B (en) Wafer defect classification method, device and system thereof, electronic equipment and storage medium
JP2018182724A5 (en)
CN108701224A (en) Visual vehicle parking take sensor
CN108460346B (en) Fingerprint identification method and device
CN113077462B (en) Wafer defect classification method, device, system, electronic equipment and storage medium
CN110705531B (en) Missing character detection and missing character detection model establishing method and device
KR20200045023A (en) Method and apparatus for recognizing vehicle number based on learning model
CN114553591B (en) Training method of random forest model, abnormal flow detection method and device
CN110569693B (en) Vehicle body color recognition method and device
US20210213615A1 (en) Method and system for performing image classification for object recognition
CN113033451A (en) Overhead line fault identification method and system based on deep learning
CN116935363B (en) Cutter identification method, cutter identification device, electronic equipment and readable storage medium
CN114581419A (en) Transformer insulating sleeve defect detection method, related equipment and readable storage medium
CN108900895B (en) Method and device for shielding target area of video stream
CN112232295B (en) Method and device for confirming newly-added target ship and electronic equipment
CN114463656A (en) Detection model training method, device, equipment and storage medium
CN116645337A (en) Multi-production-line ceramic defect detection method and system based on federal learning
CN115526859A (en) Method for identifying production defects, distributed processing platform, equipment and storage medium
CN114972540A (en) Target positioning method and device, electronic equipment and storage medium
CN109559450A (en) A kind of Intelligent cargo cabinet management system
CN110634120A (en) Vehicle damage judgment method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant