CN108363942B - Cutter identification method, device and equipment based on multi-feature fusion - Google Patents

Info

Publication number
CN108363942B
CN108363942B (application number CN201711430107.XA; application publication CN108363942A)
Authority
CN
China
Prior art keywords
cutter
image
tool
structural features
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711430107.XA
Other languages
Chinese (zh)
Other versions
CN108363942A (en)
Inventor
彭莉 (Peng Li)
刘丹 (Liu Dan)
刘洋 (Liu Yang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ennew Digital Technology Co Ltd
Original Assignee
Ennew Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ennew Digital Technology Co Ltd filed Critical Ennew Digital Technology Co Ltd
Priority application: CN201711430107.XA
Application publication: CN108363942A
Application granted; granted publication: CN108363942B
Legal status: Active

Classifications

    • G06V 20/10 — Scenes; Scene-specific elements: Terrestrial scenes
    • G06F 18/214 — Pattern recognition: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 — Pattern recognition: Matching criteria, e.g. proximity measures
    • G06F 18/253 — Pattern recognition: Fusion techniques of extracted features
    • G06N 3/045 — Neural networks: Combinations of networks
    • G06V 10/757 — Image or video recognition: Matching configurations of points or features

Abstract

The application discloses a tool identification method, apparatus, and device based on multi-feature fusion. The scheme comprises the following steps: identifying an image containing a tool to obtain a tool image to be identified; roughly classifying the tool image and extracting structural features; extracting non-structural features of the tool image according to the extracted structural features; comparing the non-structural features with the non-structural features corresponding to preset tool sample images to obtain a similarity comparison result; determining the tool fine category corresponding to the tool image using the similarity comparison result; and identifying the properties of the tool in the tool image according to the fine category. With this scheme, tools can be identified more reliably.

Description

Cutter identification method, device and equipment based on multi-feature fusion
Technical Field
The present application relates to the field of image recognition, and in particular, to a method, an apparatus, and a device for tool recognition based on multi-feature fusion.
Background
In order to improve the safety of transportation modes such as airplanes and subways, security inspection is usually performed on passengers, and tools such as knives are among the primary objects to be detected during inspection.
In conventional security inspection, tool identification is mainly performed manually by security personnel. This process places high demands on the personnel's knowledge of tool types, and the strong subjectivity of manual judgment easily leads to erroneous identification.
Based on this, a more reliable tool identification scheme is needed.
Disclosure of Invention
Some embodiments of the present application provide a method, an apparatus, and a device for tool identification based on multi-feature fusion, so as to address the technical problem in the prior art that a more reliable tool identification scheme is needed.
Some embodiments of the present application employ the following technical solutions:
a tool identification method based on multi-feature fusion comprises the following steps:
identifying the image with the cutter to obtain a cutter image to be identified;
roughly classifying the cutter image and extracting structural features;
extracting non-structural features of the tool image according to the extracted structural features of the tool image;
carrying out similarity comparison on the non-structural features and non-structural features corresponding to a preset cutter sample image to obtain a similarity comparison result;
determining the cutter detail category corresponding to the cutter image by using the similarity comparison result;
and identifying the properties of the cutter in the cutter image according to the cutter detail classification.
Optionally, the roughly classifying the tool image and extracting the structural features specifically include:
roughly classifying the cutter image and extracting structural features by using a machine learning model trained on the basis of a first labeled sample;
the first labeling sample is a cutter image, and the labeled content comprises the following corresponding contents: tool coarse category, tool structure information.
Optionally, the tool structure information includes: tool size information, and/or positional relationship information of tool components.
Optionally, training the machine learning model based on the first labeled sample specifically includes:
and performing class classification training and structural feature regression training on the machine learning model in a multi-task training mode based on the first labeled sample.
Optionally, the extracting, according to the extracted structural feature of the tool image, a non-structural feature of the tool image specifically includes:
extracting non-structural features of the tool image according to the extracted structural features of the tool image by using a machine learning model trained based on a second labeled sample;
and the second labeled sample is a cutter image, and the labeled content comprises corresponding cutter detail categories.
Optionally, the machine learning model trained based on the second labeled sample comprises a convolutional neural network.
Optionally, the non-structural features comprise global non-structural features and local non-structural features;
the extracting of the non-structural feature of the tool image according to the extracted structural feature of the tool image specifically includes:
inputting the cutter image into the convolutional neural network for processing to obtain a classification feature output by the last convolutional layer as a global non-structural feature of the cutter image;
and extracting local non-structural features of the tool image according to the global non-structural features of the tool image and the extracted structural features of the tool image.
A tool recognition device based on multi-feature fusion comprises:
the acquisition module is used for identifying the image with the cutter to obtain a cutter image to be identified;
the rough classification extraction module is used for roughly classifying the cutter image and extracting structural features;
the second extraction module is used for extracting non-structural features of the tool image according to the extracted structural features of the tool image;
the comparison module is used for carrying out similarity comparison on the non-structural features and the non-structural features corresponding to the preset cutter sample image to obtain a similarity comparison result;
a fine classification module, which determines the cutter fine classification corresponding to the cutter image by using the similarity comparison result;
and the identification module is used for identifying the properties of the cutter in the cutter image according to the cutter detail classification.
Optionally, the rough classification extraction module performs rough classification on the tool image and extracts structural features, and specifically includes:
the rough classification extraction module is used for roughly classifying the cutter image and extracting structural features by utilizing a machine learning model trained on the basis of a first labeled sample;
the first labeling sample is a cutter image, and the labeled content comprises the following corresponding contents: tool coarse category, tool structure information.
A tool recognition apparatus based on multi-feature fusion, comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
identifying the image with the cutter to obtain a cutter image to be identified;
roughly classifying the cutter image and extracting structural features;
extracting non-structural features of the tool image according to the extracted structural features of the tool image;
carrying out similarity comparison on the non-structural features and non-structural features corresponding to a preset cutter sample image to obtain a similarity comparison result;
determining the cutter detail category corresponding to the cutter image by using the similarity comparison result;
and identifying the properties of the cutter in the cutter image according to the cutter detail classification.
The above-mentioned at least one technical scheme that some embodiments of this application adopt can reach following beneficial effect: the tool can be more reliably identified through the identification mode that the structural features and the non-structural features of the tool image are fused.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flow chart of a tool identification method based on multi-feature fusion according to some embodiments of the present application;
FIG. 2 is a detailed flow chart of the preparatory work preceding tool identification provided by some embodiments of the present application;
FIG. 3 is a schematic structural diagram of a tool recognition device based on multi-feature fusion according to some embodiments of the present application;
fig. 4 is a schematic structural diagram of a tool recognition device based on multi-feature fusion according to some embodiments of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a schematic flowchart of a tool identification method based on multi-feature fusion according to some embodiments of the present application. In this flow, from the perspective of the device, the execution subject may be a security check-related device, and of course, if the method is used in a scenario other than security check, the execution subject may be another corresponding server or terminal, which is not specifically limited in this application.
In addition, from the viewpoint of programs, the execution subject of some steps in the embodiments of the present application may be a program installed in the above-described apparatus. The program may be in the form of a client, a web page, or the like, and this application is not limited to this.
The process in fig. 1 may include the following steps:
s102: and identifying the image with the tool to obtain the image of the tool to be identified.
In some embodiments of the present application, the raw images may include images acquired by a general image pickup device such as a camera, a monitoring camera, or the like, or images acquired by a special scanning device in a security inspection device such as a security inspection machine. From these raw images, the image with the tool can be identified, and further the tool in the tool image can be identified for its properties, to determine whether the relevant specifications are met, whether a regulatory action is required, etc.
S104: and carrying out rough classification on the cutter image and extracting structural features.
In some embodiments of the present application, step S104 may be implemented based on at least one machine learning model trained in advance. For example, a machine learning model is used for roughly classifying the tool image and extracting the structural features; or, roughly classifying the tool image by one machine learning model, and extracting structural features of the tool image by the other machine learning model; and so on.
In some embodiments of the present application, the tool coarse categories are divided in advance according to the properties of the tools, for example into a dagger type, a kitchen knife type, a watermelon knife type, and so on; the manner of division is not unique. Generally, the tool structures corresponding to different coarse categories differ noticeably and are easier to distinguish by appearance.
In some embodiments of the present application, the structural feature may reflect structural information such as tool dimensions (e.g., tool aspect ratio, blade length, tool width, etc.), positional relationships of tool components (e.g., positional relationships of tool shank to blade, etc.), tool tip angle, etc.
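The structural quantities listed above can be made concrete with a short sketch. The patent does not specify how they are computed, so the inputs (a bounding box and two blade keypoints) and every name below are purely illustrative assumptions:

```python
import math

def structural_features(box, tip, heel):
    """Hypothetical structural-feature computation from a tool bounding
    box (x, y, w, h) and two blade keypoints (tip and heel).

    Returns the aspect ratio, blade length, and tip angle in degrees --
    examples of the tool size information mentioned above."""
    x, y, w, h = box
    aspect_ratio = w / h                      # tool length-width ratio
    blade_len = math.hypot(tip[0] - heel[0], tip[1] - heel[1])
    # angle of the tip relative to the horizontal (illustrative only)
    tip_angle = math.degrees(math.atan2(abs(tip[1] - heel[1]),
                                        abs(tip[0] - heel[0])))
    return {"aspect_ratio": aspect_ratio,
            "blade_length": blade_len,
            "tip_angle": tip_angle}

feats = structural_features(box=(0, 0, 200, 50), tip=(200, 25), heel=(80, 25))
```

Such a feature vector would accompany (not replace) the coarse-category output of the model.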
S106: and extracting non-structural features of the tool image according to the extracted structural features of the tool image.
In some embodiments of the present application, step S106 may be implemented based on at least one machine learning model trained in advance, for example a machine learning model based on a convolutional neural network. The non-structural features include global non-structural features and/or local non-structural features, which may reflect non-structural appearance characteristics of the tool such as texture, color, and style, or abstract features obtained by mapping these appearance characteristics. Compared with the structural features, the non-structural features serve to further distinguish different tools within the same tool coarse category.
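As an illustrative aside, one common way to turn the last convolutional layer's feature map into a single global descriptor vector is spatial average pooling; the patent does not name a pooling method, so this choice is an assumption:

```python
import numpy as np

def global_feature(conv_map):
    """Collapse a (channels, height, width) feature map from the last
    convolutional layer into one global descriptor vector by averaging
    over the spatial dimensions (an assumed, commonly used pooling)."""
    return conv_map.mean(axis=(1, 2))

# toy feature map: 4 channels over an 8x8 spatial grid
fmap = np.arange(4 * 8 * 8, dtype=float).reshape(4, 8, 8)
g = global_feature(fmap)
```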
S108: and carrying out similarity comparison on the non-structural features and the non-structural features corresponding to the preset cutter sample image to obtain a similarity comparison result.
S110: and determining the corresponding tool detail classification of the tool image by using the similarity comparison result.
In some embodiments of the present application, the tool sample images used for comparison correspond to the fine categories contained in the coarse category that the rough classification has determined.
In some embodiments of the present application, tool fine categories are divided in advance within each tool coarse category; for example, identical tools may be classified into the same fine category. The manner of division is not unique, but the granularity of the fine classification is at least smaller than that of the coarse classification.
Further, corresponding tool sample images (for example, one or more tool sample images and the like) may be prepared for the tool detail categories, and the non-structural features of the tool sample images are extracted according to the schemes in steps S104 and S106 for comparison in subsequent tool identification.
In some embodiments of the present application, the coarse category of the tool corresponding to the tool image is already determined in step S104, and the fine category is further determined by comparing the similarity of the non-structural features. The specific manner of similarity comparison is not limited in this application; for example, the comparison may be based on vector distance, or on vector cosine computation, and the like. The fine category corresponding to the tool sample image whose similarity is the highest and exceeds a set threshold can be determined as the fine category corresponding to the tool image, so that the tool contained in the tool image is identified. If no similarity exceeds the set threshold, the identification may be considered to have failed.
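The cosine-based comparison described above can be sketched as follows; the library layout, threshold value, and category names are hypothetical:

```python
import numpy as np

def match_fine_category(query, library, threshold=0.8):
    """Compare a query non-structural feature vector against per-fine-
    category sample features by cosine similarity (one of the comparison
    manners the text mentions).  Returns the best-matching fine category,
    or None when no similarity exceeds the threshold (identification
    is considered to have failed)."""
    best_cat, best_sim = None, threshold
    for category, samples in library.items():
        for s in samples:
            sim = float(np.dot(query, s) /
                        (np.linalg.norm(query) * np.linalg.norm(s)))
            if sim > best_sim:
                best_cat, best_sim = category, sim
    return best_cat

library = {
    "dagger_model_A": [np.array([1.0, 0.0, 0.0])],
    "dagger_model_B": [np.array([0.0, 1.0, 0.0])],
}
result = match_fine_category(np.array([0.9, 0.1, 0.0]), library)
```

In practice the library would hold features of one or more sample images per fine category, as described later for the sample image library.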
S112: and identifying the properties of the cutter in the cutter image according to the cutter detail classification.
In some embodiments of the present application, which properties are specifically identified depends on the actual scenario. In a security inspection scenario, the properties generally refer to whether the tool is a controlled tool or a non-controlled tool, whether it is a real tool or a toy tool, and the like.
According to the method of FIG. 1, the tool can be identified more reliably because the identification fuses the structural features and the non-structural features of the tool image.
Based on the method of fig. 1, some embodiments of the present application also provide some specific embodiments of the method, and further embodiments, which are explained below.
In some embodiments of the present application, for step S104, the roughly classifying the tool image and extracting the structural features specifically may include: roughly classifying the cutter image and extracting structural features by using a machine learning model trained on the basis of a first labeled sample; the first labeling sample is a cutter image, and the labeled content comprises the following corresponding contents: tool coarse category, tool structure information. The tool configuration information includes, for example: tool size information, and/or positional relationship information of tool components, and the like.
As mentioned above, the rough classification and the structural feature extraction can be implemented by using one machine learning model, or can be implemented by using different machine learning models respectively. For the former case, the machine learning model can be subjected to class classification training and structural feature regression training in a multi-task training mode, and the multi-task training mode is beneficial to more accurate classification.
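A minimal sketch of such a multi-task objective is given below. The patent only states that category classification and structural-feature regression are trained jointly; the specific loss forms (cross-entropy plus weighted mean squared error) and the weighting are assumptions:

```python
import numpy as np

def multitask_loss(cls_logits, cls_label, reg_pred, reg_target, weight=1.0):
    """Combined objective for the multi-task training described above:
    softmax cross-entropy for the coarse-category classification branch
    plus weighted mean squared error for the structural-feature
    regression branch (e.g. aspect ratio, blade length)."""
    # numerically stable softmax cross-entropy
    z = cls_logits - cls_logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    ce = -log_probs[cls_label]
    # mean squared error over the regressed structural features
    mse = float(np.mean((reg_pred - reg_target) ** 2))
    return float(ce + weight * mse)

loss = multitask_loss(np.array([2.0, 0.0, 0.0]), 0,
                      np.array([4.0]), np.array([4.0]))
```

Training both heads on shared features is what makes the multi-task setup beneficial to classification accuracy.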
Similarly, in some embodiments of the application, for step S106, the extracting the non-structural feature of the tool image according to the extracted structural feature of the tool image specifically may include: extracting non-structural features of the tool image according to the extracted structural features of the tool image by using a machine learning model trained based on a second labeled sample; and the second labeled sample is a cutter image, and the labeled content comprises corresponding cutter detail categories.
In practical application, the second labeled sample may use the same tool images as the first labeled sample, with different labeled content, so that different machine learning models whose functions match the respective labeled content can be trained.
In some embodiments of the present application, the machine learning model trained based on the second labeled sample comprises a convolutional neural network. The convolutional neural network can extract local non-structural features through its convolutional layers, aggregate them layer by layer, and output global non-structural features at the last convolutional layer. Local non-structural features of key parts can be selected for identification according to identification needs.
For example, for step S106, the extracting non-structural features of the tool image according to the extracted structural features of the tool image may specifically include: inputting the cutter image into the convolutional neural network for processing to obtain a classification feature output by the last convolutional layer as a global non-structural feature of the cutter image; and extracting the local non-structural features according to the global non-structural features and the extracted structural features of the tool image. For example, the local non-structural features of the tool holder and/or the tool body included in the tool image are extracted according to the global non-structural features and the information of the position relationship between the tool holder and the tool body determined according to the extracted structural features.
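The idea of locating a component's local feature within the feature map can be sketched with plain array slicing; the region coordinates standing in for the handle/blade positional relationship derived from the structural features are hypothetical:

```python
import numpy as np

def local_feature(conv_map, region):
    """Extract a local non-structural feature for one tool component
    (e.g. the handle or the blade) by pooling the sub-area of the last
    convolutional feature map that the structural features locate.
    `region` is (row0, row1, col0, col1) in feature-map coordinates,
    assumed to come from the handle/blade positional relationship."""
    r0, r1, c0, c1 = region
    return conv_map[:, r0:r1, c0:c1].mean(axis=(1, 2))

fmap = np.ones((4, 8, 8))
fmap[:, :, 4:] = 2.0          # pretend the blade occupies the right half
blade = local_feature(fmap, (0, 8, 4, 8))
handle = local_feature(fmap, (0, 8, 0, 4))
```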
As can be seen from the above description, before identifying tools, a certain amount of preparatory work typically needs to be done, such as labeling samples, training models, and collecting tool sample images. Some embodiments of the present application provide a detailed flow chart of this preparatory work, as shown in fig. 2. It should be noted that the flow of fig. 2 is exemplary rather than limiting; according to the above description, the preparatory work may follow more than one detailed flow.
The process in fig. 2 mainly comprises the following steps:
step 1: marking the tool image sample, wherein the marked content comprises: tool coarse category and structure information; the rough classification of the cutters comprises: dagger type, kitchen knife type, watermelon knife type, etc.; the structure information includes: length-width ratio of the cutter, positional relationship information of the cutter handle and the cutter body, and the like; the labeling mode may be manual labeling or image feature ID as a label, and this application is not limited in particular.
Step 2: performing class classification training and structural feature regression training on the machine learning model by using the labeled sample in the step 1 and adopting a multi-task training mode;
Step 3: re-labeling the tool image samples according to a tool fine classification standard, for example, classifying identical tools into the same fine category;
Step 4: using the labeled samples from step 3 to perform classification training with a convolutional neural network, wherein the classification feature output by the last convolutional layer of the trained network is used as the global non-structural feature, and the structural information from step 1 can further be used to obtain local non-structural features;
Step 5: establishing a tool sample image library that stores, for each tool fine category, one or more tool sample images together with their non-structural features, for use in similarity comparison when identifying tool images.
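The library-building step amounts to grouping extracted features by fine category. A minimal sketch, with `extract` standing in for the (hypothetical) feature pipeline of steps S104/S106:

```python
def build_sample_library(samples, extract):
    """Group extracted non-structural features by tool fine category so
    that later queries can be compared against them by similarity.
    `samples` is a list of (fine_category, image) pairs; `extract` is
    the feature extractor (both names are illustrative)."""
    library = {}
    for fine_category, image in samples:
        library.setdefault(fine_category, []).append(extract(image))
    return library

lib = build_sample_library(
    [("dagger_model_A", "img1"), ("dagger_model_A", "img2"),
     ("kitchen_knife_B", "img3")],
    extract=lambda img: [len(img)])   # placeholder feature extractor
```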
Based on the same idea, some embodiments of the present application also provide an apparatus, a device and a non-volatile computer storage medium corresponding to the method of fig. 1.
Fig. 3 is a schematic structural diagram of a tool identification device based on multi-feature fusion according to some embodiments provided in the present application, the device including:
the acquisition module 301 identifies an image with a tool to obtain a tool image to be identified;
a rough classification extraction module 302, which performs rough classification on the tool images and extracts structural features;
a second extraction module 303, configured to extract non-structural features of the tool image according to the extracted structural features of the tool image;
the comparison module 304 is used for carrying out similarity comparison on the non-structural features and the non-structural features corresponding to the preset cutter sample image to obtain a similarity comparison result;
a fine classification module 305, which determines a tool fine classification corresponding to the tool image by using the similarity comparison result;
and the identifying module 306 is used for identifying the properties of the cutter in the cutter image according to the cutter detail classification.
Optionally, the rough classification extraction module 302 performs rough classification on the tool image and extracts structural features, and specifically includes:
the rough classification extraction module 302 performs rough classification on the tool image and extracts structural features by using a machine learning model trained based on a first labeled sample;
the first labeling sample is a cutter image, and the labeled content comprises the following corresponding contents: tool coarse category, tool structure information.
Optionally, the tool structure information includes: tool size information, and/or positional relationship information of tool components.
Optionally, training the machine learning model based on the first labeled sample specifically includes:
and performing class classification training and structural feature regression training on the machine learning model in a multi-task training mode based on the first labeled sample.
Optionally, the second extracting module 303 extracts a non-structural feature of the tool image according to the extracted structural feature of the tool image, and specifically includes:
the second extraction module 303 extracts the non-structural feature of the tool image according to the extracted structural feature of the tool image by using a machine learning model trained based on a second labeled sample;
and the second labeled sample is a cutter image, and the labeled content comprises corresponding cutter detail categories.
Optionally, the machine learning model trained based on the second labeled sample comprises a convolutional neural network.
Optionally, the non-structural features comprise global non-structural features and local non-structural features;
the second extraction module 303 extracts the non-structural feature of the tool image according to the extracted structural feature of the tool image, and specifically includes:
the second extraction module 303 inputs the tool image into the convolutional neural network for processing, so as to obtain a classification feature output by the last convolutional layer, which is used as a global non-structural feature of the tool image;
and extracting local non-structural features of the tool image according to the global non-structural features of the tool image and the extracted structural features of the tool image.
Fig. 4 is a schematic structural diagram of a tool identification device based on multi-feature fusion according to some embodiments of the present application, where the device includes:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
identifying the image with the cutter to obtain a cutter image to be identified;
roughly classifying the cutter image and extracting structural features;
extracting non-structural features of the tool image according to the extracted structural features of the tool image;
carrying out similarity comparison on the non-structural features and non-structural features corresponding to a preset cutter sample image to obtain a similarity comparison result;
determining the cutter detail category corresponding to the cutter image by using the similarity comparison result;
and identifying the properties of the cutter in the cutter image according to the cutter detail classification.
Some embodiments of the present application provide a non-volatile computer storage medium for tool identification based on multi-feature fusion, storing computer-executable instructions configured to:
identifying the image with the cutter to obtain a cutter image to be identified;
roughly classifying the cutter image and extracting structural features;
extracting non-structural features of the tool image according to the extracted structural features of the tool image;
carrying out similarity comparison on the non-structural features and non-structural features corresponding to a preset cutter sample image to obtain a similarity comparison result;
determining the cutter detail category corresponding to the cutter image by using the similarity comparison result;
and identifying the properties of the cutter in the cutter image according to the cutter detail classification.
The embodiments of the present application are described in a progressive manner; the same or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the device and medium embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, reference may be made to the description of the method embodiments.
The device and the medium provided by the embodiments of the present application correspond one-to-one to the method, and therefore offer beneficial technical effects similar to those of the corresponding method. Since the beneficial technical effects of the method have been described in detail above, they are not repeated here for the device and the medium.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A tool identification method based on multi-feature fusion is characterized by comprising the following steps:
detecting a tool in an image containing the tool to obtain a tool image to be identified;
coarsely classifying the tool image and extracting structural features;
extracting non-structural features of the tool image according to the extracted structural features of the tool image;
comparing the non-structural features with non-structural features corresponding to preset tool sample images to obtain a similarity comparison result, wherein the compared tool sample images correspond to tool fine categories contained in the tool coarse category determined by the coarse classification;
determining the tool fine category corresponding to the tool image according to the similarity comparison result;
and identifying the properties of the tool in the tool image according to the tool fine category.
2. The method of claim 1, wherein coarsely classifying the tool image and extracting structural features specifically comprises:
coarsely classifying the tool image and extracting structural features using a machine learning model trained on first labeled samples;
wherein each first labeled sample is a tool image whose labels comprise the corresponding tool coarse category and tool structure information.
3. The method of claim 2, wherein the tool structure information comprises: tool size information, and/or positional relationship information of tool components.
4. The method of claim 2, wherein training the machine learning model based on the first labeled samples comprises:
performing category classification training and structural-feature regression training on the machine learning model in a multi-task manner based on the first labeled samples.
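Outside the claim language, the multi-task objective of claim 4 can be illustrated with a toy weighted-loss sketch. The particular loss forms (cross-entropy for the category head, mean squared error for the structural-feature head) and the balancing weight `alpha` are hypothetical illustrations, not prescribed by this disclosure:

```python
import math

def softmax(logits):
    """Convert raw classification logits to probabilities."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classification_loss(logits, true_index):
    """Cross-entropy for the coarse-category classification task."""
    probs = softmax(logits)
    return -math.log(probs[true_index])

def regression_loss(pred, target):
    """Mean squared error for the structural-feature regression task
    (e.g. tool size, positions of tool components)."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def multitask_loss(logits, true_index, struct_pred, struct_true, alpha=0.5):
    """Weighted sum of the two task losses: one backward pass on this
    combined loss trains both heads of the shared model at once."""
    return (alpha * classification_loss(logits, true_index)
            + (1 - alpha) * regression_loss(struct_pred, struct_true))
```

The design point is that both tasks share one feature extractor, so minimizing the combined loss forces the learned features to carry both category and structure information.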
5. The method according to claim 1, wherein extracting non-structural features of the tool image according to the extracted structural features of the tool image specifically comprises:
extracting non-structural features of the tool image according to the extracted structural features of the tool image using a machine learning model trained on second labeled samples;
wherein each second labeled sample is a tool image whose labels comprise the corresponding tool fine category.
6. The method of claim 5, wherein the machine learning model trained on the second labeled samples comprises a convolutional neural network.
7. The method of claim 6, wherein the non-structural features comprise global non-structural features and local non-structural features;
and wherein extracting the non-structural features of the tool image according to the extracted structural features of the tool image specifically comprises:
inputting the tool image into the convolutional neural network for processing, and taking the classification features output by the last convolutional layer as the global non-structural features of the tool image;
and extracting the local non-structural features of the tool image according to the global non-structural features of the tool image and the extracted structural features of the tool image.
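The two feature types of claim 7 can be illustrated with a minimal sketch. The feature map, the region box, and the use of average pooling are hypothetical stand-ins: a real implementation would take the map from an actual CNN's last convolutional layer and locate the region from the extracted structural features.

```python
def global_feature(feature_map):
    """Global non-structural feature: pool over the whole map.
    Here feature_map is a list of rows of floats, standing in for the
    last convolutional layer's output."""
    values = [v for row in feature_map for v in row]
    return sum(values) / len(values)

def local_feature(feature_map, box):
    """Local non-structural feature: pool only the region located by
    the structural features; box = (top, left, bottom, right), e.g. a
    blade area inferred from tool structure information."""
    top, left, bottom, right = box
    values = [v for row in feature_map[top:bottom] for v in row[left:right]]
    return sum(values) / len(values)

# Toy 3x4 feature map with a high-activation "blade" region at top right.
feature_map = [
    [0.1, 0.2, 0.9, 0.8],
    [0.1, 0.3, 0.9, 0.7],
    [0.0, 0.1, 0.2, 0.1],
]
g = global_feature(feature_map)                # one global descriptor
l = local_feature(feature_map, (0, 2, 2, 4))   # descriptor of the region
```

The local descriptor is larger than the global one here precisely because the structural features point at the most discriminative part of the tool, which is the motivation for fusing both.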
8. A tool identification device based on multi-feature fusion, characterized by comprising:
an acquisition module, configured to detect a tool in an image containing the tool to obtain a tool image to be identified;
a coarse classification and extraction module, configured to coarsely classify the tool image and extract structural features;
a second extraction module, configured to extract non-structural features of the tool image according to the extracted structural features of the tool image;
a comparison module, configured to compare the non-structural features with non-structural features corresponding to preset tool sample images to obtain a similarity comparison result, wherein the compared tool sample images correspond to tool fine categories contained in the tool coarse category determined by the coarse classification;
a fine classification module, configured to determine the tool fine category corresponding to the tool image according to the similarity comparison result;
and an identification module, configured to identify the properties of the tool in the tool image according to the tool fine category.
9. The device of claim 8, wherein the coarse classification and extraction module coarsely classifies the tool image and extracts structural features by:
coarsely classifying the tool image to be identified and extracting structural features using a machine learning model trained on first labeled samples;
wherein each first labeled sample is a tool image whose labels comprise the corresponding tool coarse category and tool structure information.
10. A tool identification device based on multi-feature fusion, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
detect a tool in an image containing the tool to obtain a tool image to be identified;
coarsely classify the tool image and extract structural features;
extract non-structural features of the tool image according to the extracted structural features of the tool image;
compare the non-structural features with non-structural features corresponding to preset tool sample images to obtain a similarity comparison result, wherein the compared tool sample images correspond to tool fine categories contained in the tool coarse category determined by the coarse classification;
determine the tool fine category corresponding to the tool image according to the similarity comparison result;
and identify the properties of the tool in the tool image according to the tool fine category.
CN201711430107.XA 2017-12-26 2017-12-26 Cutter identification method, device and equipment based on multi-feature fusion Active CN108363942B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711430107.XA CN108363942B (en) 2017-12-26 2017-12-26 Cutter identification method, device and equipment based on multi-feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711430107.XA CN108363942B (en) 2017-12-26 2017-12-26 Cutter identification method, device and equipment based on multi-feature fusion

Publications (2)

Publication Number Publication Date
CN108363942A CN108363942A (en) 2018-08-03
CN108363942B true CN108363942B (en) 2020-09-25

Family

ID=63010216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711430107.XA Active CN108363942B (en) 2017-12-26 2017-12-26 Cutter identification method, device and equipment based on multi-feature fusion

Country Status (1)

Country Link
CN (1) CN108363942B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109241948A (en) * 2018-10-18 2019-01-18 杜海朋 A kind of NC cutting tool visual identity method and device
CN110201899B (en) * 2019-04-19 2021-11-23 深圳市金洲精工科技股份有限公司 Method and device for sorting hard alloy cutters
CN113744141B (en) * 2020-11-19 2024-04-16 北京京东乾石科技有限公司 Image enhancement method and device and automatic driving control method and device
CN116475815B (en) * 2023-05-17 2023-12-15 广州里工实业有限公司 Automatic tool changing method, system and device of numerical control machine tool and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102708372A (en) * 2012-02-29 2012-10-03 北京无线电计量测试研究所 Automatic detection and identification method for hidden articles
CN106312692A (en) * 2016-11-02 2017-01-11 哈尔滨理工大学 Tool wear detection method based on minimum enclosing rectangle

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7606417B2 (en) * 2004-08-16 2009-10-20 Fotonation Vision Limited Foreground/background segmentation in digital images with differential exposure calculations
US8694443B2 (en) * 2008-11-03 2014-04-08 International Business Machines Corporation System and method for automatically distinguishing between customers and in-store employees
CN101783076B (en) * 2010-02-04 2012-06-13 西安理工大学 Method for quick vehicle type recognition under video monitoring mode
CN103413141B (en) * 2013-07-29 2017-02-22 西北工业大学 Ring illuminator and fusion recognition method utilizing ring illuminator illumination based on shape, grain and weight of tool
CN105869170A (en) * 2016-04-13 2016-08-17 宿迁学院 Identification and classification method for workpiece surface texture image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102708372A (en) * 2012-02-29 2012-10-03 北京无线电计量测试研究所 Automatic detection and identification method for hidden articles
CN106312692A (en) * 2016-11-02 2017-01-11 哈尔滨理工大学 Tool wear detection method based on minimum enclosing rectangle

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Somatilake, S., et al. An image-based food classification system. In Image and Vision Computing New Zealand. 2007. *
A new method for identifying cutting tools, measuring tools and auxiliary tools — a classification code system for machining cutting tools, measuring tools and auxiliary tools; Wang Zhibo, et al.; Ordnance Industry Standardization; 2000-08-15; full text *
Research and application of tool image retrieval based on geometric features; Li Xinyan; China Masters' Theses Full-text Database; 2016-12-31; full text *

Also Published As

Publication number Publication date
CN108363942A (en) 2018-08-03

Similar Documents

Publication Publication Date Title
CN108363942B (en) Cutter identification method, device and equipment based on multi-feature fusion
CN107358596B (en) Vehicle loss assessment method and device based on image, electronic equipment and system
Hoang An Artificial Intelligence Method for Asphalt Pavement Pothole Detection Using Least Squares Support Vector Machine and Neural Network with Steerable Filter‐Based Feature Extraction
CN112700408B (en) Model training method, image quality evaluation method and device
CN110781768A (en) Target object detection method and device, electronic device and medium
CN110533018B (en) Image classification method and device
CN108108731B (en) Text detection method and device based on synthetic data
JP2018136926A (en) Method and system for container code recognition
Fernandes et al. Pavement pathologies classification using graph-based features
CN106681854B (en) Information verification method, device and system
RU2018145499A (en) AUTOMATION OF PERFORMANCE CHECK
CN111931727A (en) Point cloud data labeling method and device, electronic equipment and storage medium
RU2003108433A (en) METHOD FOR PRE-PROCESSING THE MACHINE READABLE FORM IMAGE
CN111353491B (en) Text direction determining method, device, equipment and storage medium
CN111652266A (en) User interface component identification method and device, electronic equipment and storage medium
CN110909363A (en) Software third-party component vulnerability emergency response system and method based on big data
US20140233841A1 (en) Method of chekcing the appearance of the surface of a tyre
CN107515849A (en) It is a kind of into word judgment model generating method, new word discovery method and device
CN103106346A (en) Character prediction system based on off-line writing picture division and identification
CN110728193B (en) Method and device for detecting richness characteristics of face image
CN112232368A (en) Target recognition model training method, target recognition method and related device thereof
WO2024179207A1 (en) Road object recognition method and apparatus
CN108573244A (en) A kind of vehicle checking method, apparatus and system
Nguyen et al. An innovative and automated method for characterizing wood defects on trunk surfaces using high-density 3D terrestrial LiDAR data
CN112347131B (en) Urban rail project demand identification and coverage method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant