CN113935997B - Image processing method, storage medium and image processing device for detecting material - Google Patents


Info

Publication number
CN113935997B
CN113935997B (application CN202111540585.2A)
Authority
CN
China
Prior art keywords
image
feature
scrap
matched
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111540585.2A
Other languages
Chinese (zh)
Other versions
CN113935997A (en)
Inventor
孙军欢 (Sun Junhuan)
张春海 (Zhang Chunhai)
冀旭 (Ji Xu)
Current Assignee
Shenzhen Zhixing Technology Co Ltd
Original Assignee
Shenzhen Zhixing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Zhixing Technology Co Ltd
Priority to CN202111540585.2A
Publication of CN113935997A
Application granted
Publication of CN113935997B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks


Abstract

The present application relates to an image processing method, a storage medium, and an image processing apparatus for material detection. The method comprises the following steps: obtaining a first image and a second image and inputting them into a feature extraction network based on SIFT feature detection, thereby obtaining at least one feature key point characterizing scale invariance between the first image and the second image, together with a corresponding feature vector; calculating the degree of matching between the first image and the second image from the feature vectors, setting a distance threshold according to that degree, and screening matched feature points from the feature key points according to the distance threshold, where the distance between each matched feature point and at least one other feature key point is smaller than the distance threshold; determining an invariant region of the first image relative to the second image from the matched feature points; and determining a changed region of the first image based on the invariant region. Identification errors are thereby reduced and the detection effect is improved.

Description

Image processing method, storage medium and image processing device for detecting material
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to an image processing method, a storage medium, and an image processing apparatus for material detection.
Background
With the development of artificial intelligence and computer vision, neural network models based on deep learning are widely used in applications such as face recognition and identity verification to analyse acquired images or videos, improving production efficiency while ensuring safety and objectivity. In industrial settings, such models can be used for automatic detection: in scrap steel recycling, for example, large piles of accumulated scrap must be inspected. These piles can be regarded as a collection of scrap pieces, also called a scrap piece set, each of which may be handled individually. Traditionally, inspection is performed manually by field workers, who determine the type, size, area, weight, price, and so on of the different scrap pieces. Automating this inspection with a deep-learning model can greatly improve production efficiency and ensure operational safety. Similarly, in logistics centers and port transportation, such models can automatically inspect goods stacked for sorting or parked awaiting transport; these, too, can be regarded as a set of cargo pieces, each of which may be transported separately.
However, scrap pieces or cargo pieces stacked together often occlude or even completely cover one another and have similar shapes, which makes accurate detection challenging. An image processing method, a storage medium, and an image processing apparatus are therefore needed that achieve accurate detection and suit material-inspection tasks such as scrap steel recycling and cargo sorting and handling.
Disclosure of Invention
In a first aspect, an embodiment of the present application provides an image processing method comprising the following steps: obtaining a first image and a second image; inputting the first image and the second image into a feature extraction network based on SIFT feature detection, thereby obtaining at least one feature key point characterizing scale invariance between the first image and the second image and a feature vector corresponding to the at least one feature key point; calculating the degree of matching between the first image and the second image from the feature vector, setting a distance threshold according to that degree, and screening matched feature points from the at least one feature key point according to the distance threshold, where the distance between each matched feature point and at least one other feature key point is smaller than the distance threshold; determining an invariant region of the first image relative to the second image from the matched feature points; and determining a changed region of the first image relative to the second image from the invariant region of the first image.
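The matching and screening step above can be sketched in a few lines of NumPy. The patent gives no concrete formulas for the matching degree or the adaptive distance threshold, so the choices below (mean nearest-neighbour descriptor distance as a reference scale, threshold tightened as the matching degree rises) are purely illustrative assumptions:

```python
import numpy as np

def screen_matched_points(desc1, desc2, base_ratio=0.8):
    """Screen matched feature points between two SIFT-style descriptor sets.

    Illustrative sketch only: the exact matching-degree and threshold
    formulas are not specified by the patent.
    """
    # Pairwise Euclidean distances between descriptors (n1 x n2).
    dists = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    nearest = dists.min(axis=1)  # best-match distance per keypoint of image 1
    # Matching degree: fraction of keypoints whose best match is "close".
    match_degree = float(np.mean(nearest < base_ratio * dists.mean()))
    # Distance threshold set from the matching degree: stricter when it is high.
    threshold = dists.mean() * (1.0 - 0.5 * match_degree)
    matched_idx = np.where(nearest < threshold)[0]
    return matched_idx, match_degree
```

In a real system the descriptors would come from a SIFT extractor (e.g. OpenCV's `cv2.SIFT_create`), and the screened indices would identify the matched feature points used downstream.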
The technical solution described in the first aspect obtains feature key points and corresponding feature vectors through a feature extraction network based on SIFT feature detection, calculates a matching degree from the feature vectors, sets a distance threshold from that matching degree, screens matched feature points from the feature key points according to the threshold, determines the invariant region of the first image from the matched feature points, and finally determines the changed region of the first image from its invariant region. This effectively overcomes factors that may otherwise degrade detection, such as changes in the camera's extrinsic parameters, in the camera distance between the captures of the first and second images, or in the zoom scale; it reduces pixel-level prediction errors and edge identification errors caused by pixel position offsets, and it is well suited to providing intelligent automatic detection based on computer vision in material-inspection tasks such as scrap steel recycling and cargo sorting and handling.
According to a possible implementation manner of the technical solution of the first aspect, an embodiment of the present application further provides that the first image and the second image are acquired by the same image acquisition device according to different scaling scales.
According to a possible implementation manner of the technical solution of the first aspect, an embodiment of the present application further provides that determining the changed region of the first image relative to the second image according to the unchanged region of the first image comprises: performing a negation (complement) operation on the first image according to its unchanged region to obtain its changed region.
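The negation operation is a plain set complement over the image's pixel grid. A minimal sketch on a boolean mask (with OpenCV one would typically apply `cv2.bitwise_not` to a `uint8` mask instead); the 4x4 grid is illustrative:

```python
import numpy as np

# Invariant region of the first image as a boolean mask (illustrative 4x4 grid).
invariant = np.zeros((4, 4), dtype=bool)
invariant[1:3, 1:3] = True   # pixels judged unchanged via matched feature points
changed = ~invariant          # negation: everything outside the invariant region
```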
According to a possible implementation manner of the technical solution of the first aspect, an embodiment of the present application further provides that determining the invariant region of the first image relative to the second image according to the matched feature points comprises: performing an expansion operation on each of the matched feature points according to an expansion parameter to obtain an expansion pattern corresponding to that matched feature point; and obtaining, through a contour search model, the minimum circumscribed contour that fits the expansion patterns corresponding to the matched feature points. The area enclosed by the minimum circumscribed contour is the invariant region of the first image.
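A minimal sketch of this step, assuming a square structuring element for the expansion and an axis-aligned rectangle as the minimum circumscribed contour; a production implementation would more likely use `cv2.dilate` and `cv2.findContours`:

```python
import numpy as np

def dilate_points(shape, points, radius):
    """Expand each matched feature point (y, x) into a square patch,
    a simplified stand-in for morphological dilation with a structuring
    element of the given radius."""
    mask = np.zeros(shape, dtype=bool)
    for y, x in points:
        y0, y1 = max(0, y - radius), min(shape[0], y + radius + 1)
        x0, x1 = max(0, x - radius), min(shape[1], x + radius + 1)
        mask[y0:y1, x0:x1] = True
    return mask

def min_bounding_box(mask):
    """Minimal axis-aligned circumscribed contour of the dilated patches
    as (ymin, xmin, ymax, xmax); the patent's contour search model could
    return other shapes depending on the application scene."""
    ys, xs = np.nonzero(mask)
    return ys.min(), xs.min(), ys.max(), xs.max()
```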
According to a possible implementation manner of the technical solution of the first aspect, an embodiment of the present application further provides that the expansion parameter is set according to the distance between the matched feature point and its nearest neighboring matched feature point.
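One plausible reading of this rule, sketched below; the scale factor of 0.5 is an assumption, since the patent does not fix the exact mapping from nearest-neighbor distance to expansion parameter:

```python
import numpy as np

def expansion_radius(points, scale=0.5):
    """Per-point expansion parameter derived from the distance to the
    nearest neighboring matched feature point (scale is illustrative)."""
    pts = np.asarray(points, dtype=float)
    # Pairwise distances between matched feature points.
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    np.fill_diagonal(d, np.inf)  # ignore each point's distance to itself
    return d.min(axis=1) * scale
```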
According to a possible implementation manner of the technical solution of the first aspect, an embodiment of the present application further provides that the shape of the minimum circumscribed contour is set according to the application scenario of the image processing method.
According to a possible implementation manner of the technical solution of the first aspect, an embodiment of the present application further provides that the image processing method further comprises: before the expansion operation, performing point filling and/or neighboring-point growing on the matched feature points to add new matched feature points.
According to a possible implementation manner of the technical solution of the first aspect, the embodiment of the present application further provides that the image processing method is used for automatic detection in a scrap steel handling process, where the first image is captured after a specific handling operation, the second image is captured before that operation, and the changed region of the first image is used to determine at least one item of associated information of the scrap subset associated with that handling operation.
According to a possible implementation manner of the technical solution of the first aspect, an embodiment of the present application further provides that determining at least one item of associated information of the scrap subset associated with the specific handling operation from the changed region of the first image comprises: determining the material-piece segmentation recognition result corresponding to the changed region of the first image from that changed region and from the material-piece segmentation recognition result of the first image, the latter being obtained by inputting the first image into a material-piece segmentation recognition model; and determining, from the segmentation recognition result corresponding to the changed region, at least one item of associated information of the scrap subset associated with the specific handling operation. The at least one item of associated information comprises at least one of: contour information, category information, source information, coordinate information, area information, and pixel feature information.
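Restricting the per-pixel segmentation result to the changed region can be sketched as below; the label values and array shapes are made up for illustration (0 marks pixels outside the changed region):

```python
import numpy as np

# Per-pixel material-piece segmentation result of the first image
# (illustrative 3x3 grid; each integer is a piece label).
seg = np.array([[1, 1, 2],
                [1, 2, 2],
                [3, 3, 2]])
# Changed region of the first image as a boolean mask.
changed = np.array([[False, False, True],
                    [False, True,  True],
                    [False, False, False]])
# Segmentation result restricted to the changed region.
seg_in_changed = np.where(changed, seg, 0)
# Labels of the pieces that took part in the handling operation.
moved_pieces = set(np.unique(seg_in_changed[changed]).tolist())
```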
In a second aspect, the present application provides a non-transitory computer-readable storage medium storing computer instructions which, when executed by a processor, implement the image processing method according to any one of the first aspect.
The technical solution described in the second aspect obtains feature key points and corresponding feature vectors through a feature extraction network based on SIFT feature detection, calculates a matching degree from the feature vectors, sets a distance threshold from that matching degree, screens matched feature points from the feature key points according to the threshold, determines the invariant region of the first image from the matched feature points, and finally determines the changed region of the first image from its invariant region. This effectively overcomes factors that may otherwise degrade detection, such as changes in the camera's extrinsic parameters, in the camera distance between the captures of the first and second images, or in the zoom scale; it reduces pixel-level prediction errors and edge identification errors caused by pixel position offsets, and it is well suited to providing intelligent automatic detection based on computer vision in material-inspection tasks such as scrap steel recycling and cargo sorting and handling.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor implements the image processing method according to any one of the first aspect by executing the executable instructions.
The technical solution described in the third aspect obtains feature key points and corresponding feature vectors through a feature extraction network based on SIFT feature detection, calculates a matching degree from the feature vectors, sets a distance threshold from that matching degree, screens matched feature points from the feature key points according to the threshold, determines the invariant region of the first image from the matched feature points, and finally determines the changed region of the first image from its invariant region. This effectively overcomes factors that may otherwise degrade detection, such as changes in the camera's extrinsic parameters, in the camera distance between the captures of the first and second images, or in the zoom scale; it reduces pixel-level prediction errors and edge identification errors caused by pixel position offsets, and it is well suited to providing intelligent automatic detection based on computer vision in material-inspection tasks such as scrap steel recycling and cargo sorting and handling.
In a fourth aspect, an embodiment of the present application provides a detection method comprising the following steps: obtaining an image sequence consisting of a plurality of images corresponding to a scrap handling process that comprises a plurality of scrap handling operations, any two adjacent images of the sequence being captured before and after one of those handling operations, respectively; and, for each scrap handling operation: obtaining the first image and the second image corresponding to that operation, the first image being captured after the operation and the second image before it; inputting the first image and the second image into a feature extraction network based on SIFT feature detection, thereby obtaining at least one feature key point characterizing scale invariance between the first image and the second image and a feature vector corresponding to the at least one feature key point; calculating the degree of matching between the first image and the second image from the feature vector, setting a distance threshold according to that degree, and screening matched feature points from the at least one feature key point according to the distance threshold, where the distance between each matched feature point and at least one other feature key point is smaller than the distance threshold; determining an invariant region of the first image relative to the second image from the matched feature points; determining a changed region of the first image relative to the second image from the invariant region of the first image; determining the material-piece segmentation recognition result corresponding to the changed region of the first image from that changed region and from the material-piece segmentation recognition result of the first image, the latter being obtained by inputting the first image into a material-piece segmentation recognition model; and determining, from the segmentation recognition result corresponding to the changed region, at least one item of associated information of the scrap subset associated with that handling operation. Finally, at least one item of associated information of the scrap set corresponding to the whole handling process is determined from the associated information of the scrap subsets associated with the individual handling operations.
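The per-operation loop of the fourth aspect can be sketched as a fold over adjacent image pairs; `process_pair` is a placeholder name standing in for the SIFT matching, change-region extraction, and segmentation steps described above:

```python
def detect_sequence(images, process_pair):
    """images[i] is captured after handling operation i (images[0] is the
    initial state); each adjacent (before, after) pair yields the
    associated information of one scrap subset."""
    subsets = []
    for before, after in zip(images, images[1:]):
        # First image = captured after the operation, second = before it.
        subsets.append(process_pair(after, before))
    return subsets  # aggregated over all operations: info on the full scrap set
```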
The technical solution described in the fourth aspect obtains feature key points and corresponding feature vectors through a feature extraction network based on SIFT feature detection, calculates a matching degree from the feature vectors, sets a distance threshold from that matching degree, screens matched feature points from the feature key points according to the threshold, determines the invariant region of the first image from the matched feature points, and finally determines the changed region of the first image from its invariant region. This effectively overcomes factors that may otherwise degrade detection, such as changes in the camera's extrinsic parameters, in the camera distance between the captures of the first and second images, or in the zoom scale; it reduces pixel-level prediction errors and edge identification errors caused by pixel position offsets, and it is well suited to providing intelligent automatic detection based on computer vision in material-inspection tasks such as scrap steel recycling and cargo sorting and handling.
According to a possible implementation manner of the technical solution of the fourth aspect, an embodiment of the present application further provides that the at least one piece of associated information of the scrap steel part set includes at least one of: contour information, category information, source information, coordinate information, area information, pixel feature information.
According to a possible implementation manner of the technical solution of the fourth aspect, an embodiment of the present application further provides that determining the invariant region of the first image relative to the second image according to the matched feature points comprises: performing an expansion operation on each of the matched feature points according to an expansion parameter to obtain an expansion pattern corresponding to that matched feature point; and obtaining, through a contour search model, the minimum circumscribed contour that fits the expansion patterns corresponding to the matched feature points. The area enclosed by the minimum circumscribed contour is the invariant region of the first image.
According to a possible implementation manner of the technical solution of the fourth aspect, the embodiment of the present application further provides that the expansion parameter is set according to the distance between the matched feature point and its nearest neighboring matched feature point.
According to a possible implementation manner of the technical solution of the fourth aspect, the embodiment of the present application further provides that the shape of the minimum circumscribed contour is determined according to the shape of the carrier used for the scrap handling operation.
According to a possible implementation manner of the technical solution of the fourth aspect, an embodiment of the present application further provides that the detection method further comprises: before the expansion operation, performing point filling and/or neighboring-point growing on the matched feature points to add new matched feature points.
According to a possible implementation manner of the technical solution of the fourth aspect, an embodiment of the present application further provides that the plurality of images are acquired by an image acquisition device with a variable zoom scale.
According to a possible implementation manner of the technical solution of the fourth aspect, an embodiment of the present application further provides that at least two images of the plurality of images are acquired by the image acquisition device according to different scaling scales.
In a fifth aspect, an embodiment of the present application provides an image processing apparatus comprising: a receiving module for obtaining a first image and a second image; a feature extraction network that, based on SIFT feature detection, obtains from the first image and the second image at least one feature key point characterizing scale invariance between them and a feature vector corresponding to the at least one feature key point; a matching module for calculating the degree of matching between the first image and the second image from the feature vector, setting a distance threshold according to that degree, and screening matched feature points from the at least one feature key point according to the distance threshold, where the distance between each matched feature point and at least one other feature key point is smaller than the distance threshold; an invariant region determining module for determining an invariant region of the first image relative to the second image from the matched feature points; and a changed region determining module for determining a changed region of the first image relative to the second image from the invariant region of the first image.
The technical solution described in the fifth aspect obtains feature key points and corresponding feature vectors through a feature extraction network based on SIFT feature detection, calculates a matching degree from the feature vectors, sets a distance threshold from that matching degree, screens matched feature points from the feature key points according to the threshold, determines the invariant region of the first image from the matched feature points, and finally determines the changed region of the first image from its invariant region. This effectively overcomes factors that may otherwise degrade detection, such as changes in the camera's extrinsic parameters, in the camera distance between the captures of the first and second images, or in the zoom scale; it reduces pixel-level prediction errors and edge identification errors caused by pixel position offsets, and it is well suited to providing intelligent automatic detection based on computer vision in material-inspection tasks such as scrap steel recycling and cargo sorting and handling.
According to a possible implementation manner of the technical solution of the fifth aspect, an embodiment of the present application further provides that the first image and the second image are acquired by the same image acquisition device according to different scaling scales.
According to a possible implementation manner of the technical solution of the fifth aspect, an embodiment of the present application further provides that determining the invariant region of the first image relative to the second image according to the matched feature points comprises: performing an expansion operation on each of the matched feature points according to an expansion parameter to obtain an expansion pattern corresponding to that matched feature point; and obtaining, through a contour search model, the minimum circumscribed contour that fits the expansion patterns corresponding to the matched feature points. The area enclosed by the minimum circumscribed contour is the invariant region of the first image; the expansion parameter is set according to the distance between the matched feature point and its nearest neighboring matched feature point; and the shape of the minimum circumscribed contour is set according to the application scenario of the image processing apparatus.
According to a possible implementation manner of the technical solution of the fifth aspect, the embodiment of the present application further provides that the image processing apparatus is used for automatic detection in a scrap steel handling process, where the first image is captured after a specific handling operation, the second image is captured before that operation, and the changed region of the first image is used to determine at least one item of associated information of the scrap subset associated with that handling operation.
According to a possible implementation manner of the technical solution of the fifth aspect, an embodiment of the present application further provides that determining at least one item of associated information of the scrap subset associated with the specific handling operation from the changed region of the first image comprises: determining the material-piece segmentation recognition result corresponding to the changed region of the first image from that changed region and from the material-piece segmentation recognition result of the first image, the latter being obtained by inputting the first image into a material-piece segmentation recognition model; and determining, from the segmentation recognition result corresponding to the changed region, at least one item of associated information of the scrap subset associated with the specific handling operation. The at least one item of associated information comprises at least one of: contour information, category information, source information, coordinate information, area information, and pixel feature information.
Drawings
In order to explain the technical solutions in the embodiments or background art of the present application, the drawings used in the embodiments or background art of the present application will be described below.
Fig. 1 shows a schematic flowchart of an image processing method for detecting a material part according to an embodiment of the present application.
Fig. 2 shows a block diagram of an electronic device used in the image processing method shown in fig. 1 according to an embodiment of the present application.
Fig. 3 shows a block diagram of an image processing apparatus for material detection according to an embodiment of the present application.
Detailed Description
The embodiments of the present application provide an image processing method, a storage medium, and an image processing apparatus for material detection, addressing the technical problem of achieving accurate detection in material-inspection tasks such as scrap steel recycling and cargo sorting and handling. The image processing method comprises the following steps: obtaining a first image and a second image; inputting the first image and the second image into a feature extraction network based on SIFT feature detection, thereby obtaining at least one feature key point characterizing scale invariance between the first image and the second image and a feature vector corresponding to the at least one feature key point; calculating the degree of matching between the first image and the second image from the feature vector, setting a distance threshold according to that degree, and screening matched feature points from the at least one feature key point according to the distance threshold, where the distance between each matched feature point and at least one other feature key point is smaller than the distance threshold; determining an invariant region of the first image relative to the second image from the matched feature points; and determining a changed region of the first image relative to the second image from the invariant region of the first image.
Thus, the feature key points and the corresponding feature vectors are obtained through the feature extraction network based on SIFT feature detection, the matching degree is calculated based on the feature vectors, a distance threshold is then set, matching feature points are screened from the feature key points according to the distance threshold, the invariant region of the first image is finally determined according to the matching feature points, and the changed region of the first image is determined according to the invariant region of the first image. In this way, adverse effects caused by factors that may affect the detection effect (such as changes in the camera extrinsic parameters, changes in the distance between the first image and the second image, changes in the zoom scale, and the like) are effectively overcome, errors in pixel-level prediction and edge recognition errors caused by positional offsets of pixel points are effectively reduced, and the method is suitable for providing intelligent automatic detection based on computer vision technology in material detection scenarios such as scrap steel recycling and cargo sorting and handling.
The embodiments of the present application can be applied to application scenarios including, but not limited to, industrial automation, cargo sorting in logistics centers, port automation, intelligent automatic cargo inspection and judgment, scrap steel recycling, and intelligent automatic scrap steel inspection and judgment, as well as any application scenario, such as automatic coal sorting, waste recycling, and automatic waste sorting, in which the identification method and apparatus for intelligent material inspection and judgment can improve production efficiency and reduce labor costs.
The embodiments of the present application may be modified and improved according to specific application environments, and are not limited herein.
In order to enable those skilled in the art to better understand the present application, the embodiments of the present application will be described below with reference to the accompanying drawings.
Aspects of the present application and the various embodiments and implementations mentioned below relate to the concepts of artificial intelligence, machine learning, and neural networks. In general, Artificial Intelligence (AI) studies the nature of human intelligence and builds intelligent machines that can react in a manner similar to human intelligence. Research in the field of artificial intelligence applications includes robotics, speech recognition, natural language processing, image recognition, decision reasoning, human-computer interaction, expert systems, and the like. Machine Learning (ML) studies how artificial intelligence systems can model or implement human learning behavior, acquire new knowledge or skills, reorganize existing knowledge structures, and improve their own capabilities. Machine learning learns rules from large numbers of samples, data, or experiences through various algorithms, in order to identify new samples or to make decisions and predictions about events. Examples of machine learning algorithms include decision tree learning, Bayesian classification, support vector machines, clustering algorithms, and the like. Deep Learning (DL), inspired by the deep hierarchical structure of the human brain and of human cognitive processes, studies how to feed large amounts of data into complex models and "train" the models to learn how to extract features. Neural Networks (NN) can be divided into Artificial Neural Networks (ANN) and Spiking Neural Networks (SNN). An SNN simulates a spiking neuron model based on the working mechanism of biological neurons, and uses pulse-coded information in its computations. Currently, the ANN is the more widely used of the two. Unless specified otherwise, indicated otherwise, or given a different interpretation by the context, the neural network NN referred to herein generally refers to an artificial neural network, i.e., an ANN.
An ANN is an algorithmic mathematical model inspired by the structure of brain neurons and the principles of nerve conduction, with a network structure that processes information by imitating the behavioral characteristics of animal neural networks. Neural networks comprise a large number of interconnected nodes or neurons, sometimes referred to as artificial neurons or perceptrons, which are inspired by the structure of neurons in the brain. A Shallow Neural Network comprises only an input layer and an output layer, wherein the input layer is responsible for receiving input signals and the output layer is responsible for outputting the calculation results of the neural network. The input signals are linearly combined and then transformed by an Activation Function to obtain the result of the output layer. The complex models used in deep learning are mainly multi-layer neural networks, sometimes referred to as Deep Neural Networks (DNN). A multi-layer neural network includes hidden layers in addition to the input layer and the output layer; each hidden layer includes an arbitrary number of neurons connected as nodes to the nodes of the previous layer in the network structure, and each neuron can be regarded as a linear combiner that assigns a weight to each connected input value for a weighted linear combination. The activation function is a nonlinear mapping applied after the weighted linear combination of the input signals; in a multi-layer neural network it can be understood as the functional relationship between the output of a neuron in one layer and the input of a neuron in the next layer. Each hidden layer may have a different activation function. Common activation functions are ReLU, Sigmoid, Tanh, etc. The neural network passes the information of each layer to the next layer through this mesh structure.
Forward propagation is the process of calculating layer by layer from the input layer to the output layer; the weighted linear combination and transformation are carried out repeatedly during forward propagation, and finally a Loss Function is calculated, which measures the degree of deviation between the predicted value and the true value of the model. Back propagation proceeds from the output layer through the hidden layers to the input layer, and the neural network parameters are corrected during back propagation according to the error between the actual output and the expected output. DNNs can be classified into Convolutional Neural Networks (CNN), Fully Connected Neural Networks (FCN), and Recurrent Neural Networks (RNN) according to the composition of their base layers. A CNN is composed of convolutional layers, pooling layers, and fully connected layers. An FCN consists of multiple fully connected layers. An RNN consists of fully connected layers but with feedback paths and gating operations between layers, also called recurrent layers. Different types of neural network base layers have different computational characteristics and computational requirements; for example, in some neural networks the convolutional layers account for a high proportion of the computation and each convolutional layer involves a large amount of computation. In addition, the calculation parameters of each convolutional layer of a neural network, such as the convolution kernel size and the input/output feature map sizes, vary widely.
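The weighted linear combination, activation, and loss computation described above can be sketched in a few lines of NumPy. This is a toy illustration of the general concepts only (the network sizes, random data, and mean-squared-error loss are assumptions for the example), not part of the claimed method:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # activation function: nonlinear mapping applied after the linear combination
    return np.maximum(0.0, x)

def forward(x, w1, b1, w2, b2):
    h = relu(x @ w1 + b1)   # hidden layer: weighted linear combination + activation
    return h @ w2 + b2      # output layer: weighted linear combination

def mse_loss(y_pred, y_true):
    # loss function: deviation between predicted and true values
    return float(np.mean((y_pred - y_true) ** 2))

x = rng.normal(size=(4, 3))              # batch of 4 inputs with 3 features each
w1 = rng.normal(size=(3, 5)); b1 = np.zeros(5)
w2 = rng.normal(size=(5, 1)); b2 = np.zeros(1)
y_true = rng.normal(size=(4, 1))

y_pred = forward(x, w1, b1, w2, b2)      # forward propagation, layer by layer
loss = mse_loss(y_pred, y_true)
```

Back propagation would then compute gradients of this loss with respect to `w1`, `b1`, `w2`, `b2` and correct the parameters accordingly.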
Fig. 1 shows a schematic flowchart of an image processing method for detecting a material part according to an embodiment of the present application. As shown in fig. 1, the image processing method includes the following steps.
Step S102: a first image and a second image are obtained.
The first image and the second image may be obtained by capturing an image, or by performing frame extraction, sampling, or screenshot on the video data, which is not limited herein.
Step S104: and inputting the first image and the second image into a feature extraction network based on SIFT feature detection, thereby obtaining at least one feature key point representing scale invariance between the first image and the second image and a feature vector corresponding to the at least one feature key point.
Scale-invariant feature transform (SIFT) feature detection refers to describing local features in an image by using the SIFT algorithm or a similar machine vision algorithm and extracting SIFT features that are invariant to the size and rotation of the image.
Step S106: and calculating the matching degree between the first image and the second image according to the feature vector, setting a distance threshold value according to the matching degree, and screening matched feature points from the at least one feature key point according to the distance threshold value, wherein the distance between each matched feature point of the matched feature points and at least one other feature key point relative to the matched feature points in the at least one feature key point is smaller than the distance threshold value.
Step S108: determining an invariant region of the first image relative to the second image from the matching feature points.
Step S110: determining a changed region of the first image relative to the second image from an unchanged region of the first image.
The image processing method obtains, based on the first image and the second image, feature key points and corresponding feature vectors through the feature extraction network based on SIFT feature detection, calculates the matching degree based on the feature vectors and then sets the distance threshold, screens matching feature points from the feature key points according to the distance threshold, finally determines the invariant region of the first image according to the matching feature points, and determines the changed region of the first image according to the invariant region of the first image. As mentioned above, material detection in scrap steel recycling, cargo sorting and handling, and the like often faces the problems that scrap pieces or cargo pieces shield or even completely cover each other and have similar appearances. Taking the material detection of a scrap collection in a scrap steel recycling application as an example, a scrap collection composed of a plurality of scrap pieces stacked together, possibly shielding and covering each other, needs to be transported from one place to another, and such transport often requires multiple handling operations, each of which carries away a part of the scrap pieces in the collection. For this purpose, if a first image and a second image are captured around a given handling operation, for example the first image after the specific handling operation and the second image before it, the changed region of the first image relative to the second image can be determined from the first image and the second image by the image processing method; the changed region of the first image represents the change caused by the specific handling operation, so the scrap pieces transported by that operation can be estimated.
The scrap pieces conveyed by each of the multiple handling operations are reflected in multiple images, and by integrating the information of these images, the overall situation of the scrap collection, such as the number of scrap pieces in the collection, can be calculated. It should be understood that the first image and the second image are relative terms in the image processing method, corresponding to the states after and before the same handling operation. In the image processing method it is assumed that the first image is captured after a specific handling operation and the second image before it, so that what is finally generated is the changed region of the first image with respect to the second image. In a possible embodiment, if the first image is instead captured before a specific handling operation and the second image after it, the image processing method may be adjusted accordingly so that what is finally generated is the changed region of the second image relative to the first image. In other words, by comparing the image captured at the earlier moment with the image captured at the later moment around the same handling operation, the image processing method finally generates the changed region of the later image with respect to the earlier one. This is because the image acquired at the later moment, for example the first image captured after a specific handling operation, should register the change due to that operation, for example the reduction, caused by the operation, of the portion of the collection's scrap pieces that remained to be handled.
In practical application, under the influence of interference such as weather, illumination, camera shake and the like, a change of a distance, a change of a scaling scale or other factors which may affect a detection effect may exist between two images respectively collected before and after the same carrying operation. These adversely affect the material detection based on the computer vision technology, which may cause errors in pixel-level prediction, for example, a pixel point originally belonging to a certain object may have a position offset due to interference, thereby causing an identification error. The image processing method can cope well with these adverse effects due to disturbance factors such as positional deviation of pixel points, etc., which will be described in detail below.
In step S104, at least one feature key point and a feature vector corresponding to the at least one feature key point are obtained by using a feature extraction network based on SIFT feature detection, which characterize the scale invariance between the first image and the second image. Here, the local feature of the SIFT feature is used regardless of the size and rotation of the image, that is, the feature remains unchanged as the image is enlarged or reduced. This means that the respective scaling scales of the cameras used for capturing the first image and the second image, respectively, are changed without affecting at least one feature keypoint representing scale invariance between the first image and the second image obtained by the feature extraction network based on SIFT feature detection. In other words, it is assumed that the camera external parameters (the position, the rotation direction, and the like of the camera) corresponding to the capturing of the first image are the first camera external parameters, the camera external parameters corresponding to the capturing of the second image are the second camera external parameters, and it is assumed that the first image and the second image are respectively captured before and after a certain transporting operation, and the first image and the second image are affected by interference such as weather, light, camera shake, and the like during the transporting operation, so that there are changes in the distance between the first image and the second image, changes in the zoom scale, or other factors that may affect the detection effect, which may be expressed as the above-mentioned difference between the first camera external parameters and the second camera external parameters. 
Because the camera external parameters change, the positions of the pixel points of the same object on the two images may be shifted but the object does not substantially move, but such a positional shift may cause a pixel-level prediction error, for example, a pixel point that should belong to a certain object is recognized as a pixel point of another object or an edge recognition error is generated at a boundary or an edge zone of the two objects. In order to overcome the change of the external parameters of the camera, or the change of the distance between the first image and the second image, the change of the scaling scale and other factors which may affect the detection effect, the image processing method obtains at least one feature key point representing the scale invariance between the first image and the second image and a feature vector corresponding to the at least one feature key point through a feature extraction network based on the SIFT feature detection in step S104, thereby effectively overcoming the adverse effect brought by the factors which may affect the detection effect and being beneficial to coping with the interference of weather conditions, illumination changes, camera shake and the like. After step S104 is performed, step S106 is then performed.
In step S106, based on the at least one feature key point obtained in step S104 and the feature vector corresponding to the at least one feature key point, a matching degree is calculated and a distance threshold is further set, and then a matching feature point is screened out from the feature key points according to the distance threshold. Here, the at least one feature keypoint characterizes scale invariance between the first image and the second image, and the feature vector corresponding to the at least one feature keypoint can be used to obtain a degree of matching between the first image and the second image, or a degree of matching between the feature keypoints between the first image and the second image. And setting a distance threshold value according to the matching degree, wherein the distance threshold value is equivalent to a screening scale and is used for screening out the matching feature points within the threshold range. As mentioned above, the at least one feature keypoint is obtained based on a feature extraction network for SIFT feature detection, and characterizes the scale invariance between the first image and the second image. Therefore, the feature key points and the corresponding feature vectors obtained by utilizing the characteristic of the SIFT features which are not changed along with scaling can be used for better measuring the matching degree between the two images and reducing the influence of scaling. Specifically, matching feature points are screened out from the at least one feature keypoint according to the distance threshold, wherein the distance between each matching feature point of the matching feature points and at least one other feature keypoint of the at least one feature keypoint relative to the matching feature point is smaller than the distance threshold. This means that matching feature points are screened from the feature keypoints, which are themselves feature keypoints. 
In other words, each matching feature point is a feature key point whose distance to at least one other feature key point is less than the distance threshold. In some embodiments, starting from each feature key point, a search range is set with the feature key point as the center and the distance threshold as the search step; if at least one other feature key point can be found within the search range, the feature key point is judged to be a matching feature point, and otherwise it is judged not to be a matching feature point. In some embodiments, a number of matching feature points of a first batch may be determined and regarded as parent feature points; starting from each parent feature point, a search range is set with the parent feature point as the center and the distance threshold as the search step, all feature key points within the search range are marked as child feature points of that parent feature point, and the search then continues with each child feature point as the center and the distance threshold as the search step until no more child feature points can be found. These parent feature points and their respective child feature points are the matching feature points. It should be understood that any manner of screening matching feature points from the feature key points according to the distance threshold may be adopted, as long as the constraint required in step S106 is satisfied, namely that the distance between each matching feature point and at least one other feature key point among the at least one feature key point is less than the distance threshold. After step S106 is executed, step S108 is then executed.
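The screening of step S106 can be sketched as follows. The specification does not fix how the matching degree maps to the threshold, so two illustrative assumptions are made here: the matching degree is taken as the mean nearest-neighbour descriptor distance between the two images, and the threshold is a fixed multiple of it. The screening rule itself follows the text: a feature key point is kept as a matching feature point if at least one other feature key point lies within the distance threshold of it.

```python
import numpy as np

def matching_degree(desc1, desc2):
    # pairwise Euclidean distances between the two descriptor sets
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())   # mean nearest-neighbour distance

def screen_matching_points(keypoints, threshold):
    # pairwise distances between keypoint coordinates
    d = np.linalg.norm(keypoints[:, None, :] - keypoints[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore each point's distance to itself
    # keep points with at least one other keypoint within the threshold
    return keypoints[(d < threshold).any(axis=1)]

# toy descriptors for the two images (assumed output of the SIFT-based network)
desc1 = np.array([[0.0, 1.0], [1.0, 0.0]])
desc2 = np.array([[0.0, 1.1], [1.0, 0.2]])
degree = matching_degree(desc1, desc2)
threshold = 3.0 * degree                 # assumed rule: threshold scales with the degree

kps = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
matched = screen_matching_points(kps, threshold)   # the isolated point is dropped
```

With these toy values the first two keypoints are within the threshold of each other and survive the screening, while the isolated third keypoint does not.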
In step S108, an invariant region of the first image with respect to the second image is determined from the matching feature points. Here, the matching feature points are selected from feature key points according to a distance threshold, the feature key points are obtained by a feature extraction network based on SIFT feature detection and characterize scale invariance between the first image and the second image, so that an invariant region of the first image relative to the second image can be determined according to the matching feature points, and the invariant region of the first image obtained in this way benefits from the above-mentioned characteristic of the SIFT feature that does not change with scaling, so that adverse effects caused by factors (such as changes of camera external parameters, or changes of distance between the first image and the second image, changes of scaling scale, and the like) which may affect detection effects are effectively overcome. After step S108 is performed, step S110 is then performed.
In step S110, a changed region of the first image relative to the second image is determined from an unchanged region of the first image. As mentioned above, the invariant region of the first image benefits from the scaling invariant property of the SIFT feature, and thus, adverse effects caused by factors (such as changes in camera external parameters, or changes in distance between the first image and the second image, changes in scaling scale, etc.) which may affect the detection effect are effectively overcome. Therefore, by determining the change area of the first image relative to the second image according to the invariant area of the first image, the change area of the first image obtained in this way also benefits from the characteristic of the SIFT feature that the change area does not change with scaling, so that the adverse effect brought by factors (such as the change of external parameters of a camera, or the change of the distance between the first image and the second image, the change of the scaling scale and the like) which may influence the detection effect is effectively overcome, and the interference of weather conditions, illumination changes, camera shaking and the like can be coped with. 
For example, assume that the first image and the second image are captured after and before a certain handling operation respectively, and are affected during that operation by interference such as weather, illumination, and camera shake, so that there are changes in distance, changes in zoom scale, or other factors between the first image and the second image that may affect the detection effect. This may cause the pixel positions of the same object to shift between the two images even though the object has not substantially moved. The changed region of the first image determined according to the image processing method, however, avoids the recognition errors caused by such interference-induced positional offsets of pixel points that originally belong to a given object, and thus avoids identifying a pixel point that should belong to one object as a pixel point of another object, or producing edge recognition errors at the boundary or edge zone between two objects.
Referring to steps S102 to S110, the image processing method obtains feature key points and corresponding feature vectors through the feature extraction network based on SIFT feature detection, calculates the matching degree based on the feature vectors, sets the distance threshold accordingly, screens matching feature points from the feature key points according to the distance threshold, finally determines the invariant region of the first image according to the matching feature points, and determines the changed region of the first image according to the invariant region of the first image. Adverse effects caused by factors that may affect the detection effect (such as changes in the camera extrinsic parameters, changes in the distance between the first image and the second image, changes in the zoom scale, and the like) are thereby effectively overcome, errors in pixel-level prediction and edge recognition errors caused by positional offsets of pixel points are effectively reduced, and the method is suitable for providing intelligent automatic detection based on computer vision technology in material detection scenarios such as scrap steel recycling and cargo sorting and handling.
In a possible embodiment, the first image and the second image are acquired by the same image acquisition device at different zoom scales. As mentioned above, the image processing method obtains feature key points and corresponding feature vectors through the feature extraction network based on SIFT feature detection, calculates the matching degree based on the feature vectors, sets the distance threshold accordingly, screens matching feature points from the feature key points according to the distance threshold, finally determines the invariant region of the first image according to the matching feature points, and determines the changed region of the first image according to the invariant region of the first image, thereby effectively overcoming changes in the camera extrinsic parameters (the position, rotation direction, and the like of the camera), changes in the distance between the first image and the second image, changes in the zoom scale, and so on. This means that a first image and a second image acquired at different zoom scales can still be used to determine the changed region of the first image, and the detection effect is not adversely affected by the different zoom scales. Therefore, the method makes it possible to acquire the first image and the second image at flexibly different zoom scales, which helps to improve the detection effect. For example, the image acquisition device may have a flexibly adjustable zoom scale or other extrinsic camera parameters that can be adapted to different application requirements to achieve a better detection effect.
In a possible embodiment, determining a changed region of the first image relative to the second image from an unchanged region of the first image comprises: and carrying out negation operation on the first image according to the unchanged area of the first image to obtain the changed area of the first image. In this way, the characteristic of the SIFT features that do not change with scaling is utilized, or the feature extraction network with feature key points detected based on the SIFT features is utilized to obtain and characterize the scale invariance between the first image and the second image, so that the change area of the first image is determined through the inversion operation.
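Treating the invariant region as a binary mask over the first image, the negation (inversion) operation is simply the complement of that mask. A minimal sketch, with an assumed mask layout for illustration:

```python
import numpy as np

# binary mask of the invariant region of the first image
# (here we assume the top half of a 4x4 image was found invariant)
unchanged = np.zeros((4, 4), dtype=np.uint8)
unchanged[:2, :] = 1

# negation operation: the changed region is everything outside the invariant region
changed = 1 - unchanged
```

Every pixel of the first image thus belongs to exactly one of the two regions, which is why the changed region can be recovered from the invariant region alone.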
In one possible embodiment, determining an invariant region of the first image with respect to the second image from the matching feature points comprises: performing expansion operation on each matching characteristic point of the matching characteristic points according to expansion parameters to obtain an expansion pattern corresponding to the matching characteristic point; and obtaining a minimum circumscribed outline matched with the expansion patterns respectively corresponding to the matched feature points through an outline searching model, wherein the area occupied by the minimum circumscribed outline is the unchanged area of the first image. Here, the matching feature points are selected from feature key points according to a distance threshold, and the feature key points are obtained by a feature extraction network based on SIFT feature detection and characterize scale invariance between the first image and the second image, so that an invariant region of the first image relative to the second image can be determined according to the matching feature points. After the matched feature points are obtained through screening, richer guidance for determining the invariant region can be obtained through a dilation operation (for example, enlarging the matched feature points from one point to a circle or other shapes) and a contour search model. In some embodiments, the inflation parameters may be adjusted to affect the resulting inflation pattern and the matching minimum circumscribing profile. 
For example, the dilation parameter may be set to half the distance between two adjacent matching feature points, so that the dilation patterns corresponding to the two matching feature points after the dilation operation are two tangent circles; as another example, the dilation parameter may be set to more than half the distance between two adjacent matching feature points, so that the dilation patterns corresponding to the two matching feature points after the dilation operation are two intersecting circles. By adjusting the dilation parameters, including setting different dilation parameters for different matching feature points, the method can better adapt to the actual distribution of the screened-out matching feature points. Further, the shape used for the dilation operation may be other than a circle. In some embodiments, when a dilation operation transforms a matching feature point from a point into a circle or other shape, the pixel values within the circle or other shape should be consistent with the pixel value of the matching feature point; in other embodiments, the pixel values within the circle or other shape need not coincide with the pixel value of the matching feature point and may, for example, differ slightly. In some embodiments, the dilation parameter is set based on the distance between the matching feature point and its nearest neighboring matching feature point. In addition, the parameters of the contour search model may also be adjusted; for example, different types of minimum circumscribed contours may be set, such as a circle, a square, a rectangle, or any suitable shape. Through the minimum circumscribed contour, the method can better adapt in practical applications to the body shape of the carrier vehicle that needs to be detected.
For example, in the material detection of the waste steel recovery, a rectangular shape or a splice of two rectangular shapes can be considered as a minimum circumscribed outline, so as to better adapt to the body shape or the compartment shape of a vehicle for loading the waste steel to be recovered. In some embodiments, the shape of the minimum circumscribing profile is set according to an application scenario of the image processing method. This allows a better adaptation to the requirements in the application scenario. Also, in some embodiments, prior to performing the dilation operation, point filling and/or adjacent point growing is performed on the matched feature points to add new matched feature points. For example, assuming that there are 50 matched feature points obtained after the filtering, the matched feature points with relatively sparse distribution can be made denser by means of point filling, such as filling a new matched feature point between two matched feature points, for example, obtaining a total of 80 matched feature points after the point filling. Similarly, new matching feature points can be added in a neighboring point growing manner, so that matching feature points which are relatively sparsely distributed become denser, and subsequent processing is facilitated.
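The dilation-plus-contour step above can be sketched as follows: each matching feature point is expanded into a disc whose radius is the dilation parameter, and a rectangular minimum circumscribed contour (one of the shapes the text mentions) is fitted around the union of the discs. The grid size, the two feature points, and the radius are illustrative assumptions; the radius is chosen as half the distance between the two points, so the discs are tangent, as in the first example above.

```python
import numpy as np

def dilate_points(shape, points, radius):
    # expand each matching feature point into a disc of the given radius
    mask = np.zeros(shape, dtype=bool)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    for (py, px) in points:
        mask |= (yy - py) ** 2 + (xx - px) ** 2 <= radius ** 2
    return mask

def bounding_rectangle(mask):
    # rectangular minimum circumscribed contour of the dilated patterns
    ys, xs = np.nonzero(mask)
    return (ys.min(), xs.min(), ys.max(), xs.max())  # (top, left, bottom, right)

points = [(5, 5), (5, 9)]   # two adjacent matching feature points, 4 pixels apart
radius = 2                  # dilation parameter: half their distance -> tangent discs
mask = dilate_points((16, 16), points, radius)
rect = bounding_rectangle(mask)
```

The area enclosed by `rect` then serves as the invariant region of the first image; other contour types (circle, square, or a combination of rectangles for a vehicle body) would replace `bounding_rectangle` accordingly.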
In a possible embodiment, the image processing method is used for automatic detection during scrap handling: the first image is captured after a specific handling operation, the second image is captured before that operation, and the changed region of the first image is used to determine at least one piece of associated information about the scrap subset associated with that operation. Material detection in scrap steel recycling, cargo sorting and handling, and similar scenarios often faces the problems that scrap or cargo pieces occlude or even completely cover one another and look alike. Taking automatic detection during scrap handling as an example, a scrap set composed of multiple scrap pieces stacked together, possibly occluding and covering one another, needs to be moved from one place to another; this usually takes multiple handling operations, each of which moves part of the scrap pieces in the set. If a first image and a second image are captured after and before a specific handling operation, respectively, the changed region of the first image relative to the second image can be determined from the two images by the image processing method; since that changed region represents the change caused by the specific handling operation, the scrap pieces moved by that operation can be estimated from it.
The scrap pieces moved by each of the multiple handling operations are reflected in multiple images, and by integrating the information of these images, the overall situation of the scrap set, such as the number of scrap pieces it contains, can be computed. In some embodiments, using the changed region of the first image to determine at least one piece of associated information about the scrap subset associated with the specific handling operation includes: determining the material part segmentation recognition result corresponding to the changed region of the first image from that changed region and the material part segmentation recognition result of the first image, where the latter is obtained by inputting the first image into a material part segmentation recognition model; and determining, from the material part segmentation recognition result corresponding to the changed region, at least one piece of associated information about the scrap subset associated with the specific handling operation, where the at least one piece of associated information includes at least one of the following: contour information, category information, source information, coordinate information, area information, and pixel feature information. The contour information indicates the contour of each scrap piece in the scrap set; it may be the result of matching against several preset contour types, a numerical semantic description (such as side length, curvature, and the like), or a generalized semantic description (such as disc-shaped, strip-shaped, and the like).
The category information indicates how many categories of scrap pieces the scrap set contains and the number of pieces in each category; this information can be used to analyze and extract further information, so the associated information generally includes at least category information. For example, the category information of a scrap set may indicate that it contains 10 train wheels, 20 car bearings, 30 screws, and so on. The source information indicates where a scrap piece comes from, for example a train or a barge. The coordinate information indicates the coordinates of a scrap piece in the image. The area information indicates the area a scrap piece occupies in the image. The pixel feature information indicates the features of all pixels belonging to a scrap piece. It should be understood that richer associated information about the scrap set can be obtained depending on the computer vision technique specifically used to produce the semantic segmentation result of the original image; the examples of associated information listed above are illustrative, not limiting. The rich associated information thus obtained provides a basis for decision making and subsequent processing, such as weight estimation, price estimation, and quality assessment.
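A minimal sketch of restricting a per-pixel segmentation result to the changed region and counting pixel areas per category there, which is one way to derive category and area information for the scrap subset. Integer label maps, category ids, and shapes are illustrative assumptions.

```python
import numpy as np

def segmentation_in_changed_region(seg_labels, changed_mask):
    """Keep the segmentation result only inside the changed region, then count
    pixels per category there. seg_labels holds one integer category id per
    pixel (0 = background); changed_mask is nonzero inside the changed region."""
    masked = np.where(changed_mask > 0, seg_labels, 0)
    ids, counts = np.unique(masked[masked > 0], return_counts=True)
    return masked, dict(zip(ids.tolist(), counts.tolist()))

# Illustrative 3x3 example: categories 1 and 3 fall inside the changed region,
# category 2 lies entirely outside it.
seg = np.array([[1, 1, 0],
                [2, 2, 2],
                [0, 3, 3]])
mask = np.array([[1, 1, 0],
                 [0, 0, 0],
                 [0, 1, 1]])
_, area_per_category = segmentation_in_changed_region(seg, mask)
```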
Referring to fig. 1, the image processing method is suitable for automatic detection during scrap handling; specifically, an embodiment of the present application provides a detection method. The detection method includes: obtaining an image sequence composed of multiple images corresponding to a scrap handling job, the scrap handling job comprising multiple handling operations, any two adjacent images of the image sequence being captured before and after one of the handling operations, respectively; and, for each handling operation of the multiple handling operations: obtaining a first image and a second image corresponding to that handling operation, the first image being captured after it and the second image before it; inputting the first image and the second image into a feature extraction network based on SIFT feature detection, so as to obtain at least one feature keypoint characterizing scale invariance between the first image and the second image and a feature vector corresponding to the at least one feature keypoint; calculating the matching degree between the first image and the second image from the feature vectors, setting a distance threshold according to the matching degree, and filtering matched feature points from the at least one feature keypoint according to the distance threshold, where the distance between each matched feature point and at least one other feature keypoint, relative to that matched feature point, is smaller than the distance threshold; determining an invariant region of the first image relative to the second image from the matched feature points; determining a changed region of the first image relative to the
second image from the unchanged region of the first image; determining the material part segmentation recognition result corresponding to the changed region of the first image from that changed region and the material part segmentation recognition result of the first image, where the latter is obtained by inputting the first image into a material part segmentation recognition model; determining, from the material part segmentation recognition result corresponding to the changed region, at least one piece of associated information about the scrap subset associated with that handling operation; and determining at least one piece of associated information about the scrap set corresponding to the scrap handling job based on the associated information of the scrap subsets associated with the individual handling operations. In some embodiments, the at least one piece of associated information about the scrap set includes at least one of: contour information, category information, source information, coordinate information, area information, and pixel feature information. In some embodiments, determining an invariant region of the first image relative to the second image from the matched feature points includes: performing a dilation operation on each matched feature point according to a dilation parameter to obtain a dilated pattern corresponding to that point; and obtaining, through a contour search model, the minimum circumscribed contour that fits the dilated patterns respectively corresponding to the matched feature points, where the area enclosed by the minimum circumscribed contour is the unchanged region of the first image.
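The matching-degree and distance-threshold filtering step of the method above can be sketched with synthetic descriptors standing in for a real SIFT-based feature extraction network (whose details the text does not fix). Deriving the threshold by scaling the global matching degree, and the `base_threshold` value, are one plausible reading for illustration, not the patented formula.

```python
import numpy as np

def filter_matched_points(desc1, desc2, base_threshold=0.8):
    """Compute a global matching degree between two descriptor sets, set a
    distance threshold from it, and keep as matched feature points those
    keypoints of image 1 whose nearest descriptor in image 2 lies within
    that threshold."""
    # pairwise Euclidean distances between the two descriptor sets
    dists = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    nearest = dists.min(axis=1)                # best match distance per keypoint
    matching_degree = float(nearest.mean())    # global closeness of the images
    threshold = base_threshold * matching_degree
    matched_idx = np.flatnonzero(nearest < threshold)
    return matched_idx, threshold

# Synthetic data: six near-duplicate descriptors (true matches) plus two
# far-away outliers with no counterpart in the second image.
rng = np.random.default_rng(0)
base = rng.normal(size=(6, 8))                     # "second image" descriptors
near = base + rng.normal(scale=0.01, size=(6, 8))  # matching keypoints
outliers = rng.normal(size=(2, 8)) + 10.0          # unmatched keypoints
idx, thr = filter_matched_points(np.vstack([near, outliers]), base)
```

The six near-duplicates pass the filter while the two outliers are rejected, mirroring how the distance threshold screens matched feature points from the feature keypoints.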
In some embodiments, the dilation parameter is set based on the distance between the matching feature point and its nearest neighboring matching feature point. In some embodiments, the shape of the minimum circumscribed contour is determined according to the contour of the carrier vehicle used in the scrap handling operation. In some embodiments, the detection method further comprises: before the dilation operation, performing point filling and/or adjacent-point growing on the matched feature points to add new matched feature points. In some embodiments, the multiple images are acquired by an image acquisition device with a variable zoom scale. In some embodiments, at least two of the multiple images are acquired by the image acquisition device at different zoom scales.
Thus, the detection method for automatically detecting scrap pieces during handling effectively overcomes the adverse effects of factors that may impair detection (such as changes in camera extrinsic parameters, in the distance to the scene between the first and second images, or in the zoom scale), and also effectively reduces pixel-level prediction errors and edge recognition errors caused by positional shifts of pixels. The detection method determines the changed region of the first image relative to the second image; since that region represents the change caused by a scrap handling operation, it can be used to compute the scrap pieces moved by that operation. The scrap moved by each of the multiple handling operations is reflected in the images of the image sequence, and integrating this information yields the overall situation of the scrap set corresponding to the whole handling job. In addition, the detection effect can be improved through the dilation operation and the contour search model; for example, the shape of the minimum circumscribed contour can be determined according to the contour of the carrier vehicle used in the handling operation, so as to fit that contour better. Moreover, an image acquisition device with a variable zoom scale may be used, for example acquiring at least two of the images at different zoom scales, so that the zoom scale can be adjusted to different application requirements for a better detection effect.
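The changed-region step, i.e. the negation operation over the unchanged region described in claim 3, reduces to complementing a binary mask: everything outside the minimum circumscribed contour is the changed region. A minimal sketch with illustrative 0/255 masks:

```python
import numpy as np

def changed_region(unchanged_mask):
    """Negation operation: the changed region of the first image is the
    complement of its unchanged region (the area enclosed by the minimum
    circumscribed contour). Masks are 0/255 uint8 arrays."""
    return np.where(unchanged_mask > 0, 0, 255).astype(np.uint8)

# Illustrative 3x3 unchanged-region mask (top-left block unchanged).
unchanged = np.array([[255, 255, 0],
                      [255, 255, 0],
                      [0,   0,   0]], dtype=np.uint8)
changed = changed_region(unchanged)
```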
It is to be understood that the above-described method may be implemented by a corresponding execution body or carrier. In some exemplary embodiments, a non-transitory computer readable storage medium stores computer instructions that, when executed by a processor, implement the above-described method and any of the above-described embodiments, implementations, or combinations thereof. In some example embodiments, an electronic device includes: a processor; a memory for storing processor-executable instructions; wherein the processor implements the above method and any of the above embodiments, implementations, or combinations thereof by executing the executable instructions.
Fig. 2 shows a block diagram of an electronic device used for the image processing method shown in fig. 1 according to an embodiment of the present application. As shown in fig. 2, the electronic device includes a main processor 202, an internal bus 204, a network interface 206, a main memory 208, an auxiliary processor 210 with auxiliary memory 212, and an auxiliary processor 220 with auxiliary memory 222. The main processor 202 is connected to the main memory 208, which can store computer instructions executable by the main processor 202, so that the image processing method shown in fig. 1 can be implemented, including some or all of its steps and any possible combination, replacement, or variation of those steps. The network interface 206 provides network connectivity and transmits and receives data over a network. The internal bus 204 provides internal data interaction among the main processor 202, the network interface 206, the auxiliary processor 210, and the auxiliary processor 220. The auxiliary processor 210 is coupled to the auxiliary memory 212 and provides auxiliary computing power, and the auxiliary processor 220 is coupled to the auxiliary memory 222 and likewise provides auxiliary computing power. The auxiliary processors 210 and 220 may provide the same or different auxiliary computing capabilities, including but not limited to capabilities optimized for particular computing requirements, such as parallel processing or tensor computation, and capabilities optimized for particular algorithms or logic structures, such as iterative computation or graph computation.
The auxiliary processors 210 and 220 may include one or more processors of a particular type, such as a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA), so that customized functions and structures can be provided. In some exemplary embodiments, the electronic device may include no auxiliary processor, only one, or any number of auxiliary processors, each with a corresponding customized function and structure; this is not specifically limited here. The architecture with two auxiliary processors shown in fig. 2 is for illustration only and should not be construed as limiting. In addition, the main processor 202 may include a single-core or multi-core computing unit to provide the functions and operations necessary for the embodiments of the present application. The main processor 202 and the auxiliary processors (such as the auxiliary processors 210 and 220 in fig. 2) may also have different architectures, i.e., the electronic device may be a system with a heterogeneous architecture; for example, the main processor 202 may be a general-purpose processor running an instruction-set-based operating system, such as a CPU, while an auxiliary processor may be a graphics processor (GPU) suited to parallel computation or a dedicated accelerator suited to neural network operations. The auxiliary memories (such as the auxiliary memories 212 and 222 shown in fig. 2) may cooperate with the respective auxiliary processors to implement customized functions and structures, and the main memory 208 stores the instructions, software, configurations, and data necessary to cooperate with the main processor 202 in providing the functions and operations of the embodiments of the present application.
In some exemplary embodiments, the electronic device may include no auxiliary memory, only one auxiliary memory, or any number of auxiliary memories; this is not specifically limited here. The architecture with two auxiliary memories shown in fig. 2 is illustrative only and should not be construed as limiting. Main memory 208 and any auxiliary memory may include one or more of the following features: volatile, nonvolatile, dynamic, static, readable/writable, read-only, random-access, sequential-access, location-addressable, file-addressable, and content-addressable, and may include random-access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, a recordable and/or rewritable compact disc (CD), a digital versatile disc (DVD), a mass storage media device, or any other suitable form of storage medium. The internal bus 204 may include any of a variety of bus structures or combinations thereof, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus using any of a variety of bus architectures. It should be understood that the structure of the electronic device shown in fig. 2 does not constitute a specific limitation on the apparatus or system concerned; in some exemplary embodiments, the device may include more or fewer components than shown in the specific embodiments and drawings, combine certain components, split certain components, or arrange components differently.
With continued reference to fig. 2, in one possible implementation, the auxiliary processor 210 and/or the auxiliary processor 220 may have a computing architecture that is custom designed for the characteristics of neural network computing, such as a neural network accelerator. Moreover, the electronic device may include any number of auxiliary processors each having a computing architecture that is custom designed for the characteristics of neural network computations, or the electronic device may include any number of neural network accelerators. In some embodiments, for illustrative purposes only, an exemplary neural network accelerator may be: the neural network accelerator is provided with a time domain computing architecture based on a control flow, and the instruction flow of an instruction set is customized based on a neural network algorithm to perform centralized control on computing resources and storage resources; alternatively, neural network accelerators with a data-flow based spatial computation architecture, such as two-dimensional spatial computation arrays based on Row Stationary (RS) data flows, two-dimensional matrix multiplication arrays using Systolic arrays (Systolic Array), and the like; or any neural network accelerator having any suitable custom designed computational architecture.
Fig. 3 shows a block diagram of an image processing apparatus for material detection according to an embodiment of the present application. As shown in fig. 3, the image processing apparatus includes: a receiving module 310, configured to obtain a first image and a second image; a feature extraction network 320, which, based on SIFT feature detection and on the first image and the second image, obtains at least one feature keypoint characterizing scale invariance between the first image and the second image and a feature vector corresponding to the at least one feature keypoint; a matching module 330, configured to calculate the matching degree between the first image and the second image from the feature vectors, set a distance threshold according to the matching degree, and filter matched feature points from the at least one feature keypoint according to the distance threshold, where the distance between each matched feature point and at least one other feature keypoint, relative to that matched feature point, is smaller than the distance threshold; an invariant region determining module 340, configured to determine the invariant region of the first image relative to the second image from the matched feature points; and a changed region determining module 350, configured to determine the changed region of the first image relative to the second image based on the unchanged region of the first image.
The image processing apparatus obtains feature keypoints and corresponding feature vectors through a feature extraction network based on SIFT feature detection, calculates the matching degree from the feature vectors, sets a distance threshold accordingly, filters matched feature points from the feature keypoints according to the distance threshold, and finally determines the invariant region of the first image from the matched feature points and the changed region of the first image from that invariant region. This effectively overcomes the adverse effects of factors that may impair detection (such as changes in camera extrinsic parameters, in the distance to the scene between the first and second images, or in the zoom scale), effectively reduces pixel-level prediction errors and edge recognition errors caused by positional shifts of pixels, and is suitable for providing intelligent automatic detection based on computer vision in material detection for scrap steel recycling, cargo sorting and handling, and the like.
In a possible embodiment, the first image and the second image are acquired by the same image acquisition device according to different scaling scales.
In one possible embodiment, determining the invariant region of the first image relative to the second image from the matched feature points includes: performing a dilation operation on each matched feature point according to a dilation parameter to obtain a dilated pattern corresponding to that point; and obtaining, through a contour search model, the minimum circumscribed contour that fits the dilated patterns respectively corresponding to the matched feature points, where the area enclosed by the minimum circumscribed contour is the unchanged region of the first image, the dilation parameter is set according to the distance between the matched feature point and its nearest neighboring matched feature point, and the shape of the minimum circumscribed contour is set according to the application scenario of the image processing method.
In a possible embodiment, the image processing device is used for automatic detection during the handling of the scrap pieces, the first image is acquired after a specific handling operation, the second image is acquired before the specific handling operation, and the change area of the first image is used for determining at least one piece of associated information of a subset of scrap pieces associated with the specific handling operation.
In a possible embodiment, using the changed region of the first image to determine at least one piece of associated information about the scrap subset associated with the specific handling operation includes: determining the material part segmentation recognition result corresponding to the changed region of the first image from that changed region and the material part segmentation recognition result of the first image, where the latter is obtained by inputting the first image into a material part segmentation recognition model; and determining, from the material part segmentation recognition result corresponding to the changed region, at least one piece of associated information about the scrap subset associated with the specific handling operation, where the at least one piece of associated information includes at least one of the following: contour information, category information, source information, coordinate information, area information, and pixel feature information.
The embodiments provided herein may be implemented in any one or combination of hardware, software, firmware, or solid state logic circuitry, and may be implemented in connection with signal processing, control, and/or application specific circuitry. Particular embodiments of the present application provide an apparatus or device that may include one or more processors (e.g., microprocessors, controllers, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), etc.) that process various computer-executable instructions to control the operation of the apparatus or device. Particular embodiments of the present application provide an apparatus or device that can include a system bus or data transfer system that couples the various components together. A system bus can include any of a variety of different bus structures or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures. The devices or apparatuses provided in the embodiments of the present application may be provided separately, or may be part of a system, or may be part of other devices or apparatuses.
Particular embodiments provided herein may include or be combined with computer-readable storage media, such as one or more storage devices capable of providing non-transitory data storage. The computer-readable storage medium/storage device may be configured to store data, programs, and/or instructions that, when executed by a processor of an apparatus or device provided by the embodiments of the present application, cause that apparatus or device to perform the associated operations. The computer-readable storage medium/storage device may include one or more of the following features: volatile, non-volatile, dynamic, static, read/write, read-only, random access, sequential access, location addressability, file addressability, and content addressability. In one or more exemplary embodiments, the computer-readable storage medium/storage device may be integrated into a device or apparatus provided in the embodiments of the present application or belong to a common system. The computer-readable storage medium/storage device may include optical, semiconductor, and/or magnetic memory devices, and may also include random-access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, a recordable and/or rewritable compact disc (CD), a digital versatile disc (DVD), a mass storage media device, or any other suitable form of storage medium.
The above is an implementation manner of the embodiments of the present application, and it should be noted that the steps in the method described in the embodiments of the present application may be sequentially adjusted, combined, and deleted according to actual needs. In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. It is to be understood that the embodiments of the present application and the structures shown in the drawings are not to be construed as particularly limiting the devices or systems concerned. In other embodiments of the present application, an apparatus or system may include more or fewer components than the specific embodiments and figures, or may combine certain components, or may separate certain components, or may have a different arrangement of components. Those skilled in the art will understand that various modifications and changes may be made in the arrangement, operation, and details of the methods and apparatus described in the specific embodiments without departing from the spirit and scope of the embodiments herein; without departing from the principles of embodiments of the present application, several improvements and modifications may be made, and such improvements and modifications are also considered to be within the scope of the present application.

Claims (19)

1. An image processing method, characterized in that the image processing method comprises:
obtaining a first image and a second image;
inputting the first image and the second image into a feature extraction network based on SIFT feature detection, thereby obtaining at least one feature key point representing scale invariance between the first image and the second image and a feature vector corresponding to the at least one feature key point;
calculating the matching degree between the first image and the second image according to the feature vector, setting a distance threshold according to the matching degree, and screening matched feature points from the at least one feature key point according to the distance threshold, wherein the distance between each matched feature point of the matched feature points and at least one other feature key point relative to the matched feature points in the at least one feature key point is smaller than the distance threshold;
determining an invariant region of the first image relative to the second image from the matching feature points; and
determining a changed region of the first image relative to the second image from an unchanged region of the first image,
determining an invariant region of the first image relative to the second image from the matched feature points, comprising:
performing expansion operation on each matching characteristic point of the matching characteristic points according to expansion parameters to obtain an expansion pattern corresponding to the matching characteristic point;
obtaining, through a contour search model, the minimum circumscribed contour matching the dilated patterns respectively corresponding to the matched feature points,
wherein the area occupied by the minimum circumscribed contour is the invariant region of the first image,
and the shape of the minimum circumscribed contour is set according to the application scenario of the image processing method.
2. The image processing method according to claim 1, wherein the first image and the second image are acquired by a same image acquisition device according to different scaling scales.
3. The method of claim 1, wherein determining a changed region of the first image relative to the second image from an unchanged region of the first image comprises:
and carrying out negation operation on the first image according to the unchanged area of the first image to obtain the changed area of the first image.
4. The image processing method according to claim 1, wherein the dilation parameter is set based on a distance between the matching feature point and a nearest neighboring matching feature point with respect to the matching feature point.
5. The image processing method according to claim 1, characterized in that the image processing method further comprises:
and before the expansion operation, point filling and/or adjacent point growing are carried out on the matched characteristic points so as to add new matched characteristic points.
6. The image processing method according to claim 1, wherein
the image processing method is used for automatic detection during the handling of scrap steel pieces,
the first image is captured after a specific handling operation and the second image is captured before the specific handling operation,
and the changed region of the first image is used to determine at least one piece of associated information of a subset of scrap steel pieces associated with the specific handling operation.
7. The image processing method according to claim 6, wherein using the changed region of the first image to determine at least one piece of associated information of the subset of scrap steel pieces associated with the specific handling operation comprises:
determining a material piece segmentation recognition result corresponding to the changed region of the first image according to the changed region of the first image and the material piece segmentation recognition result of the first image, wherein the material piece segmentation recognition result of the first image is obtained by inputting the first image into a material piece segmentation recognition model; and
determining at least one piece of associated information of the subset of scrap steel pieces associated with the specific handling operation according to the material piece segmentation recognition result corresponding to the changed region of the first image,
wherein the at least one piece of associated information of the subset of scrap steel pieces associated with the specific handling operation comprises at least one of: contour information, category information, source information, coordinate information, area information, and pixel feature information.
8. A non-transitory computer readable storage medium storing computer instructions which, when executed by a processor, implement the image processing method according to any one of claims 1 to 7.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the image processing method according to any one of claims 1 to 7 by executing the executable instructions.
10. A detection method, characterized in that the detection method comprises:
obtaining an image sequence consisting of a plurality of images corresponding to a scrap steel piece handling job, the handling job comprising a plurality of scrap steel piece handling operations, any two adjacent images of the image sequence being captured before and after one handling operation of the plurality of handling operations,
and, for each handling operation of the plurality of scrap steel piece handling operations:
obtaining a first image and a second image corresponding to that handling operation, the first image being captured after that handling operation and the second image being captured before that handling operation;
inputting the first image and the second image into a feature extraction network based on SIFT feature detection to obtain at least one feature keypoint characterizing scale invariance between the first image and the second image and a feature vector corresponding to the at least one feature keypoint;
calculating the matching degree between the first image and the second image according to the feature vectors, setting a distance threshold according to the matching degree, and screening matched feature points from the at least one feature keypoint according to the distance threshold, wherein the distance between each matched feature point and at least one other feature keypoint of the at least one feature keypoint is smaller than the distance threshold;
determining an invariant region of the first image relative to the second image from the matched feature points;
determining a changed region of the first image relative to the second image from the invariant region of the first image;
determining a material piece segmentation recognition result corresponding to the changed region of the first image according to the changed region of the first image and the material piece segmentation recognition result of the first image, wherein the material piece segmentation recognition result of the first image is obtained by inputting the first image into a material piece segmentation recognition model; and
determining at least one piece of associated information of a subset of scrap steel pieces associated with that handling operation according to the material piece segmentation recognition result corresponding to the changed region of the first image, and determining, based thereon, at least one piece of associated information of the subset of scrap steel pieces associated with each handling operation of the plurality of handling operations,
wherein determining the invariant region of the first image relative to the second image from the matched feature points comprises:
performing a dilation operation on each matched feature point of the matched feature points according to a dilation parameter to obtain a dilation pattern corresponding to that matched feature point; and
obtaining, through a contour searching model, the minimum circumscribing contour matching the dilation patterns respectively corresponding to the matched feature points,
wherein the area occupied by the minimum circumscribing contour is the invariant region of the first image,
and the shape of the minimum circumscribing contour is determined according to the shape of a carrier used for the scrap steel piece handling operation.
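The matching step of claim 10 can be sketched in isolation. SIFT detection itself is assumed to come from a library such as OpenCV and is not shown; only the descriptor-distance thresholding is illustrated, and the threshold rule (0.8 times the mean nearest-counterpart distance) is an assumption standing in for "setting a distance threshold according to the matching degree".

```python
import math

# Illustrative sketch: given SIFT-style descriptor vectors for keypoints of
# the two images, keep only keypoints whose nearest counterpart in the other
# image lies below a threshold derived from the overall match quality.

def match_keypoints(desc1, desc2, scale=0.8):
    # Distance from each first-image descriptor to its nearest counterpart.
    nearest = [min(math.dist(d1, d2) for d2 in desc2) for d1 in desc1]
    # Assumed rule: threshold = scale * mean nearest distance.
    threshold = scale * (sum(nearest) / len(nearest))
    return [i for i, d in enumerate(nearest) if d < threshold]

desc1 = [(0.0, 0.0), (1.0, 1.0), (9.0, 9.0)]  # hypothetical descriptors
desc2 = [(0.1, 0.0), (1.0, 1.2), (5.0, 5.0)]
print(match_keypoints(desc1, desc2))  # keypoints 0 and 1 survive the threshold
```

Deriving the threshold from the observed match degree, rather than fixing it, makes the screening adaptive to how similar the two frames are overall.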
11. The detection method according to claim 10, wherein the at least one piece of associated information of the subset of scrap steel pieces comprises at least one of: contour information, category information, source information, coordinate information, area information, and pixel feature information.
12. The detection method according to claim 10, wherein the dilation parameter is set according to the distance between each matched feature point and its nearest neighboring matched feature point.
13. The detection method according to claim 10, further comprising:
before the dilation operation, performing point filling and/or neighboring-point growing on the matched feature points to add new matched feature points.
14. The detection method according to any one of claims 10 to 13, wherein the plurality of images are acquired by an image acquisition device having a variable zoom scale.
15. The detection method according to claim 14, wherein at least two of the plurality of images are acquired by the image acquisition device at different zoom scales.
16. An image processing apparatus characterized by comprising:
a receiving module for obtaining a first image and a second image;
a feature extraction network, wherein the feature extraction network obtains, based on SIFT feature detection and on the first image and the second image, at least one feature keypoint characterizing scale invariance between the first image and the second image and a feature vector corresponding to the at least one feature keypoint;
a matching module for calculating the matching degree between the first image and the second image according to the feature vectors, setting a distance threshold according to the matching degree, and screening matched feature points from the at least one feature keypoint according to the distance threshold, wherein the distance between each matched feature point and at least one other feature keypoint of the at least one feature keypoint is smaller than the distance threshold;
an invariant region determining module for determining an invariant region of the first image relative to the second image from the matched feature points; and
a changed region determining module for determining a changed region of the first image relative to the second image from the invariant region of the first image,
wherein determining the invariant region of the first image relative to the second image from the matched feature points comprises:
performing a dilation operation on each matched feature point of the matched feature points according to a dilation parameter to obtain a dilation pattern corresponding to that matched feature point; and
obtaining, through a contour searching model, the minimum circumscribing contour matching the dilation patterns respectively corresponding to the matched feature points,
wherein the area occupied by the minimum circumscribing contour is the invariant region of the first image, the dilation parameter is set according to the distance between each matched feature point and its nearest neighboring matched feature point, and the shape of the minimum circumscribing contour is set according to the application scene of the image processing apparatus.
17. The image processing apparatus according to claim 16, wherein the first image and the second image are acquired by the same image acquisition device at different zoom scales.
18. The image processing apparatus according to claim 16, wherein
the image processing apparatus is used for automatic detection during the handling of scrap steel pieces,
the first image is captured after a specific handling operation and the second image is captured before the specific handling operation,
and the changed region of the first image is used to determine at least one piece of associated information of a subset of scrap steel pieces associated with the specific handling operation.
19. The image processing apparatus according to claim 18, wherein using the changed region of the first image to determine at least one piece of associated information of the subset of scrap steel pieces associated with the specific handling operation comprises:
determining a material piece segmentation recognition result corresponding to the changed region of the first image according to the changed region of the first image and the material piece segmentation recognition result of the first image, wherein the material piece segmentation recognition result of the first image is obtained by inputting the first image into a material piece segmentation recognition model; and
determining at least one piece of associated information of the subset of scrap steel pieces associated with the specific handling operation according to the material piece segmentation recognition result corresponding to the changed region of the first image,
wherein the at least one piece of associated information of the subset of scrap steel pieces associated with the specific handling operation comprises at least one of: contour information, category information, source information, coordinate information, area information, and pixel feature information.
CN202111540585.2A 2021-12-16 2021-12-16 Image processing method, storage medium and image processing device for detecting material Active CN113935997B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111540585.2A CN113935997B (en) 2021-12-16 2021-12-16 Image processing method, storage medium and image processing device for detecting material


Publications (2)

Publication Number Publication Date
CN113935997A CN113935997A (en) 2022-01-14
CN113935997B (en) 2022-03-04

Family

ID=79289105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111540585.2A Active CN113935997B (en) 2021-12-16 2021-12-16 Image processing method, storage medium and image processing device for detecting material

Country Status (1)

Country Link
CN (1) CN113935997B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115830008B (en) * 2023-02-06 2023-05-05 上海爱梵达云计算有限公司 Scrap steel waste degree analysis system based on image analysis comparison judgment
CN116625243B (en) * 2023-07-26 2023-09-19 湖南隆深氢能科技有限公司 Intelligent detection method, system and storage medium based on frame coil stock cutting machine

Citations (3)

Publication number Priority date Publication date Assignee Title
WO2018232518A1 (en) * 2017-06-21 2018-12-27 Vancouver Computer Vision Ltd. Determining positions and orientations of objects
WO2021000702A1 (en) * 2019-06-29 2021-01-07 华为技术有限公司 Image detection method, device, and system
EP3798975A1 (en) * 2019-09-29 2021-03-31 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and apparatus for detecting subject, electronic device, and computer readable storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US9977978B2 (en) * 2011-11-14 2018-05-22 San Diego State University Research Foundation Image station matching, preprocessing, spatial registration and change detection with multi-temporal remotely-sensed imagery
US10366306B1 (en) * 2013-09-19 2019-07-30 Amazon Technologies, Inc. Item identification among item variations
US9120621B1 (en) * 2014-03-25 2015-09-01 Amazon Technologies, Inc. Verifying bin content in an automated materials handling facility
US9174800B1 (en) * 2014-03-25 2015-11-03 Amazon Technologies, Inc. Verifying bin content in a materials handling facility
CN113283478B (en) * 2021-05-10 2022-09-09 青岛理工大学 Assembly body multi-view change detection method and device based on feature matching



Similar Documents

Publication Publication Date Title
Srivastava et al. Comparative analysis of deep learning image detection algorithms
US10282589B2 (en) Method and system for detection and classification of cells using convolutional neural networks
CN110135503B (en) Deep learning identification method for parts of assembly robot
CN113935997B (en) Image processing method, storage medium and image processing device for detecting material
WO2017078886A1 (en) Generic mapping for tracking target object in video sequence
US20230201973A1 (en) System and method for automatic detection of welding tasks
CN114187442A (en) Image processing method, storage medium, electronic device, and image processing apparatus
CN113936220B (en) Image processing method, storage medium, electronic device, and image processing apparatus
Stenroos Object detection from images using convolutional neural networks
Suzuki et al. Superpixel convolution for segmentation
Panta et al. IterLUNet: Deep learning architecture for pixel-wise crack detection in levee systems
CN114067171A (en) Image recognition precision improving method and system for overcoming small data training set
WO2024078112A1 (en) Method for intelligent recognition of ship outfitting items, and computer device
CN114092817B (en) Target detection method, storage medium, electronic device, and target detection apparatus
CN114187211A (en) Image processing method and device for optimizing image semantic segmentation result
Kälber et al. U-Net based Zero-hour Defect Inspection of Electronic Components and Semiconductors.
CN116385466A (en) Method and system for dividing targets in image based on boundary box weak annotation
CN113936253A (en) Material conveying operation cycle generation method, storage medium, electronic device and device
KR102462733B1 (en) Robust Multi-Object Detection Apparatus and Method Using Siamese Network
CN114241262A (en) Sucker work cycle generation method, storage medium, electronic device and device
CN114170194A (en) Image processing method, storage medium and device for automatic detection of scrap steel parts
Li et al. MDM-YOLO: Research on Object Detection Algorithm Based on Improved YOLOv4 for Marine Organisms
CN113963280B (en) Identification method and device for intelligent detection and judgment of material and part and storage medium
CN112949634B (en) Railway contact net nest detection method
Afonso Learning to detect defects in industrial production lines from a few examples

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant