CN113420776A - Multi-side joint detection article classification method based on model fusion - Google Patents

Multi-side joint detection article classification method based on model fusion

Info

Publication number
CN113420776A
Authority
CN
China
Prior art keywords
model
fusion
image acquisition
image
turntable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110359605.XA
Other languages
Chinese (zh)
Inventor
安康
林雪松
柳晖
刘翔鹏
管西强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Normal University
University of Shanghai for Science and Technology
Original Assignee
Shanghai Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Normal University filed Critical Shanghai Normal University
Priority to CN202110359605.XA
Publication of CN113420776A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B07 SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00 Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/34 Sorting according to other particular properties
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Abstract

The invention discloses a multi-side joint detection article classification method based on model fusion, which comprises the following steps: images of the article to be classified are collected from different viewing angles and each image is input into processing model I to obtain a prediction probability matrix; the prediction probability matrices are fused to obtain a fusion matrix; finally, the fusion matrix is input into processing model II to obtain the classification result of the article to be classified. The method realizes article identification through deep learning with multi-model fusion and achieves high accuracy; the article identification model is built on neural networks and therefore has strong feature-extraction capability; and because images of multiple sides are input jointly, multiple features of one part are obtained simultaneously, which is particularly effective for classifying similar parts and gives the method broad application prospects.

Description

Multi-side joint detection article classification method based on model fusion
Technical Field
The invention belongs to the technical field of visual inspection and relates to a multi-side joint detection article classification method based on model fusion, and in particular to a method that completes article classification by applying convolutional neural networks with model fusion after acquiring images of multiple surfaces of an article.
Background
Article classification has long been a hot topic in image-oriented artificial intelligence research, and how to classify articles quickly and accurately from images is widely discussed. The development of deep learning in recent years has brought new methods to the field; compared with the hand-crafted feature design and extraction of traditional methods, it greatly simplifies manual work. From the earliest LeNet to the recent DenseNet, network performance has kept improving, the vanishing-gradient problem has been alleviated, and model depth keeps growing. In the classification of similar articles, however, performance is still not ideal: articles within the same broad category share most global features while differing only in details; moreover, images of different articles taken from certain angles can look identical, which introduces label errors during training and prevents the network from learning the differences between articles. In other words, identification and classification cannot be completed reliably from a single viewing angle, and similar articles are easily confused.
Therefore, the development of a method capable of realizing rapid and accurate classification of articles is of great practical significance.
Disclosure of Invention
The invention aims to overcome the poor identification and classification performance of existing methods and provides a method capable of classifying articles quickly and accurately.
In order to achieve the purpose, the invention provides the following technical scheme:
a multi-side joint detection article classification method based on model fusion comprises the following steps:
collecting images of the article to be classified from different viewing angles and inputting each image into the processing model I to obtain a prediction probability matrix; fusing the prediction probability matrices to obtain a fusion matrix; and finally inputting the fusion matrix into a processing model II to obtain the classification result of the article to be classified;
the processing model I is a DenseNet model; its input is an article image and its output is the corresponding article category; training uses article images of known category as the training set and continuously adjusts the model parameters until the upper limit of training epochs is reached;
the processing model II is a BP neural network (its initial weights W are drawn randomly from a Gaussian distribution); its input is the fusion matrix of the images of an article and its output is the corresponding article category; the training process uses a sample set containing the fusion matrices of the images of articles of known category together with the corresponding categories as the training set and continuously adjusts the model parameters until the upper limit of training epochs (for example, 50) is reached, wherein the fusion matrix of the images of an article of known category is the matrix obtained by fusing the prediction probability matrices produced when those images are input into processing model I. The BP neural network uses the Adam optimizer with a learning rate of 0.001.
The multi-side joint detection article classification method based on model fusion acquires article images from multiple angles (so that more features are captured), processes the image from each image acquisition device with a DenseNet model, fuses the results and inputs them into a BP neural network, and thereby completes article identification by deep learning with multi-model fusion, with high classification accuracy and broad application prospects.
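The inference flow described above can be summarized with the short sketch below. It is a minimal illustration only, assuming two already-trained Keras models saved under hypothetical file names; the function and variable names are not from the patent.

```python
import numpy as np
from tensorflow.keras.models import load_model

# Hypothetical file names; the patent does not specify how the trained models are stored.
model_I = load_model("densenet121_model_I.h5")   # per-image classifier (processing model I)
model_II = load_model("bp_model_II.h5")          # fusion classifier (processing model II)

def classify_article(view_images):
    """view_images: list of preprocessed images (80 x 80 x 3), one per camera/view."""
    # Processing model I: one prediction probability vector per view.
    probs = [model_I.predict(img[np.newaxis, ...])[0] for img in view_images]
    # Fusion by dimension extension (concatenation), one of the fusion forms described below.
    fusion = np.concatenate(probs)                       # shape: (n_views * n_classes,)
    # Processing model II: final classification from the fused vector.
    final_probs = model_II.predict(fusion[np.newaxis, :])[0]
    return int(np.argmax(final_probs))                   # predicted class index
```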
As a preferred technical scheme:
the method for classifying the multi-side joint detection articles based on the model fusion is characterized in that the acquisition of the images of the articles to be classified at different viewing angles is completed by a visual image acquisition system, and the method specifically comprises the following operations: placing the articles to be classified on the center of a turntable of a visual image acquisition system, and starting all image acquisition equipment of the visual image acquisition system to start to acquire images of the articles to be classified;
the visual image acquisition system comprises a turntable for placing articles to be classified, wherein the turntable is connected with a turntable driving device and can horizontally rotate under the driving of the turntable driving device; the position of the steerable part of carousel increases data abundance and reliability, compares and only adopts the overlook image to discern in traditional machine vision, and the image of many visual angles can provide more part information, is favorable to the degree of depth learning model to learn more complete part information to can prevent the overfitting, promote the model generalization ability.
Two or more image acquisition devices whose optical centers are aimed at the center of the turntable are arranged around the turntable; the devices are located above the turntable, placed in different directions around it, and mounted at different heights relative to the turntable;
the image acquisition equipment and the turntable driving device are respectively connected with the central processing unit.
Completing image acquisition with a general-purpose visual image acquisition system eliminates systematic errors to a certain extent and offers good reliability and adaptability (it can be applied widely to various devices).
In the method for classifying multi-side joint detection articles based on model fusion, the images of an article of known category are obtained by placing the article at the center of the turntable of the visual image acquisition system and starting the system to image it; the category probability corresponding to the article of known category is therefore determined.
In the multi-side joint detection article classification method based on model fusion, the central processing unit processes the images collected by the image acquisition devices using the observation-angle information of each device, which allows the data to be expanded effectively.
Specifically, the data are expanded as follows:
each image is preprocessed to 80 x 80 pixels and then subjected to data enhancement: each image is treated as an 80 x 80 matrix and shift, rotation, mirroring and flipping operations are applied at random within fixed ranges (rotation within 10 degrees counterclockwise or clockwise, and horizontal or vertical shift within a proportion of 0.1 of the image size), so that each image yields N brand-new images and the database is expanded N times (for example, the database can be expanded 10 times). A possible implementation is sketched below.
In the model fusion-based multi-side joint detection article classification method, the enhanced data are translated by at most 20% left-right or up-down, or rotated by a random angle of at most 30 degrees clockwise or counterclockwise, relative to the original state of the object; this is only one feasible technical scheme, and a person skilled in the art can generate enhanced data through translation and rotation operations according to actual needs.
In the multi-side joint detection article classification method based on model fusion, the turntable is arranged in a frame, the image acquisition devices are fixed on the frame, and a light source is also fixed on the frame;
a black back plate is arranged below the rotary disc;
the turntable driving device is a driving motor.
In the multi-side joint detection article classification method based on model fusion, the light source is arranged above the turntable, and a soft light cover is fitted over the frame;
the frame is a square frame;
the image acquisition equipment is total three, arranges respectively in the A side of frame, B side and top, and the difference in height of three image acquisition equipment and carousel is all different, and A side and B side are mutually perpendicular.
In the multi-side joint detection article classification method based on model fusion, the corners of the frame have rounded transitions, and the frame is assembled from a number of aluminum alloy square tubes fixed together.
In the method for classifying articles by multi-side joint detection based on model fusion, the processing model I is specifically a DenseNet121 network model: based on DenseNet, the network has 121 layers in total and adopts a repeated Input-BN-Dropout(0.4)-Dense100 structure (Input denotes the input layer, BN a batch normalization layer, Dropout a random weight-dropping layer whose number gives the dropping proportion, and Dense a fully-connected layer whose number gives the number of neurons in that layer); the structure is repeated 4 times, a Dense layer is used as the last layer, and the activation function computes the classification probability of each class with a softmax classifier:
y_i = exp(z_i) / Σ_j exp(z_j), where z_i denotes the output of the last layer for class i;
the final prediction is obtained from the probabilities y_i: the index i of the maximum probability y_i in the prediction probability matrix is the prediction result;
judging whether the model has finished training according to the prediction results during the model training process; if so, the DenseNet121 model and its parameters are saved, otherwise training continues;
the DenseNet121 network model performs feature extraction, computing its convolutional, fully-connected and pooling layers by forward propagation;
a set of data used to train the processing model I consists of images of articles of known category acquired by the visual image acquisition system together with their corresponding categories.
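A possible Keras sketch of processing model I under the structure described above is given below. The 80 x 80 input size, the 100 output classes and the repeated BN-Dropout(0.4)-Dense(100) head follow the text; the ReLU activation of the intermediate Dense layers and the use of a global-average-pooled DenseNet121 backbone are assumptions.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

NUM_CLASSES = 100            # number of part categories used in the experiments
INPUT_SHAPE = (80, 80, 3)

# DenseNet121 backbone for feature extraction (convolutional and pooling layers).
backbone = DenseNet121(include_top=False, weights=None,
                       input_shape=INPUT_SHAPE, pooling="avg")

# Classification head: repeated BN-Dropout(0.4)-Dense(100) blocks, softmax on the last Dense layer.
x = backbone.output
for _ in range(4):
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.4)(x)
    x = layers.Dense(100, activation="relu")(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model_I = models.Model(inputs=backbone.input, outputs=outputs)
model_I.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```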
In the above method for classifying articles by multi-side joint detection based on model fusion, a set of data used to train processing model II consists of the matrix obtained by fusing the outputs produced when the images of an article of known category, collected by each image acquisition device of the visual image acquisition system, are input into processing model I, together with the corresponding category.
The input of processing model II is the result of fusing the outputs obtained when the pictures collected by the different image acquisition devices are processed by the DenseNet121 network model. Taking three image acquisition devices as an example, the three images are input into processing model I to obtain the corresponding output vectors, which are saved as camera0.npy, camera1.npy and camera2.npy respectively; the three files are then combined into one numpy matrix named camera.npy (i.e. the input of processing model II), and the article category corresponding to the data (camera.npy) used during training is known. A sketch of this combination step follows; the detailed fusion forms are given after it.
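A minimal numpy sketch of building camera.npy from the three per-camera outputs, assuming each .npy file holds one prediction probability vector per sample (shape: number of samples x number of classes); the file names follow the text, and the axis choice is an assumption for the batched case.

```python
import numpy as np

# Per-camera prediction probability matrices produced by processing model I.
camera0 = np.load("camera0.npy")
camera1 = np.load("camera1.npy")
camera2 = np.load("camera2.npy")

# Combine the three outputs into one matrix that serves as the input of processing model II.
# Concatenating along the feature axis gives shape (number of samples, 3 * number of classes).
camera = np.concatenate([camera0, camera1, camera2], axis=1)
np.save("camera.npy", camera)
```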
The fusion process (dimension extension) is specifically as follows:
the prediction result for the first image is A = [a1, a2, ..., an], and those of the second and third images are B = [b1, b2, ..., bn] and C = [c1, c2, ..., cn] respectively, where the numerical subscript in each matrix is the label of a part and the entries are the probabilities of predicting the image as each part. The matrices are merged in the 0th dimension: combining the first and second matrices, each of dimension (number of pictures) x 100, gives a matrix of dimension (number of pictures) x 200, and merging that result with the remaining 100-wide matrix in the same dimension gives the final feature input of dimension (number of pictures) x 300. Let the fused feature input be D1; then
D1 = concatenate(A, B, C)
and the processing result D1 of one image is [a1, a2, ..., an, b1, b2, ..., bn, c1, c2, ..., cn].
The fusion process is not limited to this; fusion can also be performed by feature addition, as follows:
D2 = A + B + C
The result D2 is [a1+b1+c1, a2+b2+c2, ..., an+bn+cn].
The dimension-extension and feature-addition forms can also be superposed, as follows:
D3 = concatenate(D1, D2)
The result D3 is [a1, a2, ..., an, ..., c1, ..., cn, a1+b1+c1, a2+b2+c2, ..., an+bn+cn]; a numpy sketch of the three fusion forms follows.
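The three fusion forms can be sketched with numpy as shown below; the random vectors stand in for the softmax outputs of processing model I for one article, and n = 100 follows the experiments.

```python
import numpy as np

n = 100                                   # number of part categories
rng = np.random.default_rng(0)

# Illustrative prediction probability vectors for one article from the three views.
A = rng.random(n); A /= A.sum()
B = rng.random(n); B /= B.sum()
C = rng.random(n); C /= C.sum()

D1 = np.concatenate([A, B, C])            # dimension extension: length 3n (300)
D2 = A + B + C                            # feature addition: length n (100)
D3 = np.concatenate([D1, D2])             # both superposed: length 4n (400)
```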
According to the above multi-side joint detection article classification method based on model fusion, the image acquired by the image acquisition device needs to be preprocessed as follows before application:
(1) graying;
(2) removing image noise with a 3 x 3 Gaussian blur;
(3) performing edge detection with the Canny operator, with its two thresholds set to 25 and 150, to find the edge of the whole article;
(4) finding the minimum enclosing square from the edges and cropping it out;
(5) scaling the square image to a suitable size (specifically 80 x 80 pixels) by bilinear interpolation; a sketch of these steps follows.
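A possible OpenCV sketch of steps (1) to (5) is given below; the way the minimum enclosing square is taken around the detected edges is a simplified assumption, and the function name is illustrative.

```python
import cv2
import numpy as np

def preprocess(image_bgr):
    """Preprocess one acquired image following steps (1)-(5)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)         # (1) graying
    blurred = cv2.GaussianBlur(gray, (3, 3), 0)                # (2) 3 x 3 Gaussian blur
    edges = cv2.Canny(blurred, 25, 150)                        # (3) Canny edge detection, thresholds 25 and 150
    ys, xs = np.nonzero(edges)                                 # (4) minimum enclosing square around the edges
    side = max(xs.max() - xs.min(), ys.max() - ys.min())
    cx, cy = (xs.min() + xs.max()) // 2, (ys.min() + ys.max()) // 2
    half = side // 2 + 1
    crop = gray[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
    return cv2.resize(crop, (80, 80), interpolation=cv2.INTER_LINEAR)  # (5) bilinear scaling to 80 x 80
```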
Beneficial effects:
(1) the multi-side joint detection article classification method based on model fusion of the invention realizes article identification through deep learning with multi-model fusion and achieves high accuracy;
(2) the method builds the article identification model on neural networks and therefore has strong feature-extraction capability;
(3) by jointly inputting images of multiple sides, the method can obtain multiple features of one part simultaneously, which is particularly effective for classifying similar parts and gives the method broad application prospects.
Drawings
FIG. 1 is a schematic diagram of the overall structure of a visual image acquisition system according to the present invention;
FIG. 2 is a schematic flow chart of a multi-side joint detection article classification method based on model fusion according to the present invention;
FIG. 3 is a schematic diagram of the process and effect of image preprocessing;
FIG. 4 is a flow chart of the processing of data through processing model I → processing model II;
FIG. 5 is a diagram of a BP neural network architecture;
FIG. 6 is a graph showing the test results.
Detailed Description
The present invention will be described in more detail with reference to the accompanying drawings, in which embodiments of the invention are shown and described, and it is to be understood that the embodiments described are merely illustrative of some, but not all embodiments of the invention.
In the description of the present invention, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention.
A multi-side joint detection article classification method based on model fusion, the steps of which are shown in fig. 2 (the first layer model and the second layer model in fig. 2 correspond to a processing model I and a processing model II):
(1) placing the articles to be classified on the center of a turntable of a visual image acquisition system, and starting the visual image acquisition system to acquire images of the articles to be classified;
a visual image acquisition system, as shown in fig. 1, comprising a turntable for placing the article to be imaged; the turntable is connected with a turntable driving device (a driving motor) and is driven by it to rotate horizontally; the turntable is arranged in a frame (a square frame assembled from a number of aluminum alloy square tubes fixed together, with the corners in rounded transitions); a black back plate is arranged below the turntable, and a soft light cover is fitted over the frame;
three image acquisition devices whose optical centers are aimed at the center of the turntable are arranged around it; the devices are located above the turntable and fixed on the frame, on side A, side B (side A perpendicular to side B) and the top of the frame respectively, with the height differences between the three devices and the turntable all different; a light source is also fixed on the frame, arranged above the turntable;
the image acquisition devices and the turntable driving device are each connected to the central processing unit, which processes the pictures collected by the devices using their observation-angle information so as to expand the data effectively;
(2) the image acquired by the image acquisition device is preprocessed, specifically as shown in fig. 3:
(2.1) graying;
(2.2) removing image noise with a 3 x 3 Gaussian blur;
(2.3) performing edge detection with the Canny operator, with its two thresholds set to 25 and 150, to find the edge of the whole article;
(2.4) finding the minimum enclosing square from the edges and cropping it out;
(2.5) scaling the square image to a proper size (specifically 80 x 80 pixels) by a bilinear interpolation method;
(3) respectively inputting the images acquired by each image acquisition device into the processing model I to obtain a prediction probability matrix corresponding to each image acquisition device, wherein the processing flows of the steps (3) to (5) are shown in FIG. 4;
the processing model I is a DenseNet121 network model. Its training takes images of articles of known category as input and the corresponding category probabilities as theoretical output, continuously adjusting the model parameters; training terminates when the upper limit of training epochs (50) is reached. A set of data used to train processing model I (the training set, obtained by acquiring 50 or more images of size 640 x 480 pixels with the visual image acquisition system and then applying data enhancement) consists of images of articles of known category acquired by the visual image acquisition system together with their corresponding categories; the images of an article of known category are obtained by placing the article at the center of the turntable of the visual image acquisition system and starting the system to image it;
(4) fusing the prediction probability matrixes obtained in the step (3) to obtain a fusion matrix;
(5) inputting the fusion matrix obtained in the step (4) into a processing model II to obtain a classification result of the articles to be classified;
the processing model II is a BP neural network (as shown in fig. 5). Its training takes the fusion matrix of the images of an article of known category as input and the probability of the corresponding category as theoretical output, continuously adjusting the model parameters; training terminates when the upper limit of training epochs (50) is reached. The network adopts a repeated Input-BN-Dropout(0.4)-Dense100 structure (Input denotes the input layer, BN a batch normalization layer, Dropout a random weight-dropping layer whose number gives the dropping proportion, and Dense a fully-connected layer whose number gives the number of neurons in that layer); the structure is repeated 4 times, a Dense layer with softmax activation is used as the last layer, and the BP neural network uses the Adam optimizer with a learning rate of 0.001. A set of data used to train processing model II consists of the matrix obtained by fusing the outputs produced when the images of an article of known category, collected by the image acquisition devices of the visual image acquisition system, are input into processing model I, together with the corresponding category. The fusion matrix of the images of an article of known category is obtained by fusing the prediction probability matrices produced when those images are input into processing model I: the prediction probability matrices of the individual images are A1, A2 and A3, and the fusion matrix is the matrix B obtained by connecting A1, A2 and A3 through matrix dimension extension. A sketch of this network follows.
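The following Keras sketch shows processing model II under the structure just described. The repeated BN-Dropout(0.4)-Dense(100) blocks, the softmax output, the Adam optimizer with learning rate 0.001, the Gaussian weight initialization and the 50 epochs follow the text; the ReLU activation, the standard deviation of the initializer and the 300-dimensional fused input are assumptions.

```python
from tensorflow.keras import layers, models, optimizers, initializers

NUM_CLASSES = 100
FUSED_DIM = 300      # dimension-extended fusion of three 100-class probability vectors

# Gaussian (normal) random weight initialization, as stated for the BP network.
init = initializers.RandomNormal(mean=0.0, stddev=0.05)

inputs = layers.Input(shape=(FUSED_DIM,))
x = inputs
for _ in range(4):                         # repeated Input-BN-Dropout(0.4)-Dense100 structure
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.4)(x)
    x = layers.Dense(100, activation="relu", kernel_initializer=init)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax", kernel_initializer=init)(x)

model_II = models.Model(inputs, outputs)
model_II.compile(optimizer=optimizers.Adam(learning_rate=0.001),
                 loss="categorical_crossentropy",
                 metrics=["accuracy"])
# model_II.fit(camera, labels, epochs=50)  # 50 training epochs, as in the text
```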
The above examples specifically employ the following schemes:
building an experimental platform:
a collector is built on a frame: three cameras are mounted directly above the part, inclined 45 degrees toward its front and inclined 45 degrees toward its rear respectively; a remotely controllable turntable is placed in the middle; lamp tubes and a rheostat are mounted around the frame so that the lighting can be adjusted. The computer used for the experiments runs a Windows 7 system with an NVIDIA 1080 Ti graphics card and VS Code software;
data set and experimental design:
data set:
the collected data scenarios include: after a part is placed, its pose is changed in the three directions and the next group of images is shot. The number of part types collected in the experiment is 100, with 30 or more images collected per type; the images are then divided into a training set and a test set, the training set is enhanced by a factor of 10 or more, and it is input into the model for training;
experiment design:
to test the performance of the method, an experiment was designed:
the accuracy of the deep learning method based on model fusion is compared with that of the traditional single-model prediction method; the models are trained on the training set for 50 epochs, the learning rate of the second-layer BP neural network is set to 0.001, and the Adam optimizer is used; the tests show that the accuracy of the model fusion method is better than that of ordinary single-model prediction;
Test results:
the experimental test set uses pictures of 100 parts, 2 pictures each. With the traditional method the test accuracy is 88.5%. When the features output by the first layer for several images are fused (adding the three prediction matrices, fusing the three prediction matrices into one matrix by dimension extension, or fusing the results of the two previous forms by a further dimension extension), the experimental results are as shown in figure 6: the deep learning method based on model fusion achieves a part prediction accuracy of up to 98%, an improvement of nearly 10 percentage points.
This embodiment shows that detecting parts with the deep learning method based on model fusion, in which the first-layer DenseNet121 makes a preliminary prediction from images taken from three angles and the fused prediction results of the multiple images are input into the second-layer BP neural network to obtain the final prediction, achieves high part identification precision, effectively improves the identification precision of the parts, and safeguards the processes that follow part identification in production.
Verification shows that the multi-side joint detection article classification method based on model fusion of the invention realizes article identification through deep learning with multi-model fusion and achieves high accuracy; the article identification model is built on neural networks and has strong feature-extraction capability; and by jointly inputting images of multiple sides, multiple features of one part are obtained simultaneously, which is particularly effective for classifying similar parts and gives the method broad application prospects.
Although specific embodiments of the present invention have been described above, it will be appreciated by those skilled in the art that these embodiments are merely illustrative and various changes or modifications may be made without departing from the principles and spirit of the invention.

Claims (10)

1. A multi-side joint detection article classification method based on model fusion is characterized by comprising the following steps:
collecting images of the article to be classified from different viewing angles and inputting each image into the processing model I to obtain a prediction probability matrix; fusing the prediction probability matrices to obtain a fusion matrix; and finally inputting the fusion matrix into a processing model II to obtain the classification result of the article to be classified;
the processing model I is a DenseNet model; its input is an article image and its output is the corresponding article category; training uses article images of known category as the training set and continuously adjusts the model parameters until the upper limit of training epochs is reached;
the processing model II is a BP neural network; its input is the fusion matrix of the images of an article and its output is the corresponding article category; the training process uses a sample set containing the fusion matrices of the images of articles of known category together with the corresponding categories as the training set and continuously adjusts the model parameters until the upper limit of training epochs is reached, wherein the fusion matrix of the images of an article of known category is the matrix obtained by fusing the prediction probability matrices produced when those images are input into the processing model I.
2. The method for classifying multi-side joint detection articles based on model fusion according to claim 1, wherein the collecting of images of the article to be classified from different viewing angles is accomplished by a visual image acquisition system, specifically as follows: the article to be classified is placed at the center of the turntable of the visual image acquisition system, and all image acquisition devices of the system are started to collect images of the article to be classified;
the visual image acquisition system comprises a turntable for placing articles to be classified, wherein the turntable is connected with a turntable driving device and can horizontally rotate under the driving of the turntable driving device;
two or more image acquisition devices whose optical centers are aimed at the center of the turntable are arranged around the turntable; the devices are located above the turntable, placed in different directions around it, and mounted at different heights relative to the turntable;
the image acquisition equipment and the turntable driving device are respectively connected with the central processing unit.
3. The method according to claim 2, wherein the images of an article of known category are obtained by placing the article of known category at the center of the turntable of the visual image acquisition system and starting the system to image it.
4. The method for multi-side joint detection article classification based on model fusion according to claim 1, wherein the central processing unit processes the images collected by the image acquisition devices using the observation-angle information of each device, so that the data can be expanded effectively.
5. The method for multi-side joint detection article classification based on model fusion according to claim 1, wherein a frame covering the turntable is arranged outside the turntable, all image acquisition devices are fixed to the frame, and a light source is also fixed on the frame;
the bottom of the rotary table is provided with a black back plate;
the turntable driving device is a driving motor.
6. The method for multi-side joint detection article classification based on model fusion according to claim 5, wherein the light source is arranged above the turntable, and a soft light cover is fitted over the frame;
the frame is a square frame;
there are three image acquisition devices in total, arranged on side A, side B and the top of the frame respectively; the height differences between the three devices and the turntable are all different, and side A is perpendicular to side B.
7. The method for multi-side joint detection and article classification based on model fusion as claimed in claim 6, wherein corners of the frame are rounded; the frame is formed by fixedly splicing a plurality of aluminum alloy square tubes.
8. The method for multi-side joint detection and article classification based on model fusion as claimed in claim 3, wherein the processing model I is a Densenet121 network model;
the set of data sets used to train the processing model I includes images of items of known class and their corresponding classes acquired by the visual image acquisition system.
9. The method according to claim 3, wherein a set of data used to train the processing model II consists of the matrix obtained by fusing the outputs produced when the images of an article of known category, collected by each image acquisition device of the visual image acquisition system, are input into the processing model I, together with the corresponding category.
10. The method for multi-side joint detection and article classification based on model fusion as claimed in claim 2, wherein the image collected by the image collecting device needs to be preprocessed before application as follows:
(1) graying;
(2) removing image noise;
(3) using a canny operator to carry out edge detection;
(4) finding out the minimum external square according to the edge, and intercepting the whole external square;
(5) and scaling the square image to a proper size by a bilinear interpolation method.
CN202110359605.XA 2021-04-02 2021-04-02 Multi-side joint detection article classification method based on model fusion Pending CN113420776A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110359605.XA CN113420776A (en) 2021-04-02 2021-04-02 Multi-side joint detection article classification method based on model fusion

Publications (1)

Publication Number Publication Date
CN113420776A true CN113420776A (en) 2021-09-21

Family

ID=77711997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110359605.XA Pending CN113420776A (en) 2021-04-02 2021-04-02 Multi-side joint detection article classification method based on model fusion

Country Status (1)

Country Link
CN (1) CN113420776A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116151093A (en) * 2022-11-28 2023-05-23 小米汽车科技有限公司 Method for acquiring part model, method for detecting part and related equipment thereof
CN116740549A (en) * 2023-08-14 2023-09-12 南京凯奥思数据技术有限公司 Vehicle part identification method and system
CN116740549B (en) * 2023-08-14 2023-11-07 南京凯奥思数据技术有限公司 Vehicle part identification method and system

Similar Documents

Publication Publication Date Title
CN109523552B (en) Three-dimensional object detection method based on viewing cone point cloud
CN108491880B (en) Object classification and pose estimation method based on neural network
Anwar et al. Image colorization: A survey and dataset
Byravan et al. Se3-nets: Learning rigid body motion using deep neural networks
CN107953329B (en) Object recognition and attitude estimation method and device and mechanical arm grabbing system
CN109410168B (en) Modeling method of convolutional neural network for determining sub-tile classes in an image
CN112801015B (en) Multi-mode face recognition method based on attention mechanism
CN110032925B (en) Gesture image segmentation and recognition method based on improved capsule network and algorithm
CN109766873B (en) Pedestrian re-identification method based on hybrid deformable convolution
CN109740539B (en) 3D object identification method based on ultralimit learning machine and fusion convolution network
CN111709980A (en) Multi-scale image registration method and device based on deep learning
CN113420776A (en) Multi-side joint detection article classification method based on model fusion
CN110674741A (en) Machine vision gesture recognition method based on dual-channel feature fusion
CN110827304A (en) Traditional Chinese medicine tongue image positioning method and system based on deep convolutional network and level set method
CN112001219A (en) Multi-angle multi-face recognition attendance checking method and system
CN113780423A (en) Single-stage target detection neural network based on multi-scale fusion and industrial product surface defect detection model
CN116188825A (en) Efficient feature matching method based on parallel attention mechanism
CN111310720A (en) Pedestrian re-identification method and system based on graph metric learning
CN111340878A (en) Image processing method and device
CN109919215B (en) Target detection method for improving characteristic pyramid network based on clustering algorithm
CN112364881A (en) Advanced sampling consistency image matching algorithm
CN114663880A (en) Three-dimensional target detection method based on multi-level cross-modal self-attention mechanism
CN112802048B (en) Method and device for generating layer generation countermeasure network with asymmetric structure
Chiu et al. See the difference: Direct pre-image reconstruction and pose estimation by differentiating hog
CN110717910B (en) CT image target detection method based on convolutional neural network and CT scanner

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination