CN115565115A - Outfitting intelligent identification method and computer equipment - Google Patents


Info

Publication number
CN115565115A
CN115565115A (application CN202211235668.5A)
Authority
CN
China
Prior art keywords
image
neural network
convolutional neural
network model
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211235668.5A
Other languages
Chinese (zh)
Inventor
甄希金
续爱民
张盈彬
郭威
明星
骆晓萌
Current Assignee
Shanghai Shipbuilding Technology Research Institute
Original Assignee
Shanghai Shipbuilding Technology Research Institute
Priority date
Filing date
Publication date
Application filed by Shanghai Shipbuilding Technology Research Institute filed Critical Shanghai Shipbuilding Technology Research Institute
Priority to CN202211235668.5A priority Critical patent/CN115565115A/en
Publication of CN115565115A publication Critical patent/CN115565115A/en
Priority to PCT/CN2023/112528 priority patent/WO2024078112A1/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

The invention provides an outfitting intelligent identification method for a ship outfitting warehouse, comprising the following steps: an image acquisition step, in which a large number of images that either contain or do not contain a target object are acquired; an image preprocessing step, in which the acquired images are processed using linear operations, logical operations, spatial operations and image transformations to form input sample tensor data; a sample data set acquisition step, in which the received input sample tensor data is simplified and integrated into input sample data, which is divided into training set data samples and test set data samples obtained from the acquired images; a feature extraction step, in which a convolutional neural network model is established, trained and evaluated to obtain a convolutional neural network model matching the training set data samples; a model calibration step, in which optimization training is performed to improve accuracy; and an image recognition step, in which the target object is recognized. The invention can accurately identify, in a very short time, whether the content of a picture is the target object.

Description

Outfitting intelligent identification method and computer equipment
Technical Field
The invention relates to the technical field of intelligent ship manufacturing, and in particular to an outfitting intelligent identification method and computer equipment for an intelligent ship outfitting warehouse.
Background
In shipbuilding enterprises, the stacking of outfitting equipment involves a series of operations, including the gathering and distribution of the equipment, logistics transfer, information marking, storage yard planning, and warehousing and use. Some equipment is imported, so the relevant information is described in English, and on-site material receiving and installation personnel find it difficult to accurately identify at first sight the detailed information of the equipment, the project it belongs to, and how it should be installed and commissioned. The accuracy of outfitting equipment information is therefore very important. Leading enterprises in the industry have introduced intelligent three-dimensional outfitting warehouses to realize unmanned outfitting storage management, which requires that outfitting information be identified accurately at the goods receiving and warehousing stage; an image identification method that recognizes equipment quickly during use would improve management efficiency.
Disclosure of Invention
The invention aims to provide an outfitting intelligent identification method and computer equipment for a ship outfitting warehouse that accurately identify, in a very short time, whether target outfitting is present in a picture, solving the technical problem that existing intelligent three-dimensional outfitting warehouses require accurate identification of outfitting information at the goods receiving and warehousing stage.
To achieve this purpose, the technical scheme of the invention is as follows:
an outfitting intelligent identification method for a ship outfitting warehouse is characterized by comprising the following steps:
step S11), an image acquisition step, in which images are acquired facing a ship outfitting warehouse; a large number of images are acquired, each containing or not containing the target object, and the target objects include outfitting parts;
step S12), an image preprocessing step, namely processing the acquired image by utilizing linear operation, logical operation, spatial operation and image transformation of the image to form input sample tensor data;
step S13) a sample data set obtaining step, namely receiving tensor data of input samples, simplifying the tensor data of the input samples to integrate the tensor data of the input samples into input sample data, wherein the input sample data is divided into training set data samples and testing set data samples which are obtained from collected images;
step S14), a characteristic extraction step, namely establishing a convolutional neural network model, calling the sample data set to train and evaluate the model, and obtaining a convolutional neural network model matched with the training set data sample;
step S15), a model calibration step, namely performing optimization training on the convolutional neural network model matched with the training set data sample to improve the accuracy of the convolutional neural network model matched with the training set data sample;
and S16) image identification, namely identifying the target object by using the calibrated convolutional neural network model matched with the training set data sample.
Further, step S12 further includes arithmetic operations on the image: pixel values of the acquired image are stored in an array, and array addition, subtraction, multiplication and division operate directly on pairs of elements at the same position, where addition is used for image noise reduction, subtraction is used for enhancing differences between images, and image multiplication or division is used for shading correction.
Further, in step S12, the linear operation of the image is defined as follows: suppose the operator H acting on an image f(x, y) is such that H[f(x, y)] = g(x, y) and satisfies H[a_i·f_i(x, y) + a_j·f_j(x, y)] = a_i·H[f_i(x, y)] + a_j·H[f_j(x, y)] = a_i·g_i(x, y) + a_j·g_j(x, y), where a_i and a_j are arbitrary constants and f_i and f_j are any two images of the same size; then H is a linear operation;
the logical operations on the image include image set operations, namely the intersection, union and complement of images, and the logical operation modes include AND, OR, NOT and XOR;
the spatial operations of the image include single pixel operations, neighborhood operations, and geometric spatial transformations.
Further, in step S12, the image preprocessing step further includes: image cropping, image resizing, conversion of image data into tensors, and data standardization.
Further, in step S14, the feature extracting step includes:
constructing a training model aiming at a training set data sample;
calling the training set data samples to train the training model until convergence so as to generate a convolutional neural network model to be evaluated, wherein the convolutional neural network model is suitable for the training set data samples;
evaluating the convolutional neural network model to be evaluated, and obtaining a convolutional neural network model matched with a training set data sample after the convolutional neural network model to be evaluated meets a preset evaluation standard;
and inputting the test set data sample into a convolutional neural network model matched with the training set data sample to predict the attribute characteristics of the test set data sample and obtain the accuracy of the convolutional neural network model matched with the training set data sample.
Further, in step S14, the constructed data training model includes: an input layer, a convolutional layer below the input layer, a pooling layer below the convolutional layer, a fully connected layer below the pooling layer, a dropout layer below the fully connected layer, and an output layer below the dropout layer.
Further, in step S14, in the process of calling the training set data sample to train the training model until convergence, the method further includes optimizing the data training model by using a pre-stored optimization function.
Further, in the step S14, the preset evaluation criterion includes a loss function; the step of evaluating the convolutional neural network model to be evaluated comprises the following steps: calculating a loss function of the convolutional neural network model to be evaluated; comparing the numerical value of the loss function of the convolutional neural network model to be evaluated with a preset loss threshold value to obtain the convolutional neural network model matching the training set data samples; and the convolutional neural network model matched with the training set data sample is a data training model corresponding to the minimum numerical value of the loss function.
Further, in step S15, the convolutional neural network model of the matched training set data sample is trained twice or three times by adding and modifying the data sample, so as to improve the precision of the convolutional neural network model of the matched training set data sample.
Further, a last aspect of the present invention provides an apparatus comprising: a processor and a memory; the memory is used for storing a computer program, and the processor is used for executing the computer program stored by the memory to enable the equipment to execute the outfitting intelligent identification method.
The outfitting intelligent identification method for the ship outfitting warehouse has the following beneficial effects: the identification method can accurately identify whether the content in the picture is the target object in a very short time.
Drawings
Fig. 1 is a flow chart of an intelligent outfitting identification method according to the invention;
FIG. 2 is a schematic diagram of the structure of a convolutional neural network convolutional layer;
FIG. 3 is a schematic diagram of the structure of a convolutional neural network pooling layer;
FIG. 4 is a schematic diagram of the structure of a convolutional neural network fully connected layer;
FIG. 5 is a flow chart of the feature extraction step of the present invention;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
Other advantages and capabilities of the present invention will be readily apparent to those skilled in the art from the disclosure herein, wherein the embodiments of the present invention are described below by way of specific vessel part identification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the drawings only show the components related to the present invention rather than being drawn according to the number, shape and size of the components in actual implementation, and the types, the numbers and the proportions of the components can be changed in actual implementation, and the layout of the components can be more complicated.
The embodiment provides an outfitting intelligent identification method, which comprises the following steps:
step S11), an image acquisition step, in which images are acquired facing a ship outfitting warehouse; a large number of images are acquired, each containing or not containing the target object, and the target objects include outfitting parts;
step S12), an image preprocessing step, namely processing the acquired image by utilizing linear operation, logical operation, spatial operation and image transformation of the image to form tensor data of the input sample;
step S13) a sample data set obtaining step, namely receiving tensor data of input samples, simplifying the tensor data of the input samples to integrate the tensor data of the input samples into input sample data, wherein the input sample data is divided into training set data samples and testing set data samples which are obtained from collected images;
step S14), a characteristic extraction step, namely establishing a convolutional neural network model, calling the sample data set to train and evaluate the model, and obtaining the convolutional neural network model matched with the training set data sample;
step S15), a model calibration step, namely performing optimization training on the convolutional neural network model matched with the training set data sample to improve the accuracy of the convolutional neural network model matched with the training set data sample;
and S16) image identification, namely identifying the target object by using the calibrated convolutional neural network model matched with the training set data sample.
The planning method of the invention comprises the following steps when in specific implementation:
step S11 includes: image acquisition: a number of pictures are taken, with or without the target object. In order to improve the identification accuracy, background images can be effectively distinguished, images are collected facing a ship outfitting warehouse, and the target object comprises an outfitting.
Step S12 includes: image preprocessing: arithmetic operation of images, linear operation of images, logical operation of images, spatial operation of images, image transformation.
Specifically, the pixel values of the image are stored in an array, and array addition, subtraction, multiplication and division operate directly on pairs of elements at the same position. (The original shows an element-wise array multiplication example as a figure.)
Among the arithmetic operations on images, addition may be used for image noise reduction, subtraction may be used to enhance differences between images, multiplication of a template image with the corresponding image may be used to retain a region of interest (ROI), and multiplication or division may be used for shading correction.
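As an illustrative sketch (not part of the patent text), the element-wise arithmetic described above can be expressed in NumPy; the function names below are our own:

```python
import numpy as np

def denoise_by_averaging(noisy_frames):
    """Averaging several co-registered noisy frames suppresses zero-mean noise."""
    return np.mean(np.stack(noisy_frames), axis=0)

def difference_image(img_a, img_b):
    """Subtraction highlights the differences between two images."""
    return np.abs(img_a.astype(np.int32) - img_b.astype(np.int32))

def apply_roi_mask(img, mask):
    """Multiplying by a 0/1 template image keeps only the region of interest."""
    return img * mask
```

Shading correction would follow the same pattern, dividing the image by an estimated shading field.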
Linear operation of the image: suppose the operator H acting on an image f(x, y) is such that H[f(x, y)] = g(x, y) and satisfies H[a_i·f_i(x, y) + a_j·f_j(x, y)] = a_i·H[f_i(x, y)] + a_j·H[f_j(x, y)] = a_i·g_i(x, y) + a_j·g_j(x, y), where a_i and a_j are arbitrary constants and f_i and f_j are any two images of the same size; then H is a linear operation.
The logical operations on the image include image set operations, namely the intersection, union and complement of images; the logical operation modes include AND, OR, NOT and XOR.
The spatial operations of the image include single pixel operations, neighborhood operations, and geometric spatial transformations.
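The set and logical operations above have direct NumPy equivalents on binary images; the following sketch (our own illustration, not from the patent) shows the correspondence:

```python
import numpy as np

# Binary images: True marks a foreground pixel.
a = np.array([[True, True, False, False]])
b = np.array([[True, False, True, False]])

intersection = a & b   # AND: pixels present in both images
union        = a | b   # OR:  pixels present in either image
complement   = ~a      # NOT: set complement of an image
exclusive    = a ^ b   # XOR: pixels present in exactly one image
```

Single-pixel and neighborhood operations are applied the same way, over one pixel or a sliding window respectively.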
The image preprocessing step also includes image cropping, image resizing, conversion of image data into tensors, and data standardization. Specifically, an image is cropped about its center using transforms.CenterCrop(), whose size parameter gives the crop size; the image is resized using transforms.Resize(); the image data is converted into tensors using transforms.ToTensor(); and the data is standardized using transforms.Normalize(), which can speed up the convergence of the model.
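To make the preprocessing concrete, here is a NumPy sketch of what these transforms compute; it mirrors the behavior of torchvision's CenterCrop, ToTensor and Normalize but is our own simplified illustration, not the patent's implementation:

```python
import numpy as np

def center_crop(img, size):
    """Crop a (H, W, C) image to (size, size) about its center,
    mirroring transforms.CenterCrop(size)."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def to_tensor(img):
    """Scale uint8 values in [0, 255] to floats in [0, 1] and move
    channels first, (H, W, C) -> (C, H, W), mirroring transforms.ToTensor()."""
    return img.astype(np.float32).transpose(2, 0, 1) / 255.0

def normalize(t, mean, std):
    """Per-channel standardization, mirroring transforms.Normalize()."""
    mean = np.asarray(mean, dtype=np.float32).reshape(-1, 1, 1)
    std = np.asarray(std, dtype=np.float32).reshape(-1, 1, 1)
    return (t - mean) / std
```

Standardizing the inputs in this way centers each channel around zero, which is what speeds up convergence during training.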
Step S13 includes receiving the input sample tensor data; specifically, the input sample tensor data is simplified and integrated into input sample data. The output results are labeled as follows: an image containing the target object is labeled "1", and an image not containing the target object is labeled "0". The input sample data is then divided into training set data samples and test set data samples obtained from the acquired images.
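A minimal sketch of this labeling and train/test split, under our own assumed function name and an assumed 80/20 split ratio (the patent does not specify one):

```python
import numpy as np

def make_dataset(images, contains_target, train_fraction=0.8, seed=0):
    """Label each image tensor 1 (target present) or 0 (absent), then
    shuffle and split into training set and test set data samples."""
    x = np.stack(images)
    y = np.asarray(contains_target, dtype=np.int64)
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(x))
    n_train = int(len(x) * train_fraction)
    train_idx, test_idx = order[:n_train], order[n_train:]
    return (x[train_idx], y[train_idx]), (x[test_idx], y[test_idx])
```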
The training model constructed in step S14 includes: an input layer, a convolutional layer below the input layer, a pooling layer below the convolutional layer, a fully connected layer below the pooling layer, a dropout layer below the fully connected layer, and an output layer below the dropout layer. In this embodiment, the training model is a convolutional neural network.
In this embodiment, the input layer is used to process multidimensional data. Specifically, the input layer performs a matrix transformation (a reshape operation) on the input sample tensor before passing it on.
The convolutional layer is used to extract attribute features from the sample data processed by the input layer and to output a feature map. Each convolutional layer consists of several convolution units, and the parameters of each unit are optimized by a back-propagation algorithm. The purpose of the convolution operation is to extract different features of the input: the first convolutional layer can only extract low-level features such as edges, lines and corners, while deeper convolutional layers iteratively extract more complex features from these low-level ones. A convolutional layer contains several convolution kernels, each element of which corresponds to a weight coefficient and a bias value.
The values of the convolution kernel are parameters of the network; they are initialized and then adjusted during training. The feature image and the convolution kernel both exist in matrix form and can therefore be convolved. The kernel first performs a convolution operation on the first region of the feature image, and the result becomes one point on the output feature map.
The convolution calculation process is illustrated in fig. 2: the input feature map is multiplied by the convolution kernel to produce the output. Producing a new feature map requires multiple convolution calculations between the kernel and the feature image, that is, the kernel slides over the input feature map, generally from left to right and from top to bottom. Different stride lengths produce different output feature maps.
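The sliding computation can be sketched directly in NumPy; this is our own illustration of the standard valid cross-correlation (which deep-learning frameworks call convolution), not code from the patent:

```python
import numpy as np

def conv2d(feature_map, kernel, stride=1):
    """Slide the kernel over the feature map left to right, top to bottom,
    writing one output value per position (valid padding)."""
    kh, kw = kernel.shape
    h = (feature_map.shape[0] - kh) // stride + 1
    w = (feature_map.shape[1] - kw) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = feature_map[i * stride:i * stride + kh,
                                j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out
```

Increasing the stride skips positions, so a larger stride yields a smaller output feature map, as the text notes.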
And the pooling layer is used for performing feature selection and filtering on a feature map output by the convolutional layer after feature extraction is performed on the convolutional layer.
The selection and filtering of the feature map by the pooling layer is substantially one form of down-sampling, and there are many different forms of non-linear pooling functions. An example of a maximum pooling process is shown in FIG. 3:
On the left is a feature map, and on the right are the features retained after max pooling. The max pooling process here uses a 2×2 filter with a stride of 2. If a feature is extracted within the filter window, its maximum value is retained; if the feature is not present in the region, the maximum value there remains small.
The pooling layer continually reduces the spatial size of the data, so the number of parameters and the amount of calculation also decrease, which controls overfitting to some extent.
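The 2×2, stride-2 max pooling described above can be sketched as follows (our own illustration, matching the process in Fig. 3):

```python
import numpy as np

def max_pool(feature_map, size=2, stride=2):
    """Keep the largest activation in each size x size window,
    halving each spatial dimension when size == stride == 2."""
    h = (feature_map.shape[0] - size) // stride + 1
    w = (feature_map.shape[1] - size) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = feature_map[i * stride:i * stride + size,
                                    j * stride:j * stride + size].max()
    return out
```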
The fully connected layer generally consists of two parts, a linear part and a nonlinear part. In the data training model, one or more fully connected layers follow the convolutional and pooling layers. Each neuron in a fully connected layer is fully connected to all neurons in the preceding layer.
The fully connected layer integrates the locally discriminative information produced by the convolutional or pooling layers, as in the fully connected network shown in fig. 4, which depicts the complete transformation from the input layer to hidden layer 1. The linear part works as follows: for an input vector x (n-dimensional) whose output at hidden layer 1 is z (m-dimensional), x must be transformed into an m-dimensional vector by multiplying it by an m×n matrix W and adding a bias b, i.e. W·x + b = z. The significance of this process is that each pixel is given a set of weights whose combination yields a final value; the weights are random at initialization and are learned through back-propagation. The nonlinear part generally refers to an activation function.
In order to improve the performance of the convolutional neural network, the activation functions between neurons in the full connection layer of the present invention respectively adopt a ReLU function and a Sigmoid function, as shown in fig. 4.
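The linear part W·x + b followed by the ReLU and Sigmoid activations can be sketched as (our own illustration of the standard definitions):

```python
import numpy as np

def relu(z):
    """ReLU activation: max(0, z), applied element-wise."""
    return np.maximum(0.0, z)

def sigmoid(z):
    """Sigmoid activation: 1 / (1 + e^(-z)), squashing outputs into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def fully_connected(x, W, b, activation):
    """Linear part W.x + b followed by a nonlinear activation."""
    return activation(W @ x + b)
```

The sigmoid output lies in (0, 1), which suits the final layer of this binary target/no-target classifier.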
The dropout process is used to prevent overfitting. Overfitting occurs when training goes well and the loss function value becomes very low, yet performance on the test data set is worse because the model depends too heavily on the characteristics of the existing training data. During model training, the dropout process randomly sets part of the activations to zero (making the weights of some hidden-layer nodes inoperative) to avoid overfitting. In forward propagation, the activation of a neuron is suppressed with a certain probability p, which makes the model more general because it relies less on particular local features.
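A sketch of the dropout forward pass, using the common "inverted dropout" convention (an assumption on our part; the patent does not specify the scaling scheme):

```python
import numpy as np

def dropout(activations, p, training=True, rng=None):
    """During training, zero each activation with probability p and rescale
    survivors by 1/(1-p); at prediction time, pass activations through
    unchanged, so both modes share the same weights."""
    if not training or p == 0.0:
        return activations
    rng = rng if rng is not None else np.random.default_rng()
    keep = rng.random(activations.shape) >= p
    return activations * keep / (1.0 - p)
```

The rescaling keeps the expected activation the same in both modes, which is why the training-mode and prediction-mode outputs can share one set of weights.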
Because the dropout process behaves differently during training and prediction, the output layer has two output variables that share the same weights: the output variable in training mode (output_train) and the output variable in prediction mode (output_test).
As shown in fig. 5, in step S14, the feature extraction step includes:
step S141) constructing a training model aiming at the training set data sample;
step S142) calling the training set data sample to train the training model until convergence so as to generate a convolutional neural network model to be evaluated, wherein the convolutional neural network model is suitable for the training set data sample; in the process of calling the training set data sample to train the training model until convergence, optimizing the data training model by using a pre-stored optimization function;
step S143) evaluating the convolutional neural network model to be evaluated, and obtaining a convolutional neural network model matching with the training set data sample after the convolutional neural network model to be evaluated meets the preset evaluation standard;
step S144) inputting the test set data sample into a convolutional neural network model matching the training set data sample to predict the attribute characteristics of the test set data sample and obtain the accuracy of the convolutional neural network model matching the training set data sample.
In step S14, the process of generating the convolutional neural network model to be evaluated that fits the training set data samples is to invoke the established neural network model and generate a suitable model by adjusting its parameters.
In this embodiment, when training the model, the tf.estimator.inputs.numpy_input_fn() function is used to load the data. The parameters of this function are x, y, batch_size, num_epochs and shuffle, where x is the training data x_train, y is the label y_train, batch_size is the number of samples selected at each iteration, and num_epochs=None indicates that training stops when the specified number of iterations is reached.
In this embodiment, the preset evaluation criterion includes a loss function, which is a cross-entropy loss function. Step S14 includes: calculating the loss (Loss) of the convolutional neural network model to be evaluated, where the loss is the cross-entropy loss (Cross Entropy Loss); and comparing the value of this loss function with a preset loss threshold to obtain the convolutional neural network model matching the training set data samples. The matching model is the data training model corresponding to the minimum value of the loss function.
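For reference, the binary cross-entropy loss used as the evaluation criterion can be written as follows (our own sketch of the standard formula, with a small epsilon added for numerical stability):

```python
import numpy as np

def cross_entropy_loss(probs, labels, eps=1e-12):
    """Mean binary cross-entropy between predicted probabilities of
    'target present' and the 0/1 ground-truth labels."""
    p = np.clip(probs, eps, 1.0 - eps)
    labels = np.asarray(labels, dtype=np.float64)
    return float(np.mean(-(labels * np.log(p) + (1 - labels) * np.log(1 - p))))
```

The loss approaches zero as predictions approach the true labels, so selecting the model with the minimum loss value selects the best fit to the training set.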
In this embodiment, when evaluating the model, the tf.estimator.inputs.numpy_input_fn() function is used to load the data, where x is the prediction data x_test, y is the label y_test, batch_size is unchanged, and shuffle=False.
In the step S15, the convolutional neural network model of the matching training set data sample is trained twice or three times by adding and modifying the data sample, so as to improve the accuracy of the convolutional neural network model of the matching training set data sample.
According to the intelligent outfitting identification method, whether the target object is contained in the image can be accurately identified in a very short time, and a new idea is provided for image identification.
The embodiment also provides a computer device, on which a computer program is stored, where the computer program, when executed by a processor, implements the above-mentioned outfitting intelligent identification method.
The computer device may be a server, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer equipment is used for storing the data of the intelligent outfitting identification method. The network interface of the computer device is used for communicating with an external terminal through a network connection.
All possible combinations of the technical features in the above embodiments may not be described for the sake of brevity, but should be considered as being within the scope of the present disclosure as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is specific and detailed, but not to be understood as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. An outfitting intelligent identification method is characterized by comprising the following steps:
step S11), an image acquisition step, in which images are acquired facing a ship outfitting warehouse; a large number of images are acquired, each containing or not containing the target object, and the target objects include outfitting parts;
step S12), an image preprocessing step, namely processing the acquired image by utilizing linear operation, logical operation, spatial operation and image transformation of the image to form input sample tensor data;
step S13), a sample data set obtaining step, namely receiving tensor data of an input sample, simplifying the tensor data of the input sample to integrate the tensor data of the input sample into input sample data, wherein the input sample data is divided into a training set data sample and a test set data sample which are obtained from an acquired image;
step S14), a characteristic extraction step, namely establishing a convolutional neural network model, calling the sample data set to train and evaluate the model, and obtaining the convolutional neural network model matched with the training set data sample;
step S15), a model calibration step, namely performing optimization training on the convolutional neural network model matched with the training set data sample to improve the accuracy of the convolutional neural network model matched with the training set data sample;
and S16) image identification, namely identifying the target object by using the calibrated convolutional neural network model matched with the training set data sample.
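The control flow of steps S11 to S16 can be sketched as follows. This is a minimal, hypothetical illustration: the "model" is a trivial brightness-threshold classifier standing in for the convolutional neural network, and all function names and toy data are illustrative, not taken from the patent.

```python
# Hypothetical end-to-end sketch of steps S11-S16; a threshold classifier
# stands in for the CNN, so only the pipeline's control flow is illustrated.
def preprocess(image):                     # S12: scale pixel values to [0, 1]
    return [p / 255.0 for p in image]

def split_dataset(samples, ratio=0.75):    # S13: training / test split
    cut = int(len(samples) * ratio)
    return samples[:cut], samples[cut:]

def train_model(train_set):                # S14: fit a threshold "model"
    means = [sum(x) / len(x) for x, _ in train_set]
    threshold = sum(means) / len(means)
    return lambda x: int(sum(x) / len(x) > threshold)

def accuracy(model, data):                 # S14/S15: evaluation metric
    return sum(model(x) == y for x, y in data) / len(data)

# S11: toy "images" (flat pixel lists), label 1 = contains an outfitting part
raw = [([200, 210, 220], 1), ([30, 20, 10], 0),
       ([190, 205, 215], 1), ([25, 15, 35], 0)]
samples = [(preprocess(x), y) for x, y in raw]
train_set, test_set = split_dataset(samples)
model = train_model(train_set)
print(accuracy(model, test_set))           # S16: identify on held-out data
```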
2. The intelligent outfitting identification method according to claim 1, wherein the step S12 further comprises arithmetic operations on the image, in which the pixel values of the acquired image are stored in an array and addition, subtraction, multiplication and division are applied directly to same-position elements of two such arrays, wherein addition is used for image noise reduction, subtraction is used for enhancing the difference between images, and multiplication or division is used for shading correction.
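The element-wise arithmetic operations recited above can be sketched with NumPy. Array sizes, noise levels and the shading field below are hypothetical values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.integers(0, 256, size=(4, 4)).astype(np.float64)

# Addition for noise reduction: average several noisy captures of one scene.
noisy_stack = [scene + rng.normal(0, 5, scene.shape) for _ in range(8)]
denoised = np.mean(noisy_stack, axis=0)

# Subtraction to enhance the difference between two images.
other = rng.integers(0, 256, size=(4, 4)).astype(np.float64)
difference = scene - other

# Division for shading correction: divide a shaded image by the shading field.
shading = np.full(scene.shape, 0.5)
corrected = (scene * shading) / shading   # recovers the unshaded scene
```

All four operations act on same-position elements of equally sized arrays, as the claim requires.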
3. The intelligent outfitting identification method according to claim 1, wherein in the step S12, the linear operation of the image comprises: supposing that an operator H acting on an image f(x, y) gives H[f(x, y)] = g(x, y), if H satisfies H[a_i f_i(x, y) + a_j f_j(x, y)] = a_i H[f_i(x, y)] + a_j H[f_j(x, y)] = a_i g_i(x, y) + a_j g_j(x, y), where a_i, a_j are arbitrary constants and f_i, f_j are any two images of the same size, then H is a linear operation;
the logical operations of the image comprise image set operations, namely the intersection, union and complement of images, and the logical operation modes comprise AND, NOR and XOR;
the spatial operations of the image comprise single-pixel operations, neighborhood operations and geometric spatial transformations.
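The linearity condition of claim 3 can be checked numerically. In this sketch the operator H is a 3x3 mean filter (a linear neighborhood operation chosen only as an example); the images and the constants a_i, a_j are arbitrary, as the claim requires:

```python
import numpy as np

def H(f):
    # Zero-padded 3x3 box filter: a simple linear neighborhood operation.
    padded = np.pad(f, 1)
    out = np.zeros_like(f, dtype=np.float64)
    rows, cols = f.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = padded[r:r + 3, c:c + 3].mean()
    return out

rng = np.random.default_rng(1)
f_i = rng.random((5, 5))
f_j = rng.random((5, 5))
a_i, a_j = 2.5, -1.5

# H[a_i*f_i + a_j*f_j] must equal a_i*H[f_i] + a_j*H[f_j] for a linear H.
lhs = H(a_i * f_i + a_j * f_j)
rhs = a_i * H(f_i) + a_j * H(f_j)
assert np.allclose(lhs, rhs)
```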
4. The intelligent outfitting identification method according to claim 1, wherein in the step S12, the image preprocessing step further comprises: image cropping, image resizing, conversion of image data into tensors, and data standardization.
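The four preprocessing operations of claim 4 can be chained as below. Nearest-neighbour resizing is used so that no image library is assumed; the crop box, output size and normalization statistics are illustrative values, not taken from the patent:

```python
import numpy as np

def preprocess(image, crop_box, out_size, mean=0.5, std=0.5):
    top, left, h, w = crop_box
    cropped = image[top:top + h, left:left + w]        # image cropping
    rows = np.arange(out_size[0]) * h // out_size[0]   # image resizing
    cols = np.arange(out_size[1]) * w // out_size[1]   # (nearest neighbour)
    resized = cropped[rows][:, cols]
    tensor = resized.transpose(2, 0, 1) / 255.0        # HWC -> CHW tensor
    return (tensor - mean) / std                       # data standardization

image = np.random.default_rng(2).integers(0, 256, size=(100, 100, 3))
out = preprocess(image, crop_box=(10, 10, 64, 64), out_size=(32, 32))
print(out.shape)  # -> (3, 32, 32)
```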
5. The intelligent outfitting identification method according to claim 1, wherein in the step S14, the feature extraction step comprises:
constructing a training model for the training set data samples;
calling the training set data samples to train the training model until convergence, so as to generate a convolutional neural network model to be evaluated that is suited to the training set data samples;
evaluating the convolutional neural network model to be evaluated, and obtaining the convolutional neural network model matching the training set data samples once the preset evaluation criterion is met;
and inputting the test set data samples into the convolutional neural network model matching the training set data samples to predict their attribute features and obtain the accuracy of that model.
6. The intelligent outfitting identification method according to claim 5, wherein in the step S14, the constructed training model comprises: an input layer, a convolutional layer arranged below the input layer, a pooling layer arranged below the convolutional layer, a fully connected layer arranged below the pooling layer, a dropout layer arranged below the fully connected layer, and an output layer arranged below the dropout layer.
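The shape flow through the layer stack of claim 6 can be walked through as follows. The input size, kernel size, channel count and class count are illustrative assumptions, not values from the patent:

```python
# Shape walk-through of: input -> conv -> pool -> fully connected ->
# dropout -> output. Dropout changes no shapes.
def conv2d_out(size, kernel, stride=1, padding=0):
    # Standard convolution output-size formula.
    return (size + 2 * padding - kernel) // stride + 1

side = 64                                       # input layer: 64x64 image
side = conv2d_out(side, kernel=3, padding=1)    # conv layer preserves 64
side = side // 2                                # 2x2 pooling layer -> 32
flat = 16 * side * side                         # 16 channels flattened
fc_out = 128                                    # fully connected layer width
num_classes = 5                                 # output layer size
print(side, flat, fc_out, num_classes)
```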
7. The intelligent outfitting identification method according to claim 5, wherein in the step S14, the process of calling the training set data samples to train the training model until convergence further comprises optimizing the training model by using a pre-stored optimization function.
8. The intelligent outfitting identification method according to claim 5, wherein in the step S14, the preset evaluation criterion comprises a loss function; the step of evaluating the convolutional neural network model to be evaluated comprises: calculating the loss function of the convolutional neural network model to be evaluated; and comparing the value of the loss function with a preset loss threshold to obtain the convolutional neural network model matching the training set data samples, which is the training model corresponding to the minimum value of the loss function.
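The selection rule of claim 8 — keep only candidates whose loss is within the preset threshold, then take the minimum — can be sketched as below. The candidate names and loss values are made up for illustration:

```python
# Loss-threshold model selection: among candidates meeting the preset
# evaluation criterion, return the one with the minimum loss value.
def select_model(candidate_losses, loss_threshold):
    passing = {m: v for m, v in candidate_losses.items() if v <= loss_threshold}
    if not passing:
        return None  # no candidate meets the preset evaluation criterion
    return min(passing, key=passing.get)

losses = {"epoch_10": 0.91, "epoch_20": 0.34, "epoch_30": 0.28}
best = select_model(losses, loss_threshold=0.5)
print(best)  # -> epoch_30, the minimum-loss model under the threshold
```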
9. The intelligent outfitting identification method according to claim 1, wherein in the step S15, the convolutional neural network model matching the training set data samples is retrained two or three times through the addition and modification of data samples, so as to improve its accuracy.
10. A computer device having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the intelligent outfitting identification method according to any one of claims 1 to 9.
CN202211235668.5A 2022-10-10 2022-10-10 Outfitting intelligent identification method and computer equipment Pending CN115565115A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211235668.5A CN115565115A (en) 2022-10-10 2022-10-10 Outfitting intelligent identification method and computer equipment
PCT/CN2023/112528 WO2024078112A1 (en) 2022-10-10 2023-08-11 Method for intelligent recognition of ship outfitting items, and computer device


Publications (1)

Publication Number Publication Date
CN115565115A true CN115565115A (en) 2023-01-03

Family

ID=84745884

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211235668.5A Pending CN115565115A (en) 2022-10-10 2022-10-10 Outfitting intelligent identification method and computer equipment

Country Status (2)

Country Link
CN (1) CN115565115A (en)
WO (1) WO2024078112A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024078112A1 (en) * 2022-10-10 2024-04-18 上海船舶工艺研究所(中国船舶集团有限公司第十一研究所) Method for intelligent recognition of ship outfitting items, and computer device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766232A (en) * 2021-02-19 2021-05-07 南京邮电大学 Road risk target identification method based on reconfigurable convolutional neural network
CN112906795A (en) * 2021-02-23 2021-06-04 江苏聆世科技有限公司 Whistle vehicle judgment method based on convolutional neural network
CN114926691A (en) * 2022-05-31 2022-08-19 中国计量大学 Insect pest intelligent identification method and system based on convolutional neural network
CN115565115A (en) * 2022-10-10 2023-01-03 上海船舶工艺研究所(中国船舶集团有限公司第十一研究所) Outfitting intelligent identification method and computer equipment


Also Published As

Publication number Publication date
WO2024078112A1 (en) 2024-04-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination