CN116109606B - Container lock pin disassembly and assembly safety management method and system based on image analysis - Google Patents


Publication number
CN116109606B
CN116109606B (application CN202310149782.4A)
Authority
CN
China
Prior art keywords
image
target
container
target container
category
Prior art date
Legal status
Active
Application number
CN202310149782.4A
Other languages
Chinese (zh)
Other versions
CN116109606A (en)
Inventor
崔迪
魏宏大
孙国庆
张霞
周亚飞
Current Assignee
China Waterborne Transport Research Institute
Original Assignee
China Waterborne Transport Research Institute
Priority date
Filing date
Publication date
Application filed by China Waterborne Transport Research Institute filed Critical China Waterborne Transport Research Institute
Priority to CN202310149782.4A
Publication of CN116109606A
Application granted
Publication of CN116109606B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02WCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO WASTEWATER TREATMENT OR WASTE MANAGEMENT
    • Y02W90/00Enabling technologies or technologies with a potential or indirect contribution to greenhouse gas [GHG] emissions mitigation

Abstract

The application relates to a container lock pin disassembly and assembly safety management method and system based on image analysis, wherein the method comprises the following steps: collecting images related to a target container; preprocessing the related images to generate target images; extracting feature vectors from the target images; inputting the feature vectors into a pre-constructed convolutional neural network model to obtain a first recognition result; performing secondary processing on the first recognition result to generate a second recognition result; and detaching or installing the lock pins of the target container based on the second recognition result, thereby realizing safety management of the disassembly and assembly of the target container's lock pins. According to the application, the preprocessing and feature extraction steps improve the efficiency of disassembling and assembling container lock pins; secondary recognition of the images improves the accuracy of disassembly and assembly; and accurate positioning of the lock holes effectively raises the degree of automation, reduces potential hazards, and improves the safety and reliability of container lock pin disassembly and assembly.

Description

Container lock pin disassembly and assembly safety management method and system based on image analysis
Technical Field
The application relates to the technical field of lock pin disassembly and assembly safety management, in particular to a container lock pin disassembly and assembly safety management method and system based on image analysis.
Background
Container transportation is the main mode of transport at present-day ports. To ensure reliability and stability during transit, multiple containers must be fixedly connected by container lock pins during loading, and after the containers reach the destination wharf, the lock pins are removed to unload the containers. With the continuous development of globalized trade, the freight volume of shipping containers keeps increasing. From the perspective of voyage economic accounting, shortening a ship's dwell time in port reduces its berthing cost, raises the port's freight throughput, and thereby improves economic benefits.
At present, lock pins are generally removed and installed manually, which consumes a great deal of manpower and time; the loading and unloading environment is harsh and poses potential hazards to the health and lives of staff. Manual disassembly and assembly seriously limits the loading and unloading speed of cargo and suffers from low efficiency, easy fatigue, and low accuracy, so it cannot meet actual production and working requirements. Raising the degree of automation of lock pin disassembly and assembly is an effective way to improve container freight efficiency, and the key problem is accurately positioning the lock pin at the lock hole. Disassembling and assembling container lock pins based on image analysis is therefore a promising solution.
Disclosure of Invention
Accordingly, it is desirable to provide a container lock pin disassembly and assembly safety management method and system based on image analysis that can improve disassembly and assembly accuracy and efficiency.
In one aspect, a method for managing security of container lock pin disassembly based on image analysis is provided, the method comprising:
step A: collecting related images of a target container;
and (B) step (B): preprocessing the related images to generate target images;
step C: extracting a feature vector in the target image;
step D: inputting the feature vector into a pre-constructed convolutional neural network model to obtain a first recognition result;
step E: performing secondary processing on the first recognition result to generate a second recognition result;
step F: and detaching or installing the lock pin of the target container based on the second identification result, so as to realize the safety management of the detachment and installation of the lock pin of the target container.
In one embodiment, the method further comprises: preprocessing the related image to generate a target image, wherein the generating of the target image comprises the following steps: classifying the target container based on the related data of the target container: classifying the target container into a first category when the service life of the target container is greater than a first preset annual limit and the operating time in a severe environment is greater than a first preset time; classifying the target container into a second category when the service life of the target container is less than or equal to a first preset annual limit and the operating time in a severe environment is longer than a first preset time; classifying the target container into a third category when the service life of the target container is less than or equal to a first preset annual limit value and the operation duration in a severe environment is less than or equal to a first preset time; classifying the target container into a fourth category when the service life of the target container is greater than a first preset annual limit and the operating time in a severe environment is less than or equal to a first preset time; the harsh environment includes at least one of: rainy weather, snowy and snowy weather, and sand storm weather; and generating the target image based on the one-to-one mapping relation between the classification category and the related image.
In one embodiment, extracting the feature vector in the target image comprises: graying the target image to generate a gray-scale image:
Gray=0.3R+0.59G+0.11B
wherein R, G, B represent the original pixel values of the red, green and blue channels, respectively, and Gray represents the grayed pixel value;
normalizing the gray-scale image:
I(x, y) = I(x, y)/255
wherein I(x, y) represents the gray-scale image, and (x, y) represents a pixel point in the gray-scale image;
calculating the gradient amplitude and gradient direction of the pixel points in the gray-scale image:
G(x, y) = sqrt(G_x(x, y)^2 + G_y(x, y)^2)
s(x, y) = arctan(G_y(x, y)/G_x(x, y))
wherein G(x, y) represents the gradient amplitude, s(x, y) represents the gradient direction, G_x(x, y) represents the horizontal gradient, and G_y(x, y) represents the vertical gradient;
dividing the gray-scale image into cell units of the same size and combining a fixed number of cell units into blocks; constructing a histogram of oriented gradients for each cell unit, and concatenating the histograms of the cell units contained in each block to obtain the histogram of the whole block; and combining the feature vectors of all cell units in the blocks to obtain the feature vector of the target image, whose dimension k is:
k = ((a - b)/c + 1) × ((d - b)/c + 1) × (b/e)^2 × q
where a and d represent the height and width of the input image, e and b represent the size values of the cells and blocks, c represents the moving step length, and q represents the total number of gradient directions contained in one cell.
In one embodiment, the method further comprises: performing dimension reduction processing on the target images of the first category, which comprises the following steps: forming a matrix X of z rows and k columns from the feature vectors in the target image, wherein z represents the number of feature vectors and k represents the dimension of the feature vectors; zero-meaning all rows of the matrix X; solving the covariance matrix of the matrix X; obtaining the eigenvalues and corresponding eigenvectors of the covariance matrix; and arranging the eigenvectors into rows in descending order of their corresponding eigenvalues and taking the first m rows to form a matrix R, i.e., the data of the first-category target images after dimension reduction to m dimensions.
In one embodiment, the method further comprises: the construction process of the convolutional neural network model comprises the following steps: acquiring training image samples; constructing an initial convolutional neural network model, wherein the initial convolutional neural network model is one of the following: a LeNet network model, a VGGNet network model, a ResNet network model, or a GoogleNet network model; inputting the training image samples into the initial convolutional neural network model for training; and when the training learning rate reaches a preset value, judging that training of the initial convolutional neural network model is completed and obtaining the convolutional neural network model.
In one embodiment, the method further comprises: performing secondary processing on the first recognition result to generate the second recognition result comprises: if the category corresponding to the target image input into the convolutional neural network model is the first, second, or fourth category, processing the first recognition result output by the convolutional neural network model as follows: acquiring the central coordinate values of the lock holes in the first recognition result; calculating the difference between the central coordinate values and ideal coordinate values, wherein the ideal coordinate values are obtained as follows: selecting the same position on an initial target container, shooting a first image on the container with the longest service life and the longest operation time in severe environments, and shooting a second image on the container with the shortest service life and the shortest operation time in severe environments; splicing the first image and the second image to obtain the actual coordinate values of the spliced image; and correcting the actual coordinate values based on a time distortion coefficient to obtain the ideal coordinate values; when the coordinate differences of any three or more lock holes in the container are smaller than a first preset value, outputting the corresponding ideal coordinate values as the second recognition result.
In another aspect, a system for security management of container locking pin disassembly based on image analysis is provided, the system comprising:
the acquisition unit is used for acquiring related images of the target container;
the preprocessing unit is used for preprocessing the related images to generate target images;
an extracting unit, configured to extract a feature vector in the target image;
the first recognition result generation unit is used for inputting the feature vector into a pre-constructed convolutional neural network model to obtain a first recognition result;
the second recognition result generating unit is used for carrying out secondary processing on the first recognition result to generate a second recognition result;
and the result application unit is used for detaching or installing the lock pin of the target container based on the second identification result, so as to realize the safety management of the disassembly and assembly of the lock pin of the target container.
In one embodiment, the preprocessing unit is specifically configured to: classify the target container based on the related data of the target container, namely: classifying the target container into a first category when the service life of the target container is greater than a first preset annual limit and the operating time in a severe environment is greater than a first preset time; classifying the target container into a second category when the service life of the target container is less than or equal to the first preset annual limit and the operating time in a severe environment is greater than the first preset time; classifying the target container into a third category when the service life of the target container is less than or equal to the first preset annual limit and the operating time in a severe environment is less than or equal to the first preset time; classifying the target container into a fourth category when the service life of the target container is greater than the first preset annual limit and the operating time in a severe environment is less than or equal to the first preset time; the severe environment includes at least one of: rainy weather, ice and snow weather, and sandstorm weather; and generate the target image based on the one-to-one mapping relation between the classification category and the related image.
In yet another aspect, a computer device is provided comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of:
step A: collecting related images of a target container;
and (B) step (B): preprocessing the related images to generate target images;
step C: extracting a feature vector in the target image;
step D: inputting the feature vector into a pre-constructed convolutional neural network model to obtain a first recognition result;
step E: performing secondary processing on the first recognition result to generate a second recognition result;
step F: and detaching or installing the lock pin of the target container based on the second identification result, so as to realize the safety management of the detachment and installation of the lock pin of the target container.
In yet another aspect, a computer readable storage medium is provided, having stored thereon a computer program which when executed by a processor performs the steps of:
step A: collecting related images of a target container;
and (B) step (B): preprocessing the related images to generate target images;
step C: extracting a feature vector in the target image;
step D: inputting the feature vector into a pre-constructed convolutional neural network model to obtain a first recognition result;
step E: performing secondary processing on the first recognition result to generate a second recognition result;
step F: and detaching or installing the lock pin of the target container based on the second identification result, so as to realize the safety management of the detachment and installation of the lock pin of the target container.
According to the container lock pin disassembly and assembly safety management method and system based on image analysis described above, the method comprises the following steps: collecting images related to a target container; preprocessing the related images to generate target images; extracting feature vectors from the target images; inputting the feature vectors into a pre-constructed convolutional neural network model to obtain a first recognition result; performing secondary processing on the first recognition result to generate a second recognition result; and detaching or installing the lock pins of the target container based on the second recognition result, thereby realizing safety management of the disassembly and assembly of the target container's lock pins.
Drawings
FIG. 1 is an application environment diagram of a container lock pin disassembly and assembly security management method based on image analysis in one embodiment;
FIG. 2 is a flow diagram of a method for security management of container lock pin disassembly based on image analysis in one embodiment;
FIG. 3 is a block diagram of a container lock pin removal security management system based on image analysis in one embodiment;
fig. 4 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be understood that throughout this specification and the claims, unless the context clearly requires otherwise, the words "comprise", "comprising", and the like, are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, it is the meaning of "including but not limited to".
It should also be appreciated that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
It should be noted that terms such as "S1" and "S2" are used only for the purpose of describing the steps and are not to be construed as specifying an order or sequence of steps, nor as limiting the present application; they merely facilitate the description of the method of the present application and should not be read as indicating a step sequence. In addition, the technical solutions of the embodiments may be combined with each other, but only on the basis that they can be realized by those skilled in the art; when a combination of technical solutions is contradictory or cannot be realized, the combination should be considered absent and outside the scope of protection claimed in the present application.
The container lock pin disassembly and assembly safety management method based on image analysis can be applied to an application environment shown in fig. 1. The terminal 102 communicates with a data processing platform disposed on the server 104 through a network, where the terminal 102 may be, but is not limited to, various personal computers, notebook computers, smartphones, tablet computers, and portable wearable devices, and the server 104 may be implemented by a stand-alone server or a server cluster composed of a plurality of servers.
Example 1
In one embodiment, as shown in fig. 2, a method for managing the disassembly and assembly safety of a container lock pin based on image analysis is provided, and the method is applied to the terminal in fig. 1 for illustration, and includes the following steps:
s1: a related image of the target container is acquired.
It should be noted that the target container in this embodiment is a container whose lock pins are to be detached, and the related image is an image of the parts required for detaching the lock pins, such as the lock holes.
S2: and preprocessing the related images to generate target images.
It should be noted that this step specifically includes:
classifying the target container based on the related data of the target container, wherein the related data are pre-stored data in a database, such as operation years and the like:
classifying the target container into a first category when the service life of the target container is greater than a first preset annual limit and the operating time in a severe environment is greater than a first preset time;
classifying the target container into a second category when the service life of the target container is less than or equal to the first preset annual limit and the operating time in a severe environment is greater than the first preset time;
classifying the target container into a third category when the service life of the target container is less than or equal to the first preset annual limit and the operating time in a severe environment is less than or equal to the first preset time;
classifying the target container into a fourth category when the service life of the target container is greater than the first preset annual limit and the operating time in a severe environment is less than or equal to the first preset time;
the severe environment includes at least one of: rainy weather, ice and snow weather, and sandstorm weather;
and generating the target image based on the one-to-one mapping relation between the classification category and the related image.
Among the categories, a target container of the third category has a short operation time and has not been exposed to bad weather such as wind and rain for long, and therefore carries no contamination, such as deep stains or corrosion, that would affect the accuracy of lock hole recognition.
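The four-way classification above can be sketched as a small helper. The threshold values `year_limit` and `harsh_limit` are illustrative assumptions, not values fixed by the application:

```python
def classify_container(service_years, harsh_hours, year_limit=10, harsh_limit=2000):
    """Assign a target container to one of the four categories described above.

    service_years: service life of the container in years.
    harsh_hours:  accumulated operating time in severe environments (hours).
    The two limits are illustrative placeholders, not values from the application.
    """
    old = service_years > year_limit     # exceeds the first preset annual limit
    harsh = harsh_hours > harsh_limit    # exceeds the first preset time
    if old and harsh:
        return 1    # first category
    if not old and harsh:
        return 2    # second category
    if not old and not harsh:
        return 3    # third category
    return 4        # fourth category
```

Each category then maps one-to-one to its related images to form the target images for that container.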
S3: and extracting the characteristic vector in the target image.
It should be noted that this step specifically includes:
graying the target image to generate a gray image:
Gray=0.3R+0.59G+0.11B
wherein R, G, B represent the original pixel values of the red, green and blue channels, respectively, and Gray represents the grayed pixel value;
normalizing the gray-scale image:
I(x, y) = I(x, y)/255
wherein I(x, y) represents the gray-scale image, and (x, y) represents a pixel point in the gray-scale image;
calculating the gradient amplitude and gradient direction of the pixel points in the gray-scale image:
G(x, y) = sqrt(G_x(x, y)^2 + G_y(x, y)^2)
s(x, y) = arctan(G_y(x, y)/G_x(x, y))
wherein G(x, y) represents the gradient amplitude, s(x, y) represents the gradient direction, G_x(x, y) represents the horizontal gradient, and G_y(x, y) represents the vertical gradient;
dividing the gray-scale image into cell units of the same size and combining a fixed number of cell units into blocks; constructing a histogram of oriented gradients for each cell unit, and concatenating the histograms of the cell units contained in each block to obtain the histogram of the whole block; and combining the feature vectors of all cell units in the blocks to obtain the feature vector of the target image, whose dimension k is:
k = ((a - b)/c + 1) × ((d - b)/c + 1) × (b/e)^2 × q
where a and d represent the height and width of the input image, e and b represent the size values of the cells and blocks, c represents the moving step length, and q represents the total number of gradient directions contained in one cell.
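The graying, normalization, gradient, and dimension formulas above can be sketched with NumPy. The central-difference gradient operator and the HOG configuration used in the usage note are illustrative assumptions; the application does not fix them:

```python
import numpy as np

def gray_and_gradients(rgb):
    """rgb: H x W x 3 uint8 image. Returns the normalized gray image,
    gradient amplitude G(x, y) and gradient direction s(x, y)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    gray = 0.3 * r + 0.59 * g + 0.11 * b   # Gray = 0.3R + 0.59G + 0.11B
    i = gray / 255.0                       # normalize pixel values to [0, 1]
    gx = np.zeros_like(i)
    gy = np.zeros_like(i)
    gx[:, 1:-1] = i[:, 2:] - i[:, :-2]     # horizontal gradient G_x (assumed central difference)
    gy[1:-1, :] = i[2:, :] - i[:-2, :]     # vertical gradient G_y
    mag = np.sqrt(gx ** 2 + gy ** 2)       # G(x, y)
    ang = np.arctan2(gy, gx)               # s(x, y)
    return i, mag, ang

def hog_dimension(a, d, e, b, c, q):
    """Dimension k of the HOG feature vector: a, d = image height and width,
    e = cell size, b = block size (in pixels), c = moving step,
    q = gradient-direction bins per cell."""
    return ((a - b) // c + 1) * ((d - b) // c + 1) * (b // e) ** 2 * q
```

With the classic 64×128 detection window, 8-pixel cells, 16-pixel blocks, stride 8, and 9 orientation bins, `hog_dimension(128, 64, 8, 16, 8, 9)` gives the familiar 3780-dimensional descriptor, which is consistent with the reconstructed expression for k.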
Further, performing dimension reduction processing on the target images of the first category, whose heavy contamination affects recognition accuracy, can improve the recognition accuracy and efficiency for this category of images. The processing specifically comprises the following steps:
forming a matrix X of z rows and k columns from the feature vectors in the target image, wherein z represents the number of feature vectors and k represents the dimension of the feature vectors;
zero-equalizing all rows of the matrix X;
solving a covariance matrix of the matrix X;
obtaining the eigenvalue and corresponding eigenvector of the covariance matrix;
and arranging the eigenvectors into rows in descending order of their corresponding eigenvalues and taking the first m rows to form a matrix R, i.e., the data of the first-category target images after dimension reduction to m dimensions.
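The five dimension-reduction steps above can be sketched directly with NumPy's eigendecomposition. This is a generic PCA sketch under the usual convention of centering each feature dimension, not code from the application:

```python
import numpy as np

def reduce_to_m_dims(X, m):
    """X: z x k matrix whose rows are the k-dimensional feature vectors.
    Returns (R, Y): R is the m x k matrix of the top-m eigenvectors (as rows)
    of the covariance matrix of X, and Y = X_centered @ R.T is the data
    reduced to m dimensions."""
    Xc = X - X.mean(axis=0)          # zero-mean each feature dimension
    C = np.cov(Xc, rowvar=False)     # k x k covariance matrix
    vals, vecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]   # re-sort descending by eigenvalue
    R = vecs[:, order[:m]].T         # top-m eigenvectors stacked as rows
    return R, Xc @ R.T
```

The first row of R captures the direction of maximum variance, so the projected data Y keeps the most informative m dimensions of the contaminated first-category images.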
S4: and inputting the feature vector into a pre-constructed convolutional neural network model to obtain a first recognition result.
It should be noted that, the construction process of the convolutional neural network model includes:
acquiring a training image sample, wherein the training image sample comprises a plurality of related images shot in various environments;
constructing an initial convolutional neural network model, wherein the initial convolutional neural network model comprises one of the following components: a LeNet network model, a VGGNet network model, a ResNet network model, or a GoogleNet network model;
inputting the training image sample into the initial convolutional neural network model for training;
And when the training learning rate reaches a preset value, judging that the initial convolutional neural network model training is completed, and obtaining the convolutional neural network model.
The feature vector obtained in step S3 is input into the convolutional neural network model to obtain the first recognition result.
S5: and carrying out secondary processing on the first recognition result to generate a second recognition result.
Since a target image of the third category contains relatively few factors affecting recognition accuracy, the first recognition result corresponding to a third-category target image may be output as the final result. If the category corresponding to the target image input into the convolutional neural network model is the first, second, or fourth category, the first recognition result output by the convolutional neural network model is processed as follows:
acquiring a central coordinate value of a lock hole in a first identification result;
calculating the difference between the central coordinate value and the ideal coordinate value, wherein the ideal coordinate value is obtained in the following way:
selecting the same position on an initial target container, shooting a first image on the container with the longest service life and the longest operation time in severe environments, and shooting a second image on the container with the shortest service life and the shortest operation time in severe environments, wherein the shooting positions and angles of the two images are kept consistent;
splicing the first image and the second image to obtain the actual coordinate values of the spliced image;
correcting the actual coordinate value based on a time distortion coefficient to obtain the ideal coordinate value, wherein the time distortion coefficient is the time length multiplied by ten percent;
when the coordinate differences of any three or more lock holes in the container are smaller than a first preset value, outputting the corresponding ideal coordinate values as the second recognition result.
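The "three or more lock holes within the threshold" rule above can be sketched as follows. The lock-hole ids, the Euclidean distance measure, and the dictionary layout are illustrative assumptions; the application only specifies the coordinate-difference comparison and the count of three:

```python
def second_recognition(detected, ideal, threshold):
    """detected / ideal: dicts mapping lock-hole id -> (x, y) center coordinate.
    Returns the ideal coordinates as the second recognition result when three
    or more lock holes deviate from their ideal positions by less than the
    threshold; otherwise returns None (no confident second result)."""
    close = 0
    for hole, (dx, dy) in detected.items():
        ix, iy = ideal[hole]
        # coordinate difference between detected center and ideal center
        if ((dx - ix) ** 2 + (dy - iy) ** 2) ** 0.5 < threshold:
            close += 1
    return ideal if close >= 3 else None
```

Outputting the ideal coordinates rather than the raw detections is what lets stains or corrosion on first-, second-, and fourth-category containers be tolerated: agreement at three holes is taken as evidence that the ideal template fits the whole container.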
S6: and detaching or installing the lock pin of the target container based on the second identification result, so as to realize the safety management of the detachment and installation of the lock pin of the target container.
In the container lock pin disassembly and assembly safety management method based on image analysis, the method comprises the following steps: collecting related images of a target container; preprocessing the related images to generate target images; extracting a feature vector in the target image; inputting the feature vector into a pre-constructed convolutional neural network model to obtain a first recognition result; performing secondary processing on the first recognition result to generate a second recognition result; the lock pin of the target container is detached or installed based on the second recognition result, so that safety management of the disassembly and assembly of the lock pin of the target container is realized.
It should be understood that, although the steps in the flowchart of fig. 2 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times; these sub-steps or stages also need not be performed in sequence, and may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
Example 2
In one embodiment, as shown in fig. 3, there is provided a container lock pin disassembly and assembly safety management system based on image analysis, comprising: the device comprises an acquisition unit, a preprocessing unit, an extraction unit, a first recognition result generation unit, a second recognition result generation unit and a result application unit, wherein:
the acquisition unit is used for acquiring related images of the target container;
The preprocessing unit is used for preprocessing the related images to generate target images;
an extracting unit, configured to extract a feature vector in the target image;
the first recognition result generation unit is used for inputting the feature vector into a pre-constructed convolutional neural network model to obtain a first recognition result;
the second recognition result generating unit is used for carrying out secondary processing on the first recognition result to generate a second recognition result;
and the result application unit is used for detaching or installing the lock pin of the target container based on the second recognition result, so as to realize the safety management of the disassembly and assembly of the lock pin of the target container.
As a preferred implementation manner, in the embodiment of the present invention, the preprocessing unit is specifically configured to:
classifying the target container based on the related data of the target container:
classifying the target container into a first category when the service life of the target container is greater than a first preset annual limit and the operating time in a severe environment is greater than a first preset time;
classifying the target container into a second category when the service life of the target container is less than or equal to a first preset annual limit and the operating time in a severe environment is longer than a first preset time;
Classifying the target container into a third category when the service life of the target container is less than or equal to a first preset annual limit value and the operation duration in a severe environment is less than or equal to a first preset time;
classifying the target container into a fourth category when the service life of the target container is greater than a first preset annual limit and the operating time in a severe environment is less than or equal to a first preset time;
the harsh environment includes at least one of: rainy weather, snowy weather, and sandstorm weather;
and generating the target image based on the one-to-one mapping relation between the classification category and the related image.
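The four-way classification above can be sketched as a small Python helper. The threshold values `preset_years` and `preset_hours`, and the unit of the harsh-environment operating time, are illustrative assumptions, since the patent only names "a first preset annual limit" and "a first preset time":

```python
def classify_container(service_life_years, harsh_hours,
                       preset_years=10, preset_hours=1000):
    """Assign a target container to one of four categories by its
    service life and its operating time in harsh environments
    (rainy, snowy, or sandstorm weather)."""
    over_life = service_life_years > preset_years   # vs. first preset annual limit
    over_time = harsh_hours > preset_hours          # vs. first preset time
    if over_life and over_time:
        return 1  # first category: old container, long harsh-weather exposure
    if not over_life and over_time:
        return 2  # second category: newer container, long harsh-weather exposure
    if not over_life and not over_time:
        return 3  # third category: newer container, short harsh-weather exposure
    return 4      # fourth category: old container, short harsh-weather exposure
```

The returned category then selects the preprocessing applied to the related image (e.g. only the third category bypasses the secondary processing of the first recognition result).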
As a preferred implementation manner, in the embodiment of the present invention, the extracting unit is specifically configured to:
graying the target image to generate a gray image:
Gray=0.3R+0.59G+0.11B
wherein R, G, B represent the original pixel values of the red, green and blue channels, respectively, and Gray represents the grayscale pixel value;
normalizing the gray scale image:
wherein I (x, y) represents a grayscale image, and (x, y) represents a pixel point in the grayscale image;
calculating the gradient amplitude and gradient direction of the pixel points in the gray image:

G(x, y) = √(G_x(x, y)² + G_y(x, y)²)

s(x, y) = arctan(G_y(x, y) / G_x(x, y))

wherein G(x, y) represents the gradient magnitude, s(x, y) represents the gradient direction, G_x(x, y) represents the horizontal gradient, and G_y(x, y) represents the vertical gradient;
dividing the gray image into cell units of equal size and combining a fixed number of cell units into a block; constructing a histogram of oriented gradients for each cell unit, concatenating the histograms of the cell units contained in each block to obtain the histogram of the whole block, and combining the feature vectors of all the cell units in the blocks to obtain the feature vector of the target image, whose dimension k is:

k = ((a − b) / c + 1) × ((d − b) / c + 1) × (b / e)² × q

where a and d represent the height and width of the input image, e and b represent the size values of the cells and blocks, c represents the moving step length, and q represents the total number of gradient directions contained in one cell.
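The grayscale, gradient, and histogram-of-oriented-gradients steps above can be sketched with NumPy. The cell size, block size, stride, and bin count are assumed values, the function name is hypothetical, and per-block normalization is omitted for brevity:

```python
import numpy as np

def extract_hog(image_rgb, cell=8, block=16, stride=8, q=9):
    """HOG-style feature extraction following the steps above:
    grayscale conversion, gradient magnitude/direction, per-cell
    orientation histograms, block-wise concatenation."""
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    gray = 0.3 * r + 0.59 * g + 0.11 * b        # Gray = 0.3R + 0.59G + 0.11B
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]    # horizontal gradient G_x
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]    # vertical gradient G_y
    mag = np.hypot(gx, gy)                      # gradient magnitude G(x, y)
    ang = np.degrees(np.arctan2(gy, gx)) % 180  # gradient direction s(x, y)
    h, w = gray.shape
    features = []
    for by in range(0, h - block + 1, stride):      # slide blocks with step c
        for bx in range(0, w - block + 1, stride):
            for cy in range(by, by + block, cell):  # cell units inside the block
                for cx in range(bx, bx + block, cell):
                    m = mag[cy:cy + cell, cx:cx + cell]
                    a = ang[cy:cy + cell, cx:cx + cell]
                    # magnitude-weighted orientation histogram with q bins
                    hist, _ = np.histogram(a, bins=q, range=(0, 180), weights=m)
                    features.append(hist)
    return np.concatenate(features)
```

With a 32×32 input and the assumed parameters, there are 3×3 block positions, 4 cells per block, and 9 bins per cell, giving a 324-dimensional feature vector, consistent with the dimension formula above.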
As a preferred implementation manner, in the embodiment of the present invention, the extracting unit is specifically further configured to:
forming a matrix X of z rows and k columns by the feature vectors in the target image, wherein z represents the number of the feature vectors, and k represents the dimension of the feature vectors;
zero-equalizing all rows of the matrix X;
solving a covariance matrix of the matrix X;
obtaining the eigenvalue and corresponding eigenvector of the covariance matrix;
and arranging the eigenvectors in rows from top to bottom by descending eigenvalue, and taking the first m rows to form a matrix R, which is the data of the first-category target image after dimension reduction to m dimensions.
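The five dimension-reduction steps above map directly onto NumPy; `reduce_dimensions` is a hypothetical name, and the sketch zero-means each feature dimension, which is the conventional reading of the zero-equalization step:

```python
import numpy as np

def reduce_dimensions(X, m):
    """PCA reduction following the steps above: zero-mean the data,
    form the covariance matrix, sort eigenvectors by eigenvalue,
    and project onto the top m components."""
    Xc = X - X.mean(axis=0)             # zero-equalize (per feature dimension)
    cov = np.cov(Xc, rowvar=False)      # covariance matrix of X
    vals, vecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:m]  # indices of the m largest eigenvalues
    P = vecs[:, order].T                # matrix R: top-m eigenvectors as rows
    return Xc @ P.T                     # z x m reduced data
```

For a z×k matrix of feature vectors this returns a z×m matrix, i.e. the first-category target image data reduced to m dimensions.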
As a preferred implementation manner, in the embodiment of the present invention, the first recognition result generating unit is specifically configured to:
acquiring a training image sample;
constructing an initial convolutional neural network model, wherein the initial convolutional neural network model comprises one of the following components: a LeNet network model, a VGGNet network model, a ResNet network model, or a GoogleNet network model;
inputting the training image sample into the initial convolutional neural network model for training;
and when the training learning rate reaches a preset value, judging that the initial convolutional neural network model training is completed, and obtaining the convolutional neural network model.
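The stopping criterion above, where training ends once the (decaying) learning rate reaches a preset value, can be sketched as follows; the exponential decay schedule and the `train_one_epoch` callback are assumptions, since the patent does not specify how the learning rate evolves:

```python
def train_until_lr_threshold(train_one_epoch, lr0=0.01, decay=0.9,
                             lr_min=1e-4, max_epochs=1000):
    """Train with an exponentially decaying learning rate and stop
    when it reaches the preset value, per the criterion above."""
    lr, epochs = lr0, 0
    while lr > lr_min and epochs < max_epochs:
        train_one_epoch(lr)  # one pass over the training image samples
        lr *= decay          # decay the learning rate after each epoch
        epochs += 1
    return epochs
```

The `max_epochs` guard is an added safety bound so training always terminates even if the threshold is set too low.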
As a preferred implementation manner, in the embodiment of the present invention, the second recognition result generating unit is specifically configured to:
if the category corresponding to the target image input into the convolutional neural network model is a container of a first category, a second category or a fourth category, processing the first recognition result output by the convolutional neural network model:
Acquiring a central coordinate value of a lock hole in a first identification result;
calculating the difference between the central coordinate value and the ideal coordinate value, wherein the ideal coordinate value is obtained in the following way:
selecting the same position of an initial target container, shooting a first image with the highest service life and the longest operation time in a severe environment, and shooting a second image with the lowest service life and the shortest operation time in the severe environment;
splicing the first image and the second image to obtain an actual coordinate value of the spliced image;
correcting the actual coordinate value based on a time distortion coefficient to obtain the ideal coordinate value;
when the coordinate difference value of any three or more lock holes in the container is smaller than a first preset value, outputting the corresponding ideal coordinate value, namely the second identification result.
Specific limitations regarding the image analysis-based container lock pin disassembly and assembly safety management system can be found in the above description of the image analysis-based container lock pin disassembly and assembly safety management method, and will not be repeated here. The modules in the container lock pin disassembly and assembly safety management system based on image analysis can be realized in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware in, or independent of, a processor in the computer device, or may be stored as software in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
Example 3
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure of which may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface, a display screen, and an input system connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program when executed by the processor is used for realizing a container lock pin disassembly and assembly safety management method based on image analysis. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input system of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by persons skilled in the art that the architecture shown in fig. 4 is merely a block diagram of some of the architecture relevant to the present inventive arrangements and is not limiting as to the computer device to which the present inventive arrangements are applicable, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of when executing the computer program:
s1: collecting related images of a target container;
s2: preprocessing the related images to generate target images;
s3: extracting a feature vector in the target image;
s4: inputting the feature vector into a pre-constructed convolutional neural network model to obtain a first recognition result;
s5: performing secondary processing on the first recognition result to generate a second recognition result;
s6: and detaching or installing the lock pin of the target container based on the second identification result, so as to realize the safety management of the detachment and installation of the lock pin of the target container.
In one embodiment, the processor when executing the computer program further performs the steps of:
classifying the target container based on the related data of the target container:
classifying the target container into a first category when the service life of the target container is greater than a first preset annual limit and the operating time in a severe environment is greater than a first preset time;
classifying the target container into a second category when the service life of the target container is less than or equal to a first preset annual limit and the operating time in a severe environment is longer than a first preset time;
classifying the target container into a third category when the service life of the target container is less than or equal to a first preset annual limit value and the operation duration in a severe environment is less than or equal to a first preset time;
classifying the target container into a fourth category when the service life of the target container is greater than a first preset annual limit and the operating time in a severe environment is less than or equal to a first preset time;
the harsh environment includes at least one of: rainy weather, snowy weather, and sandstorm weather;
And generating the target image based on the one-to-one mapping relation between the classification category and the related image.
In one embodiment, the processor when executing the computer program further performs the steps of:
graying the target image to generate a gray image:
Gray=0.3R+0.59G+0.11B
wherein R, B, G represents the original pixel values of the red, green and blue channels, respectively, and Gray represents the graying channel pixel value;
normalizing the gray scale image:
wherein I (x, y) represents a grayscale image, and (x, y) represents a pixel point in the grayscale image;
calculating gradient amplitude and gradient direction of pixel points in the gray level image:
wherein G(x, y) represents the gradient magnitude, s(x, y) represents the gradient direction, G_x(x, y) represents the horizontal gradient, and G_y(x, y) represents the vertical gradient;
dividing the gray image into cell units with the same size, combining a fixed number of cell units into a block, constructing a direction gradient histogram for each cell unit, connecting the direction gradient histograms of the cell units contained in each block in series to obtain a direction gradient histogram of the whole block, combining the feature vectors of all the cell units in the block to obtain the feature vector in the target image, wherein the dimension k is as follows:
Where a and d represent the height and width of the input image, e and b represent the size values of the cells and blocks, c represents the moving step length, and q represents the total number of gradient directions contained in one cell.
In one embodiment, the processor when executing the computer program further performs the steps of:
forming a matrix X of z rows and k columns by the feature vectors in the target image, wherein z represents the number of the feature vectors, and k represents the dimension of the feature vectors;
zero-equalizing all rows of the matrix X;
solving a covariance matrix of the matrix X;
obtaining the eigenvalue and corresponding eigenvector of the covariance matrix;
and arranging the feature vectors into a matrix according to the corresponding feature values from top to bottom, and taking the first m rows to form a matrix R, namely the data of the first class of target images after dimension reduction to m dimensions.
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring a training image sample;
constructing an initial convolutional neural network model, wherein the initial convolutional neural network model comprises one of the following components: a LeNet network model, a VGGNet network model, a ResNet network model, or a GoogleNet network model;
inputting the training image sample into the initial convolutional neural network model for training;
And when the training learning rate reaches a preset value, judging that the initial convolutional neural network model training is completed, and obtaining the convolutional neural network model.
In one embodiment, the processor when executing the computer program further performs the steps of:
if the category corresponding to the target image input into the convolutional neural network model is a container of a first category, a second category or a fourth category, processing the first recognition result output by the convolutional neural network model:
acquiring a central coordinate value of a lock hole in a first identification result;
calculating the difference between the central coordinate value and the ideal coordinate value, wherein the ideal coordinate value is obtained in the following way:
selecting the same position of an initial target container, shooting a first image with the highest service life and the longest operation time in a severe environment, and shooting a second image with the lowest service life and the shortest operation time in the severe environment;
splicing the first image and the second image to obtain an actual coordinate value of the spliced image;
correcting the actual coordinate value based on a time distortion coefficient to obtain the ideal coordinate value;
when the coordinate difference value of any three or more lock holes in the container is smaller than a first preset value, outputting the corresponding ideal coordinate value, namely the second identification result.
Example 4
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
s1: collecting related images of a target container;
s2: preprocessing the related images to generate target images;
s3: extracting a feature vector in the target image;
s4: inputting the feature vector into a pre-constructed convolutional neural network model to obtain a first recognition result;
s5: performing secondary processing on the first recognition result to generate a second recognition result;
s6: and detaching or installing the lock pin of the target container based on the second identification result, so as to realize the safety management of the detachment and installation of the lock pin of the target container.
In one embodiment, the computer program when executed by the processor further performs the steps of:
classifying the target container based on the related data of the target container:
classifying the target container into a first category when the service life of the target container is greater than a first preset annual limit and the operating time in a severe environment is greater than a first preset time;
classifying the target container into a second category when the service life of the target container is less than or equal to a first preset annual limit and the operating time in a severe environment is longer than a first preset time;
Classifying the target container into a third category when the service life of the target container is less than or equal to a first preset annual limit value and the operation duration in a severe environment is less than or equal to a first preset time;
classifying the target container into a fourth category when the service life of the target container is greater than a first preset annual limit and the operating time in a severe environment is less than or equal to a first preset time;
the harsh environment includes at least one of: rainy weather, snowy weather, and sandstorm weather;
and generating the target image based on the one-to-one mapping relation between the classification category and the related image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
graying the target image to generate a gray image:
Gray=0.3R+0.59G+0.11B
wherein R, B, G represents the original pixel values of the red, green and blue channels, respectively, and Gray represents the graying channel pixel value;
normalizing the gray scale image:
wherein I (x, y) represents a grayscale image, and (x, y) represents a pixel point in the grayscale image;
calculating gradient amplitude and gradient direction of pixel points in the gray level image:
Wherein G(x, y) represents the gradient magnitude, s(x, y) represents the gradient direction, G_x(x, y) represents a horizontal gradient, and G_y(x, y) represents a vertical gradient;
dividing the gray image into cell units with the same size, combining a fixed number of cell units into a block, constructing a direction gradient histogram for each cell unit, connecting the direction gradient histograms of the cell units contained in each block in series to obtain a direction gradient histogram of the whole block, combining the feature vectors of all the cell units in the block to obtain the feature vector in the target image, wherein the dimension k is as follows:
where a and d represent the height and width of the input image, e and b represent the size values of the cells and blocks, c represents the moving step length, and q represents the total number of gradient directions contained in one cell.
In one embodiment, the processor when executing the computer program further performs the steps of:
forming a matrix X of z rows and k columns by the feature vectors in the target image, wherein z represents the number of the feature vectors, and k represents the dimension of the feature vectors;
zero-equalizing all rows of the matrix X;
solving a covariance matrix of the matrix X;
obtaining the eigenvalue and corresponding eigenvector of the covariance matrix;
And arranging the feature vectors into a matrix according to the corresponding feature values from top to bottom, and taking the first m rows to form a matrix R, namely the data of the first class of target images after dimension reduction to m dimensions.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a training image sample;
constructing an initial convolutional neural network model, wherein the initial convolutional neural network model comprises one of the following components: a LeNet network model, a VGGNet network model, a ResNet network model, or a GoogleNet network model;
inputting the training image sample into the initial convolutional neural network model for training;
and when the training learning rate reaches a preset value, judging that the initial convolutional neural network model training is completed, and obtaining the convolutional neural network model.
In one embodiment, the computer program when executed by the processor further performs the steps of:
if the category corresponding to the target image input into the convolutional neural network model is a container of a first category, a second category or a fourth category, processing the first recognition result output by the convolutional neural network model:
acquiring a central coordinate value of a lock hole in a first identification result;
Calculating the difference between the central coordinate value and the ideal coordinate value, wherein the ideal coordinate value is obtained in the following way:
selecting the same position of an initial target container, shooting a first image with the highest service life and the longest operation time in a severe environment, and shooting a second image with the lowest service life and the shortest operation time in the severe environment;
splicing the first image and the second image to obtain an actual coordinate value of the spliced image;
correcting the actual coordinate value based on a time distortion coefficient to obtain the ideal coordinate value;
when the coordinate difference value of any three or more lock holes in the container is smaller than a first preset value, outputting the corresponding ideal coordinate value, namely the second identification result.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus RAM (RDRAM), and direct Rambus DRAM (DRDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (5)

1. The container lock pin disassembly and assembly safety management method based on image analysis is characterized by comprising the following steps of:
collecting related images of a target container;
preprocessing the related images to generate target images;
extracting a feature vector in the target image;
inputting the feature vector into a pre-constructed convolutional neural network model to obtain a first recognition result;
Performing secondary processing on the first recognition result to generate a second recognition result;
disassembling or installing the lock pin of the target container based on the second identification result, so as to realize the safety management of the disassembly and assembly of the lock pin of the target container;
preprocessing the related image to generate a target image, wherein the generating of the target image comprises the following steps:
classifying the target container based on the related data of the target container:
classifying the target container into a first category when the service life of the target container is greater than a first preset annual limit and the operating time in a severe environment is greater than a first preset time;
classifying the target container into a second category when the service life of the target container is less than or equal to a first preset annual limit and the operating time in a severe environment is longer than a first preset time;
classifying the target container into a third category when the service life of the target container is less than or equal to a first preset annual limit value and the operation duration in a severe environment is less than or equal to a first preset time;
classifying the target container into a fourth category when the service life of the target container is greater than a first preset annual limit and the operating time in a severe environment is less than or equal to a first preset time;
The harsh environment includes at least one of: rainy weather, snowy weather, and sandstorm weather;
generating the target image based on the one-to-one mapping relation between the classification category and the related image;
extracting the feature vector in the target image comprises:
graying the target image to generate a gray image:
wherein R, B, G represents the original pixel values of the red, green and blue channels, respectively, and Gray represents the graying channel pixel value;
normalizing the gray scale image:
wherein I(x, y) represents the grayscale image, and (x, y) represents a pixel point in the grayscale image;
calculating gradient amplitude and gradient direction of pixel points in the gray level image:
wherein G(x, y) represents the gradient magnitude, s(x, y) represents the gradient direction, G_x(x, y) represents the horizontal gradient, and G_y(x, y) represents the vertical gradient;
dividing the gray image into cell units with the same size, combining a fixed number of cell units into a block, constructing a direction gradient histogram for each cell unit, connecting the direction gradient histograms of the cell units contained in each block in series to obtain a direction gradient histogram of the whole block, combining the feature vectors of all the cell units in the block to obtain the feature vector in the target image, wherein the dimension k is as follows:
Wherein a and d respectively represent the height and width of the input image, e and b respectively represent the size values of the units and the blocks, c represents the moving step length, and q represents the total number of gradient directions contained in one unit;
performing dimension reduction processing on the target image of the first category, wherein the dimension reduction processing comprises the following steps:
forming a matrix X of z rows and k columns by the feature vectors in the target image, wherein z represents the number of the feature vectors, and k represents the dimension of the feature vectors;
zero-equalizing all rows of the matrix X;
solving a covariance matrix of the matrix X;
obtaining the eigenvalue and corresponding eigenvector of the covariance matrix;
arranging the feature vectors into a matrix according to the corresponding feature values from top to bottom in rows, and taking the first m rows to form a matrix R, namely the data of the first class of target images after dimension reduction to m dimensions;
performing secondary processing on the first recognition result to generate a second recognition result includes:
if the category corresponding to the target image input into the convolutional neural network model is a container of a first category, a second category or a fourth category, processing the first recognition result output by the convolutional neural network model:
acquiring a central coordinate value of a lock hole in a first identification result;
Calculating the difference between the central coordinate value and the ideal coordinate value, wherein the ideal coordinate value is obtained in the following way:
selecting the same position of an initial target container, shooting a first image with the highest service life and the longest operation time in a severe environment, and shooting a second image with the lowest service life and the shortest operation time in the severe environment;
splicing the first image and the second image to obtain an actual coordinate value of the spliced image;
correcting the actual coordinate value based on a time distortion coefficient to obtain the ideal coordinate value;
when the coordinate difference value of any three or more lock holes in the container is smaller than a first preset value, outputting the corresponding ideal coordinate value, namely the second identification result.
2. The method for managing the disassembly and assembly safety of the container lock pin based on image analysis according to claim 1, wherein the construction process of the convolutional neural network model comprises the following steps:
acquiring a training image sample;
constructing an initial convolutional neural network model, wherein the initial convolutional neural network model comprises one of the following components: a LeNet network model, a VGGNet network model, a ResNet network model, or a GoogleNet network model;
inputting the training image sample into the initial convolutional neural network model for training;
and when the training learning rate reaches a preset value, judging that the initial convolutional neural network model training is completed, and obtaining the convolutional neural network model.
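The stopping rule of claim 2 can be sketched as a training loop that ends once the learning rate reaches the preset value. The exponential decay schedule, the helper names, and the epoch cap are assumptions; the claim only states that training is judged complete when the learning rate reaches a preset value:

```python
def train_until_lr_threshold(initial_lr, decay, preset_lr, train_step, max_epochs=1000):
    # One training pass per epoch; decay the learning rate each epoch and
    # judge training complete once it reaches (falls to) preset_lr.
    lr = initial_lr
    for epoch in range(1, max_epochs + 1):
        train_step(lr)   # one pass over the training image samples
        lr *= decay      # assumed exponential learning-rate decay
        if lr <= preset_lr:
            return epoch  # number of epochs actually run
    return max_epochs
```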
3. A container locking pin disassembly and assembly safety management system based on image analysis, the system comprising:
the acquisition unit is used for acquiring related images of the target container;
the preprocessing unit is used for preprocessing the related images to generate target images;
an extracting unit, configured to extract a feature vector in the target image;
the first recognition result generation unit is used for inputting the feature vector into a pre-constructed convolutional neural network model to obtain a first recognition result;
the second recognition result generating unit is used for carrying out secondary processing on the first recognition result to generate a second recognition result;
the result application unit is used for detaching or installing the lock pin of the target container based on the second recognition result, thereby realizing safety management of the disassembly and assembly of the lock pin of the target container;
preprocessing the related image to generate a target image, wherein the generating of the target image comprises the following steps:
classifying the target container based on the related data of the target container:
classifying the target container into a first category when the service life of the target container is greater than a first preset year limit and the operating duration in a severe environment is greater than a first preset time;
classifying the target container into a second category when the service life of the target container is less than or equal to the first preset year limit and the operating duration in a severe environment is greater than the first preset time;
classifying the target container into a third category when the service life of the target container is less than or equal to the first preset year limit and the operating duration in a severe environment is less than or equal to the first preset time;
classifying the target container into a fourth category when the service life of the target container is greater than the first preset year limit and the operating duration in a severe environment is less than or equal to the first preset time;
the severe environment includes at least one of the following: rainy weather, ice and snow weather, and sandstorm weather;
generating the target image based on the one-to-one mapping relation between the classification category and the related image;
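The four-way classification above amounts to two boolean tests. A minimal sketch, with illustrative function and parameter names (the thresholds stand in for the claimed "first preset year limit" and "first preset time"):

```python
def classify_container(service_life, harsh_duration, year_limit, time_limit):
    # Two axes from the claim: service life vs. the preset year limit,
    # and operating duration in severe environments vs. the preset time.
    over_age = service_life > year_limit
    long_harsh = harsh_duration > time_limit
    if over_age and long_harsh:
        return 1      # first category
    if not over_age and long_harsh:
        return 2      # second category
    if not over_age and not long_harsh:
        return 3      # third category
    return 4          # fourth category: over age, short harsh duration
```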
extracting the feature vector in the target image comprises:
graying the target image to generate a gray image:
Gray = 0.299R + 0.587G + 0.114B
wherein R, G and B represent the original pixel values of the red, green and blue channels, respectively, and Gray represents the graying channel pixel value;
normalizing the gray image, wherein I denotes the gray image and (x, y) denotes a pixel point in the gray image;
calculating the gradient amplitude and gradient direction of each pixel point in the gray image:
G(x, y) = √(Gx(x, y)² + Gy(x, y)²)
θ(x, y) = arctan(Gy(x, y) / Gx(x, y))
wherein G(x, y) represents the gradient amplitude, θ(x, y) represents the gradient direction, Gx(x, y) represents the gradient in the horizontal direction, and Gy(x, y) represents the gradient in the vertical direction;
dividing the gray image into cell units of equal size, combining a fixed number of cell units into a block, constructing a histogram of oriented gradients for each cell unit, concatenating the histograms of the cell units contained in each block to obtain the histogram of the whole block, and combining the feature vectors of all the cell units in the blocks to obtain the feature vector in the target image, whose dimension k is:
k = ((a/e − b)/c + 1) × ((d/e − b)/c + 1) × b² × q
wherein a and d represent the height and width of the input image respectively, e and b represent the size values of the cell units and the blocks respectively, c represents the moving step length, and q represents the total number of gradient directions contained in one cell unit;
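The graying, gradient, and dimension steps above follow standard HOG conventions (the original formula images were lost in extraction). A minimal NumPy sketch under those conventions; the function names, the central-difference gradients, and the cell/block unit interpretation (e in pixels, b and c in cells) are assumptions:

```python
import numpy as np

def to_gray(rgb):
    # Standard weighted graying: Gray = 0.299 R + 0.587 G + 0.114 B.
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def gradients(gray):
    # Central-difference horizontal (Gx) and vertical (Gy) gradients,
    # then the per-pixel gradient amplitude and direction.
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def hog_dimension(a, d, e, b, c, q):
    # Number of b x b-cell blocks that fit in an a x d image with
    # e x e-pixel cells and a stride of c cells; each block contributes
    # b * b * q histogram values.
    blocks_y = (a // e - b) // c + 1
    blocks_x = (d // e - b) // c + 1
    return blocks_y * blocks_x * b * b * q
```

With the classic 128 × 64 detection window, 8 × 8-pixel cells, 2 × 2-cell blocks, a one-cell stride and 9 orientation bins, this yields the familiar 3780-dimensional HOG descriptor.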
performing dimension reduction processing on the target image of the first category, wherein the dimension reduction processing comprises the following steps:
forming a matrix X of z rows and k columns by the feature vectors in the target image, wherein z represents the number of the feature vectors, and k represents the dimension of the feature vectors;
zero-equalizing all rows of the matrix X;
solving a covariance matrix of the matrix X;
obtaining the eigenvalue and corresponding eigenvector of the covariance matrix;
arranging the eigenvectors into a matrix in rows from top to bottom in descending order of their corresponding eigenvalues, and taking the first m rows to form a matrix R, namely the data of the first-category target image after dimension reduction to m dimensions;
performing secondary processing on the first recognition result to generate a second recognition result includes:
if the category corresponding to the target image input into the convolutional neural network model is a container of a first category, a second category or a fourth category, processing the first recognition result output by the convolutional neural network model:
acquiring a central coordinate value of a lock hole in the first recognition result;
calculating the difference between the central coordinate value and an ideal coordinate value, wherein the ideal coordinate value is obtained as follows:
selecting the same position on an initial target container, shooting a first image of the container having the longest service life and the longest operating duration in a severe environment, and shooting a second image of the container having the shortest service life and the shortest operating duration in a severe environment;
splicing the first image and the second image to obtain an actual coordinate value of the spliced image;
correcting the actual coordinate value based on a time distortion coefficient to obtain the ideal coordinate value;
when the coordinate difference values of any three or more lock holes in the container are smaller than a first preset value, outputting the corresponding ideal coordinate values as the second recognition result.
4. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 2.
5. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 2.
CN202310149782.4A 2023-02-13 2023-02-13 Container lock pin disassembly and assembly safety management method and system based on image analysis Active CN116109606B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310149782.4A CN116109606B (en) 2023-02-13 2023-02-13 Container lock pin disassembly and assembly safety management method and system based on image analysis


Publications (2)

Publication Number Publication Date
CN116109606A CN116109606A (en) 2023-05-12
CN116109606B true CN116109606B (en) 2023-12-08

Family

ID=86267188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310149782.4A Active CN116109606B (en) 2023-02-13 2023-02-13 Container lock pin disassembly and assembly safety management method and system based on image analysis

Country Status (1)

Country Link
CN (1) CN116109606B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992824A (en) * 2017-11-30 2018-05-04 努比亚技术有限公司 Take pictures processing method, mobile terminal and computer-readable recording medium
CN109241903A (en) * 2018-08-30 2019-01-18 平安科技(深圳)有限公司 Sample data cleaning method, device, computer equipment and storage medium
CN209618706U (en) * 2019-01-11 2019-11-12 无锡华东重机吊具制造有限公司 Container spreader open locking instruction device
WO2021062619A1 (en) * 2019-09-30 2021-04-08 上海成业智能科技股份有限公司 Container lock pin sorting method and apparatus, and device and storage medium
WO2021062615A1 (en) * 2019-09-30 2021-04-08 上海成业智能科技股份有限公司 Method and device for mounting and dismounting container lock pin, apparatus, and storage medium
CN115180512A (en) * 2022-09-09 2022-10-14 湖南洋马信息有限责任公司 Automatic loading and unloading method and system for container truck based on machine vision


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Neighborhood Aware Caching and Interest Dissemination Scheme for Content Centric Networks;Amitangshu Pal;《IEEE》;3900 - 3917 *
Application of intelligent sensing technology in intelligent loading and unloading at container terminal yards;Tang Bo;《港口科技》;1-7 *


Similar Documents

Publication Publication Date Title
CN112016438B (en) Method and system for identifying certificate based on graph neural network
CN103268481B (en) A kind of Text Extraction in complex background image
CN109285105B (en) Watermark detection method, watermark detection device, computer equipment and storage medium
CN111401371A (en) Text detection and identification method and system and computer equipment
CN110189341B (en) Image segmentation model training method, image segmentation method and device
CN108985161B (en) Low-rank sparse representation image feature learning method based on Laplace regularization
CN116109606B (en) Container lock pin disassembly and assembly safety management method and system based on image analysis
JP7320570B2 (en) Method, apparatus, apparatus, medium and program for processing images
US11922312B2 (en) Image classification system, image classification method, and image classification program
CN111104831A (en) Visual tracking method, device, computer equipment and medium
EP4220545A1 (en) Abnormality detection device, abnormality detection method, and abnormality detection system
CN105654138A (en) Orthogonal projection and dimensionality reduction classification method and system for multidimensional data
US20160379138A1 (en) Classifying test data based on a maximum margin classifier
RU2297039C2 (en) Method for recognizing complex graphical objects
CN109726722B (en) Character segmentation method and device
CN114741697B (en) Malicious code classification method and device, electronic equipment and medium
CN116704281A (en) Model training method, device, image recognition method and computer equipment
CN109829745A (en) Business revenue data predication method, device, computer equipment and storage medium
CN109767263A (en) Business revenue data predication method, device, computer equipment and storage medium
CN115424267A (en) Rotating target detection method and device based on Gaussian distribution
CN111241974B (en) Bill information acquisition method, device, computer equipment and storage medium
CN111161303A (en) Marking method, marking device, computer equipment and storage medium
CN111709479B (en) Image classification method and device
CN113538291B (en) Card image inclination correction method, device, computer equipment and storage medium
CN116822205B (en) Rapid fault early warning method for multi-dimensional ring main unit

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant