CN114155438A - Container loading and unloading safety detection method and system - Google Patents

Container loading and unloading safety detection method and system Download PDF

Info

Publication number
CN114155438A
Authority
CN
China
Prior art keywords
machine learning
learning model
image
encoder
inputting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111481868.4A
Other languages
Chinese (zh)
Inventor
高聪
杭珂烨
赵增民
季彬
陆思烽
肖梓贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Feiyan Intelligent Technology Co ltd
Original Assignee
Nanjing Feiyan Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Feiyan Intelligent Technology Co ltd filed Critical Nanjing Feiyan Intelligent Technology Co ltd
Priority to CN202111481868.4A priority Critical patent/CN114155438A/en
Publication of CN114155438A publication Critical patent/CN114155438A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a container loading and unloading safety detection method and system, wherein the method comprises the following steps: training a neural-network-based machine learning model with multiple groups of training data to obtain a converged machine learning model, wherein the groups of training data come from a field operation data set; photographing, through a camera, the lock head (twist-lock) connecting the container on the container truck under the gantry crane, to obtain a photo; inputting the photo into the machine learning model; and acquiring a result output by the machine learning model, the result indicating whether the lock head is in a safe state. The application thereby solves the safety hazard caused in the prior art by relying on manual observation to judge the container lock head, and reduces labor cost.

Description

Container loading and unloading safety detection method and system
Technical Field
The application relates to the field of artificial intelligence, in particular to a container loading and unloading safety detection method and system.
Background
One of the major safety hazards faced by container terminals (including conventional terminals and intelligent, modernized terminals) is the lifting of the container truck caused by lock heads that are not fully unlocked while inbound and outbound containers are being loaded and unloaded. Existing solutions to this safety problem rely on manual labor. A conventional terminal must deploy on-site personnel to patrol and judge the situation through direct observation and professional experience, which increases labor cost and is inconvenient to manage. An existing intelligent, modernized terminal merely replaces on-site work by installing cameras in the field: whether the container truck is being lifted is still judged by people watching real-time video data, so labor cost is not noticeably reduced while the complexity of manual operation increases.
Disclosure of Invention
The embodiments of the application provide a container loading and unloading safety detection method and system, which at least solve the safety hazard caused in the prior art by relying on manual observation to judge the lock head between the container and the container truck.
According to one aspect of the present application, there is provided a container loading and unloading safety detection method, comprising: training a neural-network-based machine learning model with multiple groups of training data to obtain a converged machine learning model, wherein the groups of training data come from a field operation data set; photographing, through a camera, the lock head connecting the container on the container truck under the gantry crane, to obtain a photo; inputting the photo into the machine learning model; and acquiring a result output by the machine learning model, the result indicating whether the lock head is in a safe state.
Further, training the neural-network-based machine learning model includes: encoding the field operation data set by a ResNet-based encoder; and inputting the encoded field operation data set into a clustering-based machine learning model for classification.
Further, encoding the field operation data set by the ResNet-based encoder comprises: taking an image from the field operation data set; inputting the image into the encoder; receiving the low-dimensional code output by the encoder; and inputting the low-dimensional code into a decoder to obtain an image of the same size as the input image.
Further, low-dimensional means that the dimension of the output code is smaller than the dimension of the image before it is input into the encoder, where the dimension indicates the number of features in the image.
Further, the method may also comprise: raising an alarm when the lock head is not in a safe state.
According to another aspect of the present application, there is also provided a container loading and unloading safety detection system, comprising: a training module, configured to train a neural-network-based machine learning model with multiple groups of training data to obtain a converged machine learning model, wherein the groups of training data come from a field operation data set; a shooting module, configured to photograph, through a camera, the lock head connecting the container on the container truck under the gantry crane, to obtain a picture; an input module, configured to input the photo into the machine learning model; and an obtaining module, configured to obtain a result output by the machine learning model, the result indicating whether the lock head is in a safe state.
Further, the training module is configured to: encode the field operation data set by a ResNet-based encoder; and input the encoded field operation data set into a clustering-based machine learning model for classification.
Further, the training module is configured to: take an image from the field operation data set; input the image into the encoder; receive the low-dimensional code output by the encoder; and input the low-dimensional code into a decoder to obtain an image of the same size as the input image.
Further, low-dimensional means that the dimension of the output code is smaller than the dimension of the image before it is input into the encoder, where the dimension indicates the number of features in the image.
Further, the system may also comprise: an alarm module, configured to raise an alarm when the lock head is not in a safe state.
In the embodiments of the application, a neural-network-based machine learning model is trained with multiple groups of training data to obtain a converged machine learning model, wherein the groups of training data come from a field operation data set; the lock head connecting the container on the container truck under the gantry crane is photographed through a camera to obtain a photo; the photo is input into the machine learning model; and a result output by the machine learning model is acquired, the result indicating whether the lock head is in a safe state. The application thereby solves the safety hazard caused in the prior art by relying on manual observation to judge the container lock head, and reduces labor cost.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
FIG. 1 is a schematic illustration of the lock head hoisting position according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a neural-network-based lock head anti-lifting method according to an embodiment of the present application;
FIG. 3 is a diagram of a residual block according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a principle according to an embodiment of the present application;
FIG. 5 is a schematic diagram of message output logic according to an embodiment of the present application;
fig. 6 is a flow chart of a container handling security detection method according to an embodiment of the present application.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from the one presented herein.
In this embodiment, a container handling security detection method is provided, and fig. 6 is a flowchart of a container handling security detection method according to an embodiment of the present application, where as shown in fig. 6, the flowchart includes the following steps:
Step S602: training a neural-network-based machine learning model with multiple groups of training data to obtain a converged machine learning model, wherein the groups of training data come from a field operation data set; the field operation data set comprises images and a label configured for each image, the label at least identifying whether the state of the gantry crane and the container lock head in the image is safe.
There are many ways to train. For example, the field operation data set may be encoded by a ResNet-based encoder, and the encoded field operation data set may then be input into a clustering-based machine learning model for classification. Optionally, encoding the field operation data set by the ResNet-based encoder comprises: taking an image from the field operation data set; inputting the image into the encoder; receiving the low-dimensional code output by the encoder; and inputting the low-dimensional code into a decoder to obtain an image of the same size as the input image, where low-dimensional means that the dimension of the output code is smaller than the dimension of the image before it was input into the encoder, the dimension indicating the number of features in the image.
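The encode-then-decode round trip described above can be illustrated with a toy example. This is only a dimensional sketch: the random linear maps, the 8x8 image, and the 64-to-16 reduction are placeholder assumptions, not the ResNet encoder or decoder of the embodiment.

```python
import numpy as np

# Toy illustration of the "low-dimensional code": the code output by the
# encoder has fewer dimensions than the input image, and the decoder maps it
# back to an image of the same size as the input. The linear maps below are
# placeholders for the ResNet-based encoder / decoder.

rng = np.random.default_rng(42)
image = rng.random((8, 8))            # input image, 64 values
flat = image.reshape(-1)              # 64-dimensional vector

W_enc = rng.random((16, 64))          # "encoder": 64 -> 16 dimensions
W_dec = rng.random((64, 16))          # "decoder": 16 -> 64 dimensions

code = W_enc @ flat                   # low-dimensional code (16 < 64)
reconstruction = (W_dec @ code).reshape(image.shape)  # same size as input
```

With trained weights, the reconstruction would approximate the input; here only the shapes are meaningful.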
Step S604: photographing, through the camera, the lock head connecting the container on the container truck under the gantry crane, to obtain a photo.
Step S606, inputting the photo into the machine learning model.
Step S608, obtaining a result output by the machine learning model, where the result is used to indicate whether the lock head is in a safe state.
As an optional implementation, an alarm may also be raised when the lock head is not in a safe state.
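The flow of steps S602 to S608, together with the optional alarm, can be sketched as follows. Every function and value here is an illustrative assumption standing in for the trained encoder and clustering model, not code from the application:

```python
# Hypothetical sketch of the detection loop (steps S604-S608 plus the
# optional alarm). encode() and classify() are stand-ins for the trained
# ResNet encoder and the clustering-based classifier.

def encode(image):
    """Placeholder encoder: collapse each row of the image to its mean."""
    return [sum(row) / len(row) for row in image]

def classify(code, safe_centroid, unsafe_centroid):
    """Placeholder classifier: assign the code to the nearer of the two
    cluster centres (safe / unsafe), by squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    if dist(code, safe_centroid) <= dist(code, unsafe_centroid):
        return "safe"
    return "unsafe"

def detect(image, safe_centroid, unsafe_centroid):
    """Photo in, verdict out; raise the alarm when the verdict is unsafe."""
    verdict = classify(encode(image), safe_centroid, unsafe_centroid)
    if verdict == "unsafe":
        print("ALARM: lock head may still be engaged, stop the operation")
    return verdict
```

A real deployment would feed camera frames through the trained model instead of these placeholders.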
The above steps solve the safety hazard caused in the prior art by relying on manual observation to judge the container lock head, thereby reducing labor cost.
The following description refers to an optional embodiment, which provides a method for clustering with a ResNet-based automatic encoder. Using machine learning, it identifies and judges the lifting of the container truck and issues an alarm, thereby enabling real-time wharf monitoring and intelligent wharf management.
This embodiment divides the operating state of the container truck into a safe operating state and an unsafe operating state. The safe operating state is an operating state in which no abnormality occurs. The unsafe operating state mainly covers the two lifting situations caused by a lock head that remains locked: inside lifting and outside lifting. Inside and outside are defined relative to the mounting position of the field camera.
As shown in fig. 1, the high-definition camera is usually installed near the gantry crane, facing the container truck (usually a 20-foot or 40-foot model). For convenience of description, in this embodiment the side of the container truck near the container stack is called the outside or left side of the truck; correspondingly, the side on which the high-definition camera is mounted is the inside or right side. In inside lifting, the lock head on the camera side of the truck is not unlocked in time, and the high-definition camera can capture the clearly abnormal rising posture of the container. In outside lifting, the lock head on the side of the truck away from the camera is not unlocked in time, and the high-definition camera cannot capture a clear abnormal rise of the container.
In this embodiment, if existing cameras can capture all of these situations, no additional cameras need to be added; if some situations cannot be captured, new cameras can be added to cover them.
As shown in fig. 2, this embodiment provides a method for clustering with a ResNet-based automatic encoder, mainly comprising three stages: data acquisition, data processing, and real-time feedback. In the figure, squares represent the hardware and software parts of the system, diamonds represent data content, arrows indicate the direction of data flow, bar charts represent the field operation data sets, and clouds represent the automatic encoder defined here.
Extensive early exploration of VGG-style neural networks showed that, in general, the more parameters, layers, and structural complexity a network has, the stronger its expressive power, and thus the higher its classification accuracy can be. However, once a deep CNN reaches a certain depth, continuing to deepen the network slows convergence and reduces classification accuracy. ResNet was introduced to solve this problem. Unlike a VGG network in the usual sense, which uses its parameter layers to establish a direct mapping between input and output, ResNet introduces residual modules that use several parameter layers to learn the residual between input and output, as shown in fig. 3. In this embodiment, if the input of a layer is X and its output is H(X), the mapping of a plain VGG-style network can be expressed as H(X) = F(X), while the mapping of the residual structure can be expressed as H(X) = F(X) + X, where F(X) = H(X) - X is the residual. Experiments show that a network built by simply stacking depth tends to incur higher training error in later stages, while increasing depth through residual modules yields a considerable accuracy gain. Compared with a plain VGG, ResNet also converges faster, making it easier to use computing resources efficiently in actual work and improving running speed.
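A minimal numeric sketch of the two mappings just described, assuming toy one-dimensional inputs and placeholder weight matrices rather than a trained network:

```python
import numpy as np

# Sketch of the residual connection: the parameter layers compute the
# residual F(X), and the block outputs H(X) = F(X) + X, so when the layers
# contribute nothing (F == 0) the block reduces to the identity mapping.
# Weights are placeholders, not a trained ResNet.

def relu(x):
    return np.maximum(x, 0.0)

def residual_f(x, w1, w2):
    """Two parameter layers computing the residual F(X)."""
    return w2 @ relu(w1 @ x)

def plain_block(x, w1, w2):
    """Direct mapping H(X) = F(X), as in a plain VGG-style stack."""
    return residual_f(x, w1, w2)

def residual_block(x, w1, w2):
    """Residual mapping H(X) = F(X) + X via the identity shortcut."""
    return residual_f(x, w1, w2) + x
```

With zero weights, the residual block passes its input through unchanged while the plain block outputs zero, which is why depth is easier to add with residual modules.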
After selecting a suitable neural network structure, this embodiment first implements a ResNet automatic encoder (configurations a and b form a comparison between the ResNet automatic encoder and the original encoder). Fig. 4 is a schematic diagram of the principle according to an embodiment of the present application. With reference to fig. 4, the main steps are as follows:
1. prepare the input picture and feed it into the network structure;
2. input the image into the encoder, which consists of standard ResNet convolutional layers with ReLU activations and max-pooling layers;
3. output a low-dimensional code;
4. input the code into the decoder (composed of transposed convolution layers and unpooling layers);
5. obtain an output picture of the same size as the input picture.
This embodiment implements the encoder part of the automatic encoder with the same structure as the feature-extraction layers of the ResNet18 convolutional network, then discards the original classifier and pooling layer, because this embodiment only needs to reuse the convolutional layers to extract the corresponding features (for example, the ResNet Block in configuration b, compared with the original ResNet structure in configuration c, discards the fully-connected layer and the SoftMax layer). Next, this embodiment rewrites the pooling layer so that it also returns the required pool indices: the forward() function is overridden and given the additional task of returning the pool indices, so it usually has two outputs, the pooled values and the pool indices. Through this forward() function, the encoder is executed layer by layer, generating the pool indices and returning an ordered set.
The decoder in this embodiment can be regarded simply as the transpose of the encoder, i.e., the mirror image of the encoder. The decoder comprises two parts, two-dimensional transposed convolutions and two-dimensional unpooling layers, and drops the original normalization and activation functions. Its forward() function is likewise rewritten so that, during unpooling, the decoder passes forward both the tensor of the decoded image and the pool indices.
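The pool-index plumbing described above (a forward pass with two outputs, and decoder-side unpooling that reuses those indices) can be sketched in one dimension. This is a hedged toy, not the two-dimensional layers of the embodiment:

```python
# 1-D toy of max pooling with indices and the matching unpooling step: the
# encoder records where each maximum came from, and the decoder scatters the
# pooled values back to those positions so the output regains the input size.

def max_pool_with_indices(values, window=2):
    """Forward pass of the rewritten pooling layer: two outputs, the pooled
    values and the pool indices (position of each maximum in the input)."""
    pooled, indices = [], []
    for start in range(0, len(values), window):
        chunk = values[start:start + window]
        best = max(range(len(chunk)), key=lambda i: chunk[i])
        pooled.append(chunk[best])
        indices.append(start + best)
    return pooled, indices

def max_unpool(pooled, indices, length):
    """Decoder-side unpooling: place each pooled value back at its recorded
    index; all other positions are zero-filled."""
    out = [0.0] * length
    for value, index in zip(pooled, indices):
        out[index] = value
    return out
```

Recording the indices is what lets the decoder mirror the encoder exactly rather than guessing where each maximum was.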
After the automatic encoder is trained, the encoder represents the high-level features of the image data set in a low-dimensional space, where the codes of images with high-level similarity lie closer together, as measured by Euclidean distance, than randomly paired codes. Therefore, after the automatic encoder is implemented, this embodiment mainly computes the Euclidean distances between the pictures' codes and feeds them into k-means clustering with the k value set to 2, corresponding to the safe and unsafe states, so that the pictures are classified automatically.
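The final clustering step can be sketched as follows, assuming the codes have already been produced by the encoder. This is a plain k-means with k = 2 on Euclidean distance; a production system would more likely call a library implementation:

```python
import numpy as np

# Minimal k-means with k = 2 over encoder codes: two clusters, intended to
# correspond to the "safe" and "unsafe" states described above.

def kmeans_two(codes, iters=20, seed=0):
    codes = np.asarray(codes, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialise the two centres from two distinct codes.
    centres = codes[rng.choice(len(codes), size=2, replace=False)]
    for _ in range(iters):
        # Assign each code to the nearest centre by Euclidean distance.
        dists = np.linalg.norm(codes[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centre; keep the old centre if its cluster is empty.
        for k in range(2):
            if np.any(labels == k):
                centres[k] = codes[labels == k].mean(axis=0)
    return labels, centres
```

Which numeric label ends up meaning "safe" depends on initialisation, so a labelled reference image per cluster would be needed to name the clusters.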
In the data acquisition stage, the high-definition camera is installed at a designated fixed position on the operation site to capture real-time pictures of the field operation, and the real-time information is transmitted back for analysis and processing by the network model of this embodiment.
In the data processing stage, the message output logic is as shown in fig. 5. The model first judges whether the site is in a working state. If the model outputs a non-working state, the non-working state information is returned directly. If the model outputs a working state, it then judges whether the operation is in a safe state. If the model outputs a safe state, the safe-working-state information is returned directly; if the model outputs an unsafe state, an alarm mechanism is triggered to remind field personnel (including wharf workers and drivers) that the container truck is being lifted and that operations must be stopped for handling. Judging between the working and non-working states is an optional implementation.
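The message output logic of fig. 5 reduces to a small decision function; the state names and message strings below are illustrative assumptions, not text from the application:

```python
# Sketch of the fig. 5 message output logic: check the working state first,
# then the safety state, and raise the alarm only for "working and unsafe".

def output_message(working, safe):
    if not working:
        return "status: not working"
    if safe:
        return "status: working, safe"
    # Unsafe while working: alarm so dock workers and the truck driver
    # can stop the operation for handling.
    return "ALARM: container truck may be lifted, stop and inspect"
```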
In this embodiment, an electronic device is provided, comprising a memory in which a computer program is stored and a processor configured to run the computer program to perform the method in the above embodiments.
The programs described above may be run on a processor or may be stored in memory (also referred to as computer-readable media), which includes volatile and non-volatile, removable and non-removable media implementing information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
These computer programs may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks, and corresponding steps may be implemented by different modules.
This embodiment also provides such an apparatus or system, called a container loading and unloading safety detection system, comprising: a training module, configured to train a neural-network-based machine learning model with multiple groups of training data to obtain a converged machine learning model, wherein the groups of training data come from a field operation data set; a shooting module, configured to photograph, through a camera, the lock head connecting the container on the container truck under the gantry crane, to obtain a picture; an input module, configured to input the photo into the machine learning model; and an obtaining module, configured to obtain a result output by the machine learning model, the result indicating whether the lock head is in a safe state.
The system or the apparatus is used for implementing the functions of the method in the foregoing embodiments, and each module in the system or the apparatus corresponds to each step in the method, which has been described in the method and is not described herein again.
For example, the training module is configured to: encode the field operation data set by a ResNet-based encoder; and input the encoded field operation data set into a clustering-based machine learning model for classification. Optionally, the training module is configured to: take an image from the field operation data set; input the image into the encoder; receive the low-dimensional code output by the encoder; and input the low-dimensional code into a decoder to obtain an image of the same size as the input image. Optionally, low-dimensional means that the dimension of the output code is smaller than the dimension of the image before it is input into the encoder, where the dimension indicates the number of features in the image.
For another example, the method may further include: and the alarm module is used for giving an alarm under the condition that the lock head is not in a safe state.
This embodiment requires little physical equipment: only the corresponding high-definition cameras need to be installed on the work site, and no additional sensors are required. The labor cost is greatly reduced, since human intervention is needed only for manual verification after an alarm is raised; this also safeguards the safety of the workers. Because the neural network model is trained in advance, the computing resources consumed at run time are small, and the computing cost of the judging and classifying stage is low.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A container loading and unloading safety detection method is characterized by comprising the following steps:
training a machine learning model based on a neural network by using a plurality of groups of training data to obtain a converged machine learning model; wherein the sets of training data are from a field work data set;
photographing, through a camera, the lock head connecting the container on the container truck under the gantry crane, to obtain a photo;
inputting the photograph into the machine learning model;
and acquiring a result output by the machine learning model, wherein the result is used for indicating whether the lock head is in a safe state.
2. The method of claim 1, wherein training a neural network-based machine learning model comprises:
encoding the field operation data set by an encoder based on ResNet;
and inputting the coded field operation data set into a machine learning model based on clustering for classification.
3. The method of claim 2, wherein encoding the field operation data set via a ResNet based encoder comprises:
taking an image from the field operation data set as input;
inputting the image to an encoder;
receiving a low-dimensional code output by the encoder;
and inputting the low-dimensional code into a decoder to obtain an image with the same size as the input image.
4. The method of claim 3, wherein low dimension means that the dimension of the output code is smaller than the dimension of the image before the image is input to the encoder, the dimension indicating the number of features in the image.
5. The method of any of claims 1 to 4, further comprising:
and alarming under the condition that the lock head is not in a safe state.
6. A container handling security detection system, comprising:
the training module is used for training the machine learning model based on the neural network by using a plurality of groups of training data to obtain a converged machine learning model; wherein the sets of training data are from a field work data set;
the shooting module is used for photographing, through a camera, the lock head connecting the container on the container truck under the gantry crane, to obtain a picture;
an input module to input the photograph into the machine learning model;
and the obtaining module is used for obtaining a result output by the machine learning model, wherein the result is used for indicating whether the lock head is in a safe state.
7. The system of claim 6, wherein the training module is to:
encoding the field operation data set by an encoder based on ResNet;
and inputting the coded field operation data set into a machine learning model based on clustering for classification.
8. The system of claim 7, wherein the training module is configured to:
taking an image from the field operation data set as input;
inputting the image to an encoder;
receiving a low-dimensional code output by the encoder;
and inputting the low-dimensional code into a decoder to obtain an image with the same size as the input image.
9. The system of claim 8, wherein low dimension means that the dimension of the output code is smaller than the dimension of the image before the image is input to the encoder, the dimension indicating the number of features in the image.
10. The system of any one of claims 6 to 9, further comprising:
and the alarm module is used for giving an alarm under the condition that the lock head is not in a safe state.
CN202111481868.4A 2021-12-07 2021-12-07 Container loading and unloading safety detection method and system Pending CN114155438A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111481868.4A CN114155438A (en) 2021-12-07 2021-12-07 Container loading and unloading safety detection method and system

Publications (1)

Publication Number Publication Date
CN114155438A true CN114155438A (en) 2022-03-08

Family

ID=80452730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111481868.4A Pending CN114155438A (en) 2021-12-07 2021-12-07 Container loading and unloading safety detection method and system

Country Status (1)

Country Link
CN (1) CN114155438A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114926916A * 2022-05-10 2022-08-19 上海咪啰信息科技有限公司 5G unmanned aerial vehicle dynamic AI inspection system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680092A * 2017-10-12 2018-02-09 中科视拓(北京)科技有限公司 Container lock detection and early-warning method based on deep learning
CN108639956A * 2018-06-27 2018-10-12 上海沪东集装箱码头有限公司 Container crane intelligent early-warning system and method based on visual detection technology
CN110197499A * 2019-05-27 2019-09-03 江苏警官学院 Container safety hoisting monitoring method based on computer vision
CN112528721A * 2020-04-10 2021-03-19 福建电子口岸股份有限公司 Bridge crane truck safety positioning method and system
CN113177431A * 2021-03-15 2021-07-27 福建电子口岸股份有限公司 Method and system for preventing container truck from being lifted based on machine vision and deep learning
CN113177557A * 2021-03-15 2021-07-27 福建电子口岸股份有限公司 Bowling prevention method and system based on machine vision and deep learning
CN113184707A * 2021-01-15 2021-07-30 福建电子口岸股份有限公司 Method and system for preventing lifting of container truck based on laser vision fusion and deep learning
CN113420646A * 2021-06-22 2021-09-21 天津港第二集装箱码头有限公司 Lock station connection lock detection system and method based on deep learning

Similar Documents

Publication Publication Date Title
Cozzolino et al. Single-image splicing localization through autoencoder-based anomaly detection
CA3123632A1 (en) Automated inspection system and associated method for assessing the condition of shipping containers
CN111797826B (en) Large aggregate concentration area detection method and device and network model training method thereof
CN114155438A (en) Container loading and unloading safety detection method and system
CN113033553A (en) Fire detection method and device based on multi-mode fusion, related equipment and storage medium
CN116052026B (en) Unmanned aerial vehicle aerial image target detection method, system and storage medium
CN107609510A Truck positioning method and apparatus under a gantry crane
CN116502810B (en) Standardized production monitoring method based on image recognition
CN113936299A (en) Method for detecting dangerous area in construction site
CN111832345A (en) Container monitoring method, device and equipment and storage medium
CN111967473B (en) Grain depot storage condition monitoring method, equipment and medium based on image segmentation and template matching
CN117011280A (en) 3D printed concrete wall quality monitoring method and system based on point cloud segmentation
CN117218545A (en) LBP feature and improved Yolov 5-based radar image detection method
CN110197499B (en) Container safety hoisting monitoring method based on computer vision
CN107832696A Electric operating object in-situ security feature identifying system
Rajesh et al. Smart Parking system using Image processing
CN112509050B (en) Pose estimation method, anti-collision object grabbing method and device
CN115205820A (en) Object association method, computer device, computer-readable storage medium, and vehicle
CN112183183A (en) Target detection method and device and readable storage medium
CN116128734B (en) Image stitching method, device, equipment and medium based on deep learning
CN117789185B (en) Automobile oil hole gesture recognition system and method based on deep learning
CN116934555B (en) Security and elimination integrated management method and device based on Internet of things
Yuan et al. A constructing vehicle intrusion detection algorithm based on BOW presentation model
Gopinath et al. Deep Learning based Automated Parking Lot Space Detection using Aerial Imagery
CN117456353A (en) Management system of river and lake intelligent boundary pile

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination