CN112465706A - Automatic gate container residual inspection method - Google Patents

Automatic gate container residual inspection method

Info

Publication number
CN112465706A
CN112465706A
Authority
CN
China
Prior art keywords
container
image
gradient
edge
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011513861.1A
Other languages
Chinese (zh)
Inventor
汤春明 (Tang Chunming)
陈朋 (Chen Peng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Polytechnic University
Original Assignee
Tianjin Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Polytechnic University
Priority to CN202011513861.1A
Publication of CN112465706A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformation in the plane of the image
    • G06T3/40 - Scaling the whole image or part thereof
    • G06T3/4038 - Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/12 - Edge-based segmentation
    • G06T7/181 - Segmentation; Edge detection involving edge growing; involving edge linking
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/32 - Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20024 - Filtering details
    • G06T2207/20036 - Morphological image processing
    • G06T2207/20076 - Probabilistic image processing
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection

Abstract

Disclosed is an automatic gate container damage inspection method. First, a container damage inspection framework with multifunctional edge nodes at its core is proposed on the basis of the edge computing paradigm, and the framework is then decomposed, according to the gate-related workflow, into three mutually associated modules: a container detection module, a container stitching module and a container damage inspection module. For the tasks to be completed by the three modules, function analysis, algorithm implementation and experimental testing are studied one by one. On the algorithm side, a container detection start-up procedure based on the frame-difference method is first designed to decide whether the program should start running. Second, a complete picture of the container is obtained by image stitching based on the phase correlation method. Finally, for container damage inspection, detection methods for deformation of the container's upper beam, dents on the container surface and rust on the container surface are studied, and, on the basis of analyzing the image characteristics of the three defects, corresponding detection algorithms are proposed using machine vision techniques.

Description

Automatic gate container residual inspection method
Technical Field
The method belongs to the fields of artificial intelligence and machine vision, relates to a method for automatically inspecting damage on containers passing through a gate, and can carry out this damage inspection automatically.
Background
In recent years, the container throughput of major ports has kept rising, and making port container operations intelligent has become an urgent task. Container damage inspection, an important part of the container business, is one of the indispensable research directions, yet it has rarely been studied. At present, container inspection at the gates of major ports relies mainly on manual inspection; at a few gates, staff use auxiliary equipment, but this is still manual inspection in essence. Manual inspection has many problems that are difficult to solve, such as the high level of experience it requires and the dangers of the working environment.
Domestic research on container gate inspection technology mainly addresses two aspects: damage inspection and container number recognition. For container damage inspection, semi-automatic techniques are mostly used, i.e., auxiliary equipment such as surveillance cameras or sensors is deployed at the gate to help workers in a monitoring room make a judgment. For example, Jeng et al. proposed a sensor-based method for detecting the container truck, photographing each face of the container with CCD cameras and displaying the images on a monitor for manual inspection; Xiamen Foreign Vessel Tally Co., Ltd. proposed a container damage inspection device with a similar principle, upgraded mainly on the equipment side, comprising a PLC (programmable logic controller) master control module and high-definition dome cameras, one mounted on each side for shooting. As an exploration of automatic container damage inspection, Wu Jun proposed an optimized container damage inspection scheme: on the basis of PLC control and surveillance snapshots, OCR container number recognition is introduced to automatically identify each face of the container, and the images of all faces are then arranged in a single image for the inspector to check, which reduces the workload. As for methods dedicated to detecting the various kinds of damage on the container surface, no relevant research has been found so far. For container number recognition, approaches combining machine learning and machine vision are currently the most studied. Zhang Jieqi et al. proposed a method based on PCA and Bayesian threshold estimation, which locates and recognizes the container number by analyzing the gray values of the image, but it can only recognize a single type of container number and requires a long preprocessing time. It uses morphological processing in the number-locating module, ignores the diversity of containers and therefore tends to misjudge, requires a large library of character templates to be built manually, consumes considerable resources and achieves only average recognition accuracy. Although these studies have improved container number recognition in some respects, limitations remain overall. Foreign port gates have developed rapidly in recent years, leading in both their state of development and capital investment. In 2018 the Port of Hamburg, Germany introduced the smartPORT Logistics (SPL) app, which has improved the efficiency of truck transport at the port. The currently released SPL 1.0 app aims to resolve the technical bottlenecks faced by the truck transport business. The app provides very useful information to logistics companies; truck drivers can download it on their smartphones, see when they may enter the port, and receive the various instructions sent to them, so that the waiting time of trucks at the gate can be reduced to below 30 minutes. As the largest trading port in Europe, the Port of Rotterdam, which is further building itself into a world information port, holds a leading international position in exploring and realizing port intelligence.
The Maasvlakte II project, put into operation in April 2015, integrates advanced technologies such as intelligent control, the Internet of Things and big-data computing, and achieves automatic integrated control of work processes such as transport operations at the quay front and the related business and road access in the yard. After construction is completed in 2035, terminal productivity is expected to improve by more than 50%. Advanced information technologies such as local area networks, cloud computing, mobile terminals, the Internet of Things, GIS (geographic information systems) and video surveillance are used for the informatization of the port. The Virginia International Gateway container terminal in the United States adopts a "single-trolley quay crane + straddle carrier + automated rail crane" operating process and is a semi-automated terminal. Global Container Terminal Bayonne is the container terminal closest to the entrance of the Port of New York and New Jersey; it uses a streamlined gate process in which every gate is equipped with optical character recognition (OCR) software, radio frequency identification (RFID) and remote communication technology to maximize truck throughput. It operates in a semi-automated state, but all operations are guided and monitored remotely from a back office. Long Beach Container Terminal adopts a "quay crane + automated guided vehicle + automated rail crane" workflow and realizes horizontal transport with battery-driven lifting automated guided vehicles. In the United States, owing to the influence of dock labor unions, foreign-invested terminals have developed toward full automation while domestic terminals have developed toward semi-automation. The current situation and trend in China are as follows. The Pacific International Container Terminal uses a distributed OCR container number acquisition system to recognize container numbers and ISO codes; the distributed acquisition system photographs the four container-number positions on the box body while the container truck is moving, completing accurate recognition of the box bodies and container numbers of both single and twin containers. Its container body damage inspection system produces flat images of the five visible faces of the box body through dynamic image stitching, after which damage inspection is completed manually. The driver swipes a card as prompted by the lane's self-service terminal, the system operates on site using key data stored in the RFID card, and the gate barrier is raised automatically after the card is swiped. Since 2018, the Gaolan international container terminal intelligent electronic gate system under the Zhuhai Port Group has operated 8 lanes (4 in, 4 out), with functions such as intelligent weighing and intelligent barrier control, and is equipped with license plate capture and recognition equipment, automatic container number recognition equipment, automatic IC card recognition equipment, electronic truck scales, electronic license plate readers, electronic lock recognition equipment, lane control boxes and other supervision equipment.
In 2017, Qingdao Port introduced an intelligent gate independently developed by QQCTU, which became the first truly intelligent gate in China. Compared with traditional intelligent schemes, the system needs fewer cameras, which reduces construction and maintenance costs. Its functions include automatic recognition of the numbers of different container types, capture of container truck license plates, recognition of the lead-seal two-dimensional code, photographing of the front and rear box doors, and photographing of the five faces of the box body for damage inspection. The system recognizes quickly and accurately, greatly reduces labor input and achieves highly efficient operation. During the critical period of resuming work and production at ports and terminals, in order to achieve unattended, paperless gate passage, a paperless container platform was developed and put into use at the Chongqing Guoyuan terminal. The accompanying i-Gate intelligent gate system is designed for intelligent management of container truck lanes; its main work covers automatic weight collection, container number photographing and recognition, box body damage-inspection photographing, license plate recognition, two-dimensional code recognition, IC card recognition and human-machine interaction for passing container trucks, achieving rapid, unattended gate opening for container trucks. The recognition rate of the i-Gate system leads the industry, exceeding 98%. All parties can also make self-service reservations via the web, WeChat or an app, which improves working efficiency. Although a certain amount of research has been carried out at home and abroad on the overall architecture, no specific implementation scheme or algorithm for a port gate container damage inspection system has been proposed so far.
The method first proposes, on the basis of the edge computing paradigm, a container damage inspection framework with multifunctional edge nodes at its core. The framework is then decomposed into three mutually associated modules: container detection, container stitching and container damage inspection. For the tasks to be completed by the three modules, function analysis, algorithm implementation and experimental testing are studied one by one. On the algorithm side, the method first designs a container detection start-up procedure based on the frame-difference method to decide whether the program should start running. Second, a complete picture of the container is obtained by image stitching based on the phase correlation method. Finally, detection methods for deformation of the container's upper beam, dents on the container surface and rust on the container surface are studied; on the basis of analyzing the image characteristics of the three defects, corresponding detection algorithms are proposed using machine vision techniques.
Disclosure of Invention
The method first proposes, on the basis of the edge computing paradigm, a container damage inspection framework with multifunctional edge nodes at its core. The framework is then decomposed into three mutually associated modules: a container detection module, a container stitching module and a container damage inspection module. For the tasks to be completed by the three modules, function analysis, algorithm implementation and experimental testing are studied one by one. On the algorithm side, a container detection start-up procedure based on the frame-difference method is first designed to decide whether the program should start running. Second, a complete picture of the container is obtained by image stitching based on the phase correlation method. Finally, detection methods for deformation of the container's upper beam, dents on the container surface and rust on the container surface are studied; on the basis of analyzing the image characteristics of the three defects, corresponding detection algorithms are proposed using machine vision techniques. The technical scheme of the method comprises the following three steps:
Step 1: monitor the lanes with the container-detection start-up procedure to decide whether to start the entire system;
Step 2: stitch the container images to obtain a panoramic image of the container;
Step 3: detect the three kinds of damage to the container surface, namely deformation of the upper beam and dents and rust on the container surface, respectively.
Compared with the prior art, the method has the following outstanding features:
1. Related work can be carried out by remote control, reducing the workload of field workers and labor costs;
2. The harsh working environment of the gate's container handling area is avoided, and risks to the life and safety of field workers are reduced;
3. Containers can pass through the gate faster, improving the efficiency of related work;
4. The system can run continuously for long periods, removing the requirement for extensive work experience and avoiding problems such as visual fatigue.
Drawings
FIG. 1: flow chart of the method.
FIG. 2: Implementation of the container detection start-up procedure. 2-1 shows the monitoring field of view and the ROI lane area, 2-2 the lane state without a vehicle, and 2-3 the lane state with a vehicle.
FIG. 3: Image stitching result, showing the stitching output of the method.
FIG. 4: Upper-beam fitting results. 4-1 is the fit for a normal beam and 4-2 the fit for a deformed beam.
FIG. 5: Detection of dents on the box body surface. 5-1 is the input image and 5-2 the output image.
FIG. 6: Detection of rust on the box body surface. 6-1 is the input image, 6-2 the agemap output, and 6-3 the boundary segmentation output.
Detailed Description
The lane surveillance video is exported and decomposed into image frames as input; after processing by the method, the corresponding results are obtained.
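For illustration only (not part of the original disclosure), a minimal Python/OpenCV sketch of exporting frames from the lane video is given below; the file name is a placeholder.

import cv2

cap = cv2.VideoCapture("gate_lane.mp4")   # placeholder path to the exported lane video
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)                   # each frame is passed to the method as input
cap.release()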
Seven channels of surveillance video from a gate of Tianjin Port are used for verification; the frame rate is 25 frames/s and each video averages 48 minutes. Using the number of successful detections Tp, the number of false detections Fp, the number of missed detections Fn, the detection accuracy Acc, the recall rate R and the F1 score as indexes, the performance of each algorithm in the damage inspection system is shown in Table 1.
TABLE 1: Results of the respective algorithms (the table is reproduced only as an image in the original publication).
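The indexes Acc, R and F1 above can be computed from Tp, Fp and Fn. A minimal sketch follows, assuming the usual definitions, Acc as precision Tp/(Tp+Fp), R as recall Tp/(Tp+Fn) and F1 as their harmonic mean; the text itself does not spell these definitions out.

def detection_metrics(tp, fp, fn):
    # Assumed definitions: Acc = Tp/(Tp+Fp), R = Tp/(Tp+Fn), F1 = 2*Acc*R/(Acc+R)
    acc = tp / (tp + fp) if (tp + fp) else 0.0
    r = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * acc * r / (acc + r) if (acc + r) else 0.0
    return acc, r, f1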
To evaluate the real-time performance of the algorithm, the average time from a container truck entering the field of view until it stops is counted for each video and compared with the average time the method needs to detect the container, as shown in Table 2.
TABLE 2: Detection time statistics of the system (the table is reproduced only as an image in the original publication).
The data in the tables show that the algorithms of the method perform well and meet the real-time requirement.

Claims (1)

1. An automatic gate container damage inspection method comprising three parts:
first, a container detection start-up procedure judges whether the program should start running; second, a complete image of the container is acquired by phase-correlation-based image stitching; finally, three defects, namely deformation of the container's upper beam, dents on the container surface and rust on the container surface, are detected respectively; on the basis of analyzing the image characteristics of the three defects, corresponding detection algorithms are provided using machine vision techniques, and the method comprises the following steps:
A. Container detection start-up procedure. The algorithm is based on the following facts: before the container is inspected, it must be judged whether a truck is passing through the lane; when a truck passes the gate its speed is relatively low, leaving time for the cameras to shoot and the staff to check, and it is only necessary to know whether a truck is passing, not to obtain a full picture of it. On this basis, to save computing resources and speed up processing, the lane area is first selected as the ROI to be processed, and frame differencing is applied within the ROI on alternate frames: first, the adjacent frames that are read in are each converted to grayscale and low-pass filtered; second, the frame-difference method is applied to the two adjacent frames to obtain a frame-difference image; finally, the frame-difference image is binarized and the information it contains is used to judge whether the truck head has entered the camera's field of view. When the target (the container truck and the container it carries) is not in the monitoring field of view, the picture seen by the camera does not change noticeably and the information in the frame-difference image is 0; when the truck head enters the field of view, the frame-difference image contains the differences between the two images. After the truck head is detected, whether the truck carries a container is detected, again using the frame-difference method: first, the height h_i of the highest white pixel in each column of the difference image and the total number col_1 of columns containing white pixels are counted, and the mean highest point of the image is calculated:
val1 = (1/col_1) * Σ(i=1..col_1) h_i    (1)
Then the current image is divided into a first h×50 region, a second h×100 region and a third h×50 region, and val2_1, val2_2 and val2_3 are calculated for the three regions following the steps of formula (1); images of the front 1/2 and rear 1/2 of the ROI are taken and val3_1 and val3_2 are calculated for the two regions. Finally, if val2_1 > val1 and val2_2 > val1 and val2_3 > val1, the container truck is carrying a container and damage inspection can proceed; if val3_1 > 0 and val3_2 = 0, the truck is judged to be just starting to enter the lane;
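For illustration only (not part of the claim), a minimal Python/OpenCV sketch of the frame-difference check in the lane ROI and of the highest-point statistic of formula (1) is given below; the ROI coordinates, blur kernel and binarization threshold are placeholder values.

import cv2
import numpy as np

def lane_frame_difference(prev_frame, curr_frame, roi=(100, 200, 400, 300), thresh=25):
    # Grayscale and low-pass filter both ROI crops, then binarize their absolute difference.
    x, y, w, h = roi
    a = cv2.GaussianBlur(cv2.cvtColor(prev_frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY), (5, 5), 0)
    b = cv2.GaussianBlur(cv2.cvtColor(curr_frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY), (5, 5), 0)
    diff = cv2.absdiff(a, b)
    _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return binary                          # all zeros when nothing moves in the lane

def highest_point_mean(binary):
    # Formula (1): mean height of the highest white pixel over columns that contain white pixels.
    rows = binary.shape[0]
    heights = [rows - np.flatnonzero(col).min() for col in binary.T if np.count_nonzero(col)]
    return float(np.mean(heights)) if heights else 0.0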
B. Image stitching: let the container image at time t in the monitoring view be f1(x, y) and the container image at time t + s be f2(x, y). If the displacement of the container during time s is (Δx, Δy), the change in position of the same part between f1(x, y) and f2(x, y) can be regarded as the result of a translation by (Δx, Δy), i.e., the relationship between f2(x, y) and f1(x, y) can be expressed as:
f1(x,y)=f2(x-Δx,y-Δy) (2)
The Fourier transform converts this spatial shift into a phase shift in the frequency domain, from which the displacement can be recovered; the Fourier-domain form of formula (2) can be expressed as formula (3):
F1(u,v) = F2(u,v) * e^(-j*2π*(u*Δx + v*Δy))    (3)
where F1(u, v) is the frequency-domain representation of f1 and F2(u, v) is the frequency-domain representation of f2. The power density function is the signal power per unit frequency band and shows how the signal power varies with frequency. Taking the conjugates F1* and F2* of F1 and F2 respectively, the cross power spectrum of f2(x, y) and f1(x, y) is obtained by formula (4):
Hcps(u,v) = [F1(u,v) · F2*(u,v)] / |F1(u,v) · F2*(u,v)| = e^(-j*2π*(u*Δx + v*Δy))    (4)
Next, the inverse Fourier transform of the cross power spectrum Hcps(u, v) is taken:
IFFT(Hcps(u,v))=δ(u-Δx,v-Δy) (5)
Formula (5) yields an impulse function; locating its peak coordinate gives the offset, the images are stitched according to this offset, and once the tail of the vehicle has entered the field of view the stitching is finished and a panoramic image of the container is obtained;
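For illustration only, a minimal NumPy sketch of formulas (3)-(5), i.e., the normalized cross power spectrum and its inverse transform with the peak giving the offset, is shown below; the inputs are assumed to be same-sized grayscale float arrays.

import numpy as np

def phase_correlation_offset(f1, f2):
    F1 = np.fft.fft2(f1)
    F2 = np.fft.fft2(f2)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12        # formula (4): normalized cross power spectrum
    impulse = np.fft.ifft2(cross_power).real           # formula (5): ideally a delta at the offset
    dy, dx = np.unravel_index(np.argmax(impulse), impulse.shape)
    # wrap indices beyond half the image size to negative shifts
    if dy > f1.shape[0] // 2:
        dy -= f1.shape[0]
    if dx > f1.shape[1] // 2:
        dx -= f1.shape[1]
    return dx, dy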
C. Container damage inspection: the method mainly detects damage on the two side faces and the top face of the container, covering three kinds of damage: deformation of the container's upper beam, dents on the container surface, and rust on the container surface. For detecting deformation of the upper beam, a detection algorithm combining edge information with nonlinear least squares is provided. To save computing resources, the upper beam of the container is acquired automatically first; the Sobel operator is then used, since its extraction of the upper edge required by the experiments is better than that of the other two edge-extraction operators. To locate the upper edge of the container more conveniently, non-maximum suppression is applied to the edge image to eliminate spurious responses, and a suitable double-threshold algorithm is selected for detection and edge linking to obtain the upper edge of the upper beam, so that edge-point information can be collected conveniently. A straight line is then fitted by the least squares method: the given scatter set is fitted by searching for the optimal line within the linear class y = kx + b. Let the true values in the scatter set be y, with m points in total, and the fitted values be y'; the error is given by formula (6). The optimal line requires finding k and b that minimize the error of formula (7); by computing the error Er between the fitted and true values it is determined whether the upper beam is deformed;
Er=|y-y′| (6)
E = Σ(i=1..m) (y_i - (k*x_i + b))^2    (7)
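A minimal sketch of the least-squares fit of formulas (6)-(7) follows; the deviation tolerance used to declare the beam deformed is an assumption, since the text does not state a numerical threshold.

import numpy as np

def beam_is_deformed(xs, ys, tol=3.0):
    # xs, ys: column / row coordinates of the extracted upper-edge points (NumPy arrays).
    # Fit y = k*x + b by least squares (formula (7)), then check the per-point error |y - y'| (formula (6)).
    A = np.vstack([xs, np.ones_like(xs)]).T
    (k, b), *_ = np.linalg.lstsq(A, ys, rcond=None)
    er = np.abs(ys - (k * xs + b))
    return bool(er.max() > tol), k, b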
For dents on the container surface, combining the characteristics of dent image information, a dent detection method based on transverse gradient information is provided: the dent region is segmented by making the transverse gradient information of the image more prominent and removing the surrounding background. The Sobel operator, which strongly extracts directional edges, is selected to extract the transverse edges of the image containing the dent defect, i.e., gradient detection is performed in the vertical direction. The image is first converted to grayscale, and vertical gradient detection is then performed on the grayscale image with the Sobel operator. Let Fimg(x, y) denote the actual image, Mimg(x, y) the desired target image, and Nimg(x, y) the image containing noise and other extraneous parts such as background; their relationship can be expressed by formula (8):
Fimg(x,y)=Mimg(x,y)+Nimg(x,y) (8)
To segment the defect, Gm is defined as the maximum gray value, and the dent region is distinguished from the other regions by formula (9):
[Formula (9) is shown only as an image in the original publication.]
The target image Iimg is filtered with the improved 5×5 mean filtering template Km; the processing formula is as follows:
[Formula (10) is shown only as an image in the original publication.]
where the improved template Km is expressed as:
[Formula (11) is shown only as an image in the original publication.]
A parameter P is introduced into the original mean filtering template; the image is smoothed by varying the value of P and the result is binarized. Comparison shows that the processing effect is best when P is 10, after which the dent region is obtained by segmenting with formula (9);
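For illustration only, a sketch of the dent-detection pipeline (grayscale, vertical Sobel gradient, smoothing, binarization) is given below; because the improved template Km of formula (11) is shown only as an image in the original, an ordinary 5x5 mean filter stands in for it here, and the binarization threshold is a placeholder.

import cv2

def detect_dents(bgr_image, binar_thresh=40):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    grad_y = cv2.Sobel(gray, cv2.CV_64F, dx=0, dy=1, ksize=3)   # transverse (vertical-direction) gradient
    grad = cv2.convertScaleAbs(grad_y)
    smoothed = cv2.blur(grad, (5, 5))                           # stand-in for the improved 5x5 template Km
    _, mask = cv2.threshold(smoothed, binar_thresh, 255, cv2.THRESH_BINARY)
    return mask                                                 # white pixels mark candidate dent regions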
For rust on the container surface, a container surface rust detection algorithm based on an agemap and a gray-gradient co-occurrence matrix is provided. Assuming the length and width of the image are L and W respectively, the image is divided into h×h slices (20×20 in this method) as source slices, each of size (L/h) × (W/h); the agemap of each source slice is then determined by computing the similarity between the source slice and neighboring slices of the same size located at distances of W/2h and L/2h from it. Luminance and gradient characteristics can represent an image, so the age of a source slice can be expressed by simultaneously computing the luminance and gradient similarities between the source slice and its neighboring slices, as in formula (12):
[Formula (12) is shown only as an image in the original publication.]
where Islice is the intensity map of a slice image and Gslice is its gradient image; the cosine distance between pixels is calculated to express the similarity between them. Let a source slice OT have gradient image GT and luminance image LT, and a neighboring slice ON have gradient image GN and luminance image LN. According to the cosine similarity theorem, the similarity SG between GT and GN is calculated by formula (13):
SG = Σ(GT(i) * GN(i)) / ( sqrt(Σ GT(i)^2) * sqrt(Σ GN(i)^2) )    (13)
Similarly, the similarity SL between LT and LN can be obtained. Considering both the gradient and brightness features of the image, the agemap of OT is calculated as in formula (14):
[Formula (14) is shown only as an image in the original publication.]
where OTG is the gray-level image of OT. The agemaps of all source slices are obtained in turn, yielding the agemap of the whole image, which is then converted to grayscale to obtain G1. Observation of G1 shows that the interior of a rusty region has high, uniformly distributed brightness, while the gradient value at the edge between the rusty part and the normal part is high. Based on these characteristics, the gray-gradient co-occurrence matrix is used to obtain the edges of the rust and the background. Let the maximum gray value of the normalized image be Pf and the maximum gradient value be Pg; the gray-gradient co-occurrence matrix is then a matrix of size Pf × Pg, which is divided into four regions T1~T4, where T1 and T3 belong to the rusty region or the normal region and T2 and T4 belong to the edges between the rusty part and the normal part. The four regions can be divided by a single dividing point (m, n), i.e., the defect region can be found by finding (m, n). Let p(x, y) be the probability that a pixel has gray value x and gradient value y in the gray-gradient co-occurrence matrix; the sum of the pixel probabilities defining the T1 region can be calculated by formula (15):
P_T1(m, n) = Σ(x=0..m) Σ(y=0..n) p(x, y)    (15)
Similarly, the sums of pixel probabilities for the T2, T3 and T4 regions can be obtained. Since only the normal box part, the rusted part, the background edge and the rust edge exist in the image, the quadrant dividing point (m, n) is unique. First, the pixel probabilities of the T2 region are normalized to obtain PT2(x, y); in the same way, the normalized pixel probabilities of the T4 region are obtained. The T2 region has large gradient values and small gray values and is most likely the background edge; from the entropy point of view, the edge entropy of the background region is calculated from PT2(x, y) by formula (16):
Hentropy(edge | background) = -Σ(x=0..m) Σ(y=n+1..Pg) PT2(x, y) * log(PT2(x, y))    (16)
The T4 region has large gradient values and large gray values and is most likely the rust edge; the entropy of the rust edge, Hentropy(edge | rust), is calculated from PT4(x, y) by formula (17):
Hentropy(edge | rust) = -Σ(x=m+1..Pf) Σ(y=n+1..Pg) PT4(x, y) * log(PT4(x, y))    (17)
From formulas (16) and (17), the entropy used to segment the rust in the image is given by formula (18):
Hentropy(m, n) = Hentropy(edge | background) + Hentropy(edge | rust)    (18)
According to the maximum entropy method, the dividing point (m, n) that maximizes Hentropy(m, n) is taken as the result; a pixel whose gray value is greater than m and whose gradient value is greater than n is an edge pixel, and the rust segmentation image is finally obtained.
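For illustration only, a simplified sketch of the maximum-entropy search over a gray-gradient co-occurrence matrix in the spirit of formulas (15)-(18) is given below; the quantization to 32 levels, the use of a Laplacian for the gradient image and the exhaustive search over (m, n) are assumptions, not the exact procedure of the method.

import cv2
import numpy as np

def rust_edge_threshold(gray):
    # gray: single-channel uint8 image (e.g. the grayscale agemap G1).
    grad = cv2.convertScaleAbs(cv2.Laplacian(gray, cv2.CV_64F))
    # Joint histogram p(x, y): x = gray level, y = gradient level, both quantized to 32 bins.
    gq = (gray.astype(np.int32) * 32) // 256
    dq = (grad.astype(np.int32) * 32) // 256
    p = np.zeros((32, 32), dtype=np.float64)
    np.add.at(p, (gq.ravel(), dq.ravel()), 1.0)
    p /= p.sum()

    def region_entropy(block):
        s = block.sum()
        if s <= 0:
            return 0.0
        q = block[block > 0] / s               # normalized probabilities inside the region
        return float(-(q * np.log(q)).sum())

    best, best_mn = -1.0, (0, 0)
    for m in range(1, 31):
        for n in range(1, 31):
            h = region_entropy(p[:m, n:]) + region_entropy(p[m:, n:])   # T2 + T4 entropies
            if h > best:
                best, best_mn = h, (m, n)
    return best_mn                             # pixels with gray > m and gradient > n are edge pixels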
CN202011513861.1A 2020-12-21 2020-12-21 Automatic gate container residual inspection method Pending CN112465706A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011513861.1A CN112465706A (en) 2020-12-21 2020-12-21 Automatic gate container residual inspection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011513861.1A CN112465706A (en) 2020-12-21 2020-12-21 Automatic gate container residual inspection method

Publications (1)

Publication Number Publication Date
CN112465706A true CN112465706A (en) 2021-03-09

Family

ID=74803336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011513861.1A Pending CN112465706A (en) 2020-12-21 2020-12-21 Automatic gate container residual inspection method

Country Status (1)

Country Link
CN (1) CN112465706A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3027428A1 (en) * 2014-10-17 2016-04-22 Msc & Sgcc METHOD, DEVICE AND INSPECTION LINE FOR OPTICALLY READING RELIEFS ON A LATERAL WALL OF A CONTAINER
CN106408938A (en) * 2016-09-13 2017-02-15 天津工业大学 Complete extraction method of various vehicle tracks in urban traffic monitoring at night
CN110650896A (en) * 2017-05-24 2020-01-03 沈玉欢 Container with inner container capable of being spliced into toy, mold and manufacturing method thereof
CN110954546A (en) * 2019-12-20 2020-04-03 上海撬动网络科技有限公司 Container image acquisition and inspection system for non-fixed scene

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHUNMING TANG et al.: "Automatic Damage-Detecting System for Port Container Gate Based on AI", ICCPR 2020: Proceedings of the 2020 9th International Conference on Computing and Pattern Recognition *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705557A (en) * 2021-08-30 2021-11-26 上海西井信息科技有限公司 Method, system, equipment and storage medium for detecting door post of rear door of container
CN114104980A (en) * 2021-10-15 2022-03-01 福建电子口岸股份有限公司 Shore bridge safe operation control method and system based on AI and vision combination
CN114104980B (en) * 2021-10-15 2023-06-02 福建电子口岸股份有限公司 Safe operation control method and system for quay crane based on combination of AI and vision
CN115222697A (en) * 2022-07-18 2022-10-21 北京国泰星云科技有限公司 Container damage detection method based on machine vision and deep learning
CN115953726A (en) * 2023-03-14 2023-04-11 深圳中集智能科技有限公司 Machine vision container surface damage detection method and system
CN115953726B (en) * 2023-03-14 2024-02-27 深圳中集智能科技有限公司 Machine vision container face damage detection method and system
CN116074616A (en) * 2023-04-06 2023-05-05 上海知率智能科技有限公司 Container image acquisition system
CN116091499A (en) * 2023-04-07 2023-05-09 山东中胜涂料有限公司 Abnormal paint production identification system
CN117400066A (en) * 2023-12-15 2024-01-16 西安航飞精密工具有限公司 Numerical control machine tool wear identification method and system
CN117400066B (en) * 2023-12-15 2024-03-08 西安航飞精密工具有限公司 Numerical control machine tool wear identification method and system

Similar Documents

Publication Publication Date Title
CN112465706A (en) Automatic gate container residual inspection method
US20220084186A1 (en) Automated inspection system and associated method for assessing the condition of shipping containers
Bai et al. A fast license plate extraction method on complex background
CN110211101A (en) A kind of rail surface defect rapid detection system and method
CN107145905A (en) The image recognizing and detecting method that elevator fastening nut loosens
CN101739549B (en) Face detection method and system
CN104029680A (en) Lane departure warning system and method based on monocular camera
CN109815856A (en) Status indication method, system and the computer readable storage medium of target vehicle
CN109489724A (en) A kind of tunnel safe train operation environment comprehensive detection device and detection method
CN111382704A (en) Vehicle line-pressing violation judgment method and device based on deep learning and storage medium
CN113516629A (en) Intelligent detection system for TFDS passing operation
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN105809219B (en) A kind of the prefabricated pipe section quality testing statistical system and method for tunnel prefabricated pipe section production line
CN114511519A (en) Train bottom bolt loss detection method based on image processing
CN112881412A (en) Method for detecting non-metal foreign bodies in scrap steel products
CN115588121A (en) Tower crane lifting object type detection method and system based on sensing data and image sequence
CN116337887A (en) Method and system for detecting defects on upper surface of casting cylinder body
CN110688876A (en) Lane line detection method and device based on vision
CN114187583A (en) Rapid identification method for container and flat car loading combined identification
CN114399671A (en) Target identification method and device
CN109978879B (en) Box corner in-groove state detection method based on railway wagon loading video monitoring
CN112150540A (en) Method, device, terminal, storage medium and processor for aligning collection card under field bridge
CN220651291U (en) Security check machine auxiliary device
CN116129374B (en) Multifunctional flaw detection cluster device beside rail and control method thereof
TWI838236B (en) System and method for personnel detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20210309)