CN115953726A - Machine vision container surface damage detection method and system

Info

Publication number: CN115953726A (application CN202310240276.6A; granted as CN115953726B)
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: container, video image, image, current frame, face
Inventors: 刘浩, 吕洁印, 周受钦, 李继春
Assignee: Shenzhen CIMC Intelligent Technology Co Ltd
Legal status: Active (granted)
Abstract

The application discloses a machine vision method and system for detecting surface damage on containers, used to detect containers entering and exiting a container passageway at the passageway itself. The method comprises the following steps: acquiring a plurality of video images of at least one box face of a container; screening a plurality of candidate images of the at least one box face from the video images; screening a target image from the candidate images; and performing damage identification on the target image. When the difference between the video image of the current frame and the video image of the previous frame exceeds a preset difference threshold, the current-frame video image is analyzed to identify the edges of the box face; when the current-frame video image contains at least one complete box face, it is selected as a candidate image. The machine vision container surface damage detection method works directly on containers entering and exiting a container passageway, requires no dedicated detection environment, and adapts well to varied environments.

Description

Machine vision container surface damage detection method and system
Technical Field
The application relates to the technical field of containers, and in particular to a machine vision method for detecting container surface damage and to a machine vision system that automatically detects the container damage condition using the method.
Background
As an important carrier in international trade transportation, a container must remain intact so as not to affect the goods it carries. However, container transportation cycles are long and involve many handling links, and knowing when and where a container was damaged matters greatly to container owners, users, and carriers. Containers therefore often need to be inspected for damage at important ports, storage yards, and similar sites. Because container throughput grows substantially every year and labor costs keep rising, manual inspection at every link and location is impractical, creating a demand for automatic container damage detection. Existing automatic detection equipment has many components, is complex to maintain, imposes high requirements on the installation environment and mounting structures, and cannot flexibly adapt to different application sites. There is therefore a need for a machine vision container surface damage detection method and system that at least partially solves these problems.
Disclosure of Invention
In this summary, concepts in a simplified form are introduced that are further described in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
To at least partially solve the above problems, a first aspect of the present application provides a machine-vision method for detecting a breakage of a container surface of a container, for detecting a container entering and exiting a container passageway at the container passageway, comprising:
acquiring a plurality of video images of at least one face of a container entering and exiting the container passageway;
screening a plurality of candidate images of the at least one face of the container from a plurality of the video images;
screening a target image from the plurality of candidate images;
performing damage recognition on the target image,
wherein said screening a plurality of candidate images of said at least one face of said container from a plurality of said video images comprises:
acquiring the video image of a current frame and the video image of a previous frame;
calculating the difference between the video image of the current frame and the video image of the previous frame;
when the difference is larger than a preset difference threshold value, identifying the video image of the current frame so as to identify the edge of the box surface;
and when the video image of the current frame at least comprises the complete box surface, selecting the video image of the current frame as the candidate image.
According to this machine vision container surface damage detection method, containers entering and exiting the container passageway can be detected directly, no dedicated detection environment is required, and adaptability to the environment is strong, so the range of application scenarios is wider. When a moving container enters the camera frame, the frame content differs markedly from the relatively fixed background, so a large difference between consecutive frames indicates that a container is passing through the passageway. The image used for damage detection contains at least one complete box face, so whether the container is damaged can be judged accurately.
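For orientation, the four-step flow can be summarized as a short pipeline. The following minimal Python sketch is illustrative only and not part of the patented implementation; the helper callables screen_candidates, select_target, and identify_damage are hypothetical placeholders for steps S20, S30, and S40 elaborated below.

```python
# Illustrative sketch of the overall flow (steps S10-S40). The three
# helper callables are passed in; they are hypothetical placeholders
# for the steps elaborated in the detailed description below.
def detect_face_damage(frames, screen_candidates, select_target, identify_damage):
    """frames: the captured video images of one box face (step S10)."""
    candidates = screen_candidates(frames)  # S20: frame difference + edge check
    target = select_target(candidates)      # S30: largest, sharpest candidate
    return identify_damage(target)          # S40: crop, scale, classify
```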
Optionally, screening out the target image from the plurality of candidate images comprises:
step S31, setting the first candidate image as a preliminary image, and then executing step S32;
step S32, screening out the next candidate image, and then executing step S33;
step S33, comparing the size z1 of the box face in the next candidate image with the size z0 of the box face in the preliminary image, executing step S34 when z1 > z0, and executing step S36 when z1 ≤ z0;
step S34, comparing the sharpness d1 of the next candidate image with the sharpness d0 of the preliminary image, executing step S35 when d1 > d0, and executing step S36 when d1 ≤ d0;
step S35, taking the next candidate image as the preliminary image, and then executing step S37;
step S36, keeping the preliminary image unchanged, and then executing step S37;
step S37, executing step S38 if the screening of all the candidate images has been completed, and executing step S32 if it has not;
and step S38, taking the preliminary image as the target image.
By jointly considering the size and sharpness of the candidate images, a large and sharp box face image, i.e., the target image most useful for damage identification, can be screened out.
Optionally, selecting the video image of the current frame as the candidate image when the video image of the current frame at least contains the complete box face comprises:
and when the video image of the current frame contains the complete box surface and the box surface is located in a preset area in the video image of the current frame, selecting the video image of the current frame as the candidate image.
By comparing the video image of the current frame with the video image of the previous frame, a plurality of candidate images of the complete box face are screened out, so the sharpest and most complete box face images in the video can be selected; requiring the box face to lie in the preset area further favors images in which the face sits near the middle of the frame, making damage targets easier for the algorithm to identify.
Optionally, the box face comprises a front end face, a rear end face, a left side face, a right side face, and a top face.
According to the application, damage detection can thus be carried out on all box faces of the container.
Optionally, determining that the box face is located in the preset area in the video image of the current frame comprises:
the video image of the current frame is a video image of the front end face, the rear end face, or the top face, and when y ≥ t0 and H - (y + h) ≥ t1 the box face is judged to be located in the preset area in the video image of the current frame; or
the video image of the current frame is a video image of the left side face or the right side face, and when x ≥ s0 and W - (x + w) ≥ s1 the box face is judged to be located in the preset area in the video image of the current frame,
where H is the total height of the video image of the current frame, W is its total width, (x, y) are the coordinates of the upper-left corner of the box face in the video image of the current frame, taking the upper-left corner of the video image as the origin, h is the height of the box face in the video image of the current frame, w is the width of the box face in the video image of the current frame, and t0, t1, s0, and s1 are preset edge thresholds, all positive numbers.
According to the application, damage identification can be performed on multiple faces of the container, including the front end face, the rear end face, the left side face, the right side face, and the top face. In a candidate image, the box face must not only be complete but should also lie as close to the middle of the frame as possible, which improves the quality of the candidate image.
Optionally, in step S37, whether the screening of all the candidate images has been completed is determined according to the following conditions:
the video image of the current frame is a video image of the front end face, and when H - (y + h) < t1 it is determined that the screening of all the candidate images has been completed; or
the video image of the current frame is a video image of the rear end face, and when y < t0 it is determined that the screening of all the candidate images has been completed; or
the video image of the current frame is a video image of the top face, and when y < t0 or H - (y + h) < t1 it is determined that the screening of all the candidate images has been completed; or
the video image of the current frame is a video image of the left side face, and when x < s0 it is determined that the screening of all the candidate images has been completed; or
the video image of the current frame is a video image of the right side face, and when W - (x + w) < s1 it is determined that the screening of all the candidate images has been completed.
The container surface damage detection method of the present application is applied to a moving container, and whether the container has at least partially moved out of the frame is judged from the regular way the container moves through the captured picture. When the box face moves beyond the preset area, the container is judged to have at least partially left the frame, and the screening of candidate images ends. Images in which the box face lies outside the preset area, i.e., images unfavorable to damage identification, are excluded from screening, so the target image is selected from candidates that better meet the conditions.
Optionally, performing damage identification on the target image comprises:
cropping the target image along the edges of the box face;
scaling the cropped target image to a uniform preset size;
and performing damage identification on the scaled target image.
According to the application, cropping the target image and scaling it to a preset size allows damage identification to be carried out automatically by an algorithm.
A second aspect of the present application provides a machine vision container surface damage detection system for detecting a container entering and exiting a container passageway at the container passageway, comprising:
at least one camera arranged at the container passageway for capturing video images of at least one box face of a container entering and exiting the container passageway;
an intelligent identification module electrically connected to the at least one camera to control the operation of the at least one camera and acquire the video images captured by the at least one camera; and
an interaction module electrically connected to the intelligent identification module for providing a human-computer interaction function,
wherein the intelligent identification module is configured to acquire video images of the container passageway through the camera to obtain video images of at least one box face of the container, to complete the steps of the machine vision container surface damage detection method according to any one of the above technical solutions, and to control the interaction module to display the damage identification result.
According to this machine vision container surface damage detection system, a camera collects video images of the container passageway, the intelligent identification module electrically connected to the camera selects the target image and performs damage identification, and the interaction module electrically connected to the intelligent identification module displays the identification result. Container surface damage detection is thus completed with few pieces of equipment that are simple to maintain; the system can be applied directly to the passageway through which containers pass, needs no dedicated detection environment, imposes low installation requirements, and adapts well to the installation environment, so its application scenarios are broad.
Optionally,
the interaction module is configured to display the video image, and the container surface damage detection system is further configured so that a user can manually mark at least one of the damage type, damage position, and damage quantity in the video image through the interaction module, and the intelligent identification module completes self-learning according to the results of the manual marking;
and/or
the intelligent identification module is further configured to identify the box number and/or box type of the container according to the target image, and/or to control the interaction module to display the target image.
According to the machine vision container surface damage detection system of the application, the interaction module can display video images of the box face, giving friendly human-computer interaction. The intelligent identification module can self-learn from user behavior in the interaction module, continuously improving identification accuracy without changing code. The system can also output the box number and/or box type of the current container, making it easy for a user to find a damaged container quickly, and the interaction module can display the box face image so that the user understands the damage condition of the container more clearly.
Optionally,
the at least one camera includes:
a front camera arranged at the front end of the container passageway for capturing a video image of the front end face,
a rear camera arranged at the rear end of the container passageway for capturing a video image of the rear end face,
a left camera arranged at the left side of the container passageway for capturing a video image of the left side face,
a right camera arranged at the right side of the container passageway for capturing a video image of the right side face, and
a top camera arranged at the top of the container passageway for capturing a video image of the top face;
or
The at least one camera includes:
a front camera arranged at the front end of the container passageway for capturing a video image of the front end face,
a rear camera arranged at the rear end of the container passageway for capturing a video image of the rear end face,
a left camera arranged at the left side of the container passageway for capturing a video image of the left side face, and
a right camera arranged at the right side of the container passageway for capturing a video image of the right side face,
wherein at least one of the front camera and the rear camera is further configured to capture a video image of the top face, and the intelligent identification module is configured to identify the top face from the video images captured by the front camera and/or the rear camera and to select the plurality of candidate images of the top face.
According to the application, by arranging cameras in different directions around the container passageway, video images of different box faces of the container are obtained, so damage identification can be performed on multiple faces of the container. In environments where a top camera cannot be installed, capturing the top face with at least one of the front and rear cameras avoids installing a top camera, reducing the amount of equipment to install and lowering the requirements on the installation environment and mounting structures.
Drawings
The following drawings of the present application are included to provide an understanding of the present application. The drawings illustrate embodiments of the application and, together with the description, serve to explain the principles of the application. In the drawings:
FIG. 1 is a schematic diagram of a machine vision container surface damage detection system according to a first embodiment of the present application;
FIG. 2 is a flow chart of the steps of a machine vision container surface damage detection method according to a preferred embodiment of the present application;
FIG. 3 is a detailed flowchart of step S20 in FIG. 2;
FIG. 4 is a schematic diagram of the coordinates of a box face in a current-frame video image;
FIG. 5 is a detailed flowchart of step S30 in FIG. 2;
FIG. 6 is a detailed flowchart of step S40 in FIG. 2;
FIG. 7 is a schematic diagram of a machine vision container surface damage detection system according to a second embodiment of the present application.
Description of the reference numerals:
10: container channel
20: camera head
21: front camera
22: rear camera
23: left camera
24: right camera
25: top camera
30: intelligent recognition module
40: interaction module
50: container for transporting goods
100/200: machine vision container surface damage detection system
R: predetermined area
DM: direction of motion
Detailed description of the preferred embodiments
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present application. It will be apparent, however, to one skilled in the art, that the present application may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the present application.
In the following description, a detailed description is given in order to provide a thorough understanding of the present application. These embodiments are provided so that this disclosure is thorough and complete and fully conveys the concepts of the exemplary embodiments to those of ordinary skill in the art. It is apparent that implementation of the embodiments of the present application is not limited to the specific details familiar to those skilled in the art. Preferred embodiments of the present application are described in detail below; however, the application may have other embodiments beyond these detailed descriptions.
Ordinal words such as "first" and "second" are referred to in this application as labels only, and do not have any other meaning, such as a particular order, etc. Also, for example, the term "first component" does not itself imply the presence of "second component", and the term "second component" does not itself imply the presence of "first component". The use of the words "first," "second," and "third," do not denote any order, and such words are to be interpreted as names.
It is to be understood that the terms "upper," "lower," "front," "rear," "left," "right," "inner," "outer," and the like are used herein for descriptive purposes and not for purposes of limitation.
The application provides a machine vision container surface damage detection method (hereinafter, the container surface damage detection method or simply the detection method) and a machine vision container surface damage detection system (hereinafter, the container surface damage detection system or simply the detection system) that automatically detects the container damage condition using the detection method; both detect containers entering and exiting a container passageway at the passageway itself. Container passageways are arranged at the entry and exit gates of storage yards, wharves, bonded parks, container factories, and the like, so with the container surface damage detection method and system of the present application, damage detection is completed in passing while containers move through the gates normally; no separate detection procedure needs to be added and no dedicated detection site (environment) needs to be arranged.
In this application, machine vision means using a machine in place of the human eye for measurement and judgment. A machine vision system converts the photographed target into an image signal (for example, a digital image signal) through a machine vision product (for example, an image capture device) and transmits it to a dedicated image processing system; the image processing system performs operations on the image signal to extract features of the target, obtains a discrimination result, and controls the work flow according to that result. The detection method and detection system therefore have artificial intelligence capability and can replace manual work, achieving automatic identification of container surface damage.
Exemplary embodiments according to the present application will now be described in more detail with reference to the accompanying drawings.
As shown in fig. 1, in a first embodiment according to the present application, a container surface damage detection system 100 is used to detect a container 50 entering and exiting a container passageway 10 at the container passageway 10. The container surface damage detection system 100 includes at least one camera 20, a smart identification module 30, and an interaction module 40. At least one camera 20 is arranged at the container passageway 10 for capturing video images of at least one box face of a container 50 entering and exiting the container passageway 10. The smart identification module 30 is electrically connected to the at least one camera 20 to control the operation of the at least one camera 20 and to acquire the video images captured by the at least one camera 20. The interaction module 40 is electrically connected to the smart identification module 30 and provides a human-computer interaction function.
Preferably, the deck of the container 50 includes a front face, a rear face, a left side face, a right side face, and a top face.
In the first embodiment, as shown in fig. 1, the at least one camera 20 includes a front camera 21, a rear camera 22, a left camera 23, a right camera 24, and a top camera 25. Wherein, the front camera 21 is arranged at the front end of the container passage 10 and is used for shooting the video image of the front end surface of the container 50; the rear camera 22 is arranged at the rear end of the container passage 10 and is used for shooting a video image of the rear end face of the container 50; the left camera 23 is arranged at the left side of the container passage 10 and is used for shooting a video image of the left side surface of the container 50; the right camera 24 is arranged at the right side of the container passage 10 and is used for shooting a video image of the right side surface of the container 50; a top camera 25 is provided at the top of the container aisle 10 for capturing video images of the top surface of the container 50.
Here, the front, rear, left, and right of the container passage 10 are determined according to the moving direction DM of the container 50. The direction coinciding with the direction DM of movement of the container 50 is forward; the opposite direction to the direction of movement DM of the container 50 is rear; facing the moving direction DM, the left side is left and the right side is right.
In a specific implementation, the front camera 21, the rear camera 22, the left camera 23, the right camera 24, and the top camera 25 are located respectively at the front upper part, the rear upper part, the left upper part, the right upper part, and directly above the container passageway 10. The cameras 20 may be mounted on a gantry, on several light-pole-like posts, or in any other manner achieving the same effect. The spacing and mounting heights of the cameras 20 can be set according to the specific scene; for example, the distance between the front camera 21 and the rear camera 22 is 10-15 meters, the distance between the left camera 23 and the right camera 24 is 5.6-6.5 meters, the mounting height of the front camera 21 and the rear camera 22 is 5-6 meters, the mounting height of the left camera 23 and the right camera 24 is 4-5 meters, and the mounting height of the top camera 25 is 6-7 meters.
Preferably, the front camera 21, the rear camera 22, and the top camera 25 are located at the middle of the width of the container passageway 10, so that the box face is captured as close to the middle of the frame as possible.
According to the container surface damage detection system of the present application, camera installation is simple and demands little of the installation environment, so the system can adapt to different application scenarios such as storage yards, wharves, railway freight stations, and bonded parks.
As described above, the container surface damage detection system 100 according to the present application performs the container surface damage detection method according to the present application. In a preferred embodiment, the container surface damage detection method according to the present application comprises the following steps S10-S40, as shown in fig. 2:
s10, acquiring a plurality of video images of at least one container surface of a container 50 entering and exiting the container passage 10;
s20, screening a plurality of candidate images of at least one box surface of the container 50 from the plurality of video images;
s30, screening a target image from the candidate images;
and S40, identifying damage of the target image.
In particular, the smart identification module 30 of the detection system 100 is configured to perform the detection method according to the present application. In step S10, the smart identification module 30 acquires video images of the container passage 10 through the camera 20 to obtain a plurality of video images of at least one container surface of the container 50; in step S20, the smart identification module 30 screens a plurality of candidate images of at least one surface of the container 50 from the plurality of video images; then screening a target image from the plurality of candidate images in S30; in step S40, the target image is subjected to damage recognition. As can be appreciated, eventually, the smart identification module 30 controls the interactive module 40 to display the result of the damage recognition.
In other words, the detection system 100 is configured to perform damage detection on at least one of the faces of the container 50. Preferably, the detection system 100 is configured to perform damage detection on all five faces of the container 50 mentioned above.
In a specific application scenario, the smart identification module 30 may be disposed in a kiosk beside or near the container aisle 10, the interactive module 40 may be disposed in a monitoring room, and the staff member knows the breakage detection condition through the interactive module 40.
In the present application, the camera 20 and the smart identification module 30 may be connected by wire or wireless communication (e.g., 4G or 5G network connection). The interaction module 40 and the smart identification module 30 may be connected by wire or wireless communication.
Specifically, step S20 includes steps S21-S24 as shown in FIG. 3.
S21, the video image of the current frame and the video image of the previous frame are acquired, and then step S22 is performed.
Wherein, the video image of the current frame and the video image of the previous frame are obtained from the video image of the container passage 10 collected in the foregoing step S10 by the smart identification module 30 electrically connected to the camera 20.
Illustratively, the smart identification module 30 may simultaneously acquire videos of a plurality of cameras 20 based on RTSP protocol multithreading.
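As a minimal illustrative sketch (not the patent's code), such multithreaded RTSP acquisition could be done with OpenCV as follows; the stream URLs and the one-thread-per-camera, latest-frame-only design are assumptions for illustration.

```python
import threading
import cv2  # OpenCV

def capture(rtsp_url, frame_store, key):
    """Continuously read frames from one camera into a shared dict."""
    cap = cv2.VideoCapture(rtsp_url)  # open the RTSP stream
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame_store[key] = frame  # keep only the latest frame per camera

# Hypothetical camera URLs for illustration only.
urls = {
    "front": "rtsp://192.168.1.21/stream",
    "rear":  "rtsp://192.168.1.22/stream",
    "left":  "rtsp://192.168.1.23/stream",
    "right": "rtsp://192.168.1.24/stream",
    "top":   "rtsp://192.168.1.25/stream",
}
frames = {}
for name, url in urls.items():
    threading.Thread(target=capture, args=(url, frames, name), daemon=True).start()
```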
S22, calculating a difference dp between the video image of the current frame and the video image of the previous frame, and then performing step S23.
And S23, when the difference dp is greater than the preset difference threshold dt, identifying the video image of the current frame to identify the edge of the box surface, and then executing the step S24.
Specifically, as the container 50 enters the passageway 10 and moves through it, the faces of the container 50 gradually enter the frame of the camera 20 and finally move out of it. When the container 50 has not entered the shooting area of the camera 20, the content captured by the camera 20 is essentially a fixed environmental scene. When the container 50 enters the shooting area, the captured content differs significantly from that scene. Thus, by calculating the difference between the video image of the current frame and the video image of the previous frame, it can be determined that the container 50 has entered the passageway 10, and identification of the current-frame video image can begin in order to find the edges of the box face and its position in the image. In addition, the name of the box face can be inferred from the installation position of the camera 20; for example, the video image collected by the front camera 21 installed at the front upper part of the passageway 10 shows the front end face of the container 50.
Illustratively, the hash difference between the video image of the current frame and the video image of the previous frame may be calculated by a difference hash algorithm. If the difference is less than or equal to the preset difference threshold dt, it is determined that no container has entered the shooting area (i.e., no container is passing through the passageway 10); no recognition is performed, and the next frame of image is acquired and its difference dp calculated. When the difference dp is greater than the preset difference threshold dt, it is determined that a container is passing through the passageway 10, and a target recognition algorithm, such as YOLOv4, YOLOv5, YOLOv6, YOLOv7, PP-YOLO, or YOLO-E, may be used to recognize the video image of the current frame and identify the edges of the box face.
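A difference hash compares each downscaled pixel with its right neighbor and measures frame change as the Hamming distance between hashes. Below is a sketch under assumed parameters (an 8×8 hash and BGR input as OpenCV delivers); the concrete hash size and threshold dt are illustrative choices, not values from the patent.

```python
import cv2
import numpy as np

def dhash(image, hash_size=8):
    """Difference hash: compare each pixel to its right neighbor."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (hash_size + 1, hash_size))  # 9-wide, 8-tall grid
    diff = small[:, 1:] > small[:, :-1]                   # 8x8 booleans
    return diff.flatten()

def frame_difference(curr, prev):
    """dp: Hamming distance between the two frames' dHashes."""
    return int(np.count_nonzero(dhash(curr) != dhash(prev)))

# A container is judged to be passing when dp exceeds the preset
# difference threshold dt (an empirically chosen value, e.g. dt = 10).
```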
And S24, when the video image of the current frame at least comprises a complete box surface, selecting the video image of the current frame as a candidate image.
Specifically, in step S23, the smart identification module 30 may identify the edge of the box surface, and thus may determine whether the video image of the current frame includes a complete box surface (4 corners and 4 edges of the box surface are identified in step S23).
Preferably, when the video image of the current frame includes a complete box surface and the box surface is located in a preset area in the video image of the current frame, the video image of the current frame is selected as the candidate image. Thereby, a plurality of candidate images of the box face can be obtained.
As shown in fig. 4, whether the box face is located in the preset region R in the video image of the current frame is determined as follows: if the current-frame video image V is a video image of the front end face, the rear end face, or the top face, the box face is judged to be located in the preset region R in the video image V when y ≥ t0 and H - (y + h) ≥ t1; or, if the current-frame video image V is a video image of the left side face or the right side face, the box face is judged to be located in the preset region R in the video image V when x ≥ s0 and W - (x + w) ≥ s1. Here H is the total height (for example, in pixels) of the current-frame video image V and W is its total width. It can be understood that H and W are determined by the parameters of the camera 20 and are fixed once the camera model is chosen. (x, y) are the coordinates of the upper-left corner A of the box face in the current-frame video image V, taking the upper-left corner of the video image as the origin; h is the height of the box face in the video image V, and w is the width of the box face in the video image V. The values of x, y, w, and h are known from step S23. t0, t1, s0, and s1 are preset edge thresholds, all positive numbers.
In other words, preferably, in a candidate image the upper edge of the box face is at least t0 from the upper edge of the video image V, the lower edge of the box face is at least t1 from the lower edge of the video image V, the left edge of the box face is at least s0 from the left edge of the video image V, and the right edge of the box face is at least s1 from the right edge of the video image V. Equivalently, the preset region R is the region enclosed by four lines in the video image V: two vertical lines at abscissas s0 and W - s1, and two horizontal lines at ordinates t0 and H - t1.
It is understood that, in general, the front end face, the rear end face, and the top face of the container 50 appear as a quadrangle having a large height and a small width in the camera 20 and move in the vertical direction (up-down direction) in the screen, and therefore, it is preferable to consider only the positions of the upper and lower edges of the box face for the video images of the front end face, the rear end face, and the top face. The left and right sides of the container 50 appear as a quadrangle with a small height and a large width in the camera 20 and move in the lateral direction (left-right direction) in the screen, and therefore, it is preferable to consider only the positions of the left and right edges of the box surface for the video images of the left and right sides.
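Under these conventions, the preset-region test reduces to a few margin comparisons. The following is a sketch, assuming the face's bounding box (x, y, w, h) comes from the recognition in step S23; the face_kind labels are assumed names for illustration.

```python
def face_in_preset_region(face_kind, x, y, w, h, W, H, t0, t1, s0, s1):
    """Return True if the box face lies inside the preset region R.

    (x, y): top-left corner of the box face, origin at the image's
    top-left; W, H: image width/height; t0, t1, s0, s1: preset edge
    thresholds (positive numbers).
    """
    if face_kind in ("front", "rear", "top"):
        # end faces and the top face move vertically: check top/bottom margins
        return y >= t0 and H - (y + h) >= t1
    else:  # "left" or "right" side faces move horizontally
        return x >= s0 and W - (x + w) >= s1
```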
Through step S20, the smart identification module 30 screens out a plurality of candidate images at least including the complete box surface, and then the smart identification module 30 selects a target image for damage identification from the plurality of candidate images through step S30. It is understood that the smart identification module 30 screens out a plurality of candidate images for each picture taken by the camera 20 (i.e. for each box surface), and then performs step S30 for each candidate image of each box surface. Specifically, step S30 includes steps S31-S38 as shown in FIG. 5.
S31, set the first candidate image as a preliminary image, and then execute step S32.
The first candidate image is the first video image selected in step S20 and containing the complete box surface, and preferably, the box surface is located in the preset region R in the video image.
S32, the next candidate image is selected, and then step S33 is performed.
It is understood that, through the foregoing step S20, the smart identification module 30 may screen out a next candidate image from the video image captured by the camera 20.
S33, comparing the size z1 of the box face in the next candidate image with the size z0 of the box face in the preliminary image, executing step S34 when z1 > z0, and executing step S36 when z1 ≤ z0.
For example, when comparing the size z1 of the box face in the next candidate image with the size z0 of the box face in the preliminary image: if the candidate images are video images of the front end face, the rear end face, or the top face, the determination can be made by comparing the height h1 of the box face in the next candidate image with the height h0 of the box face in the preliminary image; if they are video images of the left side face or the right side face, the determination can be made by comparing the width w1 of the box face in the next candidate image with the width w0 of the box face in the preliminary image.
S34, comparing the sharpness d1 of the next candidate image with the sharpness d0 of the preliminary image, executing step S35 when d1 > d0, and executing step S36 when d1 ≤ d0.
The smart identification module 30 calculates the sharpness, for example, according to at least one of the following functions: the Tenengrad gradient function, the Laplacian gradient function, the Vollath function, and the EAV point sharpness algorithm.
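For illustration, the first two of these sharpness measures might be computed with OpenCV as follows; reducing each response map to a single mean or variance score is an assumed convention, not a detail specified in the patent.

```python
import cv2
import numpy as np

def tenengrad(image):
    """Tenengrad sharpness: mean squared Sobel gradient magnitude."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(gx ** 2 + gy ** 2))

def laplacian_sharpness(image):
    """Laplacian gradient sharpness: variance of the Laplacian response."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())
```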
S35, the next candidate image is taken as a preliminary image, and then step S37 is performed.
Specifically, according to the foregoing steps S33 and S34, when the size z1 of the box face in the next candidate image is larger than the size z0 of the box face in the preliminary image and the sharpness d1 of the next candidate image is greater than the sharpness d0 of the preliminary image, the next candidate image (i.e., the candidate image in which the box face is larger and sharper) is taken as the preliminary image.
S36, the preliminary image is not replaced, and then step S37 is executed.
Specifically, according to the foregoing steps S33 and S34, the preliminary image is not replaced when the size z1 of the box face in the next candidate image is less than or equal to the size z0 of the box face in the preliminary image, or when z1 is larger than z0 but the sharpness d1 of the next candidate image is less than or equal to the sharpness d0 of the preliminary image.
S37, if the screening of all the candidate images is completed, step S38 is executed, and if the screening of all the candidate images is not completed, step S32 is executed.
And S38, taking the preliminary image as a target image.
That is, once all candidate images have been compared, the target image is determined: a large, sharp box face image selected to facilitate damage identification of the box face. If not all candidate images have been compared, the comparison continues.
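The loop of steps S31-S38 then amounts to keeping the candidate that beats the current preliminary image on both size and sharpness. Below is a sketch, with face_size and sharpness as assumed callables (e.g., box face height or width for size, and one of the functions above for sharpness); it is an illustration of the selection logic, not code from the patent.

```python
def select_target(candidates, face_size, sharpness):
    """Steps S31-S38: retain the largest, then sharpest, candidate image."""
    prelim = candidates[0]                    # S31: first candidate
    z0, d0 = face_size(prelim), sharpness(prelim)
    for nxt in candidates[1:]:                # S32/S37: next candidate
        z1, d1 = face_size(nxt), sharpness(nxt)
        if z1 > z0 and d1 > d0:               # S33 and S34 both satisfied
            prelim, z0, d0 = nxt, z1, d1      # S35: replace the preliminary image
        # otherwise S36: keep the preliminary image unchanged
    return prelim                             # S38: the target image
```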
It is understood that steps S20 and S30 may be performed by an online analysis method or an offline analysis method. In the online analysis method, the smart identification module 30 obtains the images captured by the camera 20 in real time and performs the difference calculation, and the smart identification module 30 performs the step S30 once each candidate image is identified until the smart identification module 30 cannot identify the candidate image in the step S20 (the container 50 moves out of the capturing area of the camera 20). In the off-line analysis method, the smart identification module 30 first screens out all candidate images through step S20, and then screens out a target image from all candidate images through step S30.
As the container 50 continues to move through the passageway 10, the screening of all candidate images is considered complete when, for example, step S24 identifies that the box face is incomplete (e.g., a box face edge coincides with an edge of the video image). Preferably, when the box face in the video image moves beyond the preset region R, it is determined that the screening of all the candidate images has been completed.
Specifically, referring to fig. 4, if the current-frame video image V is a video image of the front end face, the screening of all candidate images is judged complete when H - (y + h) < t1. For the front end face, the camera 20 shoots from the front upper side of the container 50, so the container 50 enters the frame from its upper edge and leaves from its lower edge. When H - (y + h) < t1, the lower edge of the box face has moved out of the preset region R and the container 50 is about to disappear from the shooting area.
If the current-frame video image V is a video image of the rear end face, the screening of all candidate images is judged complete when y < t0. For the rear end face, the camera 20 shoots from the rear upper side of the container 50, so the container 50 enters the frame from its lower edge and leaves from its upper edge. When y < t0, the upper edge of the box face has moved out of the preset region R and the container 50 is about to disappear from the shooting area.
If the current-frame video image is a video image of the top face, the screening of all candidate images is judged complete when y < t0 or H - (y + h) < t1. For the top face, the camera 20 shoots from above the container 50. When the shooting direction points along the moving direction DM of the container 50, the container enters the frame from the lower edge and leaves from the upper edge; y < t0 then indicates that the upper edge of the box face has moved out of the preset region R and the container 50 is about to disappear from the shooting area. When the shooting direction faces against the moving direction DM, the container enters from the upper edge and leaves from the lower edge; H - (y + h) < t1 then indicates that the lower edge of the box face has moved out of the preset region R and the container 50 is about to disappear from the shooting area.
If the current-frame video image is a video image of the left side face, the screening of all candidate images is judged complete when x < s0. For the left side face, the camera 20 shoots from the upper left of the container 50, so the container enters the frame from its right edge and leaves from its left edge. When x < s0, the left edge of the box face has moved out of the preset region R and the container 50 is about to disappear from the shooting area.
If the current-frame video image is a video image of the right side face, the screening of all candidate images is judged complete when W - (x + w) < s1. For the right side face, the camera 20 shoots from the upper right of the container 50, so the container enters the frame from its left edge and leaves from its right edge. When W - (x + w) < s1, the right edge of the box face has moved out of the preset region R and the container 50 is about to disappear from the shooting area.
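These five exit conditions can be gathered into a single check, mirroring face_in_preset_region above. The following is a sketch under the same assumed coordinate conventions and face_kind labels, shown for illustration only.

```python
def screening_finished(face_kind, x, y, w, h, W, H, t0, t1, s0, s1):
    """True once the box face has moved beyond the preset region R."""
    if face_kind == "front":
        return H - (y + h) < t1            # lower edge leaving the frame
    if face_kind == "rear":
        return y < t0                      # upper edge leaving the frame
    if face_kind == "top":
        return y < t0 or H - (y + h) < t1  # covers both travel directions
    if face_kind == "left":
        return x < s0                      # left edge leaving the frame
    return W - (x + w) < s1                # "right": right edge leaving
```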
In step S30, the intelligent recognition module 30 selects the best video image for damage identification: one with a complete box face that is large in the frame, centered, and sharp. In step S40, damage identification is performed on this optimal video image (the target image), including identifying at least one of the damage type, damage position, and damage quantity. Damage types include, for example, one or more of: no damage, rust corrosion, scratches, deformation, holes, collapse, and cracks.
Illustratively, step S40 includes steps S41-S43 as shown in FIG. 6:
S41, cropping the target image along the edges of the box face;
S42, scaling the cropped target image to a uniform preset size;
and S43, performing damage identification on the scaled target image.
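A sketch of steps S41 and S42 with OpenCV follows; the 512×512 preset size is an assumed example (the patent does not fix one), and identify_damage stands in for whatever recognizer step S43 uses.

```python
import cv2

PRESET_SIZE = (512, 512)  # assumed preset size; the patent does not fix one

def prepare_target(image, x, y, w, h):
    """S41: crop along the box face edges; S42: scale to the preset size."""
    face = image[y:y + h, x:x + w]        # crop using the identified edges
    return cv2.resize(face, PRESET_SIZE)  # uniform scaling for the recognizer

# S43 would then run damage identification on the prepared image, e.g.:
# result = identify_damage(prepare_target(img, x, y, w, h))
```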
Finally, the container surface damage detection system 100 displays the damage identification result through the interaction module 40. For example, the interaction module 40 displays, for each box face, the target image, the damage type, the damage position, and the damage quantity.
Preferably, the smart identification module 30 may control the interaction module 40 to display the video image of each box face.
Preferably, the container surface damage detection system 100 may perform damage identification in step S40 using a self-learning algorithm (e.g., an artificial neural network model); the user may manually mark at least one of the damage type, position, and quantity in the video image (target image) through the interaction module 40, and the intelligent recognition module 30 self-learns from the results of the manual marking.
Preferably, the smart recognition module 30 is further configured to recognize a box number and/or a box type of the container 50 according to the target image, and control the interaction module 40 to display the box number and/or the box type, so that a worker can quickly find the broken container 50.
Preferably, the camera 20 is a high-definition camera, for example with resolution above 1080P or more than 2 megapixels. Preferably, the camera 20 has a supplemental light function and thus supports full-color night vision. Preferably, the camera 20 supports both PoE power supply and mains power supply. Preferably, the camera 20 supports both wireless and wired data transmission. The camera 20 is therefore usable in night scenes and offers multiple power-supply and data-transmission options, giving strong adaptability to the installation environment and wide application scenarios.
In summary, the container surface damage detection system 100 according to the first embodiment of the present application requires little equipment, places low demands on the installation environment, and suits a wide range of application scenarios. At the same time, by jointly considering the size and sharpness of the box face in the images, it screens out a large, sharp box face image within the preset region, i.e., the target image most useful for damage identification, improving identification accuracy.
As shown in fig. 7, in a second embodiment according to the present application, the container surface damage detection system 200 includes at least one camera 20, a smart identification module 30, and an interaction module 40. At least one camera 20 is arranged at the container passageway 10 for capturing video images of at least one box face of a container 50 entering and exiting the container passageway 10. The smart identification module 30 is electrically connected to the at least one camera 20 to control the operation of the at least one camera 20 and to acquire the video images captured by the at least one camera 20. The interaction module 40 is electrically connected to the smart identification module 30. The smart identification module 30 is configured to perform the operations of steps S10 to S40 described above.
Unlike the first embodiment, in the second embodiment, the at least one camera 20 includes a front camera 21, a rear camera 22, a left camera 23, and a right camera 24. The front camera 21 is arranged at the front end of the container channel 10 and used for shooting a video image of the front end face; the rear camera 22 is arranged at the rear end of the container passage 10 and is used for shooting a video image of the rear end face; the left camera 23 is arranged on the left side of the container channel 10 and used for shooting a video image of the left side; a right camera 24 is arranged on the right side of the container aisle 10 for capturing video images of the right side.
Wherein at least one of the front camera 21 and the rear camera 22 is also used to take a video image of the top surface of the container 50. That is, the front camera 21 is used for taking video images of the front end face and the top face, and the rear camera 22 is used for taking video images of the rear end face; or the front camera 21 is used for shooting the video image of the front end face, and the rear camera 22 is used for shooting the video images of the rear end face and the top face; alternatively, the front camera 21 is used to take video images of the front end face and the top face, and the rear camera 22 is used to take video images of the rear end face and the top face.
The smart identification module 30 is configured to identify the top face of the container 50 from the video images captured by the front camera 21 and/or the rear camera 22 and to select a plurality of candidate images of the top face in step S20, and to screen out the target image from all the top face candidate images taken by the front camera 21 and/or the rear camera 22 in step S30.
According to the second embodiment of the application, in environments unsuitable for installing a top camera, the top face is captured by at least one of the front and rear cameras, avoiding a top camera altogether; this reduces the number of devices to install and lowers the requirements on the installation environment and mounting structures.
Portions not described in the second embodiment refer to the description in the first embodiment.
According to the container surface damage detection method of the present application, containers entering and exiting the container passageway can be detected directly, without a dedicated detection environment and with strong environmental adaptability, so application scenarios are broad. According to the container surface damage detection system of the present application, few pieces of equipment are needed, installation is simple, environmental adaptability is strong, and damage detection completes automatically while the container moves normally, adding no extra procedure and improving working efficiency.
The flows and steps described in all the preferred embodiments described above are only examples. Unless an adverse effect occurs, various processing operations may be performed in a different order from the order of the above-described flow. The sequence of the steps of the above-mentioned flows can also be added, combined or deleted according to actual needs.
In understanding the scope of the present application, the term "comprising" and its derivatives, as used herein, are intended to be open-ended terms that specify the presence of the stated features and/or steps but do not exclude the presence of other, unstated features and/or steps. The same applies to words with similar meanings, such as "including", "having", and their derivatives.
Unless defined otherwise, technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. Features described herein in one embodiment may be applied to another embodiment, either alone or in combination with other features, unless the feature is otherwise inapplicable or otherwise stated in the other embodiment.
The present application has been described in terms of the above-described embodiments, but it should be understood that the above-described embodiments are for purposes of illustration and description only and are not intended to limit the present application to the scope of the described embodiments. Furthermore, it will be understood by those skilled in the art that the present application is not limited to the above embodiments, and that many variations and modifications may be made in accordance with the teachings of the present application, all of which fall within the scope of the present application as claimed.

Claims (10)

1. A machine vision container face damage detection method for detecting, at a container passageway, a container passing into and out of the container passageway, comprising:
acquiring a plurality of video images of at least one box face of a container entering and exiting the container passageway;
screening a plurality of candidate images of the at least one face of the container from a plurality of the video images;
screening a target image from the plurality of candidate images;
performing damage recognition on the target image,
wherein said screening a plurality of candidate images of said at least one face of said container from a plurality of said video images comprises:
acquiring the video image of a current frame and the video image of a previous frame;
calculating the difference between the video image of the current frame and the video image of the previous frame;
when the difference is larger than a preset difference threshold value, identifying the video image of the current frame so as to identify the edge of the box surface;
and when the video image of the current frame at least comprises the complete box surface, selecting the video image of the current frame as the candidate image.
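For illustration only (not forming part of the claims), the candidate-screening logic above can be sketched as follows. This is a minimal example assuming OpenCV, using the mean absolute gray-level difference between consecutive frames as the difference measure and a crude Canny-contour bounding box as a stand-in for the container-face edge recognizer; the claims do not prescribe these particular operators, and `detect_face_bbox` and the threshold values are assumptions.

```python
import cv2
import numpy as np

DIFF_THRESHOLD = 12.0  # assumed preset difference threshold (mean gray-level units)

def frame_difference(prev_frame, curr_frame):
    """Mean absolute gray-level difference between consecutive frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    return float(np.mean(cv2.absdiff(prev_gray, curr_gray)))

def detect_face_bbox(frame, min_area_ratio=0.2):
    """Stand-in face-edge recognizer: returns (x, y, w, h) of the container
    face when a complete face appears in the frame, else None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    H, W = gray.shape
    if w * h < min_area_ratio * W * H:  # too small: not a complete face
        return None
    return (x, y, w, h)

def screen_candidates(frames):
    """Keep frames that changed enough and contain a complete container face."""
    candidates, prev = [], None
    for curr in frames:
        if prev is not None and frame_difference(prev, curr) > DIFF_THRESHOLD:
            bbox = detect_face_bbox(curr)   # identify container-face edges
            if bbox is not None:            # complete face present in frame
                candidates.append((curr, bbox))
        prev = curr
    return candidates
```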
2. The machine-vision container face damage detection method of claim 1, wherein said screening a target image from the plurality of candidate images comprises:
step S31, setting the first candidate image as a preliminary image, and then performing step S32;
step S32, taking the next candidate image, and then performing step S33;
step S33, comparing the size z1 of the container face in the next candidate image with the size z0 of the container face in the preliminary image, performing step S34 when z1 > z0, and performing step S36 when z1 ≤ z0;
step S34, comparing the definition d1 of the next candidate image with the definition d0 of the preliminary image, performing step S35 when d1 > d0, and performing step S36 when d1 ≤ d0;
step S35, taking the next candidate image as the preliminary image, and then performing step S37;
step S36, keeping the preliminary image unchanged, and then performing step S37;
step S37, performing step S38 if the screening of all the candidate images has been completed, and performing step S32 if it has not; and
step S38, taking the preliminary image as the target image.
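For illustration only, steps S31 to S38 can be condensed into the loop below. The claim does not specify how the face size or the definition (sharpness) is measured; this sketch assumes face size is the bounding-box area and uses the variance of the Laplacian as a common sharpness proxy, both of which are assumptions.

```python
import cv2

def definition(image):
    """Sharpness proxy: variance of the Laplacian (assumed measure)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def select_target(candidates):
    """Steps S31-S38: a candidate replaces the preliminary image only when
    its face is both larger (S33) and sharper (S34) than the current one."""
    if not candidates:
        raise ValueError("no candidate images")
    prelim_img, prelim_bbox = candidates[0]        # S31
    for img, bbox in candidates[1:]:               # S32 / S37 loop
        z0 = prelim_bbox[2] * prelim_bbox[3]       # face area in preliminary image
        z1 = bbox[2] * bbox[3]                     # face area in next candidate
        if z1 > z0 and definition(img) > definition(prelim_img):
            prelim_img, prelim_bbox = img, bbox    # S35: replace
        # otherwise S36: keep the preliminary image
    return prelim_img, prelim_bbox                 # S38: the target image
```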
3. The machine-vision container face damage detection method of claim 2, wherein said selecting the video image of the current frame as a candidate image when the video image of the current frame contains at least the complete container face comprises:
selecting the video image of the current frame as a candidate image when the video image of the current frame contains the complete container face and the container face is located in a preset area in the video image of the current frame.
4. The machine-vision container face damage detection method of claim 3, wherein the container faces include a front end face, a rear end face, a left side face, a right side face, and a top face.
5. The machine-vision container face damage detection method of claim 4, wherein the container face being located in a preset area in the video image of the current frame comprises:
when the video image of the current frame is a video image of the front end face, the rear end face, or the top face, judging that the container face is located in the preset area when y ≥ t0 and [H − (y + h)] ≥ t1; or
when the video image of the current frame is a video image of the left side face or the right side face, judging that the container face is located in the preset area when x ≥ s0 and [W − (x + w)] ≥ s1,
where H is the total height of the video image of the current frame, W is the total width of the video image of the current frame, (x, y) are the coordinates of the upper-left corner of the container face in the video image of the current frame, taking the upper-left corner of the video image as the origin, h is the height of the container face in the video image of the current frame, w is the width of the container face in the video image of the current frame, and t0, t1, s0 and s1 are preset edge thresholds, all positive numbers.
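A compact illustration of the preset-area test of claim 5 follows; the symbol names mirror the claim, while the default threshold values are purely illustrative assumptions (the claim only requires them to be positive).

```python
def face_in_preset_area(face, x, y, w, h, W, H, t0=20, t1=20, s0=20, s1=20):
    """Claim 5 predicate: the container face keeps a margin from the image
    border. H, W: image height/width; x, y, w, h: face bounding box."""
    if face in ("front", "rear", "top"):
        return y >= t0 and (H - (y + h)) >= t1   # top and bottom margins
    if face in ("left", "right"):
        return x >= s0 and (W - (x + w)) >= s1   # left and right margins
    raise ValueError(f"unknown face: {face}")
```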
6. The machine-vision container face damage detection method of claim 5, wherein in step S37 it is determined whether the screening of all the candidate images has been completed according to the following conditions:
when the video image of the current frame is a video image of the front end face, the screening of all the candidate images is judged complete when [H − (y + h)] < t1; or
when the video image of the current frame is a video image of the rear end face, the screening of all the candidate images is judged complete when y < t0; or
when the video image of the current frame is a video image of the top face, the screening of all the candidate images is judged complete when y < t0 or [H − (y + h)] < t1; or
when the video image of the current frame is a video image of the left side face, the screening of all the candidate images is judged complete when x < s0; or
when the video image of the current frame is a video image of the right side face, the screening of all the candidate images is judged complete when [W − (x + w)] < s1.
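For illustration only, the stop conditions of claim 6 can be expressed as the complementary predicate below, again with the claim's symbols and assumed thresholds. Intuitively, screening stops once the face begins to slide out of view on the side it exits, so no better-placed frame can follow.

```python
def screening_complete(face, x, y, w, h, W, H, t0=20, t1=20, s0=20, s1=20):
    """Claim 6 predicate: true once the face starts leaving the image
    in its direction of travel (threshold defaults are assumptions)."""
    if face == "front":
        return (H - (y + h)) < t1
    if face == "rear":
        return y < t0
    if face == "top":
        return y < t0 or (H - (y + h)) < t1
    if face == "left":
        return x < s0
    if face == "right":
        return (W - (x + w)) < s1
    raise ValueError(f"unknown face: {face}")
```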
7. The machine-vision container face damage detection method of any one of claims 1 to 6, wherein said performing damage recognition on the target image comprises:
cutting the target image along the edges of the container face;
scaling the cut target image uniformly to a preset size; and
performing damage recognition on the scaled target image.
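A sketch of the preprocessing in claim 7, assuming OpenCV; the preset size and the `damage_model` interface (any callable detector, e.g. a trained CNN) are assumptions, not prescribed by the claim.

```python
import cv2

PRESET_SIZE = (640, 640)  # assumed; the claim fixes no particular size

def recognize_damage(target_image, bbox, damage_model):
    """Claim 7: cut along the face edges, scale uniformly, then recognize."""
    x, y, w, h = bbox
    face = target_image[y:y + h, x:x + w]   # cut along the container-face edges
    face = cv2.resize(face, PRESET_SIZE)    # scale uniformly to the preset size
    return damage_model(face)               # e.g. [(damage_type, box, score), ...]
```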
8. A machine-vision container face damage detection system for detecting, at a container passageway, containers entering and exiting the container passageway, comprising:
at least one camera arranged at the container passageway and used for shooting video images of at least one container face of a container entering and exiting the container passageway;
an intelligent recognition module electrically connected to the at least one camera so as to control the at least one camera and acquire the video images it shoots; and
an interaction module electrically connected to the intelligent recognition module to provide human-computer interaction,
wherein the intelligent recognition module is configured to acquire video images of the container passageway through the at least one camera so as to obtain video images of at least one container face of the container, to perform the steps of the machine-vision container face damage detection method of any one of claims 1 to 7, and to control the interaction module to display the damage recognition result.
9. The machine-vision container face damage detection system of claim 8, wherein
the interaction module is configured to display the video images, and the container face damage detection system is further configured so that a user can manually mark at least one of damage type, damage position, and damage quantity in the video images through the interaction module, and the intelligent recognition module completes self-learning according to the results of the manual marking;
and/or
the intelligent recognition module is further configured to recognize the container number and/or container type of the container from the target image, and/or to control the interaction module to display the target image.
10. The machine-vision container face damage detection system of claim 8 or 9, wherein
the at least one camera comprises:
a front camera arranged at the front end of the container passageway for shooting video images of the front end face,
a rear camera arranged at the rear end of the container passageway for shooting video images of the rear end face,
a left camera arranged at the left side of the container passageway for shooting video images of the left side face,
a right camera arranged at the right side of the container passageway for shooting video images of the right side face, and
a top camera arranged at the top of the container passageway for shooting video images of the top face;
or
the at least one camera comprises:
a front camera arranged at the front end of the container passageway for shooting video images of the front end face,
a rear camera arranged at the rear end of the container passageway for shooting video images of the rear end face,
a left camera arranged at the left side of the container passageway for shooting video images of the left side face, and
a right camera arranged at the right side of the container passageway for shooting video images of the right side face,
wherein at least one of the front camera and the rear camera is further configured to shoot video images of the top face, and the intelligent recognition module is configured to identify the top face from the video images shot by the front camera and/or the rear camera and to select the plurality of candidate images of the top face therefrom.
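To make the two camera layouts of claim 10 concrete, the configuration sketch below enumerates them; the class and field names are illustrative assumptions, not part of the claimed system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CameraConfig:
    position: str          # where the camera is mounted along the passageway
    faces_covered: tuple   # container faces recognized from its stream

# First alternative: five cameras, one per container face.
FIVE_CAMERA_LAYOUT = (
    CameraConfig("front", ("front",)),
    CameraConfig("rear", ("rear",)),
    CameraConfig("left", ("left",)),
    CameraConfig("right", ("right",)),
    CameraConfig("top", ("top",)),
)

# Second alternative: four cameras; the top face is identified from the
# front and/or rear camera streams by the intelligent recognition module.
FOUR_CAMERA_LAYOUT = (
    CameraConfig("front", ("front", "top")),
    CameraConfig("rear", ("rear", "top")),
    CameraConfig("left", ("left",)),
    CameraConfig("right", ("right",)),
)
```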
CN202310240276.6A 2023-03-14 2023-03-14 Machine vision container face damage detection method and system Active CN115953726B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310240276.6A CN115953726B (en) 2023-03-14 2023-03-14 Machine vision container face damage detection method and system


Publications (2)

Publication Number Publication Date
CN115953726A true CN115953726A (en) 2023-04-11
CN115953726B CN115953726B (en) 2024-02-27

Family

ID=87287944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310240276.6A Active CN115953726B (en) 2023-03-14 2023-03-14 Machine vision container face damage detection method and system

Country Status (1)

Country Link
CN (1) CN115953726B (en)

Cited By (1)

Publication number Priority date Publication date Assignee Title
US11866250B2 (en) 2019-03-04 2024-01-09 Goodpack Ibc (Singapore) Pte Ltd Cargo unit

Citations (17)

Publication number Priority date Publication date Assignee Title
US20120033852A1 (en) * 2010-08-06 2012-02-09 Kennedy Michael B System and method to find the precise location of objects of interest in digital images
CN103235938A (en) * 2013-05-03 2013-08-07 北京国铁华晨通信信息技术有限公司 Method and system for detecting and identifying license plate
CN106529406A (en) * 2016-09-30 2017-03-22 广州华多网络科技有限公司 Method and device for acquiring video abstract image
US20180305123A1 (en) * 2017-04-18 2018-10-25 Alert Innovation Inc. Picking workstation with mobile robots & machine vision verification of each transfers performed by human operators
CN112465706A (en) * 2020-12-21 2021-03-09 天津工业大学 Automatic gate container residual inspection method
CN113038018A (en) * 2019-10-30 2021-06-25 支付宝(杭州)信息技术有限公司 Method and device for assisting user in shooting vehicle video
CN113240641A (en) * 2021-05-13 2021-08-10 大连海事大学 Deep learning-based container damage real-time detection method
CN113971811A (en) * 2021-11-16 2022-01-25 北京国泰星云科技有限公司 Intelligent container feature identification method based on machine vision and deep learning
CN113979367A (en) * 2021-10-12 2022-01-28 深圳中集智能科技有限公司 Automatic identification system and method for container position
US20220084186A1 (en) * 2018-12-21 2022-03-17 Canscan Softwares And Technologies Inc. Automated inspection system and associated method for assessing the condition of shipping containers
CN114723689A (en) * 2022-03-25 2022-07-08 盛视科技股份有限公司 Container body damage detection method
WO2022199441A1 (en) * 2021-03-23 2022-09-29 影石创新科技股份有限公司 360-degree video playback method and apparatus, computer device, and storage medium
CN115171197A (en) * 2022-09-01 2022-10-11 广州市森锐科技股份有限公司 High-precision image information identification method, system, equipment and storage medium
CN115222697A (en) * 2022-07-18 2022-10-21 北京国泰星云科技有限公司 Container damage detection method based on machine vision and deep learning
CN115410105A (en) * 2021-05-11 2022-11-29 科大讯飞股份有限公司 Container mark identification method, device, computer equipment and storage medium
CN115457454A (en) * 2022-08-16 2022-12-09 浙江大华技术股份有限公司 Target detection method, apparatus and storage medium
CN115620275A (en) * 2022-12-20 2023-01-17 深圳中集智能科技有限公司 Intelligent container lifting method and system based on visual target recognition


Also Published As

Publication number Publication date
CN115953726B (en) 2024-02-27

Similar Documents

Publication Publication Date Title
US10515471B2 (en) Apparatus and method for generating best-view image centered on object of interest in multiple camera images
CN106226157B (en) Concrete structure member crevices automatic detection device and method
CN113706495B (en) Machine vision detection system for automatically detecting lithium battery parameters on conveyor belt
CN111626139A (en) Accurate detection method for fault information of IT equipment in machine room
CN102393397A (en) System and method for detecting surface defects of magnetic shoe
CN101937614A (en) Plug and play comprehensive traffic detection system
CN115953726A (en) Machine vision container surface damage detection method and system
CN110008771B (en) Code scanning system and code scanning method
CN112580600A (en) Dust concentration detection method and device, computer equipment and storage medium
CN112149513A (en) Industrial manufacturing site safety helmet wearing identification system and method based on deep learning
CN101924923A (en) Embedded intelligent automatic zooming snapping system and method thereof
CN109583317A (en) A kind of container tallying system based on ground identification
CN106228541A (en) The method and device of screen location in vision-based detection
CN111307812A (en) Welding spot appearance detection method based on machine vision
CN110389130A (en) Intelligent checking system applied to fabric
CN112634269A (en) Rail vehicle body detection method
CN109002045A (en) A kind of the inspection localization method and inspection positioning system of intelligent inspection robot
CN201726494U (en) Device and system which utilize image color information to conduct image comparison
CN101561316B (en) On-line test visual data processing system based on region of interest (ROI)
JP2017030380A (en) Train detection system and train detection method
CN110084171A (en) A kind of subway roof detection device for foreign matter and detection method
CN116091506B (en) Machine vision defect quality inspection method based on YOLOV5
CN112260402A (en) Method for monitoring state of intelligent substation inspection robot based on video monitoring
CN109932160B (en) AOI and gray scale meter detection system and method
CN101943575B (en) Test method and test system for mobile platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant