CN113409282A - Deformation detection method and device for box-type structure, electronic equipment and storage medium - Google Patents

Deformation detection method and device for box-type structure, electronic equipment and storage medium

Info

Publication number
CN113409282A
CN113409282A
Authority
CN
China
Prior art keywords
determining
depth
depth image
deformation
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110713034.5A
Other languages
Chinese (zh)
Inventor
蒋哲兴
郭卉
龚星
郭双双
谢骏
曾锴
王谦
刘庆
郑双智
蔡俩志
洪亮
廖树根
侯嘉悦
郝红
杨刚刚
戚恩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Foreign Transport Co ltd
Tencent Cloud Computing Beijing Co Ltd
Original Assignee
China Foreign Transport Co ltd
Tencent Cloud Computing Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Foreign Transport Co ltd, Tencent Cloud Computing Beijing Co Ltd filed Critical China Foreign Transport Co ltd
Priority to CN202110713034.5A priority Critical patent/CN113409282A/en
Publication of CN113409282A publication Critical patent/CN113409282A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/16 - Measuring arrangements characterised by the use of optical techniques for measuring the deformation in a solid, e.g. optical strain gauge
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/50 - Depth or shape recovery
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/60 - Analysis of geometric attributes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20092 - Interactive image processing based on input by user
    • G06T 2207/20104 - Interactive definition of region of interest [ROI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a deformation detection method and device for a box-type structure, electronic equipment and a computer-readable storage medium. The method comprises the following steps: acquiring a depth image of a side surface of the box-type structure; identifying, from the depth image, regions respectively corresponding to a plurality of components included in the side surface based on depth information of the depth image; determining a reference plane corresponding to at least one of the components and determining the distance between the pixel points included in each of the regions and the reference plane; and determining a deformation region in each of the regions based on the distance between the pixel points included in each of the regions and the reference plane. Through this application, the regions of the box-type structure where deformation occurs can be detected efficiently and accurately.

Description

Deformation detection method and device for box-type structure, electronic equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a method and an apparatus for detecting deformation of a box structure, an electronic device, and a computer-readable storage medium.
Background
Artificial Intelligence (AI) is a comprehensive technique in computer science that studies the design principles and implementation methods of various intelligent machines so as to endow machines with the capabilities of perception, reasoning and decision-making. Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, such as natural language processing and machine learning/deep learning; as the technology develops, it will be applied in ever more fields and play an increasingly important role.
The related art lacks an effective artificial-intelligence-based scheme for detecting deformation of box-type structures. Deformation is mainly found by human visual observation of the box-type structure, after which a measuring instrument is used to manually measure and determine the specific degree of deformation of the observed region, resulting in low efficiency and low accuracy of deformation detection for box-type structures.
Disclosure of Invention
The embodiment of the application provides a deformation detection method and device for a box-type structure, electronic equipment and a computer-readable storage medium, which can efficiently and accurately detect the deformation area of the box-type structure.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a deformation detection method for a box-type structure, which comprises the following steps:
acquiring a depth image of the side face of the box-type structure;
identifying regions respectively corresponding to a plurality of parts included in the side face from the depth image based on depth information of the depth image;
determining a reference plane corresponding to at least one of the components and determining a distance between a pixel point included in each of the regions and the reference plane;
determining a deformation region in each of the regions based on a distance between a pixel point included in each of the regions and the reference plane.
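For illustration only, the following is a minimal Python sketch of this overall flow, assuming the depth image and the per-component region masks are numpy arrays; the segmentation itself is described below, and all names and thresholds here are illustrative rather than part of the claims.

```python
import numpy as np

def detect_deformation(depth: np.ndarray,
                       regions: dict,
                       distance_threshold: float = 10.0) -> dict:
    """Return a boolean deformation mask for each component region.

    depth   -- H x W array of per-pixel depth values
    regions -- component name -> H x W boolean mask from segmentation
    """
    # Reference plane: mean depth of the largest component (assumed flattest).
    target = max(regions, key=lambda name: regions[name].sum())
    reference_depth = float(depth[regions[target]].mean())
    deformation = {}
    for name, mask in regions.items():
        # Distance of every pixel to the reference plane; pixels of this
        # region farther than the threshold are flagged as deformed.
        distance = np.abs(depth - reference_depth)
        deformation[name] = mask & (distance > distance_threshold)
    return deformation
```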
In the foregoing aspect, when there is a specific component whose strength is greater than a strength threshold among the plurality of components, the determining a reference plane corresponding to at least one of the components includes: determining a fifth depth mean value of the pixel points included in the region corresponding to the specific component; and determining a specific reference plane corresponding to the specific component based on the fifth depth mean value. The determining a deformation region in each of the regions based on the distance between the pixel points included in each of the regions and the reference plane includes: determining the distance between the pixel points included in the specific component and the specific reference plane; and determining a deformation region in the region corresponding to the specific component based on the distance between the pixel points included in the specific component and the specific reference plane.
In the above scheme, the acquiring a two-dimensional image of a side surface of the box-type structure includes: acquiring three-dimensional point cloud data for the box-type structure; extracting a two-dimensional image of the side face from the three-dimensional point cloud data; the obtaining of the depth image of the side of the box structure comprises: and extracting the depth image of the side face from the three-dimensional point cloud data.
The embodiment of the application provides a deformation detection device of box structure, includes:
the acquisition module is used for acquiring a depth image of the side face of the box-type structure;
the identifying module is used for identifying areas corresponding to a plurality of parts included in the side face from the depth image based on the depth information of the depth image;
a determining module for determining a reference plane corresponding to at least one of the components and determining a distance between a pixel point included in each of the regions and the reference plane;
the determining module is further configured to determine a deformation region in each of the regions based on a distance between a pixel point included in each of the regions and the reference plane.
In the foregoing solution, the determining module is further configured to traverse the depth image in a sliding manner through windows of a preset size, and determine a jump amplitude of a depth value between different pixel points included in each window; determining the region where the window corresponding to the jump amplitude larger than the amplitude threshold value is located as a noise point in the depth image; the apparatus also includes a deletion module to delete the noise from the depth image.
In the foregoing solution, the identifying module is further configured to identify, from the depth image, regions corresponding to the plurality of components included in the side surface, based on the depth information of the depth image and a gradient difference between different pixel points included in the depth image.
In the foregoing solution, the determining module is further configured to determine the pixel points in the depth image whose depth value is smaller than the depth threshold and whose gradient difference with adjacent pixel points is greater than the gradient difference threshold; and determine the region in the depth image that is composed of such pixel points and whose corresponding spatial characteristic is a corner as the region corresponding to a corner fitting included in the side surface.
In the above solution, the determining module is further configured to determine, based on an area corresponding to a corner fitting included in the side surface, a first boundary of a reinforcing plate included in the side surface, which coincides with the corner fitting; traversing and determining gradients of a plurality of subsequent pixel points by taking at least one pixel point included in the first boundary as a starting point; determining a second boundary of the reinforcing plate based on the pixel points corresponding to the positions with the gradient difference larger than the gradient difference threshold value; and determining a region consisting of the first boundary and the second boundary in the depth image as a region corresponding to a reinforcing plate included in the side surface.
In the foregoing aspect, the determining module is further configured to determine, based on the region corresponding to a reinforcing plate included in the side surface, the left and right boundaries of a side beam included in the side surface; traverse and determine the gradients of a plurality of subsequent pixel points by taking at least one pixel point included in the boundary of the depth image as a starting point; determine the upper and lower boundaries of the side beam based on the pixel points corresponding to positions where the gradient difference is greater than the gradient difference threshold; and determine the region composed of the left and right boundaries of the side beam and the upper and lower boundaries of the side beam in the depth image as the region corresponding to the side beam included in the side surface.
In the above scheme, the determining module is further configured to determine, based on a region corresponding to a corner fitting included in the side surface, an upper boundary and a lower boundary of a cross beam included in the side surface; determining an interval consisting of a minimum abscissa and a maximum abscissa corresponding to the corner fitting, and traversing and determining the gradient of each pixel point in the interval; determining left and right boundaries of the beam based on pixel points corresponding to positions with gradient differences larger than a gradient difference threshold; and determining a region formed by the upper and lower boundaries of the beam and the left and right boundaries of the beam in the depth image as a region corresponding to the beam included in the side surface.
In the above scheme, the determining module is further configured to determine, based on the region corresponding to a cross beam included in the side surface, the boundary of an extension plate included in the side surface that coincides with the cross beam; determine the upper and lower boundaries of the extension plate based on the region corresponding to the reinforcing plate included in the side surface; traverse and determine the gradients of a plurality of subsequent pixel points by taking at least one pixel point included in the boundary coinciding with the cross beam as a starting point; determine the remaining boundary of the extension plate based on the pixel points corresponding to positions where the gradient difference is greater than the gradient difference threshold; determine the region in the depth image composed of the boundary coinciding with the cross beam, the upper and lower boundaries of the extension plate and the remaining boundary as the region corresponding to the extension plate included in the side surface; and determine the region of the side surface other than the corner fittings, the reinforcing plates, the cross beams, the side beams and the extension plates as the region corresponding to the side panel included in the side surface.
In the above scheme, the apparatus further includes a selecting module, configured to select a target component with a largest size from the plurality of components; the determining module is further configured to determine a depth mean of pixel points included in a region corresponding to the target component, and determine a reference plane corresponding to the target component based on the depth mean.
In the above scheme, the determining module is further configured to determine the tongue (convex) region and the groove (concave) region included in the side surface; determine a first depth mean value of the pixel points included in the tongue region, and delete the pixel points in the tongue region whose distance from the first depth mean value is greater than a depth mean threshold; determine a second depth mean value of the remaining pixel points in the tongue region, and determine a first reference plane corresponding to the tongue region based on the second depth mean value; determine a third depth mean value of the pixel points included in the groove region, and delete the pixel points in the groove region whose distance from the third depth mean value is greater than the depth mean threshold; and determine a fourth depth mean value of the remaining pixel points in the groove region, and determine a second reference plane corresponding to the groove region based on the fourth depth mean value.
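For illustration, a minimal sketch of this two-pass mean estimate, assuming the pixel depths of one region (for example, the tongue area or the groove area) are given as a flat numpy array; the threshold value is illustrative.

```python
import numpy as np

def reference_plane_depth(region_depths: np.ndarray,
                          mean_threshold: float = 15.0) -> float:
    # First pass: mean depth over all pixels of the region.
    first_mean = region_depths.mean()
    # Delete pixels too far from the first mean (likely deformed or noisy).
    kept = region_depths[np.abs(region_depths - first_mean) <= mean_threshold]
    # Second pass: the mean of the remaining pixels defines the reference plane.
    return float(kept.mean())
```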
In the foregoing solution, the determining module is further configured to, for each of the regions, perform the following processing: determining pixel points corresponding to the distance greater than the distance threshold in the region; determining an area formed by the pixel points corresponding to the distance greater than the distance threshold value as a deformation area in the area; and the confidence coefficient of the deformation region is determined based on the ratio of the number of the noise points included in the deformation region to the total number of the pixel points included in the deformation region.
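A sketch of this per-region test, assuming the distance map, region mask and noise mask are numpy arrays; the claim only states that the confidence is based on the noise ratio, so the decreasing form used below is an assumption.

```python
import numpy as np

def deformation_with_confidence(distance: np.ndarray,
                                region_mask: np.ndarray,
                                noise_mask: np.ndarray,
                                distance_threshold: float = 10.0):
    # Pixels of the region farther from the reference plane than the threshold.
    deform_mask = region_mask & (distance > distance_threshold)
    total = int(deform_mask.sum())
    noisy = int((deform_mask & noise_mask).sum())
    # Assumed mapping: the larger the share of noise points, the lower the confidence.
    confidence = 1.0 - noisy / total if total else 0.0
    return deform_mask, confidence
```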
In the foregoing solution, when there is a specific component whose strength is greater than a strength threshold among the plurality of components, the determining module is further configured to determine a fifth depth mean value of the pixel points included in the region corresponding to the specific component; determine a specific reference plane corresponding to the specific component based on the fifth depth mean value; determine the distance between the pixel points included in the specific component and the specific reference plane; and determine a deformation region in the region corresponding to the specific component based on the distance between the pixel points included in the specific component and the specific reference plane.
In the above scheme, the obtaining module is further configured to obtain a two-dimensional image of a side surface of the box-type structure; the device further comprises an annotation module, which is used for marking areas corresponding to the parts respectively included in the side surface and deformation areas in each area in the two-dimensional image based on the corresponding relation between the depth image and the two-dimensional image.
In the above scheme, the obtaining module is further configured to obtain three-dimensional point cloud data for the box-type structure; the device also comprises an extraction module used for extracting the two-dimensional image of the side face from the three-dimensional point cloud data; and the depth image of the side face is extracted from the three-dimensional point cloud data.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the deformation detection method of the box-type structure provided by the embodiment of the application when the executable instructions stored in the memory are executed.
The embodiment of the application provides a computer-readable storage medium, which stores executable instructions and is used for causing a processor to execute the executable instructions so as to realize the deformation detection method of the box-type structure provided by the embodiment of the application.
The embodiment of the present application provides a computer program product, where the computer program product includes computer executable instructions, and is used for implementing the deformation detection method for a box-type structure provided in the embodiment of the present application when being executed by a processor.
The embodiment of the application has the following beneficial effects:
by acquiring the depth image of the side surface of the box-type structure, the regions corresponding to the components included in the side surface can be identified from the depth image based on the depth information of the depth image; then at least one component can be selected from the plurality of components and the reference plane corresponding to the at least one component determined; and the deformation region in each region can then be determined based on the distance between the pixel points included in each region and the reference plane. In this way, the regions of the box-type structure where deformation occurs can be detected efficiently and accurately.
Drawings
FIG. 1 is a schematic diagram of a deformation detection system 100 of a box structure according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a server 200 provided in an embodiment of the present application;
FIG. 3 is a schematic flow chart of a deformation detection method for a box-type structure according to an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of a deformation detection method for a box-type structure according to an embodiment of the present disclosure;
FIG. 5 is a schematic flowchart of a deformation detection method for a box-type structure according to an embodiment of the present disclosure;
FIG. 6A is a schematic view of a 2D line-scan image of the top surface of a container according to an embodiment of the present application;
FIG. 6B is a depth image of a top surface of a container provided by an embodiment of the present application;
fig. 6C is a schematic diagram illustrating a deformation detection result of the top surface of the container according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of a deformation detection method for a box-type structure according to an embodiment of the present disclosure;
fig. 8A is a schematic view of a 2D line-scan image of the top surface of a container to be inspected according to an embodiment of the present application;
fig. 8B is a depth image of the top surface of the container to be detected according to the embodiment of the present application;
FIG. 9 is a graph illustrating the segmentation result of the top surface of the container according to the embodiment of the present application;
FIG. 10 is a schematic flow chart of a deformation detection method for a top surface of a container provided in the related art;
fig. 11 is a schematic flow chart of a method for detecting deformation of a top surface of a container according to an embodiment of the present application;
fig. 12A is a schematic view of a 2D line-scan image of the top surface of a container to be inspected according to an embodiment of the present application;
fig. 12B is a depth image of the top surface of the container to be detected according to the embodiment of the present application;
fig. 12C is a schematic diagram illustrating a deformation detection result of the top surface of the container according to an embodiment of the present application;
fig. 13A is a schematic view of a 2D line-scan image of the top surface of a container to be inspected according to an embodiment of the present application;
fig. 13B is a depth image of the top surface of the container to be inspected according to the embodiment of the present application;
fig. 13C is a schematic diagram of a deformation detection result of the top surface of the container according to the embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the attached drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first", "second", and the like are only used to distinguish similar objects and do not denote a particular order or importance. Where permissible, the specific order or sequence may be interchanged, so that the embodiments of the present application described herein can be practiced in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before the embodiments of the present application are described in further detail, the terms and expressions referred to in the embodiments are explained as follows.
1) Depth image (depth image), also known as range image, refers to an image having as pixel values the depth (distance) from an image collector (e.g., a camera) to each point in a scene, which directly reflects the geometry of the visible surface of the scene. For example, the gray value of each pixel in the depth image can be used to characterize the distance of a point in the scene from the camera. The depth image can be calculated into point cloud data through coordinate conversion, and the point cloud data with regular and necessary information can also be inversely calculated into depth image data.
2) Point cloud data, also called three-dimensional point cloud data, refers to a set of vectors in a three-dimensional coordinate system. The vectors are usually represented in the form of X, Y, Z three-dimensional coordinates and are mainly used to represent the shape of the external surface of an object; in addition to the geometric position information represented by (X, Y, Z), point cloud data can also record the RGB color, gray value, depth, segmentation result, and the like of each point.
3) Line-scan imaging means that when an object is imaged, each scan captures a single line image on the image plane; the final two-dimensional image is formed by stitching the successive line images together along the direction of the object's motion, hence the short name line-scan imaging.
4) 2D image, also called a planar image, has only an X axis and a Y axis; in this application, 2D image and RGB image have the same meaning.
5) 3D image, an image with a stereoscopic effect, having an X axis, a Y axis, and a Z axis.
6) Box-type structure, a three-dimensional structure with an accommodation space, such as a container, a packing box, or a modular house. Taking the top surface of a container as an example, it usually includes corner fittings (located at the four corners of the top surface), reinforcing plates (located beside the corner fittings at the four corners), top cross beams (located at the left and right ends of the top surface), top side beams (located at the upper and lower edges of the top surface), a top plate (located in the middle of the top surface), and the like.
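Definition 1) above notes that a depth image can be converted into point cloud data through coordinate conversion and vice versa. As an illustration of the forward conversion, the following is a minimal sketch assuming a pinhole camera model with hypothetical intrinsics fx, fy, cx, cy; the embodiments do not prescribe any particular camera model.

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project an H x W depth image into an N x 3 array of (X, Y, Z) points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(float)
    x = (u - cx) * z / fx  # standard pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```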
Taking a box-type structure as an example of a container, in order to detect the deformation degree of the surface of the container, the related art provides the following schemes:
scheme A: acquiring an external image and/or an internal image of the container, wherein the external image is used for detecting the damage of the outside of the container, and the internal image is used for detecting the damage of the inside of the container; the acquired images are stored or transmitted so as to obtain a box checking result through subsequent calculation and flexibly acquire box images in a non-fixed scene.
Scheme B: the intelligent testing system for the container in the fixed scene comprises a container truck position acquisition unit, an image acquisition and processing unit, a terminal processing unit, a data transmission unit and a human-computer interaction unit. The system is arranged on a lane through which a collection truck passes, is applied to container entrances such as ports, customs, container yards and storage yards in fixed scenes, and is used for automatically acquiring images of containers on the collection truck and intelligently inspecting the containers.
Scheme C: the container is observed by human eyes of a detector, and a measuring instrument is used for manually measuring and determining the specific deformation degree of the observed deformation area.
However, the applicant has found in the course of implementing the present application that although scheme A and scheme B can detect the degree of deformation of the container top surface, both technical schemes are complicated and use only 2D information. Scheme A uses only a 2D camera and cannot obtain specific deformation values, so it is only suitable for a rough analysis of container damage; a container yard, however, needs accurate deformation values for each region of the container, which cannot be achieved with a 2D camera alone. Although the image acquisition device of scheme B is a high-definition camera, it likewise cannot obtain specific values of regional deformation. Moreover, if conditions such as turning exist in the container, the collected information will not conform to statistical laws, in which case the entire system fails and deformation detection becomes impossible. Scheme C depends to a great extent on the proficiency of the inspector, its efficiency is too low, the detection time is long, and usually only severely deformed regions receive attention.
In view of the above technical problems, embodiments of the present application provide a deformation detection method and apparatus for a box-type structure, an electronic device, and a computer-readable storage medium, which can efficiently and accurately detect an area where a box-type structure deforms. An exemplary application of the electronic device provided in the embodiment of the present application is described below, and the electronic device provided in the embodiment of the present application may be implemented as a terminal, may also be implemented as a server, or may be implemented by cooperation of a terminal and a server. The following description will be given taking an example in which a terminal and a server cooperatively implement the deformation detection method for a box structure provided in the embodiments of the present application.
Referring to fig. 1, fig. 1 is a schematic diagram of the architecture of a deformation detection system 100 for a box-type structure provided in an embodiment of the present application. A terminal 400 is connected to a server 200 through a network 300, and the network 300 may be a wide area network or a local area network, or a combination of the two.
The terminal 400 (running a client 410, for example a deformation detection client for the box-type structure) may be used to obtain a deformation detection request for the box-type structure; for example, after the inspector inputs a depth image of the side surface of the box-type structure into the client 410, the terminal 400 automatically obtains the deformation detection request. The terminal 400 then transmits the deformation detection request to the server 200 through the network 300. After receiving the request, the server 200 extracts the depth image of the side surface of the box-type structure carried in it; identifies, from the depth image and based on its depth information, the regions respectively corresponding to a plurality of components included in the side surface (for example, when the box-type structure is a container, the components may be corner fittings, reinforcing plates, extension plates, beams, and the like); then selects at least one component from the plurality of components, determines a reference plane corresponding to the at least one component, and determines the distance between the pixel points included in each region and the reference plane; and finally determines a deformation region (i.e., a detection result) in each region based on that distance. After determining the deformation region in each region, the server 200 may return the detection result carrying the deformation regions to the terminal 400 through the network 300, so that the terminal 400 can label them in a two-dimensional image corresponding to the depth image (for example, when the inspector inputs the depth image of the side surface into the client 410, a two-dimensional image of the side surface may also be input, where the depth image and the two-dimensional image are completely aligned and the pixel coordinates of each region are consistent in both images), and display the two-dimensional image labeled with the detection result in the human-computer interaction interface of the client 410.
In some embodiments, the deformation detection method for the box-type structure provided by the embodiments of the present application may also be implemented by the terminal alone.
For example, taking the terminal 400 shown in fig. 1 as an example, a deformation detection plug-in for the box-type structure may be embedded in the client 410 run by the terminal 400, so as to implement the deformation detection method for the box-type structure locally at the client 410. For example, after acquiring a deformation detection request for the box-type structure, the terminal 400 calls the deformation detection plug-in to implement the deformation detection method: it extracts the depth image of the side surface of the box-type structure carried in the deformation detection request; based on the depth information of the depth image, identifies the regions corresponding to the plurality of components included in the side surface from the depth image; then selects at least one component from the plurality of components and determines a reference plane corresponding to the at least one component; and finally, after determining the reference plane, determines the distance between the pixel points included in each region and the reference plane and, based on the distance, determines the deformation region in each region. In this way, the degree of deformation of the box-type structure can be detected automatically, improving both detection efficiency and detection accuracy.
In other embodiments, after obtaining the deformation detection request for the box-type structure, the terminal 400 may instead invoke a deformation detection interface for the box-type structure provided by the server 200 (for example, the interface may be provided in the form of a cloud service, that is, a deformation detection service for the box-type structure). After receiving the deformation detection request, the server 200 first extracts the depth image of the side surface of the box-type structure to be detected from the request; then, based on the depth information of the depth image, identifies the regions corresponding to the plurality of components included in the side surface from the depth image; then selects at least one component from the plurality of components and determines a reference plane corresponding to the at least one component; and finally, after determining the reference plane, determines the distance between the pixel points included in each region and the reference plane and, based on the distance, determines the deformation region in each region. In this way, the degree of deformation of the box-type structure can be detected automatically; for example, when the detected degree of deformation of a box-type structure exceeds a deformation threshold (i.e., the structure can no longer be used), a reminder message can be sent to the inspection personnel to prompt a replacement.
In some embodiments, the deformation detection method for the box-type structure provided by the embodiment of the present application may be implemented by combining a cloud technology.
For example, the server 200 shown in fig. 1 may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present application is not limited thereto.
The following describes the configuration of the server 200 shown in fig. 1. Referring to fig. 2, fig. 2 is a schematic structural diagram of a server 200 according to an embodiment of the present application, where the server 200 shown in fig. 2 includes: at least one processor 210, memory 240, at least one network interface 220. The various components in server 200 are coupled together by a bus system 230. It is understood that the bus system 230 is used to enable connected communication between these components. The bus system 230 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 230 in fig. 2.
The Processor 210 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The memory 240 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 240 optionally includes one or more storage devices physically located remote from processor 210.
The memory 240 can be either volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 240 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 240 is capable of storing data, examples of which include programs, modules, and data structures, or subsets or supersets thereof, to support various operations, as exemplified below.
An operating system 241, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
a network communication module 242 for communicating with other computing devices via one or more (wired or wireless) network interfaces 220, exemplary network interfaces 220 including: Bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
in some embodiments, the deformation detection device for a box structure provided in the embodiments of the present application may be implemented in software, and fig. 2 illustrates the deformation detection device 243 for a box structure stored in the memory 240, which may be software in the form of programs and plug-ins, and includes the following software modules: the retrieving module 2431, the identifying module 2432, the determining module 2433, the deleting module 2434, the selecting module 2435, the labeling module 2436, and the extracting module 2437, which are logical and thus can be arbitrarily combined or further separated according to the functions implemented. It should be noted that, in fig. 2, for the sake of convenience of expression, all the modules described above are illustrated at once, but it should not be considered that the implementation of the deformation detection device 243 in a box-type structure excludes the module that can comprise only the acquisition module 2431, the identification module 2432 and the determination module 2433, the functions of which will be explained below.
In other embodiments, the deformation detection Device of the box structure provided in the embodiments of the present Application may be implemented in hardware, and for example, the Device provided in the embodiments of the present Application may be a processor in the form of a hardware decoding processor, which is programmed to execute the deformation detection method of the box structure provided in the embodiments of the present Application, for example, the processor in the form of the hardware decoding processor may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
As described above, the deformation detection method for a box-type structure provided in the embodiments of the present application may be implemented by various types of electronic devices, for example, may be implemented by a terminal or a server alone, or may be implemented by a terminal and a server in cooperation. The following description will be given taking as an example a method for detecting deformation of a box structure, which is provided by the embodiment of the present application, cooperatively implemented by the terminal 400 and the server 200 shown in fig. 1. Referring to fig. 3, fig. 3 is a schematic flowchart of a deformation detection method for a box-type structure according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 3.
In step S101, a depth image of the side of the box structure is acquired.
In some embodiments, the box structure may include containers, boxes, modular houses, etc., and the sides of the box structure may include the top, side, floor, etc. of the box structure. For example, when the deformation degree of the top surface of the box-type structure needs to be detected, a depth image of the top surface of the box-type structure can be obtained; when the deformation degree of the side wall surface of the box-type structure needs to be detected, the depth image of the side wall surface of the box-type structure can be obtained.
For example, taking the depth image of the top surface of the container as an example, acquisition methods can be divided into two types: passive ranging sensing and active ranging sensing. The most common passive ranging method is binocular stereo vision: two images of the same scene are obtained simultaneously by two cameras separated by a certain distance (that is, the same container to be detected is photographed simultaneously by the two cameras to obtain two images), corresponding pixel points in the two images are found by a stereo matching algorithm, parallax information is calculated according to the triangulation principle, and the parallax information is converted into depth information representing the objects in the scene (that is, a depth image of the top surface of the container is obtained, which includes the depth information of the top surface of the container to be detected). Based on the stereo matching algorithm, the depth image of a scene can also be obtained by shooting a group of images of the same scene from different angles (for example, the top surface of the container to be detected can be photographed from different angles, and the depth image of the top surface obtained from the group of images). In addition, the depth information of the top surface of the container can be indirectly estimated by analyzing features of the two-dimensional image of the top surface of the container to be detected, such as its photometric and shading characteristics.
As an example, following on from the above, compared with passive ranging sensing, active ranging sensing is characterized in that the image acquisition device itself needs to emit energy to complete the collection of depth information, which also ensures that the acquisition of the depth image is independent of the acquisition of the two-dimensional image. Active ranging sensing may include structured light (light having a specific pattern, such as dots, lines or planes), laser scanning, and Time of Flight (TOF). The depth image acquisition principle based on structured light is: structured light is projected onto the scene (for example, the container to be detected) and the corresponding pattern is captured by an image sensor; since the structured-light pattern is deformed by the shape of the container, the depth information of each point in the scene (i.e., of each point on the top surface of the container) can be calculated using the triangulation principle from the position and degree of deformation of the pattern in the captured image, and the depth image of the top surface of the container is obtained by combining the depth information of the points. The depth image acquisition principle based on laser scanning is: laser pulses are emitted toward the top surface of the container to be detected at certain time intervals, and the interval between a signal leaving the laser radar, reaching the top surface of the container, and being reflected back to the laser radar is recorded for each scanning point, from which the distance between each point on the top surface of the container and the laser radar is calculated. The principle of obtaining a depth image with a TOF camera is: continuous near-infrared pulses are emitted toward the top surface of the container to be measured, and the sensor receives the light pulses reflected by the top surface; by comparing the phase difference between the emitted light pulses and the reflected light pulses, the transmission delay can be calculated, the distance between the top surface of the container and the emitter obtained, and finally the depth image of the top surface of the container acquired.
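As a small worked illustration of the TOF principle described above: the phase difference of the modulated pulses gives the round-trip delay, and the distance is half of the delay times the speed of light. The modulation frequency and phase shift below are hypothetical inputs.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_shift_rad: float, modulation_hz: float) -> float:
    # Round-trip time recovered from the phase shift of the modulated pulse.
    round_trip_s = phase_shift_rad / (2 * math.pi * modulation_hz)
    # The pulse travels to the surface and back, so the distance is half.
    return C * round_trip_s / 2
```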
In some embodiments, the depth image of the side of the box structure may also be acquired by: and acquiring three-dimensional point cloud data aiming at the box-type structure, and extracting a depth image of the side face from the three-dimensional point cloud data.
Taking the box-type structure as a container as an example, a depth image of the top surface of the container can be obtained as follows: the top surface of the container to be measured is scanned with a laser radar (2D/3D), a stereo camera or a TOF camera to obtain the corresponding three-dimensional point cloud data. The point cloud data file takes the form of a 3D coordinate file (often referred to as an XYZ file); such files are ASCII files, so the point cloud data can be read by any post-processing software. For example, in a 3D gray file, each point is recorded as: X1, Y1, Z1, gray value 1; X2, Y2, Z2, gray value 2; and so on. The depth image of the top surface of the container to be detected can then be extracted from the point cloud data obtained by three-dimensionally scanning the top surface.
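For illustration, a minimal sketch of reading such an ASCII XYZ file and rasterizing it into a depth image, assuming whitespace-separated columns and points lying on a regular scan grid; both assumptions are simplifications.

```python
import numpy as np

def point_cloud_to_depth(path: str, grid_step: float = 1.0) -> np.ndarray:
    pts = np.loadtxt(path, usecols=(0, 1, 2))  # X, Y, Z columns per line
    # Map X/Y coordinates onto integer pixel grid positions.
    cols = ((pts[:, 0] - pts[:, 0].min()) / grid_step).round().astype(int)
    rows = ((pts[:, 1] - pts[:, 1].min()) / grid_step).round().astype(int)
    depth = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    depth[rows, cols] = pts[:, 2]              # Z becomes the pixel value
    return depth
```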
In other embodiments, after obtaining the depth image of the side surface of the box-type structure, a user (e.g., a detection person) may input the obtained depth image in a client (e.g., a deformation detection client of the box-type structure) operated by the terminal, the terminal automatically obtains a deformation detection request (carrying the depth image) for the box-type structure, and sends the deformation detection request for the box-type structure to the server, so that the server extracts the depth image after receiving the deformation detection request, detects the deformation degree of the box-type structure based on the depth image, and returns a detection result to the terminal, so that the terminal calls a human-computer interaction interface of the client to present the detection result returned by the server.
In step S102, based on the depth information of the depth image, regions corresponding to the plurality of components included in the side surface are identified from the depth image.
In some embodiments, after receiving a deformation detection request for the box-type structure sent by the terminal, the server extracts a depth image from the deformation detection request, and before identifying, from the depth image, regions corresponding to the plurality of components respectively included in the side face, based on depth information of the depth image, may further perform the following processing: traversing the depth image in a sliding mode through windows with preset sizes, and determining jump amplitude of depth values among different pixel points included in each window; determining the region where the window corresponding to the jump amplitude larger than the amplitude threshold value is located as a noise point in the depth image; the determined noise is removed from the depth image.
For example, taking the box-type structure as a container, many noise points may exist in the original depth image and affect subsequent calculation, so after the server extracts the depth image of the top surface of the container from the deformation detection request, it first needs to denoise the depth image. The server may traverse the entire depth image by sliding a window of a preset size (for example, 50 × 50 pixels) and, for the region corresponding to each window during the sliding process, determine the jump amplitude of the depth values between different pixel points in that region. Since the depth value of a noise point changes very fast, it jumps repeatedly within a small pixel interval, that is, it repeatedly decreases and increases, and the jump amplitude is large. Therefore, the region where a window whose jump amplitude is larger than the amplitude threshold is located can be determined as noise in the depth image. After the entire depth image has been traversed (i.e., after determining whether the region where each window is located is noise in the depth image), the determined noise can be removed from the depth image.
It should be noted that, in practical applications, the value of the preset size may be related to the size of the depth image, for example, when the size of the depth image is larger, the value of the corresponding preset size may also be larger (for example, when the size of the depth image is 10000 × 10000 pixels, the corresponding preset size may be 50 × 50 pixels, and when the size of the depth image is 20000 × 20000 pixels, the corresponding preset size may be 100 × 100 pixels); certainly, the value of the preset size may also be related to the denoising precision, for example, when the required denoising precision is higher, the value of the corresponding preset size is smaller; when the required denoising precision is low, the value of the corresponding preset size is larger, that is, the value of the preset size can be flexibly adjusted, and the value of the preset size is not specifically limited in the embodiment of the application.
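For illustration, a sketch of this window-based noise screen, assuming a numpy depth image; for brevity the window is stepped in non-overlapping tiles rather than slid pixel by pixel, and the horizontal neighbour difference stands in for the jump amplitude. Window size and threshold are illustrative.

```python
import numpy as np

def find_noise(depth: np.ndarray, win: int = 50,
               amplitude_threshold: float = 30.0) -> np.ndarray:
    noise = np.zeros(depth.shape, dtype=bool)
    h, w = depth.shape
    for r in range(0, h - win + 1, win):
        for c in range(0, w - win + 1, win):
            patch = depth[r:r + win, c:c + win]
            # Largest depth jump between horizontally adjacent pixels.
            jump = np.abs(np.diff(patch, axis=1)).max()
            if jump > amplitude_threshold:
                noise[r:r + win, c:c + win] = True
    return noise  # the caller can then delete (mask out) these pixels
```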
In some embodiments, the server may further identify, from the depth image, the regions respectively corresponding to the plurality of components included in the side surface by: identifying the regions corresponding to the plurality of components included in the side surface from the depth image based on the depth information of the depth image and the gradient differences between different pixel points included in the depth image.
For example, taking a box structure as a container as an example, the regions corresponding to the corner fittings of the container can be identified from the depth image by the following method: determining pixel points of which the depth values are smaller than a depth threshold (namely the highest area in the depth image) and the gradient difference between the pixel points and adjacent pixel points is larger than a gradient difference threshold in the depth image; and determining the region which is composed of the pixel points of which the depth values are smaller than the depth threshold value and the gradient difference between the pixel points and the adjacent pixel points is larger than the gradient difference threshold value and the corresponding spatial characteristics are corners in the depth image as the region corresponding to the corner fittings of the container.
For example, the corner fittings are located at the four corners of the container and their features are obvious: in the depth image, the corner fittings are the highest areas in the whole image (i.e., the areas whose depth values are smaller than a certain depth threshold), so the positions of the corner fittings at the four corners of the container can be located according to the height feature and the spatial feature (being located at the corners).
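For illustration, a sketch that follows this description literally, keeping high pixels (small depth values) with strong local gradients near the four image corners; a full implementation would then extract connected regions from these candidates. All thresholds and the corner margin are illustrative, and the gradient magnitude is used as a proxy for the neighbour gradient difference.

```python
import numpy as np

def corner_fitting_mask(depth: np.ndarray,
                        depth_threshold: float = 100.0,
                        grad_threshold: float = 5.0,
                        corner_margin: int = 200) -> np.ndarray:
    gy, gx = np.gradient(depth.astype(float))
    high = depth < depth_threshold              # highest areas of the image
    sharp = np.hypot(gx, gy) > grad_threshold   # abrupt change vs. neighbours
    # Spatial feature: corner fittings sit near the four image corners.
    at_corner = np.zeros(depth.shape, dtype=bool)
    m = corner_margin
    at_corner[:m, :m] = True
    at_corner[:m, -m:] = True
    at_corner[-m:, :m] = True
    at_corner[-m:, -m:] = True
    return high & sharp & at_corner
```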
For example, taking a box structure as a container, the corresponding region of the reinforcing plate of the container can be identified from the depth image by the following method: determining a first boundary of a reinforcing plate of the container coinciding with the corner fitting based on the area corresponding to the corner fitting; traversing and determining the gradients of a plurality of subsequent pixel points by taking at least one pixel point included in the first boundary as a starting point; determining a second boundary of the reinforcing plate based on the pixel points corresponding to the positions with the gradient difference larger than the gradient difference threshold value; and determining the area consisting of the first boundary and the second boundary in the depth image as the area corresponding to the reinforcing plate of the container.
For example, the reinforcing plate surrounds the corner fitting, and weld seams exist at its junctions with the top cross beam, the extension plate and the top plate. Therefore, after the corner fitting is confirmed, a partial outer boundary of the reinforcing plate (namely, the boundary coinciding with the corner fitting) can be determined; taking at least one pixel point included in that boundary as a starting point, the gradients of a plurality of subsequent pixel points are calculated by traversal along the same row or the same column. At the weld seam the gradient changes suddenly (that is, the gradient difference between the pixel points on the two sides of the weld is larger than the gradient difference threshold), so according to this characteristic the remaining boundary of the reinforcing plate can be determined and the region corresponding to the reinforcing plate identified from the depth image.
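For illustration, a sketch of this traversal along a single row, assuming a numpy depth image; the threshold is illustrative. Starting from a pixel on the already-known boundary, the walk stops at the first abrupt gradient change, which the description attributes to the weld seam.

```python
import numpy as np
from typing import Optional

def find_seam_along_row(depth: np.ndarray, row: int, start_col: int,
                        grad_diff_threshold: float = 3.0) -> Optional[int]:
    grad = np.gradient(depth[row].astype(float))
    for col in range(start_col + 1, len(grad)):
        # A sudden change of gradient between neighbours marks the weld seam.
        if abs(grad[col] - grad[col - 1]) > grad_diff_threshold:
            return col
    return None  # no seam found along this row
```

The same walk along a column gives the vertical boundaries, and the side beam segmentation described next uses the identical pattern starting from the image boundary.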
Taking a box structure as an example of a container, the corresponding region of the side beam of the container can be identified from the depth image by the following method: determining left and right boundaries of a side sill of the container based on a region corresponding to a reinforcing panel of the container; traversing and determining gradients of a plurality of subsequent pixel points by taking at least one pixel point included in the boundary of the depth image as a starting point; determining the upper and lower boundaries of the side beam based on the pixel points corresponding to the positions of which the gradient difference is greater than the gradient difference threshold; and determining a region consisting of the left and right boundaries of the side beam and the upper and lower boundaries of the side beam in the depth image as a region corresponding to the side beam of the container.
For example, taking the top surface of the container as an example, the top side beams are positioned at the upper side and the lower side of the top surface of the container, the left and right boundaries of the top side beams are overlapped with the left and right boundaries of the reinforcing plate, the upper boundary of the top side beam at the upper side is the upper boundary of the container, the lower boundary is connected with the top plate, and a welding seam exists; similarly, the lower boundary of the lower side roof beam is the lower boundary of the container, the upper boundary is connected with the top plate, and a welding seam exists, so that when the boundary of the roof beam is determined, at least one pixel point included by the upper and lower boundaries of the container can be used as a starting point, the gradient of a plurality of pixel points positioned in the same column subsequently can be determined in a traversing manner, and the gradient value at the upper and lower boundaries can be greatly jumped, so that a line segment formed by pixel points corresponding to positions with gradient differences larger than a gradient difference threshold value can be used as the upper and lower boundaries of the roof beam, and the corresponding region of the roof beam in the depth image can be identified.
For example, taking a box structure as a container as an example, the area corresponding to the beam of the container can be identified from the depth image by the following method: determining the upper and lower boundaries of a beam of the container based on the region corresponding to the corner fitting of the container; determining an interval consisting of a minimum abscissa and a maximum abscissa corresponding to the corner fitting, and traversing the gradient of each pixel point in the determined interval; determining left and right boundaries of the beam based on pixel points corresponding to positions with gradient differences larger than a gradient difference threshold; and determining a region consisting of the upper and lower boundaries of the beam and the left and right boundaries of the beam in the depth image as a region corresponding to the beam of the container.
For example, taking the top surface of the container as an example, the top cross beams are located at the two side edges of the top surface, one on the left side and one on the right side, and each necessarily lies between the corner fittings on that side. That is, after the positions of the four corner fittings are determined, the upper and lower boundaries of the two top cross beams can be located, and the left and right boundaries can be determined from the depth image: assuming the minimum abscissa of the corner fittings on the side corresponding to the top cross beam is x1 and the maximum abscissa is x2, the pixel points in the interval [x1, x2] formed by x1 and x2 are traversed and their gradients are calculated; the gradient exhibits its largest jumps at the left and right boundaries of the top cross beam, so the left and right boundaries can be determined and the region corresponding to the top cross beam identified in the depth image.
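The interval search described above might look roughly like the following sketch, where taking the two strongest gradient jumps inside [x1, x2] as the beam boundaries is an assumption about how "largest jumps of the gradient" is operationalized:

```python
import numpy as np

def top_beam_left_right(depth, y_top, y_bottom, x1, x2):
    # Average the depth over the beam's rows: one value per column.
    band = depth[y_top:y_bottom + 1, x1:x2 + 1].astype(np.float64)
    profile = band.mean(axis=0)
    # Jump of the gradient between neighbouring columns in [x1, x2].
    jump = np.abs(np.diff(np.gradient(profile)))
    # The two strongest jumps are taken as the left/right boundaries.
    left, right = sorted(np.argsort(jump)[-2:])
    return x1 + int(left), x1 + int(right)
```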
Taking the container as an example, the corresponding region of the extending plate of the container can be identified from the depth image by the following method: determining a boundary of an extension plate of the container, which coincides with a cross beam, based on an area corresponding to the cross beam of the container; determining the upper and lower boundaries of the extension plate based on the corresponding region of the reinforcing plate of the container; traversing and determining the gradients of a plurality of subsequent pixel points by taking at least one pixel point included in the boundary coincident with the beam as a starting point; determining the residual boundary of the extension board based on the pixel points corresponding to the positions with the gradient difference larger than the gradient difference threshold; and determining an area consisting of the boundary overlapped with the cross beam, the upper and lower boundaries of the extension plate and the remaining boundary in the depth image as an area corresponding to the extension plate of the container.
For example, after the top beam is positioned, one side of the top beam is a boundary of the container, and the other side of the top beam is an extension plate, that is, after the top beam is positioned, a partial boundary of the extension plate (that is, a boundary coinciding with the top beam) can be determined, and the upper and lower boundaries of the extension plate are boundaries of two corresponding side reinforcing plates (that is, after the reinforcing plates are positioned, the upper and lower boundaries of the extension plate can be determined), and in addition, the remaining boundaries of the extension plate are connected with the top plate, and a welding seam exists at the connection part, that is, a gradient of a pixel point at the welding seam has a great jump, so that the remaining boundaries of the extension plate can be determined according to a gradient difference, and thus the position of the extension plate can be identified from the depth image.
By way of example, continuing with the box-type structure being a container, after the remaining components of the top surface of the container (such as the corner fittings, reinforcing plates, extension plates, top cross beams and top side beams) are located, the position of the top plate is obtained automatically (i.e., the remaining area is determined as the area corresponding to the top plate of the container); in this way, the areas corresponding to all the important components included in the container are identified from the depth image.
In step S103, a reference plane corresponding to at least one component is determined, and a distance between a pixel point included in each region and the reference plane is determined.
In some embodiments, the reference plane corresponding to the at least one component may be determined by: selecting a target part of a maximum size from the plurality of parts; and determining the depth mean value of pixel points included in the region corresponding to the target component, and determining a reference plane corresponding to the target component based on the depth mean value.
For example, in the case of a container having a box-type structure, after regions corresponding to a plurality of components (e.g., corner pieces, reinforcing plates, extension plates, top beams, top side beams, and a top plate) included in a top surface of the container are identified from a depth image based on depth information of the depth image and a gradient difference between different pixel points, a target component (e.g., a top plate) corresponding to a maximum size may be selected from the plurality of components, a depth mean value of all pixel points included in the region corresponding to the target component may be calculated, and a plane corresponding to the calculated depth mean value may be determined as a reference plane corresponding to the target component.
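For instance, the "largest component defines the reference plane" rule could be sketched as below; the dictionary layout and the sample depth values are illustrative only:

```python
import numpy as np

def reference_plane_height(component_regions):
    # component_regions: name -> 1-D array of depth values in that region.
    name, depths = max(component_regions.items(), key=lambda kv: kv[1].size)
    return name, float(depths.mean())  # plane z = D, with D the depth mean

regions = {"top_plate": np.array([20.1, 19.8, 20.3, 20.0]),
           "top_cross_beam": np.array([15.0, 15.2])}
print(reference_plane_height(regions))  # -> ('top_plate', 20.05)
```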
In other embodiments, the reference plane corresponding to the at least one component may also be determined by: determining a tongue region and a groove region included in the side surface; determining a first depth mean value of pixel points included in the convex groove area, and deleting the pixel points in the convex groove area, wherein the distance between the pixel points and the first depth mean value is greater than a depth mean value threshold; determining a second depth mean value of the residual pixel points in the convex groove area, and determining a first reference plane corresponding to the convex groove area based on the second depth mean value; determining a third depth mean value of pixel points included in the groove area, and deleting the pixel points in the groove area, wherein the distance between the pixel points and the third depth mean value is greater than a depth mean value threshold value; and determining a fourth depth mean value of the residual pixel points in the groove area, and determining a second reference plane corresponding to the groove area based on the fourth depth mean value.
As an example, in the case of a container having a box structure, the top plate of the top surface of the container is made of a corrugated plate including a convex region (i.e., a convex groove region) and a concave region (i.e., a concave groove region), and thus, it is necessary to determine reference planes corresponding to the convex groove region and the concave groove region, respectively. For example, taking the tongue region as an example, first, all the pixel points included in the tongue region are counted, and the depth mean m1 of all the pixel points is calculated, then, the pixel points whose distance from the depth mean m1 exceeds the depth mean threshold (for example, 10 mm) among all the pixel points included in the tongue region are determined and deleted, then, the depth mean m2 of the remaining pixel points included in the tongue region is recalculated, and the depth mean m2 can be used to represent the height of the reference plane corresponding to the tongue region. Similarly, the groove area is processed similarly, and the reference plane corresponding to the groove area can be obtained.
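A minimal sketch of this two-pass mean, assuming the tongue (or groove) region's depth values are collected into a 1-D NumPy array and using the 10-millimeter threshold from the example:

```python
import numpy as np

def robust_plane_height(depths, mean_dist_thresh=10.0):
    m1 = depths.mean()  # first-pass depth mean (m1 in the text)
    kept = depths[np.abs(depths - m1) <= mean_dist_thresh]
    # Second-pass mean over the surviving pixels (m2 in the text).
    return kept.mean() if kept.size else m1

tongue = np.array([40.0, 41.0, 39.5, 40.2, 40.4, 39.8, 40.1, 95.0])
print(robust_plane_height(tongue))  # ~40.14; the 95.0 outlier is dropped
```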
In some embodiments, when there is a specific part having an intensity greater than the intensity threshold among the plurality of parts identified in step S102, the following process may be performed for the specific part: determining a fifth depth mean value of pixel points included in the region corresponding to the specific part; determining a specific reference plane corresponding to the specific part based on the fifth depth mean value; determining a distance between a pixel point included in the specific part and the specific reference plane; based on a distance between a pixel point included in the specific part and the specific reference plane, a deformation region in a region corresponding to the specific part is determined.
In the case of a container, the corner fitting is the strongest part of the container (i.e., a specific part whose strength is greater than the strength threshold); its deformation is relatively small, and most of its points are not deformed. Unlike other areas of the container, a hollow hole exists in the middle of each of the four corner fittings, and this hole does not participate in the deformation calculation, so its position needs to be located first. Traversing from the edge of the corner fitting toward the middle and calculating the maximum gradient value within a small-size interval (for example, 5 × 5 pixels), the gradient of the pixel points around the hole exhibits a large jump, so the position of the hole can be determined from the gradient difference of the pixel points. Then the depth mean m3 of the remaining pixel points included in the regions corresponding to the four corner fittings is calculated; after the depth mean m3 is obtained, pixel points in those regions whose distance from m3 is greater than the depth mean threshold (for example, 5 millimeters) are removed, and the depth mean of the remaining pixel points is calculated to obtain the depth mean m4, which can be used to represent the height of the reference plane corresponding to the corner fittings. After the reference plane corresponding to the corner fittings is obtained, the distance between the pixel points (other than the hole) in the region corresponding to each corner fitting and the reference plane can be calculated; pixel points whose distance exceeds a distance threshold (for example, 8 millimeters) are determined to be deformation points, and the deformation points are combined into a deformation region. The size, coordinates, maximum deformation value (i.e., the farthest distance from the reference plane of the corner fitting) and confidence of the deformation region (the confidence is determined by the ratio of the number of noise points included in the deformation region to the total number of pixel points in the deformation region) can then be determined.
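A hedged sketch of this corner-fitting pipeline is given below; using SciPy's maximum filter to realize the "maximum gradient value in a 5 × 5 interval" test is an assumption, and the threshold values are the example figures from the text:

```python
import numpy as np
from scipy import ndimage

def corner_fitting_deformation(depth, hole_grad_thresh,
                               mean_dist_thresh=5.0, deform_thresh=8.0):
    gy, gx = np.gradient(depth.astype(np.float64))
    grad = np.hypot(gx, gy)
    # Hole mask: pixels whose 5 x 5 neighbourhood holds a huge gradient jump.
    hole = ndimage.maximum_filter(grad, size=5) > hole_grad_thresh
    valid = depth[~hole].astype(np.float64)
    m3 = valid.mean()                                    # first-pass mean m3
    kept = valid[np.abs(valid - m3) <= mean_dist_thresh]
    m4 = kept.mean()                                     # plane height m4
    # Deformation points: outside the hole and far from the plane.
    deform_mask = (~hole) & (np.abs(depth - m4) > deform_thresh)
    return m4, deform_mask
```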
In some embodiments, the above-mentioned determining the distance between the pixel point included in each region and the reference plane may be implemented by: for each region, the distance between the depth value of each pixel included in the region and the reference plane is determined, for example, assuming that the height of the reference plane is 20 mm, and the depth value of a certain pixel is 18 mm, the distance between the pixel and the reference plane is 2 mm.
In step S104, a deformation region in each region is determined based on the distance between the pixel point included in each region and the reference plane.
In some embodiments, step S104 shown in fig. 3 may be implemented by steps S1041 to S1042 shown in fig. 4, which will be described in conjunction with the steps shown in fig. 4.
In step S1041, a pixel point in the region whose distance is greater than the distance threshold is determined.
In some embodiments, after determining the distance between the pixel point included in each region and the reference plane based on step S103, the following process may be performed for each region: and determining the pixel points corresponding to the distance greater than the distance threshold in the area.
Taking a box structure as an example of a container, after identifying regions corresponding to a plurality of components (such as corner pieces, reinforcing plates, extension plates, top side beams, top cross beams, and the like) included in the top surface of the container and distances between pixel points included in each region and a reference plane from a depth image, pixel points corresponding to distances greater than a distance threshold in each region can be determined. For example, taking the reinforcing plate as an example, after determining the distance between each pixel point included in the region corresponding to the reinforcing plate and the reference plane (for example, the reference plane corresponding to the top plate, that is, the reference plane determined based on the top plate), a pixel point whose distance is greater than the distance threshold may be selected from the distances, and then, the deformation region may be obtained based on the combination of the selected pixel points.
It should be noted that, in practical applications, for the different components of the side face of the box-type structure identified from the depth image, the corresponding distance thresholds used when calculating their deformation degrees may be the same or different. For example, a fixed distance threshold may be used for all components to determine the pixel points whose distance is greater than the threshold; alternatively, a distance threshold may be set for each component: when a component has high strength (i.e., is not easily deformed), its distance threshold may be small, and when a component has low strength (i.e., is easily deformed), its distance threshold may be larger. That is, the value of the distance threshold can be adjusted flexibly, which is not limited in the embodiments of the present application.
In step S1042, a region composed of pixels whose distances are greater than the distance threshold is determined as a deformation region in the region.
In some embodiments, after determining the pixel points whose distance is greater than the distance threshold in the region based on step S1041, the region obtained by combining the pixel points whose distance is greater than the distance threshold may be determined as the deformation region in the region.
Taking the box structure as an example of a container, the deformation zone in the roof comprised by the container can be determined by: the distance between the depth value of each pixel point included in the region corresponding to the top plate and the reference plane is calculated, then, the pixel points corresponding to the distance greater than the distance threshold value (for example, 20 millimeters) are selected, then, the selected pixel points are combined, and the combined region is used as the deformation region in the top plate, so that the automatic detection of the deformation degree of the container can be realized, and the detection efficiency and the accuracy of the detection result are improved.
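For illustration, the "select and combine" step could be realized with connected-component labelling as sketched below (the use of scipy.ndimage.label for the combining step is an assumption; the text does not prescribe a particular grouping method):

```python
import numpy as np
from scipy import ndimage

def deformation_regions(depth, plane_height, dist_thresh=20.0):
    # Pixels farther than dist_thresh (e.g. 20 mm) from the reference plane.
    deform = np.abs(depth - plane_height) > dist_thresh
    labels, n = ndimage.label(deform)  # combine them into connected regions
    regions = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        max_dev = np.abs(depth[ys, xs] - plane_height).max()
        regions.append({"bbox": (ys.min(), xs.min(), ys.max(), xs.max()),
                        "size": int(ys.size),
                        "max_deformation": float(max_dev)})
    return regions
```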
In other embodiments, referring to fig. 5, after step S104 shown in fig. 3 is completed, step S105 to step S106 shown in fig. 5 may be further performed, which will be described in conjunction with the steps shown in fig. 5.
In step S105, a two-dimensional image of the side of the box structure is acquired.
In some embodiments, a two-dimensional image of the side of the box structure may be acquired by: and shooting the side face of the box-type structure through a 2D camera to obtain a two-dimensional image of the side face of the box-type structure, wherein the two-dimensional image is completely aligned with the depth image, and the coordinates of pixel points of each area are consistent on the two images.
Taking a box-type structure as an example of a container, linear array scanning is performed on the top surface of the container through a 2D camera (such as a high-definition camera) to obtain a linear scanning 2D grayscale image of the top surface of the container, wherein the linear scanning 2D grayscale image is completely aligned with the depth image of the top surface of the container acquired in step S101, and coordinates of pixel points of each region are consistent on both images, so that after the regions corresponding to a plurality of components included in the top surface of the container and the deformation region of the container are identified based on the depth image, the corresponding positions of the linear scanning 2D grayscale image can be labeled, thereby facilitating detection personnel to clearly determine the region where the container deforms.
In other embodiments, a two-dimensional image of the side of the box structure may also be acquired by: acquiring three-dimensional point cloud data aiming at the box-type structure; and extracting a two-dimensional image of the side face from the three-dimensional point cloud data.
For example, taking a box-type structure as a container as an example, first three-dimensional point cloud data for a top surface of the container is obtained (for example, the top surface of the container is scanned by a laser radar to obtain the three-dimensional point cloud data for the top surface of the container), and then format conversion and extraction operations are performed on the three-dimensional point cloud data to extract a two-dimensional image of the top surface of the container from the three-dimensional point cloud data.
In step S106, regions to which the plurality of parts included in the side face respectively correspond and a deformation region in each region are marked in the two-dimensional image based on the correspondence between the depth image and the two-dimensional image.
In some embodiments, the depth image of the side of the box structure acquired in step S101 is perfectly aligned with the two-dimensional image of the side acquired in step S105, that is, the coordinates of the pixel points of each region are consistent across the two images. Therefore, after the regions respectively corresponding to the plurality of components included in the side face and the deformation region in each region are identified from the depth image based on the depth information of the depth image and the gradient difference between different pixel points, the regions respectively corresponding to the plurality of components and the deformation regions in each region can be marked in the two-dimensional image based on the correspondence between the depth image and the two-dimensional image. In this way, by marking the deformed regions of the box structure in the two-dimensional image, the deformation regions can be presented intuitively, which facilitates subsequent handling of the deformation by inspection personnel.
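Because of this pixel alignment, the marking step reduces to drawing the depth-image bounding boxes onto the 2D image at the same coordinates, as in this sketch (OpenCV and the region-dictionary layout are assumptions; the regions are, for example, the output of the labelling sketch above):

```python
import cv2  # OpenCV, assumed available

def mark_regions(gray_2d, regions):
    # The 2D image and the depth image are pixel-aligned, so a bounding
    # box found in the depth image is valid at the same 2D coordinates.
    vis = cv2.cvtColor(gray_2d, cv2.COLOR_GRAY2BGR)
    for r in regions:
        y0, x0, y1, x1 = r["bbox"]
        cv2.rectangle(vis, (int(x0), int(y0)), (int(x1), int(y1)),
                      (0, 0, 255), 2)  # red box around the deformation
    return vis
```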
According to the deformation detection method for the box-type structure, the depth image of the side face of the box-type structure is obtained, so that the regions corresponding to the components included in the side face can be identified from the depth image based on the depth information of the depth image, then at least one component can be selected from the components to determine the reference plane corresponding to the at least one component, and then the deformation region in each region can be determined based on the distance between the pixel point included in each region and the reference plane.
In the following, an exemplary application of the embodiment of the present application in an actual application scenario is described by taking a box structure as an example.
In practical application scenarios, the top surface of the container may be deformed concavely or convexly under the influence of extrusion, impact and the like. For the detection of the deformation degree of the top surface of the container, the related art is usually measured manually, which requires a lot of manpower, resulting in high cost. In view of this, the embodiment of the present application provides a deformation detection method for a box-type structure, which is combined with a computer vision technology to improve the efficiency and accuracy of detecting the deformation degree of the top surface of the container.
For example, referring to fig. 6A to 6C, fig. 6A is a schematic line scan 2D diagram of a top surface of a container provided by an embodiment of the present application, and fig. 6B is a depth image of the top surface of the container provided by an embodiment of the present application (as shown in fig. 6B, corresponding depth values of different areas are different, for example, a depth value of a pixel point included in a gray area 601 is greater than a depth value of a pixel point included in a white area 602, that is, a distance of the gray area 601 from an image acquisition device is greater than that of the white area 602), wherein the depth image shown in fig. 6B is acquired by a line scan 3D camera, and the depth image shown in fig. 6B is completely aligned with the schematic line scan 2D diagram shown in fig. 6A, that is, coordinates of the pixel point of each area are consistent on both images. The deformation detection method for the box-type structure provided by the embodiment of the application can accurately output the specific position of the deformation of the top surface of the container and the deformation degree information, for example, the solid line box 603 shown in fig. 6C represents the area with large deformation of the top surface of the container.
The deformation detection method for the box-type structure provided by the embodiment of the present application is specifically described below. For the problem of detecting the deformation degree of the top surface of the container, the applicant carries out detection practice on the container transported to the container yard, and in the specific practical process, the following two difficulties are found:
1) when the container is transported into the yard, conditions such as an uneven road surface and the truck turning cause the image formed by the line scan camera to be distorted, and the depth values at the head and the tail of the container along its direction of travel change greatly;
2) the container deformation detection methods provided by the related art generally require instruments such as laser rangefinders or metal flaw detectors, which are costly; moreover, the container must be kept stationary, and both the container and the detection instruments must be placed at a specific site. These requirements are demanding, and in actual production there may not be many sites that satisfy them; that is, the conditions of the related-art schemes are harsh and cannot meet the detection needs of real scenarios.
In actual production, the applicant has also found the following two facts for containers in a container yard:
1) the empty area of the storage yard is small and traffic is busy; if the trucks had to stop before imaging and deformation detection were carried out, the normal operation of the yard would be affected, so the best approach is to complete imaging and detection while the containers are being transported into the yard, without stopping;
2) although the reliability of the detection result output by the algorithm is high, the reliability does not reach 100%, and for actual production, a small amount of manual work is required to be combined for inspection.
In view of this, in order to support project development and ensure reliability of a detection result, embodiments of the present application provide a deformation detection method for a box structure, which can automatically detect a deformation degree of a top surface of a container, so as to improve detection efficiency and reduce detection cost.
For example, referring to fig. 7, fig. 7 is a schematic flowchart of a deformation detection method for a box-type structure provided in the embodiment of the present application, and as shown in fig. 7, the deformation detection method for a box-type structure provided in the embodiment of the present application mainly includes: the method comprises five steps of inputting, denoising, area positioning, positioning a reference plane and deformation calculation, and the five steps are specifically described below.
(I) Input: first, an image to be subjected to deformation detection is obtained. The image is an image of the top surface of the container and includes, for example, the line scan 2D schematic diagram shown in fig. 8A and the depth image shown in fig. 8B (as shown in fig. 8B, the depth values corresponding to different regions are different; for example, the depth value corresponding to the gray region 801 is greater than that corresponding to the white region 802). The top surface of the container can further be divided into six components, namely corner fittings, reinforcing plates, extension plates, top cross beams, top side beams and a top plate, where the top plate is composed of a corrugated plate that includes groove (concave) regions and tongue (convex) regions.
(II) denoising: and traversing the depth image, and removing noise in the depth image to ensure the accuracy of subsequent calculation.
In some embodiments, there are many noise points in the original depth image, which would affect subsequent calculations, so the depth image needs to be denoised first. In practice, whether at a deformation point or at a junction between components, the depth values of adjacent pixel points change according to a rule: generally the change is slow, and occasionally there is a single large change. The depth value of a noise point, by contrast, changes extremely fast: within a small pixel interval it jumps back and forth repeatedly (rising, falling, and rising again) with large amplitude. Based on this rule, the whole depth image can be traversed with a sliding window, and the noise points that jump repeatedly within a small interval (for example, 50 × 50 pixels) can be removed.
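One possible reading of this rule, sketched per image row with assumed parameter values (the "rise then immediately fall" spike test and the median replacement are simplifications of the sliding-window traversal described above):

```python
import numpy as np

def denoise_depth(depth, window=50, jump_thresh=15.0):
    src = depth.astype(np.float64)
    out = src.copy()
    h, w = src.shape
    for y in range(h):
        d = np.diff(src[y])
        big = np.abs(d) > jump_thresh
        # A noise pixel rises and immediately falls again (or vice versa).
        spikes = np.nonzero(big[:-1] & big[1:] &
                            (np.sign(d[:-1]) != np.sign(d[1:])))[0] + 1
        for x in spikes:
            x0, x1 = max(0, x - window // 2), min(w, x + window // 2)
            out[y, x] = np.median(src[y, x0:x1])  # replace by local median
    return out
```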
(III) area positioning: according to the difference of the depth values and gradients of all areas in the depth image, positioning the corner pieces, the reinforcing plates, the extension plates, the top cross beams, the top side beams and the top plates, and correspondingly marking the corner pieces, the reinforcing plates, the extension plates, the top cross beams, the top side beams and the top plates in the line scanning 2D gray level image.
In some embodiments, the top surface of the container includes six kinds of regions, namely corner fittings, reinforcing plates, extension plates, top cross beams, top side beams and a top plate. The corner fittings, reinforcing plates, extension plates and top cross beams are distributed at the left and right ends of the top surface, with one top cross beam and one extension plate at each end and a corner fitting surrounded by a reinforcing plate at each corner; the two top side beams are distributed at the upper and lower ends of the top surface; and the remaining region is the top plate.
The positioning of the corner fitting and the reinforcing plate will be explained first.
The corner fittings and the reinforcing plates are positioned at four corners of the top surface of the container, the characteristics of the corner fittings are obvious, and in the depth image, the corner fittings are the highest areas in the whole image, so that the positions of the corner fittings can be positioned at the four corners of the top surface of the container according to the height characteristics and the space characteristics (positioned at the corners).
The reinforcing plate surrounds the corner fitting, welding seams exist at junctions of the reinforcing plate, the top beam, the extension plate and the top plate, after the corner fitting is confirmed, a partial outer boundary of the reinforcing plate can be determined, the gradient of each pixel point is calculated in a traversing mode by taking any one or more pixel points included by the partial outer boundary as starting points, sudden changes can exist in the gradient at the welding seams, according to the characteristic, the remaining boundary of the reinforcing plate can be determined, and then the corner fitting and the reinforcing plate are positioned completely.
The positioning of the extension panel and the top rail is explained below.
The top cross beams are located at the left and right edges of the top surface of the container, one on each side, and each necessarily lies between the corner fittings on that side. Therefore, after the positions of the four corner fittings are determined, the upper and lower boundaries of the two top cross beams can be located. The left and right boundaries of a top cross beam can then be determined from the depth image: assuming the minimum abscissa of the corner fittings on the side corresponding to the top cross beam is x1 and the maximum abscissa is x2, the pixel points in the interval [x1, x2] are traversed and the gradient of each pixel point is calculated; the gradient exhibits its largest jumps at the left and right boundaries of the top cross beam, so these boundaries can be determined and the exact position of the top cross beam obtained.
One side of the top beam is the boundary of the container, the other side of the top beam is an extension plate, the upper boundary and the lower boundary of the extension plate are the boundaries of two reinforcing plates on the corresponding sides, the rest boundaries of the extension plate are connected with the top plate, welding seams exist, the rest boundaries of the extension plate can be determined according to gradient differences, and the positioning of the region corresponding to the extension plate is finished.
The positioning of the roof side rail and the roof panel will be described below.
The top side beams are located at the upper and lower sides of the top surface of the container, and their left and right boundaries coincide with the left and right boundaries of the reinforcing plates. The upper boundary of the upper top side beam is the upper boundary of the container, and its lower boundary adjoins the top plate with a weld seam; the lower boundary of the lower top side beam is the lower boundary of the container, and its upper boundary adjoins the top plate with a weld seam. The locating method is therefore the same as for the top cross beams: by traversing the pixel points and calculating the gradient of each pixel point, a large gradient jump appears at the upper and lower boundaries, from which the position of the top side beams can be located.
After the other parts of the top surface of the container are located, the position of the top plate is obtained automatically: the top plate lies between the two extension plates and the two top side beams. Next, in order to conveniently determine the two concave and convex reference planes, the grooves and tongues of the top plate need to be subdivided and each of them located. In practice, the top plate is traversed from left to right; the first and last areas are necessarily grooves, and a large gradient jump exists at the boundaries between grooves and tongues, so the position of each groove and tongue can be determined.
For example, referring to fig. 9, fig. 9 is a schematic diagram of the segmentation result of the top surface of the container provided by the embodiment of the present application, and as shown in fig. 9, after the corner fittings, the reinforcing plates, the extension plates, the top side beams, the top cross beams and the top plate are located based on the depth information of the depth image and the gradient difference between different pixel points, corresponding positions of the schematic diagram of line scan 2D shown in fig. 9 may be marked, such as the corner fittings 901 located at four corners of the container, the reinforcing plates 902 surrounding the corner fittings 901, the top cross beams 904 located at the left and right edges of the top surface of the container, the extension plates 903 connected to the top cross beams 904, the top side beams 905 located at the upper and lower edges of the top surface of the container, the reinforcing plates 902, the extension plates 903, the top cross beams 904 and the top plate 906 outside the top side beams 905 of the container.
(IV) Positioning the reference planes: calculating the deformation degree of the top surface of the container means calculating the distance between the top surface and a reference plane. For the areas other than the corner fittings, the groove plane of the top plate serves as the concave reference plane (i.e., the plane compared against when calculating concave deformation) and the tongue plane of the top plate serves as the convex reference plane (i.e., the plane compared against when calculating convex deformation), so the two convex and concave reference planes of the container can be found in the depth image based on the segmented areas. The corner fitting is the strongest part of the container and its deformation is relatively small, with most points undeformed, so the plane formed by the four corner fittings serves as the reference plane when calculating corner-fitting deformation.
In some embodiments, taking the calculation of the deformation of the top surface as an example, two reference planes need to be located first. The plane equation is ax + by + cz = D; considering the characteristics of the top-surface scanning camera, a and b are both 0 and c is 1, so the equation reduces to z = D. Thus, once the height of the reference plane is obtained, the reference plane is located. The specific steps are as follows:
For the top plate tongue regions {X1, X2, …, Xn}, each tongue region may be further divided into three sub-regions: a first sub-region whose coordinate difference from the lower boundary y1 of the upper top side beam is smaller than a first preset difference (for example, 20 pixels), a second sub-region whose coordinate difference from the upper boundary y2 of the lower top side beam is smaller than 20 pixels, and the remaining area as a third sub-region. The depth mean of the first sub-regions across all tongue regions is calculated; the pixel points whose difference from this mean exceeds a preset threshold (for example, 10 millimeters) are then excluded, and the mean of the remaining points is recalculated to obtain the depth mean m1 corresponding to the first sub-regions of the tongue regions. Similarly, the depth mean m2 corresponding to the second sub-regions is obtained. The average m of m1 and m2 then represents the height of the reference plane corresponding to the tongue regions. Performing the same operation on the groove regions locates the height of the reference plane corresponding to the groove regions.
For the four corner fittings, unlike other parts of the top surface of the container, a hollow hole exists in the middle of each corner fitting and does not participate in the deformation calculation, so the position of the hole needs to be located first. The corner fitting is traversed from the edge toward the middle, and the maximum gradient value within a preset-size interval (for example, 5 × 5 pixels) is calculated; the gradient around the hole exhibits a jump, so the position of the hole can be located. Then the depth mean of the remaining points of the four corner fittings is calculated; after this mean is obtained, the pixel points whose difference from it is greater than a preset threshold (for example, 5 millimeters) are removed, and the mean of the remaining pixel points is recalculated. This value is the height of the corner-fitting reference plane, and since z = D, the corner-fitting reference plane is obtained.
(V) deformation calculation: traversing all areas on the top surface of the container, respectively calculating the deformation degree of each area, for example, for a pixel point in a certain area, a point with a larger distance (for example, larger than a distance threshold) from the reference plane is a deformation point, combining all the deformation points in a single area into a deformation area, obtaining a final output result, and simultaneously outputting the detection confidence of the area.
In some embodiments, the deformation calculation also needs to output a confidence. For the different areas on the top surface, each area is assigned a confidence (the top plate area calculates one confidence for each small groove and tongue). The calculation is the same everywhere: the confidence is determined by the proportion of noise points in the region relative to the total number of pixel points the region includes; assuming this proportion is i, the confidence is 1 - 2 × i, so the more noise there is, the lower the corresponding confidence, with a minimum of 0. The deformation calculation differs from region to region (a code sketch combining the per-component thresholds with this confidence rule follows the list below), specifically as follows:
1) corner fittings: after the position of the corner fitting and the reference plane of the corner fitting are located, the distance between pixel points except for the hollow in the corner fitting and the reference plane of the corner fitting is calculated, the pixel points with the distance exceeding a distance threshold (for example, 8 millimeters) are regarded as deformation points, the deformation points are combined to form deformation areas, and the size, the coordinate, the maximum deformation value and the confidence coefficient of the deformation areas are given.
2) A reinforcing plate: calculating the distance between each pixel point included in the reinforcing plate and a reference plane corresponding to the convex groove area, regarding the pixel points which exceed the reference plane and have the distance larger than a distance threshold (for example, 20 mm) as deformation points, combining the deformation points into a deformation area (namely, convex deformation), and giving the size, the coordinate, the maximum deformation value and the confidence coefficient of the deformation area; then, the distance between each pixel point included in the reinforcing plate and the reference plane corresponding to the groove region is calculated, the pixel points which are lower than the reference plane and have the distance of more than 20 millimeters are taken as deformation points, the deformation points are combined to form deformation regions (namely, concave deformation), and the size, the coordinate, the maximum deformation value and the confidence coefficient of the deformation regions are given.
3) Extension plate: calculate the distance between each pixel point included in the extension plate and the reference plane corresponding to the convex groove area; regard the pixel points that are above the reference plane with a distance greater than a distance threshold (for example, 35 millimeters) as deformation points, combine the deformation points into a deformation region (i.e., convex deformation), and give the size, coordinates, maximum deformation value and confidence of the deformation region; then calculate the distance between each pixel point included in the extension plate and the reference plane corresponding to the groove area, regard the pixel points that are below the reference plane with a distance greater than 35 millimeters as deformation points, combine the deformation points into a deformation region (i.e., concave deformation), and give the size, coordinates, maximum deformation value and confidence of the deformation region.
4) Top cross beam: calculate the distance between each pixel point included in the top cross beam and the reference plane corresponding to the convex groove area; regard the pixel points that are above the reference plane with a distance greater than a distance threshold (for example, 20 millimeters) as deformation points, combine the deformation points into a deformation region (i.e., convex deformation), and give the size, coordinates, maximum deformation value and confidence of the deformation region; then calculate the distance between each pixel point included in the top cross beam and the reference plane corresponding to the groove area, regard the pixel points that are below the reference plane with a distance greater than 20 millimeters as deformation points, combine the deformation points into a deformation region (i.e., concave deformation), and give the size, coordinates, maximum deformation value and confidence of the deformation region.
5) Top side beam: calculate the distance between each pixel point included in the top side beam and the reference plane corresponding to the convex groove area; regard the pixel points that are above the reference plane with a distance greater than a distance threshold (for example, 30 millimeters) as deformation points, combine the deformation points into a deformation region (i.e., convex deformation), and give the size, coordinates, maximum deformation value and confidence of the deformation region; then calculate the distance between each pixel point included in the top side beam and the reference plane corresponding to the groove area, regard the pixel points that are below the reference plane with a distance greater than 30 millimeters as deformation points, combine the deformation points into a deformation region (i.e., concave deformation), and give the size, coordinates, maximum deformation value and confidence of the deformation region.
6) Top plate: calculate the distance between each pixel point included in the top plate and the reference plane corresponding to the convex groove area; regard the pixel points that are above the reference plane with a distance greater than a distance threshold (for example, 20 millimeters) as deformation points, combine the deformation points into a deformation region (i.e., convex deformation), and give the size, coordinates, maximum deformation value and confidence of the deformation region; then calculate the distance between each pixel point included in the top plate and the reference plane corresponding to the groove area, regard the pixel points that are below the reference plane with a distance greater than 20 millimeters as deformation points, combine the deformation points into a deformation region (i.e., concave deformation), and give the size, coordinates, maximum deformation value and confidence of the deformation region.
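A sketch combining the per-component thresholds listed above with the confidence rule from earlier (the component names are hypothetical identifiers; the threshold values are the example figures from the text, and grouping by connected components is an assumption):

```python
import numpy as np
from scipy import ndimage

# Distance thresholds (mm) quoted above; the keys are hypothetical names.
DIST_THRESH_MM = {"corner_fitting": 8, "reinforcing_plate": 20,
                  "extension_plate": 35, "top_cross_beam": 20,
                  "top_side_beam": 30, "top_plate": 20}

def region_confidence(noise_count, total_pixels):
    i = noise_count / total_pixels      # noise ratio i from the text
    return max(0.0, 1.0 - 2.0 * i)      # confidence = 1 - 2 * i, floor 0

def detect_deformation(component, depth, plane_height, noise_mask):
    deform = np.abs(depth - plane_height) > DIST_THRESH_MM[component]
    labels, n = ndimage.label(deform)
    results = []
    for k in range(1, n + 1):
        region = labels == k
        results.append({
            "size": int(region.sum()),
            "max_deformation": float(np.abs(depth - plane_height)[region].max()),
            "confidence": region_confidence(int((region & noise_mask).sum()),
                                            int(region.sum()))})
    return results
```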
It should be noted that, in actual production, for further confirmation, a manual review may be performed on a low confidence region (for example, a region with a confidence less than 0.8) given by the system to ensure that there is no problem.
For example, referring to fig. 10 and fig. 11, fig. 10 is a schematic flowchart of a container top surface deformation detection method provided in the related art, and fig. 11 is a schematic flowchart of the container top surface deformation detection method provided in an embodiment of the present application. Comparing fig. 10 and fig. 11 shows that the method provided in the embodiment of the present application adds computer vision technology and uses only a small amount of labor as an auxiliary; results in actual production show that, compared with the method shown in fig. 10, it obtains detection results faster and better.
In the deformation detection method for a box-type structure provided by the embodiment of the present application, the depth image is first denoised; then, based on the depth information of the denoised depth image, the positions corresponding to the plurality of components included in the container top surface (for example, the corner fittings, reinforcing plates, extension plates, top side beams, top cross beams and top plate) are located; next, the reference planes respectively corresponding to the groove regions and the tongue regions are located; finally, the deformation degree of each region is calculated based on the reference planes and the deformation regions are determined. In this way, the deformation degree of the container top surface is detected automatically by an algorithm, reducing labor costs.
The beneficial effects of the deformation detection method for the box-type structure provided by the embodiment of the application are further explained by combining the deformation detection result in actual production.
For example, referring to fig. 12A to 12C, fig. 12A is a line scan 2D schematic diagram of the top surface of a container to be detected provided in this embodiment, and fig. 12B is the depth image of the top surface of the container to be detected provided in this embodiment (as shown in fig. 12B, the depth values corresponding to different areas are different; for example, the depth value corresponding to the gray area 1201 is greater than the depth value corresponding to the white area 1202). Based on the depth image shown in fig. 12B, the deformation detection method for a box-type structure provided by the embodiment of the present application is invoked to detect the deformation degree of the top surface of the container, obtaining the deformation detection result shown in fig. 12C, where the solid line box 1203 represents an area of the container top surface with large deformation.
For another example, referring to fig. 13A to 13C, fig. 13A is a line scan 2D schematic diagram of the top surface of a container to be detected provided in this embodiment, and fig. 13B is the depth image of the top surface of the container to be detected provided in this embodiment (as shown in fig. 13B, the depth values corresponding to different areas are different; for example, the depth value corresponding to the gray area 1301 is greater than the depth value corresponding to the white area 1302). Based on the depth image shown in fig. 13B, the deformation detection method for a box-type structure provided by the embodiment of the present application is invoked to detect the deformation degree of the top surface of the container, obtaining the deformation detection result shown in fig. 13C, where the solid line box 1303 represents an area of the container top surface with large deformation. As can be seen from fig. 13A to 13C, even when the distortion of the acquired container image itself is relatively serious, the method provided by the embodiment of the present application can still obtain a fairly good deformation detection result.
It should be noted that the deformation detection method for the box-type structure provided by the embodiment of the application can be applied to deformation detection of the top surface of the container in a storage yard, and can also be used for detecting the overall condition of the top surface of the container in a production plant so as to ensure good product quality.
In addition, the deformation detection method for a box-type structure provided by the embodiment of the present application may also work without the 2D information of the container top surface to be detected, by directly collecting three-dimensional point cloud data (for example, a 3D point cloud image) of the container top surface and detecting its deformation degree using 3D data processing techniques. For example, three-dimensional point cloud data for the container top surface is collected first; then a depth image of the top surface is extracted from the point cloud data, and the deformed regions of the top surface are determined based on the depth information of the depth image; finally, a two-dimensional image of the top surface is extracted from the point cloud data, and the determined deformation regions are labeled in the two-dimensional image.
Continuing with the exemplary structure of the box-type structure deformation detecting device 243 provided by the embodiment of the present application implemented as a software module, in some embodiments, as shown in fig. 2, the software module stored in the box-type structure deformation detecting device 243 of the memory 240 may include: an acquisition module 2431, an identification module 2432, and a determination module 2433.
An acquisition module 2431 for acquiring a depth image of a side of the box-like structure; an identifying module 2432, configured to identify, based on depth information of the depth image, regions corresponding to the plurality of components respectively included in the side face from the depth image; a determining module 2433 for determining a reference plane corresponding to the at least one component and determining a distance between a pixel point included in each area and the reference plane; the determining module 2433 is further configured to determine a deformation region in each region based on a distance between a pixel point included in each region and the reference plane.
In some embodiments, the determining module 2433 is further configured to traverse the depth image in a sliding manner through windows of a preset size, and determine a jump amplitude of the depth value between different pixel points included in each window; determining the region where the window corresponding to the jump amplitude larger than the amplitude threshold value is located as a noise point in the depth image; the deformation detection device 243 of the box-type structure further includes a deleting module 2434 for deleting noise from the depth image.
In some embodiments, the identifying module 2432 is further configured to identify, from the depth image, regions corresponding to the plurality of parts respectively included in the side face based on the depth information of the depth image and a gradient difference between different pixel points included in the depth image.
In some embodiments, the determining module 2433 is further configured to determine a pixel point in the depth image, where the depth value is smaller than the depth threshold and the gradient difference between the pixel point and the adjacent pixel point is larger than the gradient difference threshold; and determining the area of the depth image, which is composed of the pixel points of which the depth values are smaller than the depth threshold and the gradient difference between the pixel points and the adjacent pixel points is larger than the gradient difference threshold, and the corresponding spatial features are corners, as the area corresponding to the corner fittings included in the side surface.
In some embodiments, the determining module 2433 is further configured to determine a first boundary of the reinforcing plate included in the side surface, which coincides with the corner fitting, based on the area corresponding to the corner fitting included in the side surface; traversing and determining the gradients of a plurality of subsequent pixel points by taking at least one pixel point included in the first boundary as a starting point; determining a second boundary of the reinforcing plate based on the pixel points corresponding to the positions with the gradient difference larger than the gradient difference threshold value; and determining the area consisting of the first boundary and the second boundary in the depth image as the area corresponding to the reinforcing plate included in the side surface.
In some embodiments, the determining module 2433 is further configured to determine left and right boundaries of the side sill included in the side face based on the corresponding region of the reinforcement panel included in the side face; traversing and determining gradients of a plurality of subsequent pixel points by taking at least one pixel point included in the boundary of the depth image as a starting point; determining the upper and lower boundaries of the side beam based on the pixel points corresponding to the positions of which the gradient difference is greater than the gradient difference threshold; a region in the depth image, which is composed of the left and right boundaries of the side sill and the upper and lower boundaries of the side sill, is determined as a region corresponding to the side sill included in the side face.
In some embodiments, the determining module 2433 is further configured to determine upper and lower boundaries of the beam included in the side surface based on the area corresponding to the corner fitting included in the side surface; determining an interval consisting of a minimum abscissa and a maximum abscissa corresponding to the corner fitting, and traversing the gradient of each pixel point in the determined interval; determining left and right boundaries of the beam based on pixel points corresponding to positions with gradient differences larger than a gradient difference threshold; and determining a region consisting of the upper and lower boundaries of the beam and the left and right boundaries of the beam in the depth image as a region corresponding to the beam included in the side surface.
In some embodiments, the determining module 2433 is further configured to determine the boundary of the extension plate that coincides with the cross beam, based on the region corresponding to the cross beam included in the side face; to determine the upper and lower boundaries of the extension plate based on the region corresponding to the reinforcing plate; to traverse and determine the gradients of subsequent pixel points, taking at least one pixel point on the boundary coinciding with the cross beam as a starting point; to determine the remaining boundary of the extension plate from the pixel points at which the gradient difference is greater than the gradient difference threshold; to determine the region of the depth image enclosed by the boundary coinciding with the cross beam, the upper and lower boundaries, and the remaining boundary as the region corresponding to the extension plate included in the side face; and to determine the region of the side face other than the corner fittings, reinforcing plates, cross beams, side beams and extension plates as the region corresponding to the side panel included in the side face.
In some embodiments, the deformation detection apparatus 243 further includes a selection module 2435 configured to select the target component of maximum size from the plurality of components; the determining module 2433 is further configured to determine the depth mean of the pixel points included in the region corresponding to the target component, and to determine the reference plane corresponding to the target component based on that depth mean.
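Assuming each identified component is represented as a boolean pixel mask, this reference-plane choice reduces to a few lines; the dictionary-of-masks representation is an assumption.

```python
import numpy as np

def reference_plane_depth(depth, region_masks):
    """Use the mean depth of the largest component (most pixels) as the
    reference plane, as in the embodiment above."""
    largest = max(region_masks, key=lambda name: region_masks[name].sum())
    return float(depth[region_masks[largest]].mean())
```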
In some embodiments, the determining module 2433 is further configured to determine the raised (tongue) region and the recessed (groove) region included in the side face; to determine a first depth mean of the pixel points in the raised region and delete the pixel points whose distance from the first depth mean is greater than a depth-mean threshold; to determine a second depth mean of the remaining pixel points in the raised region and determine from it the first reference plane corresponding to the raised region; to determine a third depth mean of the pixel points in the recessed region and delete the pixel points whose distance from the third depth mean is greater than the depth-mean threshold; and to determine a fourth depth mean of the remaining pixel points in the recessed region and determine from it the second reference plane corresponding to the recessed region.
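Both regions use the same two-stage robust mean, so a single hypothetical helper covers the raised and recessed passes:

```python
import numpy as np

def robust_plane_depth(depth, mask, mean_dist_thresh):
    """Two-pass mean: drop pixels far from the first mean, then average
    the survivors; applied once to the raised region, once to the recessed."""
    vals = depth[mask].astype(float)
    first_mean = vals.mean()
    kept = vals[np.abs(vals - first_mean) <= mean_dist_thresh]
    return float(kept.mean()) if kept.size else float(first_mean)
```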
In some embodiments, the determining module 2433 is further configured to perform, for each region, the following processing: determine the pixel points in the region whose distance to the reference plane is greater than a distance threshold, and determine the area composed of those pixel points as the deformation region within the region; the module is further configured to determine the confidence of the deformation region based on the ratio of the number of noise points included in the deformation region to the total number of pixel points included in the deformation region.
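A sketch of this step follows. The embodiment only states that confidence is based on the noise ratio; mapping it as one minus the ratio is an assumption of this sketch.

```python
import numpy as np

def deformation_region(depth, mask, plane_depth, dist_thresh, noise_mask):
    """Pixels of a part farther than dist_thresh from its reference plane
    form the deformation region; confidence discounts noise points."""
    deform = mask & (np.abs(depth - plane_depth) > dist_thresh)
    total = int(deform.sum())
    noisy = int((deform & noise_mask).sum())
    confidence = 1.0 - noisy / total if total else 0.0
    return deform, confidence
```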
In some embodiments, when the plurality of components contains a specific component whose strength is greater than a strength threshold, the determining module 2433 is further configured to determine a fifth depth mean of the pixel points included in the region corresponding to the specific component; to determine a specific reference plane for the specific component based on the fifth depth mean; to determine the distance between the pixel points included in the specific component and the specific reference plane; and to determine the deformation region within the region corresponding to the specific component based on that distance.
In some embodiments, the acquisition module 2431 is further configured to acquire a two-dimensional image of the side face of the box-type structure; the deformation detection apparatus 243 further includes an annotation module 2436 configured to mark, in the two-dimensional image, the regions respectively corresponding to the plurality of components included in the side face and the deformation region within each region, based on the correspondence between the depth image and the two-dimensional image.
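Assuming the depth image and the two-dimensional image are pixel-registered, the marking step might look like the following OpenCV 4 sketch; the red contour styling is an assumption.

```python
import cv2
import numpy as np

def mark_deformation(image, deform_mask):
    """Outline the deformation region on the registered 2D image."""
    out = image.copy()
    contours, _ = cv2.findContours(deform_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(out, contours, -1, (0, 0, 255), 2)  # red (BGR) outlines
    return out
```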
In some embodiments, the obtaining module 2431 is further configured to obtain three-dimensional point cloud data of the box-type structure; the deformation detection apparatus 243 further includes an extraction module 2437 configured to extract both the two-dimensional image and the depth image of the side face from the three-dimensional point cloud data.
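The embodiment does not fix how the depth image is extracted; one plausible reading is an orthographic projection of the side-face points onto an image grid, sketched below with an assumed cell resolution and nearest-point rule.

```python
import numpy as np

def side_face_depth_image(points, cell=0.005):
    """Project side-face points (N x 3, metres) onto the x-y plane; the
    z coordinate of each point becomes the pixel's depth value."""
    xy = ((points[:, :2] - points[:, :2].min(axis=0)) / cell).astype(int)
    h, w = xy[:, 1].max() + 1, xy[:, 0].max() + 1
    depth = np.full((h, w), np.nan)
    for (x, y), z in zip(xy, points[:, 2]):
        # Keep the nearest surface point when several fall in one cell.
        if np.isnan(depth[y, x]) or z < depth[y, x]:
            depth[y, x] = z
    return depth
```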
It should be noted that the description of the apparatus in the embodiments of the present application is similar to that of the method embodiments and has similar beneficial effects, so it is not repeated here. Technical details of the deformation detection apparatus for a box-type structure that are not exhaustively described above can be understood from the description of any of fig. 3-5, fig. 7, or fig. 11.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the deformation detection method for the box structure according to the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium having stored thereon executable instructions, which when executed by a processor, will cause the processor to perform a method provided by embodiments of the present application, for example, a deformation detection method of a box structure as shown in fig. 3-5, fig. 7, or fig. 11.
In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM; or may be any device including one of, or any combination of, the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may, but need not, correspond to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device, on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.
In summary, by acquiring a depth image of the side face of the box-type structure, the regions respectively corresponding to the plurality of components included in the side face can be identified from the depth image based on its depth information. At least one component is then selected from the plurality of components and its reference plane is determined, and the deformation region within each region is determined from the distance between the pixel points included in that region and the reference plane. Deformation of the box-type structure is thereby detected automatically, improving both detection efficiency and detection accuracy.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A deformation detection method for a box-type structure is characterized by comprising the following steps:
acquiring a depth image of the side face of the box-type structure;
identifying regions respectively corresponding to a plurality of parts included in the side face from the depth image based on depth information of the depth image;
determining a reference plane corresponding to at least one of the components and determining a distance between a pixel point included in each of the regions and the reference plane;
determining a deformation region in each of the regions based on a distance between a pixel point included in each of the regions and the reference plane.
2. The method according to claim 1, wherein before identifying, from the depth image, regions corresponding to the plurality of components included in the side surface based on the depth information of the depth image, the method further comprises:
traversing the depth image in a sliding mode through windows with preset sizes, and determining jump amplitude of depth values among different pixel points included in each window;
determining the region where the window corresponding to the jump amplitude larger than the amplitude threshold value is located as a noise point in the depth image;
removing the noise from the depth image.
3. The method according to claim 1, wherein the identifying, from the depth image, respective regions corresponding to a plurality of parts included in the side face based on the depth information of the depth image comprises:
and identifying regions respectively corresponding to a plurality of components included in the side face from the depth image based on the depth information of the depth image and the gradient difference between different pixel points included in the depth image.
4. The method according to claim 3, wherein the identifying, from the depth image, regions corresponding to the plurality of parts included in the side face based on the depth information of the depth image and a gradient difference between different pixel points included in the depth image comprises:
determining pixel points of which the depth values are smaller than a depth threshold value and the gradient difference with adjacent pixel points is larger than a gradient difference threshold value in the depth image;
and determining the region of the depth image that is composed of the pixel points whose depth values are smaller than the depth threshold and whose gradient difference from adjacent pixel points is greater than the gradient difference threshold, and whose corresponding spatial feature is a corner, as the region corresponding to the corner fitting included in the side face.
5. The method according to claim 3, wherein the identifying, from the depth image, regions corresponding to the plurality of parts included in the side face based on the depth information of the depth image and a gradient difference between different pixel points included in the depth image comprises:
determining a first boundary of a reinforcing plate included in the side face, which coincides with a corner fitting, based on an area corresponding to the corner fitting included in the side face;
traversing and determining gradients of a plurality of subsequent pixel points by taking at least one pixel point included in the first boundary as a starting point;
determining a second boundary of the reinforcing plate based on the pixel points corresponding to the positions with the gradient difference larger than the gradient difference threshold value;
and determining a region consisting of the first boundary and the second boundary in the depth image as a region corresponding to a reinforcing plate included in the side surface.
6. The method according to claim 3, wherein the identifying, from the depth image, regions corresponding to the plurality of parts included in the side face based on the depth information of the depth image and a gradient difference between different pixel points included in the depth image comprises:
determining left and right boundaries of a side beam included in the side face based on a region corresponding to a reinforcing plate included in the side face;
traversing and determining gradients of a plurality of subsequent pixel points by taking at least one pixel point included in the boundary of the depth image as a starting point;
determining the upper and lower boundaries of the side beam based on the pixel points corresponding to the positions of which the gradient difference is greater than the gradient difference threshold;
and determining a region composed of the left and right boundaries of the side beam and the upper and lower boundaries of the side beam in the depth image as a region corresponding to the side beam included in the side surface.
7. The method according to claim 3, wherein the identifying, from the depth image, regions corresponding to the plurality of parts included in the side face based on the depth information of the depth image and a gradient difference between different pixel points included in the depth image comprises:
determining the upper and lower boundaries of a cross beam included in the side face based on the area corresponding to the corner fitting included in the side face;
determining an interval consisting of a minimum abscissa and a maximum abscissa corresponding to the corner fitting, and traversing and determining the gradient of each pixel point in the interval;
determining left and right boundaries of the beam based on pixel points corresponding to positions with gradient differences larger than a gradient difference threshold;
and determining a region formed by the upper and lower boundaries of the beam and the left and right boundaries of the beam in the depth image as a region corresponding to the beam included in the side surface.
8. The method according to claim 3, wherein the identifying, from the depth image, regions corresponding to the plurality of parts included in the side face based on the depth information of the depth image and a gradient difference between different pixel points included in the depth image comprises:
determining the boundary of the extension plate included in the side face, which coincides with the cross beam, based on the area corresponding to the cross beam included in the side face;
determining the upper and lower boundaries of the extension plate based on the corresponding region of the reinforcing plate included in the side surface;
traversing and determining the gradients of a plurality of subsequent pixel points by taking at least one pixel point included in the boundary coincident with the beam as a starting point;
determining the remaining boundary of the extension plate based on the pixel points corresponding to the positions where the gradient difference is greater than the gradient difference threshold;
determining an area, which is formed by the boundary coincident with the beam, the upper and lower boundaries of the extension plate and the remaining boundary, in the depth image as an area corresponding to the extension plate included in the side surface;
the method further comprises the following steps:
and determining the area of the side surface except for the corner fittings, the reinforcing plates, the cross beams, the side beams and the extension plates as the area corresponding to the side plate included in the side surface.
9. The method of claim 1, wherein said determining a reference plane corresponding to at least one of said components comprises:
selecting a target component of a maximum size from the plurality of components;
determining a depth mean value of pixel points included in a region corresponding to the target component, and determining a reference plane corresponding to the target component based on the depth mean value.
10. The method of claim 1, wherein said determining a reference plane corresponding to at least one of said components comprises:
determining a raised (tongue) region and a recessed (groove) region included in the side face;
determining a first depth mean of pixel points included in the raised region, and deleting the pixel points in the raised region whose distance from the first depth mean is greater than a depth-mean threshold;
determining a second depth mean of the remaining pixel points in the raised region, and determining a first reference plane corresponding to the raised region based on the second depth mean;
determining a third depth mean of pixel points included in the recessed region, and deleting the pixel points in the recessed region whose distance from the third depth mean is greater than the depth-mean threshold;
and determining a fourth depth mean of the remaining pixel points in the recessed region, and determining a second reference plane corresponding to the recessed region based on the fourth depth mean.
11. The method according to claim 1, wherein the determining a deformation region in each of the regions based on a distance between a pixel point included in each of the regions and the reference plane comprises:
for each of the regions, performing the following processing:
determining pixel points corresponding to the distance greater than the distance threshold in the region;
determining an area formed by the pixel points corresponding to the distance greater than the distance threshold value as a deformation area in the area;
the method further comprises the following steps:
and determining the confidence of the deformation region based on the ratio of the number of the noise points included in the deformation region to the total number of the pixel points included in the deformation region.
12. The method of claim 1, further comprising:
acquiring a two-dimensional image of the side surface of the box-type structure;
on the basis of the correspondence between the depth image and the two-dimensional image, areas corresponding to the plurality of components included in the side face respectively and deformation areas in each of the areas are marked in the two-dimensional image.
13. A deformation detection apparatus for a box-type structure, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a depth image of the side face of the box-type structure;
the identifying module is used for identifying areas corresponding to a plurality of parts included in the side face from the depth image based on the depth information of the depth image;
a determining module for determining a reference plane corresponding to at least one of the components and determining a distance between a pixel point included in each of the regions and the reference plane;
the determining module is further configured to determine a deformation region in each of the regions based on a distance between a pixel point included in each of the regions and the reference plane.
14. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor for implementing the deformation detection method of a box-like structure according to any one of claims 1 to 12 when executing executable instructions stored in the memory.
15. A computer-readable storage medium storing executable instructions for implementing the deformation detection method of a box-like structure according to any one of claims 1 to 12 when executed by a processor.
CN202110713034.5A 2021-06-25 2021-06-25 Deformation detection method and device for box-type structure, electronic equipment and storage medium Pending CN113409282A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110713034.5A CN113409282A (en) 2021-06-25 2021-06-25 Deformation detection method and device for box-type structure, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113409282A 2021-09-17

Family

ID=77679566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110713034.5A Pending CN113409282A (en) 2021-06-25 2021-06-25 Deformation detection method and device for box-type structure, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113409282A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150363970A1 (en) * 2014-06-16 2015-12-17 Replica Labs, Inc. Model and Sizing Information from Smartphone Acquired Image Sequences
CN108876704A (en) * 2017-07-10 2018-11-23 北京旷视科技有限公司 The method, apparatus and computer storage medium of facial image deformation
CN111950543A (en) * 2019-05-14 2020-11-17 北京京东尚科信息技术有限公司 Target detection method and device
CN111932537A (en) * 2020-10-09 2020-11-13 腾讯科技(深圳)有限公司 Object deformation detection method and device, computer equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115049322A (en) * 2022-08-16 2022-09-13 安维尔信息科技(天津)有限公司 Container management method and system for container yard
CN116228747A (en) * 2023-05-04 2023-06-06 青岛穗禾信达金属制品有限公司 Metal cabinet processing quality monitoring method

Similar Documents

Publication Publication Date Title
CN113516660B (en) Visual positioning and defect detection method and device suitable for train
Premebida et al. Pedestrian detection combining RGB and dense LIDAR data
Walsh et al. Data processing of point clouds for object detection for structural engineering applications
CN113657224B (en) Method, device and equipment for determining object state in vehicle-road coordination
EP2439487B1 (en) Volume measuring device for mobile objects
CN110163904A (en) Object marking method, control method for movement, device, equipment and storage medium
CN113409282A (en) Deformation detection method and device for box-type structure, electronic equipment and storage medium
Sauerbier et al. The practical application of UAV-based photogrammetry under economic aspects
CN111626665A (en) Intelligent logistics system and method based on binocular vision
CN110136186B (en) Detection target matching method for mobile robot target ranging
CN110910382A (en) Container detection system
CN110992337A (en) Container damage detection method and system
CN113610933A (en) Log stacking dynamic scale detecting system and method based on binocular region parallax
Weinmann et al. Preliminaries of 3D point cloud processing
JP4568845B2 (en) Change area recognition device
CN117392423A (en) Laser radar-based true value data prediction method, device and equipment for target object
Guo et al. Digital transformation for intelligent road condition assessment
Gehrung et al. A fast voxel-based indicator for change detection using low resolution octrees
CN109641351B (en) Object feature identification method, visual identification device and robot
Saini et al. FishTwoMask R-CNN: Two-stage Mask R-CNN approach for detection of fishplates in high-altitude railroad track drone images
Carratù et al. Vision-Based System for Measuring the Diameter of Wood Logs
Zhang Target-based calibration of 3D LiDAR and binocular camera on unmanned vehicles
CN112270753B (en) Three-dimensional drawing construction method and device and electronic equipment
CN117726239B (en) Engineering quality acceptance actual measurement method and system
KR102520676B1 (en) Tree species detection apparatus based on camera, thermal camera, GPS, and LiDAR and Detection method of the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination