CN116263952A - Method, device, system and storage medium for measuring car hopper - Google Patents


Info

Publication number
CN116263952A
CN116263952A (application CN202111532728.5A)
Authority
CN
China
Prior art keywords
image
dimensional
point cloud
bottom plate
dimensional point
Prior art date
Legal status: Pending
Application number
CN202111532728.5A
Other languages
Chinese (zh)
Inventor
王望
蒋难得
张英杰
胡攀攀
Current Assignee
Wuhan Wanji Photoelectric Technology Co Ltd
Original Assignee
Wuhan Wanji Photoelectric Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Wanji Photoelectric Technology Co Ltd filed Critical Wuhan Wanji Photoelectric Technology Co Ltd
Priority to CN202111532728.5A priority Critical patent/CN116263952A/en
Publication of CN116263952A publication Critical patent/CN116263952A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867 Combination of radar systems with cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application provides a hopper measuring method, device, system and storage medium, applicable to the technical field of goods loading and intended to solve the problem of accurately measuring the dimensions of each base plate of a vehicle hopper. The method includes the following steps: acquiring a three-dimensional point cloud of the vehicle under test; projecting the three-dimensional point cloud in different directions to obtain two-dimensional images of the base plates; and acquiring measurement results of the base plates based on their two-dimensional images.

Description

Method, device, system and storage medium for measuring car hopper
Technical Field
The present application relates to the technical field of goods loading, and in particular to a method, device, system and storage medium for measuring a vehicle hopper.
Background
With the acceleration of modernization, automatic loading is becoming an inevitable trend of future development.
The following three-dimensional sensing methods are generally used to measure the hopper of a vehicle:
One is three-dimensional measurement based on a specific parking position: after the vehicle under test is accurately parked in a designated area, a three-dimensional model of the hopper is built from known vehicle information. However, this requires the driver to park the vehicle precisely in a specific position, or the posture of the automatic loading equipment to be adjusted manually, which is cumbersome in use.
Another is three-dimensional measurement based on camera image processing. When three-dimensional perception is performed with camera imaging, the field of view of a single camera is limited, and a long vehicle requires multiple cameras, which makes their registration complex; moreover, camera images exhibit a certain distortion, which reduces the accuracy of the measured dimensions.
Yet another is three-dimensional measurement based on lidar point cloud processing. A three-dimensional point cloud of the vehicle under test is acquired by lidar, planes are segmented from the point cloud by methods such as region growing or random sample consensus, and the size information of each plane is computed with a minimum-bounding-box algorithm. However, this is susceptible to scattered points: for example, a protruding scattered point on a surface causes the minimum-bounding-box algorithm to overestimate the measurement.
It follows that none of the above approaches obtains the basic dimensions of the vehicle both simply and accurately. In addition, these methods ignore the measurement of special components such as upright posts, lacing wires and the oil tank, and adapt poorly to different vehicle types. How to accurately obtain the dimensions of the hopper's base plates and of special components such as upright posts, lacing wires and the oil tank has therefore become a technical problem to be solved.
Disclosure of Invention
The application provides a hopper measuring method, device, system and storage medium, which solve the problem of accurately measuring the dimensions of the hopper's base plates and of special components such as upright posts, lacing wires and the oil tank.
In order to achieve the above purpose, the present application adopts the following technical scheme:
in a first aspect, a hopper measuring method is provided, including: acquiring a three-dimensional point cloud of a vehicle under test; projecting the three-dimensional point cloud in different directions to obtain two-dimensional images of the base plates; and acquiring measurement results of the base plates based on their two-dimensional images.
As an optional implementation manner of this embodiment of the present application, the method further includes: acquiring a measurement result of a special component based on the two-dimensional images of the base plates, the special component including at least one of: an upright post, a lacing wire and an oil tank.
As an optional implementation manner of this embodiment of the present application, the process of projecting the three-dimensional point cloud in different directions to obtain the two-dimensional images of the base plates and acquiring the measurement results of the base plates from them includes:
projecting the three-dimensional point cloud in a top view to obtain a first image; determining the position of the bottom plate in the first image according to the gray values of the first image; and restoring the position of the bottom plate in the first image to the three-dimensional point cloud to obtain the three-dimensional measurement result of the bottom plate.
As an alternative implementation of the embodiment of the present application, the method further includes:
segmenting the point cloud data of the four side plates according to the three-dimensional measurement result of the bottom plate, and projecting the point cloud data of each side plate in a side view to obtain the two-dimensional images of the four side plates; determining the position of each side plate in its two-dimensional image according to the image's gray values; and restoring the position of each side plate in its two-dimensional image to the three-dimensional point cloud to obtain the three-dimensional measurement results of the four side plates.
As an optional implementation manner of this embodiment of the present application, projecting the three-dimensional point cloud in a top view to obtain the first image includes:
taking the difference between the coordinate value of the i-th point on the first coordinate axis and the maximum coordinate value of the first coordinate axis in the three-dimensional point cloud as the first coordinate value of the projected i-th point; taking the difference between the maximum coordinate value of the second coordinate axis in the three-dimensional point cloud and the coordinate value of the i-th point on the second coordinate axis as the second coordinate value of the projected i-th point; taking a gray value computed from the coordinate value of the i-th point on the third coordinate axis together with the maximum and minimum coordinate values of the third coordinate axis in the three-dimensional point cloud as the gray value of the projected i-th point; and, after the first coordinate value, second coordinate value and gray value of each point have been determined, taking the gray value of each pixel in the first image as the maximum among all gray values mapped to that pixel.
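For illustration, the top-view projection described above can be sketched in NumPy. The function name, the pixel resolution `res`, the 0–255 gray scaling, and the choice of the minimum X coordinate as the reference (so pixel indices stay non-negative) are assumptions of this sketch, not details fixed by the text:

```python
import numpy as np

def top_view_project(points, res=0.5):
    """Project an (N, 3) point cloud to a top-view gray image.

    Pixel coordinates come from the X/Y offsets against the cloud's
    extreme values; gray encodes height (Z) scaled to 0..255; where
    several points map to one pixel, the maximum gray is kept, as the
    method above requires."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    u = ((x - x.min()) / res).astype(int)       # first image coordinate
    v = ((y.max() - y) / res).astype(int)       # second image coordinate
    span = max(z.max() - z.min(), 1e-9)         # guard against a flat cloud
    gray = ((z - z.min()) / span * 255).astype(np.uint8)
    img = np.zeros((v.max() + 1, u.max() + 1), np.uint8)
    np.maximum.at(img, (v, u), gray)            # per-pixel maximum gray
    return img
```

A higher point thus appears brighter, which is what lets the flat bottom plate stand out as a dominant gray level in the next step.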
As an optional implementation manner of this embodiment of the present application, determining the position of the bottom plate in the first image according to the gray values of the first image includes:
acquiring a gray histogram of the first image; determining the gray range of the bottom plate according to the gray level with the highest frequency in the histogram; binarizing the first image according to the gray range of the bottom plate and applying a closing operation and an opening operation to obtain a bottom-plate region image; and detecting the edges of the bottom-plate region image to obtain the position of the bottom plate in the first image.
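A minimal sketch of this floor-extraction step, with SciPy's binary morphology standing in for the closing and opening operations; the half-width `band` around the histogram peak and the 3×3 structuring element are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def floor_mask(img, band=10):
    """Estimate the bottom-plate region of a top-view gray image.

    The most frequent non-zero gray level is taken as the floor level,
    the image is binarized within +/- band of it, and a closing then an
    opening clean small holes and specks from the mask."""
    hist = np.bincount(img[img > 0].ravel(), minlength=256)
    peak = int(hist.argmax())                     # dominant gray level
    mask = np.abs(img.astype(int) - peak) <= band
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    return mask, peak
```

Edge detection on the returned mask (e.g. its contour) then gives the bottom plate's position in the first image.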
As an optional implementation manner of this embodiment of the present application, before segmenting the point cloud data of the four side plates according to the three-dimensional measurement result of the bottom plate, the method further includes:
fitting an edge straight line of the bottom plate according to the outline of the bottom-plate region image; and correcting the three-dimensional point cloud according to the slope of the edge line.
Segmenting the point cloud data of the four side plates according to the three-dimensional measurement result of the bottom plate then includes:
segmenting the point cloud data of the four side plates from the corrected point cloud according to the three-dimensional measurement result of the bottom plate.
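The correction by the edge-line slope can be illustrated as a yaw rotation of the cloud about the Z axis; treating the correction as a pure rotation by −arctan(slope) is an assumption of this sketch:

```python
import numpy as np

def deskew_cloud(points, slope):
    """Rotate an (N, 3) cloud about the Z axis so that a fitted
    bottom-plate edge line with the given slope becomes axis-aligned."""
    a = -np.arctan(slope)
    c, s = np.cos(a), np.sin(a)
    rot = np.array([[c,  -s,  0.0],
                    [s,   c,  0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T
```

After this correction, the side plates lie parallel to the coordinate axes, which simplifies the side-view projection that follows.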
As an optional implementation manner of the embodiments of the present application, the special component includes an upright post; acquiring a measurement result of the special component based on the two-dimensional images of the base plates includes:
determining the highest point in the vehicle-height direction from the two-dimensional image of the left or right side plate; moving from the highest point toward the lowest point in the vehicle-height direction, using the target range as the detection interval; if a target position has the largest number of pixels along the vehicle-length direction, fitting the dividing line between the upright posts and the side plate through the pixels contained at that position; performing binarization, a closing operation and an opening operation on the area between the highest point and the dividing line to determine the position of the upright-post image in the two-dimensional image; and restoring the position of the upright-post image in the two-dimensional image to the three-dimensional point cloud to obtain the three-dimensional measurement result of the upright posts.
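A heavily simplified sketch of the post/side-plate split: instead of sliding a detection interval of the target range, it takes the densest row of set pixels below the highest occupied row as the dividing line, and everything above that row is treated as post pixels. The simplification and all names are assumptions:

```python
import numpy as np

def split_posts(side_img):
    """Split a side-view image into posts (above) and side plate (below).

    Set pixels are counted per row along the vehicle-length axis; the
    densest row is taken as the post/side-plate dividing line."""
    counts = (side_img > 0).sum(axis=1)          # set pixels per row
    top = int(np.argmax(counts > 0))             # highest occupied row
    divide = top + int(np.argmax(counts[top:]))  # densest row below it
    return divide, side_img[:divide]
```

The returned upper region would then be cleaned with binarization and open/close operations, as the text describes.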
As an optional implementation manner of the embodiments of the present application, the special component includes a lacing wire; acquiring a measurement result of the special component based on the two-dimensional images of the base plates includes:
binarizing the first image according to a first threshold to obtain a binary image; performing a probabilistic Hough transform on the binary image to obtain a plurality of straight lines; retaining the lines that lie within the range of the bottom plate and run along the vehicle-width direction; connecting lines whose spacing is smaller than a preset distance by a closing operation; detecting the outline of the lacing-wire image to acquire the position of the lacing-wire image in the first image; and restoring the position of the lacing-wire image in the two-dimensional image to the three-dimensional point cloud to obtain the three-dimensional measurement result of the lacing wire.
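For illustration only, the line-extraction step can be approximated by a per-row scan for long horizontal runs of set pixels; a real implementation would more likely use a probabilistic Hough transform (e.g. OpenCV's `HoughLinesP`). The `min_len` threshold and the function name are assumptions:

```python
import numpy as np

def horizontal_runs(binary, min_len=5):
    """Return (row, col_start, col_end) for each horizontal run of set
    pixels at least min_len long -- candidate lacing-wire segments."""
    segs = []
    for r, row in enumerate(binary):
        on = np.flatnonzero(row)                  # set columns in this row
        if on.size == 0:
            continue
        breaks = np.flatnonzero(np.diff(on) > 1)  # gaps split the runs
        for s, e in zip(np.r_[0, breaks + 1], np.r_[breaks, on.size - 1]):
            if on[e] - on[s] + 1 >= min_len:
                segs.append((r, int(on[s]), int(on[e])))
    return segs
```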
As an optional implementation manner of the embodiments of the present application, the special component includes an oil tank; acquiring a measurement result of the special component based on the two-dimensional images of the base plates includes:
determining the bottom-plate corner points in the bottom-plate region image; filling the bottom-plate region image by a closing operation according to the corner points to obtain a filled image; subtracting the bottom-plate region image from the filled image, performing an opening operation, and deleting noise points to obtain the oil-tank image; detecting the outline of the oil-tank image to determine its position in the first image; and restoring the position of the oil-tank image in the two-dimensional image to the three-dimensional point cloud to obtain the three-dimensional measurement result of the oil tank.
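A sketch of the fill-and-subtract idea, with hole filling standing in for the corner-point-guided closing described above: whatever survives the subtraction and the opening is a protrusion into the bottom-plate region, such as the tank. All names and the 3×3 structuring element are illustrative:

```python
import numpy as np
from scipy import ndimage

def tank_blob(floor_region):
    """Isolate a tank-like blob from a boolean bottom-plate region mask.

    The region is hole-filled, the original mask is subtracted, and an
    opening removes isolated noise pixels from the difference."""
    filled = ndimage.binary_fill_holes(floor_region)
    diff = filled & ~floor_region                # what the fill added
    return ndimage.binary_opening(diff, structure=np.ones((3, 3)))
```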
In a second aspect, a control apparatus is provided. The control device may include:
an acquisition module, configured to acquire the three-dimensional point cloud of the vehicle under test;
a processing module, configured to project the three-dimensional point cloud in different directions to obtain the two-dimensional images of the base plates, and to acquire the measurement results of the base plates based on those images.
As an optional implementation manner of this embodiment of the present application, the processing module is further configured to acquire a measurement result of the special component based on the two-dimensional images of the base plates; the special component includes at least one of: an upright post, a lacing wire and an oil tank.
As an optional implementation manner of this embodiment of the present application, the control device may further include a sending module, configured to send each measurement result to the client.
The control device may correspond to executing the method described in the first aspect; for the relevant description of each module in the device, refer to the description of the first aspect, which is not repeated here for brevity.
The method described in the first aspect may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the functions described above, such as a processing module or unit, a display module or unit, and the like.
In a third aspect, the present application provides an information measurement system comprising a lidar, a pan-tilt and a control device as provided in the second aspect.
The lidar is configured to rotate driven by the pan-tilt, to transmit and receive laser signals, and to acquire three-dimensional point cloud data from them. The control device is configured to execute the hopper measuring method provided in any one of the first aspects based on the three-dimensional point cloud data.
In a fourth aspect, the present application provides an information measurement system comprising a processor and a memory, the processor being coupled to the memory, the processor being operable to execute a computer program or instructions stored in the memory to cause the information measurement system to implement a hopper measurement method as provided in any of the first aspects.
In a fifth aspect, the present application provides a storage medium having stored thereon a computer program to be loaded by a processor to perform the hopper measuring method as provided in any of the first aspects.
According to the hopper measuring scheme provided by the present application, after the three-dimensional point cloud collected by the lidar is obtained, the three-dimensional point cloud of the vehicle under test can be separated from it. That point cloud is projected in different directions to obtain a two-dimensional image of each base plate, so the measurement of each base plate can be realized from its two-dimensional image. Because lidar has the advantages of high measuring speed, high precision and strong anti-interference capability, projecting the laser point cloud in different directions separates the two-dimensional images of the bottom plate, left side plate, right side plate, front side plate and rear side plate, effectively avoiding the susceptibility of camera-based imaging to illumination, limited field of view and lens distortion, so that the basic dimensions of the hopper are obtained accurately.
Drawings
Fig. 1 is a schematic diagram of an application scenario of a lidar provided in an embodiment of the present application;
FIG. 2A is a flow chart of a method for measuring a hopper according to an embodiment of the present disclosure;
Fig. 2B is a flow chart of a method for measuring a hopper according to another embodiment of the present disclosure;
FIG. 3 is an exemplary diagram of acquiring the bottom plate by top-view projection imaging provided in an embodiment of the present application;
fig. 4 is a cross-sectional view of the bottom plate in the XY plane of the three-dimensional point cloud according to an embodiment of the present application;
FIG. 5 is an exemplary diagram of side-view projection imaging of each side plate provided in an embodiment of the present application;
FIG. 6 is an exemplary diagram of detecting and measuring an upright post based on a laser side-view image provided in an embodiment of the present application;
FIG. 7 is an exemplary diagram of detecting and measuring a lacing wire based on a laser top-view image provided in an embodiment of the present application;
FIG. 8 is an exemplary diagram of detecting and measuring an oil tank based on a laser top-view image provided in an embodiment of the present application;
fig. 9 is a first schematic structural diagram of a control device according to an embodiment of the present application;
fig. 10 is a second schematic structural diagram of a control device according to an embodiment of the present application.
Detailed Description
In the description of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. Also, in the description of the present application, unless otherwise indicated, "a plurality" means two or more. In addition, to describe the technical solutions of the embodiments clearly, terms such as "first" and "second" are used to distinguish different objects, or different treatments of the same object, and do not describe a specific order of the objects.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The technical scheme of the present application is described in detail below with specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
Fig. 1 is a schematic diagram of an application scenario of a lidar according to an embodiment of the present application.
The lidar is erected above the parking section of the lane; for example, its distance from the ground may be 4 meters. Typically, before the lidar is installed, the parking section for the vehicle in the lane is determined. Along the vehicle-length direction, the lidar's ground projection point must fall inside the hopper within the parking section; for example, the lidar may be installed over the section between 1/4 and 1/2 of the hopper length. Along the vehicle-height direction, no overhanging object may occlude the laser beam, so that the hopper can be scanned completely.
Before loading, a vehicle is parked in the predetermined parking section of the lane along the prescribed travel direction. A lidar is installed above the lane; it may consist of a processor, a detector, a laser head, a rotating reflection structure, and the like.
Specifically, the lidar works as follows: the laser head uses laser as the signal source and emits pulsed laser. The rotating reflection structure reflects the pulses outward while rotating. When the laser reaches the vehicle under test it scatters, and part of the light is reflected back to the detector. The detector sends the detected signal to the processor, which computes the distance from the lidar to the vehicle according to the laser ranging principle; as the pulsed laser continuously scans the vehicle, the data of all obstacle points on it, i.e. three-dimensional point cloud data, is obtained. The vehicle under test may be any vehicle for carrying goods that includes a hopper, such as a truck.
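The ranging principle referred to here is pulsed time of flight: the emitted pulse travels to the target and its echo travels back, so the range is c·t/2. A one-function sketch (the function name is illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(round_trip_seconds):
    """Pulsed time-of-flight range: the pulse covers the distance twice,
    so divide the round-trip path c * t by two."""
    return C * round_trip_seconds / 2.0
```

A round trip of 200 ns thus corresponds to a range of roughly 30 meters.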
It should be noted that, in specific implementations, the processor, the detector, the laser head, and the rotating reflective structure may be integrated into one device. In addition, in other embodiments of the present application, the lidar may include more or fewer devices than shown, or certain devices may be combined, or certain devices may be split, or different device arrangements may be made.
Fig. 2A is a schematic flow chart of a method for measuring a hopper according to an embodiment of the present application. The method includes S101 to S103 described below.
S101, acquiring a three-dimensional point cloud of a vehicle to be tested.
Before loading, the vehicle under test travels forward along the lane to the designated parking area and parks there under the direction of the control console. The lidar arranged above the parking area continuously emits pulsed laser. When the pulses strike the body of the vehicle under test and surrounding obstacles, the laser scatters and part of the light is reflected back to the detector. The detector then sends the detected laser data to the processor, which processes it into distance information and further into three-dimensional point cloud data.
The embodiment of the present application uses a Cartesian coordinate system for the three-dimensional point cloud data, taking the lane direction as the X axis, the direction perpendicular to the lane as the Y axis, and the direction perpendicular to the ground (i.e. the vehicle-height direction) as the Z axis. For example, the coordinates of a point in the three-dimensional point cloud data may be expressed as (x, y, z).
Further, the origin of the three-dimensional coordinate system may be the position of the laser head of the laser radar, the projection point of the laser head on the lane, or other positions on the lane. The present application is not limited in this regard.
The three-dimensional point cloud acquired by the lidar includes not only the vehicle under test but also the environment, so the vehicle's point cloud must be extracted from it. Since the lidar's mounting position is fixed and known, the position of the lane relative to the lidar is also known. The point cloud is limited in the Y direction according to the lane, in the X direction according to the point cloud origin, and in the Z direction according to the vehicle height, so as to separate the vehicle under test from the environment point cloud and finally obtain the vehicle's three-dimensional point cloud.
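The three range limits can be sketched as one boolean mask over the cloud; every numeric bound below is an illustrative assumption, not a value from the text:

```python
import numpy as np

def crop_vehicle(cloud, y_lane=(-1.5, 1.5), x_range=(0.0, 12.0), z_max=4.5):
    """Keep only the points of an (N, 3) cloud inside the lane (Y),
    along the travel direction (X) and below a plausible vehicle
    height (Z), separating the vehicle from the environment cloud."""
    x, y, z = cloud[:, 0], cloud[:, 1], cloud[:, 2]
    keep = ((y_lane[0] <= y) & (y <= y_lane[1])
            & (x_range[0] <= x) & (x <= x_range[1])
            & (z <= z_max))
    return cloud[keep]
```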
S102, projecting the three-dimensional point cloud to different directions to obtain two-dimensional images of all the basic boards.
Each of the base plates of the vehicle includes: a floor, a left side panel (also referred to as a left rail panel), a right side panel (also referred to as a right rail panel), a front side panel (also referred to as a front rail panel), and a rear side panel (also referred to as a rear rail panel).
S103, acquiring measurement results of the base plates based on the two-dimensional images of the base plates.
Since the bottom plate is the only plane parallel to the XY plane among the base plates of the hopper model, the bottom-plate plane can be determined first. Then, according to the constraint relations of the left, right, front and rear side plates with the bottom plate, the left, right, front and rear side-plate planes are determined respectively.
Specifically, the three-dimensional point cloud is projected in a top view to obtain the two-dimensional image of the bottom plate, from which the measurement result of the bottom plate is determined. Then, according to the constraint relations of the four side plates with the bottom plate, the point cloud data of each side plate is projected in a side view to obtain the two-dimensional images of the four side plates, from which the measurement result of each side plate is obtained.
Further, a measurement result of a special component may also be acquired based on the two-dimensional images of the base plates, where the special component includes at least one of: an upright post, a lacing wire and an oil tank. For example, in conjunction with FIG. 2A, as shown in FIG. 2B, the hopper measuring method may further include S104 to S106.
S104, acquiring a measurement result of the upright post based on the two-dimensional image of the side plate.
S105, based on the two-dimensional image of the bottom plate, a measurement result of the lacing wire is obtained.
S106, acquiring a measurement result of the oil tank based on the two-dimensional image of the bottom plate.
For measuring small components such as lacing wires, upright posts and oil tanks, camera imaging acquires color information, but because the whole truck must be photographed, such components occupy very little of the image; for example, a lacing wire imaged by a camera may be only a few pixels wide, and its color is close to that of the bottom plate, so detecting and segmenting these small components from camera images is difficult. Therefore, for special vehicle components such as upright posts, oil tanks and lacing wires, the embodiment of the present application applies opening and closing operations to the two-dimensional images of the base plates to quickly remove the influence of scattered points, extracts the two-dimensional image of the special component, and thereby measures it.
It will be appreciated that, according to the actual use requirements, based on the two-dimensional image of each basic plate, measurement results of any other possible specific components may also be obtained, and the embodiment of the present application is not limited.
According to the vehicle measuring scheme provided by the embodiment of the present application, after the three-dimensional point cloud collected by the lidar is acquired, the point cloud of the vehicle under test can be separated from it. Projecting that point cloud in different directions yields the two-dimensional image of each base plate, from which each base plate is measured. The special components of the vehicle can then also be measured from these images: the measurement result of the upright post is acquired from the two-dimensional image of the side plate, and the measurement results of the lacing wire and the oil tank are acquired from the two-dimensional image of the bottom plate.
The following describes three-dimensional measurement of a hopper based on the laser point cloud. Specifically, bottom plate measurement based on the laser image is described in S1 to S3 below, and side plate measurement based on the laser image in S4 to S6.
S1, projecting the three-dimensional point cloud of the vehicle to be tested in top view to obtain a first image.

The first image is the two-dimensional image obtained by top-view projection of the three-dimensional point cloud of the vehicle to be tested.

When the three-dimensional point cloud of the vehicle to be tested is projected in top view, the extent of the two-dimensional image can be computed from the extent of the three-dimensional point cloud. For example, the X-axis and Y-axis coordinates of a point in the two-dimensional image are computed from the X-axis and Y-axis coordinate values of the corresponding point in the three-dimensional point cloud of the vehicle to be tested, and the gray value of that point in the two-dimensional image is computed from the Z-axis coordinate value of the corresponding point in the three-dimensional point cloud.

It should be noted that the X-axis and Y-axis of the three-dimensional coordinate system share their names with the X-axis and Y-axis of the two-dimensional image coordinate system, but they are not the same axes and have different meanings; they merely correspond to each other under the projection.
An exemplary description will be given below taking an ith point in a three-dimensional point cloud of a vehicle to be measured as an example. The position of the i-th point in the two-dimensional image is determined as follows:
Let p_i,x denote the coordinate value on the first coordinate axis (X-axis) of the i-th point in the three-dimensional point cloud, p_i,y its coordinate value on the second coordinate axis (Y-axis), and p_i,z its coordinate value on the third coordinate axis (Z-axis). Let p_min,x denote the minimum X-axis coordinate value in the three-dimensional point cloud, p_max,y the maximum Y-axis coordinate value, p_max,z the maximum Z-axis coordinate value, and p_min,z the minimum Z-axis coordinate value. Let x_i denote the coordinate position in the two-dimensional image corresponding to p_i,x, y_i the coordinate position corresponding to p_i,y, and gray_i the gray value of the projected point. Then (equation one):

x_i = p_i,x − p_min,x
y_i = p_max,y − p_i,y
gray_i = 255 × (p_i,z − p_min,z) / (p_max,z − p_min,z)
from the above equation one, it can be obtained:
The difference between the X-axis coordinate value p_i,x of the i-th point in the three-dimensional point cloud and the minimum X-axis coordinate value p_min,x of the three-dimensional point cloud is the first coordinate value x_i of the i-th point after projection.

The difference between the maximum Y-axis coordinate value p_max,y of the three-dimensional point cloud and the Y-axis coordinate value p_i,y of the i-th point is the second coordinate value y_i of the i-th point after projection.

The gray value obtained from the Z-axis coordinate value p_i,z of the i-th point together with the maximum Z-axis coordinate value p_max,z and the minimum Z-axis coordinate value p_min,z of the three-dimensional point cloud is the gray value gray_i of the i-th point after projection.
In the above manner, the X-axis coordinate value, the Y-axis coordinate value, and the gray value after projection of each point in the three-dimensional point cloud of the vehicle to be measured can be determined.
Further, when converting the 3D point cloud of the vehicle to be tested into a 2D image, several cloud points may map to the same position in the 2D image. Because goods are generally loaded from above during automatic loading, taking the smallest gray value (the lowest point) at such a position could lead to collisions with a higher point during loading, whereas the largest gray value meets the requirement of automatic loading. Therefore, after the projected X-axis coordinate value, Y-axis coordinate value and gray value of each point in the three-dimensional point cloud are determined, the gray value of each pixel in the first image is taken as the maximum of all gray values projected onto it.
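The top-view projection of S1, including the keep-the-maximum-gray rule just described, can be sketched as follows. This is a minimal sketch, not the embodiment's implementation: the `scale` factor (pixels per unit length) and the 255 gray normalization are illustrative assumptions.

```python
import numpy as np

def project_top_view(points, scale=100.0):
    """Top-view projection of an (N, 3) point cloud per equation one:
    x_i = p_i,x - p_min,x, y_i = p_max,y - p_i,y, gray from p_i,z.
    When several points land on the same pixel, the maximum gray
    (the highest point) is kept, as required for automatic loading."""
    p_min, p_max = points.min(axis=0), points.max(axis=0)
    xs = ((points[:, 0] - p_min[0]) * scale).astype(int)
    ys = ((p_max[1] - points[:, 1]) * scale).astype(int)
    z_range = max(p_max[2] - p_min[2], 1e-9)
    grays = ((points[:, 2] - p_min[2]) / z_range * 255).astype(np.uint8)
    img = np.zeros((ys.max() + 1, xs.max() + 1), dtype=np.uint8)
    np.maximum.at(img, (ys, xs), grays)  # keep the max gray per pixel
    return img
```

`np.maximum.at` performs an unbuffered in-place maximum, so repeated hits on one pixel all take effect, unlike plain fancy-index assignment.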
Since each point of the point cloud occupies very little area after being projected into the image, every projected point needs to be enlarged, from the original 1×1 pixel to N×N pixels, where N is an integer greater than or equal to 2.

In addition, because the point cloud of a truck covers a large range, the image obtained after projection is large; to facilitate subsequent processing, the image needs to be scaled. Nearest-neighbor interpolation is used for scaling, the length and width of the image are scaled by a factor fx, and the top-view image img_v shown in (a) of fig. 3 is finally obtained.
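The point enlargement and the nearest-neighbor scaling can be sketched together. Assumptions not fixed by the embodiment: the N×N enlargement is realized here as a max filter, and the default values of `n` and `fx` are illustrative.

```python
import numpy as np

def dilate_and_scale(img, n=3, fx=0.5):
    """Enlarge each projected point from 1x1 to n x n pixels with a
    max filter, then shrink the image by factor fx using
    nearest-neighbour interpolation."""
    h, w = img.shape
    pad = n // 2
    padded = np.pad(img, pad)
    out = np.zeros_like(img)
    for dy in range(n):          # max over the n x n neighbourhood
        for dx in range(n):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    ys = (np.arange(int(h * fx)) / fx).astype(int)  # nearest source rows
    xs = (np.arange(int(w * fx)) / fx).astype(int)  # nearest source cols
    return out[np.ix_(ys, xs)]
```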
S2, determining the position of the bottom plate in the first image according to the gray value of the first image.
Specifically, determining the position of the bottom plate in the first image according to the gray value of the first image includes the following steps:
step 1, acquiring a gray level histogram of a first image.
And step 2, determining the gray range of the bottom plate according to the pixel point with the highest frequency in the gray histogram.
Illustratively, (a) in fig. 3 is a top view image img_v acquired by a lidar. Fig. 3 (b) is a gray level histogram obtained by counting img_v in a top view image, wherein the horizontal axis represents gray level values and the vertical axis represents the number of pixels.
In the top-view image img_v, the gray value represents the height information of the vehicle: different heights on the vehicle give different gray values. Since the bottom plate is usually the lowest part of the vehicle and is relatively flat, it appears in the gray histogram as a large number of pixels with a relatively low gray value, so the gray range corresponding to the most frequent pixels in the histogram is the gray range of the bottom plate. As shown in fig. 3 (b), the histogram has two peaks, and the gray value of the left peak is smaller than that of the right peak; that is, the left peak corresponds to the largest number of pixels at a smaller gray value, so the gray range of the bottom plate can be determined from the horizontal-axis coordinate of the left peak.

Since the bottom plate of the vehicle is not a perfectly flat plane and has some unevenness, a gray range — rather than a single gray value — is determined from the most frequent pixels in the gray histogram.
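Steps 1 and 2 can be sketched as a histogram-peak search. The ± `margin` width of the range and the exclusion of the empty background bin are illustrative assumptions, not values given by the embodiment.

```python
import numpy as np

def floor_gray_range(img, margin=10):
    """Return a gray range around the most frequent gray level,
    taken as the bottom plate; using a range rather than a single
    value absorbs the plate's unevenness."""
    hist = np.bincount(img.ravel(), minlength=256)
    hist[0] = 0                     # ignore empty background pixels
    peak = int(hist.argmax())
    return max(peak - margin, 1), min(peak + margin, 255)
```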
And 3, performing binarization, closing operation and opening operation on the first image according to the gray scale range of the bottom plate to obtain a bottom plate area image.
The first image contains not only the hopper for loading cargo but also other parts such as the vehicle head, so the first image can be binarized to remove those other parts and highlight the hopper.

The closing operation fills gaps caused by obstacles during projection.

The opening operation removes protrusions caused by scattered points.
And 4, detecting the edges of the bottom plate area image to obtain the position of the bottom plate in the first image.
The above will be illustrated taking fig. 3 as an example. After the gray range of the bottom plate is determined, binarization, a closing operation and an opening operation may be performed in sequence according to that range, yielding the bottom plate region image img_floor shown in (c) of fig. 3. Then, edge detection is performed on img_floor and its minimum bounding rectangle is obtained, giving the bottom plate position, for example the dotted rectangular frame shown in (d) of fig. 3. Finally, the position is mapped back to the nearest points of the three-dimensional point cloud by inverting equation one, giving the three-dimensional measurement result of the bottom plate.
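Steps 3 and 4 can be sketched in simplified form: binarization by the floor gray range followed by a bounding box of the region. This is a stand-in sketch only — the opening/closing operations are omitted for brevity, and an axis-aligned box replaces the contour plus minimum-bounding-rectangle detection (the real rectangle may be rotated).

```python
import numpy as np

def floor_position(img, lo, hi):
    """Binarize the first image with the bottom plate's gray range,
    then return the bounding box (x0, y0, x1, y1) of the region: a
    simplified stand-in for edge / min-bounding-rect detection."""
    mask = (img >= lo) & (img <= hi)
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```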
In the conventional method of detecting the bottom plate from a camera image, only color information is available, so detection becomes difficult when the color of the bottom plate is not clearly distinguishable from the vehicle head or the surroundings. In contrast, the embodiment of the application uses images obtained by projecting the 3D point cloud collected by the laser radar, which are unaffected by environmental factors such as illumination and do not distort the length and width of the image.
S3, restoring the position of the bottom plate in the first image to the three-dimensional point cloud of the vehicle to be detected, and obtaining a three-dimensional measurement result of the bottom plate.
Alternatively, the three-dimensional measurement of the base plate may include, but is not limited to, at least one of:
the width of the bottom plate,
The length of the bottom plate,
The positions of the four corner points of the bottom plate in the three-dimensional point cloud.
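The restoration in S3 amounts to inverting equation one. A minimal sketch follows, in which `scale` and the 255 gray normalization are assumed to match the forward projection and are illustrative, not values fixed by the embodiment.

```python
def restore_point(x_i, y_i, gray_i, p_min, p_max, scale=1.0):
    """Map a pixel (x_i, y_i, gray_i) of the top-view image back to
    an approximate 3D position by inverting equation one.
    p_min / p_max are the per-axis extremes of the point cloud."""
    px = x_i / scale + p_min[0]                        # invert x_i
    py = p_max[1] - y_i / scale                        # invert y_i
    pz = p_min[2] + gray_i / 255.0 * (p_max[2] - p_min[2])  # invert gray
    return px, py, pz
```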
And S4, dividing the point cloud data of the four side plates according to the three-dimensional measurement result of the bottom plate, and respectively carrying out side view projection on the point cloud data of each side plate to obtain two-dimensional images of the four side plates.
Because the vehicle may be parked at an angle, the three-dimensional point cloud obtained by laser scanning may also be tilted, and directly projecting each plate surface would then introduce errors; therefore the three-dimensional point cloud can be corrected before the surrounding side plates are projected.
Specifically, firstly, fitting an edge straight line of a bottom plate according to the outline of the bottom plate area image; acquiring the slope of the edge straight line; and correcting the three-dimensional point cloud according to the slope of the edge straight line. Therefore, the corrected point cloud data of the four side plates can be divided according to the three-dimensional measurement result of the bottom plate, and the point cloud data of each side plate are respectively subjected to side view projection to obtain two-dimensional images of the four side plates.
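The tilt correction can be sketched as a rotation about the Z axis by the angle of the fitted edge line. The slope convention (dy/dx of the bottom plate edge in the XY plane) is an assumption of this sketch.

```python
import numpy as np

def deskew_xy(points, slope):
    """Rotate an (N, 3) cloud about the Z axis so that a bottom plate
    edge of the given slope (dy/dx in the XY plane) becomes
    axis-aligned; heights (Z) are unchanged."""
    theta = -np.arctan(slope)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T
```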
The side view projection direction needs to be changed for different side plates.
Fig. 4 is a cross-sectional view of a base plate in an XY plane in a three-dimensional point cloud according to an embodiment of the present application. The outline of the base plate is assumed to be represented by a solid line as shown in fig. 4. In order to reduce the segmentation error, four areas surrounded by a broken line as shown in fig. 4 are divided by taking the value S as a threshold value on both sides of each side of the bottom plate. Wherein the upper and lower regions correspond to left and right side plates of the vehicle, and the left and right regions correspond to front and rear side plates of the vehicle.
After the four point cloud regions corresponding to the four side plates are divided, each region can be projected in side view. The projection follows equation one, with the left and right side plates projected onto the plane spanned by the X-axis and Z-axis, and the front and rear side plates onto the plane spanned by the Y-axis and Z-axis.
For example, the two-dimensional image imgF0 of the left side plate is shown in fig. 5 (a), the two-dimensional image imgF1 of the right side plate is shown in fig. 5 (b), the two-dimensional image imgS0 of the front side plate is shown in fig. 5 (c), and the two-dimensional image imgS1 of the rear side plate is shown in fig. 5 (d).
S5, determining the position of each side plate in the corresponding two-dimensional image according to the gray value of the two-dimensional image of each side plate.
S6, restoring the position of each side plate in the corresponding two-dimensional image to the three-dimensional point cloud of the vehicle to be tested, and obtaining three-dimensional measurement results of the four side plates.
Since the four plate images obtained by side-view projection are gray images, the left side plate image imgF0, the right side plate image imgF1, the front side plate image imgS0 and the rear side plate image imgS1 are first binarized to remove the non-side-plate portions and highlight the side plates. Then a closing operation is performed to fill gaps caused by obstacles during projection, and an opening operation to eliminate protrusions caused by scattered points. Finally, contour detection and minimum-bounding-rectangle detection are performed on each plate image to obtain contour points and rectangle corner points, which are mapped back to the nearest points of the point cloud to obtain the three-dimensional measurement result of each side plate.

It should be noted that, for the implementation of the side plate projection, the determination of each side plate's position from the gray values of its two-dimensional image, and the restoration of that position to the three-dimensional point cloud, reference may be made to the description of the above embodiments, which is not repeated here.
The point cloud extraction and measurement of the tie bars, the upright posts and the oil cylinders will be exemplarily described below.
1. Upright post detection and measurement based on laser side plate image
The upright post is a protruding portion located on the left and right side plates of the vehicle.
The column detection and measurement comprises the following specific steps:
1. the highest point in the vehicle height direction is determined from the two-dimensional image of the left side plate or the right side plate.
2. And moving from the highest point, as the starting position, toward the lowest point in the vehicle height direction, using the target range as the detection interval.
Because the stand column is positioned on the left side plate and the right side plate, any one of the two-dimensional image of the left side plate and the two-dimensional image of the right side plate can be used as a reference image for detecting the stand column.
Further, in the vehicle height direction, the upper surface of the upright post is at the highest point and the lower surface of the side plate at the lowest point, and the upright post and the side plate differ in width and length; therefore, taking the Y-axis of the image as the vehicle height direction and the X-axis as the vehicle length direction, the dividing line between the upright post and the side plate can be searched for gradually from the highest point down to the lowest point.
3. If the number of the pixel points existing in the vehicle length direction corresponding to the target position is the largest, fitting a dividing line of the upright post and the side plate through the pixel points included in the target position.
An exemplary illustration is given for the two-dimensional image of the left side plate shown in fig. 6. The image is L-shaped: the upper part of the L is the upright post and the lower part is the left side plate. First, the point with the maximum value in the Y-axis direction is determined as the highest point. Then, with the highest point as the starting position and [Y0, Y0+d] as the detection interval, the interval is moved from top to bottom along the Y-axis, i.e. the value of Y0 is gradually decreased, where d takes a small value. When the interval reaches the target position between the upright post and the side plate, the number of pixel points in the X-axis direction increases suddenly — that is, the target position has the largest number of pixel points in the vehicle length direction — and, as shown by the dotted line in the figure, this gives the dividing line between the upright post and the side plate. The image is then separated into two parts by this dividing line: the upright post image in the upper half and the side plate image in the lower half.
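The top-down search for the dividing line can be sketched on per-row pixel counts. The `jump` ratio used to decide what counts as a "sudden increase" is an illustrative threshold of this sketch.

```python
def find_dividing_row(counts, jump=3.0):
    """counts[k] = number of lit pixels in the k-th row of the side
    plate image, scanning from the highest point downward. The
    dividing line between upright post and side plate is the first
    row where the count jumps sharply past everything seen so far."""
    best = counts[0]
    for k in range(1, len(counts)):
        if counts[k] > jump * max(best, 1):
            return k               # row index of the dividing line
        best = max(best, counts[k])
    return None                    # no jump: no upright post found
```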
4. And carrying out binarization, closing operation and opening operation on the area between the highest point and the dividing line to determine the position of the column image in the two-dimensional image.
5. And restoring the position of the upright post image in the two-dimensional image to a three-dimensional point cloud to obtain a three-dimensional measurement result of the upright post.
After the upright post and the side plate are separated, with reference to the description in the above embodiment of restoring the position of the bottom plate from the top-view projection image to the three-dimensional point cloud, binarization, a closing operation and an opening operation are performed on the upright post image and its contour is detected, thereby determining the position of the upright post image in the two-dimensional image. That position is then restored to the three-dimensional point cloud to obtain the three-dimensional measurement result of the upright post.
2. Lacing wire detection and measurement based on laser top-view image
The lacing wire refers to an object which spans between the left side plate and the right side plate and is used for preventing the side plate from deforming.
The specific steps of tie bar detection and measurement are as follows:
1. and binarizing the first image according to a first threshold value to obtain a binary image.
The first threshold is a gray value of the bottom plate.
The lacing wires are typically mounted above the bottom plate and are approximately straight. Since the gray value represents height in the top-view image img_v obtained by laser radar imaging, binarizing img_v with the first threshold — keeping pixels whose gray value is greater than or equal to it — yields a lacing wire image as in fig. 7 (a).
2. And applying a probabilistic Hough transform to the binary image to obtain a plurality of straight lines.
3. And reserving part of straight lines which are positioned in the range of the bottom plate and along the width direction of the vehicle.
The partial straight lines are straight lines meeting a preset direction in all straight lines, and the preset direction is the vehicle width direction.
For example, applying a probabilistic Hough transform to the lacing wire image of fig. 7 (a) gives an image containing a plurality of straight lines, as shown in fig. 7 (b).

The probabilistic Hough transform detects all straight lines in the image that satisfy the set minimum number of points and minimum length. Since the lacing wire direction is generally the vertical direction in the figure (i.e., the vehicle width direction), the vertical lines among all detected lines are kept and the horizontal ones deleted. In addition, because the detected lines include not only lines in the lacing wire region but also lines in the front and rear plate regions, lines outside the detected bottom plate range must be deleted, keeping only the line segments within it. This gives the partial set of straight lines shown in fig. 7 (c).
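The direction and range filtering of step 3 can be sketched on the segment list returned by a probabilistic Hough transform (e.g. OpenCV's `HoughLinesP` returns such (x1, y1, x2, y2) segments). The `max_tilt` tolerance for "near-vertical" is an illustrative assumption.

```python
def keep_lacing_wire_lines(segments, x_lo, x_hi, max_tilt=0.2):
    """Filter Hough segments (x1, y1, x2, y2): keep near-vertical
    ones (vehicle width direction) whose endpoints both lie inside
    the detected bottom plate x-range [x_lo, x_hi]."""
    kept = []
    for x1, y1, x2, y2 in segments:
        if abs(x2 - x1) > max_tilt * max(abs(y2 - y1), 1):
            continue                  # not vertical enough: delete
        if not (x_lo <= x1 <= x_hi and x_lo <= x2 <= x_hi):
            continue                  # outside floor range: delete
        kept.append((x1, y1, x2, y2))
    return kept
```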
4. And connecting straight lines with the distance smaller than the preset distance by using a closed operation.
Under the influence of projection, the straight lines which are originally connected may be in an unconnected state, so that the straight lines with the distance smaller than the preset distance can be connected by using a closed operation.
5. And detecting the outline of the lacing wire image, and acquiring the position of the lacing wire image in the first image.
6. And restoring the position of the lacing wire image in the two-dimensional image to a three-dimensional point cloud to obtain a three-dimensional measurement result of the lacing wire.
As shown by the solid line in fig. 7 (d), the position of the lacing wire image in the top-view image img_v can be obtained after contour detection of the lacing wire image. Then, referring to the description of restoring the bottom plate position from the top-view projection image to the three-dimensional point cloud in S102 to S104 of the above embodiment, the position of the lacing wire image in the two-dimensional image can be restored to the three-dimensional point cloud to obtain the three-dimensional measurement result of the lacing wire.
3. Oil tank detection and measurement based on laser bottom plate image
The oil tank is a cylindrical object which is tightly attached to the front plate of the vehicle and is positioned above the hopper.
The specific steps of oil tank detection and measurement are as follows:
1. and determining the corner points of the bottom plate in the bottom plate area image.
2. And performing closed operation filling on the bottom plate region image according to the bottom plate corner points to obtain a filling image.
When there is an oil tank at the hopper position, the bottom plate has a notch at the corresponding position. Using this feature, the four bottom plate corner points can be determined from the bottom plate region image detected in the above embodiment, and the bottom plate region image can be filled using these four corner points to obtain a filled image.
3. And subtracting the bottom plate area image from the filling image, performing open operation, and deleting noise points to obtain the oil tank image.
4. And detecting the outline of the oil tank image, and determining the position of the oil tank image in the first image.
5. And restoring the position of the oil tank image in the two-dimensional image to a three-dimensional point cloud to obtain a three-dimensional measurement result of the oil tank.
For example, the bottom plate region image shown in fig. 8 (a) is subjected to the closed operation filling, and a filled image shown in fig. 8 (b) can be obtained. The image of the floor area is subtracted from the filling image to obtain an image of the fuel tank as shown in fig. 8 (c). And then, carrying out open operation on the obtained oil tank area image, and deleting interference possibly caused by subtraction operation. And finally, performing contour detection on the image after the open operation to obtain the position of the oil tank image in the first image. And restoring the points of the outline into the three-dimensional point cloud to obtain a three-dimensional measurement result of the oil tank.
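The fill-and-subtract idea of steps 2 and 3 can be sketched with a bounding-rectangle fill standing in for the corner-point closing fill; the opening operation that removes subtraction noise is omitted for brevity, and both simplifications are assumptions of this sketch.

```python
import numpy as np

def tank_mask(floor):
    """Fill the bottom plate's bounding rectangle (standing in for
    the closing fill from the four corner points), then subtract the
    plate region: what remains is the notch left by the oil tank."""
    ys, xs = np.nonzero(floor)
    filled = np.zeros_like(floor)
    filled[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = True
    return filled & ~floor
```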
In the embodiment of the application, each component of the hopper is detected and measured in the image, and the influence of scattered points can be quickly eliminated by the common opening and closing operations of image processing. Using the binarization and straight-line detection algorithms of image processing, the special small components — lacing wires, upright posts and oil cylinders — are easy to segment, which effectively solves the detection and measurement of special small components on various trucks and the influence of scattered points during measurement, providing accurate and comprehensive three-dimensional measurement information for subsequent automatic loading.
As shown in fig. 9, an embodiment of the present application provides a control apparatus. The control device 90 includes:
the acquiring module 91 may be configured to acquire a three-dimensional point cloud of the vehicle to be tested. Because the three-dimensional point cloud data acquired by the laser radar not only includes the three-dimensional point cloud data of the vehicle to be detected but also includes the three-dimensional point cloud data of the environment, the acquisition module 91 needs to separate the vehicle to be detected from the environmental point cloud after receiving the three-dimensional point cloud data, and finally obtains the three-dimensional point cloud of the vehicle to be detected.
The processing module 92 may be configured to project the three-dimensional point cloud to different directions to obtain two-dimensional images of each base plate; acquiring measurement results of the base plates based on the two-dimensional images of the base plates; acquiring a measuring result of the stand column based on the two-dimensional image of the side plate; based on the two-dimensional image of the bottom plate, obtaining a measurement result of the lacing wire; and acquiring a measurement result of the oil tank based on the two-dimensional image of the bottom plate.
Optionally, as shown in fig. 10, the control device 90 provided in the embodiment of the present application may further include a sending module 93 connected to the processing module 92. The sending module 93 may be configured to send the respective measurement results obtained by the processing module 92 to the client. For example, the transmitting module may transmit the measurement result of the base board, the measurement result of the pillar, the measurement result of the tie bar, and the measurement result of the oil tank to the client, so that a user using the client performs intelligent loading or the like with reference to these measurement results.
The control device provided in the embodiment of the present application may execute the above method embodiment; its implementation principle and technical effect are similar and will not be described here again.
The embodiment of the application provides an information measurement system, which comprises a laser radar, a cradle head and control equipment. Wherein, the cloud platform is used for driving the laser radar and rotates.
The laser radar is used for rotating under the drive of the holder, transmitting and receiving laser data, and acquiring three-dimensional point cloud data according to the laser data.
And the control equipment is used for executing the car hopper measuring method provided by the embodiment of the method according to the three-dimensional point cloud data. For example, the control device may separate the vehicle to be tested from the ambient point cloud after receiving the three-dimensional point cloud data sent by the lidar, to obtain the three-dimensional point cloud of the vehicle to be tested. Then, projecting the three-dimensional point cloud to different directions to obtain two-dimensional images of all the basic boards; acquiring measurement results of the base plates based on the two-dimensional images of the base plates; and acquiring measurement results of special components such as the upright post, the lacing wire, the oil tank and the like based on the two-dimensional images of the base plates.
It should be noted that, for the structure, the working process, etc. of the lidar, reference may be made to the description of the foregoing embodiments, and the description is omitted here. In addition, the control equipment can be an industrial personal computer, the laser radar and the industrial personal computer are provided with network transmission interfaces, and in actual use, the laser radar transmits the collected three-dimensional point cloud data to algorithm software in the industrial personal computer through the network transmission interfaces, so that the algorithm software is utilized to process the three-dimensional data, and the basic size of the hopper and the size of a special part are obtained through measurement.
In addition, the cradle head and the laser radar are combined together, and left-right swing can be achieved. For example, the laser radar used in the embodiments of the present application may be a single-line laser radar, which can scan in one direction, for example, in the front and rear directions, so that the single-line laser radar and the pan/tilt head swinging from side to side together achieve the front, rear, left and right scanning.
Based on the same inventive concept, the embodiment of the application also provides an information measurement system. The information measurement system provided by the embodiment of the application comprises: a memory and a processor, the memory for storing a computer program; the processor is used for executing the car hopper measuring method of the method embodiment when the computer program is called.
The processor may be a central processing unit (central processing unit, CPU), which may also be other general purpose processors, digital signal processors (digital signal processor, DSP), application specific integrated circuits (application specific integrated circuit, ASIC), off-the-shelf programmable gate arrays (field programmable gate array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory may be an internal storage unit, such as a hard disk or memory, in some embodiments. The memory may also be an external storage device in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, etc. Further, the memory may also include both internal storage units and external storage devices. The memory is used to store an operating system, application programs, boot loader programs, data, and other programs, etc., such as program code for a computer program, etc. The memory may also be used to temporarily store data that has been output or is to be output.
The information measurement system provided in the embodiment of the present application may execute the above method embodiment; its implementation principle and technical effect are similar and will not be described here again.
In addition, the embodiment of the application further provides a readable storage medium, and the readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in the embodiment of the car hopper measuring method can be realized, and the same technical effects can be achieved, so that repetition is avoided, and no redundant description is provided herein. Among them, a computer-readable storage medium such as a read-only Memory (ROM), a random access Memory (random access Memory, RAM), a magnetic disk, an optical disk, or the like.
Those of ordinary skill in the art will appreciate that implementing all or part of the above-described method embodiments may be accomplished by a computer program to instruct related hardware, the program may be stored in a computer readable storage medium, and the program may include the above-described method embodiments when executed. And the aforementioned storage medium may include: ROM or random access memory RAM, magnetic or optical disk, etc.
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present application. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. A method of hopper measurement, the method comprising:
acquiring a three-dimensional point cloud of a vehicle to be tested;
projecting the three-dimensional point cloud in different directions to obtain a two-dimensional image of each base plate;
and acquiring a measurement result of each base plate based on the two-dimensional image of each base plate.
2. The method of claim 1, wherein projecting the three-dimensional point cloud in different directions to obtain the two-dimensional image of each base plate, and acquiring the measurement result of each base plate based on the two-dimensional image of each base plate, comprises:
performing an overhead (top-down) projection of the three-dimensional point cloud to obtain a first image;
determining the position of the bottom plate in the first image according to the gray value of the first image;
and restoring the position of the bottom plate in the first image to the three-dimensional point cloud to obtain a three-dimensional measurement result of the bottom plate.
3. The method according to claim 2, wherein the method further comprises:
dividing point cloud data of the four side plates according to the three-dimensional measurement result of the bottom plate, and performing a side-view projection of the point cloud data of each side plate to obtain two-dimensional images of the four side plates;
determining the position of each side plate in its corresponding two-dimensional image according to the gray values of that two-dimensional image;
and restoring the position of each side plate in its corresponding two-dimensional image to the three-dimensional point cloud to obtain three-dimensional measurement results of the four side plates.
4. The method of claim 2, wherein performing the overhead projection of the three-dimensional point cloud to obtain the first image comprises:
taking the difference between the first-axis coordinate value of the ith point in the three-dimensional point cloud and the maximum first-axis coordinate value in the three-dimensional point cloud as the first coordinate value of the projected ith point;
taking the difference between the maximum second-axis coordinate value in the three-dimensional point cloud and the second-axis coordinate value of the ith point as the second coordinate value of the projected ith point;
taking a gray value computed from the third-axis coordinate value of the ith point together with the maximum and minimum third-axis coordinate values in the three-dimensional point cloud as the gray value of the projected ith point;
and after the first coordinate value, the second coordinate value and the gray value of every point in the three-dimensional point cloud have been determined, taking, for each pixel of the first image, the maximum of all gray values projected onto that pixel as the gray value of that pixel.
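As an illustrative (non-claimed) sketch, the overhead projection of claim 4 can be written in NumPy as follows. The `resolution` scale, the rounding to integer pixel indices, and flipping the first coordinate so indices stay non-negative are assumptions; only the use of the axis extrema and the keep-the-maximum rule come from the claim:

```python
import numpy as np

def top_down_project(points, resolution=1.0):
    """Top-down projection of an (N, 3) point cloud to a grayscale image.
    Pixel coordinates are offsets from the x/y extrema; the gray value
    encodes height (z) normalized to 0-255, keeping the maximum where
    several points land on the same pixel."""
    pts = np.asarray(points, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    # first image coordinate: offset of each x from the x maximum
    u = np.round((x.max() - x) / resolution).astype(int)
    # second image coordinate: offset of the y maximum from each y
    v = np.round((y.max() - y) / resolution).astype(int)
    # gray value: z normalized between the z extrema
    span = z.max() - z.min()
    g = np.zeros_like(z) if span == 0 else (z - z.min()) / span * 255
    img = np.zeros((u.max() + 1, v.max() + 1), dtype=np.uint8)
    # where several points hit one pixel, keep the maximum gray value
    np.maximum.at(img, (u, v), g.astype(np.uint8))
    return img
```

The inverse step in claim 2 ("restoring to the three-dimensional point cloud") is then just the same arithmetic run backwards on the detected pixel positions.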
5. The method of claim 4, wherein determining the position of the bottom plate in the first image based on the gray value of the first image comprises:
acquiring a gray level histogram of the first image;
determining the gray scale range of the bottom plate according to the pixel point with the highest frequency in the gray scale histogram;
performing binarization, a closing operation and an opening operation on the first image according to the gray-scale range of the bottom plate to obtain a bottom-plate region image;
and detecting the edge of the bottom plate area image to obtain the position of the bottom plate in the first image.
6. The method of claim 5, wherein prior to dividing the point cloud data for the four side panels based on the three-dimensional measurements of the bottom panel, the method further comprises:
fitting an edge straight line of the bottom plate according to the outline of the bottom plate area image;
correcting the three-dimensional point cloud according to the slope of the edge straight line;
the dividing the point cloud data of the four side plates according to the three-dimensional measurement result of the bottom plate comprises:
and dividing the corrected point cloud data of the four side plates according to the three-dimensional measurement result of the bottom plate.
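The correction of claim 6 amounts to fitting a straight line to the bottom-plate edge and rotating the cloud so that edge becomes axis-aligned. A minimal NumPy sketch with illustrative names, assuming the edge lies roughly in the x-y plane:

```python
import numpy as np

def deskew_cloud(points, edge_pts):
    """Fit a line to bottom-plate edge samples, then rotate the (N, 3)
    point cloud about the z axis so the edge aligns with the x axis.
    edge_pts is an (M, 2) array of x/y samples on one edge contour."""
    ex, ey = np.asarray(edge_pts, float).T
    slope = np.polyfit(ex, ey, 1)[0]          # least-squares edge slope
    theta = -np.arctan(slope)                 # rotation that zeroes the slope
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return np.asarray(points, float) @ R.T
```

After this correction the four side plates sit parallel to the coordinate axes, which makes the per-plate point-cloud division in claim 3 a simple range split.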
7. The method of claim 2, wherein after obtaining the two-dimensional image of each base plate, the method further comprises:
determining the highest point in the vehicle height direction from the two-dimensional image of the left side plate or the right side plate;
moving from the highest point, taken as the starting position, toward the lowest point in the vehicle height direction, with a target range as the detection interval;
if the target position corresponds to the largest number of pixel points in the vehicle length direction, fitting the dividing line between the upright post and the side plate through the pixel points contained at the target position;
performing binarization, a closing operation and an opening operation on the area between the highest point and the dividing line to determine the position of the upright post image in the two-dimensional image;
and restoring the position of the upright post image in the two-dimensional image to a three-dimensional point cloud to obtain a three-dimensional measurement result of the upright post.
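The window search of claim 7 can be sketched as a scan over row occupancy counts; the band height and all names are assumptions, and the claimed line fitting and morphology steps are omitted here:

```python
import numpy as np

def find_divider_row(side_img, window=5):
    """Starting from the highest occupied row of a side-plate image, slide
    a detection band downward and return the top row of the band whose
    rows contain the most pixels along the vehicle-length direction."""
    occupied = (side_img > 0).sum(axis=1)       # pixel count per row
    rows = np.nonzero(occupied)[0]
    top, bottom = rows.min(), rows.max()
    best_row, best_count = top, -1
    for r in range(top, bottom - window + 2):
        count = occupied[r:r + window].sum()
        if count > best_count:
            best_row, best_count = r, count
    return best_row
```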
8. The method of claim 2, wherein after obtaining the two-dimensional image of each base plate, the method further comprises:
binarizing the first image according to a first threshold value to obtain a binary image;
performing a probabilistic Hough transform on the binary image to obtain a plurality of straight lines;
retaining the straight lines that lie within the range of the bottom plate and run along the vehicle width direction;
connecting straight lines whose spacing is smaller than a preset distance by a closing operation;
detecting the outline of the lacing wire image and acquiring the position of the lacing wire image in the first image;
and restoring the position of the lacing wire image in the two-dimensional image to a three-dimensional point cloud to obtain a three-dimensional measurement result of the lacing wire.
9. The method of claim 5, wherein after obtaining the two-dimensional image of each base plate, the method further comprises:
determining a bottom plate corner point in the bottom plate area image;
performing closed operation filling on the bottom plate region image according to the bottom plate corner points to obtain a filling image;
subtracting the bottom plate area image from the filling image, performing open operation, and deleting noise points to obtain an oil tank image;
detecting the outline of the oil tank image and determining the position of the oil tank image in the first image;
and restoring the position of the oil tank image in the two-dimensional image to a three-dimensional point cloud to obtain a three-dimensional measurement result of the oil tank.
10. A control apparatus, characterized in that the control apparatus comprises:
the acquisition module is used for acquiring the three-dimensional point cloud of the vehicle to be tested;
the processing module is used for projecting the three-dimensional point cloud in different directions to obtain a two-dimensional image of each base plate, and for acquiring a measurement result of each base plate based on the two-dimensional image of each base plate.
11. An information measuring system, characterized in that it comprises a pan-tilt, a lidar and a control device according to claim 10;
the laser radar is configured to rotate, driven by the pan-tilt, to transmit and receive laser data, and to acquire three-dimensional point cloud data from the laser data;
the control device is configured to execute the hopper measuring method according to any one of claims 1 to 9 based on the three-dimensional point cloud data acquired by the lidar.
12. An information measuring system comprising a processor and a memory, the processor being coupled to the memory, the processor being configured to execute a computer program or instructions stored in the memory to cause the information measuring system to implement the hopper measuring method of any of claims 1 to 9.
13. A storage medium having stored thereon a computer program which, when loaded and executed by a processor, performs the hopper measuring method according to any one of claims 1 to 9.
CN202111532728.5A 2021-12-15 2021-12-15 Method, device, system and storage medium for measuring car hopper Pending CN116263952A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111532728.5A CN116263952A (en) 2021-12-15 2021-12-15 Method, device, system and storage medium for measuring car hopper

Publications (1)

Publication Number Publication Date
CN116263952A true CN116263952A (en) 2023-06-16

Family

ID=86723548

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111532728.5A Pending CN116263952A (en) 2021-12-15 2021-12-15 Method, device, system and storage medium for measuring car hopper

Country Status (1)

Country Link
CN (1) CN116263952A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination