CN114373005A - Cargo volume measuring method and device, electronic equipment and readable storage medium - Google Patents

Cargo volume measuring method and device, electronic equipment and readable storage medium

Info

Publication number
CN114373005A
CN114373005A (application CN202111484189.2A)
Authority
CN
China
Prior art keywords
image
cargo
point cloud
carriage
volume
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111484189.2A
Other languages
Chinese (zh)
Inventor
马志伟 (Ma Zhiwei)
黄凯明 (Huang Kaiming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Streamax Technology Co Ltd
Original Assignee
Streamax Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Streamax Technology Co Ltd filed Critical Streamax Technology Co Ltd
Priority to CN202111484189.2A priority Critical patent/CN114373005A/en
Publication of CN114373005A publication Critical patent/CN114373005A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00 Image analysis
                    • G06T 7/50 Depth or shape recovery
                    • G06T 7/60 Analysis of geometric attributes
                        • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
                • G06T 2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10 Image acquisition modality
                        • G06T 2207/10028 Range image; Depth image; 3D point clouds
                    • G06T 2207/20 Special algorithmic details
                        • G06T 2207/20212 Image combination
                            • G06T 2207/20224 Image subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application provides a cargo volume measuring method and device, electronic equipment, and a readable storage medium. The cargo volume measuring method comprises: obtaining a first image of the carriage interior, the first image being an image of the unloaded carriage; processing the first image to obtain a three-dimensional geometric model of the carriage bottom surface; obtaining a second image of the carriage interior, the second image being an image of the loaded carriage; performing differential processing on the first image and the second image to obtain a foreground image; converting each pixel in the foreground image into a cargo three-dimensional point cloud based on the calibration parameters of the depth camera, and sparsifying the cargo three-dimensional point cloud; and projecting the sparsified cargo three-dimensional point cloud onto the three-dimensional geometric model of the carriage bottom surface to obtain a projection point cloud, then obtaining the cargo volume from the volumes of the grid columns between each point in the sparsified point cloud and its corresponding projection point in the projection point cloud. The method measures the volume of cargo in a carriage quickly and accurately, and avoids miscalculation and cheating.

Description

Cargo volume measuring method and device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of vision, and particularly relates to a cargo volume measuring method and device, electronic equipment and a readable storage medium.
Background
At present, demand in the logistics industry for measuring cargo volume is growing rapidly, and measuring the volume of cargo in a carriage quickly and accurately is important for improving vehicle carrying efficiency and promoting the development of the logistics industry. Existing approaches measure the volume of cargo in a carriage manually.
However, manual measurement is slow and imprecise, and may lead to miscalculation and fraud.
Disclosure of Invention
The embodiments of the present application provide a cargo volume measuring method and device, electronic equipment, and a readable storage medium, which can solve the problems of low efficiency and poor precision of manual measurement, as well as the errors and cheating that arise from manual calculation.
In a first aspect, an embodiment of the present application provides a cargo volume measurement method, including:
acquiring a first image in a carriage through a depth camera, wherein the first image is an unloaded image of the carriage;
processing the first image to obtain a three-dimensional geometric model of a preselected bottom surface of the carriage;
acquiring a second image in the carriage through the depth camera, wherein the second image is an image of the carriage load;
carrying out differential processing on the first image and the second image to obtain a foreground image;
converting each pixel in the foreground image into a cargo three-dimensional point cloud based on the calibration parameters of the depth camera, and performing sparsification processing on the cargo three-dimensional point cloud to obtain a sparsified cargo three-dimensional point cloud;
and projecting the thinned cargo three-dimensional point cloud to a three-dimensional geometric model of the bottom surface of the carriage to obtain a projection point cloud, and obtaining the cargo volume based on the volume of a grid column between each point in the thinned cargo three-dimensional point cloud and the corresponding projection point in the projection point cloud.
Further, after obtaining the volume of the cargo based on the volume of the grid columns between each point in the thinned cargo three-dimensional point cloud and the corresponding projection point in the projection point cloud, the method further includes:
acquiring a third image in the carriage through the depth camera, and calculating the volume of goods in the third image, wherein the third image is an image of the carriage in a goods loading completion state;
acquiring a fourth image in the carriage through the depth camera according to a preset time interval, and calculating the volume of goods in the fourth image;
and if the difference between the volume of the goods in the third image and the volume of the goods in the fourth image is larger than a preset volume difference value, triggering an alarm operation.
Further, processing the first image to obtain a three-dimensional geometric model of a preselected bottom surface of the carriage includes:
converting each pixel in the first image into a no-load three-dimensional point cloud based on the calibration parameters of the depth camera;
and fitting and extracting the no-load three-dimensional point cloud to obtain a three-dimensional geometric model of the bottom surface of the carriage.
Further, the sparsifying the cargo three-dimensional point cloud includes:
and uniformly downsampling the dense point cloud in the cargo three-dimensional point cloud according to a preset space range.
Further, obtaining the volume of the cargo based on the volume of the grid columns between each point in the thinned cargo three-dimensional point cloud and the corresponding projection point in the projection point cloud includes:
calculating the volume of a grid column between each point in the thinned cargo three-dimensional point cloud and the corresponding projection point in the projection point cloud;
and adding the volumes of all the grid columns to obtain the volume of the goods.
In a second aspect, an embodiment of the present application provides a cargo volume measuring device, including:
the acquisition unit is used for acquiring a first image in the carriage through a depth camera, the first image being an unloaded image of the carriage;
the acquisition unit is further used for acquiring a second image in the carriage through the depth camera, the second image being an image of the carriage load;
the first processing unit is used for processing the first image to obtain a three-dimensional geometric model of a preselected bottom surface of the carriage;
the second processing unit is used for carrying out differential processing on the first image and the second image to obtain a foreground image;
the second processing unit is further used for converting each pixel in the foreground image into a cargo three-dimensional point cloud based on the calibration parameters of the depth camera, and for sparsifying the cargo three-dimensional point cloud to obtain a sparsified cargo three-dimensional point cloud;
and the calculation unit is used for projecting the thinned cargo three-dimensional point cloud to the three-dimensional geometric model of the bottom surface of the carriage to obtain a projection point cloud, and obtaining the cargo volume based on the volume of a grid column between each point in the thinned cargo three-dimensional point cloud and the corresponding projection point in the projection point cloud.
Furthermore, the device also comprises a first monitoring unit, a second monitoring unit and an alarm unit;
the first monitoring unit is used for acquiring a third image in the carriage through the depth camera and calculating the volume of goods in the third image, wherein the third image is an image of the carriage in a goods loading completion state;
the second monitoring unit is used for acquiring a fourth image in the carriage through the depth camera according to a preset time interval and calculating the volume of goods in the fourth image;
and the alarm unit is used for triggering an alarm instruction if the difference between the volume of the goods in the third image and the volume of the goods in the fourth image is greater than a preset volume difference value.
Further, the second processing unit is specifically configured to perform uniform downsampling processing on the dense point cloud in the cargo three-dimensional point cloud according to a preset spatial range.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor, when executing the computer program, implements the method according to any one of the above first aspects.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of the above first aspects.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the method of any one of the above first aspects.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Compared with the prior art, the embodiment of the application has the advantages that:
according to the method and the device, a first image in a carriage is obtained through a depth camera, and the first image is an unloaded image of the carriage; processing the first image to obtain a preselected three-dimensional geometric model of the bottom surface of the carriage; acquiring a second image in the carriage through the depth camera, wherein the second image is an image of the carriage load; carrying out differential processing on the first image and the second image to obtain a foreground image; converting each pixel in the foreground image into a cargo three-dimensional point cloud based on the calibration parameters of the depth camera, and performing sparsification treatment on the cargo three-dimensional point cloud to obtain a sparsified cargo three-dimensional point cloud; the method comprises the steps of projecting the thinned cargo three-dimensional point cloud to a three-dimensional geometric model of the bottom surface of the carriage to obtain a projection point cloud, obtaining the volume of the cargo based on the volume of a grid column between each point in the thinned cargo three-dimensional point cloud and the corresponding projection point in the projection point cloud, rapidly and accurately measuring the volume of the cargo in the carriage, and avoiding the problems of miscalculation and cheating.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from them without creative effort.
Fig. 1 is a schematic flow chart of a cargo volume measuring method according to an embodiment of the present application;
FIG. 2 is a schematic view of an installation of a depth camera provided by an embodiment of the present application;
FIG. 3 is a schematic view of a vehicle cabin interior structure provided in an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a grid pillar according to an embodiment of the present application;
FIG. 5 is a schematic flow chart diagram of a cargo volume measurement method provided by another embodiment of the present application;
fig. 6 is a schematic structural diagram of a cargo volume measuring device provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Fig. 1 is a schematic flow chart of a cargo volume measurement method according to an embodiment of the present application. By way of example and not limitation, as shown in fig. 1, the method comprises:
s101: a first image in the carriage is obtained through the depth camera, and the first image is an unloaded image of the carriage.
To capture the cargo in the carriage as fully as possible, the mounting position and angle of the depth camera must be set appropriately. Fig. 2 is a schematic view of the installation of a depth camera according to an embodiment of the present application. As shown in fig. 2, the depth camera 10 is mounted on the central axis of the carriage roof 11.
By way of example, the depth camera may be a binocular depth camera, a light field camera, or the like.
The first image is a depth image containing depth information of the interior of the vehicle compartment.
S102: and processing the first image to obtain a preselected three-dimensional geometric model of the bottom surface of the compartment.
The bottom surface of the carriage can be selected according to actual use conditions. Fig. 3 is a schematic structural diagram of the interior of the vehicle cabin according to an embodiment of the present application. As shown in fig. 3, the selected surface 20 is the floor of the carriage.
Specifically, each pixel in the first image is first converted into a no-load three-dimensional point cloud based on the calibration parameters of the depth camera. That is, the pre-calibrated parameters of the depth camera are used to convert the two-dimensional image coordinates of each pixel of the first image into three-dimensional coordinates in the camera coordinate system, producing a three-dimensional point cloud.
Illustratively, the three-dimensional camera coordinate system is centered on the optical center of the depth camera.
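The back-projection described above can be sketched as follows. The pinhole intrinsics fx, fy, cx, cy and all numeric values are illustrative assumptions; the patent does not specify a camera model.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an N x 3 point cloud
    expressed in a camera coordinate system centered at the optical center.
    fx, fy, cx, cy are pre-calibrated pinhole intrinsics (illustrative names)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no valid depth reading

# A flat surface 2 m from the camera back-projects every pixel to z = 2.
depth = np.full((4, 4), 2.0)
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3)
```

The same conversion is reused for the foreground image in S105.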
Then, fitting and extraction are performed on the no-load three-dimensional point cloud to obtain the three-dimensional geometric model of the carriage bottom surface. That is, operations such as plane fitting and straight-line extraction are applied to the no-load three-dimensional point cloud to obtain the model.
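The plane-fitting step can be sketched with a least-squares fit via SVD; the patent does not name a specific algorithm, and a RANSAC variant would be more robust when non-floor points are present. All values below are illustrative.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: return (centroid, unit normal) of the plane
    minimizing squared point-to-plane distance. The normal is the direction
    of least variance of the centered points (last right-singular vector)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

# Noisy samples of the plane z = 1 recover a normal close to (0, 0, 1).
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
pts = np.column_stack([xy, np.ones(200) + rng.normal(0, 1e-3, 200)])
c, n = fit_plane(pts)
print(abs(n[2]))  # ~1.0
```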
S103: and acquiring a second image in the carriage through the depth camera, wherein the second image is an image of the load of the carriage.
After goods are loaded, an image of the carriage interior is captured by the depth camera to obtain the second image. The second image is a depth image containing depth information of the cargo and of the carriage interior apart from the cargo.
S104: and carrying out differential processing on the first image and the second image to obtain a foreground image.
The foreground image contains the depth information of the foreground region. As shown in fig. 3, the surface 21 is the foreground region, i.e., the region corresponding to the cargo 22.
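The differential processing of S104 can be sketched as a per-pixel comparison of the no-load and loaded depth images. The threshold value is an assumed tuning parameter, not one given in the patent.

```python
import numpy as np

def foreground_depth(empty, loaded, threshold=0.05):
    """Differential processing: keep the loaded depth only where it differs
    from the no-load depth by more than `threshold` meters; zero depth
    marks background pixels, which are skipped when back-projecting."""
    mask = np.abs(loaded - empty) > threshold
    fg = np.where(mask, loaded, 0.0)
    return fg, mask

empty = np.full((3, 3), 2.0)   # empty carriage: floor 2 m from the camera
loaded = empty.copy()
loaded[1, 1] = 1.5             # a box raises the surface by 0.5 m at one pixel
fg, mask = foreground_depth(empty, loaded)
print(mask.sum())  # 1
```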
S105: and based on the calibration parameters of the depth camera, converting each pixel in the foreground image into a cargo three-dimensional point cloud, and performing sparsification treatment on the cargo three-dimensional point cloud to obtain the sparsified cargo three-dimensional point cloud.
Specifically, firstly, the two-dimensional image coordinates of each pixel of the foreground image are converted into three-dimensional coordinates under a three-dimensional camera coordinate system by using the parameters of the depth camera calibrated in advance, so that the three-dimensional coordinates are converted into three-dimensional point cloud.
And then, uniformly downsampling the dense point cloud in the three-dimensional cargo point cloud according to a preset space range to obtain the thinned three-dimensional cargo point cloud.
The preset space range can be set according to specific requirements.
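Uniform downsampling over a preset spatial range can be sketched as a voxel-grid filter: bucket the points into cubic cells and keep one centroid per cell. The grid origin, voxel size, and the centroid choice are illustrative assumptions.

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Uniform downsampling: group points into cubic voxels of side `voxel`
    (the 'preset space range') and replace each group by its centroid."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n = inverse.max() + 1
    sums = np.zeros((n, 3))
    counts = np.zeros(n)
    np.add.at(sums, inverse, points)   # accumulate coordinates per voxel
    np.add.at(counts, inverse, 1)      # count points per voxel
    return sums / counts[:, None]

# 1000 points clustered inside one 5 cm voxel collapse to a single centroid.
pts = np.random.default_rng(1).uniform(0, 0.04, size=(1000, 3))
print(voxel_downsample(pts).shape)  # (1, 3)
```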
S106: and projecting the thinned cargo three-dimensional point cloud to a three-dimensional geometric model of the bottom surface of the carriage to obtain a projection point cloud, and obtaining the cargo volume based on the volume of grid columns between each point in the thinned cargo three-dimensional point cloud and the corresponding projection point in the projection point cloud.
Fig. 4 is a schematic structural diagram of a grid column according to an embodiment of the present application. As shown in fig. 4, the grid columns 30 are the spatial structure between the points 31 and the projected points 32 in the cargo three-dimensional point cloud.
After the projection point cloud is obtained, the volume of the grid column between each point in the thinned cargo three-dimensional point cloud and the corresponding projection point in the projection point cloud is calculated; then the volumes of all the grid columns are added to obtain the volume of the goods.
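The grid-column summation can be sketched as follows under two simplifying assumptions not made in the patent's general plane model: the floor is the horizontal plane z = floor_z, and every sparsified point represents one grid cell of fixed footprint `cell_area`.

```python
import numpy as np

def cargo_volume(points, floor_z, cell_area):
    """Sum the grid columns between each sparsified cargo point and its
    projection on the floor plane z = floor_z. With a downward-looking
    camera, cargo points are nearer than the floor, so the column height
    is floor_z minus the point's z; `cell_area` is the cell footprint in
    square meters (both are illustrative assumptions)."""
    heights = floor_z - points[:, 2]
    heights = np.clip(heights, 0.0, None)  # ignore points below the floor
    return float(np.sum(heights) * cell_area)

# Four 5 cm x 5 cm columns of height 0.5 m: 4 * 0.0025 * 0.5 = 0.005 m^3.
pts = np.array([[0, 0, 1.5], [0.05, 0, 1.5], [0, 0.05, 1.5], [0.05, 0.05, 1.5]])
print(cargo_volume(pts, floor_z=2.0, cell_area=0.0025))  # 0.005
```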
In this embodiment, a first image in the carriage is obtained through the depth camera, the first image being an unloaded image of the carriage; the first image is processed to obtain a three-dimensional geometric model of the carriage bottom surface; a second image in the carriage is obtained through the depth camera, the second image being an image of the carriage load; differential processing is performed on the two images to obtain a foreground image; each pixel in the foreground image is converted into a cargo three-dimensional point cloud based on the calibration parameters of the depth camera, and the point cloud is sparsified; the sparsified point cloud is projected onto the floor model to obtain a projection point cloud, and the cargo volume is obtained from the volumes of the grid columns between each point and its corresponding projection point. The volume of cargo in the carriage is thus measured quickly and accurately, improving vehicle carrying capacity and avoiding miscalculation and cheating.
Fig. 5 is a schematic flow chart of a cargo volume measuring method according to another embodiment of the present application. As an example and not by way of limitation, as shown in fig. 5, after obtaining the volume of the cargo based on the volume of the grid columns between each point in the thinned cargo three-dimensional point cloud and the corresponding projection point in the projection point cloud, the method further includes:
s201: and acquiring a third image in the carriage through the depth camera, and calculating the volume of the goods in the third image, wherein the third image is the image of the carriage in a goods loading completion state.
After loading is complete, the carriage is in the cargo-loading-completed state, and an image of the carriage interior is captured by the depth camera to obtain the third image. The third image is a depth image containing depth information of the cargo and of the carriage interior apart from the cargo.
The volume of cargo in the third image is calculated by the steps in the above method embodiment.
S202: and acquiring a fourth image in the carriage through the depth camera according to a preset time interval, and calculating the volume of the goods in the fourth image.
When the preset time interval elapses, an image of the carriage interior is captured by the depth camera to obtain the fourth image. The fourth image is a depth image containing either depth information of the cargo and of the carriage interior apart from the cargo, or depth information of the carriage interior alone.
S203: and if the difference between the volume of the goods in the third image and the volume of the goods in the fourth image is larger than a preset volume difference value, triggering alarm operation.
Wherein, the preset volume difference value is set according to the actual use scene.
Triggering the alarm operation may include sending an alarm instruction to the platform to notify the user that the cargo may have been stolen, or sending an alarm instruction to the vehicle's operating platform to notify the driver.
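The in-transit monitoring of S201 to S203 can be sketched as the loop below. `measure_volume`, the 0.5 cubic meter threshold, and the check count are hypothetical stand-ins for the full depth-image pipeline and the preset values.

```python
def monitor_cargo(measure_volume, initial_volume, max_diff=0.5, checks=3):
    """Periodic in-transit check: re-measure the cargo volume and record an
    alarm whenever it falls below the loading-complete volume by more than
    the preset difference. Parameter values are illustrative assumptions."""
    alarms = []
    for _ in range(checks):
        current = measure_volume()  # stands in for one full S101-S106 pass
        if initial_volume - current > max_diff:
            alarms.append(current)  # a real system would notify the platform/driver
    return alarms

# Simulated readings: the measured volume drops from 10.0 to 9.0 on the third check.
readings = iter([10.0, 9.9, 9.0])
print(monitor_cargo(lambda: next(readings), initial_volume=10.0))  # [9.0]
```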
In this embodiment, a third image in the carriage is obtained through the depth camera and the volume of goods in the third image is calculated, the third image being an image of the carriage in the cargo-loading-completed state; a fourth image in the carriage is then obtained through the depth camera at a preset time interval and the volume of goods in the fourth image is calculated; if the difference between the two volumes is greater than the preset volume difference value, an alarm operation is triggered. In this way, the cargo can be supervised during transportation, unloading operations can be standardized, cargo safety can be guaranteed, and the risk of cargo theft in transit can be reduced.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Corresponding to the methods described in the above embodiments, only the portions related to the embodiments of the present application are shown for convenience of explanation.
Fig. 6 is a schematic structural diagram of a cargo volume measurement device provided in an embodiment of the present application. By way of example and not limitation, as shown in fig. 6, the apparatus comprises:
the acquisition unit 40 is used for acquiring a first image in the carriage through a depth camera, the first image being an unloaded image of the carriage;
the acquisition unit 40 is further used for acquiring a second image in the carriage through the depth camera, the second image being an image of the carriage load;
a first processing unit 41, configured to process the first image to obtain a three-dimensional geometric model of a preselected bottom surface of the carriage;
a second processing unit 42, configured to perform difference processing on the first image and the second image to obtain a foreground image;
the system comprises a depth camera, a storage unit, a processing unit and a processing unit, wherein the depth camera is used for calibrating parameters based on the depth camera, converting each pixel in a foreground image into a cargo three-dimensional point cloud, and performing sparsification processing on the cargo three-dimensional point cloud to obtain a sparsified cargo three-dimensional point cloud;
and the calculating unit 43 is configured to project the thinned cargo three-dimensional point cloud to the three-dimensional geometric model of the bottom surface of the carriage to obtain a projection point cloud, and obtain the volume of the cargo based on the volume of the grid columns between each point in the thinned cargo three-dimensional point cloud and the corresponding projection point in the projection point cloud.
In another embodiment, the device further comprises a first monitoring unit, a second monitoring unit and an alarm unit;
the first monitoring unit is used for acquiring a third image in the carriage through the depth camera and calculating the volume of goods in the third image, wherein the third image is an image of the carriage in a goods loading completion state;
the second monitoring unit is used for acquiring a fourth image in the carriage through the depth camera according to a preset time interval and calculating the volume of goods in the fourth image;
and the alarm unit is used for triggering an alarm instruction if the difference between the volume of the goods in the third image and the volume of the goods in the fourth image is greater than a preset volume difference value.
In another embodiment, the second processing unit is specifically configured to perform uniform downsampling processing on the dense point cloud in the cargo three-dimensional point cloud according to a preset spatial range.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 7, the electronic apparatus 5 of this embodiment includes: at least one processor 50 (only one shown in fig. 7), a memory 51, and a computer program 52 stored in the memory 51 and executable on the at least one processor 50, the steps of any of the various method embodiments described above being implemented when the computer program 52 is executed by the processor 50.
The electronic device 5 may be an embedded device, a desktop computer, a cloud server, or other computing devices. The electronic device 5 may include, but is not limited to, a processor 50, a memory 51. Those skilled in the art will appreciate that fig. 7 is merely an example of the electronic device 5, and does not constitute a limitation of the electronic device 5, and may include more or less components than those shown, or combine some of the components, or different components, such as an input-output device, a network access device, etc.
The processor 50 may be a Central Processing Unit (CPU); it may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may in some embodiments be an internal storage unit of the electronic device 5, such as a hard disk or a memory of the electronic device 5. The memory 51 may also be an external storage device of the electronic device 5 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the electronic device 5. The memory 51 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of the computer program. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It should be noted that the information interaction between the above devices/units, their execution processes, and their specific functions and technical effects are based on the same concept as the method embodiments of the present application; for details, reference may be made to the method embodiments, which are not repeated here.
It will be apparent to those skilled in the art that the above division into functional units and modules is illustrated only for convenience and brevity of description. In practice, the functions may be distributed among different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, may each exist physically alone, or two or more may be integrated into one unit; the integrated unit may be implemented in hardware or as a software functional unit. The specific names of the functional units and modules serve only to distinguish them from one another and do not limit the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
The embodiments of the present application further provide a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps in the above method embodiments are implemented.
The embodiments of the present application further provide a computer program product which, when run on a mobile electronic device, causes the mobile electronic device to implement the steps in the above method embodiments.
The integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments may be implemented by a computer program that is stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/electronic device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, or a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc. In certain jurisdictions, in accordance with legislation and patent practice, the computer-readable medium may not include electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the apparatus/network device embodiments described above are merely illustrative: the division into modules or units is only a logical division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, and some of their technical features may be equivalently replaced; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application, and are intended to be included within the protection scope of the present application.

Claims (10)

1. A cargo volume measuring method, comprising:
acquiring a first image in a carriage through a depth camera, wherein the first image is an unloaded image of the carriage;
processing the first image to obtain a preselected three-dimensional geometric model of the bottom surface of the carriage;
acquiring a second image in the carriage through the depth camera, wherein the second image is an image of the loaded carriage;
carrying out differential processing on the first image and the second image to obtain a foreground image;
converting each pixel in the foreground image into a cargo three-dimensional point cloud based on the calibration parameters of the depth camera, and performing sparsification processing on the cargo three-dimensional point cloud to obtain a sparsified cargo three-dimensional point cloud;
and projecting the sparsified cargo three-dimensional point cloud onto the three-dimensional geometric model of the bottom surface of the carriage to obtain a projection point cloud, and obtaining the cargo volume based on the volume of the grid column between each point in the sparsified cargo three-dimensional point cloud and the corresponding projection point in the projection point cloud.
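As a non-authoritative illustration of claim 1's first steps, the sketch below differences a loaded depth image against the empty-carriage image and back-projects the foreground pixels into a cargo point cloud. The intrinsics FX, FY, CX, CY and the 0.05 m difference threshold are hypothetical values standing in for the depth camera's calibration parameters.

```python
import numpy as np

# Hypothetical pinhole intrinsics; a real system takes these from the
# depth camera's calibration parameters.
FX, FY, CX, CY = 500.0, 500.0, 320.0, 240.0

def depth_to_points(depth, mask):
    """Back-project masked depth pixels (in metres) to camera-frame 3-D points."""
    v, u = np.nonzero(mask)                # pixel rows/cols kept by the foreground mask
    z = depth[v, u]
    x = (u - CX) * z / FX                  # pinhole model: X = (u - cx) * Z / fx
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=1)     # (N, 3) cargo point cloud

empty = np.full((480, 640), 2.0)           # empty-carriage depth: floor 2 m away
loaded = empty.copy()
loaded[200:280, 300:400] = 1.5             # cargo surface 0.5 m above the floor
mask = np.abs(loaded - empty) > 0.05       # per-pixel difference -> foreground mask
pts = depth_to_points(loaded, mask)
```

The per-pixel threshold plays the role of the claim's difference processing; any subsequent sparsification and projection would operate on `pts`.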
2. The method of claim 1, wherein after obtaining the cargo volume based on the volume of the grid column between each point in the sparsified cargo three-dimensional point cloud and the corresponding projection point in the projection point cloud, the method further comprises:
acquiring a third image in the carriage through the depth camera, and calculating the volume of the cargo in the third image, wherein the third image is an image of the carriage in a state in which cargo loading is complete;
acquiring a fourth image in the carriage through the depth camera according to a preset time interval, and calculating the volume of goods in the fourth image;
and if the difference between the volume of the goods in the third image and the volume of the goods in the fourth image is larger than a preset volume difference value, triggering an alarm operation.
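The monitoring logic of claim 2 can be sketched as a simple threshold check. The function name and the default threshold are hypothetical; the claim only specifies comparing the two measured volumes against a preset volume difference value.

```python
def check_cargo_alarm(v_loaded, v_current, max_diff=0.1):
    """Return True when the measured cargo volume has dropped by more than max_diff m^3.

    v_loaded  -- volume from the third image (carriage in loading-complete state)
    v_current -- volume from a fourth image taken at the preset time interval
    max_diff  -- the preset volume difference value (hypothetical default)
    """
    return (v_loaded - v_current) > max_diff

# 2.0 m^3 at loading completion, 1.7 m^3 now: 0.3 m^3 missing exceeds 0.1 m^3.
alarm = check_cargo_alarm(2.0, 1.7)
```

A deployment would call this each time the fourth image is re-acquired and trigger the alarm operation when it returns True.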
3. The method of claim 1, wherein the processing the first image to obtain a preselected three-dimensional geometric model of the bottom surface of the carriage comprises:
converting each pixel in the first image into a no-load three-dimensional point cloud based on the calibration parameters of the depth camera;
and fitting and extracting the no-load three-dimensional point cloud to obtain a three-dimensional geometric model of the bottom surface of the carriage.
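One way to realise the fitting step of claim 3 is a least-squares plane fit to the no-load point cloud. This is a sketch under assumptions: the cloud is taken to be already cropped to the floor, whereas a deployed system would likely use a robust estimator (e.g. RANSAC) to reject wall and door points.

```python
import numpy as np

def fit_floor_plane(points):
    """Least-squares fit of the plane z = a*x + b*y + c to the no-load point cloud."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs                              # plane parameters (a, b, c)

# Synthetic floor 2 m from the camera with a slight tilt along x.
xs, ys = np.meshgrid(np.linspace(0, 2, 10), np.linspace(0, 5, 10))
floor = np.stack([xs.ravel(), ys.ravel(), (0.1 * xs + 2.0).ravel()], axis=1)
a, b, c = fit_floor_plane(floor)
```

The recovered plane is the "three-dimensional geometric model of the bottom surface" that the cargo points are later projected onto.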
4. The method of claim 1, wherein the sparsifying of the cargo three-dimensional point cloud comprises:
and uniformly downsampling the dense point cloud in the cargo three-dimensional point cloud according to a preset space range.
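The uniform downsampling of claim 4 can be sketched as voxel-grid thinning: keep one centroid per cubic cell, where the cell side stands in for the "preset space range". The cell size here is an assumed parameter.

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Uniformly thin a point cloud: one centroid per cubic cell of side `voxel` metres."""
    keys = np.floor(points / voxel).astype(np.int64)   # integer cell index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, 3))
    np.add.at(sums, inverse, points)                   # accumulate points per cell
    counts = np.bincount(inverse).reshape(-1, 1)
    return sums / counts                               # centroid of each occupied cell

dense = np.random.default_rng(0).uniform(0.0, 1.0, (1000, 3))
sparse = voxel_downsample(dense, voxel=0.5)            # at most 2**3 = 8 cells occupied
```

Libraries such as Open3D provide an equivalent `voxel_down_sample` operation; the point of the sketch is only that the output density is bounded by the cell size, which keeps the later grid-column volume sum tractable.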
5. The method of claim 1, wherein the obtaining the cargo volume based on the volume of the grid column between each point in the sparsified cargo three-dimensional point cloud and the corresponding projection point in the projection point cloud comprises:
calculating the volume of the grid column between each point in the sparsified cargo three-dimensional point cloud and the corresponding projection point in the projection point cloud;
and adding the volumes of all the grid columns to obtain the cargo volume.
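The grid-column summation of claim 5 reduces, under simplifying assumptions, to height times footprint per point. The sketch below assumes a horizontal floor at depth `floor_z` and a fixed per-point footprint `cell_area` set by the downsampling spacing; both names and the simplification to a flat floor are this example's assumptions, not the patent's.

```python
import numpy as np

def cargo_volume(points, floor_z, cell_area):
    """Sum grid-column volumes: each sparsified point spans a column from itself
    down to its projection on the floor plane (assumed horizontal here)."""
    heights = floor_z - points[:, 2]           # camera above cargo: floor is farthest
    return float(np.sum(np.clip(heights, 0.0, None)) * cell_area)

# Four points 0.5 m above a floor at z = 2 m, each owning a 0.01 m^2 footprint.
pts = np.array([[0.0, 0.0, 1.5], [0.1, 0.0, 1.5],
                [0.0, 0.1, 1.5], [0.1, 0.1, 1.5]])
vol = cargo_volume(pts, floor_z=2.0, cell_area=0.01)
```

With a tilted floor model (claim 3), the height would instead be the point-to-plane distance along the projection direction; the summation step is unchanged.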
6. A cargo volume measuring device, comprising:
an acquisition unit, used for acquiring a first image in a carriage through a depth camera, wherein the first image is an unloaded image of the carriage;
the acquisition unit is further used for acquiring a second image in the carriage through the depth camera, wherein the second image is an image of the loaded carriage;
the first processing unit is used for processing the first image to obtain a preselected three-dimensional geometric model of the bottom surface of the carriage;
the second processing unit is used for carrying out differential processing on the first image and the second image to obtain a foreground image;
the second processing unit is further used for converting each pixel in the foreground image into a cargo three-dimensional point cloud based on the calibration parameters of the depth camera, and performing sparsification processing on the cargo three-dimensional point cloud to obtain a sparsified cargo three-dimensional point cloud;
and a calculation unit, used for projecting the sparsified cargo three-dimensional point cloud onto the three-dimensional geometric model of the bottom surface of the carriage to obtain a projection point cloud, and obtaining the cargo volume based on the volume of the grid column between each point in the sparsified cargo three-dimensional point cloud and the corresponding projection point in the projection point cloud.
7. The apparatus of claim 6, further comprising a first monitoring unit, a second monitoring unit, and an alarm unit;
the first monitoring unit is used for acquiring a third image in the carriage through the depth camera and calculating the volume of the cargo in the third image, wherein the third image is an image of the carriage in a state in which cargo loading is complete;
the second monitoring unit is used for acquiring a fourth image in the carriage through the depth camera according to a preset time interval and calculating the volume of goods in the fourth image;
and the alarm unit is used for triggering an alarm instruction if the difference between the volume of the goods in the third image and the volume of the goods in the fourth image is greater than a preset volume difference value.
8. The apparatus of claim 6, wherein the second processing unit is specifically configured to perform uniform downsampling processing on the dense point clouds in the cargo three-dimensional point cloud according to a preset spatial range.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 5.
CN202111484189.2A 2021-12-07 2021-12-07 Cargo volume measuring method and device, electronic equipment and readable storage medium Pending CN114373005A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111484189.2A CN114373005A (en) 2021-12-07 2021-12-07 Cargo volume measuring method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111484189.2A CN114373005A (en) 2021-12-07 2021-12-07 Cargo volume measuring method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN114373005A true CN114373005A (en) 2022-04-19

Family

ID=81139568

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111484189.2A Pending CN114373005A (en) 2021-12-07 2021-12-07 Cargo volume measuring method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114373005A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115587675A (en) * 2022-11-25 2023-01-10 中国外运股份有限公司 Method, device, equipment and medium for determining loading mode
CN115587675B (en) * 2022-11-25 2023-05-12 中国外运股份有限公司 Method, device, equipment and medium for determining loading mode
CN117670979A (en) * 2024-02-01 2024-03-08 四川港投云港科技有限公司 Bulk cargo volume measurement method based on fixed point position monocular camera
CN117670979B (en) * 2024-02-01 2024-04-30 四川港投云港科技有限公司 Bulk cargo volume measurement method based on fixed point position monocular camera

Similar Documents

Publication Publication Date Title
CN114373005A (en) Cargo volume measuring method and device, electronic equipment and readable storage medium
EP3620823B1 (en) Method and device for detecting precision of internal parameter of laser radar
CN110057292B (en) Method and device for determining carriage loading rate
CN112254635B (en) Volume measurement method, device and system
CN109828250B (en) Radar calibration method, calibration device and terminal equipment
CN112912932B (en) Calibration method and device for vehicle-mounted camera and terminal equipment
CN110962844A (en) Vehicle course angle correction method and system, storage medium and terminal
CN114578329A (en) Multi-sensor joint calibration method, device, storage medium and program product
CN112927306A (en) Calibration method and device of shooting device and terminal equipment
CN113341401A (en) Vehicle-mounted laser radar calibration method and device, vehicle and storage medium
CN113219439B (en) Target main point cloud extraction method, device, equipment and computer storage medium
CN112432596B (en) Space measuring method, space measuring device, electronic equipment and computer storage medium
CN112781893B (en) Spatial synchronization method and device for vehicle-mounted sensor performance test data and storage medium
CN112967347A (en) Pose calibration method and device, robot and computer readable storage medium
CN112162294A (en) Robot structure detection method based on laser sensor
CN116630401A (en) Fish-eye camera ranging method and terminal
CN110673114A (en) Method and device for calibrating depth of three-dimensional camera, computer device and storage medium
CN115861403A (en) Non-contact object volume measurement method and device, electronic equipment and medium
CN115655740A (en) Method and device for detecting vehicle braking performance, electronic equipment and storage medium
CN114359400A (en) External parameter calibration method and device, computer readable storage medium and robot
CN115082565A (en) Camera calibration method, device, server and medium
CN113299095B (en) Vehicle calibration reminding method and device, electronic equipment and storage medium
CN114565681B (en) Camera calibration method, device, equipment, medium and product
CN111462309B (en) Modeling method and device for three-dimensional head, terminal equipment and storage medium
WO2022204953A1 (en) Method and apparatus for determining pitch angle, and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination