WO2020062918A1 - Volume measurement method, system, device and computer-readable storage medium - Google Patents

Volume measurement method, system, device and computer-readable storage medium

Info

Publication number
WO2020062918A1
WO2020062918A1 (PCT/CN2019/090310; CN2019090310W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
measured object
measured
measurement
depth
Prior art date
Application number
PCT/CN2019/090310
Other languages
English (en)
French (fr)
Inventor
刘慧泉
Original Assignee
顺丰科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 顺丰科技有限公司
Priority to CA3114457A1 (en)
Priority to US17/280,790, now US11436748B2 (en)
Priority to EP19866397.3A, published as EP3842736A4 (en)
Priority to KR1020217010703A, published as KR102559661B1 (ko)
Priority to JP2021517211A, published as JP2022501603A (ja)
Publication of WO2020062918A1

Links

Images

Classifications

    • G: PHYSICS
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 5/0021: Measuring arrangements characterised by the use of mechanical techniques for measuring the volumetric dimension of an object
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B 11/022: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by means of tv-camera scanning
    • G01B 11/024: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by means of diode-array scanning
    • G01B 11/22: Measuring arrangements characterised by the use of optical techniques for measuring depth
    • G01B 11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes, on the object
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G06T 7/11: Region-based segmentation
    • G06T 7/13: Edge detection
    • G06T 7/521: Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G06T 7/593: Depth or shape recovery from multiple images, from stereo images
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/20021: Dividing image into blocks, subimages or windows
    • G06T 2207/30108: Industrial image inspection
    • G06T 2207/30112: Baggage; Luggage; Suitcase
    • H04N 13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N 13/254: Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects

Definitions

  • the patent application relates to the field of logistics, and in particular, to a volume measurement method, system, device, and computer-readable storage medium.
  • volume measurement is the basis of cargo circulation and transportation: matching a suitable truck, ship, or aircraft to a given cargo volume is critical.
  • in current logistics systems, however, the work of measuring object volumes is still mostly done with a traditional tape measure.
  • this way of measuring has large errors, and after measurement both the volume estimate and the choice of a suitable means of transport are computed manually. The measurement efficiency of the traditional approach is very low, and this shortcoming becomes an important factor restricting transportation efficiency.
  • the purpose of this patent application is to provide a volume measurement method and system.
  • a volume measurement method including the following steps:
  • collecting, based on the 3D vision system, first information of the measurement area when no measured object is present and first depth image information of the measurement area when the measured object is present, the 3D vision system being located above the measurement area;
  • identifying the outer contour of the measured object by comparing gray values of the first information and the first depth image information collected from different viewing angles, obtaining first depth information of the outer contour of the measured object, and filling the area bounded by the outer contour to obtain the measured-object target and the size information of the measured-object target;
  • dividing the outer contour area of the measured object into blocks according to a preset relationship between the first depth information and the divided blocks, generating block information;
  • obtaining the volume of the measured object according to a preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object.
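  • As a minimal illustration of how these four steps could fit together, the sketch below (Python with NumPy) is a toy pipeline under stated assumptions: all names and thresholds are invented for illustration, the depth maps are assumed to be float arrays in millimetres, and the fixed block size stands in for the patent's depth-dependent block division.

```python
import numpy as np

def measure_volume(background_mm: np.ndarray, depth_mm: np.ndarray,
                   mm_per_px: float, block_px: int = 8) -> float:
    """Toy pipeline: (1) compare the empty-area reference with the current
    depth image, (2) keep pixels that rise above the floor as the object
    region, (3) walk that region in blocks, (4) sum mean-height * block-area."""
    height = background_mm - depth_mm   # height above the empty-area floor, mm
    mask = height > 20.0                # 20 mm noise threshold (assumed)
    volume = 0.0
    rows, cols = depth_mm.shape
    for r in range(0, rows, block_px):
        for c in range(0, cols, block_px):
            blk = mask[r:r + block_px, c:c + block_px]
            if blk.any():
                mean_h = float(height[r:r + block_px, c:c + block_px][blk].mean())
                area = float(blk.sum()) * mm_per_px ** 2  # occupied area, mm^2
                volume += mean_h * area                   # contribution, mm^3
    return volume
```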
  • the 3D vision system includes a first camera, a second camera, and a structured laser transmitter, wherein the structured laser transmitter is used to collect the first information of the measurement area when no measured object is in the current field of view, and to measure, according to its laser grid, whether the depth change of the measurement area exceeds a threshold; if so, it determines that a measured object is present in the measurement area, drives the first camera to collect a first depth image of the measurement area, and drives the second camera to grayscale-process the first depth image to generate the first depth image information.
  • based on the configured spacing between the first camera and the second camera, the outer contour of the measured object is matched against the standard double-layer calibration target of the 3D vision system to obtain the size of the outer contour of the measured object.
  • the preset relationship between the first depth information and the divided blocks satisfies a formula (reproduced only as an image in the filing) in which:
  • Target_org represents the boundary size of one of the divided blocks,
  • Target_new represents the boundary size of the corresponding newly divided block after the depth change,
  • Distance(Obstacles, Robot) is the function relating the depth to the block-division size, and
  • w_1 and w_2 are weight coefficients.
  • the preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object satisfies a formula (reproduced only as an image in the filing) in which:
  • H_org is the base height value of the measured object in the image before block division, and
  • H is the height value of a divided block within the outer contour area of the measured object in the image.
  • a volume measurement system including:
  • a measurement-area information acquisition unit, configured to collect, based on the 3D vision system, first information of the measurement area when no measured object is present and first depth image information of the measurement area when the measured object is present, the 3D vision system being located above the measurement area;
  • a measured-object target acquisition unit, configured to identify the outer contour of the measured object by comparing gray values of the first information and the first depth image information collected from different viewing angles, obtain first depth information of the outer contour of the measured object, and fill the area bounded by the outer contour to obtain the measured-object target and the size information of the measured-object target;
  • a block division unit, configured to divide the outer contour area of the measured object into blocks according to a preset relationship between the first depth information and the divided blocks, generating block information;
  • a measured-object volume acquisition unit, configured to obtain the volume of the measured object according to a preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object.
  • the 3D vision system includes a first camera, a second camera, and a structured laser transmitter.
  • the structured laser transmitter is configured to collect the first information of the measurement area when no measured object is in the current field of view, and to measure, according to its laser grid, whether the depth change of the measurement area exceeds a threshold; if so, it determines that a measured object is present, drives the first camera to collect a first depth image of the measurement area, and drives the second camera to grayscale-process the first depth image to generate the first depth image information.
  • the measured-object target acquisition unit is further configured to match, based on the configured spacing between the first camera and the second camera, the outer contour of the measured object against the standard double-layer calibration target of the 3D vision system to obtain the size of the outer contour of the measured object.
  • the 3D vision system is located directly above the measurement area or diagonally above the measured object in the measurement area.
  • the preset relationship between the first depth information and the divided blocks satisfies a formula (reproduced only as an image in the filing) in which:
  • Target_org represents the boundary size of one of the divided blocks,
  • Target_new represents the boundary size of the corresponding newly divided block after the depth change,
  • Distance(Obstacles, Robot) is the function relating the depth to the block-division size, and
  • w_1 and w_2 are weight coefficients.
  • the preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object satisfies a formula (reproduced only as an image in the filing) in which:
  • H_org is the base height value of the measured object in the image before block division, and
  • H is the height value of a divided block within the outer contour area of the measured object in the image.
  • a device comprising:
  • one or more processors;
  • a memory for storing one or more programs,
  • wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to execute the method according to any one of the preceding items.
  • a computer-readable storage medium storing a computer program, which when executed by a processor implements the method as described in any one of the above.
  • the volume measurement method exemplified in the present patent application collects, based on a 3D vision system located above the measurement area, first information of the measurement area when no measured object is present and first depth image information of the measurement area when the measured object is present; identifies the outer contour of the measured object by comparing gray values of the first information and the first depth image information collected from different viewing angles, obtaining first depth information of the outer contour; fills the area bounded by the outer contour to obtain the measured-object target and its size information; and divides the outer contour area into blocks according to a preset relationship between the first depth information and the divided blocks, generating block information.
  • the volume of the measured object is obtained according to a preset relationship between the block information, the size information of the measured object target, and the volume of the measured object.
  • the method tests on the basis of a 3D vision system and measures the volume of the measured object directly; measurement is fast, measurement accuracy is high, and the measurement range is large.
  • the units of the volume measurement system exemplified in this patent application are simple in composition. Cooperating with one another, they test on the basis of the 3D vision system; in addition, the layout configuration of the vision system greatly reduces the occupied space, and the volume of the measured object can be measured directly, quickly, accurately, and over a large range, effectively avoiding the drawbacks of existing measuring equipment: a large occupied volume, a complex structure, and demanding functional requirements.
  • the device, and the computer-readable storage medium storing a computer program, exemplified in the present patent application can perform volume tests of the measured object through a 3D vision system with a small footprint and low device-performance requirements.
  • measurement is fast, measurement accuracy is high, the measurement range is large, and the approach is worth promoting.
  • FIG. 1 is a flowchart of Embodiment 1;
  • FIG. 2 is a schematic diagram of the top installation of a 3D vision acquisition module according to the first embodiment
  • FIG. 3 is a schematic diagram of an outer contour of a measured object in an image according to the first embodiment
  • FIG. 4 is a schematic diagram of filling of an outer contour area of a measured object in an image according to the first embodiment
  • FIG. 5 is a block division diagram of an outer contour area of a measured object according to the first embodiment
  • FIG. 6 is a schematic diagram of oblique installation of a 3D vision acquisition module according to the first embodiment.
  • This embodiment provides a volume measurement system, including:
  • a measurement-area information acquisition unit, configured to collect, based on the 3D vision system, first information of the measurement area when no measured object is present and first depth image information of the measurement area when the measured object is present, the 3D vision system being located above the measurement area;
  • a measured-object target acquisition unit, configured to identify the outer contour of the measured object by comparing gray values of the first information and the first depth image information collected from different viewing angles, obtain first depth information of the outer contour of the measured object, and fill the area bounded by the outer contour to obtain the measured-object target and the size information of the measured-object target;
  • a block division unit, configured to divide the outer contour area of the measured object into blocks according to a preset relationship between the first depth information and the divided blocks, generating block information;
  • a measured-object volume acquisition unit, configured to obtain the volume of the measured object according to a preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object.
  • the 3D vision system includes a first camera, a second camera, and a structured laser transmitter.
  • the structured laser transmitter is configured to collect the first information of the measurement area when no measured object is in the current field of view, and to measure, according to its laser grid, whether the depth change of the measurement area exceeds a threshold; if so, it determines that a measured object is present, drives the first camera to collect a first depth image of the measurement area, and drives the second camera to grayscale-process the first depth image to generate the first depth image information.
  • the measured-object target acquisition unit is further configured to match, based on the configured spacing between the first camera and the second camera, the outer contour of the measured object against the standard double-layer calibration target of the 3D vision system to obtain the size of the outer contour of the measured object.
  • the first camera is a color camera used to capture the overall view of the measured object,
  • while the second camera is a black-and-white camera used mainly for grayscale processing.
  • the two cameras are installed with a fixed spacing, and the infrared laser light they both observe can assist ranging.
  • the structured laser transmitter specifically carries a coding characteristic and is the main direct sensor for measuring distance.
  • the coding characteristic effectively avoids interference from other beams of a similar type, such as visible light.
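  • Because the two cameras share a known, fixed baseline, an infrared laser spot observed by both admits the classic rectified-stereo range estimate. The sketch below is a minimal illustration of that relation; the function name and the numbers in the comment are illustrative assumptions, not values from the patent.

```python
def depth_from_disparity(focal_px: float, baseline_mm: float,
                         disparity_px: float) -> float:
    """Classic rectified-stereo relation Z = f * B / d: a laser spot seen by
    both cameras at pixel disparity d lies at depth Z along the optical axis."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# Illustrative numbers only: f = 600 px, B = 50 mm, d = 12 px
# -> Z = 600 * 50 / 12 = 2500 mm, i.e. the spot is 2.5 m away.
```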
  • the 3D vision system is located directly above the measurement area or diagonally above the measured object in the measurement area.
  • the preset relationship between the first depth information and the divided blocks satisfies a formula (reproduced only as an image in the filing) in which:
  • Target_org represents the boundary size of one of the divided blocks,
  • Target_new represents the boundary size of the corresponding newly divided block after the depth change,
  • Distance(Obstacles, Robot) is the function relating the depth to the block-division size, and
  • w_1 and w_2 are weight coefficients.
  • the preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object satisfies a formula (reproduced only as an image in the filing) in which:
  • H_org is the base height value of the measured object in the image before block division, and
  • H is the height value of a divided block within the outer contour area of the measured object in the image.
  • This embodiment provides a volume measurement method, as shown in FIG. 1, including the following steps:
  • first information of the measurement area when no measured object is present and first depth image information of the measurement area when the measured object is present are collected based on the 3D vision system.
  • the 3D vision system is located above the measurement area, specifically directly above the measurement area or diagonally above the measured object.
  • the 3D vision system includes a first camera, a second camera, and a structured laser transmitter.
  • the structured laser transmitter is used to collect the first information of the measurement area when no measured object is in the current field of view, and to measure, according to its laser grid, whether the depth change of the measurement area exceeds a threshold; if so, it determines that a measured object is present, drives the first camera to collect a first depth image of the measurement area, and drives the second camera to grayscale-process the first depth image, generating the first depth image information.
  • identify the outer contour of the measured object by comparing gray values of the first information and the first depth image information collected from different viewing angles, obtain first depth information of the outer contour of the measured object, and fill the area bounded by the outer contour to obtain the measured-object target and the size information of the measured-object target.
  • based on the configured spacing between the first camera and the second camera, the outer contour of the measured object is matched against the standard double-layer calibration target of the 3D vision system to obtain the size of the outer contour.
  • the outer contour area of the measured object is then divided into blocks according to the preset relationship, generating block information.
  • the preset relationship between the first depth information and the divided blocks satisfies a formula (reproduced only as an image in the filing) in which:
  • Target_org represents the boundary size of one of the divided blocks,
  • Target_new represents the boundary size of the corresponding newly divided block after the depth change,
  • Distance(Obstacles, Robot) is the function relating the depth to the block-division size, and
  • w_1 and w_2 are weight coefficients.
  • the preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object satisfies a formula (reproduced only as an image in the filing) in which:
  • H_org is the base height value of the measured object in the image before block division, and
  • H is the height value of a divided block within the outer contour area of the measured object in the image.
  • the above 3D vision system is specifically an integrated 3D vision sampling module.
  • the sampling module includes a color camera, a black-and-white camera, and a structured laser transmitter with a coding characteristic.
  • the module uses parts readily available on the market; for example, camera and laser modules widely used in the mobile phone industry can be adopted.
  • this makes the overall module very economical.
  • the layout of the 3D vision acquisition module was designed in multiple dimensions.
  • the most common layout places the module directly above the measured object, as shown in FIG. 2.
  • the advantage of this arrangement is that a single 3D vision module can cover a large field of view; combined with coded structured light, measurement accuracy can be maximized.
  • the top-mounted installation occupies very little space and can be mounted on a single ceiling-suspended column. This installation completely frees the floor space and leaves unprecedented headroom.
  • the core of the volume measurement method of this patent application is the volume measurement method of 3D vision combined with real physical space.
  • the specific steps are:
  • when no measured object appears in the field of view, the depth-ray model of the current field of view is modeled, a low-density laser grid is encoded in a low-power mode, and the measurement area bounded by the field of view is grayscaled.
  • once an object enters the measurement area the depth rays are disturbed; the measurement system is then switched on, performing high-density laser-speckle and grid-laser transformations to increase the scanning frequency of the area.
  • the maximum outer contour of the measured object, that is, the size of its shape edge, is obtained directly by matching the standard double-layer calibration target.
  • this size is determined in order to fix the maximum boundary of the divided blocks; the calibration target calibrates two size accuracies in a single pass.
  • the depth rays are based on laser-ranging ToF sensor technology, which detects whether an object is in the area before the cameras are switched on to measure, preventing the cameras from heating up severely through prolonged operation.
  • the ToF ranging sensor is configured according to the no-object parameters; at that time the cameras sleep and acquire no images. Detecting whether a measured object is present relies on the coded low-density laser grid checking the measurement area for depth changes.
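  • A minimal sketch of this sleep/wake logic, assuming the coded grid reports an array of per-point depths; the threshold and all names are assumptions for illustration.

```python
import numpy as np

def object_entered(reference_mm: np.ndarray, current_mm: np.ndarray,
                   threshold_mm: float = 30.0) -> bool:
    """Compare the sparse laser-grid depths against the empty-area reference;
    any grid point whose depth changed by more than the threshold means an
    object has entered, so the sleeping cameras should be woken to measure."""
    return bool((np.abs(current_mm - reference_mm) > threshold_mm).any())
```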
  • accurately identifying the measured object improves measurement accuracy; the identification accuracy reaches a 1 m length measured with an error of ±10 mm.
  • block division grows non-linearly based on the depth information.
  • the non-linearity varies with the distortion-correction coefficient of the camera lens.
  • the non-linear growth and the correction coefficient are configured in the system at the factory according to the lens, and are ultimately reflected in the function Distance(Obstacles, Robot).
  • Target_org represents the boundary size of a divided block, i.e., of one of the divided blocks.
  • Target_new represents the boundary size of the block newly divided after the depth change, i.e., of the corresponding newly divided block.
  • boundary sizes differ according to the block division: each new block is a new depth, and adjacent blocks have different depths.
  • Distance(Obstacles, Robot) is the function relating the depth to the block-division size, and w_1, w_2 are the weight coefficients.
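  • The filed formula is reproduced only as an image, but the surrounding text names its ingredients: the original block boundary size Target_org, the depth/lens term Distance(Obstacles, Robot), and the weights w_1, w_2. The sketch below is one speculative reading consistent with that description; the weighted blend is an assumption, not the patent's exact rule.

```python
def new_block_size(target_org: float, distance_term: float,
                   w1: float = 0.7, w2: float = 0.3) -> float:
    """Speculative reading of the preset relationship: blend the original
    block boundary size with the depth/lens term Distance(Obstacles, Robot)
    via the weights w1, w2. The true formula appears only as an image in
    the filing; this weighted sum is an assumption."""
    return w1 * target_org + w2 * distance_term
```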
  • an example of the divided blocks is shown in FIG. 5.
  • each block area is filled with its depth information so that the volume distribution is measured accurately. It follows that the closer a region is to the camera, the larger its blocks and the fewer their number; the farther from the camera, the smaller the blocks and the greater their number.
  • within blocks of the same size, the mean of the depth information is computed; the volume information of the object is then the sum of the products of each same-block mean and the area occupied by blocks of that size.
  • H_org is the base height value of the measured object in the image before block division.
  • H is the height value of a divided block within the outer contour area of the measured object in the image.
  • V is the total volume after measurement.
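  • As a worked toy example of this sum (all numbers invented for illustration, heights taken relative to the base H_org): suppose the contour region splits into three near-camera blocks of 4 cm × 4 cm with mean heights 10 cm, 10 cm, and 12 cm, plus one far block of 2 cm × 2 cm with mean height 11 cm. Then V = 16·(10 + 10 + 12) + 4·11 = 512 + 44 = 556 cm³.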
  • the volume measurement method of this patent application proposes a new and simple algorithm: a constrained, precise measurement method. Compared with point-cloud and triangulation methods, it greatly improves computational performance; combined with a compact spatial layout, it realizes fast, accurate volume measurement.
  • this patent application can be used on its own as a standalone measurement-system module. It can also be combined with the weighing and barcode-scanning systems of the logistics industry to output complete cargo information; this approach effectively improves equipment performance and logistics capability, reduces equipment complexity, and offers good ease of installation and use.
  • the volume measurement algorithm does not need to convert the two-dimensional coordinates in the vision system into real three-dimensional space coordinates, and can directly calculate the volume of the object to be measured using limited information.
  • the method is simple and practical.
  • this embodiment also provides a device suitable for implementing the embodiments of the present application.
  • the device includes a computer system with a central processing unit (CPU) that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) or a program loaded from a storage section into a random-access memory (RAM).
  • the RAM also stores various programs and data required for system operation.
  • the CPU, ROM, and RAM are connected to each other through a bus.
  • an input/output (I/O) interface is also connected to the bus.
  • the following components are connected to the I/O interface: an input section including a keyboard, a mouse, etc.; an output section including a cathode-ray tube (CRT) or liquid-crystal display (LCD) and speakers; a storage section including a hard disk; and a communication section including a network interface card such as a LAN card or a modem.
  • the communication section performs communication processing via a network such as the Internet.
  • a drive is also connected to the I/O interface as needed. Removable media such as magnetic disks, optical disks, magneto-optical disks, and semiconductor memories are mounted on the drive as needed, so that computer programs read from them can be installed into the storage section as needed.
  • the process described above with reference to FIG. 1 may be implemented as a computer software program.
  • embodiments of the present disclosure include a computer program product including a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for performing the method of FIG. 1.
  • the computer program may be downloaded and installed from a network through the communication section, and/or installed from a removable medium.
  • each block in the flowchart may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function.
  • the functions marked in the boxes may also occur in a different order than those marked in the drawings. For example, two successively represented boxes may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the flowchart, and combinations of blocks in the flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units or modules described in the embodiments of the present application may be implemented in a software manner, or may be implemented in a hardware manner.
  • the described units or modules may also be provided in a processor.
  • in some cases, the names of these units or modules do not limit the unit or module itself.
  • this embodiment also provides a computer-readable storage medium.
  • the computer-readable storage medium may be the computer-readable storage medium included in the system described in the foregoing embodiments, or it may be a computer-readable storage medium that exists separately and is not assembled into the device.
  • the computer-readable storage medium stores one or more programs, which are used by one or more processors to perform the volume measurement method described in this application.
  • the 3D vision acquisition module can be placed diagonally above the solid diagonal of the measured object. As shown in FIG. 6, the camera's viewing angle can then cover three major faces of the whole measured object; by solving with the information from these three faces, more volume detail can be obtained and measurement accuracy improved. Likewise, this installation is simple and occupies little space, and a photograph of the measured object can be taken while its volume is measured.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

This patent application relates to a volume measurement method, system, device, and computer-readable storage medium. Based on a 3D vision system, first information of the measurement area under the current viewing angle when no measured object is present and first depth image information of the measurement area when the measured object is present are collected. The outer contour of the measured object is identified by comparing the gray values of the first information and the first depth image information collected from different viewing angles, yielding first depth information of the outer contour of the measured object; the area bounded by the outer contour is filled to obtain the measured-object target and its size information. The outer contour area of the measured object is divided into blocks according to a preset relationship between the first depth information and the divided blocks, generating block information, and the volume of the measured object is obtained according to a preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object. The method tests with 3D vision and measures the measured object directly; measurement is fast, accuracy is high, and the measurement range is large.

Description

Volume measurement method, system, device and computer-readable storage medium
This patent application claims the priority benefit of Chinese patent application No. 201811141066.7, entitled "Volume measurement method and system", filed with the China National Intellectual Property Administration on September 28, 2018.
Technical field
This patent application relates to the field of logistics, and in particular to a volume measurement method, system, device, and computer-readable storage medium.
Background
With the rapid growth of logistics transportation, the efficiency of the links in the logistics chain increasingly fails to meet the demands of traditional transportation. An important part of the logistics chain is measuring the volume of goods. Volume measurement is the basis of cargo circulation and transportation: matching a suitable truck, ship, or aircraft to a given cargo volume is critical. In today's logistics systems, however, for large-volume measurements such as palletized stacks, the work of measuring object volumes is still mostly done with a traditional tape measure.
This way of measuring has large errors, and after measurement both the volume estimate and the choice of a suitable means of transport are computed manually. The measurement efficiency of the traditional approach is very low, and this shortcoming will become an important factor restricting transportation efficiency.
To solve this problem, existing automated volume measurement technology mostly uses laser scanners and multi-camera vision fusion. A common solution deploys one laser scanning emitter and four industrial cameras around the measured object; the cameras capture the line laser emitted by the laser to measure dimensions. Although this approach is more accurate and efficient than manual measurement, it occupies a large area: because the industrial cameras must cover the field of view of the measured object, the rig is nearly twice the volume of the measured object, so the whole set of equipment also needs a stable mounting frame. In addition, the equipment is complex, requiring expensive lasers and industrial cameras; such a complex system places high demands on the test hardware and on the stitching and processing capability of the vision system.
To remedy the shortcomings of manual measurement and of existing automated measuring equipment, a method that applies modular 3D vision technology to volume measurement is proposed.
Summary of the invention
To solve the above technical problems, the purpose of this patent application is to provide a volume measurement method and system.
According to one aspect of this patent application, a volume measurement method is provided, comprising the following steps:
collecting, based on a 3D vision system, first information of the measurement area under the current viewing angle when no measured object is present and first depth image information of the measurement area when the measured object is present, the 3D vision system being located above the measurement area;
identifying the outer contour of the measured object by comparing gray values of the first information and the first depth image information collected from different viewing angles, obtaining first depth information of the outer contour of the measured object, and filling the area bounded by the outer contour of the measured object to obtain the measured-object target and the size information of the measured-object target;
dividing the outer contour area of the measured object into blocks according to a preset relationship between the first depth information and the divided blocks, generating block information;
obtaining the volume of the measured object according to a preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object.
Further, the 3D vision system includes a first camera, a second camera, and a structured laser transmitter, wherein the structured laser transmitter is used to collect the first information of the measurement area when no measured object is in the current field of view, and to measure, according to its laser grid, whether the depth change of the measurement area exceeds a threshold; if so, it is determined that a measured object is present in the measurement area, the first camera is driven to collect a first depth image of the measurement area, and the second camera is driven to grayscale-process the first depth image to generate the first depth image information.
Further, based on the configured spacing between the first camera and the second camera, the outer contour of the measured object is matched against the standard double-layer calibration target of the 3D vision system to obtain the size of the outer contour of the measured object.
The preset relationship between the first depth information and the divided blocks satisfies:
Figure PCTCN2019090310-appb-000001
where
Target_org denotes the boundary size of one of the divided blocks, Target_new denotes the boundary size of the corresponding newly divided block after the depth change, Distance(Obstacles, Robot) is the function relating the depth to the block-division size, and w_1, w_2 are weight coefficients.
The preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object satisfies:
Figure PCTCN2019090310-appb-000002
where
H_org is the base height value of the measured object in the image before block division,
H is the height value of a divided block within the outer contour area of the measured object in the image.
According to another aspect of this patent application, a volume measurement system is provided, comprising:
a measurement-area information acquisition unit, configured to collect, based on the 3D vision system, first information of the measurement area under the current viewing angle when no measured object is present and first depth image information of the measurement area when the measured object is present, the 3D vision system being located above the measurement area;
a measured-object target acquisition unit, configured to identify the outer contour of the measured object by comparing gray values of the first information and the first depth image information collected from different viewing angles, obtain first depth information of the outer contour of the measured object, and fill the area bounded by the outer contour of the measured object to obtain the measured-object target and the size information of the measured-object target;
a block division unit, configured to divide the outer contour area of the measured object into blocks according to a preset relationship between the first depth information and the divided blocks, generating block information;
a measured-object volume acquisition unit, configured to obtain the volume of the measured object according to a preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object.
Further, the 3D vision system includes a first camera, a second camera, and a structured laser transmitter,
wherein the structured laser transmitter is configured to collect the first information of the measurement area when no measured object is in the current field of view, and to measure, according to its laser grid, whether the depth change of the measurement area exceeds a threshold; if so, it is determined that a measured object is present in the measurement area, the first camera is driven to collect a first depth image of the measurement area, and the second camera is driven to grayscale-process the first depth image to generate the first depth image information.
Further, the measured-object target acquisition unit is further configured to match, based on the configured spacing between the first camera and the second camera, the outer contour of the measured object against the standard double-layer calibration target of the 3D vision system to obtain the size of the outer contour of the measured object.
The 3D vision system is located directly above the measurement area or diagonally above the measured object within the measurement area.
The preset relationship between the first depth information and the divided blocks satisfies:
Figure PCTCN2019090310-appb-000003
where
Target_org denotes the boundary size of one of the divided blocks, Target_new denotes the boundary size of the corresponding newly divided block after the depth change, Distance(Obstacles, Robot) is the function relating the depth to the block-division size, and w_1, w_2 are weight coefficients.
The preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object satisfies:
Figure PCTCN2019090310-appb-000004
where
H_org is the base height value of the measured object in the image before block division,
H is the height value of a divided block within the outer contour area of the measured object in the image.
According to another aspect of this patent application, a device is provided, the device comprising:
one or more processors;
a memory for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to perform the method of any of the above.
According to another aspect of this patent application, a computer-readable storage medium storing a computer program is provided; when the program is executed by a processor, the method of any of the above is implemented.
Compared with the prior art, this patent application has the following beneficial effects:
1. The volume measurement method exemplified in this patent application collects, based on a 3D vision system located above the measurement area, first information of the measurement area under the current viewing angle when no measured object is present and first depth image information of the measurement area when the measured object is present; identifies the outer contour of the measured object by comparing gray values of the first information and the first depth image information collected from different viewing angles, obtaining first depth information of the outer contour; fills the area bounded by the outer contour to obtain the measured-object target and its size information; divides the outer contour area into blocks according to the preset relationship between the first depth information and the divided blocks, generating block information; and obtains the volume of the measured object according to the preset relationship among the block information, the size information of the measured-object target, and the volume. The method tests with a 3D vision system and measures the volume of the measured object directly; measurement is fast, measurement accuracy is high, and the measurement range is large.
2. The units of the volume measurement system exemplified in this patent application are simple in composition. Cooperating with one another, they test on the basis of the 3D vision system, and with the layout configuration of the vision system the occupied space is greatly reduced; the volume of the measured object can be measured directly, quickly, accurately, and over a large range, effectively avoiding the drawbacks of existing measuring equipment: large occupied space, complex structure, and demanding functional requirements.
3. The device, and the computer-readable storage medium storing a computer program, exemplified in this patent application can test the volume of the measured object through a 3D vision system with a small footprint and low device-performance requirements; measurement is fast, measurement accuracy is high, the measurement range is large, and the approach is worth promoting.
Brief description of the drawings
FIG. 1 is a flowchart of Embodiment 1;
FIG. 2 is a schematic diagram of the top-mounted installation of the 3D vision acquisition module of Embodiment 1;
FIG. 3 is a schematic diagram of the outer contour of a measured object in an image according to Embodiment 1;
FIG. 4 is a schematic diagram of the filling of the outer contour area of a measured object in an image according to Embodiment 1;
FIG. 5 is a schematic diagram of the block division of the outer contour area of a measured object according to Embodiment 1;
FIG. 6 is a schematic diagram of the oblique installation of the 3D vision acquisition module of Embodiment 1.
Detailed description
For a better understanding of the technical solution of this patent application, the application is further described below with reference to specific embodiments and the accompanying drawings.
Embodiment 1:
This embodiment provides a volume measurement system, comprising:
a measurement-area information acquisition unit, configured to collect, based on the 3D vision system, first information of the measurement area under the current viewing angle when no measured object is present and first depth image information of the measurement area when the measured object is present, the 3D vision system being located above the measurement area;
a measured-object target acquisition unit, configured to identify the outer contour of the measured object by comparing gray values of the first information and the first depth image information collected from different viewing angles, obtain first depth information of the outer contour of the measured object, and fill the area bounded by the outer contour of the measured object to obtain the measured-object target and the size information of the measured-object target;
a block division unit, configured to divide the outer contour area of the measured object into blocks according to a preset relationship between the first depth information and the divided blocks, generating block information;
a measured-object volume acquisition unit, configured to obtain the volume of the measured object according to a preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object.
Here the 3D vision system includes a first camera, a second camera, and a structured laser transmitter. The structured laser transmitter is configured to collect the first information of the measurement area when no measured object is in the current field of view, and to measure, according to its laser grid, whether the depth change of the measurement area exceeds a threshold; if so, it is determined that a measured object is present in the measurement area, the first camera is driven to collect a first depth image of the measurement area, and the second camera is driven to grayscale-process the first depth image to generate the first depth image information.
The measured-object target acquisition unit is further configured to match, based on the configured spacing between the first camera and the second camera, the outer contour of the measured object against the standard double-layer calibration target of the 3D vision system to obtain the size of the outer contour of the measured object.
Specifically, the first camera is a color camera used to capture the overall view of the measured object, and the second camera is a black-and-white camera used mainly for grayscale processing; the two cameras are installed with a fixed spacing, and the infrared laser light they observe can assist ranging. The structured laser transmitter specifically carries a coding characteristic and is the main direct sensor for measuring distance; the coding characteristic effectively avoids interference from other beams of a similar type, such as visible light.
The 3D vision system is located directly above the measurement area or diagonally above the measured object within the measurement area.
The preset relationship between the first depth information and the divided blocks satisfies:
Figure PCTCN2019090310-appb-000005
where
Target_org denotes the boundary size of one of the divided blocks, Target_new denotes the boundary size of the corresponding newly divided block after the depth change, Distance(Obstacles, Robot) is the function relating the depth to the block-division size, and w_1, w_2 are weight coefficients.
The preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object satisfies:
Figure PCTCN2019090310-appb-000006
where
H_org is the base height value of the measured object in the image before block division,
H is the height value of a divided block within the outer contour area of the measured object in the image.
This embodiment provides a volume measurement method, as shown in FIG. 1, comprising the following steps:
S1. Collect, based on the 3D vision system, first information of the measurement area under the current viewing angle when no measured object is present and first depth image information of the measurement area when the measured object is present; the 3D vision system is located above the measurement area, specifically directly above the measurement area or diagonally above the measured object.
The 3D vision system includes a first camera, a second camera, and a structured laser transmitter, wherein the structured laser transmitter is used to collect the first information of the measurement area when no measured object is in the current field of view, and to measure, according to its laser grid, whether the depth change of the measurement area exceeds a threshold; if so, it is determined that a measured object is present in the measurement area, the first camera is driven to collect a first depth image of the measurement area, and the second camera is driven to grayscale-process the first depth image to generate the first depth image information.
S2. Identify the outer contour of the measured object by comparing gray values of the first information and the first depth image information collected from different viewing angles, obtain first depth information of the outer contour of the measured object, and fill the area bounded by the outer contour of the measured object to obtain the measured-object target and the size information of the measured-object target.
Based on the configured spacing between the first camera and the second camera, match the outer contour of the measured object against the standard double-layer calibration target of the 3D vision system to obtain the size of the outer contour of the measured object.
S3. Divide the outer contour area of the measured object into blocks according to the preset relationship between the first depth information and the divided blocks, generating block information.
The preset relationship between the first depth information and the divided blocks satisfies:
Figure PCTCN2019090310-appb-000007
where
Target_org denotes the boundary size of one of the divided blocks, Target_new denotes the boundary size of the corresponding newly divided block after the depth change, Distance(Obstacles, Robot) is the function relating the depth to the block-division size, and w_1, w_2 are weight coefficients.
S4. Obtain the volume of the measured object according to the preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object.
The preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object satisfies:
Figure PCTCN2019090310-appb-000008
where
H_org is the base height value of the measured object in the image before block division,
H is the height value of a divided block within the outer contour area of the measured object in the image.
The above 3D vision system is specifically an integrated 3D vision sampling module comprising a color camera, a black-and-white camera, and a structured laser transmitter with a coding characteristic. The module uses parts readily available on the market; for example, camera and laser modules widely used in the mobile phone industry can be adopted, which makes the overall module very economical.
The layout of the 3D vision acquisition module was designed along several dimensions. The most common layout places it directly above the measured object, as shown in FIG. 2. The advantage of this arrangement is that a single 3D vision module can cover a large field of view; combined with coded structured light, measurement accuracy can be maximized. The top-mounted installation occupies very little space and can be mounted on a single ceiling-suspended column; this installation completely frees the floor space and leaves unprecedented headroom.
The core of the volume measurement method of this patent application is a volume measurement method combining 3D vision with real physical space. The specific steps are:
S1. When no measured object appears in the field of view, model the depth-ray model of the current field-of-view region, encode a low-density laser grid in a low-power mode, and grayscale the measurement area bounded by the field of view. Once an object enters the measurement area the depth rays are disturbed; the measurement system is then switched on, performing high-density laser-speckle and grid-laser transformations to increase the scanning frequency of the area.
S2. Boundary recognition.
Scan the boundary of the measured object via gray values, comparing the changes before and after the object enters, to find the maximum outer contour of the measured object; then fill the area enclosed by the edges. From this information the largest possible object volume can be derived, as shown in FIGS. 3-4.
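A minimal OpenCV sketch of this boundary step, assuming 8-bit grayscale frames captured before and after the object enters the scene; the difference threshold and all function names are assumptions for illustration, not part of the filing.

```python
import cv2
import numpy as np

def object_contour_mask(empty_gray: np.ndarray, current_gray: np.ndarray,
                        diff_threshold: int = 25) -> np.ndarray:
    """Find the largest outer contour of the object from gray-value changes
    against the empty-scene reference, then fill its bounded area (FIGS. 3-4)."""
    diff = cv2.absdiff(current_gray, empty_gray)        # change caused by entry
    _, binary = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(current_gray)
    if contours:
        largest = max(contours, key=cv2.contourArea)    # maximum outer contour
        cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)
    return mask   # 255 inside the filled contour, 0 elsewhere
```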
Here the maximum outer contour of the measured object, i.e., the size of its shape edge, is obtained directly by matching the standard double-layer calibration target. This size is determined in order to fix the maximum boundary of the divided blocks; the calibration target calibrates two size accuracies in a single pass. Based on a fixed depth, the scaling coefficient of the size accuracy is computed proportionally. Since a greater depth means a larger measured size and correspondingly amplified error, this scaling is used to reduce the error and must be calibrated before the module leaves the factory.
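A small sketch of the proportional scaling described above, assuming the double-layer target was calibrated at a known reference depth; the linear pinhole scaling and all names are assumptions rather than the patent's calibrated procedure.

```python
def mm_per_pixel(depth_mm: float, ref_depth_mm: float,
                 ref_mm_per_px: float) -> float:
    """Under a pinhole model the metric span of one pixel grows linearly with
    depth, so the factory-calibrated scale at the reference depth is rescaled
    in proportion to the measured depth."""
    return ref_mm_per_px * (depth_mm / ref_depth_mm)

# e.g. 0.8 mm/px calibrated at 1000 mm gives 1.6 mm/px at 2000 mm depth.
```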
The depth rays are based on laser-ranging ToF sensor technology, whose role is to detect whether an object is in the area before the cameras are switched on to measure, preventing the cameras from heating up severely through prolonged operation. The ToF ranging sensor is configured according to the no-object parameters; at that time the cameras sleep and acquire no images. Detection of whether a measured object is present relies on the coded low-density laser grid checking the measurement area for depth changes.
S3. Divide the outer contour of the measured object into blocks.
Accurately identifying the measured object improves the measurement accuracy; the identification accuracy reaches a 1 m length measured with an error of ±10 mm. Using the outer contour of the measured object obtained in step (1), perform the block division. Block division grows non-linearly according to the depth information; the non-linearity varies with the distortion-correction coefficient of the camera lens. The non-linear growth and the correction coefficient are configured in the system at the factory according to the lens, and are ultimately reflected in the function Distance(Obstacles, Robot),
Figure PCTCN2019090310-appb-000009
where
Target_org denotes the boundary size of a divided block, i.e., of one of the divided blocks, and Target_new denotes the boundary size of the block newly divided after the depth change, i.e., of the corresponding newly divided block. Boundary sizes differ according to the block division: each new block is a new depth, and adjacent blocks have different depths. Distance(Obstacles, Robot) is the function relating the depth to the block-division size, and w_1, w_2 are weight coefficients.
S4. An example of the blocks thus divided is shown in FIG. 5. Fill each block area with its depth information so that the volume distribution is measured accurately. It follows that the part closer to the camera is divided into larger blocks, fewer in number, while the part farther from the camera is divided into smaller blocks, greater in number. Within divided blocks of the same size, compute the mean of the depth information; the volume information of the object is then the sum of the products of each same-block mean and the area occupied by blocks of that size.
Figure PCTCN2019090310-appb-000010
where
H_org is the base height value of the measured object in the image before block division,
H is the height value of a divided block within the outer contour area of the measured object in the image, and
V is the total measured volume.
The volume measurement method of this patent application proposes a new and simple algorithm: a constrained, precise measurement method. Compared with point-cloud and triangulation methods, it greatly improves computational performance; combined with a compact spatial layout, it realizes fast, accurate volume measurement. This patent application can be used on its own as a standalone measurement-system module, or it can be combined with the weighing and barcode-scanning systems of the logistics industry to output complete cargo information. This approach effectively improves equipment performance and logistics capability, reduces equipment complexity, and offers good ease of installation and use.
The volume measurement algorithm does not need to convert the two-dimensional coordinates in the vision system into real three-dimensional spatial coordinates; using limited information, it can compute the volume of the measured object directly. The method is simple and practical.
As another aspect, this embodiment also provides a device suitable for implementing the embodiments of this application. The device includes a computer system with a central processing unit (CPU) that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) or a program loaded from a storage section into a random-access memory (RAM). The RAM also stores various programs and data required for system operation. The CPU, ROM, and RAM are connected to one another via a bus, to which an input/output (I/O) interface is also connected.
The following components are connected to the I/O interface: an input section including a keyboard, a mouse, etc.; an output section including a cathode-ray tube (CRT) or liquid-crystal display (LCD) and speakers; a storage section including a hard disk; and a communication section including a network interface card such as a LAN card or a modem. The communication section performs communication processing via a network such as the Internet. A drive is also connected to the I/O interface as needed. Removable media such as magnetic disks, optical disks, magneto-optical disks, and semiconductor memories are mounted on the drive as needed, so that computer programs read from them can be installed into the storage section as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to FIG. 1 may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for performing the method of FIG. 1. In such embodiments, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium.
The flowchart in the accompanying drawings illustrates possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of this patent application. In this regard, each box in the flowchart may represent a module, a program segment, or a portion of code containing one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations the functions noted in the boxes may occur in an order different from that noted in the drawings; for example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. Note also that each box in the flowchart, and combinations of boxes in the flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units or modules described in the embodiments of this application may be implemented in software or in hardware. The described units or modules may also be provided in a processor, and in some cases their names do not limit the units or modules themselves.
As another aspect, this embodiment also provides a computer-readable storage medium, which may be the computer-readable storage medium contained in the system of the above embodiments, or a computer-readable storage medium that exists on its own without being assembled into a device. The computer-readable storage medium stores one or more programs, which are used by one or more processors to perform the volume measurement method described in this application.
Embodiment 2
Features of this embodiment that are identical to those of Embodiment 1 are not repeated; this embodiment differs from Embodiment 1 as follows:
The 3D vision acquisition module can be placed diagonally above the solid diagonal of the measured object, as shown in FIG. 6. The camera's viewing angle can then cover three major faces of the whole measured object; by solving with the information from these three faces, more volume detail can be obtained and measurement accuracy improved. Likewise, this installation is simple and occupies little space, and a photograph of the measured object can be taken while its volume is measured.
The above description is only a preferred embodiment of this application and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in this application is not limited to technical solutions formed by the specific combination of the above technical features; it should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example solutions in which the above features are interchanged with features of similar function disclosed in (but not limited to) this application.

Claims (13)

  1. A volume measurement method, characterized by comprising the following steps:
    collecting, based on a 3D vision system, first information of the measurement area under the current viewing angle when no measured object is present and first depth image information of the measurement area when the measured object is present, the 3D vision system being located above the measurement area;
    identifying the outer contour of the measured object by comparing gray values of the first information and the first depth image information collected from different viewing angles, obtaining first depth information of the outer contour of the measured object, and filling the area bounded by the outer contour of the measured object to obtain the measured-object target and the size information of the measured-object target;
    dividing the outer contour area of the measured object into blocks according to a preset relationship between the first depth information and the divided blocks, generating block information;
    obtaining the volume of the measured object according to a preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object.
  2. The volume measurement method according to claim 1, characterized in that the 3D vision system comprises a first camera, a second camera, and a structured laser transmitter, wherein the structured laser transmitter is used to collect the first information of the measurement area when no measured object is in the current field of view, and to measure, according to its laser grid, whether the depth change of the measurement area exceeds a threshold; if so, it is determined that a measured object is present in the measurement area, the first camera is driven to collect a first depth image of the measurement area, and the second camera is driven to grayscale-process the first depth image to generate the first depth image information.
  3. The volume measurement method according to claim 2, characterized in that, based on the configured spacing between the first camera and the second camera, the outer contour of the measured object is matched against the standard double-layer calibration target of the 3D vision system to obtain the size of the outer contour of the measured object.
  4. The volume measurement method according to any one of claims 1-3, characterized in that the preset relationship between the first depth information and the divided blocks satisfies:
    Figure PCTCN2019090310-appb-100001
    where
    Target_org denotes the boundary size of one of the divided blocks, Target_new denotes the boundary size of the corresponding newly divided block after the depth change, Distance(Obstacles, Robot) is the function relating the depth to the block-division size, and w_1, w_2 are weight coefficients.
  5. The volume measurement method according to claim 4, characterized in that the preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object satisfies:
    Figure PCTCN2019090310-appb-100002
    where
    H_org is the base height value of the measured object in the image before block division,
    H is the height value of a divided block within the outer contour area of the measured object in the image.
  6. A volume measurement system, characterized by comprising:
    a measurement-area information acquisition unit, configured to collect, based on the 3D vision system, first information of the measurement area under the current viewing angle when no measured object is present and first depth image information of the measurement area when the measured object is present, the 3D vision system being located above the measurement area;
    a measured-object target acquisition unit, configured to identify the outer contour of the measured object by comparing gray values of the first information and the first depth image information collected from different viewing angles, obtain first depth information of the outer contour of the measured object, and fill the area bounded by the outer contour of the measured object to obtain the measured-object target and the size information of the measured-object target;
    a block division unit, configured to divide the outer contour area of the measured object into blocks according to a preset relationship between the first depth information and the divided blocks, generating block information;
    a measured-object volume acquisition unit, configured to obtain the volume of the measured object according to a preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object.
  7. The volume measurement system according to claim 6, characterized in that the 3D vision system comprises a first camera, a second camera, and a structured laser transmitter,
    wherein the structured laser transmitter is configured to collect the first information of the measurement area when no measured object is in the current field of view, and to measure, according to its laser grid, whether the depth change of the measurement area exceeds a threshold; if so, it is determined that a measured object is present in the measurement area, the first camera is driven to collect a first depth image of the measurement area, and the second camera is driven to grayscale-process the first depth image to generate the first depth image information.
  8. The volume measurement system according to claim 7, characterized in that the measured-object target acquisition unit is further configured to match, based on the configured spacing between the first camera and the second camera, the outer contour of the measured object against the standard double-layer calibration target of the 3D vision system to obtain the size of the outer contour of the measured object.
  9. The volume measurement system according to claim 6, characterized in that the 3D vision system is located directly above the measurement area or diagonally above the measured object within the measurement area.
  10. The volume measurement system according to any one of claims 6-9, characterized in that the preset relationship between the first depth information and the divided blocks satisfies:
    Figure PCTCN2019090310-appb-100003
    where
    Target_org denotes the boundary size of one of the divided blocks, Target_new denotes the boundary size of the corresponding newly divided block after the depth change, Distance(Obstacles, Robot) is the function relating the depth to the block-division size, and w_1, w_2 are weight coefficients.
  11. The volume measurement system according to claim 10, characterized in that the preset relationship among the block information, the size information of the measured-object target, and the volume of the measured object satisfies:
    Figure PCTCN2019090310-appb-100004
    where
    H_org is the base height value of the measured object in the image before block division,
    H is the height value of a divided block within the outer contour area of the measured object in the image.
  12. A device, the device comprising:
    one or more processors;
    a memory for storing one or more programs,
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to perform the method according to any one of claims 1-5.
  13. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1-5.
PCT/CN2019/090310 2018-09-28 2019-06-06 Volume measurement method, system, device and computer-readable storage medium WO2020062918A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CA3114457A CA3114457A1 (en) 2018-09-28 2019-06-06 Volume measurement method and system, apparatus and computer-readable storage medium
US17/280,790 US11436748B2 (en) 2018-09-28 2019-06-06 Volume measurement method and system, apparatus and computer-readable storage medium
EP19866397.3A EP3842736A4 (en) 2018-09-28 2019-06-06 VOLUME MEASUREMENT PROCESS, SYSTEM AND DEVICE, AND COMPUTER READABLE INFORMATION MEDIA
KR1020217010703A KR102559661B1 (ko) 2018-09-28 2019-06-06 부피 측량 방법, 시스템, 설비 및 컴퓨터 판독이 가능한 저장매체
JP2021517211A JP2022501603A (ja) 2018-09-28 2019-06-06 体積測定方法、システム、装置及びコンピュータ読取可能な記憶媒体

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811141066.7A 2018-09-28 2018-09-28 顺丰科技有限公司 Volume measurement method and system
CN201811141066.7 2018-09-28

Publications (1)

Publication Number Publication Date
WO2020062918A1 true WO2020062918A1 (zh) 2020-04-02

Family

ID=65544781

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/090310 WO2020062918A1 (zh) 2018-09-28 2019-06-06 体积测量方法、系统、设备及计算机可读存储介质

Country Status (7)

Country Link
US (1) US11436748B2 (zh)
EP (1) EP3842736A4 (zh)
JP (1) JP2022501603A (zh)
KR (1) KR102559661B1 (zh)
CN (1) CN109443196B (zh)
CA (1) CA3114457A1 (zh)
WO (1) WO2020062918A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115235344A (zh) * 2022-06-06 2022-10-25 苏州天准科技股份有限公司 Vortex-beam-based measurement system and height measurement method

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109443196B (zh) 2018-09-28 2020-07-24 顺丰科技有限公司 Volume measurement method and system
CN109916302B (zh) * 2019-03-27 2020-11-20 青岛小鸟看看科技有限公司 Volume measurement method and system for cargo box bodies
CN110017773B (zh) * 2019-05-09 2021-12-17 福建(泉州)哈工大工程技术研究院 Machine-vision-based parcel volume measurement method
CN111986250A (zh) * 2019-05-22 2020-11-24 顺丰科技有限公司 Object volume measurement method and apparatus, measuring device, and storage medium
CN112150533A (zh) * 2019-06-28 2020-12-29 顺丰科技有限公司 Object volume calculation method, apparatus, device, and storage medium
TWI738098B (zh) 2019-10-28 2021-09-01 阿丹電子企業股份有限公司 Optical volume measuring device
CN111561872B (zh) * 2020-05-25 2022-05-13 中科微至智能制造科技江苏股份有限公司 Parcel volume measurement method, apparatus, and system based on speckle-coded structured light
CN112584015B (zh) * 2020-12-02 2022-05-17 达闼机器人股份有限公司 Object detection method, apparatus, storage medium, and electronic device
CN113313803B (zh) * 2021-06-11 2024-04-19 梅卡曼德(北京)机器人科技有限公司 Stack-shape analysis method, apparatus, computing device, and computer storage medium
CN116734945A (zh) * 2023-06-16 2023-09-12 广州市西克传感器有限公司 Grating-based volume measurement system for regular box bodies
CN117670979B (zh) * 2024-02-01 2024-04-30 四川港投云港科技有限公司 Bulk cargo volume measurement method based on a fixed-position monocular camera

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104197834A (zh) * 2014-09-04 2014-12-10 福建师范大学 Non-contact volume measurement system and method for complex-surface cones
CN106839995A (zh) * 2017-01-22 2017-06-13 南京景曜智能科技有限公司 Apparatus and method for detecting the three-dimensional dimensions of articles
EP3232404A1 (en) * 2016-04-13 2017-10-18 SICK, Inc. Method and system for measuring dimensions of a target object
CN109443196A (zh) * 2018-09-28 2019-03-08 顺丰科技有限公司 Volume measurement method and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11211438A (ja) * 1998-01-22 1999-08-06 Komatsu Ltd Load-bed loading volume measuring device
JP3803750B2 (ja) * 2002-02-22 2006-08-02 防衛庁技術研究本部長 Volume measuring method and volume measuring program
JP4917983B2 (ja) * 2007-07-18 2012-04-18 大成建設株式会社 Conveyance amount estimating device
NO330423B1 (no) * 2009-06-26 2011-04-11 Storvik Aqua As Anordning og fremgangsmate for fisketelling eller biomassebestemmelse
US8564534B2 (en) * 2009-10-07 2013-10-22 Microsoft Corporation Human tracking system
CN107388960B (zh) * 2016-05-16 2019-10-22 杭州海康机器人技术有限公司 一种确定物体体积的方法及装置
US10630959B2 (en) * 2016-07-12 2020-04-21 Datalogic Usa, Inc. System and method for object counting and tracking

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104197834A (zh) * 2014-09-04 2014-12-10 福建师范大学 Non-contact volume measurement system and method for complex-surface cones
EP3232404A1 (en) * 2016-04-13 2017-10-18 SICK, Inc. Method and system for measuring dimensions of a target object
CN106839995A (zh) * 2017-01-22 2017-06-13 南京景曜智能科技有限公司 Apparatus and method for detecting the three-dimensional dimensions of articles
CN109443196A (zh) * 2018-09-28 2019-03-08 顺丰科技有限公司 Volume measurement method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3842736A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115235344A (zh) * 2022-06-06 2022-10-25 苏州天准科技股份有限公司 Vortex-beam-based measurement system and height measurement method
CN115235344B (zh) * 2022-06-06 2023-08-01 苏州天准科技股份有限公司 Vortex-beam-based measurement system and height measurement method

Also Published As

Publication number Publication date
CA3114457A1 (en) 2020-04-02
EP3842736A4 (en) 2021-10-06
CN109443196B (zh) 2020-07-24
CN109443196A (zh) 2019-03-08
KR102559661B1 (ko) 2023-07-26
US20220012907A1 (en) 2022-01-13
KR20210080368A (ko) 2021-06-30
JP2022501603A (ja) 2022-01-06
US11436748B2 (en) 2022-09-06
EP3842736A1 (en) 2021-06-30

Similar Documents

Publication Publication Date Title
WO2020062918A1 (zh) Volume measurement method, system, device and computer-readable storage medium
US9053547B2 (en) Three-dimensional point cloud position data processing device, three-dimensional point cloud position data processing system, and three-dimensional point cloud position data processing method and program
CN113657224B (zh) 车路协同中用于确定对象状态的方法、装置、设备
US9207069B2 (en) Device for generating a three-dimensional model based on point cloud data
US20130121564A1 (en) Point cloud data processing device, point cloud data processing system, point cloud data processing method, and point cloud data processing program
CN109801333B (zh) 体积测量方法、装置、系统及计算设备
CN112132523A (zh) 一种货物数量确定方法、系统和装置
CN111750804A (zh) 一种物体测量的方法及设备
CN112883955A (zh) 货架布局检测方法、装置及计算机可读存储介质
CN112378333B (zh) 仓储货物测量方法和装置
CN112149348A (zh) 一种基于无人货柜场景的仿真空间模型训练数据生成方法
CN115546202A (zh) 一种用于无人叉车的托盘检测与定位方法
CN114396875A (zh) 一种基于深度相机垂直拍摄的长方形包裹体积测量方法
CN115856829B (zh) 一种雷达三维数据转换的图像数据识别方法及系统
US11074708B1 (en) Dark parcel dimensioning
Ladplee et al. Volumetric measurement of rectangular parcel box using LiDar depth camera for dimensioning and 3D bin packing applications
CN115272465A (zh) 物体定位方法、装置、自主移动设备和存储介质
CN114372313A (zh) 用于实测实量的影像处理方法、系统及激光扫描仪
CN113688900A (zh) 雷达和视觉数据融合处理方法、路侧设备及智能交通系统
CN113538553A (zh) 基于规则箱体的体积测量装置
CN116258714B (zh) 一种缺陷识别方法、装置、电子设备和存储介质
WO2023231425A1 (zh) 定位方法、电子设备、存储介质及程序产品
CN117953038A (zh) 一种基于深度相机的非规则体积测量方法、系统、设备及存储介质
GB2621906A (en) A system and method for processing image data
Yang et al. A Method to Improve the Precision of 2-Dimensioanl Size Measurement of Objects through Image Processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19866397

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021517211

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 3114457

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2101001848

Country of ref document: TH

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019866397

Country of ref document: EP

Effective date: 20210326