SE541083C2 - Method and image processing system for facilitating estimation of volumes of load of a truck - Google Patents

Method and image processing system for facilitating estimation of volumes of load of a truck

Info

Publication number
SE541083C2
Authority
SE
Sweden
Prior art keywords
image
truck
vertical plane
point cloud
information
Prior art date
Application number
SE1751405A
Other versions
SE1751405A1 (en)
Inventor
Anders Nyberg
Jimmy Jonsson
Original Assignee
Cind Ab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cind Ab filed Critical Cind Ab
Priority to SE1751405A priority Critical patent/SE541083C2/en
Priority to PCT/SE2018/051075 priority patent/WO2019098901A1/en
Publication of SE1751405A1 publication Critical patent/SE1751405A1/en
Publication of SE541083C2 publication Critical patent/SE541083C2/en


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01F MEASURING VOLUME, VOLUME FLOW, MASS FLOW OR LIQUID LEVEL; METERING BY VOLUME
    • G01F17/00 Methods or apparatus for determining the capacity of containers or cavities, or the volume of solid bodies
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/02 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01G WEIGHING
    • G01G19/00 Weighing apparatus or methods adapted for special purposes not provided for in the preceding groups
    • G01G19/02 Weighing apparatus or methods adapted for special purposes not provided for in the preceding groups for weighing wheeled or rolling bodies, e.g. vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/245 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures using a plurality of fixed, simultaneously operating transducers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Fluid Mechanics (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Disclosed is an image-processing computer system (30) and method for facilitating estimation of volumes of load of a truck (20) moving on a surface (7). The method comprises image processing of one or more 3D images captured of the truck (20) by a first camera (11) into a 3D point cloud, determining a vertical plane representing a first side (20a) of the truck from the 3D point cloud, the determined vertical plane being the most common denominator of a majority of points of the 3D point cloud (110), and warping an image captured of the truck (20) at the same time as the 3D image around the determined vertical plane, so that the image appears to have been captured from a virtual camera positioned substantially horizontal to the truck load at a predetermined distance from the determined vertical plane. Hereby, sizes of truck loads can be more correctly determined from images, as the warped images show the loads from the same distance and direction irrespective of the distance and direction at which trucks actually drive by the cameras.

Description

METHOD AND IMAGE PROCESSING SYSTEM FOR FACILITATING ESTIMATION OF VOLUMES OF LOAD OF A TRUCK

Technical field
[0001] The present disclosure relates generally to an image-processing method performed by a computer system for facilitating estimation of volumes of load of a truck driving on a surface. The disclosure further relates to an image-processing computer system and a computer program for facilitating estimation of volumes of load of a truck moving on a surface.
Background
[0002] When trucks carrying timber loads are to deliver them to a production unit of the forest industry, the amount of timber that is delivered needs to be measured or estimated. Today, most such measurements are performed manually. In other words, persons observe a truck from different angles in order to estimate how much timber there is on the truck.
[0003] Recently, there have been discussions of automatically determining timber volumes using cameras that capture images of the truck with its timber load and calculating timber volumes based on analyses of the captured images. With such a solution it would be possible to quickly and automatically determine timber volumes and avoid time-consuming and personnel-intensive manual determination methods. Consequently, there is a need for an image processing method for facilitating estimation of timber volumes of a truck.
Summary
[0004] It is an object of the invention to address at least some of the problems and issues outlined above. It is an object of embodiments of the invention to facilitate determination of volumes of a load of a truck using image processing of images captured of the truck. It is possible to achieve this object and others by using a method, a system and a computer program as defined in the attached independent claims.
[0005] According to one aspect, an image-processing method performed by a computer system for facilitating estimation of volumes of load of a truck moving on a surface is provided. The method comprises image processing of one or more 3D information-carrying images captured of the truck, including a first side of the truck, by a first camera, into a 3D point cloud positioned in a coordinate system defining a horizontal plane of the surface on which the truck moves and a predetermined truck-driving direction, wherein the one or more 3D information-carrying images comprise a first 3D information-carrying image captured at a first time point. The method further comprises determining a vertical plane of the 3D point cloud, the vertical plane representing the first side of the truck, the vertical plane being substantially perpendicular to the horizontal plane of the surface and extending in the predetermined truck-driving direction, wherein the determined vertical plane is the most common denominator of a majority of points of the 3D point cloud. The method further comprises warping a first 2D image captured of the truck at the first time point around the determined vertical plane so that the first 2D image seems to have been captured from a virtual camera positioned at a position substantially horizontal to the truck load at a distance from the determined vertical plane, and captured in a virtual camera direction perpendicular to the vertical plane, the warping being based on a position of the camera that captured the first 2D image, on a direction in which the first 2D image was captured, and on the virtual camera position and the virtual camera direction.
[0006] By such a method, after the warping step, an image is obtained that shows the first side of the load captured by any camera as if the image were captured at one and the same distance from the load, i.e. at the virtual camera position. This method can be used for any load that passes by the cameras in the portal on the surface, e.g. a road, which means that the size of loads passing by can be determined directly from the warped image. Also, sizes of different loads are comparable in different warped images, as all loads are shown as if they passed at the same distance from, and at the same angle to, the virtual camera. The distance of the virtual camera from the vertical plane, and also the field of view of the virtual camera, is preferably selected so that one pixel in the warped image represents a measurable length unit of the truck, e.g. one pixel may represent 3 mm of the truck in the real world. Hereby the size of the side of the load can be measured in the warped image, and the estimation of the volume of the load is facilitated. By similar measurements of the upper side, and possibly also of the second vertical side of the load, opposite the first vertical side, the volume of the load can be even better approximated.
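The fixed pixel-to-metric relation described above can be sketched as follows; a minimal illustration assuming the example scale of 3 mm per pixel (the function name and scale constant are hypothetical, not taken from the patent):

```python
# Convert a pixel distance measured in the warped image to real-world units.
# The scale (mm per pixel) is fixed by the chosen virtual-camera distance and
# field of view; 3 mm/pixel is the example value mentioned in the text.
MM_PER_PIXEL = 3.0

def pixel_length_to_mm(pixel_length: float, mm_per_pixel: float = MM_PER_PIXEL) -> float:
    """Length of a feature in the warped image, converted to millimetres."""
    return pixel_length * mm_per_pixel

# e.g. a load side spanning 800 pixels corresponds to 2.4 m
side_mm = pixel_length_to_mm(800)
```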
[0007] The coordinate system mentioned may be a Cartesian coordinate system having a first axis and a second axis substantially coinciding with a horizontal plane of the surface on which the truck moves, wherein one of the first and second axes extends in a predetermined truck-driving direction, the third axis thereby extending vertically in relation to the horizontal plane. According to an embodiment, the 3D information-carrying images are in a first step image processed into a 3D point cloud in a Cartesian coordinate system with origin in the first camera, and thereafter the 3D point cloud is translated into the Cartesian coordinate system having first and second axes substantially coinciding with a horizontal plane of the surface on which the truck moves. When setting up the system including the first camera, the system is calibrated so that the coordinate systems are set in the images, i.e. so that the horizontal plane and the truck-driving direction are detected in images from the first camera. The wording “truck” signifies the truck and its load, e.g. a timber load, or at least one of its loads, if more than one. The most common denominator may be determined from a least squares method; alternatively a mean value or a median value may be determined. The first 2D image may be captured by any camera, as long as it is captured at the same time as the 3D information-carrying image. The 2D image may be one of the images from the stereo camera that produced the 3D information-carrying image, in case the first camera is a stereo camera.
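The translation of the point cloud from a camera-centred coordinate system into the road-surface coordinate system is a rigid transform. A minimal sketch, assuming a rotation R and translation t obtained from the calibration step (the function name and axis convention are illustrative):

```python
import numpy as np

def to_road_frame(points_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map Nx3 points from the camera coordinate system into the road-surface
    coordinate system (x along the truck-driving direction, z vertical), using
    a rotation R (3x3) and translation t (3,) obtained from calibration."""
    return points_cam @ R.T + t

# Sanity check: an identity calibration leaves points unchanged
pts = np.array([[1.0, 2.0, 3.0]])
same = to_road_frame(pts, np.eye(3), np.zeros(3))
```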
[0008] According to an embodiment, the first camera is a stereo camera, and each of the one or more 3D information-carrying images comprises a pair of images simultaneously captured by two different image sensors of the first stereo camera.
[0009] According to another embodiment, the determining comprises partitioning points of the 3D point cloud into a plurality of groups depending on each point’s position in a direction perpendicular to the truck-driving direction, selecting a group of the plurality of groups, the selected group having more points of the 3D point cloud than each of the other of the plurality of groups, and determining the substantially vertical plane based on the points of the 3D point cloud that are within the selected group, wherein the vertical plane is the most common denominator of most points of the selected group. Hereby, the vertical plane can be even more exactly determined, resulting in an even better warping of the 2D image and approximation of the truck load.
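The partitioning step above could be sketched as a histogram over the coordinate perpendicular to the truck-driving direction, keeping the most populated slab of points. The bin width and the choice of the y axis as the perpendicular direction are assumptions for illustration:

```python
import numpy as np

def densest_slab(points: np.ndarray, bin_width: float = 0.1) -> np.ndarray:
    """Partition points by their coordinate perpendicular to the truck-driving
    direction (here the y axis) into slabs of bin_width (assumed metres), and
    return the points of the most populated slab."""
    bins = np.floor(points[:, 1] / bin_width).astype(int)
    labels, counts = np.unique(bins, return_counts=True)
    best = labels[np.argmax(counts)]
    return points[bins == best]

# Ten points near y = 2.02 (the truck side) dominate two stray points
pts = np.vstack([
    np.column_stack([np.arange(10.0), np.full(10, 2.02), np.zeros(10)]),
    np.array([[0.0, 5.0, 0.0], [1.0, -3.0, 0.0]]),
])
selected = densest_slab(pts)
```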
[00010] According to another embodiment, the one or more 3D information-carrying images further comprise a second 3D information-carrying image captured by the first camera at a second time point earlier than the first time point. The method further comprises detecting, in the first 3D information-carrying image, first feature positions of a plurality of characteristic features of the truck, and, in the second 3D information-carrying image, detecting second feature positions of the plurality of characteristic features. The method further comprises determining movement data of the truck from the second time point to the first time point based on the detected first and second feature positions of the plurality of characteristic features, wherein the image processing of the one or more 3D information-carrying images comprises image processing of the first and the second 3D information-carrying images into a 3D point cloud, taking into account the determined movement data. By taking into account data from a plurality of 3D information-carrying images taken at consecutive time points when the truck drives through the portal, a more exact determination of the vertical plane can be made, resulting in a more exact warping of 2D images and a better approximation of the truck load.
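The movement data between the two time points might, for instance, be derived from the displacement of the matched feature positions. The patent does not specify the estimator, so this sketch uses a robust median over the per-feature displacements:

```python
import numpy as np

def estimate_movement(first_pos: np.ndarray, second_pos: np.ndarray) -> np.ndarray:
    """Estimate the truck's translation between two time points as the median
    displacement of matched characteristic features (Nx3 arrays, same feature
    order). The median is robust to a few mismatched features."""
    return np.median(first_pos - second_pos, axis=0)

# Features moved 1.5 m along x (the driving direction) between the captures
second = np.array([[0.0, 1.0, 0.0], [2.0, 1.1, 0.2], [4.0, 0.9, 0.1]])
first = second + np.array([1.5, 0.0, 0.0])
motion = estimate_movement(first, second)
```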
[00011] According to another embodiment, the one or more 3D information-carrying images further comprise a second 3D information-carrying image captured by the first camera at a second time point earlier than the first time point. Further, the image processing comprises image processing of the first 3D information-carrying image into a first 3D point cloud positioned in the coordinate system and image processing of the second 3D information-carrying image into a second 3D point cloud positioned in the coordinate system. Further, the determining comprises determining a first vertical plane of the first 3D point cloud, the first vertical plane being the most common denominator of most points of the first 3D point cloud, and determining a second vertical plane of the second 3D point cloud, the second vertical plane being the most common denominator of most points of the second 3D point cloud. Further, the first 2D image is warped around the first vertical plane, and a second 2D image captured of the truck at the second time point is warped around the determined second vertical plane so that the second 2D image seems to have been captured from the virtual camera in the virtual camera direction. This embodiment of the method may further comprise detecting positions of wheels of the truck in the warped first 2D image and in the warped second 2D image, comparing the detected wheel positions in the warped first 2D image and in the warped second 2D image with predetermined preferred wheel positions, and selecting the one of the warped first 2D image and the warped second 2D image that has most resemblance with any of the predetermined wheel positions. Hereby, the one of the first and second warped 2D images that best shows the truck load can be automatically selected for the estimation of truck load volume. Also, wheels can be easily detected in warped images, as they are shown as circles in an image warped around a vertical plane.
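The selection between warped images based on wheel positions could be sketched as a nearest-distance score against the predetermined preferred positions; the scoring function and pixel positions below are assumptions for illustration, not taken from the patent:

```python
def wheel_position_score(detected, preferred):
    """Sum of distances from each detected wheel x-position to the nearest
    preferred x-position (all positions in pixels along the warped image)."""
    return sum(min(abs(d - p) for p in preferred) for d in detected)

def select_best_image(images_with_wheels, preferred):
    """Among (image_id, wheel_positions) pairs, return the id of the image
    whose detected wheels lie closest to the preferred positions."""
    return min(images_with_wheels,
               key=lambda iw: wheel_position_score(iw[1], preferred))[0]

# The second warped image matches the preferred wheel positions best
preferred = [100.0, 400.0, 700.0]
best = select_best_image(
    [("first", [130.0, 430.0, 730.0]), ("second", [105.0, 398.0, 703.0])],
    preferred,
)
```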
[00012] According to a variant of the above embodiment, the one or more 3D information-carrying images further comprise a third image captured by the first camera at a third time point earlier than the second time point. Further, the method comprises detecting first feature positions of a plurality of characteristic features of the truck in the first 3D information-carrying image, detecting second feature positions of the plurality of characteristic features in the second 3D information-carrying image, and detecting third feature positions of the plurality of characteristic features in the third image. Thereafter, movement data of the truck is determined from the third time point to the second time point to the first time point based on the detected first, second and third feature positions of the plurality of characteristic features. Further, the image processing of the one or more 3D information-carrying images comprises image processing of the first, second and third images into the first 3D point cloud, taking into account the determined movement data, and image processing of the second and third images into the second 3D point cloud, taking into account the determined movement data.
[00013] According to another embodiment, the determining of the vertical plane is performed based on a RANSAC algorithm.
[00014] According to another embodiment, the method further comprises deleting from the 3D point cloud points that are determined not to belong to the load. Such determination may be performed by presetting an area of the images that may be of interest, i.e. that may comprise points of the timber load, and deleting points outside the area. For example, as timber loads normally are not situated below 1.5 meters above ground level, such points may be deleted. Hereby, a better approximation of the vertical side of the load can be determined.
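The deletion of points below the minimum load height can be sketched as a simple filter. The 1.5 m threshold comes from the text; the sign convention (z pointing vertically downwards, as in figs. 1 and 2) makes height above ground equal to -z:

```python
import numpy as np

def remove_non_load_points(points: np.ndarray, min_height_m: float = 1.5) -> np.ndarray:
    """Drop 3D points below the assumed minimum load height above ground.
    In the patent's coordinate system the z axis points vertically downwards,
    so height above ground is -z."""
    height = -points[:, 2]
    return points[height >= min_height_m]

pts = np.array([[0.0, 0.0, -0.5],   # 0.5 m above ground: dropped
                [0.0, 0.0, -2.0]])  # 2.0 m above ground: kept
load_pts = remove_non_load_points(pts)
```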
[00015] According to another embodiment, the vertical plane can be tilted up to 10 degrees in relation to the predetermined truck-driving direction when determining the vertical plane of the 3D point cloud. In other words, when determining the vertical plane that is the most common denominator for a majority of points of the 3D point cloud, it is tested whether there may be more points in any direction that is slightly tilted in relation to the predetermined truck-driving direction, in order to cater for a truck passing the cameras a little askew.
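Testing slightly tilted plane directions, as described above, could be sketched as a search over candidate tilt angles that counts inlier points for each; the inlier tolerance, step size, and the simplified plane parameterisation are assumptions for illustration:

```python
import numpy as np

def best_tilted_normal(points: np.ndarray, d: float, tol: float = 0.05,
                       max_tilt_deg: float = 10.0, step_deg: float = 1.0):
    """Try plane normals tilted up to max_tilt_deg from the direction
    perpendicular to the truck-driving direction (the y axis) and return the
    tilt angle (degrees) whose plane y*cos(a) + x*sin(a) = d has the most
    inliers, together with that inlier count. tol is an assumed inlier
    distance in metres."""
    angles = np.arange(-max_tilt_deg, max_tilt_deg + step_deg, step_deg)
    best_angle, best_count = 0.0, -1
    for a in angles:
        rad = np.radians(a)
        dist = np.abs(points[:, 1] * np.cos(rad) + points[:, 0] * np.sin(rad) - d)
        count = int(np.sum(dist < tol))
        if count > best_count:
            best_angle, best_count = float(a), count
    return best_angle, best_count

# 50 points lying on a plane tilted 5 degrees from the driving direction
x = np.linspace(0.0, 10.0, 50)
rad = np.radians(5.0)
pts = np.column_stack([x, (2.0 - x * np.sin(rad)) / np.cos(rad), np.zeros(50)])
angle, count = best_tilted_normal(pts, d=2.0)
```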
[00016] According to other aspects, an image-processing computer system and a computer program are also provided, the details of which will be described in the claims and the detailed description.
[00017] Further possible features and benefits of this solution will become apparent from the detailed description below.
Brief description of drawings
[00018] The solution will now be described in more detail by means of exemplary embodiments and with reference to the accompanying drawings, in which:
[00019] Fig. 1 is a schematic block diagram of an arrangement comprising a portal equipped with cameras in which the present invention may be used.
[00020] Fig. 2 is a schematic block diagram of a truck passing through the portal of fig. 1.
[00021] Fig. 3 is a flow chart illustrating a method for facilitating estimation of timber load volumes according to possible embodiments.
[00022] Fig. 4 is an illustration of a 3D point cloud produced from a 3D information-carrying image of a truck captured by a camera arranged at the portal.
[00023] Fig. 5 is an illustration of the 3D point cloud of fig. 4, the illustration comprising a vertical plane representing a side of the truck, determined according to an embodiment.
[00024] Fig. 6 is a photo of a truck passing through a portal.
[00025] Fig. 7 is the photo of fig. 6 warped around a vertical plane defined from a 3D point cloud determined from 3D information-carrying images captured of the truck, wherein the 3D point cloud is determined for the same time point as the photo of fig. 6 is captured.
[00026] Fig. 8 is a flow chart of an embodiment of the inventive method.
[00027] Fig. 9 is another flow chart of an embodiment of the inventive method.
[00028] Fig. 10 is a schematic block diagram of a truck and four camera views captured in different time points a-d as the truck drives by.
[00029] Fig. 11 is a flow chart of an embodiment of the invention.
[00030] Fig. 12 is a schematic block diagram of a computer system according to the invention.
Detailed description
[00031] Briefly described, a method and a system are provided that use image processing to facilitate determining loads of a truck. The loads of the truck may be timber loads. This is achieved by using cameras that capture images of the truck as the truck passes by the cameras, and then using image processing on the captured images in order to provide processed images from which the load is measurable. An image captured by one camera is warped so that the image seems to have been captured by a virtual camera situated at a determined position and directed towards the timber load at a determined angle, such as perpendicular to a vertical side of the load. Hereby, it does not matter how far from the camera the truck is moving; the image is warped to the same determined position and angle of the virtual camera. Also, the field of view of the camera may be selected so that the size of e.g. a side of the timber load is directly measurable. In order to warp an image captured by a camera, three-dimensional, 3D, information-carrying images are captured by a first camera and the 3D information-carrying images are processed into a 3D point cloud. The first camera that captures such 3D information-carrying images may be e.g. a stereo camera or a camera equipped with a depth sensor, such as a laser-equipped camera. The 3D point cloud is then positioned in a Cartesian coordinate system defining a horizontal plane of the road surface on which the truck moves and a general truck-driving direction.
Thereafter, a vertical plane is determined from the 3D point cloud, the vertical plane representing the vertical side of the load that faces the virtual camera. This vertical plane is determined from the 3D point cloud as the most common denominator of points of the 3D point cloud. Such a determination will be a good approximation of the actual vertical side of the timber load facing the virtual camera, as there are most 3D points in such a plane for images comprising a side of a truck. Further down in the description, embodiments are described of how to approximate the vertical side of the timber load with more precision. After this vertical plane has been determined, any image captured by a real camera at the same time as the 3D information-carrying image was captured, or at least at the same time point for which the 3D point cloud is determined, in case many consecutive 3D information-carrying images were used for determining the 3D point cloud, can be warped to a virtual camera position. In other words, any image captured by a real camera can be warped around the vertical plane such that the image appears as if it had been captured from the virtual camera. When the image has been warped to the virtual camera position, the pixels of the image can be transformed into metrical units based on a scaling factor. An operator that observes this warped image can then determine the size of the side of the timber load directly from the image, for example by using a computer measuring tool that measures the size directly from the warped image on the computer screen of the operator.
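The warp to the virtual camera corresponds to the standard plane-induced homography from multiple-view geometry; the sketch below is that textbook construction, not necessarily the exact formulation used in the patent. K_real and K_virt are the camera intrinsics, and R, t the virtual camera's pose relative to the real one (all assumed known from calibration and the chosen virtual camera position):

```python
import numpy as np

def plane_homography(K_real, K_virt, R, t, n, d):
    """Homography induced by the plane n . X = d (normal n and distance d,
    both expressed in the real camera's frame), mapping real-image pixels to
    virtual-image pixels. R, t are the rotation and translation of the
    virtual camera relative to the real one. Standard plane-induced
    homography: H = K_virt (R - t n^T / d) K_real^-1."""
    n = np.asarray(n, dtype=float).reshape(3, 1)
    t = np.asarray(t, dtype=float).reshape(3, 1)
    return K_virt @ (R - t @ n.T / d) @ np.linalg.inv(K_real)

def warp_pixel(H, u, v):
    """Apply homography H to pixel (u, v), returning the warped pixel."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Sanity check: identical real and virtual cameras give the identity warp
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
H = plane_homography(K, K, np.eye(3), np.zeros(3), n=[0.0, 1.0, 0.0], d=4.0)
u, v = warp_pixel(H, 100.0, 200.0)
```

In practice the resulting H would be applied to the whole image at once, e.g. with OpenCV's warpPerspective.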
[00032] Fig. 1 shows an embodiment of an arrangement in the shape of a portal 1 equipped with cameras, through which portal trucks with one or more trailers loaded with timber are to pass, for facilitating determining the volume of the timber load. The exemplary portal of fig. 1 has a soccer-goal shape; however, other shapes may apply. Consequently, the portal of fig. 1 has a first vertical part 3 and a second vertical part 4 that extend perpendicularly from the ground 2 upwards. The first and second vertical parts 3, 4 are connected by a horizontal part 5 extending approximately horizontally to the ground 2. The portal is positioned in a Cartesian coordinate system, wherein the x-axis and the y-axis are in the horizontal plane and the z-axis points vertically downwards. The x-axis is directed in a general truck-driving direction through the portal. On the portal there are arranged cameras that capture images of a truck as it passes through the portal 1. In the exemplary portal shown in fig. 1 there is a first stereo camera 11 positioned on the first vertical part 3, capturing a number of 3D information-carrying images of a truck, including a first side of the truck. Preferably, the first stereo camera 11 is positioned high enough up on the vertical part 3 to be able to also get a view of an upper horizontal side of the truck. The first stereo camera 11 may be directed in the same plane as the plane the portal 1 constitutes, i.e. in the y-z plane, capturing images in a direction perpendicular to the general truck-driving direction a (see fig. 2) along the x-axis through the portal 1. Alternatively, the first stereo camera 11 may be directed at an angle to the general truck-driving direction, either at an angle so that it captures images angled against the general truck-driving direction or at an angle so that it captures images angled along the general truck-driving direction. In the exemplary portal of fig. 1 there is further a second stereo camera 12 arranged on the horizontal part 5, capturing a number of 3D information-carrying images of the horizontal, upper side of the truck as it passes by. As for the first stereo camera 11, the second stereo camera 12 may be directed perpendicular to or angled to the general truck-driving direction. There is further a third stereo camera 13 arranged on the second vertical part 4, capturing a number of 3D information-carrying images of the second vertical side of the truck as it passes by. As for the first stereo camera 11, the third stereo camera 13 may be directed perpendicular to or angled to the general truck-driving direction. In this embodiment there are also other cameras, for example one camera 14 directed at an angle along the general truck-driving direction for capturing high-resolution 2D information-carrying images of the back side of the truck. Those 2D information-carrying images are used for determining the quality of the timber load. There may also be other 2D cameras showing different angles of the truck, such as cameras 15, 16.
[00033] Further, the cameras 11-16 are communicatively connected to a computer system 30 that is arranged to perform processing of the images captured by the cameras 11-16 and to perform the inventive method. The computer system may comprise one or more processors and one or more memories. The one or more processors and memories may be arranged in one or more locations, or may be distributed in a wireless and/or wireline communication network.
[00034] Fig. 2 shows an example of a truck 20 having two trailers 21, 22 loaded with load, in this case timber 23, 24, the truck passing through the portal 1. The truck 20 drives on a road surface 7. The truck 20 has a first vertical side 20a and a second vertical side (not shown) extending in a driving direction of the truck, on opposite sides of the truck. The truck further has an upper, generally horizontal side 20c. The truck 20 further comprises a driving compartment 20d. In the disclosure, the wording “truck” generally signifies the truck driving compartment part 20d including the trailers 21, 22 connected to the truck driving compartment part and the load 23, 24 on the trailers. In fig. 2, as well as in fig. 1 there is marked a Cartesian coordinate system having the x- and y-axes in the road surface, and the z-axis pointing vertically downwards, wherein the x-axis points in the general truck driving direction a.
[00035] Fig. 3, with some references to figs. 1, 2 and 4, 5, shows an embodiment of a method performed by the computer system 30 for facilitating estimation of volumes of load of a truck 20 moving on a surface 7, the truck 20 comprising the load 23, 24. Firstly, the first camera 11, which in this example is a stereo camera, captures 202 one or more 3D information-carrying images of the truck 20 as it passes by. The first stereo camera may have two different image sensors that simultaneously capture two 2D images, the two images making up an image pair. Such an image pair may be understood as one 3D information-carrying image, in case a stereo camera is used, as the 2D images of the image pair together contain depth information. In case another type of camera creating 3D information-carrying images is used, the 3D information-carrying image may have another form. The captured 3D information-carrying images are communicated to the computer system 30. The computer system processes 204 the one or more 3D information-carrying images into a 3D point cloud 110, which is shown in fig. 4. The processing of 3D information-carrying images into a 3D point cloud may be performed in any known manner. When using a stereo camera, it may be performed using photogrammetry on the two simultaneously captured 2D images, captured by the two different image sensors of the stereo camera. In photogrammetry, each of the two simultaneously captured 2D images is rectified so that they appear in the same plane. Thereafter, information that is common to the two 2D images, i.e. information identified as showing the same feature, is determined from the rectified images. Further, a disparity map is created that comprises information on the different features from both 2D images.
Then the disparity map is converted into a 3D point cloud, wherein the points of the 3D point cloud position the identified features in a 3D Cartesian coordinate system with different positions in the image depth direction depending on the disparity information. According to another embodiment, the camera that produces the 3D information-carrying images may be a camera equipped with a depth sensor, such as a camera having depth-determining equipment, e.g. using laser light, to determine the position of identified features.
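The conversion from a disparity map to a 3D point cloud follows the standard rectified-stereo pinhole relations; a sketch with assumed focal length, baseline and principal point (illustrative values, not from the patent):

```python
import numpy as np

def disparity_to_points(disparity, f, baseline, cx, cy):
    """Convert a disparity map (pixels) from a rectified stereo pair into 3D
    points in the left camera's frame, using the standard pinhole relations
    Z = f*B/disparity, X = (u-cx)*Z/f, Y = (v-cy)*Z/f. Zero disparities
    (no stereo match) are skipped."""
    h, w = disparity.shape
    v, u = np.mgrid[0:h, 0:w]
    valid = disparity > 0
    Z = f * baseline / disparity[valid]
    X = (u[valid] - cx) * Z / f
    Y = (v[valid] - cy) * Z / f
    return np.column_stack([X, Y, Z])

# A uniform disparity of 10 px with f = 1000 px and a 0.3 m baseline
# corresponds to a flat surface 30 m away
disp = np.full((4, 4), 10.0)
pts = disparity_to_points(disp, f=1000.0, baseline=0.3, cx=2.0, cy=2.0)
```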
[00036] According to an embodiment, the 3D point cloud is then positioned 206 in the coordinate system shown in figs. 1, 2 and 4, defining a horizontal plane of the road surface on which the truck 20 moves and defining the predetermined truck-driving direction a. Then a vertical plane 120 (see fig. 5) of the 3D point cloud 110 is determined 208 that is to represent a common denominator of the first side 20a of the truck, specifically the first side of the truck load 23, e.g. the timber. The vertical plane 120 is substantially perpendicular to a horizontal plane of the road surface 7 and extends in approximately the general truck-driving direction a. The vertical plane 120 is determined as the most common denominator of a majority of points of the 3D point cloud 110. Fig. 5 shows a 3D point cloud in which the vertical plane 120 has been determined. The vertical plane is illustrated in fig. 5 with two diagonal lines crossing each other, wherein the ends of the diagonal lines above the truck are connected, as are the ends below the truck. In the embodiment of fig. 6, separate vertical planes are determined for each load of the truck. In fig. 6, the vertical plane 120 marked is for the load 23 closest to the driving compartment 20d. In step 208, one substantially vertical plane is determined out of all possible vertical planes or substantially vertical planes in the 3D point cloud. According to an embodiment, the determined vertical plane is the most common denominator for inlier points, i.e. points within a certain distance of the plane, and the plane should at the same time have as many inlier points as possible. The vertical plane determined in this way will represent the first vertical side 20a of the truck. The determination of the vertical plane may, according to an embodiment, be performed by a random sample consensus, RANSAC, algorithm performed on the points of the 3D point cloud.
The RANSAC algorithm determines inlier points of the 3D point cloud, i.e. points within a certain distance from a possible vertical plane, as opposed to outlier points, which are outside that distance. On the inlier points, a least squares method may be performed in order to determine the position of the vertical plane. The vertical plane may be even more specifically determined, as will be described further below, for example in relation to fig. 8.
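The RANSAC-plus-least-squares idea can be sketched for the simplified case of a plane perpendicular to the y axis; the real algorithm fits a full, possibly tilted plane, and the tolerance and iteration count below are assumptions:

```python
import numpy as np

def ransac_vertical_plane(points, iters=200, tol=0.05, rng=None):
    """Simplified RANSAC fit of the plane y = d (normal along the axis
    perpendicular to the truck-driving direction): repeatedly hypothesise d
    from a random point, count inliers within tol, then refine d as the
    least-squares (mean) y of the best inlier set."""
    if rng is None:
        rng = np.random.default_rng(0)
    y = points[:, 1]
    best_inliers = np.zeros(len(y), dtype=bool)
    for _ in range(iters):
        d = y[rng.integers(len(y))]          # 1-point plane hypothesis
        inliers = np.abs(y - d) < tol        # consensus set
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return float(y[best_inliers].mean())     # least-squares refinement

# 80 points on the truck side at y ~ 3.0 plus 20 scattered outliers
data_rng = np.random.default_rng(1)
side = np.column_stack([data_rng.uniform(0, 10, 80),
                        3.0 + data_rng.normal(0, 0.01, 80),
                        data_rng.uniform(-3, 0, 80)])
noise = np.column_stack([data_rng.uniform(0, 10, 20),
                         data_rng.uniform(0, 2, 20),
                         data_rng.uniform(-3, 0, 20)])
d = ransac_vertical_plane(np.vstack([side, noise]))
```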
[00037] After the vertical plane has been determined, an original image, e.g. a two-dimensional, 2D, image, captured of the truck 20 simultaneously with the 3D information-carrying image from which the 3D point cloud of fig. 4 was determined, is warped 210 around the determined vertical plane 120 so that the 2D image seems to have been captured from a virtual camera positioned at a position substantially horizontal to the first vertical side of the truck at a distance from the determined vertical plane, and captured in a virtual camera direction perpendicular to the vertical plane. The warping is based on a position of the camera that captured the 2D image, on a direction in which the 2D image was captured, on the virtual camera position and on the virtual camera direction. The camera that captured the 2D image may be the first camera, i.e. the camera that captured the 3D information-carrying image, but it may also be another camera.
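A warp of this kind is conventionally expressed as a plane-induced homography between the real and the virtual camera. The sketch below illustrates that standard construction, not the patented implementation; the intrinsic matrices, the pose convention and the function names are assumptions.

```python
import numpy as np

def plane_homography(K_src, K_virt, R, t, n, d):
    """Plane-induced homography mapping source-camera pixels to the
    virtual camera. (R, t) take source-camera coordinates to
    virtual-camera coordinates (x_v = R @ x_s + t), and the plane
    satisfies n . x = d in source-camera coordinates, so for plane
    points x_v = (R + t n^T / d) @ x_s."""
    H = K_virt @ (R + np.outer(t, n) / d) @ np.linalg.inv(K_src)
    return H / H[2, 2]

def warp_point(H, uv):
    """Map one pixel (u, v) through the homography H."""
    p = H @ np.array([uv[0], uv[1], 1.0])
    return p[:2] / p[2]
```

For any 3D point lying on the plane, projecting into the source camera and mapping through H gives the same pixel as projecting directly into the virtual camera, which is what makes details in the vertical plane appear perspective-free in the warped image.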
[00038] Fig. 6 shows a photo of a truck 20 driving through a portal such as the portal 1. Fig. 7 shows the image of fig. 6 warped around a vertical plane determined from one or more 3D information-carrying images captured at the same time as the photo, as described in the embodiment above. Even though the exemplary image of fig. 6 was actually not captured simultaneously with the 3D information-carrying image from which the 3D point cloud of fig. 4 was determined (for example, the truck of fig. 4 is driving in a different direction than the truck of fig. 6), it is to be understood that the method is to be performed for 2D images (e.g. photos) captured simultaneously with the 3D images, or at least simultaneously with the time point for which the 3D point cloud is determined. As can be seen when comparing the warped image of fig. 7, “captured” from the virtual camera, with the photo of fig. 6, the perspective projection of details of the image that are in the vertical plane has been removed. For example, the struts 32 of the truck, which hold the timber load in position, point vertically and are parallel in fig. 7, compared to pointing obliquely upwards in fig. 6.
[00039] To be able to correctly measure the size of the timber load from a warped image, it is important that the vertical plane represents the first side of the truck as correctly as possible. For this reason, the step of determining 208 (fig. 3) the vertical plane can be further developed, as is shown in fig. 8. According to the embodiment of fig. 8, the determining 208 of the vertical plane comprises partitioning 222 points of the 3D point cloud into a plurality of groups depending on each point’s position in a direction perpendicular to the truck-driving direction, i.e. the points are grouped depending on their position on the y-axis. Thereafter, the one group of the plurality of groups is selected 224 that has more points of the 3D point cloud than each of the other of the plurality of groups. Then, the substantially vertical plane is determined 226 based on the points of the 3D point cloud that are within the selected group, the vertical plane being determined as the most common denominator of most points of the selected group. The different groups may cover an equal distance along the y-axis. For example, the groups may each cover a distance on the y-axis that represents 0.1 m in reality. Hereby, the less relevant points of the 3D point cloud can be sorted out and the determination of the vertical plane can be based on only the most relevant points. As a result, a vertical plane even more in line with the first vertical side 20a of the truck is achieved compared to the embodiment of fig. 3. The points of the 3D point cloud may be partitioned into groups based on a histogram of the points.
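The histogram-based partitioning and group selection described above can be sketched as follows; a minimal illustration only, assuming 0.1 m slabs along the y-axis and the function name used here.

```python
import numpy as np

def densest_y_slab(points, bin_width=0.1):
    """Partition points into equal-width slabs along the y-axis
    (perpendicular to the truck-driving direction) and return the
    points of the fullest slab together with its y-interval."""
    y = points[:, 1]
    edges = np.arange(y.min(), y.max() + bin_width, bin_width)
    counts, edges = np.histogram(y, bins=edges)
    i = int(np.argmax(counts))
    mask = (y >= edges[i]) & (y < edges[i + 1])
    return points[mask], (float(edges[i]), float(edges[i + 1]))
```

The plane fit of step 226 would then run on the returned subset only, so that points far from the truck side cannot pull the plane away from the first vertical side.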
[00040] According to an embodiment, the vertical plane of the 3D point cloud may be more precisely determined 208 (fig. 3) by taking into account information from more than one 3D information-carrying image captured of the truck 20 from the same camera at subsequent time points. By using stereo navigation information that determines how the truck has moved between the more than one 3D information-carrying images, more image information of the truck is achieved, resulting in more points in the 3D point cloud. As there are more points in the 3D point cloud, the vertical plane can be determined with more precision, i.e. a better representation of the first vertical side 20a of the truck 20 is achieved. Such an embodiment is described in fig. 9, wherein the determination is performed based on a first 3D information-carrying image captured by the first camera 11 at a first time point and a second 3D information-carrying image captured by the first camera at a second time point earlier than the first time point. According to this embodiment, the method further comprises, in the first 3D information-carrying image, detecting 232 first feature positions of a plurality of characteristic features of the truck 20, and in the second 3D information-carrying image, detecting 234 second feature positions of the plurality of characteristic features. Thereafter, movement data of the truck 20 from the second time point to the first time point is determined 236 based on the detected first and second feature positions of the plurality of characteristic features. The image processing 204 (fig. 3) of images into a 3D point cloud then comprises image processing of the first and the second 3D information-carrying images into a 3D point cloud, taking into account the determined movement data. Movement data, i.e. how the truck has moved from the second time point to the first time point, may be determined using Procrustes analysis.
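The Procrustes step, given feature positions already matched between the two time points, amounts to the classical orthogonal Procrustes (Kabsch) rigid alignment. The sketch below shows that textbook construction, under the assumption of noise-free, pre-matched 3D feature positions; it is not the patented implementation.

```python
import numpy as np

def rigid_motion(A, B):
    """Estimate rotation R and translation t with b_i ~ R @ a_i + t
    (orthogonal Procrustes / Kabsch), from matched Nx3 feature
    positions A (second, earlier time point) and B (first time point)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    # correct a possible reflection so that R is a proper rotation
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = cb - R @ ca
    return R, t
```

The recovered (R, t) is the movement data used to merge the two 3D point clouds into one denser cloud before the plane fit.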
[00041] According to another embodiment, when determining the vertical plane, a number of vertical planes may be tested that are slightly angled, e.g. up to 10 degrees, to the predetermined truck-driving direction in order to find a better approximation of the first vertical side of the truck. A plane that is slightly angled to the truck-driving direction may give a better approximation, as the truck may be driven slightly askew through the portal.
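Such an angular sweep can be sketched as below; a simplified illustration that only varies the plane's yaw about a nominal plane y = y0, with the 10-degree limit, the step count and the inlier tolerance as assumptions.

```python
import numpy as np

def best_angled_plane(points, y0, tol=0.05, max_deg=10.0, steps=21):
    """Test planes angled up to max_deg degrees to the truck-driving
    direction (x-axis) around a nominal plane y = y0, and return the
    (inlier count, angle in degrees) with the most inlier points."""
    best = (-1, 0.0)
    for ang in np.radians(np.linspace(-max_deg, max_deg, steps)):
        # plane y = y0 + tan(ang) * x, i.e. unit normal (-sin, cos, 0)
        n = np.array([-np.sin(ang), np.cos(ang), 0.0])
        dist = np.abs(points @ n - y0 * np.cos(ang))
        count = int((dist < tol).sum())
        if count > best[0]:
            best = (count, float(np.degrees(ang)))
    return best
```

For a truck driven askew, the angled candidate collects far more inliers than the axis-aligned one, which is the criterion for preferring it.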
[00042] In an embodiment, points of the 3D point cloud that are determined to belong to the chassis of the driving compartment of the truck are deleted before determining the vertical plane. The determination may be performed by comparing the 3D point cloud to 3D point cloud models of trucks. Hereby, a better approximation of the side of the actual load can be determined.
[00043] In another embodiment, points of the 3D point cloud that are determined to belong to the wheels of the truck are deleted from the 3D point cloud before determining the vertical plane. The determination may be performed by deleting all points below a certain level above the surface 7 on which the truck moves. Before deleting points of the 3D point cloud belonging to the wheels or to the chassis of the driving compartment of the truck, the system may determine the size of the load. If the load is comparatively small, the points of the wheels may be kept, since the vertical plane may not be determined properly for a small load, as there may be too few 3D points.
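The height-threshold deletion with the small-load safeguard can be sketched as follows; the threshold of 1.0 m, the minimum point count and the convention that the road surface lies at z = 0 are assumptions for illustration.

```python
import numpy as np

def strip_wheel_points(points, wheel_height=1.0, min_load_points=200):
    """Drop points below wheel_height above the road surface (z = 0)
    before plane fitting, but keep them when the remaining cloud is
    too small to support a reliable plane estimate."""
    kept = points[points[:, 2] >= wheel_height]
    if len(kept) < min_load_points:
        return points  # small load: too few 3D points, keep the wheels
    return kept
```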
[00044] In another embodiment, at least one of the cameras 11-17 is arranged to detect the driving direction of the truck so as to exclude from the method any truck that is driving through the portal in the wrong direction, i.e. against the general driving direction a. Such detection may be performed e.g. by comparing two images taken at subsequent time points by the same camera.
[00045] In one embodiment, which is illustrated in fig. 10, there are many images captured of the first side 20a of the truck 20 over time as the truck drives through the portal 1, the different images showing different parts of the truck. In fig. 10, image a is captured first, as the truck drives through the portal, followed by image b, image c and finally image d. The four images mentioned are for illustrative purposes; normally many more than four images are captured. Fig. 10 shows an example of a truck 20 comprising a first trailer 21 carrying a load and a second trailer 22 carrying a load. The four images a, b, c, d are processed as described in any of the embodiments of the invention into separate warped images. In order to automatically select which warped image to use for determining the size of the respective loads of the first and the second trailer, an embodiment as described in fig. 11 may be used. Herein, the wheels 25, 26, 27, 28, 29 of the truck 20 are detected 242 in the four images a, b, c, d by the computer system 30. As the four images have been warped over the determined vertical plane so that they appear as if they were captured from a position horizontal to the first side 20a of the truck, the wheels appear circular, as can be seen in the warped photo of fig. 7. The wheels can then be detected in the warped images using a circle detector algorithm, such as a Circle Hough Transform, on the four warped images. When the wheels have been detected, their positions in the respective images a, b, c and d are compared 244 to predetermined, i.e. pre-stored, preferred wheel positions. Thereafter, the image with the wheel positions having most resemblance to the predetermined wheel positions is selected 246. If possible, each load is to be shown in one image. In order to show the load of the first trailer 21, a wheel position resembling the wheel position in image b has been pre-stored.
Consequently, the system will automatically select image b for showing the load of the first trailer 21. In a similar way, image d is automatically selected for showing the load of the second trailer 22.
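With the circle detection itself (e.g. a Circle Hough Transform) treated as given, the comparison and selection steps 244 and 246 can be sketched as below. The scoring rule (sum of distances between sorted wheel x-positions) and the function name are assumptions; the patent only requires "most resemblance" to the pre-stored positions.

```python
import numpy as np

def select_image(wheel_sets, preferred):
    """Pick the warped image whose detected wheel positions (here 1D
    x-coordinates per image) most resemble a pre-stored preferred
    configuration; returns the index of the selected image."""
    preferred = np.sort(np.asarray(preferred, dtype=float))
    def score(detected):
        detected = np.asarray(detected, dtype=float)
        if len(detected) != len(preferred):
            return np.inf  # wrong wheel count: cannot match this image
        return float(np.abs(np.sort(detected) - preferred).sum())
    return int(np.argmin([score(w) for w in wheel_sets]))
```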
[00046] In reality, there are often a number of different types of trucks in use. This means that it may be necessary to first determine the type of truck before carrying out this embodiment. In one alternative, the truck driver selects which type of truck he is driving, and inputs the truck type selection via a user interface of the system. Alternatively, an automatic detection of the type of truck may be performed, e.g. by comparing truck images with pre-stored truck images. In an embodiment, the warped images may each be warped around a vertical plane determined from a plurality of images captured at different time points.
[00047] According to another aspect, the captured images may be used to automatically determine the volume of the truck load. For this purpose, 3D information-carrying images of the first vertical side 20a of the truck, the horizontal upper side 20c of the truck and the second vertical side 20b (not shown) of the truck, opposite the first side, are captured and image processed into one or more 3D point clouds. Then a first vertical plane is determined for the first vertical side 20a, based on the one or more 3D point clouds, as described above. In a similar way, a horizontal plane is determined for the horizontal upper side 20c and a second vertical plane is determined for the second vertical side 20b of the truck. The computer system then determines the volume of the truck load by integration, using the positions of the determined first and second vertical planes and the horizontal plane. Length, width and height are determined from the 3D point cloud, and it is verified that the 3D box defined by this length, width and height corresponds to the totally integrated volume.
[00048] When determining which parts of the 3D point clouds show the load of the truck, points of the 3D point cloud that do not show the load are determined and deleted from the 3D point cloud. Input data is a roughly segmented truck where each load has a defined bounding box. By using such a bounding box, parts of the 3D point cloud that are apparently outside the load can be deleted. Then, according to an example, a machine learning algorithm is used on the remaining points of the 3D point cloud to further filter out the load. This may be performed as follows: the computer is fed with a large number of different images of trucks, so-called training images. In the training images it is defined what is the load and what is not the load, such as the chassis of the truck. By using machine learning algorithms on the training images, the computer system can learn what is to be determined as the load and what is not.
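The bounding-box pre-filter that precedes the learned classifier can be sketched as a plain axis-aligned clip; the function name and the axis-aligned-box assumption are illustrative only.

```python
import numpy as np

def clip_to_bounding_box(points, box_min, box_max):
    """Delete points of the 3D point cloud that are apparently outside
    a roughly segmented load's bounding box, before a trained
    classifier refines the load/non-load split."""
    box_min = np.asarray(box_min, dtype=float)
    box_max = np.asarray(box_max, dtype=float)
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return points[inside]
```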
[00049] Fig. 12 shows an image-processing computer system 30 for facilitating estimation of volumes of load of a truck driving on a surface. The computer system 30 comprises a processor 303 and a memory 304. Said memory 304 contains instructions executable by said processor, whereby the computer system 30 is operative for performing any of the embodiments of the inventive method described in this disclosure. The computer system may further comprise a communication unit 302 by which the computer system can communicate with one or more of the cameras 11-16 of the portal 1 of fig. 1. The processor 303 and the memory 304 may be arranged in a sub-arrangement 301. The sub-arrangement 301 may be a micro-processor and adequate software and storage therefor, a Programmable Logic Device, PLD, or other electronic component(s)/processing circuit(s) configured to perform the methods mentioned above. The instructions executable by said processor 303 may be arranged as a computer program 305 stored e.g. in said memory 304. The computer program 305 may be arranged such that when its instructions are run in the processor 303, the instructions cause the computer system 30 to perform the steps described in any of the described embodiments.
[00050] Although the description above contains a plurality of specificities, these should not be construed as limiting the scope of the concept described herein but as merely providing illustrations of some exemplifying embodiments of the described concept. It will be appreciated that the scope of the presently described concept fully encompasses other embodiments which may become obvious to those skilled in the art, and that the scope of the presently described concept is accordingly not to be limited. Reference to an element in the singular is not intended to mean "one and only one" unless explicitly so stated, but rather "one or more." All structural and functional equivalents to the elements of the above-described embodiments that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed hereby. Moreover, it is not necessary for an apparatus or method to address each and every problem sought to be solved by the presently described concept, for it to be encompassed hereby. In the exemplary figures, a broken line generally signifies that the feature within the broken line is optional.

Claims (15)

1. An image-processing method performed by a computer system (30) for facilitating estimation of volumes of load of a truck (20) moving on a surface (7), the method comprising: image processing (204) of one or more three-dimensional, 3D, information-carrying images captured of the truck (20) including a first side (20a) of the truck, by a first camera (11), into a 3D point cloud (110) positioned into a coordinate system defining a horizontal plane of the surface on which the truck (20) moves and a predetermined truck-driving direction (a), the one or more 3D information-carrying images comprising a first 3D information-carrying image captured at a first time point; determining (208) a vertical plane (120) of the 3D point cloud (110), the vertical plane representing the first side of the truck, the vertical plane (120) being substantially perpendicular to the horizontal plane of the surface (7) and extending in the predetermined truck-driving direction (a), wherein the determined vertical plane (120) is the most common denominator of a majority of points of the 3D point cloud (110), warping (210) a first 2D image captured of the truck (20) at the first time point around the determined vertical plane (120) so that the first 2D image seems to have been captured from a virtual camera positioned at a position substantially horizontal to the truck load at a distance from the determined vertical plane, and captured in a virtual camera direction perpendicular to the vertical plane, the warping being based on a position of the camera that captured the first 2D image and on a direction in which the first 2D image was captured and on the virtual camera position and the virtual camera direction.
2. Method according to claim 1, wherein the first camera (11) is a stereo camera, and each of the one or more 3D information-carrying images comprises a pair of images simultaneously captured by two different image sensors of the first stereo camera (11).
3. Method according to claim 1 or 2, wherein the determining (208) comprises: partitioning (222) points of the 3D point cloud into a plurality of groups depending on each point’s position in a direction perpendicular to the truck-driving direction, selecting (224) a group of the plurality of groups, the selected group having more points of the 3D point cloud than each of the other of the plurality of groups, and determining (226) the substantially vertical plane based on the points of the 3D point cloud that are within the selected group, wherein the vertical plane is the most common denominator of most points of the selected group.
4. Method according to any of the preceding claims, wherein the one or more 3D information-carrying images further comprises a second 3D information-carrying image captured by the first camera (11) at a second time point earlier than the first time point, the method further comprising, in the first 3D information-carrying image, detecting (232) first feature positions of a plurality of characteristic features of the truck (20), and in the second 3D information-carrying image, detecting (234) second feature positions of the plurality of characteristic features, determining (236) movement data of the truck (20) from the second time point to the first time point based on the detected first and second feature positions of the plurality of characteristic features, wherein the image processing (204) of the one or more 3D information-carrying images comprises image processing of the first and the second 3D information-carrying images into a 3D point cloud, taking into account the determined movement data.
5. Method according to any of claims 1-3, wherein the one or more 3D information-carrying images further comprises a second 3D information-carrying image captured by the first camera at a second time point earlier than the first time point, wherein: the image processing (204) comprises image processing of the first 3D information-carrying image into a first 3D point cloud positioned into the coordinate system and image processing of the second 3D information-carrying image into a second 3D point cloud positioned into the coordinate system; the determining (208) comprises determining a first vertical plane of the first 3D point cloud, the first vertical plane being the most common denominator of most points of the first 3D point cloud, and determining a second vertical plane of the second 3D point cloud, the second vertical plane being the most common denominator of most points of the second 3D point cloud, wherein the first 2D image is warped (210) around the first vertical plane, the method further comprises: warping a second 2D image captured of the truck at the second time point around the determined second vertical plane so that the second 2D image seems to have been captured from the virtual camera in the virtual camera direction.
6. Method according to claim 5, further comprising: detecting (242) positions of wheels of the truck in the warped first 2D image and in the warped second 2D image, comparing (244) the detected wheel positions in the warped first 2D image and in the warped second 2D image with predetermined preferred wheel positions, and selecting (246) the one of the warped first 2D image and the warped second 2D image that has most resemblance with any of the predetermined wheel positions.
7. Method according to claim 5 or 6, wherein the one or more 3D information-carrying images further comprises a third image captured by the first camera at a third time point earlier than the second time point, the method further comprising, in the first 3D information-carrying image, detecting first feature positions of a plurality of characteristic features of the truck, and in the second 3D information-carrying image, detecting second feature positions of the plurality of characteristic features, in the third image, detecting third feature positions of the plurality of characteristic features, determining movement data of the truck from the third time point to the second time point to the first time point based on the detected first, second and third feature positions of the plurality of characteristic features, wherein the image processing (204) of the one or more 3D information-carrying images comprises image processing of the first, second and third images into the first 3D point cloud, taking into account the determined movement data, and image processing of the second and third images into the second 3D point cloud, taking into account the determined movement data.
8. Method according to any of the preceding claims, wherein the determining (208) of the vertical plane is performed based on a RANSAC algorithm.
9. Method according to any of the preceding claims, further comprising: deleting from the 3D point cloud points that are determined not to belong to the load.
10. Method according to any of the preceding claims, wherein when determining the vertical plane of the 3D point cloud, the vertical plane can be tilted up to 10 degrees in relation to the predetermined truck-driving direction.
11. An image-processing computer system (30) for facilitating estimation of volumes of load of a truck driving on a surface, the truck comprising the load, the computer system (30) comprising a processor (303) and a memory (304), said memory containing instructions executable by said processor, whereby the computer system (30) is operative for: image processing of one or more three-dimensional, 3D, information-carrying images captured of the truck (20) including a first side (20a) of the truck, by a first camera (11), into a 3D point cloud (110) positioned into a coordinate system defining a horizontal plane of the surface on which the truck (20) moves and a predetermined truck-driving direction (a), the one or more 3D information-carrying images comprising a first 3D information-carrying image captured at a first time point; determining a vertical plane (120) of the 3D point cloud (110), the vertical plane representing the first side of the truck, the vertical plane (120) being substantially perpendicular to the horizontal plane of the surface (7) and extending in the predetermined truck-driving direction (a), wherein the determined vertical plane (120) is the most common denominator of a majority of points of the 3D point cloud (110), and warping a first 2D image captured of the truck (20) at the first time point around the determined vertical plane (120) so that the first 2D image seems to have been captured from a virtual camera positioned at a position substantially horizontal to the truck load at a distance from the determined vertical plane, and captured in a virtual camera direction perpendicular to the vertical plane, the warping being based on a position of the camera that captured the first 2D image and on a direction in which the first 2D image was captured and on the virtual camera position and the virtual camera direction.
12. Image-processing computer system (30) according to claim 11, operative for determining the vertical plane (120) of the 3D point cloud (110) by: partitioning points of the 3D point cloud into a plurality of groups depending on each point’s position in a direction perpendicular to the truck-driving direction, selecting a group of the plurality of groups, the selected group having more points of the 3D point cloud than each of the other of the plurality of groups, and determining the substantially vertical plane based on the points of the 3D point cloud that are within the selected group, wherein the vertical plane is the most common denominator of most points of the selected group.
13. Image-processing computer system (30) according to claim 11 or 12, wherein the one or more 3D information-carrying images further comprises a second 3D information-carrying image captured by the first camera (11) at a second time point earlier than the first time point, the image-processing computer system (30) further being operative for: in the first 3D information-carrying image, detecting first feature positions of a plurality of characteristic features of the truck (20), and in the second 3D information-carrying image, detecting second feature positions of the plurality of characteristic features, determining movement data of the truck (20) from the second time point to the first time point based on the detected first and second feature positions of the plurality of characteristic features, and wherein the image-processing computer system (30) is operative for image processing of the one or more 3D information-carrying images by image processing of the first and the second 3D information-carrying images into a 3D point cloud, taking into account the determined movement data.
14. Image-processing computer system (30) according to any of claims 11-13, wherein the one or more 3D information-carrying images further comprises a second 3D information-carrying image captured by the first camera at a second time point earlier than the first time point, the image-processing computer system (30) being operative for: the image processing of the one or more 3D information-carrying images by image processing of the first 3D information-carrying image into a first 3D point cloud positioned into the coordinate system and image processing of the second 3D information-carrying image into a second 3D point cloud positioned into the coordinate system, and the determining of the vertical plane by determining a first vertical plane of the first 3D point cloud, the first vertical plane being the most common denominator of most points of the first 3D point cloud, and determining a second vertical plane of the second 3D point cloud, the second vertical plane being the most common denominator of most points of the second 3D point cloud, and the warping of the first 2D image by warping of the first 2D image around the first vertical plane, the image-processing computer system further being operative for: warping a second 2D image captured of the truck at the second time point around the determined second vertical plane so that the second 2D image seems to have been captured from the virtual camera in the virtual camera direction, detecting positions of wheels of the truck in the warped first 2D image and in the warped second 2D image, comparing the detected wheel positions in the warped first 2D image and in the warped second 2D image with predetermined preferred wheel positions, and selecting the one of the warped first 2D image and the warped second 2D image that has most resemblance with any of the predetermined wheel positions.
15. A computer program (305) comprising instructions configured for performing any of the methods of claims 1-10, when the computer program is loaded into an image-processing computer system (30).
SE1751405A 2017-11-14 2017-11-14 Method and image processing system for facilitating estimation of volumes of load of a truck SE541083C2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
SE1751405A SE541083C2 (en) 2017-11-14 2017-11-14 Method and image processing system for facilitating estimation of volumes of load of a truck
PCT/SE2018/051075 WO2019098901A1 (en) 2017-11-14 2018-10-22 Method and image processing system for facilitating estimation of volumes of load of a truck

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
SE1751405A SE541083C2 (en) 2017-11-14 2017-11-14 Method and image processing system for facilitating estimation of volumes of load of a truck

Publications (2)

Publication Number Publication Date
SE1751405A1 SE1751405A1 (en) 2019-04-02
SE541083C2 true SE541083C2 (en) 2019-04-02

Family

ID=65899159

Family Applications (1)

Application Number Title Priority Date Filing Date
SE1751405A SE541083C2 (en) 2017-11-14 2017-11-14 Method and image processing system for facilitating estimation of volumes of load of a truck

Country Status (2)

Country Link
SE (1) SE541083C2 (en)
WO (1) WO2019098901A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2020404468B2 (en) * 2019-12-17 2023-02-23 Motion Metrics International Corp. Apparatus for analyzing a payload being transported in a load carrying container of a vehicle
CN111121640B (en) * 2019-12-18 2021-10-15 杭州明度智能科技有限公司 Vehicle size detection method and device
KR102456468B1 (en) * 2020-01-17 2022-11-07 주식회사 엠코프 Method and system for measuring a volume of aggregate loaded in a truck, and system for managing aggregate

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7277187B2 (en) * 2001-06-29 2007-10-02 Quantronix, Inc. Overhead dimensioning system and method
EP2439487A1 (en) * 2010-10-06 2012-04-11 Sick Ag Volume measuring device for mobile objects
DE102013015833A1 (en) * 2013-09-24 2015-03-26 Fovea Ug (Haftungsbeschränkt) Method for determining the amount of wood in wood bumps
US20160061591A1 (en) * 2014-08-28 2016-03-03 Lts Metrology, Llc Stationary Dimensioning Apparatus
WO2017042747A2 (en) * 2015-09-11 2017-03-16 Mer Mec S.P.A. An apparatus for the determination of the features of at least a moving load
US20170227645A1 (en) * 2016-02-04 2017-08-10 Symbol Technologies, Llc Methods and systems for processing point-cloud data with a line scanner
US20170228885A1 (en) * 2014-08-08 2017-08-10 Cargometer Gmbh Device and method for determining the volume of an object moved by an industrial truck
US20170280125A1 (en) * 2016-03-23 2017-09-28 Symbol Technologies, Llc Arrangement for, and method of, loading freight into a shipping container
EP3232404A1 (en) * 2016-04-13 2017-10-18 SICK, Inc. Method and system for measuring dimensions of a target object


Also Published As

Publication number Publication date
SE1751405A1 (en) 2019-04-02
WO2019098901A1 (en) 2019-05-23

Similar Documents

Publication Publication Date Title
US10825198B2 (en) 3 dimensional coordinates calculating apparatus, 3 dimensional coordinates calculating method, 3 dimensional distance measuring apparatus and 3 dimensional distance measuring method using images
US11361469B2 (en) Method and system for calibrating multiple cameras
CN109029253B (en) Package volume measuring method and system, storage medium and mobile terminal
CN112270713B (en) Calibration method and device, storage medium and electronic device
US10424081B2 (en) Method and apparatus for calibrating a camera system of a motor vehicle
CN109801333B (en) Volume measurement method, device and system and computing equipment
US10321116B2 (en) Method and system for volume determination using a structure from motion algorithm
DE102012021375B4 (en) Apparatus and method for detecting a three-dimensional position and orientation of an article
EP3496035B1 (en) Using 3d vision for automated industrial inspection
JP6649796B2 (en) Object state specifying method, object state specifying apparatus, and carrier
CN112902874B (en) Image acquisition device and method, image processing method and device and image processing system
CN112132523B (en) Method, system and device for determining quantity of goods
WO2019098901A1 (en) Method and image processing system for facilitating estimation of volumes of load of a truck
KR102073468B1 (en) System and method for scoring color candidate poses against a color image in a vision system
CN107504917B (en) Three-dimensional size measuring method and device
CN112489106A (en) Video-based vehicle size measuring method and device, terminal and storage medium
CN112378333B (en) Method and device for measuring warehoused goods
KR20180098945A (en) Method and apparatus for measuring speed of vehicle by using fixed single camera
CN116309882A (en) Tray detection and positioning method and system for unmanned forklift application
KR102556759B1 (en) Apparatus for Camera-Lidar Calibration and method thereof
KR102420856B1 (en) Method and Device for Examining the Existence of 3D Objects Using Images
EP3629292A1 (en) Reference point selection for extrinsic parameter calibration
US20180005049A1 (en) Determining the position of an object in a scene
CN112991372B (en) 2D-3D camera external parameter calibration method based on polygon matching
KR20150096128A (en) Auto Calibration Method for Virtual Camera based on Mobile Platform