CN109146952B - Method, device and computer readable storage medium for estimating free volume of carriage - Google Patents


Info

Publication number
CN109146952B
CN109146952B (application CN201811036587.6A)
Authority
CN
China
Prior art keywords
dimensional model
image
carriage
coordinate system
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811036587.6A
Other languages
Chinese (zh)
Other versions
CN109146952A (en)
Inventor
吕晓磊
Current Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201811036587.6A
Publication of CN109146952A
Application granted
Publication of CN109146952B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30248: Vehicle exterior or interior
    • G06T 2207/30268: Vehicle interior

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The disclosure provides a method and a device for estimating the free volume of a carriage, and a computer-readable storage medium, relating to the technical field of logistics and warehousing. The method for estimating the free volume of a carriage comprises: recognizing an image of the carriage captured by a camera to determine the carriage model, and obtaining a three-dimensional model of the carriage according to that model; obtaining free-depth information of the carriage using the image and the three-dimensional model; and estimating the free volume of the carriage using the free-depth information and the three-dimensional model. The method and device enable automatic estimation of the free volume inside a carriage and improve the efficiency of the estimation.

Description

Method, device and computer readable storage medium for estimating free volume of carriage
Technical Field
The present disclosure relates to the field of logistics storage technologies, and in particular, to a method and an apparatus for estimating a carriage free volume, and a computer-readable storage medium.
Background
Measuring the empty volume of a truck bed is critical to the logistics industry. Estimating the empty volume of a freight car means that, after part of the freight has been loaded, one must estimate how much additional freight the remaining empty space can hold in order to plan the next loading.
Conventional estimation typically relies on manual measurement. An operator measures the length, width and height of the empty part of the freight wagon compartment with a tape measure, laser range finder or similar device, and then calculates the remaining volume of the compartment.
Such manual estimation requires a certain amount of operator experience and is time-consuming and labor-intensive. In addition, trucks of different models may require different measurement and estimation procedures, so overall work efficiency is low.
Disclosure of Invention
The technical problem addressed by the present disclosure is how to estimate the free volume of a carriage automatically, thereby improving the efficiency of the estimation.
According to one aspect of the embodiments of the present disclosure, there is provided a method of estimating the free volume of a carriage, comprising: recognizing an image of the carriage captured by a camera to determine the carriage model, and obtaining a three-dimensional model of the carriage according to that model; obtaining free-depth information of the carriage using the image and the three-dimensional model; and estimating the free volume of the carriage using the free-depth information and the three-dimensional model.
In some embodiments, using the image and the three-dimensional model, obtaining the free depth information of the car comprises: acquiring a rotation and translation relation between a coordinate system of the three-dimensional model and a camera coordinate system by using the image and the three-dimensional model; and acquiring the idle depth information of the carriage by using the image, the three-dimensional model and the rotation and translation relation.
In some embodiments, obtaining the rotation-translation relation between the coordinate system of the three-dimensional model and the camera coordinate system using the image and the three-dimensional model comprises: constructing a first objective function whose value is the number of coordinate points on the three-dimensional model that, after projection onto the image, coincide with pixels whose gradient value exceeds a preset threshold, and whose variables are the rotation matrix and the translation matrix from the model coordinate system to the camera coordinate system; and solving for the rotation matrix and translation matrix that maximize the first objective function.
In some embodiments, R and T are determined so as to maximize the following objective function:

$$\frac{\sum_{i} y\big(\Pi(R x_i + T)\big)}{\sum_{i} 1}$$

where R and T denote the rotation matrix and the translation matrix from the coordinate system of the three-dimensional model to the camera coordinate system, x_i denotes the coordinates of the i-th point on the three-dimensional model in the model coordinate system, Π(·) denotes the perspective projection of a point in the camera coordinate system onto the image using the camera's perspective projection matrix, y denotes the binary image obtained by line-segment detection on the captured image, y(Π(Rx_i + T)) denotes the pixel value of y at coordinate Π(Rx_i + T), the numerator sums the pixel values, and the denominator counts the number of projected coordinates.
In some embodiments, obtaining the free-depth information of the car using the image, the three-dimensional model and the rotation-translation relation comprises: constructing a second objective function whose value is obtained by projecting, for each edge of the three-dimensional model running in the cargo-depth direction, the coordinates from the carriage door to the starting point occluded by the cargo onto the image, and counting the projected points that coincide with pixels whose gradient value exceeds a preset threshold; the variable of the second objective function is the depth coordinate of that starting point on the three-dimensional model; and solving for the depth coordinate that maximizes the second objective function.
In some embodiments, D_lb, D_lu, D_ru, D_rb are determined so as to maximize the following objective function:

$$\sum_{j\in\{lb,\,lu,\,ru,\,rb\}}\Bigg(\sum_{x_i\in[0,\,D_j]} y\big(\Pi(R x_i + T)\big)\;-\;\sum_{x_i\in[D_j,\,L]} y\big(\Pi(R x_i + T)\big)\Bigg)$$

subject to the constraints D_lu ≤ D_lb and D_ru ≤ D_rb, where R and T denote the rotation matrix and the translation matrix from the coordinate system of the three-dimensional model to the camera coordinate system, x_i denotes the coordinates of the i-th point on the three-dimensional model in the model coordinate system, Π(·) denotes the perspective projection of a point in the camera coordinate system onto the image using the camera's perspective projection matrix, y denotes the binary image obtained by line-segment detection on the captured image, y(Π(Rx_i + T)) denotes the pixel value of y at coordinate Π(Rx_i + T), D_lb, D_lu, D_ru, D_rb denote the depth coordinates on the three-dimensional model of the points, on the lower-left, upper-left, upper-right and lower-right edges of the carriage in the cargo-depth direction, at which occlusion by the cargo begins, j ranges over lb, lu, ru, rb, [0, D_j] denotes the coordinate points of the j-th edge from depth 0 to D_j, and [D_j, L] denotes the coordinate points of the j-th edge from depth D_j to the maximum depth L.
In some embodiments, estimating the free volume of the car using the free-depth information and the three-dimensional model comprises: averaging the depth coordinates to obtain a free-depth estimate of the carriage; determining the cross-sectional area of the carriage from the three-dimensional model, the cross-section being perpendicular to the depth direction; and multiplying the free-depth estimate by the cross-sectional area to obtain the free-volume estimate of the carriage.
In some embodiments, the method further comprises: the car is photographed by a camera from the outside of the door of the car so that the boundary of the image coincides with the outer surface of the car.
According to another aspect of the embodiments of the present disclosure, there is provided an apparatus for estimating the free volume of a carriage, comprising: a car model identification module configured to recognize the image of the car captured by the camera to obtain the car model; a three-dimensional model acquisition module configured to obtain a three-dimensional model of the compartment according to the car model; a depth information acquisition module configured to obtain the free-depth information of the carriage using the image and the three-dimensional model; and a cargo volume estimation module configured to estimate the free volume of the carriage using the free-depth information and the three-dimensional model.
In some embodiments, the depth information acquisition module is configured to: acquiring a rotation and translation relation between a coordinate system of the three-dimensional model and a camera coordinate system by using the image and the three-dimensional model; and acquiring the idle depth information of the carriage by using the image, the three-dimensional model and the rotation and translation relation.
In some embodiments, the depth information acquisition module is configured to: construct a first objective function whose value is the number of coordinate points on the three-dimensional model that, after projection onto the image, coincide with pixels whose gradient value exceeds a preset threshold, and whose variables are the rotation matrix and the translation matrix from the model coordinate system to the camera coordinate system; and solve for the rotation matrix and translation matrix that maximize the first objective function.
In some embodiments, the depth information acquisition module is configured to determine R and T so as to maximize the following objective function:

$$\frac{\sum_{i} y\big(\Pi(R x_i + T)\big)}{\sum_{i} 1}$$

where R and T denote the rotation matrix and the translation matrix from the coordinate system of the three-dimensional model to the camera coordinate system, x_i denotes the coordinates of the i-th point on the three-dimensional model in the model coordinate system, Π(·) denotes the perspective projection of a point in the camera coordinate system onto the image using the camera's perspective projection matrix, y denotes the binary image obtained by line-segment detection on the captured image, y(Π(Rx_i + T)) denotes the pixel value of y at coordinate Π(Rx_i + T), the numerator sums the pixel values, and the denominator counts the number of projected coordinates.
In some embodiments, the depth information acquisition module is configured to: construct a second objective function whose value is obtained by projecting, for each edge of the three-dimensional model running in the cargo-depth direction, the coordinates from the carriage door to the starting point occluded by the cargo onto the image, and counting the projected points that coincide with pixels whose gradient value exceeds a preset threshold, the variable of the second objective function being the depth coordinate of that starting point on the three-dimensional model; and solve for the depth coordinate that maximizes the second objective function.
In some embodiments, the depth information acquisition module is configured to determine D_lb, D_lu, D_ru, D_rb so as to maximize the following objective function:

$$\sum_{j\in\{lb,\,lu,\,ru,\,rb\}}\Bigg(\sum_{x_i\in[0,\,D_j]} y\big(\Pi(R x_i + T)\big)\;-\;\sum_{x_i\in[D_j,\,L]} y\big(\Pi(R x_i + T)\big)\Bigg)$$

subject to the constraints D_lu ≤ D_lb and D_ru ≤ D_rb, where R and T denote the rotation matrix and the translation matrix from the coordinate system of the three-dimensional model to the camera coordinate system, x_i denotes the coordinates of the i-th point on the three-dimensional model in the model coordinate system, Π(·) denotes the perspective projection of a point in the camera coordinate system onto the image using the camera's perspective projection matrix, y denotes the binary image obtained by line-segment detection on the captured image, y(Π(Rx_i + T)) denotes the pixel value of y at coordinate Π(Rx_i + T), D_lb, D_lu, D_ru, D_rb denote the depth coordinates on the three-dimensional model of the points, on the lower-left, upper-left, upper-right and lower-right edges of the carriage in the cargo-depth direction, at which occlusion by the cargo begins, j ranges over lb, lu, ru, rb, [0, D_j] denotes the coordinate points of the j-th edge from depth 0 to D_j, and [D_j, L] denotes the coordinate points of the j-th edge from depth D_j to the maximum depth L.
In some embodiments, the cargo volume estimation module is configured to: average the depth coordinates to obtain a free-depth estimate of the carriage; determine the cross-sectional area of the carriage from the three-dimensional model, the cross-section being perpendicular to the depth direction; and multiply the free-depth estimate by the cross-sectional area to obtain the free-volume estimate of the carriage.
In some embodiments, the apparatus further comprises an image capture module configured to photograph the car from outside its door with the camera, such that the boundary of the image coincides with the outer surface of the car.
According to another aspect of the disclosed embodiments, there is provided an apparatus for estimating a car free volume, including: a memory; and a processor coupled to the memory, the processor configured to perform the aforementioned method of estimating the car free volume based on instructions stored in the memory.
According to yet another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein the computer-readable storage medium stores computer instructions, which when executed by a processor, implement the aforementioned method of estimating a car empty volume.
The method and the device can realize automatic estimation of the idle volume in the carriage, and improve the efficiency of estimating the idle volume of the carriage.
Other features of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings needed in the description are briefly introduced below. The drawings described below show only some embodiments of the present disclosure; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 illustrates a flow diagram of a method of estimating a car free volume of some embodiments of the present disclosure.
Fig. 2 shows a schematic view of a shooting interface of the camera when shooting a car.
Fig. 3 shows a schematic view of a three-dimensional model of a boxcar.
Fig. 4 shows a schematic diagram of a binary image obtained by performing line segment detection on an image.
FIG. 5 shows a schematic diagram of estimating depth coordinates of a starting point within a boxcar occluded by cargo.
Fig. 6 shows a schematic structural diagram of an apparatus for estimating a car free volume according to some embodiments of the present disclosure.
Fig. 7 is a schematic structural diagram of an apparatus for estimating a car free volume according to further embodiments of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the drawings. The described embodiments are only a part of the embodiments of the present disclosure, not all of them; the description of at least one exemplary embodiment is merely illustrative and in no way limits the disclosure, its application, or uses. All other embodiments derived by those skilled in the art from the embodiments disclosed herein without creative effort fall within the protection scope of the present disclosure.
The inventor has observed that, with social and economic development, mobile terminal devices have become widespread. If the monocular camera of a mobile terminal can be used to photograph the loading state of a carriage and the free volume of the carriage determined automatically, work efficiency improves greatly. In view of this, the present disclosure provides a method for estimating the free volume of a wagon compartment based on a mobile-terminal monocular camera: the compartment model is identified automatically from a photograph, and the free volume is estimated automatically, without manually measuring the length, width and height of the empty part of the compartment.
Some embodiments of the disclosed method of estimating car free volume are first described in conjunction with fig. 1.
Fig. 1 illustrates a flow diagram of a method of estimating a car free volume of some embodiments of the present disclosure. As shown in fig. 1, the present embodiment includes steps S102 to S108.
In step S102, the carriage is photographed by a camera from outside the door of the carriage.
Fig. 2 shows a schematic view of the camera's shooting interface when photographing a car. As shown in fig. 2, when capturing an image of the car, the mobile device may display interactive prompts requiring the boundary of the image to coincide with the outer surface of the car. Shooting in this way yields an image of the loading state that meets the requirements of subsequent processing, which facilitates later optimization, reduces its computational load, ensures the accuracy of the estimate, and saves computation time. The prompt box also constrains the user's shooting angle, so that the rotation-translation relation between the model coordinate system and the camera coordinate system can be determined more quickly in later steps.
After the user finishes shooting, the image can be uploaded to the cloud for the subsequent processing steps, and once those steps finish, the mobile device receives the returned estimate.
In step S104, the image of the car captured by the camera is recognized to obtain the car model, and a three-dimensional model of the car is obtained according to the car model.
Since the image contains information such as the truck lights and the outer contour of the carriage, the model of the truck carriage in the image can be identified by deep learning. Specifically, conventional fine-grained classification may be used to judge the type of the truck.
In some embodiments, a bilinear convolutional network may be used for identification. The bilinear-CNN identification method is described in: Lin T Y, RoyChowdhury A, Maji S. Bilinear CNN models for fine-grained visual recognition[C]//Proceedings of the IEEE International Conference on Computer Vision. 2015: 1449-1457. A bilinear convolutional network pre-trained on a large number of boxcar images of different types yields fine-grained classification results of high accuracy.
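As an illustration of the bilinear pooling step at the heart of the cited B-CNN approach, the NumPy sketch below combines two convolutional feature maps into one descriptor via location-wise outer products, signed square root, and L2 normalization. The CNN feature extraction itself is omitted, and the random feature maps are purely illustrative; the patent names the technique but gives no implementation.

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Bilinear pooling as used in B-CNN fine-grained classifiers.

    feat_a: (c1, h, w) feature map from CNN stream A
    feat_b: (c2, h, w) feature map from CNN stream B
    Returns an L2-normalised descriptor of length c1 * c2.
    """
    c1, h, w = feat_a.shape
    c2 = feat_b.shape[0]
    a = feat_a.reshape(c1, h * w)               # (c1, n) locations as columns
    b = feat_b.reshape(c2, h * w)               # (c2, n)
    outer = a @ b.T                             # sum of per-location outer products
    vec = outer.flatten()
    vec = np.sign(vec) * np.sqrt(np.abs(vec))   # signed square-root step
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

# toy feature maps standing in for the two CNN streams
rng = np.random.default_rng(0)
desc = bilinear_pool(rng.standard_normal((8, 4, 4)),
                     rng.standard_normal((8, 4, 4)))
```

The descriptor would then feed a linear classifier over the truck-carriage types.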
The three-dimensional model of the boxcar can be a 3D mesh model. Fig. 3 shows a schematic view of such a model. Once the correspondence between compartment models and three-dimensional models has been established in advance, the three-dimensional model of the boxcar can be retrieved using the model identified by deep learning.
In step S106, the image and the three-dimensional model are used to acquire the free depth information of the car.
To obtain the free-depth information of the carriage, the rotation-translation relation between the coordinate system of the three-dimensional model and the camera coordinate system is first obtained using the image and the three-dimensional model; the free-depth information is then obtained using the image, the three-dimensional model and that relation. Both steps are described later.
In step S108, the empty volume of the car is estimated using the empty depth information and the three-dimensional model.
The free-depth information can be, for example, the depth coordinates D_lb, D_lu, D_ru, D_rb on the three-dimensional model of the starting points occluded by the cargo. Averaging these depth coordinates gives the free-depth estimate of the compartment, (D_lb + D_lu + D_ru + D_rb)/4. The cross-sectional area S of the car, with the cross-section perpendicular to the depth direction, can then be determined from the three-dimensional model. Finally, the free-depth estimate is multiplied by the cross-sectional area, giving the free-volume estimate (D_lb + D_lu + D_ru + D_rb) * S / 4.
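Under hypothetical depth values and cross-section dimensions (the numbers below are illustrative, not from the patent), this final arithmetic can be sketched as:

```python
def estimate_free_volume(d_lb, d_lu, d_ru, d_rb, cross_section_area):
    """Average the four recovered edge depths and multiply by the
    carriage cross-section area (perpendicular to the depth direction)."""
    mean_depth = (d_lb + d_lu + d_ru + d_rb) / 4.0
    return mean_depth * cross_section_area

# hypothetical depths in metres and a 2.4 m x 2.6 m cross-section
vol = estimate_free_volume(3.0, 2.8, 2.6, 3.2, 2.4 * 2.6)
```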
This embodiment provides a scheme for estimating the free volume of a carriage based on a mobile-terminal monocular camera. It enables automatic estimation of the free volume inside the carriage, without manual measurement of the length, width and height of the empty part, and improves the efficiency of the estimation.
The following describes in detail how to obtain the rotational-translational relationship between the coordinate system of the three-dimensional model and the camera coordinate system by using the image and the three-dimensional model.
Obtaining the rotation-translation relation between the coordinate system of the three-dimensional model and the camera coordinate system means computing the rotation matrix R and the translation matrix T of the boxcar model coordinate system relative to the camera coordinate system. Once R and T are known, the three-dimensional model can be aligned with the features of the image.
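A minimal sketch of the perspective projection Π(Rx + T) used throughout the disclosure, assuming a standard pinhole intrinsic matrix K (the intrinsic values are illustrative; the patent only states that the calibration parameters of the camera are known):

```python
import numpy as np

def project(K, R, T, x):
    """Perspective projection Pi(Rx + T): transform a model-frame point
    into the camera frame, then project it with intrinsics K."""
    p_cam = R @ x + T                 # rigid transform into camera frame
    p_img = K @ p_cam                 # apply pinhole intrinsics
    return p_img[:2] / p_img[2]       # homogeneous divide -> pixel (u, v)

# hypothetical intrinsics: focal 800 px, principal point (320, 240)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
T = np.array([0.0, 0.0, 5.0])         # model origin 5 m in front of the camera
uv = project(K, R, T, np.array([0.0, 0.0, 0.0]))
```

The model origin projects to the principal point here, as expected for a point on the optical axis.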
In conventional matching of a 2D picture to a three-dimensional model, feature-point detection is usually used to obtain a set of 2D feature points (2D landmarks); the rotation-translation parameters are then used as optimization variables so that, after the corresponding points of the three-dimensional model are mapped onto the 2D picture, their distance to the 2D feature points is minimized. For this embodiment the conventional approach is less suitable, because there are many types of freight car and each has different feature points. Defining 2D feature points for every boxcar and collecting large numbers of pictures to train a 2D feature-point detector would involve an enormous workload.
To obtain the rotation-translation relation between the model coordinate system and the camera coordinate system more simply and efficiently, a first objective function can be constructed. Its value is the number of coordinate points on the three-dimensional model that, after projection onto the image, coincide with pixels whose gradient value exceeds a preset threshold; its variables are the rotation matrix and the translation matrix from the model coordinate system to the camera coordinate system. The rotation matrix and translation matrix that maximize the first objective function are then solved for.
For example, the following objective function may be constructed:

$$\frac{\sum_{i} y\big(\Pi(R x_i + T)\big)}{\sum_{i} 1}$$

and the R and T that maximize it are determined. Here R and T denote the rotation matrix and the translation matrix from the coordinate system of the three-dimensional model to the camera coordinate system, x_i denotes the coordinates of the i-th point on the three-dimensional model in the model coordinate system, Π(·) denotes the perspective projection of a point in the camera coordinate system onto the image using the camera's perspective projection matrix (for a given camera, the calibration parameters are known), y denotes the binary image obtained by line-segment detection on the image, and y(Π(Rx_i + T)) denotes the pixel value of y at coordinate Π(Rx_i + T); the numerator sums the pixel values and the denominator counts the number of projected coordinates.
Fig. 4 shows a schematic diagram of the binary image obtained by line-segment detection on a captured image. As shown in fig. 4, line segments are detected using the method of Rafael Grompone von Gioi, Jérémie Jakubowicz, Jean-Michel Morel, and Gregory Randall, "LSD: a Line Segment Detector", Image Processing On Line, 2 (2012), pp. 35-55, https://doi.org/10.5201/ipol.2012.gjmr-lsd. In the binary image, a line segment is a connected region of large pixel gradient (object edges and color changes in the original image produce large gradients; the gradient is the first derivative of the pixel values) and appears as a white region. A pixel on a line segment has value 1; a pixel not on a line segment has value 0.
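The LSD detector cited above is the method the patent uses; as a crude stand-in that matches the gradient description in the text, a gradient-thresholded binary map can be sketched in NumPy as follows (the synthetic image and threshold are illustrative):

```python
import numpy as np

def edge_binary_image(gray, threshold):
    """Binarise an image by pixel-gradient magnitude: 1 where the local
    gradient exceeds the threshold (a simplified stand-in for the LSD
    line-segment detector named in the text)."""
    gy, gx = np.gradient(gray.astype(float))   # first derivatives per axis
    mag = np.hypot(gx, gy)                     # gradient magnitude
    return (mag > threshold).astype(np.uint8)

# a synthetic 8x8 image with one vertical step edge at column 4
img = np.zeros((8, 8))
img[:, 4:] = 255.0
y = edge_binary_image(img, 50.0)
```

A real pipeline would use the LSD implementation itself, which additionally groups edge pixels into straight segments.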
This optimization asks how to adjust R and T so that, after the three-dimensional model (understood here as the coordinate points on its frame edges) is projected onto the image plane, the occupied pixels coincide as much as possible with the pixels of value 1 in the line-segment detection of the original image. The objective function uses the total number of projected points as a normalization parameter.
This embodiment converts the problem of determining the rotation-translation relation between the carriage's model coordinate system and the camera coordinate system into the problem of maximizing an objective function, so the relation can be obtained more simply and efficiently. Moreover, for images captured under natural illumination, obtaining the rotation-translation relation in this way is more robust.
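Evaluating the first objective function for one candidate (R, T) can be sketched as below; the intrinsics, the point list, and the all-ones test image are illustrative placeholders, not values from the patent:

```python
import numpy as np

def pose_score(binary, K, R, T, model_edge_points):
    """Objective value for a candidate (R, T): fraction of projected
    model-edge points that land on a '1' pixel of the line-segment
    binary image."""
    hits, total = 0, 0
    h, w = binary.shape
    for x in model_edge_points:
        p = K @ (R @ x + T)                      # project Pi(Rx + T)
        if p[2] <= 0:                            # behind the camera
            continue
        u = int(round(p[0] / p[2]))
        v = int(round(p[1] / p[2]))
        total += 1
        if 0 <= v < h and 0 <= u < w and binary[v, u]:
            hits += 1
    return hits / total if total else 0.0

# sanity check: with every pixel "on", any visible point counts as a hit
binary = np.ones((480, 640), dtype=np.uint8)
K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])
pts = [np.array([0.0, 0.0, 0.0]), np.array([0.2, 0.1, 0.0])]
s = pose_score(binary, K, np.eye(3), np.array([0.0, 0.0, 5.0]), pts)
```

An outer optimizer (e.g. a coarse grid over poses followed by local refinement) would then search for the (R, T) maximizing this score; the patent does not specify the solver.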
Once the rotation-translation relation is known, the depth information of the captured-image pixels that coincide with the projection of the three-dimensional model can be obtained. The following describes how the free-depth information of the car is obtained from the image, the three-dimensional model and the rotation-translation relation.
Normally a freight wagon is loaded so that the space nearest the cab is occupied first. It therefore suffices to estimate the distance from the outside of the truck to the cargo. First, a second objective function is constructed. Its value is obtained by projecting, for each edge of the three-dimensional model running in the cargo-depth direction, the coordinates from the carriage door to the point at which occlusion by the cargo begins onto the image, and counting the projected points that coincide with pixels whose gradient value exceeds a preset threshold. Its variables are the depth coordinates of those occlusion starting points on the three-dimensional model. The depth coordinates that maximize the second objective function are then solved for.
FIG. 5 shows a schematic diagram of estimating the depth coordinates of the starting points inside the boxcar that are occluded by cargo. As shown in FIG. 5, the depth estimation problem can be converted into an objective function optimization problem. D_lb, D_lu, D_ru and D_rb respectively denote the depth coordinates, on the three-dimensional model, of the starting points occluded by cargo on the lower-left, upper-left, upper-right and lower-right edges of the carriage in the cargo depth direction. Taking D_lu as an example, the segment [0, D_lu] should be visible in the image as far as possible, while [D_lu, L] is invisible due to cargo occlusion; therefore the projection of the [0, D_lu] region should overlap the line-segment detection map as much as possible, and the [D_lu, L] region should overlap it as little as possible. Moreover, since stacked cargo needs support underneath, in general D_lu ≤ D_lb and D_ru ≤ D_rb. Thus, the following objective function can be constructed:
max_{D_lb, D_lu, D_ru, D_rb}  Σ_j Σ_{x_i ∈ [0, D_j]} y(Π(R x_i + T)) − Σ_j Σ_{x_i ∈ [D_j, L]} y(Π(R x_i + T)),  j ∈ {lb, lu, ru, rb}

subject to the constraints D_lu ≤ D_lb and D_ru ≤ D_rb. Wherein R and T respectively denote the rotation matrix and the translation matrix from the coordinate system of the three-dimensional model to the camera coordinate system, x_i denotes the coordinates of the i-th point on the three-dimensional model in the coordinate system of the three-dimensional model, Π() denotes the perspective projection of a point in the camera coordinate system onto the image using the perspective projection matrix of the camera, y denotes the binary image obtained by line-segment detection on the image, and y(Π(R x_i + T)) denotes the pixel value of the image y at the coordinate Π(R x_i + T). For j taking lb, lu, ru and rb in turn, [0, D_j] denotes that x_i ranges over the coordinate points of the j-th edge of the carriage in the cargo depth direction from 0 to D_j, and [D_j, L] denotes that x_i ranges over the points of that edge from the depth coordinate D_j to the maximum L. By solving this objective function, the values of D_lb, D_lu, D_ru and D_rb that maximize it are obtained, and the idle volume of the carriage is then estimated using D_lb, D_lu, D_ru and D_rb as the idle depth information.
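To illustrate the structure of this depth objective, the following sketch reduces it to a single carriage edge: given precomputed indicators of whether each sample point's projection lands on a line-segment pixel, the best door-to-occlusion split index maximizes the hits before the split minus the hits after it. The function name and the one-dimensional reduction are illustrative assumptions, not the patent's implementation, which optimizes the four depths jointly under the support constraints.

```python
import numpy as np

def best_depth_index(on_edge: np.ndarray) -> int:
    """For sample points along one car edge (door at index 0), on_edge[i]
    is 1 if the point's image projection lands on a line-segment pixel.
    Returns the split index d maximizing sum(on_edge[:d]) - sum(on_edge[d:]),
    i.e. the second objective restricted to a single edge.
    """
    prefix = np.concatenate(([0], np.cumsum(on_edge)))
    total = prefix[-1]
    scores = prefix - (total - prefix)   # visible hits minus occluded hits
    return int(np.argmax(scores))

# Edge visible (on the detection map) up to index 3, then occluded by cargo.
hits = np.array([1, 1, 1, 0, 0, 0])
d = best_depth_index(hits)
```

In the joint problem, the same score is summed over the four edges and the search is restricted to depth combinations satisfying D_lu ≤ D_lb and D_ru ≤ D_rb.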
In this embodiment, the problem of obtaining the idle depth information of the carriage is converted into the problem of optimizing an objective function, so that the idle depth information can be obtained simply and efficiently, which facilitates estimating the idle volume of the carriage. In addition, since the difference between the non-occluded region and the occluded region is used as the optimization objective, the idle depth information is obtained more accurately.
The means for estimating the car free volume of some embodiments of the present disclosure is described below in conjunction with fig. 6.
Fig. 6 shows a schematic structural diagram of an apparatus for estimating a car free volume according to some embodiments of the present disclosure. As shown in fig. 6, the apparatus 60 for estimating the free volume of the car in the present embodiment includes a car model identification module 602, a three-dimensional model acquisition module 604, a depth information acquisition module 606, and a cargo volume estimation module 608.
A car model identification module 602 configured to identify an image of a car captured by the camera to obtain a car model;
a three-dimensional model obtaining module 604 configured to obtain a three-dimensional model of the car according to the car model;
a depth information obtaining module 606 configured to obtain idle depth information of the car using the image and the three-dimensional model;
a cargo volume estimation module 608 configured to estimate an empty volume of the car using the empty depth information and the three-dimensional model.
In some embodiments, the apparatus 60 further comprises: the image photographing module 600 is configured to photograph the vehicle compartment from the outside of the door of the vehicle compartment with a camera such that the boundary of the image coincides with the outer surface of the vehicle compartment.
In some embodiments, the depth information acquisition module 606 is configured to: acquiring a rotation and translation relation between a coordinate system of the three-dimensional model and a camera coordinate system by using the image and the three-dimensional model; and acquiring the idle depth information of the carriage by using the image, the three-dimensional model and the rotation and translation relation.
This embodiment provides a scheme for estimating the idle volume of the carriage based on a monocular camera of a mobile terminal. It realizes automatic estimation of the idle volume inside the carriage without manually measuring the length, width and height of the idle part of the carriage, thereby improving the efficiency of estimating the idle volume of the carriage.
In some embodiments, the depth information acquisition module 606 is configured to: construct a first objective function, where the function value of the first objective function is the number of coordinate points on the three-dimensional model that, after being projected onto the image, coincide with pixels in the image whose gradient value is greater than a preset threshold, and the variables of the first objective function are the rotation matrix and the translation matrix from the coordinate system of the three-dimensional model to the camera coordinate system; and solve for the rotation matrix and the translation matrix that maximize the function value of the first objective function.
In some embodiments, the depth information acquisition module 606 is configured to: determining R and T that maximize the following objective function:
(R, T) = argmax_{R, T}  Σ_i y(Π(R x_i + T))

wherein R and T respectively denote the rotation matrix and the translation matrix from the coordinate system of the three-dimensional model to the camera coordinate system, x_i denotes the coordinates of the i-th point on the three-dimensional model in the coordinate system of the three-dimensional model, Π() denotes the perspective projection of a point in the camera coordinate system onto the image using the perspective projection matrix of the camera, y denotes the binary image obtained by line-segment detection on the image, and y(Π(R x_i + T)) denotes the pixel value of the image y at the coordinate Π(R x_i + T); since y is binary, the sum of these pixel values equals the number of projected coordinate points that coincide with line-segment pixels.
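Evaluating this first objective for one candidate pose can be sketched as follows; in practice R and T would be found by searching or iteratively optimizing this score over candidate poses. The intrinsics matrix K stands in for the camera's perspective projection matrix, and all names are illustrative assumptions.

```python
import numpy as np

def pose_score(points, R, T, K, edge_map):
    """First-objective value: number of 3D model points whose perspective
    projection lands on a '1' pixel of the binary edge map.

    points: (N, 3) model-frame coordinates; K: 3x3 camera intrinsics.
    """
    cam = points @ R.T + T                  # model frame -> camera frame
    uv = cam @ K.T                          # apply intrinsics
    uv = uv[:, :2] / uv[:, 2:3]             # perspective divide
    uv = np.round(uv).astype(int)
    h, w = edge_map.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return int(edge_map[uv[inside, 1], uv[inside, 0]].sum())

# Toy check: identity pose, simple intrinsics, one edge pixel lit.
R, T = np.eye(3), np.zeros(3)
K = np.array([[100.0, 0.0, 2.0], [0.0, 100.0, 2.0], [0.0, 0.0, 1.0]])
edge = np.zeros((5, 5), dtype=np.uint8)
edge[2, 2] = 1
pts = np.array([[0.0, 0.0, 1.0], [0.01, 0.0, 1.0]])
score = pose_score(pts, R, T, K, edge)
```

The pose maximizing this count aligns the projected model edges with the detected line segments, which is what the optimization described above achieves.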
In this embodiment, the problem of determining the rotation-translation relationship between the coordinate system of the three-dimensional model of the carriage and the camera coordinate system is converted into the problem of optimizing an objective function, so that the rotation-translation relationship between the coordinate system of the three-dimensional model and the camera coordinate system can be obtained simply and efficiently. In addition, for images shot by a camera under natural illumination, obtaining the rotation-translation relationship with the scheme of this embodiment is more robust.
In some embodiments, the depth information acquisition module 606 is configured to: construct a second objective function, where the function value of the second objective function is the number of coordinate points that, after projecting onto the image the coordinates from the carriage door to the starting point occluded by cargo on each edge of the three-dimensional model in the cargo depth direction, coincide with pixels in the image whose gradient value is greater than a preset threshold, and the variables of the second objective function are the depth coordinates of the starting points on the three-dimensional model; and solve for the depth coordinates of the starting points on the three-dimensional model when the function value of the second objective function is maximum.
In some embodiments, the depth information acquisition module 606 is configured to determine D_lb, D_lu, D_ru and D_rb when the following objective function takes its maximum value:
max_{D_lb, D_lu, D_ru, D_rb}  Σ_j Σ_{x_i ∈ [0, D_j]} y(Π(R x_i + T)) − Σ_j Σ_{x_i ∈ [D_j, L]} y(Π(R x_i + T)),  j ∈ {lb, lu, ru, rb}

subject to the constraints D_lu ≤ D_lb and D_ru ≤ D_rb. Wherein R and T respectively denote the rotation matrix and the translation matrix from the coordinate system of the three-dimensional model to the camera coordinate system, x_i denotes the coordinates of the i-th point on the three-dimensional model in the coordinate system of the three-dimensional model, Π() denotes the perspective projection of a point in the camera coordinate system onto the image using the perspective projection matrix of the camera, y denotes the binary image obtained by line-segment detection on the image, y(Π(R x_i + T)) denotes the pixel value of the image y at the coordinate Π(R x_i + T), and D_lb, D_lu, D_ru and D_rb respectively denote the depth coordinates, on the three-dimensional model, of the starting points occluded by cargo on the lower-left, upper-left, upper-right and lower-right edges of the carriage in the cargo depth direction. For j taking lb, lu, ru and rb in turn, [0, D_j] denotes that x_i ranges over the coordinate points of the j-th edge of the carriage in the cargo depth direction from 0 to D_j, and [D_j, L] denotes that x_i ranges over the points of that edge from the depth coordinate D_j to the maximum L.
In some embodiments, the cargo volume estimation module 608 is configured to: average the depth coordinates to obtain an idle depth estimate of the carriage; determine the cross-sectional area of the carriage using the three-dimensional model, the cross-section being perpendicular to the depth direction; and multiply the idle depth estimate of the carriage by the cross-sectional area to obtain an idle volume estimate of the carriage.
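The volume estimate itself is then simple arithmetic, which can be sketched as follows (the function name and the numeric values are illustrative):

```python
def free_volume(d_lb, d_lu, d_ru, d_rb, cross_section_area):
    """Average the four corner free-depth estimates and multiply by the
    carriage cross-sectional area (perpendicular to the depth direction)."""
    mean_depth = (d_lb + d_lu + d_ru + d_rb) / 4.0
    return mean_depth * cross_section_area

# E.g. corner depths in metres and a 5 m^2 cross-section: mean depth 1.95 m.
v = free_volume(2.0, 1.8, 1.9, 2.1, cross_section_area=5.0)
```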
In this embodiment, the problem of obtaining the idle depth information of the carriage is converted into the problem of optimizing an objective function, so that the idle depth information can be obtained simply and efficiently, which facilitates estimating the idle volume of the carriage. In addition, since the difference between the non-occluded region and the occluded region is used as the optimization objective, the idle depth information is obtained more accurately.
Fig. 7 is a schematic structural diagram of an apparatus for estimating a car free volume according to further embodiments of the present disclosure. As shown in fig. 7, the device 70 for estimating the free volume of the car of this embodiment includes: a memory 710 and a processor 720 coupled to the memory 710, the processor 720 configured to perform a method of estimating a car free volume in any of the foregoing embodiments based on instructions stored in the memory 710.
Memory 710 may include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, an application program, a Boot Loader (Boot Loader), and other programs.
The means 70 for estimating the cabin free volume may further comprise an input output interface 730, a network interface 740, a storage interface 750, etc. These interfaces 730, 740, 750, as well as the memory 710 and the processor 720, may be connected, for example, by a bus 760. The input/output interface 730 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, and a touch screen. The network interface 740 provides a connection interface for various networking devices. The storage interface 750 provides a connection interface for external storage devices such as an SD card and a usb disk.
The present disclosure also includes a computer readable storage medium having stored thereon computer instructions that, when executed by a processor, implement the method of estimating the car free volume in any of the foregoing embodiments.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only exemplary of the present disclosure and is not intended to limit the present disclosure, so that any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (14)

1. A method of estimating a car free volume, comprising:
recognizing an image of a carriage shot by a camera to obtain a carriage model, and acquiring a three-dimensional model of the carriage according to the carriage model;
acquiring a rotation and translation relation between a coordinate system of the three-dimensional model and a camera coordinate system by using the image and the three-dimensional model;
acquiring the idle depth information of the carriage by using the image, the three-dimensional model and the rotation and translation relation, specifically comprising: constructing a second objective function, wherein the function value of the second objective function is the number of coordinate points that, after projecting onto the image the coordinates from the carriage door to the starting point occluded by cargo on each edge of the three-dimensional model in the cargo depth direction, coincide with pixel points in the image whose pixel gradient value is greater than a preset threshold value, and the variable of the second objective function is the depth coordinate of the starting point on the three-dimensional model; and solving for the depth coordinate of the starting point on the three-dimensional model when the function value of the second objective function is maximum;
and estimating the idle volume of the carriage by using the idle depth information and the three-dimensional model.
2. The method of claim 1, wherein said obtaining, using the image and the three-dimensional model, a roto-translational relationship of a coordinate system of the three-dimensional model to a camera coordinate system comprises:
constructing a first objective function, wherein the function value of the first objective function is the number of coordinate points on the three-dimensional model that, after being projected onto the image, coincide with pixel points in the image whose pixel gradient value is greater than a preset threshold value, and the variables of the first objective function are a rotation matrix and a translation matrix from a coordinate system of the three-dimensional model to a camera coordinate system;
and solving the rotation matrix and the translation matrix when the function value of the first objective function is maximum.
3. The method of claim 2, wherein R and T are determined so as to maximize the following objective function:
(R, T) = argmax_{R, T}  Σ_i y(Π(R x_i + T))

wherein R and T respectively denote the rotation matrix and the translation matrix from the coordinate system of the three-dimensional model to the camera coordinate system, x_i denotes the coordinates of the i-th point on the three-dimensional model in the coordinate system of the three-dimensional model, Π() denotes the perspective projection of a point in the camera coordinate system onto the image using the perspective projection matrix of the camera, y denotes the binary image obtained by line-segment detection on the image, and y(Π(R x_i + T)) denotes the pixel value of the image y at the coordinate Π(R x_i + T); since y is binary, the sum of these pixel values equals the number of projected coordinate points that coincide with line-segment pixels.
4. The method of claim 1, wherein D_lb, D_lu, D_ru and D_rb are determined when the following objective function is maximized:
max_{D_lb, D_lu, D_ru, D_rb}  Σ_j Σ_{x_i ∈ [0, D_j]} y(Π(R x_i + T)) − Σ_j Σ_{x_i ∈ [D_j, L]} y(Π(R x_i + T)),  j ∈ {lb, lu, ru, rb}

subject to the constraints D_lu ≤ D_lb and D_ru ≤ D_rb;

wherein R and T respectively denote the rotation matrix and the translation matrix from the coordinate system of the three-dimensional model to the camera coordinate system, x_i denotes the coordinates of the i-th point on the three-dimensional model in the coordinate system of the three-dimensional model, Π() denotes the perspective projection of a point in the camera coordinate system onto the image using the perspective projection matrix of the camera, y denotes the binary image obtained by line-segment detection on the image, y(Π(R x_i + T)) denotes the pixel value of the image y at the coordinate Π(R x_i + T), and D_lb, D_lu, D_ru and D_rb respectively denote the depth coordinates, on the three-dimensional model, of the starting points occluded by cargo on the lower-left, upper-left, upper-right and lower-right edges of the carriage in the cargo depth direction; for j taking lb, lu, ru and rb, [0, D_j] denotes that x_i ranges over the coordinate points of the j-th edge of the carriage in the cargo depth direction from 0 to D_j, and [D_j, L] denotes that x_i ranges over the points of that edge from the depth coordinate D_j to the maximum L.
5. The method of claim 1, wherein said estimating the idle volume of the carriage using the idle depth information and the three-dimensional model comprises:
averaging the depth coordinates to obtain an idle depth estimate of the carriage;
determining the cross-sectional area of the carriage by using the three-dimensional model, wherein the cross-section is perpendicular to the depth direction;
and multiplying the idle depth estimate of the carriage by the cross-sectional area to obtain an idle volume estimate of the carriage.
6. The method of claim 1, further comprising:
the vehicle compartment is photographed from the outside of the door of the vehicle compartment with a camera such that the boundary of the image coincides with the outer surface of the vehicle compartment.
7. An apparatus for estimating a car free volume, comprising:
the car model identification module is configured to identify the car image shot by the camera to obtain the car model;
the three-dimensional model acquisition module is configured to acquire a three-dimensional model of the compartment according to the compartment model;
a depth information acquisition module configured to: acquire a rotation and translation relation between a coordinate system of the three-dimensional model and a camera coordinate system by using the image and the three-dimensional model; and acquire the idle depth information of the carriage by using the image, the three-dimensional model and the rotation and translation relation, specifically comprising: constructing a second objective function, wherein the function value of the second objective function is the number of coordinate points that, after projecting onto the image the coordinates from the carriage door to the starting point occluded by cargo on each edge of the three-dimensional model in the cargo depth direction, coincide with pixel points in the image whose pixel gradient value is greater than a preset threshold value, and the variable of the second objective function is the depth coordinate of the starting point on the three-dimensional model; and solving for the depth coordinate of the starting point on the three-dimensional model when the function value of the second objective function is maximum;
a cargo volume estimation module configured to estimate an empty volume of a car using the empty depth information and the three-dimensional model.
8. The apparatus of claim 7, wherein the depth information acquisition module is configured to:
constructing a first objective function, wherein the function value of the first objective function is the number of coordinate points on the three-dimensional model that, after being projected onto the image, coincide with pixel points in the image whose pixel gradient value is greater than a preset threshold value, and the variables of the first objective function are a rotation matrix and a translation matrix from a coordinate system of the three-dimensional model to a camera coordinate system;
and solving the rotation matrix and the translation matrix when the function value of the first objective function is maximum.
9. The apparatus of claim 8, wherein the depth information acquisition module is configured to determine R and T that maximize the following objective function:
(R, T) = argmax_{R, T}  Σ_i y(Π(R x_i + T))

wherein R and T respectively denote the rotation matrix and the translation matrix from the coordinate system of the three-dimensional model to the camera coordinate system, x_i denotes the coordinates of the i-th point on the three-dimensional model in the coordinate system of the three-dimensional model, Π() denotes the perspective projection of a point in the camera coordinate system onto the image using the perspective projection matrix of the camera, y denotes the binary image obtained by line-segment detection on the image, and y(Π(R x_i + T)) denotes the pixel value of the image y at the coordinate Π(R x_i + T); since y is binary, the sum of these pixel values equals the number of projected coordinate points that coincide with line-segment pixels.
10. The apparatus of claim 7, wherein the depth information acquisition module is configured to determine D_lb, D_lu, D_ru and D_rb when the following objective function takes its maximum value:
max_{D_lb, D_lu, D_ru, D_rb}  Σ_j Σ_{x_i ∈ [0, D_j]} y(Π(R x_i + T)) − Σ_j Σ_{x_i ∈ [D_j, L]} y(Π(R x_i + T)),  j ∈ {lb, lu, ru, rb}

subject to the constraints D_lu ≤ D_lb and D_ru ≤ D_rb;

wherein R and T respectively denote the rotation matrix and the translation matrix from the coordinate system of the three-dimensional model to the camera coordinate system, x_i denotes the coordinates of the i-th point on the three-dimensional model in the coordinate system of the three-dimensional model, Π() denotes the perspective projection of a point in the camera coordinate system onto the image using the perspective projection matrix of the camera, y denotes the binary image obtained by line-segment detection on the image, y(Π(R x_i + T)) denotes the pixel value of the image y at the coordinate Π(R x_i + T), and D_lb, D_lu, D_ru and D_rb respectively denote the depth coordinates, on the three-dimensional model, of the starting points occluded by cargo on the lower-left, upper-left, upper-right and lower-right edges of the carriage in the cargo depth direction; for j taking lb, lu, ru and rb, [0, D_j] denotes that x_i ranges over the coordinate points of the j-th edge of the carriage in the cargo depth direction from 0 to D_j, and [D_j, L] denotes that x_i ranges over the points of that edge from the depth coordinate D_j to the maximum L.
11. The apparatus of claim 7, wherein the cargo volume estimation module is configured to:
averaging the depth coordinates to obtain an idle depth estimate of the carriage;
determining the cross-sectional area of the carriage by using the three-dimensional model, wherein the cross-section is perpendicular to the depth direction;
and multiplying the idle depth estimate of the carriage by the cross-sectional area to obtain an idle volume estimate of the carriage.
12. The apparatus of claim 7, further comprising:
an image capturing module configured to capture a vehicle compartment from outside a door of the vehicle compartment with a camera such that a boundary of the image coincides with an outer surface of the vehicle compartment.
13. An apparatus for estimating a car free volume, comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the method of estimating car free volume of any of claims 1-6 based on instructions stored in the memory.
14. A computer readable storage medium, wherein the computer readable storage medium stores computer instructions which, when executed by a processor, implement a method of estimating car free volume as claimed in any one of claims 1 to 6.
CN201811036587.6A 2018-09-06 2018-09-06 Method, device and computer readable storage medium for estimating free volume of carriage Active CN109146952B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811036587.6A CN109146952B (en) 2018-09-06 2018-09-06 Method, device and computer readable storage medium for estimating free volume of carriage


Publications (2)

Publication Number Publication Date
CN109146952A CN109146952A (en) 2019-01-04
CN109146952B true CN109146952B (en) 2020-11-20

Family

ID=64827335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811036587.6A Active CN109146952B (en) 2018-09-06 2018-09-06 Method, device and computer readable storage medium for estimating free volume of carriage

Country Status (1)

Country Link
CN (1) CN109146952B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469871B (en) * 2020-03-30 2023-07-14 长沙智能驾驶研究院有限公司 Carriage loadable space detection method and device based on three-dimensional laser
CN112288712B (en) * 2020-10-28 2022-07-22 山东黄金矿业(莱州)有限公司三山岛金矿 Gold mine drop shaft feeding visual detection method based on live-action modeling
CN113052525B (en) * 2021-03-15 2022-07-01 江苏满运物流信息有限公司 Cargo volume estimation method, cargo volume ordering method, cargo volume estimation device, cargo volume ordering device and electronic equipment
CN115272351B (en) * 2022-09-30 2023-01-24 煤炭科学研究总院有限公司 Mine trackless rubber-tyred vehicle overrun detection method based on binocular vision and linear laser

Citations (4)

Publication number Priority date Publication date Assignee Title
CN107388960A (en) * 2016-05-16 2017-11-24 杭州海康机器人技术有限公司 A kind of method and device for determining object volume
CN107392958A (en) * 2016-05-16 2017-11-24 杭州海康机器人技术有限公司 A kind of method and device that object volume is determined based on binocular stereo camera
CN107873101A (en) * 2014-11-21 2018-04-03 克里斯多夫·M·马蒂 For process identification and the imaging system assessed
CN108416804A (en) * 2018-02-11 2018-08-17 深圳市优博讯科技股份有限公司 Obtain method, apparatus, terminal device and the storage medium of target object volume

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US9940730B2 (en) * 2015-11-18 2018-04-10 Symbol Technologies, Llc Methods and systems for automatic fullness estimation of containers
US10573018B2 (en) * 2016-07-13 2020-02-25 Intel Corporation Three dimensional scene reconstruction based on contextual analysis


Non-Patent Citations (1)

Title
Research on 3D Human Body Reconstruction Technology Based on a Kinect Depth Camera; Zhou Jin; Wanfang Data; 2013-10-08; pp. 1-55 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant