CN112278891A - Carriage internal attitude detection method - Google Patents
Carriage internal attitude detection method
- Publication number
- CN112278891A CN112278891A CN202011586767.9A CN202011586767A CN112278891A CN 112278891 A CN112278891 A CN 112278891A CN 202011586767 A CN202011586767 A CN 202011586767A CN 112278891 A CN112278891 A CN 112278891A
- Authority
- CN
- China
- Prior art keywords
- carriage
- coordinate system
- standard
- processing module
- point cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G65/00—Loading or unloading
- B65G65/005—Control arrangements
Landscapes
- Engineering & Computer Science (AREA)
- Mechanical Engineering (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention provides a carriage internal attitude detection method, which comprises the following steps: calibrating the coordinate system of a 3D vision sensor and the coordinate system of a standard carriage before measurement; collecting the current front-corner point cloud inside the carriage and segmenting the carriage front, side and floor, wherein the front-corner point cloud is a region that simultaneously contains the carriage front, side and floor. The 3D vision sensor enters the carriage with the loading and unloading robot and can directly acquire, in real time, attitude images of the carriage interior within 1-2 meters in front of the robot. Detection accuracy is high, the position or orientation of the robot can be adjusted in real time during loading and unloading, goods are accurately stacked at the preset positions, and the method is applicable to various types of trucks.
Description
Technical Field
The invention relates to the field of loading and unloading vehicles, in particular to a carriage internal posture detection method.
Background
The loading and unloading operation flow is a major factor constraining the turnover of goods between logistics warehouses and factories. Various automatic loading and unloading schemes are gradually being applied to truck loading and unloading to improve efficiency and quality; they also reduce manual labour intensity and, in industries handling hazardous or heavy goods, reduce harm to health and safety.
In the bagged-material loading system and method disclosed in patent application No. 201310289503.0, image data of a boxcar is acquired by a vision sensor, an actual stacking path is calculated from the image data, and an industrial robot is controlled to grasp goods and load them into the boxcar along that path. However, current automatic loading systems have the following defects: 1) they are only suitable for open trucks; for closed trucks the 3D attitude image inside the carriage cannot be acquired directly, and images from several vision sensors (for example, mounted above and beside the truck's parking space) must be stitched to obtain it, while for a truck with a long carriage a vision sensor at the door images the inner wall from a large distance, which degrades image accuracy; 2) when the stacking path of the industrial robot is set from a pre-collected carriage attitude image, cases where the front wall, side walls and floor of the carriage are not perpendicular, or where the carriage floor is not parallel to the ground because of tire-pressure and similar problems, are not considered, so the stacking attitude of the robot cannot be adjusted in real time and the accuracy of the stacking position suffers; 3) environmental suitability is low: a vision sensor installed in the open air above the parking space is easily affected by the environment, reducing detection accuracy.
Disclosure of Invention
To overcome these defects, the invention provides a carriage internal attitude detection method in which the 3D vision sensor enters the carriage with the loading and unloading robot and directly acquires, in real time, attitude images of the carriage interior within 1-2 meters in front of the robot. Detection accuracy is high, the position or orientation of the robot can be adjusted in real time during loading and unloading so that goods are accurately stacked at the preset positions, and the method is suitable for various types of trucks.
The purpose of the invention is realized by the following technical scheme:
a carriage internal attitude detection method comprises the following steps:
step 1, calibrating a coordinate system of a 3D vision sensor and a coordinate system of a standard carriage before measurement;
step 1.1, calibrating the 3D vision sensor coordinate system to a standard coordinate system, and acquiring the transformation matrix RT1 from the 3D vision sensor coordinate system to the standard coordinate system;
step 1.2, calibrating the standard coordinate system to the standard carriage coordinate system, and acquiring the transformation matrix RT2 from the standard coordinate system to the standard carriage coordinate system; the X, Y and Z axes of the standard carriage coordinate system are predefined to be perpendicular to the front, the side and the floor of the standard carriage respectively, and the origin of the standard carriage coordinate system is the intersection O_xyz of the three planes of the front, the side and the floor of the standard carriage;
step 2, collecting the current front-corner point cloud inside the carriage and segmenting the carriage front, the carriage side and the carriage floor, wherein the front-corner point cloud is a region that simultaneously contains the carriage front, the carriage side and the carriage floor;
step 2.1, the 3D vision sensor collects 3D point cloud data of the front-corner region inside the carriage and transmits it to a 3D data processing module;
step 2.2, the 3D data processing module obtains the set of planes in the point cloud by a plane segmentation method;
step 2.3, the 3D data processing module computes the plane equation of each plane in the set from step 2.2 one by one to obtain its normal vector, transforms the normal vector into the standard carriage coordinate system through RT1 and RT2, computes the angles between the transformed normal vector and the X, Y and Z axes of the standard carriage coordinate system, selects the coordinate axis making the smallest angle with the normal vector, and classifies the plane as the carriage front, side or floor according to the carriage surface corresponding to that axis;
step 2.4, the 3D data processing module computes the intersection point O'_xyz of the three planes from the plane equations of the current carriage front, side and floor obtained in step 2.3;
step 2.5, the 3D data processing module computes the attitude transformation matrix RT3 from the standard carriage coordinate system to the current carriage coordinate system based on the normal vectors of the carriage front, side and floor and the intersection point O'_xyz.
Preferably, the 3D vision processing module calculates the attitude transformation matrix from the 3D vision sensor coordinate system to the current carriage coordinate system through RT1, RT2 and RT3.
Preferably, the 3D vision sensor is mounted on a sensor moving device, and the sensor moving device can provide the movement attitude change matrix RT4 of the 3D vision sensor; when the pose of the 3D vision sensor changes relative to the standard coordinate system, the 3D data processing module corrects the attitude transformation matrix RT1 from the 3D vision sensor coordinate system to the standard coordinate system in real time based on RT4.
Preferably, the sensor moving device is a robot.
Preferably, calculating the attitude transformation matrix RT3 from the standard carriage coordinate system to the current carriage coordinate system comprises the following steps:
the 3D data processing module calculates the normal vector N_Z of the carriage floor from the plane equations of the carriage front, side and floor, and calculates the intersection vector L_Y of the current carriage front and the carriage floor, or the intersection vector L_X of the current carriage side and the carriage floor;
the 3D data processing module calculates, according to preset requirements, the cross product vector V_XN of L_Y and N_Z, or the cross product vector V_YN of N_Z and L_X, obtaining the pairwise perpendicular coordinate axis vectors V_XN, L_Y, N_Z or L_X, V_YN, N_Z;
the 3D data processing module calculates the attitude transformation matrix RT3 from the standard carriage coordinate system to the current carriage coordinate system from the origin O'_xyz of the current carriage coordinate system and the pairwise perpendicular coordinate axis vectors.
Preferably, acquiring the attitude transformation matrix RT1 from the 3D vision sensor coordinate system to the standard coordinate system comprises the following steps:
the 3D vision sensor collects image or point cloud information of a standard sample, and the standard sample contains features from which the standard coordinate system can be identified;
the 3D data processing module extracts the standard coordinate system information from the image or 3D point cloud of the standard sample, the information comprising the X-, Y- and Z-axis directions and the origin coordinates of the coordinate system;
the 3D data processing module calculates the attitude transformation matrix RT1 from the 3D vision sensor coordinate system to the standard coordinate system from the extracted standard coordinate system information.
Preferably, the standard sample is a camera calibration plate or contains three mutually perpendicular planes.
Preferably, the plane segmentation method comprises the following steps:
the 3D data processing module processes the 3D point cloud data inside the carriage and judges whether the point cloud data contains point cloud information to be detected; if not, an error message is returned; if so, the next step is entered;
the 3D data processing module calculates the distance between adjacent points, judges points to belong to different objects based on a distance threshold, traverses all points in the cloud in this way to segment several groups of object data, and counts the number of points in each group;
the 3D data processing module eliminates the objects whose point count obtained in the previous step is less than a set threshold;
the 3D data processing module fits the set of planes in the remaining point cloud by a random sample consensus method and eliminates planes whose point count or area is less than a set threshold.
An automatic loading and unloading device comprises a 3D vision sensor, a loading and unloading robot and a 3D data processing module; the device applies the carriage internal attitude detection method; the 3D vision sensor is mounted on the loading and unloading robot, enters the carriage with the robot, and collects the front-corner point cloud inside the carriage in real time; the 3D data processing module acquires the attitude transformation matrix of the current carriage interior relative to the standard carriage attitude; and the device adjusts its own position or orientation in real time according to the current in-carriage attitude transformation matrix.
Compared with the prior art, the invention provides a carriage internal attitude detection method, which comprises the following steps: calibrating the coordinate system of the 3D vision sensor and the coordinate system of a standard carriage before measurement; collecting the current front-corner point cloud inside the carriage and segmenting the carriage front, side and floor, wherein the front-corner point cloud is a region that simultaneously contains the carriage front, side and floor. The 3D vision sensor enters the carriage with the loading and unloading robot and can directly acquire, in real time, attitude images of the carriage interior within 1-2 meters in front of the robot. Detection accuracy is high, the position or orientation of the robot can be adjusted in real time during loading and unloading, goods are accurately stacked at the preset positions, and the method is applicable to various types of trucks.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The invention provides a carriage internal attitude detection method, which comprises the following steps:
step 1, calibrating a coordinate system of a 3D vision sensor and a coordinate system of a standard carriage before measurement;
step 1.1, calibrating the 3D vision sensor coordinate system to a standard coordinate system, and acquiring the transformation matrix RT1 from the 3D vision sensor coordinate system to the standard coordinate system;
step 1.2, calibrating the standard coordinate system to the standard carriage coordinate system, and acquiring the transformation matrix RT2 from the standard coordinate system to the standard carriage coordinate system; the X, Y and Z axes of the standard carriage coordinate system are predefined to be perpendicular to the front, the side and the floor of the standard carriage respectively, and the origin of the standard carriage coordinate system is the intersection O_xyz of the three planes of the front, the side and the floor of the standard carriage;
step 2, collecting the current front-corner point cloud inside the carriage and segmenting the carriage front, the carriage side and the carriage floor, wherein the front-corner point cloud is a region that simultaneously contains the carriage front, the carriage side and the carriage floor;
step 2.1, the 3D vision sensor collects 3D point cloud data of the front-corner region inside the carriage and transmits it to a 3D data processing module;
step 2.2, the 3D data processing module obtains the set of planes in the point cloud by a plane segmentation method;
step 2.3, the 3D data processing module computes the plane equation of each plane in the set from step 2.2 one by one to obtain its normal vector, transforms the normal vector into the standard carriage coordinate system through RT1 and RT2, computes the angles between the transformed normal vector and the X, Y and Z axes of the standard carriage coordinate system, selects the coordinate axis making the smallest angle with the normal vector, and classifies the plane as the carriage front, side or floor according to the carriage surface corresponding to that axis; in this step, if the angle between the normal vector and an axis is greater than 90 degrees, the reversed normal vector is used so that the acute angle is taken; the angle must also be smaller than a preset threshold, which may be set to 10 degrees.
step 2.4, the 3D data processing module computes the intersection point O'_xyz of the three planes from the plane equations of the current carriage front, side and floor obtained in step 2.3;
step 2.5, the 3D data processing module computes the attitude transformation matrix RT3 from the standard carriage coordinate system to the current carriage coordinate system based on the normal vectors of the carriage front, side and floor and the intersection point O'_xyz.
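Steps 2.3 and 2.4 can be sketched in a few lines. The sketch below is a minimal illustration, not the patent's implementation: the axis mapping (X perpendicular to the front, Y to the side, Z to the floor) and the 10-degree threshold come from the text, while the function names and data structures are assumptions.

```python
import numpy as np

AXES = {"front": np.array([1.0, 0.0, 0.0]),  # X axis is perpendicular to the carriage front
        "side":  np.array([0.0, 1.0, 0.0]),  # Y axis is perpendicular to the carriage side
        "floor": np.array([0.0, 0.0, 1.0])}  # Z axis is perpendicular to the carriage floor

def classify_plane(normal, threshold_deg=10.0):
    """Step 2.3: return 'front', 'side' or 'floor' for the axis making the
    smallest acute angle with `normal`, or None if no angle is within the
    threshold. abs() folds reversed normals (angles greater than 90 deg)."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    angles = {label: np.degrees(np.arccos(np.clip(abs(n @ ax), 0.0, 1.0)))
              for label, ax in AXES.items()}
    label = min(angles, key=angles.get)
    return label if angles[label] <= threshold_deg else None

def corner_point(planes):
    """Step 2.4: intersect three planes given as (normal, d) with n.p = d
    by solving the stacked 3x3 linear system A p = d."""
    a = np.array([n for n, _ in planes], float)
    d = np.array([di for _, di in planes], float)
    return np.linalg.solve(a, d)
```

With each plane equation written as n.p = d, the corner O'_xyz is simply the solution of the 3x3 system stacked from the three classified planes.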
The 3D vision processing module calculates the attitude transformation matrix from the 3D vision sensor coordinate system to the current carriage coordinate system through RT1, RT2 and RT3.
The 3D vision sensor is carried on a sensor moving device, and the sensor moving device can provide the movement attitude change matrix RT4 of the 3D vision sensor; when the pose of the 3D vision sensor changes relative to the standard coordinate system, the 3D data processing module corrects the attitude transformation matrix RT1 from the 3D vision sensor coordinate system to the standard coordinate system in real time based on RT4.
The sensor moving device is a robot.
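For orientation, the transforms named above compose as ordinary 4x4 homogeneous matrices. The sketch below is illustrative only: the helper names and the RT4 correction convention are assumptions, not notation taken from the patent.

```python
import numpy as np

# RT1: sensor frame -> standard frame; RT2: standard frame -> standard
# carriage frame; RT3: standard carriage frame -> current carriage frame;
# RT4: sensor motion reported by the moving device (robot).

def make_rt(rotation, translation):
    """Assemble a 4x4 homogeneous transform from R (3x3) and t (3,)."""
    rt = np.eye(4)
    rt[:3, :3] = rotation
    rt[:3, 3] = translation
    return rt

def sensor_to_current_carriage(rt1, rt2, rt3):
    """Chain sensor -> standard -> standard carriage -> current carriage."""
    return rt3 @ rt2 @ rt1

def correct_rt1(rt1, rt4):
    """One plausible convention for the real-time correction: undo the
    sensor's own motion RT4 before mapping into the standard frame."""
    return rt1 @ np.linalg.inv(rt4)
```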
Calculating the attitude transformation matrix RT3 from the standard carriage coordinate system to the current carriage coordinate system comprises the following steps:
the 3D data processing module calculates the normal vector N_Z of the carriage floor from the plane equations of the carriage front, side and floor, and calculates the intersection vector L_Y of the current carriage front and the carriage floor, or the intersection vector L_X of the current carriage side and the carriage floor;
the 3D data processing module calculates, according to preset requirements, the cross product vector V_XN of L_Y and N_Z, or the cross product vector V_YN of N_Z and L_X, obtaining the pairwise perpendicular coordinate axis vectors V_XN, L_Y, N_Z or L_X, V_YN, N_Z;
the 3D data processing module calculates the attitude transformation matrix RT3 from the standard carriage coordinate system to the current carriage coordinate system from the origin O'_xyz of the current carriage coordinate system and the pairwise perpendicular coordinate axis vectors.
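The frame construction above, for the L_Y/N_Z variant, can be sketched as follows. The function name is ours, and the orthogonalization step is an implementation detail the text does not prescribe.

```python
import numpy as np

def rt3_from_floor_and_front(n_z, l_y, origin):
    """Build a 4x4 pose whose rotation columns are the pairwise
    perpendicular axes V_XN = L_Y x N_Z, L_Y, N_Z, anchored at O'_xyz."""
    n_z = np.asarray(n_z, float)
    n_z = n_z / np.linalg.norm(n_z)
    l_y = np.asarray(l_y, float)
    # Project out any component along N_Z so the axes are exactly perpendicular.
    l_y = l_y - (l_y @ n_z) * n_z
    l_y = l_y / np.linalg.norm(l_y)
    v_xn = np.cross(l_y, n_z)  # completes the right-handed frame
    rt3 = np.eye(4)
    rt3[:3, 0], rt3[:3, 1], rt3[:3, 2] = v_xn, l_y, n_z
    rt3[:3, 3] = origin
    return rt3
```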
Acquiring the attitude transformation matrix RT1 from the 3D vision sensor coordinate system to the standard coordinate system comprises the following steps:
the 3D vision sensor collects image or point cloud information of a standard sample, and the standard sample contains features from which the standard coordinate system can be identified;
the 3D data processing module extracts the standard coordinate system information from the image or 3D point cloud of the standard sample, the information comprising the X-, Y- and Z-axis directions and the origin coordinates of the coordinate system;
the 3D data processing module calculates the attitude transformation matrix RT1 from the 3D vision sensor coordinate system to the standard coordinate system from the extracted standard coordinate system information.
The standard sample is a camera calibration plate or contains three mutually perpendicular planes.
The plane segmentation method comprises the following steps:
the 3D data processing module processes the 3D point cloud data inside the carriage and judges whether the point cloud data contains point cloud information to be detected; if not, an error message is returned; if so, the next step is entered;
the 3D data processing module calculates the distance between adjacent points, judges points to belong to different objects based on a distance threshold, traverses all points in the cloud in this way to segment several groups of object data, and counts the number of points in each group; in this step, the distance threshold ranges from 1 to 10 cm;
the 3D data processing module eliminates the objects whose point count obtained in the previous step is less than a set threshold;
the 3D data processing module fits the set of planes in the remaining point cloud by a random sample consensus method and eliminates planes whose point count or area is less than a set threshold.
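The segmentation steps above (distance-based clustering, small-object removal, RANSAC plane fitting) can be illustrated with a compact NumPy sketch. A production system would more likely use a point-cloud library such as PCL or Open3D; everything below, including the thresholds, is illustrative.

```python
import numpy as np

def cluster_by_distance(points, threshold=0.05):
    """Greedy Euclidean clustering: a point joins a cluster when it is
    closer than `threshold` to any point already in that cluster."""
    points = np.asarray(points, float)
    unassigned = set(range(len(points)))
    clusters = []
    while unassigned:
        seed = unassigned.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            near = [j for j in unassigned
                    if np.linalg.norm(points[j] - points[idx]) < threshold]
            for j in near:
                unassigned.discard(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append(points[cluster])
    return clusters

def ransac_plane(points, iterations=200, tolerance=0.01, seed=0):
    """Fit a plane n.p = d by sampling 3 points per iteration and keeping
    the model with the largest consensus set."""
    points = np.asarray(points, float)
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, None
    for _ in range(iterations):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue  # degenerate (collinear) sample, try again
        n = n / np.linalg.norm(n)
        d = n @ p0
        inliers = np.sum(np.abs(points @ n - d) < tolerance)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, d)
    return best_model
```

Objects returned by `cluster_by_distance` whose point count falls below a set threshold would be dropped before `ransac_plane` is applied to each remaining group.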
An automatic loading and unloading device comprises a 3D vision sensor, a loading and unloading robot and a 3D data processing module; the device applies the carriage internal attitude detection method; the 3D vision sensor is mounted on the loading and unloading robot, enters the carriage with the robot, and collects the front-corner point cloud inside the carriage in real time; the 3D data processing module acquires the attitude transformation matrix of the current carriage interior relative to the standard carriage attitude; and the device adjusts its own position or orientation in real time according to the current in-carriage attitude transformation matrix.
Compared with the prior art, the invention provides a carriage internal attitude detection method, which comprises the following steps: calibrating the coordinate system of the 3D vision sensor and the coordinate system of a standard carriage before measurement; collecting the current front-corner point cloud inside the carriage and segmenting the carriage front, side and floor, wherein the front-corner point cloud is a region that simultaneously contains the carriage front, side and floor. The 3D vision sensor enters the carriage with the loading and unloading robot and can directly acquire, in real time, attitude images of the carriage interior within 1-2 meters in front of the robot. Detection accuracy is high, the position or orientation of the robot can be adjusted in real time during loading and unloading, goods are accurately stacked at the preset positions, and the method is applicable to various types of trucks.
Claims (9)
1. A carriage interior attitude detection method is characterized by comprising the following steps:
step 1, calibrating a coordinate system of a 3D vision sensor and a coordinate system of a standard carriage before measurement;
step 1.1, calibrating the 3D vision sensor coordinate system to a standard coordinate system, and acquiring the transformation matrix RT1 from the 3D vision sensor coordinate system to the standard coordinate system;
step 1.2, calibrating the standard coordinate system to the standard carriage coordinate system, and acquiring the transformation matrix RT2 from the standard coordinate system to the standard carriage coordinate system; the X, Y and Z axes of the standard carriage coordinate system are predefined to be perpendicular to the front, the side and the floor of the standard carriage respectively, and the origin of the standard carriage coordinate system is the intersection O_xyz of the three planes of the front, the side and the floor of the standard carriage;
step 2, collecting the current front-corner point cloud inside the carriage and segmenting the carriage front, the carriage side and the carriage floor, wherein the front-corner point cloud is a region that simultaneously contains the carriage front, the carriage side and the carriage floor;
step 2.1, the 3D vision sensor collects 3D point cloud data of the front-corner region inside the carriage and transmits it to a 3D data processing module;
step 2.2, the 3D data processing module obtains the set of planes in the point cloud by a plane segmentation method;
step 2.3, the 3D data processing module computes the plane equation of each plane in the set from step 2.2 one by one to obtain its normal vector, transforms the normal vector into the standard carriage coordinate system through RT1 and RT2, computes the angles between the transformed normal vector and the X, Y and Z axes of the standard carriage coordinate system, selects the coordinate axis making the smallest angle with the normal vector, and classifies the plane as the carriage front, side or floor according to the carriage surface corresponding to that axis;
step 2.4, the 3D data processing module computes the intersection point O'_xyz of the three planes from the plane equations of the current carriage front, side and floor obtained in step 2.3;
step 2.5, the 3D data processing module computes the attitude transformation matrix RT3 from the standard carriage coordinate system to the current carriage coordinate system based on the normal vectors of the carriage front, side and floor and the intersection point O'_xyz.
2. The carriage internal attitude detection method according to claim 1, wherein the 3D vision processing module calculates the attitude transformation matrix from the 3D vision sensor coordinate system to the current carriage coordinate system through RT1, RT2 and RT3.
3. The carriage internal attitude detection method according to claim 1, wherein the 3D vision sensor is mounted on a sensor moving device, and the sensor moving device obtains the movement attitude change matrix RT4 of the 3D vision sensor; when the pose of the 3D vision sensor changes relative to the standard coordinate system, the 3D data processing module corrects the attitude transformation matrix RT1 from the 3D vision sensor coordinate system to the standard coordinate system in real time based on RT4.
4. The carriage internal attitude detection method according to claim 3, wherein the sensor moving device is a robot.
5. The carriage internal attitude detection method according to claim 1, wherein the attitude transformation matrix RT3 from the standard carriage coordinate system to the current carriage coordinate system is calculated by the following steps:
the 3D data processing module calculates the normal vector N_Z of the carriage floor from the plane equations of the carriage front, side and floor, and calculates the intersection vector L_Y of the current carriage front and the carriage floor, or the intersection vector L_X of the current carriage side and the carriage floor;
the 3D data processing module calculates, according to preset requirements, the cross product vector V_XN of L_Y and N_Z, or the cross product vector V_YN of N_Z and L_X, obtaining the pairwise perpendicular coordinate axis vectors V_XN, L_Y, N_Z or L_X, V_YN, N_Z;
the 3D data processing module calculates the attitude transformation matrix RT3 from the standard carriage coordinate system to the current carriage coordinate system from the origin O'_xyz of the current carriage coordinate system and the pairwise perpendicular coordinate axis vectors.
6. The carriage internal attitude detection method according to claim 1, wherein the attitude transformation matrix RT1 from the 3D vision sensor coordinate system to the standard coordinate system is acquired by the following steps:
the 3D vision sensor collects image or point cloud information of a standard sample, and the standard sample contains features from which the standard coordinate system can be identified;
the 3D data processing module extracts the standard coordinate system information from the image or 3D point cloud of the standard sample, the information comprising the X-, Y- and Z-axis directions and the origin coordinates of the coordinate system;
the 3D data processing module calculates the attitude transformation matrix RT1 from the 3D vision sensor coordinate system to the standard coordinate system from the extracted standard coordinate system information.
7. The carriage internal attitude detection method according to claim 6, wherein the standard sample is a camera calibration plate or contains three mutually perpendicular planes.
8. The carriage internal attitude detection method according to claim 1, wherein the plane segmentation method comprises the following steps:
the 3D data processing module processes the 3D point cloud data inside the carriage and judges whether the point cloud data contains point cloud information to be detected; if not, an error message is returned; if so, the next step is entered;
the 3D data processing module calculates the distance between adjacent points, judges points to belong to different objects based on a distance threshold, traverses all points in the cloud in this way to segment several groups of object data, and counts the number of points in each group;
the 3D data processing module eliminates the objects whose point count obtained in the previous step is less than a set threshold;
the 3D data processing module fits the set of planes in the remaining point cloud by a random sample consensus method and eliminates planes whose point count or area is less than a set threshold.
9. An automatic loading and unloading device comprising a 3D vision sensor, a loading and unloading robot and a 3D data processing module, characterized in that the device applies the carriage internal attitude detection method according to any one of claims 1 to 8; the 3D vision sensor is mounted on the loading and unloading robot, enters the carriage with the robot, and collects the front-corner point cloud inside the carriage in real time; the 3D data processing module acquires the attitude transformation matrix of the current carriage interior relative to the standard carriage attitude; and the device adjusts its own position or orientation in real time according to the current in-carriage attitude transformation matrix.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011586767.9A CN112278891B (en) | 2020-12-29 | 2020-12-29 | Carriage internal attitude detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112278891A true CN112278891A (en) | 2021-01-29 |
CN112278891B CN112278891B (en) | 2021-04-02 |
Family
ID=74426678
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011586767.9A Active CN112278891B (en) | 2020-12-29 | 2020-12-29 | Carriage internal attitude detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112278891B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2895099A1 (en) * | 2005-12-19 | 2007-06-22 | Giat Ind Sa | METHOD FOR PROVIDING NAVIGATION AND / OR GUIDANCE AND / OR CONTROL OF A PROJECTILE TO A GOAL AND DEVICE IMPLEMENTING SUCH A METHOD |
US20090164067A1 (en) * | 2003-03-20 | 2009-06-25 | Whitehead Michael L | Multiple-antenna gnss control system and method |
CN103342240A (en) * | 2013-07-10 | 2013-10-09 | 深圳先进技术研究院 | Bagged material car-loading system and method |
CN103645480A (en) * | 2013-12-04 | 2014-03-19 | 北京理工大学 | Geographic and geomorphic characteristic construction method based on laser radar and image data fusion |
CN106395430A (en) * | 2016-11-24 | 2017-02-15 | 南京景曜智能科技有限公司 | 3D stereoscopic vision auxiliary car loading and unloading system |
CN106705952A (en) * | 2016-11-24 | 2017-05-24 | 南京景曜智能科技有限公司 | Car gesture detecting device and correcting method thereof |
CN207275775U (en) * | 2017-09-27 | 2018-04-27 | 四川福德机器人股份有限公司 | A kind of flexibility automatic loading system |
US20190352090A1 (en) * | 2017-01-05 | 2019-11-21 | Roy MALLADY | Container Transporter and Methods |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113763341A (en) * | 2021-08-26 | 2021-12-07 | 西门子工厂自动化工程有限公司 | Visual realization method and device of car loader and computer readable storage medium |
CN116251214A (en) * | 2022-10-10 | 2023-06-13 | 南京景曜智能科技有限公司 | Whole-flow disinfection monitoring method for intelligent disinfection robot |
CN116251214B (en) * | 2022-10-10 | 2024-05-17 | 南京景曜智能科技有限公司 | Whole-flow disinfection monitoring method for intelligent disinfection robot |
CN116425088A (en) * | 2023-06-09 | 2023-07-14 | 未来机器人(深圳)有限公司 | Cargo carrying method, device and robot |
CN116425088B (en) * | 2023-06-09 | 2023-10-24 | 未来机器人(深圳)有限公司 | Cargo carrying method, device and robot |
Also Published As
Publication number | Publication date |
---|---|
CN112278891B (en) | 2021-04-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112278891B (en) | Carriage internal attitude detection method | |
CN109064495B (en) | Bridge deck vehicle space-time information acquisition method based on fast R-CNN and video technology | |
CN112418103B (en) | Bridge crane hoisting safety anti-collision system and method based on dynamic binocular vision | |
CN107314741A (en) | Measurement of cargo measuring method | |
Knyaz et al. | Photogrammetric techniques for road surface analysis | |
AU2017100306A4 (en) | Train Wagon 3D Profiler | |
US20220366596A1 (en) | Positioning system for measuring position of moving body using image capturing apparatus | |
US20210101747A1 (en) | Positioning apparatus capable of measuring position of moving body using image capturing apparatus | |
CN114022537B (en) | Method for analyzing loading rate and unbalanced loading rate of vehicle in dynamic weighing area | |
US20220366599A1 (en) | Positioning system and moving body for measuring position of moving body using image capturing apparatus | |
EP3924794B1 (en) | Autonomous mobile aircraft inspection system | |
CN107796373A (en) | A kind of distance-finding method of the front vehicles monocular vision based on track plane geometry model-driven | |
CN111652936A (en) | Three-dimensional sensing and stacking planning method and system for open container loading | |
CN116991162A (en) | Autonomous positioning visual identification method for non-line inspection robot | |
CN116757350A (en) | Unmanned forklift cluster scheduling processing system | |
US20210312661A1 (en) | Positioning apparatus capable of measuring position of moving body using image capturing apparatus | |
CN112684797B (en) | Obstacle map construction method | |
AU2013237637A1 (en) | Train Wagon 3D Profiler | |
CN115930791B (en) | Multi-mode data container cargo position and size detection method | |
CN115077385B (en) | Unmanned container pose measuring method and measuring system thereof | |
CN116692522A (en) | Automatic loading method for loading station chute based on monocular vision recognition | |
CN114279452B (en) | Unmanned integrated card trailer posture detection method and detection system | |
CN112179336A (en) | Automatic luggage transportation method based on binocular vision and inertial navigation combined positioning | |
CN114119742A (en) | Method and device for positioning container truck based on machine vision | |
Meyer et al. | Automatic extrinsic rotational calibration of lidar sensors and vehicle orientation estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||