CN115222804A - Industrial material cage identification and positioning method based on depth camera point cloud data

Industrial material cage identification and positioning method based on depth camera point cloud data

Info

Publication number
CN115222804A
CN115222804A
Authority
CN
China
Prior art keywords
point cloud
cloud data
cage
point
depth camera
Prior art date
Legal status
Pending
Application number
CN202211079835.1A
Other languages
Chinese (zh)
Inventor
周军
杨卓
龙羽
徐菱
Current Assignee
Chengdu Ruixinxing Technology Co ltd
Original Assignee
Chengdu Ruixinxing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Ruixinxing Technology Co ltd
Priority to CN202211079835.1A
Publication of CN115222804A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/762 Pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V 10/764 Pattern recognition or machine learning using classification, e.g. of video objects
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Forklifts And Lifting Vehicles (AREA)

Abstract

The invention discloses an industrial material cage identification and positioning method based on depth camera point cloud data, which comprises the following steps: S1: a depth camera acquires scene point cloud data; S2: the acquired scene point cloud data are separated by Euclidean distance; S3: the depth camera acquires template point cloud data of the material cage; S4: the separated point cloud data are matched against the acquired template point cloud data of the material cage to obtain the material cage point cloud; S5: the material cage point cloud is separated a second time to obtain the point cloud of the cage front face; S6: the center position and central-axis angle of the material cage are determined from the front-face point cloud data to obtain the pose of the material cage. When the forklift moves in front of the material, the depth camera begins collecting point cloud data, which are used to identify the position of the center hole of the material cage for navigation and positioning when the cage is picked up by the forks; a parallel processing algorithm executes the template matching algorithm, which improves computational efficiency and reduces the matching time of the material cage point cloud.

Description

Industrial material cage identification and positioning method based on depth camera point cloud data
Technical Field
The invention relates to the technical field of point cloud processing and target identification, in particular to an industrial material cage identification and positioning method based on depth camera point cloud data.
Background
Intelligent positioning and navigation functions are increasingly widely applied in the field of intelligent warehousing, and mounting a 3D depth camera on a forklift to identify and position a target material cage is one such technology. With the rapid development of China's economy, traditional enterprises that rely on manual transport have begun to shift from mechanization toward automation and intelligence, and the demand of in-factory logistics and warehousing for highly flexible automated handling equipment is growing rapidly. As a main force among logistics handling equipment, the forklift is gradually adopting advanced technologies such as intelligent identification and autonomous navigation and positioning, and the intelligent forklift of this invention was researched and designed against this background.
A forklift generally transports material by forking and unloading a goods cage. The prior art cannot automatically pick up a material cage with a large deflection angle and requires manual operation; the degree of automation and intelligence is low, which increases an enterprise's personnel cost and harms its interests. With the progress of science and technology, a traditional forklift that cannot complete operations automatically can no longer satisfy the logistics industry's demand for efficient, long-duration operation. Through the inventor's long-term research, the present invention provides an industrial material cage identification and positioning method based on depth camera point cloud data.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an industrial material cage identification and positioning method based on depth camera point cloud data.
The purpose of the invention is realized by the following technical scheme: an industrial material cage identification and positioning method based on depth camera point cloud data comprises the following steps:
S1: acquiring scene point cloud data through a depth camera and preprocessing the scene point cloud data;
S2: carrying out Euclidean distance separation on the acquired scene point cloud data;
S3: acquiring template point cloud data of the material cage through the depth camera;
S4: matching the separated point cloud data against the acquired template point cloud data of the material cage to realize material cage identification and obtain the material cage point cloud;
S5: performing a secondary separation on the material cage point cloud to obtain the front-face point cloud of the material cage;
S6: determining the center position and central-axis angle of the material cage from the front-face point cloud data to obtain the pose of the material cage, the pose being used for navigation and positioning when the cage is picked up by the forks.
Preferably, in step S1, the depth camera is fixed at its installation position, the relative relationship between the depth camera and a standard coordinate system is obtained through measurement, the point cloud data are preprocessed, and the point cloud is separated from the ground by height.
Preferably, in step S2, the camera point cloud data are divided into N classes according to distance, and the separation distance and the size of each class of point cloud data are set.
Preferably, the depth camera point cloud data are clustered according to Euclidean distance in the following steps:
A1: perform distance clustering on the input point cloud data by the Euclidean method;
A2: create a Kd-Tree representation P of the input unordered point cloud to enable fast search;
A3: set up an empty cluster list C and a set of points Q waiting to be checked.
Preferably, for each point p_i in P in A2, the following steps are performed:
B1: add p_i to the current point set Q;
B2: find the set P_i of all points inside a sphere of radius r centered on p_i; for each point in P_i that has not yet been processed, add it to Q; when all points in Q have been processed, add Q to the cluster list C and reset Q to empty;
B3: when every point p_i in P has been processed, C holds all the clusters.
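For illustration only, the clustering in steps A1 to B3 can be sketched as follows; the sketch assumes the point cloud is an (N, 3) NumPy array and uses SciPy's cKDTree as the Kd-Tree representation, with the radius r and minimum cluster size as placeholder parameters rather than values fixed by the invention.

```python
# Illustrative sketch of steps A1-B3 (not the patented implementation):
# Euclidean clustering with a Kd-tree, here SciPy's cKDTree.
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points: np.ndarray, r: float, min_size: int = 1):
    """Cluster an (N, 3) point array by Euclidean distance threshold r."""
    tree = cKDTree(points)                 # A2: Kd-tree for fast radius search
    processed = np.zeros(len(points), dtype=bool)
    clusters = []                          # A3: empty cluster list C
    for i in range(len(points)):
        if processed[i]:
            continue
        queue = [i]                        # B1: seed the point set Q
        processed[i] = True
        k = 0
        while k < len(queue):              # grow Q until every point is checked
            # B2: all points inside a sphere of radius r around the current point
            for j in tree.query_ball_point(points[queue[k]], r):
                if not processed[j]:
                    processed[j] = True
                    queue.append(j)
            k += 1
        if len(queue) >= min_size:
            clusters.append(queue)         # B2: finished Q joins the cluster list C
    return clusters                        # B3: C now holds all clusters
```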
Preferably, in step S4, each class of segmented point cloud data is matched against the acquired template point cloud data of the material cage using a data registration algorithm based on nearest-neighbor iteration, and the class with the highest matching score is taken as the point cloud where the material cage is located. The matching formula is

$$ f(R, T) = \frac{1}{n} \sum_{i=1}^{n} \left\| q_i - (R p_i + T) \right\|^2 $$

where Q = {q_i}, i = 1, 2, 3, ..., n is the first point set, i.e. the point cloud data of each class, and P = {p_i}, i = 1, 2, 3, ..., n is the second point set, i.e. the point cloud data of the material cage template; R is the rotation matrix and T the translation matrix, i.e. the rotation and translation parameters between the point cloud data to be registered and the reference point cloud data. Aligning and registering the two point sets is thereby converted into minimizing this objective function, whose minimum gives the best match between the two point sets.
Preferably, the nearest-neighbor iteration comprises the following steps:
E1: for each point in P, calculating the corresponding closest point in the point set Q;
E2: obtaining the rigid-body transformation that minimizes the average distance between the corresponding points, yielding the translation and rotation parameters;
E3: applying the translation and rotation parameters obtained in E2 to P to obtain a new transformed point set P';
E4: if the average distance between the new transformed point set and the reference point set is smaller than a given threshold, stopping the iterative computation; otherwise taking the new transformed point set P' as the new input and continuing the iteration until the requirement is met.
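For illustration, a minimal sketch of the nearest-neighbor iteration E1 to E4 follows; the SVD-based rigid-transform solver used for E2 is a standard choice assumed here, not a detail taken from the invention, and max_iter and tol are placeholders.

```python
# Illustrative sketch of the nearest-neighbor iteration E1-E4
# (point-to-point ICP) with an assumed SVD-based solver for E2.
import numpy as np
from scipy.spatial import cKDTree

def icp(P: np.ndarray, Q: np.ndarray, max_iter: int = 50, tol: float = 1e-4):
    """Register source P (n, 3) onto reference Q (m, 3); returns R, T, P'."""
    tree = cKDTree(Q)
    R_total, T_total = np.eye(3), np.zeros(3)
    P_new = P.copy()
    for _ in range(max_iter):
        dists, idx = tree.query(P_new)      # E1: closest point in Q for each point
        matched = Q[idx]
        # E2: rigid transform minimizing the mean distance of corresponding points
        cp, cq = P_new.mean(axis=0), matched.mean(axis=0)
        H = (P_new - cp).T @ (matched - cq)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                  # reflection-safe rotation
        T = cq - R @ cp
        P_new = P_new @ R.T + T             # E3: new transformed point set P'
        R_total, T_total = R @ R_total, R @ T_total + T
        if dists.mean() < tol:              # E4: stop once the mean distance is small
            break
    return R_total, T_total, P_new
```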
The invention has the following advantages: when the forklift moves in front of the material, the depth camera begins acquiring point cloud data, and the position of the center hole of the material cage is identified from the point cloud data so that the cage can be navigated to and positioned when it is picked up by the forks. The template matching algorithm is executed with a parallel processing algorithm, which improves computational efficiency and reduces the matching time of the material cage point cloud. By cropping the depth camera's raw point cloud in height, the point cloud is separated from the ground, which avoids the influence of ground points on template matching and improves matching efficiency.
Drawings
FIG. 1 is a schematic flow diagram of the identification algorithm;
fig. 2 is a schematic flow diagram of the positioning algorithm.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings of the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be obtained by a person skilled in the art without inventive efforts based on the embodiments of the present invention, are within the scope of protection of the present invention.
In addition, the embodiments of the present invention and the features of the embodiments may be combined with each other without conflict.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, or orientations or positional relationships that the products of the present invention conventionally lay out when in use, or orientations or positional relationships that are conventionally understood by those skilled in the art, which are merely for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the device or element referred to must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like are used merely to distinguish one description from another, and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should also be noted that, unless otherwise explicitly specified or limited, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly and may, for example, be fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In this embodiment, referring to fig. 1-2 together, an industrial material cage identification and positioning method based on depth camera point cloud data includes the following steps:
S1: acquiring scene point cloud data through a depth camera and preprocessing the scene point cloud data;
S2: carrying out Euclidean distance separation on the acquired scene point cloud data;
S3: acquiring template point cloud data of the material cage through the depth camera;
S4: matching the separated point cloud data against the acquired template point cloud data of the material cage to realize material cage identification and obtain the material cage point cloud;
S5: performing a secondary separation on the material cage point cloud to obtain the front-face point cloud of the material cage;
S6: determining the center position and central-axis angle of the material cage from the front-face point cloud data to obtain the pose of the material cage, the pose being used for navigation and positioning when the cage is picked up by the forks. When the forklift moves in front of the material, the depth camera begins acquiring point cloud data, and the position of the center hole of the material cage is identified from the point cloud data for navigation and positioning during pick-up. A parallel processing algorithm executes the template matching algorithm to improve computational efficiency and reduce the matching time of the material cage point cloud. The depth camera's raw point cloud is cropped in height to separate it from the ground, which avoids the influence of ground points on template matching and improves matching efficiency.
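As a hypothetical illustration of step S6, which the description does not spell out, the center position may be taken as the centroid of the front-face point cloud and the central-axis angle derived from its plane normal; the coordinate convention and the PCA-based normal estimate below are assumptions.

```python
# Hypothetical illustration of step S6: centroid as the center position and a
# PCA plane normal for the central-axis angle. The forklift frame convention
# (x forward, y left, z up) is an assumption, not taken from the invention.
import numpy as np

def cage_pose(front: np.ndarray):
    """front: (n, 3) front-face points in the forklift reference frame."""
    center = front.mean(axis=0)                # center position of the cage front
    # Smallest-eigenvalue eigenvector of the covariance = front-face plane normal
    eigvals, eigvecs = np.linalg.eigh(np.cov((front - center).T))
    normal = eigvecs[:, 0]
    if normal[0] > 0:                          # point the normal back at the camera
        normal = -normal
    yaw = np.arctan2(normal[1], normal[0])     # central-axis angle about z
    return center, yaw
```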
Further, in step S1, the depth camera is fixed at its installation position, the relative relationship between the depth camera and a standard coordinate system is obtained through measurement, the point cloud data are preprocessed, and the point cloud is separated from the ground by height. Specifically, after the camera's position is fixed, the depth camera collects a close-range point cloud of the material cage's front face, which serves as the source data for template matching. The camera point cloud published by an Intel RealSense depth camera is received over a communication interface. Meanwhile, the raw point cloud must be preprocessed: the camera depth point cloud data undergo a coordinate conversion in which the coordinates of each point in the camera coordinate system are converted into a reference coordinate system with the forklift as origin, and the camera range is limited to reduce the influence of out-of-range point cloud noise on the material cage point cloud.
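A minimal sketch of this preprocessing, under the stated coordinate conversion and range limiting, might look as follows; the extrinsic parameters and thresholds are placeholders to be replaced by the values measured at installation.

```python
# Minimal sketch of the step S1 preprocessing: transform camera points into the
# forklift reference frame, drop ground points by height, and limit the range.
# R_cam2fork, t_cam2fork and the thresholds are placeholder values.
import numpy as np

def preprocess(points_cam: np.ndarray, R_cam2fork: np.ndarray,
               t_cam2fork: np.ndarray, z_ground: float = 0.05,
               max_range: float = 3.0) -> np.ndarray:
    """points_cam: (n, 3) points in the camera coordinate system."""
    pts = points_cam @ R_cam2fork.T + t_cam2fork     # camera -> forklift frame
    keep = pts[:, 2] > z_ground                      # cut the cloud at ground height
    keep &= np.linalg.norm(pts, axis=1) < max_range  # limit the camera range
    return pts[keep]
```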
In this embodiment, in step S2, the camera point cloud data are divided into N classes according to distance, and the separation distance and the size of each class of point cloud data are set. Specifically, since many classes of point clouds exist within the camera's field of view, Euclidean distance clustering is performed on the processed point cloud to divide it into different classes by distance. The segmented classes are then used for the subsequent template matching, so that the class with the highest matching degree to the template, namely the material cage point cloud within the camera point cloud data, is identified.
Further, clustering the depth camera point cloud data according to Euclidean distance comprises the following steps:
A1: perform distance clustering on the input point cloud data by the Euclidean method;
A2: create a Kd-Tree representation P of the input unordered point cloud to enable fast search;
A3: set up an empty cluster list C and a set of points Q waiting to be checked.
Still further, for each point p_i in P in A2, the following steps are performed: B1: add p_i to the current point set Q; B2: find the set P_i of all points inside a sphere of radius r centered on p_i; for each point in P_i that has not yet been processed, add it to Q; when all points in Q have been processed, add Q to the cluster list C and reset Q to empty; B3: when every point p_i in P has been processed, C holds all the clusters.
In this embodiment, referring to fig. 2, in step S4 each class of segmented point cloud data is matched against the acquired template point cloud data of the material cage using a data registration algorithm based on nearest-neighbor iteration, and the class with the highest matching score is taken as the point cloud where the material cage is located. The matching formula is

$$ f(R, T) = \frac{1}{n} \sum_{i=1}^{n} \left\| q_i - (R p_i + T) \right\|^2 $$

where Q = {q_i}, i = 1, 2, 3, ..., n is the first point set, i.e. the point cloud data of each class, and P = {p_i}, i = 1, 2, 3, ..., n is the second point set, i.e. the point cloud data of the material cage template; R is the rotation matrix and T the translation matrix, i.e. the rotation and translation parameters between the point cloud data to be registered and the reference point cloud data. Aligning and registering the two point sets is converted into minimizing this objective function so that the best match between the two point sets is obtained. Further, E1: for each point in P, calculate the corresponding closest point in the point set Q; E2: obtain the rigid-body transformation that minimizes the average distance between the corresponding points, yielding the translation and rotation parameters; E3: apply the translation and rotation parameters obtained in step E2 to P to obtain a new transformed point set P'; E4: if the average distance between the new transformed point set and the reference point set is smaller than a given threshold, stop the iterative computation; otherwise take the new transformed point set P' as the new input and continue iterating until the requirement is met. Specifically, template matching is performed on the different point cloud classes obtained by Euclidean distance segmentation using the nearest-neighbor iteration method: corresponding points between the source point cloud and the target point cloud are obtained, a rotation and translation matrix is constructed from these corresponding points, the source point cloud is transformed into the coordinate system of the target point cloud with the obtained matrix, and the error function between the transformed source point cloud and the target point cloud is estimated. If the error value exceeds the threshold, the procedure iterates until the given error requirement is met. In this way the matching degree between the camera point cloud and the template point cloud is calculated, and the point cloud with the highest matching degree is the material cage.
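For illustration, the per-class matching loop described above might be sketched as follows; Open3D's point-to-point ICP stands in for the nearest-neighbor iteration, and the correspondence distance and fitness threshold are assumptions rather than values from the invention.

```python
# Illustrative matching loop for step S4: register every Euclidean cluster
# against the cage template and keep the best score.
import numpy as np
import open3d as o3d

def find_cage(clusters, template, max_corr_dist=0.05, min_fitness=0.6):
    """clusters: list of (n_i, 3) arrays; template: (m, 3) cage template points."""
    tpl = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(template))
    best = None
    for pts in clusters:  # the clusters could be matched in parallel processes
        src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))
        result = o3d.pipelines.registration.registration_icp(
            src, tpl, max_corr_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        if best is None or result.fitness > best[1]:
            best = (pts, result.fitness, result.transformation)
    # the highest-scoring cluster is taken as the material cage point cloud
    return best if best is not None and best[1] >= min_fitness else None
```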
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that various changes in the embodiments and/or modifications of the invention can be made, and equivalents and modifications of some features of the invention can be made without departing from the spirit and scope of the invention.

Claims (7)

1. An industrial material cage identification and positioning method based on depth camera point cloud data, characterized in that the method comprises the following steps:
S1: acquiring scene point cloud data through a depth camera and preprocessing the scene point cloud data;
S2: carrying out Euclidean distance separation on the acquired scene point cloud data;
S3: acquiring template point cloud data of the material cage through the depth camera;
S4: matching the separated point cloud data against the acquired template point cloud data of the material cage to realize material cage identification and obtain the material cage point cloud;
S5: performing a secondary separation on the material cage point cloud to obtain the front-face point cloud of the material cage;
S6: determining the center position and central-axis angle of the material cage from the front-face point cloud data to obtain the pose of the material cage, wherein the pose is used for navigation and positioning when the material cage is picked up by the forks.
2. The industrial material cage identification and positioning method based on the point cloud data of the depth camera as claimed in claim 1, wherein: in the step S1, the depth camera needs to be fixed at a position, a relative relationship between the depth camera and a standard coordinate system is obtained through measurement, point cloud data is preprocessed, and the point cloud data is separated from the ground in height.
3. The industrial material cage identification and positioning method based on depth camera point cloud data according to claim 2, characterized in that: in the step S2, the camera point cloud data are divided into N classes according to distance, and the separation distance and the size of each class of point cloud data are set.
4. The industrial material cage identification and positioning method based on depth camera point cloud data according to claim 3, characterized in that clustering the depth camera point cloud data according to Euclidean distance comprises the following steps:
A1: performing distance clustering on the input point cloud data by the Euclidean method;
A2: creating a Kd-Tree representation P of the input unordered point cloud to enable fast search;
A3: setting up an empty cluster list C and a set of points Q waiting to be checked.
5. The industrial material cage identification and positioning method based on depth camera point cloud data according to claim 4, characterized in that for each point p_i in P in said step A2, the following steps are executed:
B1: adding p_i to the current point set Q;
B2: finding the set P_i of all points inside a sphere of radius r centered on p_i; for each point in P_i that has not yet been processed, adding it to Q; when all points in Q have been processed, adding Q to the cluster list C and resetting Q to empty;
B3: when every point p_i in P has been processed, C holds all the clusters.
6. The industrial material cage identification and positioning method based on depth camera point cloud data according to claim 1, characterized in that in the step S4, each class of segmented point cloud data is matched against the acquired template point cloud data of the material cage using a data registration algorithm based on nearest-neighbor iteration, and the class with the highest matching score is taken as the point cloud where the material cage is located, the matching formula being:

$$ f(R, T) = \frac{1}{n} \sum_{i=1}^{n} \left\| q_i - (R p_i + T) \right\|^2 $$

where Q = {q_i}, i = 1, 2, 3, ..., n denotes the first point set, i.e. the point cloud data of each class; P = {p_i}, i = 1, 2, 3, ..., n denotes the second point set, i.e. the point cloud data of the material cage template; R is the rotation matrix and T the translation matrix, i.e. the rotation and translation parameters between the point cloud data to be registered and the reference point cloud data; and n is the number of points in the actual point clouds. Aligning and registering the two point sets is converted into minimizing this objective function so that the best match between the two point sets is obtained.
7. The industrial material cage identification and positioning method based on depth camera point cloud data according to claim 6, characterized by comprising the following steps:
step E1: calculating, for each point in P, the corresponding closest point in the point set Q;
step E2: obtaining the rigid-body transformation that minimizes the average distance between the corresponding points, yielding the translation and rotation parameters;
step E3: applying the translation and rotation parameters obtained in step E2 to P to obtain a new transformed point set P';
step E4: if the average distance between the new transformed point set and the reference point set is smaller than a given threshold, stopping the iterative computation; otherwise taking the new transformed point set P' as the new input and continuing the iteration until the requirement is met.
CN202211079835.1A 2022-09-05 2022-09-05 Industrial material cage identification and positioning method based on depth camera point cloud data Pending CN115222804A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211079835.1A CN115222804A (en) 2022-09-05 2022-09-05 Industrial material cage identification and positioning method based on depth camera point cloud data


Publications (1)

Publication Number Publication Date
CN115222804A (en) 2022-10-21

Family

ID=83617186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211079835.1A Pending CN115222804A (en) 2022-09-05 2022-09-05 Industrial material cage identification and positioning method based on depth camera point cloud data

Country Status (1)

Country Link
CN (1) CN115222804A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111950500A (en) * 2020-08-21 2020-11-17 成都睿芯行科技有限公司 Real-time pedestrian detection method based on improved YOLOv3-tiny in factory environment
CN114419150A (en) * 2021-12-21 2022-04-29 未来机器人(深圳)有限公司 Forklift goods taking method and device, computer equipment and storage medium
CN114694134A (en) * 2022-03-23 2022-07-01 成都睿芯行科技有限公司 Tray identification and positioning method based on depth camera point cloud data
CN114820391A (en) * 2022-06-28 2022-07-29 山东亚历山大智能科技有限公司 Point cloud processing-based storage tray detection and positioning method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20221021