CN115995013A - Covering agent adding method, covering agent adding device, computer equipment and storage medium

Info

Publication number: CN115995013A
Application number: CN202310273351.9A
Authority: CN (China)
Legal status: Pending
Prior art keywords: point cloud, clustering, covering agent, points, target
Other languages: Chinese (zh)
Inventors: 石珣, 徐宗立, 徐恒翼, 刘明灯, 周鼎, 林启森, 魏云吉, 朱莹莹
Current Assignee: Jiangsu Jinheng Information Technology Co Ltd
Original Assignee: Jiangsu Jinheng Information Technology Co Ltd
Application filed by Jiangsu Jinheng Information Technology Co Ltd
Priority to: CN202310273351.9A

Landscapes

  • Image Analysis (AREA)

Abstract

The application provides a covering agent adding method, a covering agent adding device, computer equipment and a storage medium. The covering agent adding method comprises the following steps: acquiring a target point cloud image, transmitted by an image acquisition device, of a covering agent placement area, wherein the point cloud points in the target point cloud image represent the position information of the covering agent stored in the covering agent placement area; clustering the point cloud points in the target point cloud image to obtain at least one point cloud clustering area; determining grabbing position information of a target covering agent according to the point cloud clustering areas; and controlling a mechanical arm to grab the target covering agent from the covering agent placement area according to the grabbing position information and add it into the tundish. Each point cloud clustering area obtained by the clustering characterizes the placement area of one bag of covering agent. The method therefore segments the placement areas of individual bags of covering agent from the target point cloud image and controls the mechanical arm to perform the grabbing and adding operations, so that the covering agent is added safely and the adding efficiency is improved.

Description

Covering agent adding method, covering agent adding device, computer equipment and storage medium
Technical Field
The application relates to the technical field of metallurgical steelmaking, in particular to a covering agent adding method, a covering agent adding device, computer equipment and a storage medium.
Background
Continuous casting is an important link in steel production, and the tundish, as a transitional vessel between the ladle and the crystallizer, is an important component of a continuous casting machine. During continuous casting, a certain amount of covering agent needs to be added onto the molten steel in the tundish in order to improve the quality of the molten steel. The covering agent insulates the molten steel, prevents its secondary oxidation, and forms a slag layer of a certain thickness on the steel surface that adsorbs nonmetallic inclusions, refractory particles and other suspended matter floating on the surface, thereby purifying the molten steel.
The conventional way of adding the covering agent is manual: an operator stands on the tundish operating platform and intermittently throws covering agent into the tundish by hand. When the ladle is opened, a large amount of covering agent is required; about 50 bags (roughly 500 kg) must be thrown in manually within 10 minutes, so the manual workload is heavy and the efficiency is low. If the covering agent is not added in time and the demands of fast-paced, high-quality continuous casting production cannot be met, the quality of the molten steel drops, which in turn affects billet quality and the productivity of the production line. In addition, the tundish operating platform is a typical steel-smelting area with risks such as high temperature, heavy dust, molten steel splashing and explosion; it is a hazardous working area, so manually adding the covering agent carries a high operational risk.
Based on this, there is a need for a method that can safely and effectively add a covering agent to solve the problems of low manual operation efficiency and high safety risk in the process of adding the covering agent.
Disclosure of Invention
The application provides a covering agent adding method, a covering agent adding device, computer equipment and a storage medium, which can improve the covering agent adding efficiency in the continuous casting production process and reduce the operation safety risk.
In a first aspect, the present application provides a covering agent adding method applied to a processor, where the processor is connected to an image capturing device and a mechanical arm, respectively, and includes:
acquiring a target point cloud image of a covering agent placement area transmitted by image acquisition equipment; the point cloud points in the target point cloud image are used for representing the position information of the covering agent stored in the covering agent placement area;
clustering point cloud points in the target point cloud image to obtain at least one point cloud clustering area; each point cloud clustering area represents a placement area of a bag of covering agent;
according to the point cloud clustering area, determining grabbing position information of the target covering agent;
controlling the mechanical arm to grab the target covering agent from the covering agent placement area according to the grabbing position information, and to add the target covering agent into the tundish.
In some embodiments, clustering the point cloud points in the target point cloud image to obtain at least one point cloud clustering area includes:
determining a clustering starting point from non-clustered point cloud points in the target point cloud image; the clustering starting point is a point cloud point with the minimum curvature value in non-clustered point cloud points;
and determining clustered point cloud points belonging to the same point cloud clustering area with the clustering starting point from the non-clustered point cloud points until all the point cloud points in the target point cloud image are clustered to the corresponding point cloud clustering areas, so as to obtain at least one point cloud clustering area.
In some embodiments, determining clustered point cloud points from the non-clustered point cloud points that belong to the same point cloud clustering region as the clustered start point includes:
according to normal included angles between non-clustered point cloud points and clustering starting points, candidate point cloud points are determined from the non-clustered point cloud points; the normal included angle between the candidate point cloud point and the clustering starting point is smaller than a preset included angle threshold;
determining cluster point cloud points from the candidate point cloud points according to the curvature value of the candidate point cloud points; the clustering point cloud points are point cloud points with curvature values not smaller than a preset curvature threshold value in the candidate point cloud points;
If the candidate point cloud points have search point cloud points with curvature values smaller than the curvature threshold value, determining new candidate point cloud points based on the search point cloud points, and continuously acquiring cluster point cloud points through the new candidate point cloud points until the new candidate point cloud points do not have the search point cloud points.
In some of these embodiments, determining candidate point cloud points from the non-clustered point cloud points based on normal angles between the non-clustered point cloud points and the clustering start point includes:
constructing a spatial distribution structure tree corresponding to the target point cloud image; the spatial distribution structure tree comprises a plurality of spatial division areas, wherein each spatial division area comprises at least one point cloud point;
based on a spatial distribution structure tree, acquiring a plurality of adjacent points of a clustering starting point from non-clustered point cloud points by adopting a mean value clustering method;
and determining candidate point cloud points from the plurality of adjacent points according to normal included angles between each adjacent point and the clustering starting point.
In some embodiments, if the number of the point cloud clustering areas is a plurality, determining the capturing position information of the target covering agent according to the point cloud clustering areas includes:
calculating the centroid of each point cloud clustering area to obtain a plurality of pieces of centroid coordinate information;
And determining grabbing position information of the target covering agent from the plurality of centroid coordinate information based on a preset centroid screening rule.
In some of these embodiments, determining the grasping location information of the target covering agent from the plurality of centroid coordinate information based on a preset centroid screening rule includes:
determining at least one candidate point cloud clustering area according to the space geometric information of the point cloud clustering area, the centroid coordinate information of the point cloud clustering area and a preset limit threshold value;
and determining the grabbing position information of the target covering agent according to the centroid coordinate information of the candidate point cloud clustering area and centroid screening rules.
In some embodiments, acquiring a target point cloud image of the covering agent placement area transmitted by the image acquisition device includes:
acquiring an original point cloud image of a covering agent placement area acquired by image acquisition equipment;
performing downsampling treatment on the original point cloud image to obtain an intermediate point cloud image;
and extracting a point cloud bounding box from the intermediate point cloud image to obtain a target point cloud image.
In a second aspect, the present application provides a covering agent adding device integrated in a processor, the processor being connected to an image capturing device and a mechanical arm, respectively, comprising:
The image acquisition module is used for acquiring a target point cloud image of the covering agent placement area transmitted by the image acquisition equipment; the point cloud points in the target point cloud image are used for representing the position information of the covering agent stored in the covering agent placement area;
the point cloud clustering module is used for carrying out clustering processing on point cloud points in the target point cloud image to obtain at least one point cloud clustering area; each point cloud clustering area represents a placement area of a bag of covering agent;
the grabbing point positioning module is used for determining grabbing position information of the target covering agent according to the point cloud clustering area;
and the adding control module is used for controlling the mechanical arm to grab the target covering agent from the covering agent placement area according to the grabbing position information and adding the target covering agent into the tundish.
In a third aspect, the present application provides a computer device comprising a memory storing a computer program and a processor implementing the steps of the method for adding a covering agent as described in any one of the first aspects above when the computer program is called from the memory and executed by the processor.
In a fourth aspect, the present application provides a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the covering agent addition method as set forth in any one of the first aspects above.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the covering agent addition method as set out in any of the first aspects above.
The technical scheme provided by the application can at least achieve the following beneficial effects:
With the covering agent adding method, covering agent adding device, computer equipment and storage medium described above, the target point cloud image of the covering agent placement area transmitted by the image acquisition device is acquired; the point cloud points in the target point cloud image are clustered to obtain at least one point cloud clustering area; the grabbing position information of the target covering agent is determined according to the point cloud clustering areas; and the mechanical arm is controlled to grab the target covering agent from the covering agent placement area according to the grabbing position information and add it into the tundish. The point cloud points in the target point cloud image represent the position information of the covering agent stored in the placement area, and each clustered point cloud clustering area characterizes the placement area of one bag of covering agent. The covering agent adding operation therefore no longer needs to be performed manually: visual acquisition is combined with image processing, the target point cloud image of the placement area is acquired by the image acquisition device, and the point cloud points are divided by clustering into at least one point cloud clustering area, so that the covering agent placement area is segmented and the placement area corresponding to a single bag of covering agent is determined. Further, once the placement area of a single bag is obtained, the target covering agent to be grabbed and its grabbing position information can be determined, so that the mechanical arm can be controlled to grab the target covering agent accurately based on this information and add it into the tundish. In industrial production, the covering agent can thus be added safely while the adding efficiency is improved.
Drawings
Fig. 1 is a schematic view of an application scenario of a covering agent adding method according to an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of a computer device according to an exemplary embodiment of the present application;
FIG. 3 is a flow chart illustrating a method of adding a covering agent according to an exemplary embodiment of the present application;
FIG. 4 is a point cloud image of a covering agent placement region, as shown in an exemplary embodiment of the present application;
FIG. 5 is a flow chart of a point cloud point clustering process according to an exemplary embodiment of the present application;
FIG. 6 is a schematic view of a point cloud clustering result of a target point cloud image according to an exemplary embodiment of the present application;
fig. 7 is a schematic structural view of a covering agent adding device according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Before explaining the covering agent adding method provided by the embodiment of the application, an application scene and related hardware equipment of the embodiment of the application are described.
The covering agent adding method provided by the application can be applied to the application scenario shown in fig. 1, in which the computer device 200 is connected to the image acquisition device 100 and the mechanical arm 300, respectively. The image acquisition device 100 is configured to acquire a point cloud image of the covering agent placement area and transmit the point cloud image to the computer device 200. The computer device 200 processes the point cloud image, divides the placement area of each bag of covering agent, and determines the target covering agent to be grabbed and its grabbing position information. Further, the computer device 200 transmits the grabbing position information to the mechanical arm 300, so that the mechanical arm 300 accurately grabs the target covering agent from the covering agent placement area according to the grabbing position information and adds the target covering agent into the tundish.
In some embodiments, the image capturing device 100 is installed in the covering agent placement area, so that a point cloud image of the entire covering agent placement area can be clearly obtained, and the specific installation position is not limited in the embodiments of the present application.
As one example, the image capture device 100 may be a 3D camera, a depth camera, a laser sensor, or the like.
In some embodiments, multiple mechanical arms 300 may be arranged along the route from the covering agent placement area to the tundish according to production needs: a first mechanical arm at the placement area grabs the target covering agent and hands it to a transfer mechanical arm arranged along the route; the transfer mechanical arm passes it to a second mechanical arm arranged at the tundish; and the second mechanical arm finally adds the covering agent into the tundish.
In the process of grabbing, transferring and adding the target covering agent, the number and the arrangement positions of the mechanical arms are not limited in the embodiment of the application.
In some embodiments, the above-mentioned mechanical arm 300 may also be mounted on a movable industrial robot. The computer device 200 communicates with the industrial robot so that the robot controls the mechanical arm to grab the target covering agent in the covering agent placement area, transport it to the tundish, and then add it into the tundish.
Referring to fig. 2, the computer device 200 may include at least one processor 210, a memory 220, a communication bus 230, and at least one communication interface 240.
The processor 210 may be a general-purpose central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), a microprocessor, or may be one or more integrated circuits for implementing the aspects of the present Application, such as an Application-specific integrated circuit (ASIC), a programmable logic device (Programmable Logic Device, PLD), or a combination thereof. The PLD may be a complex programmable logic device (Complex Programmable Logic Device, CPLD), a Field programmable gate array (Field-Programmable Gate Array, FPGA), general array logic (Generic Array Logic, GAL), or any combination thereof.
In some embodiments, processor 210 may include one or more CPUs. The computer device 200 may include a plurality of processors 210. Each of these processors 210 may be a single-Core Processor (CPU) or a multi-core processor (multi-CPU).
It is noted that the processor 210 may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
The Memory 220 may be, but is not limited to, a Read-Only Memory (ROM) or other type of static storage device capable of storing static information and instructions, a random access Memory (Random Access Memory, RAM) or other type of dynamic storage device capable of storing information and instructions, an EEPROM, a compact disk (Compact Disc Read-Only Memory, CD-ROM) or other optical disk storage, optical disk storage (including compact disk, laser disk, optical disk, digital versatile disk, blu-ray disk, etc.), magnetic disk storage media, or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
Alternatively, the memory 220 may be stand alone and coupled to the processor 210 via the communication bus 230; memory 220 may also be integrated with processor 210.
Where the communication bus 230 is used to transfer information between components (e.g., between the processor 210 and the memory 220), the communication bus 230 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one communication bus is shown in fig. 2, but this does not mean that there is only one bus or only one type of bus.
Wherein the communication interface 240 is used for the computer device 200 to communicate with other devices or communication networks. The communication interface 240 includes a wired communication interface or a wireless communication interface. For example, the wired communication interface may be an ethernet interface, which may be an optical interface, an electrical interface, or a combination thereof; the wireless communication interface may be a wireless local area network (Wireless Local Area Networks, WLAN) interface, a cellular network communication interface, a combination thereof, or the like.
In some embodiments, the computer device 200 may also include output devices and input devices (not shown in FIG. 2). Wherein the output device is in communication with the processor 210, information may be displayed in a variety of ways; the input device is in communication with the processor 210 and may receive user input in a variety of ways. For example, the output device may be a liquid crystal display (Liquid Crystal Display, LCD), a light emitting diode (Light Emitting Diode, LED) display device, a Cathode Ray Tube (CRT) display device, or a projector (projector), or the like; the input device may be a mouse, a keyboard, a touch screen device, a sensing device, or the like.
In some embodiments, the memory 220 is used to store a computer program that executes aspects of the present application, and the processor 210 may execute the computer program stored in the memory 220. For example, the computer device 200 may invoke and execute a computer program stored in the memory 220 by the processor 210 to implement some or all of the steps of the covering agent addition method provided by embodiments of the present application.
It should be understood that the covering agent adding method provided in the present application may also be applied to a covering agent adding device, where the covering agent adding device may be implemented as part or all of a processor in a manner of software, hardware, or a combination of software and hardware, so as to be integrated in different computer devices.
Next, the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems will be specifically described by way of examples with reference to the accompanying drawings. Embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. It will be apparent that the described embodiments are some, but not all, of the embodiments of the present application.
Referring to fig. 3, the present application provides a covering agent adding method, described here as executed by the processor 210 of the computer device shown in fig. 2, where the processor 210 is connected to the image acquisition device and the mechanical arm, respectively. The method may comprise the following steps:
Step 310: and acquiring a target point cloud image of the covering agent placement area transmitted by the image acquisition equipment.
The point cloud points in the target point cloud image are used for representing the position information of the covering agent stored in the covering agent placement area. That is, the target point cloud image includes a plurality of point cloud points, and different point cloud points may represent position information of the same bag covering agent, or may represent position information of different bag covering agents.
In some embodiments, the original point cloud image acquired by the image acquisition device may be directly used as the target point cloud image, and the following step 320 is performed; the original point cloud image acquired by the image acquisition device may also be preprocessed, and the processed point cloud image is used as the target point cloud image, so as to execute step 320 described below.
In one possible implementation, the implementation procedure of step 310 may be: acquiring an original point cloud image of a covering agent placement area acquired by image acquisition equipment; performing downsampling treatment on the original point cloud image to obtain an intermediate point cloud image; and extracting a point cloud bounding box from the intermediate point cloud image to obtain a target point cloud image.
The downsampling may be achieved by a downsampling function or a filtering process. By downsampling the original point cloud image, redundant point cloud points and noise in the original point cloud image can be removed, which reduces the number of point cloud points, lowers the resource consumption of point cloud computing and increases the computing speed.
It should be noted that redundant information may also remain at the edges of the downsampled intermediate point cloud image, so extracting the point cloud bounding box further removes abnormal outlier point cloud points and redundant point cloud points.
As one example, a point cloud bounding box may be extracted from the intermediate point cloud image by a box function to obtain a target point cloud image.
Referring to fig. 4, fig. 4 (a) is an original point cloud image, which may be a point cloud image obtained from a top view of a covering agent placement area, in which a large amount of redundant information and noise exist. After the above-described downsampling process and bounding box extraction operation, the obtained target point cloud image is shown in fig. 4 (b), where the point cloud points in the target point cloud image may represent the position information of the covering agent placed in the covering agent placement area.
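For illustration only, the downsampling and bounding-box extraction described above can be sketched with a simple voxel-grid filter followed by an axis-aligned crop. The voxel size, the box limits and the function names below are assumptions made for this sketch, not parameters disclosed by the application.

import numpy as np

def downsample_voxel_grid(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Keep one representative point (the voxel centroid) per occupied voxel."""
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)   # accumulate the points falling into each voxel
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]      # centroid of every occupied voxel

def crop_bounding_box(points: np.ndarray, lo, hi) -> np.ndarray:
    """Drop points outside the axis-aligned box [lo, hi] (removes edge clutter and outliers)."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

# Usage with an original cloud of shape (N, 3); coordinates in metres are assumed:
# intermediate = downsample_voxel_grid(original_cloud, voxel_size=0.01)
# target_cloud = crop_bounding_box(intermediate, lo=[-1.0, -1.0, 0.0], hi=[1.0, 1.0, 1.0])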
It should be understood that the covering agent placement area is typically filled with multiple bags of covering agent, which are grabbed bag by bag when the covering agent is added. Therefore, after the target point cloud image of the covering agent placement area is acquired, the target point cloud image is subjected to region segmentation, and the position information of a single bag of covering agent in the covering agent placement area is then preliminarily determined according to the region segmentation result.
When the target point cloud image is subjected to region segmentation, clustering processing is performed on point cloud points in the target point cloud image in a clustering processing mode, so that the point cloud points in the target point cloud image are divided into at least one point cloud clustering region. Thus, after clustering is carried out on all the point cloud points in the target point cloud image, the region segmentation of the whole target point cloud image can be realized.
It should be noted that, when the region segmentation is performed on the target point cloud image, other segmentation strategies (a non-clustering manner) may be adopted to segment the target point cloud image so as to obtain at least one region segmentation result, which is not limited in the embodiment of the present application.
Step 320: and clustering point cloud points in the target point cloud image to obtain at least one point cloud clustering area.
Wherein, each point cloud clustering area characterizes a placing area of a bag of covering agent.
The curvature reflects how smoothly the region around a point cloud point varies: in a region with small curvature values the distribution of point cloud points changes gently, while in a region with large curvature values it changes steeply. In the target point cloud image of the covering agent placement area, the curvature values of the point cloud points change sharply at the contact positions between adjacent bags of covering agent, whereas the surface of a single bag is relatively smooth and the curvature values of the corresponding point cloud points change only slightly. The point cloud points in the target point cloud image can therefore be clustered based on their curvature values, so that the position region of a single bag of covering agent is separated from the target point cloud image as far as possible.
The clustering processing is to determine connected domains with the same characteristics based on point cloud points contained in the target point cloud image, so that a point cloud clustering area is obtained. In the embodiment of the application, the normal angle and the curvature value are used as the characteristics, and one or more point cloud clustering areas with smooth characteristic change in the target point cloud image are screened out.
It should be understood that for different point cloud clustering areas, the included normal angles and curvature values between the point cloud points change greatly.
In some embodiments, referring to fig. 5, the implementation of step 320 may include the sub-steps of:
step 321: determining a clustering starting point from non-clustered point cloud points in the target point cloud image; the clustering starting point is the point cloud point with the minimum curvature value in the non-clustered point cloud points.
When searching a first point cloud clustering area, non-clustered point cloud points in the target point cloud image are all point cloud points in the target point cloud image; and when searching the second point cloud clustering area, the non-clustered point cloud points in the target point cloud image are other point cloud points except the first point cloud clustering area in the target point cloud image, and the like until the clustering of all the point cloud points in the target point cloud image is completed, and at least one point cloud clustering area can be determined in the target point cloud image.
In one possible implementation, the curvature value of each point cloud point can be solved from the covariance matrix of its neighborhood point set by eigenvalue (singular value) decomposition, referring to the following formula (1):

σi = λ0 / (λ0 + λ1 + λ2)    (1)

where σi is the point cloud curvature value of the target point cloud point i to be solved, and λ0 ≤ λ1 ≤ λ2 are the eigenvalues of the covariance matrix of the neighborhood of the target point cloud point i. The smallest eigenvalue λ0 represents the change of the local surface along the normal vector, while λ1 and λ2 represent the distribution of the target point cloud point i in the tangential plane.

That is, the curvature value of each point cloud point is determined by using this surface variation as an approximation of the curvature information.
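As a non-limiting sketch, formula (1) can be evaluated as follows: for each point, the covariance matrix of its k nearest neighbours is built and the smallest eigenvalue is divided by the sum of the eigenvalues. Since the covariance matrix is symmetric positive semi-definite, this eigenvalue decomposition is equivalent to its singular value decomposition. The neighbourhood size k = 30 and the use of scipy's k-d tree are assumptions of this illustration, not details disclosed by the application.

import numpy as np
from scipy.spatial import cKDTree

def estimate_curvatures(points: np.ndarray, k: int = 30) -> np.ndarray:
    """Surface-variation curvature per point: lambda_0 / (lambda_0 + lambda_1 + lambda_2)."""
    k = min(k, len(points))
    tree = cKDTree(points)
    _, neighbor_idx = tree.query(points, k=k)        # indices of the k nearest neighbours
    curvatures = np.empty(len(points))
    for i, idx in enumerate(neighbor_idx):
        neighborhood = points[np.atleast_1d(idx)]
        cov = np.cov(neighborhood.T)                 # 3x3 covariance of the neighbourhood
        eigvals = np.linalg.eigvalsh(cov)            # ascending: lambda_0 <= lambda_1 <= lambda_2
        curvatures[i] = eigvals[0] / eigvals.sum()   # formula (1)
    return curvatures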
Step 323: and determining clustered point cloud points belonging to the same point cloud clustering area with the clustering starting point from the non-clustered point cloud points until all the point cloud points in the target point cloud image are clustered to the corresponding point cloud clustering areas, so as to obtain at least one point cloud clustering area.
Specifically, determining clustered point cloud points belonging to the same point cloud clustering area with the clustering starting point from non-clustered point cloud points, if other non-clustered point cloud points exist in the target point cloud image, determining a new clustering starting point again, and determining point cloud points belonging to the same point cloud clustering area with the new clustering starting point until all point cloud points in the target point cloud image are clustered to the corresponding point cloud clustering area, and ending the cycle to obtain at least one point cloud clustering area.
As an example, suppose the point cloud image includes point cloud points 1 to 100. According to the curvature values, point cloud point 5, which has the smallest curvature value, is determined as the clustering start point, and the clustered point cloud points belonging to the same point cloud clustering area as the clustering start point (i.e., point cloud point 5) are determined from the 100 non-clustered point cloud points, so as to obtain a first point cloud clustering area. The first point cloud clustering area comprises point cloud points 1, 2, 3, 4 and 7-25.
Then, according to the curvature values of the non-clustered point cloud points 26-100, point cloud point 50 with the smallest curvature value is taken as a new clustering start point, and the point cloud points belonging to the same point cloud clustering area as the new clustering start point (i.e., point cloud point 50) are determined from the non-clustered point cloud points 26-100, so as to obtain a second point cloud clustering area. The second point cloud clustering area includes point cloud points 45-68.
The same procedure is repeated: a new clustering start point is determined among the remaining non-clustered point cloud points (point cloud points 26-44 and point cloud points 69-100, or whichever point cloud points lie outside the first and second point cloud clustering areas), and the point cloud points belonging to the same clustering area as the new clustering start point are determined, until all point cloud points in the target point cloud image have been clustered.
If the number of the point cloud points with the minimum curvature value is a plurality of points, one point cloud point may be selected as a clustering start point, or the clustering start point may be determined based on other screening conditions, which is not limited in the embodiment of the present application.
In some embodiments, based on the clustering starting point, the implementation process of searching the clustering point cloud points in the non-clustered point cloud points can be as follows: according to normal included angles between non-clustered point cloud points and clustering starting points, candidate point cloud points are determined from the non-clustered point cloud points; determining cluster point cloud points from the candidate point cloud points according to the curvature value of the candidate point cloud points; if the candidate point cloud points have search point cloud points with curvature values smaller than the curvature threshold value, determining new candidate point cloud points based on the search point cloud points, and continuously acquiring cluster point cloud points through the new candidate point cloud points until the new candidate point cloud points do not have the search point cloud points.
The normal included angle between the candidate point cloud point and the clustering starting point is smaller than a preset included angle threshold, and the clustering point cloud point is a point cloud point with a curvature value not smaller than the preset curvature threshold in the candidate point cloud point.
It should be noted that the present application realizes clustering based on the normal angle, grouping the point cloud points whose normal angle is smaller than the angle threshold into one class; the search area is then expanded based on the curvature threshold, and the point cloud points whose curvature values are smaller than the curvature threshold are taken as new search points to continue searching for other clustering point cloud points.
It should be understood that, once determined, the candidate point cloud points belong to the same point cloud clustering area as the clustering start point. However, the candidate point cloud points determined at this stage may not be complete, and other point cloud points that can be clustered with the clustering start point may still exist in the target point cloud image. Therefore, in order to continue searching for point cloud points that can be clustered with the clustering start point, the point cloud points among the candidates whose curvature values are smaller than the curvature threshold are used as search point cloud points, and new candidate point cloud points are searched among the non-clustered point cloud points. The search ends when the curvature values of the candidate point cloud points are all greater than or equal to the curvature threshold, that is, when no search point cloud point exists among the candidate point cloud points; at that stage all point cloud points that can be clustered with the clustering start point in the target point cloud image have been obtained, and the point cloud clustering area to which the clustering start point belongs can be determined.
In one possible implementation, based on the clustering start point, the process of determining candidate point cloud points from non-clustered point cloud points in the target point cloud image may be: traversing non-clustered point cloud points in the target point cloud image according to preset clustering parameters, and determining a plurality of initial adjacent points of a clustering starting point from the non-clustered point cloud points; and determining candidate point cloud points from the plurality of adjacent points according to normal included angles between each adjacent point and the clustering starting point.
The clustering parameters include a neighbor point threshold, a maximum number of clustering points, a minimum number of clustering points and the like; they can be set manually according to actual requirements, and the embodiment of the present application does not limit them.
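For illustration only, such clustering parameters could be collected as shown below; the concrete values are assumptions made for this sketch and would in practice be tuned to the bag size and camera setup, not values disclosed by the application.

from dataclasses import dataclass

@dataclass
class ClusterParams:
    num_neighbors: int = 30            # neighbor points fetched per search point (assumed)
    angle_threshold_deg: float = 8.0   # maximum normal angle for joining a region (assumed)
    curvature_threshold: float = 0.02  # below this a point keeps expanding the search (assumed)
    min_cluster_points: int = 200      # regions smaller than this are discarded (assumed)
    max_cluster_points: int = 50000    # upper bound on the size of a single region (assumed)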
In another possible implementation manner, based on the clustering starting point, the process of determining candidate point cloud points from non-clustered point cloud points in the target point cloud image may be: constructing a spatial distribution structure tree corresponding to the target point cloud image; based on a spatial distribution structure tree, acquiring a plurality of adjacent points of a clustering starting point from non-clustered point cloud points by adopting a mean value clustering method; and determining candidate point cloud points from the plurality of adjacent points according to normal included angles between each adjacent point and the clustering starting point.
The spatial distribution structure tree comprises a plurality of spatial division areas, and each spatial division area comprises at least one point cloud point.
As an example, the spatial distribution structure tree may be a binary tree, a k-dimensional tree (kd-Tree for short), a quadtree, etc., which is not limited by the embodiments of the present application.
Therefore, based on the spatial distribution structure tree, a plurality of adjacent points of the clustering starting point can be quickly and effectively searched, all non-clustered point cloud points do not need to be traversed, the adjacent point searching efficiency is improved, and the data processing capacity is reduced.
It should be noted that, for a clustering start point, a plurality of determined neighboring points may or may not belong to the same point cloud clustering area as the clustering start point. Therefore, the initial neighboring points need to be screened to determine candidate point cloud points belonging to the same point cloud clustering area as the clustering start point.
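A minimal sketch of this screening step is given below: the neighbors of the current clustering start point are fetched from a k-d tree and kept only when the angle between their normal and the start point's normal is below the threshold. The point normals are assumed to have been estimated beforehand (for example from the same neighborhood covariance used for the curvature); the function name, the use of scipy's cKDTree and the default parameter values are assumptions of this sketch.

import numpy as np
from scipy.spatial import cKDTree

def candidate_points(points, normals, seed, clustered, tree=None,
                     k=30, angle_threshold_deg=8.0):
    """Neighbours of the seed whose normal angle to the seed is below the threshold."""
    if tree is None:
        tree = cKDTree(points)           # in practice the tree is built once and reused
    kq = min(k + 1, len(points))
    _, idx = tree.query(points[seed], k=kq)
    cos_threshold = np.cos(np.deg2rad(angle_threshold_deg))
    candidates = []
    for j in np.atleast_1d(idx):
        if j == seed or clustered[j]:
            continue
        # |cos| of the angle between the two unit normals; normal orientation is ignored.
        if abs(np.dot(normals[j], normals[seed])) > cos_threshold:
            candidates.append(int(j))
    return candidates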
As an example, for a clustering start point S, suppose 5 neighboring points are found through the spatial distribution structure tree: point cloud point a, point cloud point b, point cloud point c, point cloud point d and point cloud point e, the preset normal angle threshold is θth, and the curvature threshold is k.
First, the normal angles between the clustering start point S and each neighboring point are calculated: θa for point cloud point a, θb for point cloud point b, θc for point cloud point c, θd for point cloud point d and θe for point cloud point e.
Assuming that θa, θd and θe are smaller than the angle threshold θth, point cloud point a, point cloud point d and point cloud point e are determined as candidate point cloud points.
Further, assuming that the curvature value ka of point cloud point a is larger than the curvature threshold k, while the curvature value kd of point cloud point d and the curvature value ke of point cloud point e are smaller than the curvature threshold k, point cloud point a is determined as a clustering point cloud point of the clustering start point S, and one of point cloud point d and point cloud point e is selected as a search point cloud point.
Among the candidate point cloud points whose curvature values are smaller than the curvature threshold, the one with the smallest curvature value is determined as the search point cloud point; for example, point cloud point e is determined as the search point cloud point.
Then, based on the search point cloud point e, a plurality of neighboring points of the search point cloud point e are determined among the non-clustered point cloud points through the spatial distribution structure tree, so as to obtain further candidate point cloud points and clustering point cloud points that can be clustered with the clustering start point S.
Therefore, based on the clustering starting point, the operation can be repeatedly executed through the spatial distribution structure tree, so that all clustering point cloud points which can be clustered with the clustering starting point in the target point cloud image are determined.
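Putting the above steps together, the region-growing loop can be sketched as follows. It reuses the candidate_points helper sketched earlier, grows one region at a time from the non-clustered point of minimum curvature, lets every candidate join the current region, and keeps expanding from candidates whose curvature is below the threshold. This is a simplified variant of the procedure described in the text (the exact handling of search points and the minimum/maximum cluster sizes are omitted), and the threshold values are assumptions for illustration.

import numpy as np
from scipy.spatial import cKDTree

def region_growing(points, normals, curvatures,
                   k=30, angle_threshold_deg=8.0, curvature_threshold=0.02):
    n = len(points)
    tree = cKDTree(points)                       # built once for all neighbour queries
    labels = np.full(n, -1)                      # -1 means "not yet clustered"
    clustered = np.zeros(n, dtype=bool)
    region_id = 0
    while not clustered.all():
        # Clustering start point: the non-clustered point with minimum curvature.
        unclustered = np.flatnonzero(~clustered)
        seed = int(unclustered[np.argmin(curvatures[unclustered])])
        labels[seed] = region_id
        clustered[seed] = True
        search_queue = [seed]
        while search_queue:
            current = search_queue.pop()
            for j in candidate_points(points, normals, current, clustered,
                                      tree=tree, k=k,
                                      angle_threshold_deg=angle_threshold_deg):
                labels[j] = region_id            # the candidate joins the current region
                clustered[j] = True
                if curvatures[j] < curvature_threshold:
                    search_queue.append(j)       # smooth point: keep expanding from it
        region_id += 1
    return labels                                # one region label per point cloud point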
In addition, the number of point cloud clustering areas included in the target point cloud image after the clustering processing is smaller than or equal to the number of bags of the covering agent stored in the covering agent placement area. That is, the present application can maximally divide the placement area of the single bag of covering agent from the entire target point cloud image of the placement area of the covering agent by the clustering process of step 320.
As an example, after the point cloud clustering process, the target point cloud image shown in fig. 4 (i.e., (b) in fig. 4) is divided into the 8 point cloud clustering areas shown in fig. 6, where each point cloud clustering area corresponds to the placement area of one bag of covering agent.
Based on the clustered point cloud clustering region, the placement region of the target covering agent to be grabbed can be further determined.
Step 330: and determining the grabbing position information of the target covering agent according to the point cloud clustering area.
If there is only one point cloud clustering area, it indicates that only one bag of covering agent may be stored in the covering agent placement area; the covering agent represented by that point cloud clustering area is directly determined as the target covering agent, and the grabbing position information is determined according to the centroid coordinates of the point cloud clustering area.
If there are multiple point cloud clustering areas, as shown in fig. 6, it indicates that multiple bags of covering agent may be stored in the covering agent placement area. Before the mechanical arm is instructed to perform the grabbing operation, one point cloud clustering area needs to be selected from the multiple point cloud clustering areas, the covering agent stored in that area is determined as the target covering agent, and the grabbing position information of the target covering agent is then determined.
It should be noted that, when the bags of covering agent are densely stacked and overlap one another, the clustering result may not completely separate the boundaries of the placement areas of individual bags. To satisfy the grabbing requirement, the optimal grabbing point therefore needs to be determined within the clustered point cloud clustering areas according to a preset screening rule.
In one possible implementation, the implementation process of step 330 may be: calculating the centroid of each point cloud clustering area to obtain a plurality of pieces of centroid coordinate information; and determining the grabbing position information of the target covering agent from the plurality of pieces of centroid coordinate information based on a preset centroid screening rule.
The process of determining the grabbing position information of the target covering agent may be: determining at least one candidate point cloud clustering area according to the space geometric information of the point cloud clustering area, the centroid coordinate information of the point cloud clustering area and a preset limit threshold value; and determining the grabbing position information of the target covering agent according to the centroid coordinate information of the candidate point cloud clustering area and centroid screening rules.
As one example, candidate point cloud cluster regions may be screened by any of the following region screening rules:
(1) Whether the z-axis height difference of each point cloud clustering area is within a preset height threshold limit or not;
(2) Whether the width difference of each point cloud clustering area is within a preset width threshold limit;
(3) Whether the length difference of each point cloud clustering area is within a preset length threshold limit;
(4) Whether the centroid coordinate value (x, y, z) of the cloud clustering area of each point is within a preset centroid threshold value range or not.
For each point cloud clustering area, the coordinates are taken in the world coordinate system: the z-axis is the height direction, the x-axis is the front-back direction and the y-axis is the left-right direction, both lying in the plane perpendicular to the z-axis.
As one example, centroid screening rules may include any of the following:
(1) Sorting the centroid coordinate information of the candidate point cloud clustering areas by z value, and determining the centroid with the smallest z value as the final grabbing point;
(2) Taking, among the candidate point cloud clustering areas whose number of point cloud points is larger than the point cloud number threshold, the centroid of the candidate point cloud clustering area with the smallest number of point cloud points as the final grabbing point.
Further, the grabbing position information is determined according to the coordinate information of the final grabbing point.
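A possible sketch of this centroid computation and screening is shown below. It keeps the regions whose spatial extent and centroid fall within preset limits and then applies centroid screening rule (1), picking the candidate centroid with the smallest z value. All threshold values, and the convention that the region labels come from the clustering sketch above, are assumptions for illustration.

import numpy as np

def select_grab_position(points, labels,
                         max_extent=(0.6, 0.6, 0.4),      # allowed x/y/z size of a region (assumed)
                         centroid_lo=(-1.0, -1.0, 0.0),   # assumed centroid range
                         centroid_hi=(1.0, 1.0, 1.5)):
    max_extent = np.asarray(max_extent)
    centroid_lo = np.asarray(centroid_lo)
    centroid_hi = np.asarray(centroid_hi)
    candidates = []
    for region_id in np.unique(labels):
        if region_id < 0:
            continue                                      # skip any unlabeled points
        region = points[labels == region_id]
        centroid = region.mean(axis=0)
        extent = region.max(axis=0) - region.min(axis=0)  # length / width / height of the region
        within_size = np.all(extent <= max_extent)
        within_range = np.all((centroid >= centroid_lo) & (centroid <= centroid_hi))
        if within_size and within_range:
            candidates.append(centroid)
    if not candidates:
        return None                                       # nothing passed the limits this cycle
    candidates = np.array(candidates)
    # Centroid screening rule (1): the candidate centroid with the smallest z value wins.
    return candidates[np.argmin(candidates[:, 2])]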
Step 340: the control mechanical arm grabs the target covering agent from the covering agent placement area according to the grabbing position information, and adds the target covering agent into the tundish.
That is, after the processor has carried out the processing of steps 310 to 330, the obtained grabbing position information is transmitted to the mechanical arm arranged around the covering agent placement area, instructing the mechanical arm to grab the target covering agent from the covering agent placement area according to the grabbing position information. The target covering agent is then transported to the tundish, and the mechanical arm is controlled to add it into the tundish.
As described above, the mechanical arm for gripping the target covering agent and adding the target covering agent to the tundish may be the same mechanical arm or may be different mechanical arms, which is not limited in the embodiment of the present application.
In the embodiment of the application, the covering agent adding operation is not required to be manually executed, but the image acquisition equipment is used for acquiring the target point cloud image of the covering agent placing area, and then clustering processing is carried out on the point cloud points in the target point cloud image so as to segment the target point cloud image, so that the placing area of the single-bag covering agent is determined. Further, after the placement area of each bag of covering agent is obtained, the target covering agent to be grasped and grasping position information can be determined from the placement area, so that the mechanical arm is controlled to grasp the target covering agent, and the target covering agent is added into the tundish. Thus, the addition efficiency of the covering agent is improved while the safe addition of the covering agent is realized.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in these flowcharts may comprise several sub-steps or stages, which are not necessarily executed at the same moment but may be executed at different times, and whose order of execution is not necessarily sequential; they may be executed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
Based on the above covering agent adding method, the same technical concept is adopted, and the embodiment of the application also provides a covering agent adding device corresponding to the covering agent adding method, which can be integrated in a processor and is respectively connected with the image acquisition equipment and the mechanical arm, and the implementation scheme of the covering agent adding device for solving the problems is similar to that recorded in the embodiment of the method.
As shown in fig. 7, the covering agent adding apparatus 700 includes an image acquisition module 710, a point cloud clustering module 720, a grabbing point positioning module 730, and an adding control module 740, wherein:
an image acquisition module 710, configured to acquire a target point cloud image of the covering agent placement area transmitted by the image acquisition device; the point cloud points in the target point cloud image are used for representing the position information of the covering agent stored in the covering agent placement area;
the point cloud clustering module 720 is configured to perform clustering processing on point cloud points in the target point cloud image to obtain at least one point cloud clustering area; each point cloud clustering area represents a placement area of a bag of covering agent;
the grabbing point positioning module 730 is configured to determine grabbing position information of the target covering agent according to the point cloud clustering area;
The adding control module 740 is configured to control the mechanical arm to grab the target covering agent from the covering agent placement area according to the grabbing position information, and add the target covering agent to the tundish.
In some of these embodiments, the point cloud clustering module 720 includes:
the starting point determining unit is used for determining a clustering starting point from non-clustered point cloud points in the target point cloud image; the clustering starting point is a point cloud point with the minimum curvature value in non-clustered point cloud points;
and the clustering unit is used for determining clustered point cloud points belonging to the same point cloud clustering area with the clustering starting point from the non-clustered point cloud points until all the point cloud points in the target point cloud image are clustered to the corresponding point cloud clustering areas, so as to obtain at least one point cloud clustering area.
In some of these embodiments, the clustering unit comprises:
the first determining subunit is used for determining candidate point cloud points from the non-clustered point cloud points according to normal included angles between the non-clustered point cloud points and the clustering starting points; the normal included angle between the candidate point cloud point and the clustering starting point is smaller than a preset included angle threshold;
the second determining subunit is used for determining cluster point cloud points from the candidate point cloud points according to the curvature value of the candidate point cloud points; the clustering point cloud points are point cloud points with curvature values not smaller than a preset curvature threshold value in the candidate point cloud points;
And the searching subunit is used for determining new candidate point cloud points based on the search point cloud points if the search point cloud points with curvature values smaller than the curvature threshold value exist in the candidate point cloud points, and continuously acquiring cluster point cloud points through the new candidate point cloud points until the search point cloud points do not exist in the new candidate point cloud points.
In some of these embodiments, the first determining subunit is specifically configured to:
constructing a spatial distribution structure tree corresponding to the target point cloud image; the spatial distribution structure tree comprises a plurality of spatial division areas, wherein each spatial division area comprises at least one point cloud point;
based on a spatial distribution structure tree, acquiring a plurality of adjacent points of a clustering starting point from non-clustered point cloud points by adopting a mean value clustering method;
and determining candidate point cloud points from the plurality of adjacent points according to normal included angles between each adjacent point and the clustering starting point.
In some embodiments, if the number of the point cloud clustering areas is multiple, the grabbing point positioning module 730 includes:
the centroid calculating unit is used for calculating the centroid of each point cloud clustering area to obtain a plurality of pieces of centroid coordinate information;
and the grabbing point screening unit is used for determining grabbing position information of the target covering agent from the plurality of centroid coordinate information based on a preset centroid screening rule.
In one possible implementation manner, the grabbing point screening unit is specifically configured to:
determining at least one candidate point cloud clustering area according to the space geometric information of the point cloud clustering area, the centroid coordinate information of the point cloud clustering area and a preset limit threshold value;
and determining the grabbing position information of the target covering agent according to the centroid coordinate information of the candidate point cloud clustering area and centroid screening rules.
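As a small illustrative sketch of the centroid calculation and centroid screening described above (the limit threshold on the cluster extent and the rule of grabbing the bag with the highest centroid are assumed examples of a limit threshold and a centroid screening rule, not rules disclosed in this application):

```cpp
// Illustrative sketch: centroid per point cloud clustering area plus an assumed
// screening rule (keep clusters whose extent fits one bag, grab the topmost one).
#include <limits>
#include <vector>
#include <Eigen/Core>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/centroid.h>
#include <pcl/common/common.h>   // pcl::getMinMax3D

// Returns the index of the selected cluster (-1 if none passes the limit threshold)
// and writes the grabbing position (the cluster centroid) into `grasp_point`.
int selectGraspCluster(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
    const std::vector<pcl::PointIndices>& clusters,
    float max_extent_m,                  // assumed limit threshold on cluster size
    Eigen::Vector4f& grasp_point)
{
  int best = -1;
  float best_height = -std::numeric_limits<float>::max();

  for (std::size_t i = 0; i < clusters.size(); ++i) {
    // Spatial geometric information of the cluster: its axis-aligned extent.
    Eigen::Vector4f min_pt, max_pt;
    pcl::getMinMax3D(*cloud, clusters[i], min_pt, max_pt);
    if (max_pt[0] - min_pt[0] > max_extent_m ||
        max_pt[1] - min_pt[1] > max_extent_m)
      continue;                          // larger than a single bag: not a candidate area

    // Centroid coordinate information of the candidate cluster.
    Eigen::Vector4f centroid;
    pcl::compute3DCentroid(*cloud, clusters[i], centroid);

    // Assumed centroid screening rule: take the bag whose centroid sits highest.
    if (centroid[2] > best_height) {
      best_height = centroid[2];
      grasp_point = centroid;
      best = static_cast<int>(i);
    }
  }
  return best;
}
```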
In some of these embodiments, the image acquisition module 710 includes:
the acquisition unit is used for acquiring the original point cloud image of the covering agent placement area acquired by the image acquisition equipment;
the downsampling unit is used for downsampling the original point cloud image to obtain an intermediate point cloud image;
and the bounding box processing unit is used for extracting the point cloud bounding box from the intermediate point cloud image to obtain a target point cloud image.
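By way of example only, the downsampling and point cloud bounding box extraction performed by the image acquisition module 710 could be sketched with PCL as follows; the voxel leaf size and the bounding box limits are placeholder assumptions:

```cpp
// Illustrative preprocessing sketch: voxel-grid downsampling followed by cropping
// to a bounding box around the covering agent placement area. Leaf size and box
// limits are placeholder assumptions.
#include <Eigen/Core>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/crop_box.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr preprocessPointCloud(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& original)
{
  // Downsampling: replace the points inside each voxel by their centroid,
  // producing the intermediate point cloud image with far fewer points.
  pcl::PointCloud<pcl::PointXYZ>::Ptr intermediate(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setInputCloud(original);
  voxel.setLeafSize(0.01f, 0.01f, 0.01f);                 // assumed 1 cm voxels
  voxel.filter(*intermediate);

  // Bounding box extraction: keep only the points that fall inside the region
  // of interest corresponding to the covering agent placement area.
  pcl::PointCloud<pcl::PointXYZ>::Ptr target(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::CropBox<pcl::PointXYZ> crop;
  crop.setInputCloud(intermediate);
  crop.setMin(Eigen::Vector4f(-1.0f, -1.0f, 0.0f, 1.0f)); // assumed box limits (metres)
  crop.setMax(Eigen::Vector4f(1.0f, 1.0f, 1.5f, 1.0f));
  crop.filter(*target);

  return target;                                          // the target point cloud image
}
```

Downsampling first keeps the later clustering tractable, while the crop box removes background points that do not belong to the covering agent placement area.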
It should be noted that, for the specific limitations of the covering agent adding device, reference may be made to the limitations of the covering agent adding method described above, which are not repeated here.
Each module in the above covering agent adding device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in or independent of a processor in the computer device in the form of hardware, or may be stored in a memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
In one exemplary embodiment, the present application also provides a computer-readable storage medium. The computer-readable storage medium stores a computer program which, when called and executed by a processor, can implement some or all of the steps of the covering agent adding method provided in the present application.
As one example, the computer readable storage medium may be a magnetic disk, optical disk, read-only memory, random-access memory, or the like.
It should be understood that the technical solutions in the embodiments of the present application may be implemented by means of software plus a necessary general hardware platform. Accordingly, the essence of the technical solutions of the embodiments of the present application, or the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium.
In one exemplary embodiment, the present application also provides a computer program product. The computer program product comprises a computer program which, when called and executed by a processor, can implement some or all of the steps of the covering agent adding method provided in the present application.
The above examples merely represent several embodiments of the present application, which are described in greater detail but are not to be construed as limiting the scope of the invention. It should be noted that any modifications, equivalent substitutions, improvements and the like made by those skilled in the art without departing from the spirit of the present application shall fall within the protection scope of the embodiments of the present application.

Claims (10)

1. A covering agent adding method, characterized by being applied to a processor connected to image acquisition equipment and a mechanical arm respectively, the method comprising:
acquiring a target point cloud image of a covering agent placement area transmitted by the image acquisition equipment; the point cloud points in the target point cloud image are used for representing the position information of the covering agent stored in the covering agent placement area;
clustering point cloud points in the target point cloud image to obtain at least one point cloud clustering area; each point cloud clustering area represents a placement area of a bag of covering agent;
according to the point cloud clustering area, determining grabbing position information of a target covering agent;
and controlling the mechanical arm to grab the target covering agent from the covering agent placement area according to the grabbing position information, and adding the target covering agent into a tundish.
2. The method according to claim 1, wherein the clustering the point cloud points in the target point cloud image to obtain at least one point cloud clustering area includes:
determining a clustering starting point from non-clustered point cloud points in the target point cloud image; the clustering starting point is a point cloud point with the minimum curvature value in the non-clustered point cloud points;
and determining, from the non-clustered point cloud points, clustered point cloud points belonging to the same point cloud clustering area as the clustering starting point, until all the point cloud points in the target point cloud image are clustered to the corresponding point cloud clustering areas, so as to obtain at least one point cloud clustering area.
3. The method according to claim 2, wherein the determining, from the non-clustered point cloud points, clustered point cloud points belonging to the same point cloud clustering area as the clustering starting point includes:
determining candidate point cloud points from the non-clustered point cloud points according to normal included angles between the non-clustered point cloud points and the clustering starting point; the normal included angle between a candidate point cloud point and the clustering starting point is smaller than a preset included angle threshold;
determining the clustered point cloud points from the candidate point cloud points according to curvature values of the candidate point cloud points; the clustered point cloud points are the candidate point cloud points whose curvature values are not smaller than a preset curvature threshold;
and if there are search point cloud points with curvature values smaller than the curvature threshold among the candidate point cloud points, determining new candidate point cloud points based on the search point cloud points, and continuing to acquire the clustered point cloud points through the new candidate point cloud points until no search point cloud points exist among the new candidate point cloud points.
4. The method according to claim 3, wherein the determining candidate point cloud points from the non-clustered point cloud points according to normal included angles between the non-clustered point cloud points and the clustering starting point comprises:
constructing a spatial distribution structure tree corresponding to the target point cloud image; the spatial distribution structure tree comprises a plurality of spatial division areas, and each spatial division area comprises at least one point cloud point;
based on the spatial distribution structure tree, acquiring a plurality of adjacent points of the clustering starting point from the non-clustered point cloud points by adopting a mean value clustering method;
and determining candidate point cloud points from the plurality of adjacent points according to normal included angles between the adjacent points and the clustering starting point.
5. The method according to any one of claims 1 to 4, wherein, if the number of the point cloud clustering areas is plural, the determining the grabbing position information of the target covering agent according to the point cloud clustering areas includes:
calculating the centroids of the point cloud clustering areas to obtain a plurality of centroid coordinate information;
and determining grabbing position information of the target covering agent from the plurality of centroid coordinate information based on a preset centroid screening rule.
6. The method according to claim 5, wherein the determining grabbing position information of the target covering agent from the plurality of centroid coordinate information based on a preset centroid screening rule comprises:
determining at least one candidate point cloud clustering area according to the space geometric information of each point cloud clustering area, the centroid coordinate information of each point cloud clustering area and a preset limit threshold value;
and determining the grabbing position information of the target covering agent according to the centroid coordinate information of the candidate point cloud clustering area and the centroid screening rule.
7. The method according to any one of claims 1 to 4, wherein the acquiring the target point cloud image of the covering agent placement area transmitted by the image acquisition device includes:
acquiring an original point cloud image of the covering agent placement area acquired by the image acquisition equipment;
performing downsampling processing on the original point cloud image to obtain an intermediate point cloud image;
and extracting a point cloud bounding box from the intermediate point cloud image to obtain the target point cloud image.
8. A covering agent adding device, characterized by being integrated in a processor, the processor being connected to image acquisition equipment and a mechanical arm respectively, the device comprising:
The image acquisition module is used for acquiring a target point cloud image of a covering agent placement area transmitted by the image acquisition equipment; the point cloud points in the target point cloud image are used for representing the position information of the covering agent stored in the covering agent placement area;
the point cloud clustering module is used for carrying out clustering processing on the point cloud points in the target point cloud image to obtain at least one point cloud clustering area; each point cloud clustering area represents a placement area of a bag of covering agent;
the grabbing point positioning module is used for determining grabbing position information of the target covering agent according to the point cloud clustering area;
and the adding control module is used for controlling the mechanical arm to grab the target covering agent from the covering agent placement area according to the grabbing position information and adding the target covering agent into a tundish.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when calling and executing the computer program from the memory, implements the steps of the method of any of the preceding claims 1 to 7.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of the preceding claims 1 to 7.
CN202310273351.9A 2023-03-21 2023-03-21 Covering agent adding method, covering agent adding device, computer equipment and storage medium Pending CN115995013A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310273351.9A CN115995013A (en) 2023-03-21 2023-03-21 Covering agent adding method, covering agent adding device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310273351.9A CN115995013A (en) 2023-03-21 2023-03-21 Covering agent adding method, covering agent adding device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115995013A 2023-04-21 (en)

Family

ID=85992260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310273351.9A Pending CN115995013A (en) 2023-03-21 2023-03-21 Covering agent adding method, covering agent adding device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115995013A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107748890A (en) * 2017-09-11 2018-03-02 汕头大学 A kind of visual grasping method, apparatus and its readable storage medium storing program for executing based on depth image
CN109559341A (en) * 2017-09-27 2019-04-02 北京猎户星空科技有限公司 A kind of generation method and device of mechanical arm fetching
CN109409437A (en) * 2018-11-06 2019-03-01 安徽农业大学 A kind of point cloud segmentation method, apparatus, computer readable storage medium and terminal
CN210730915U (en) * 2019-07-22 2020-06-12 广东韶钢松山股份有限公司 Automatic covering agent adding device
WO2022116677A1 (en) * 2020-12-02 2022-06-09 达闼机器人股份有限公司 Target object grasping method and apparatus, storage medium, and electronic device
CN115330819A (en) * 2022-10-12 2022-11-11 杭州蓝芯科技有限公司 Soft package segmentation positioning method, industrial personal computer and robot grabbing system
CN115781673A (en) * 2022-11-18 2023-03-14 节卡机器人股份有限公司 Part grabbing method, device, equipment and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
朱德海: "《点云库PCL学习教程》", 北京航空航天大学出版社, pages: 104 - 105 *
郭保青 等: "铁路场景三维点云分割与分类识别算法", 《仪器仪表学报》, vol. 38, no. 9, pages 2103 - 2111 *

Similar Documents

Publication Publication Date Title
FI127606B (en) Method, load handling device, computer program and computer program product for positioning gripping means
JP3768174B2 (en) Work take-out device
CN110253570A (en) The industrial machinery arm man-machine safety system of view-based access control model
EP3960692A1 (en) Intelligent forklift and container position and posture deviation detection method
CN108858199A (en) The method of the service robot grasp target object of view-based access control model
CN110653820B (en) Robot grabbing pose estimation method combined with geometric constraint
CN110298886B (en) Dexterous hand grabbing planning method based on four-stage convolutional neural network
CN202924613U (en) Automatic control system for efficient loading and unloading work of container crane
JP2010207989A (en) Holding system of object and method of detecting interference in the same system
EP3275831B1 (en) Modified video stream for supporting remote control of a container crane
CN114029951B (en) Robot autonomous recognition intelligent grabbing method based on depth camera
CN109493313A (en) A kind of the coil of strip localization method and equipment of view-based access control model
CN109657518B (en) Container laser scanning identification method and device, electronic equipment and readable medium
WO2019216474A1 (en) Bin modeling method for bin picking, and apparatus therefor
CN111331607A (en) Automatic grabbing and stacking method and system based on mechanical arm
CN113420746A (en) Robot visual sorting method and device, electronic equipment and storage medium
CN115995013A (en) Covering agent adding method, covering agent adding device, computer equipment and storage medium
CN114170442A (en) Method and device for determining space grabbing points of robot
CN110533717B (en) Target grabbing method and device based on binocular vision
CN117565825A (en) Inclined battery grabbing method and device
CN101941648A (en) Method for processing conflict among multiple cranes in logistics simulation system in steelmaking continuous casting workshop
JP2024015358A (en) Systems and methods for robotic system with object handling
CN113910235A (en) Collision detection method, device and equipment for robot to grab materials and storage medium
JP3502525B2 (en) Camera parameter estimation method
CN115741690A (en) Material bag grabbing method and system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230421