CN117690121A - 3D vision-based feeding method and device, computer equipment and storage medium - Google Patents


Publication number
CN117690121A
CN117690121A
Authority
CN
China
Prior art keywords
paper
layer
identification
stack
paper stack
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311738089.7A
Other languages
Chinese (zh)
Inventor
颜嘉雯
张岱
朱轩亚
杜鹏程
史永明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Jinchen Intelligent Technology Co ltd
Original Assignee
Suzhou Jinchen Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Jinchen Intelligent Technology Co ltd filed Critical Suzhou Jinchen Intelligent Technology Co ltd
Priority to CN202311738089.7A
Publication of CN117690121A

Abstract

The invention relates to the technical field of feeding, and in particular to a 3D vision-based feeding method and device, computer equipment and a storage medium. The method comprises the following steps: acquiring a recognition layer point cloud and a recognition layer image of a paper stack, and generating a recognition layer recognition graph based on the recognition layer point cloud and the recognition layer image; processing the recognition layer recognition graph to obtain a recognition layer paper stack mask group; processing the recognition layer recognition graph to obtain recognition layer paper stack position information, and generating a label group based on the paper stack, the recognition layer point cloud, the recognition layer paper stack mask group and the recognition layer paper stack position information; and feeding the trademark paper stacks in the paper stack based on the recognition layer paper stack position information and the label group. The invention improves the feeding efficiency of trademark paper and the manufacturing efficiency of cigarette cases.

Description

3D vision-based feeding method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of feeding, in particular to a feeding method, a device, computer equipment and a storage medium based on 3D vision.
Background
A cigarette case for packaging cigarettes is made from trademark paper: a cigarette case making machine processes a single sheet of trademark paper into one cigarette case.
A sheet of trademark paper has a head end and a tail end of different shapes. Trademark paper is usually stacked to form a trademark paper stack. When making cigarette cases, the trademark paper stack is placed beside the cigarette case making machine, and a worker moves the sheets from the stack into the machine one by one, realizing manual feeding of the trademark paper; the machine then processes the fed sheets into cigarette cases in sequence.
During manual feeding, the trademark paper must be fed without interruption, with its front face up, its head end facing the feed port of the cigarette case making machine, and its length direction along a preset horizontal feeding direction. Manual feeding therefore places high demands on the worker, errors occur frequently, and the feeding efficiency of the trademark paper and the manufacturing efficiency of cigarette cases are consequently low.
Disclosure of Invention
The embodiment of the invention provides a 3D vision-based feeding method and device, computer equipment and a storage medium. In a first aspect, the 3D vision-based feeding method comprises the following steps:
acquiring a recognition layer point cloud and a recognition layer image of the paper stack, and generating a recognition layer recognition graph based on the recognition layer point cloud and the recognition layer image;
processing the recognition layer recognition graph to obtain a recognition layer paper stack mask group;
processing the recognition layer recognition graph to obtain recognition layer paper stack position information, and generating a label group based on the paper stack, the recognition layer point cloud, the recognition layer paper stack mask group and the recognition layer paper stack position information;
and feeding the trademark paper stacks in the paper stack based on the identification layer paper stack position information and the label group.
In a second aspect, a feeding device based on 3D vision provided by an embodiment of the present invention includes:
the recognition graph generation module is used for acquiring a recognition layer point cloud and a recognition layer image of the paper stack, and generating a recognition layer recognition graph based on the recognition layer point cloud and the recognition layer image;
the mask set acquisition module is used for processing the identification layer identification graph to obtain an identification layer paper stack mask set;
the label group generating module is used for processing the identification layer identification graph to obtain identification layer paper stack position information and generating a label group based on paper stacks, identification layer point clouds, identification layer paper stack mask groups and the identification layer paper stack position information;
and the feeding control module is used for feeding the paper stacks in the paper stacks based on the identification layer paper stack position information and the label group.
In a third aspect, the present application provides a computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when the processor executes the computer program.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method described above.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method described above.
According to the 3D vision-based feeding method and device, computer equipment, computer readable storage medium and computer program product, the acquired identification layer point cloud and identification layer image are processed to obtain the identification layer identification graph; the identification layer paper stack mask group and the identification layer paper stack position information are then obtained from the identification graph; the label group is generated by processing the paper stack, the identification layer point cloud, the identification layer paper stack mask group and the identification layer paper stack position information; and the feeding process of the paper stack is finally controlled automatically according to the label group. The label paper therefore no longer needs to be fed manually and uninterruptedly, and the position, orientation and posture of each sheet no longer need to be determined by hand: they are determined automatically, and automatic feeding control of the label paper is realized through the label group, improving the feeding efficiency of the label paper and the manufacturing efficiency of cigarette cases.
Drawings
Fig. 1 is an application environment diagram of a feeding method based on 3D vision provided by an embodiment of the present invention;
fig. 2 is a flowchart of a feeding method based on 3D vision according to an embodiment of the present invention;
FIG. 3 is a schematic view of a trademark stack according to an embodiment of the present invention;
fig. 4 is a structural block diagram of a feeding device based on 3D vision according to an embodiment of the present invention;
FIG. 5 is an internal block diagram of a computer device according to an embodiment of the present application;
fig. 6 is an internal structural diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
The feeding method based on 3D vision provided by the embodiment of the application can be applied to the application environment shown in fig. 1, wherein the terminal 102 communicates with the server 104 via a communication network. The data storage system may store data that the server 104 needs to process, and may be integrated on the server 104 or located on a cloud or other network server. The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, Internet of Things device or portable wearable device; the Internet of Things device may be a smart speaker, smart television, smart air conditioner, smart vehicle device, etc., and the portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
As shown in fig. 2, an embodiment of the present application provides a 3D vision-based feeding method, described here as applied to the terminal 102 or the server 104 in fig. 1 by way of illustration. It is understood that the computer device may include at least one of a terminal and a server. The method comprises the following steps:
s100, acquiring a recognition layer point cloud and a recognition layer image of the paper stack, and generating a recognition layer recognition graph based on the recognition layer point cloud and the recognition layer image.
The paper stack is formed by stacking multiple layers of trademark paper stacks, each layer containing a certain number of trademark paper stacks. Referring to fig. 3, each trademark paper stack comprises a plurality of stacked trademark sheets with kraft paper wound around their periphery; the kraft paper wraps the stacked sheets while leaving both ends exposed. The paper stack is arranged on a tray in advance and moved to a preset position beside the cigarette case making machine by moving the tray. The cigarette case making machine comprises a robot for grabbing trademark paper and a trademark paper processing device for processing the trademark paper into cigarette cases; a 3D camera mounted on the trademark paper processing device 3 m-3.5 m above the ground is electrically connected to a preset computer through a preset network cable, with its shooting direction and structured light emission direction pointing downward.
In implementation, the paper stack is moved directly below the 3D camera by moving the tray. The 3D camera then emits structured light onto the top layer of the paper stack to obtain a point cloud of that layer, consisting of feature points identified on the top layer; this point cloud is recorded as the recognition layer point cloud. At the same time, the 3D camera captures an image of the top layer, recorded as the recognition layer image. It should be noted that the current top layer of the paper stack is the recognition layer of the 3D camera. The 3D camera sends the acquired recognition layer point cloud and recognition layer image to the computer, which processes them to generate a recognition layer recognition graph corresponding to the top layer of the paper stack.
S200, processing the recognition layer recognition graph to obtain the recognition layer paper stack mask set.
The computer performs image processing on the generated recognition layer recognition graph to obtain a paper stack mask for each trademark paper stack on the current top layer of the paper stack; a paper stack mask consists of the contour edge information and the information inside the contour of the top wall of the corresponding trademark paper stack. The set of paper stack masks of all trademark paper stacks on the current top layer is recorded as the recognition layer paper stack mask set.
In practice, the computer performs image processing on the acquired recognition layer recognition graph, thereby generating the recognition layer paper stack mask set.
S300, processing the identification layer identification graph to obtain identification layer paper stack position information, and generating a label group based on paper stacks, identification layer point clouds, identification layer paper stack mask groups and identification layer paper stack position information.
The identification layer paper stack position information is a set of paper stack positions of all trademark paper stacks on the current top layer of the paper stack; the label group is an instruction for controlling the robot to grasp the label paper, and the label group is generated by a computer and then sent to a register of the robot for storage.
In implementation, the computer performs image processing on the identification layer identification graph to obtain the corresponding identification layer paper stack position information. The computer then generates the label group by processing the paper stack, the identification layer point cloud, the identification layer paper stack mask group and the identification layer paper stack position information, and sends the label group to the robot, where a register stores it.
S400, feeding the trademark paper stacks in the paper stack based on the identification layer paper stack position information and the label group.
In implementation, the robot generates corresponding control instructions by parsing the label group, and feeds the trademark paper into the trademark paper processing device according to those instructions.
According to the feeding method based on 3D vision, the identification layer identification graph is processed to obtain the identification layer paper stack position information; the label group is then generated based on the paper stack, the identification layer point cloud, the identification layer paper stack mask group and the identification layer paper stack position information, and sent to the robot. By parsing the label group, the robot learns the position, end orientation and posture of each trademark paper stack on the current top layer, as well as the number of trademark paper stacks on the top layer and in the whole paper stack. The robot can therefore feed trademark paper from the paper stack into the trademark paper processing device continuously according to the label group, allowing the device to process the trademark paper into cigarette cases; implementing the method thus improves the feeding efficiency of trademark paper and the manufacturing efficiency of cigarette cases.
In one embodiment, generating an identification layer identification map based on an identification layer point cloud and an identification layer image includes:
s110, filtering the point cloud of the identification layer to obtain a filtered point cloud corresponding to the paper stack in the identification layer.
The identification layer point cloud may contain points that were misidentified due to environmental factors. In implementation, the computer filters the acquired identification layer point cloud to remove the misidentified points, obtaining a filtered point cloud corresponding to the paper stack in the identification layer.
And S120, acquiring the highest characteristic point in the filtered point cloud, recording the highest characteristic point, and calculating the height difference between other characteristic points in the filtered point cloud and the highest characteristic point to obtain a height difference group.
It should be noted that the number of trademark paper stacks in each layer of an unloaded paper stack is fixed. If the topmost layer (i.e. the identification layer) of a paper stack does not contain the full count when loading starts, the filtered point cloud may not lie at a single height: the higher feature points belong to the topmost layer and the lower feature points to the sub-top layer.
In the implementation, after the filtered point cloud is obtained, firstly obtaining the feature point with the highest height, marking the feature point as the highest feature point, and then calculating the height difference between other feature points in the filtered point cloud and the highest feature point, thereby obtaining a height difference group.
S130, processing the identification layer image into an identification layer identification map based on the height difference value group and a preset height difference threshold value.
In the implementation, in order to distinguish the characteristic points corresponding to different layers of trademark paper stacks in the filtered point cloud, corresponding height difference threshold values are preset.
In implementation, each height difference in the height difference group is compared with the preset height difference threshold: if it is larger, the corresponding feature point is marked as an effective feature point; otherwise it is marked as an invalid feature point. The pixels corresponding to the effective feature points and to the invalid feature points are then determined in the identification layer image: the regions corresponding to effective feature points are displayed in color (specifically, based on a preset 2D camera), while the regions corresponding to invalid feature points are displayed in black, i.e. their pixels are changed to black pixels, thereby obtaining the identification layer identification map.
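The height grouping of steps S120-S130 can be sketched as follows. The function name, the (N, 3) array layout, and the assumption that z is the vertical axis are illustrative, not taken from the patent; the sketch only splits the filtered points into the group at the height of the highest point and the group below it.

```python
import numpy as np

def split_by_height(points, threshold):
    """Split filtered feature points by height difference from the
    highest point (sketch; `points` is an (N, 3) array of (x, y, z)
    with z pointing up, `threshold` is the preset height difference
    threshold)."""
    z = points[:, 2]
    diff = z.max() - z                  # the height difference group
    top = points[diff <= threshold]     # same layer as the highest point
    lower = points[diff > threshold]    # sub-top layer
    return top, lower
```

For a stack whose top layer is incomplete, the two returned groups correspond to the topmost and sub-top layers described above.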
In one embodiment, filtering the identification layer point cloud to obtain a filtered point cloud corresponding to a paper stack in the identification layer includes:
s111, calculating a first virtual plane normal vector and a second virtual plane normal vector based on the identification layer point cloud, wherein the first virtual plane normal vector corresponds to the identification layer.
The identification layer point cloud acquired by the computer includes not only the first point cloud of the trademark paper stacks on the current top layer, but possibly also several groups of second point clouds misidentified because of environmental factors such as dust and airborne impurities. The computer generates a first virtual plane corresponding to the identification layer by processing the first point cloud; since the identification layer is essentially horizontal, the first virtual plane is essentially horizontal as well. The computer also processes each acquired group of second point clouds to generate a second virtual plane corresponding to each group one by one; because of the environmental factors, the second virtual planes corresponding to misidentified second point clouds are inclined.
Further, the computer calculates the normal vector of the first virtual plane to obtain the first virtual plane normal vector, and calculates the normal vector of each second virtual plane to obtain the second virtual plane normal vectors.
S112, judging whether each second virtual plane normal vector is parallel to the first virtual plane normal vector, and if not, deleting the feature points corresponding to that second virtual plane normal vector.
After the first virtual plane normal vector and each second virtual plane normal vector are obtained, the computer judges whether each second virtual plane normal vector is parallel to the first virtual plane normal vector. If a second virtual plane normal vector is not parallel, the corresponding second point cloud was misidentified due to environmental factors, and all feature points of that second point cloud are deleted, yielding the preliminary filtered point cloud.
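Steps S111-S112 can be sketched as fitting a normal vector to each point-cloud cluster and testing parallelism. The SVD-based plane fit and the tolerance value are illustrative assumptions; the patent does not specify how the virtual planes or normals are computed.

```python
import numpy as np

def plane_normal(points):
    """Least-squares normal of a point cluster: the right singular
    vector with the smallest singular value of the centered points."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def is_parallel(n1, n2, tol=0.02):
    """Two normal vectors are parallel when |cos(angle)| is near 1."""
    cos = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return cos > 1.0 - tol
```

Feature points whose cluster normal fails `is_parallel` against the identification layer's normal would be deleted, as in step S112.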
S113, if so, identifying the feature points corresponding to the first virtual plane normal vector and the second virtual plane normal vector to obtain identified feature points, and clustering the identified feature points within a preset distance to obtain feature point identification groups corresponding to the paper stacks.
If a second virtual plane normal vector is parallel to the first virtual plane normal vector, the corresponding second point cloud may be part of the first point cloud, or it may be a misidentified second point cloud whose normal vector merely happens to be parallel to the first virtual plane normal vector.
Further, when a second virtual plane normal vector is judged to be parallel to the first virtual plane normal vector, the feature points corresponding to the first and second virtual plane normal vectors are identified to obtain identified feature points, and the identified feature points within the preset distance of one another are clustered, yielding a feature point identification group corresponding one by one to each trademark paper stack on the current top layer of the paper stack.
And S114, judging whether the number of the characteristic points in the characteristic point identification group is smaller than a preset characteristic point number threshold, and if so, deleting the corresponding characteristic point identification group.
The number of feature points in the feature point identification group corresponding to one label sheet bundle has the lowest threshold value, and in this embodiment, this threshold value is referred to as a feature point number threshold value.
In implementation, after the feature point identification groups corresponding one by one to the trademark paper stacks on the current top layer are obtained, it is judged whether the number of feature points in each group is smaller than the feature point number threshold. If so, the points in that group are not feature points of a trademark paper stack; they are very likely a misidentified second point cloud whose normal vector happens to be parallel to the first virtual plane normal vector. Feature point identification groups whose count falls below the threshold are therefore deleted.
Filtering of the identification layer point cloud is thus achieved by deleting the second point clouds whose second virtual plane normal vectors are not parallel to the first virtual plane normal vector, together with the feature point identification groups whose feature point count is below the threshold. The result is the filtered point cloud, i.e. the point cloud corresponding to the current identification layer of the paper stack. This filtering removes feature points misidentified due to environmental factors, so an accurate point cloud of the identification layer is obtained.
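The clustering and count filter of steps S113-S114 can be sketched with a brute-force single-linkage grouping; the union-find implementation and the parameter names are illustrative choices, not from the patent.

```python
import numpy as np

def cluster_and_filter(points, max_dist, min_count):
    """Group feature points connected by chains of pairwise distances
    within max_dist (single-linkage via union-find), then drop groups
    smaller than min_count, mirroring steps S113-S114."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= max_dist:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) >= min_count]
```

Each surviving group of indices corresponds to one candidate trademark paper stack; isolated misidentified points fall below `min_count` and are discarded.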
In one embodiment, processing the identification layer identification map to obtain identification layer stack position information includes:
s310, carrying out edge feature extraction processing on the identification layer identification graph to obtain primary edge features corresponding to the paper pile.
In order to facilitate the robot to know the position of the label paper to be grasped, it is necessary to determine the position of each label paper stack in the identification layer based on the identification layer identification map.
Specifically, after the identification layer identification map is obtained, it is processed with the Canny operator to enhance the edge features of the kraft paper top wall in each trademark paper stack: the gray values of the pixels corresponding to the edge features are set to 255 (white) and the gray values of the remaining pixels of the kraft paper top walls are set to 0 (black), yielding the primary edge features.
S320, performing expansion operation and contraction operation on the primary edge feature to obtain an optimized edge feature.
It should be noted that a primary edge feature generally contains broken segments, which would adversely affect subsequent processing, so:
after the primary edge feature is obtained, a dilation operation is performed on it as a whole, expanding its lines until the two ends of each broken segment are joined; a contraction operation is then performed on the dilated feature until its line width shrinks back to that of the initial primary edge feature. The feature at this point no longer contains broken segments and is recorded as the optimized edge feature, whose shape conforms to the edge of the corresponding kraft paper top wall. Since the kraft paper is relatively smooth, a complete rectangular kraft paper edge can be obtained by performing step S320.
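The dilation-then-contraction of step S320 is a morphological closing. Below is a pure-NumPy sketch with a square structuring element; in practice OpenCV's `cv2.dilate`/`cv2.erode` (or `cv2.morphologyEx` with `MORPH_CLOSE`) would be applied to the Canny output, so the hand-rolled loops here are purely for illustration.

```python
import numpy as np

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(mask, pad)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(mask, pad)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def close_edges(edge_mask, k=3):
    """Dilate then erode: bridges small breaks in the primary edge
    feature while restoring the original line width (step S320)."""
    return erode(dilate(edge_mask, k), k)
```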
S330, calculating the central position of each optimized edge feature to obtain the position information of the identification layer paper stack.
After the optimized edge features corresponding to the trademark paper stacks in the identification layer are obtained, the edge information of the kraft paper is extracted from each optimized edge feature, and the center position of each feature, i.e. the center of the rectangular kraft paper, is calculated from that edge information. The set of the center positions of all optimized edge features is recorded as the identification layer paper stack position information.
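A minimal sketch of the center computation in step S330 takes the centroid of the edge pixels; in practice `cv2.moments` on the extracted contour, or the center of `cv2.minAreaRect`, would be the usual route, so this function is an illustrative stand-in.

```python
import numpy as np

def feature_center(edge_mask):
    """Center of an optimized edge feature as the mean coordinate of
    its edge pixels; returns (x, y) in image coordinates."""
    ys, xs = np.nonzero(edge_mask)
    return float(xs.mean()), float(ys.mean())
```

For a closed rectangular edge this centroid coincides with the rectangle center, which is what the identification layer paper stack position information collects.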
In one embodiment, generating a tag group based on a stack of sheets, an identification layer point cloud, an identification layer stack mask group, and identification layer stack position information includes:
s340, processing the paper stack and the identification layer point cloud to obtain the area label corresponding to the paper stack.
It should be noted that the operating end of the robot is provided with a plurality of grippers, each of which grabs trademark paper stacks located in a corresponding spatial region of the paper stack. To enable each gripper to accurately grab the stacks within its spatial region, an area label must be set for every trademark paper stack according to the spatial region it occupies.
In practice, the paper stack and the identification layer point cloud are processed to obtain an area label for each trademark paper stack. For example, assuming the paper stack is divided into two spatial regions, one near the robot and the other far from it, the area label of a trademark paper stack in the near region is set to 1 and that of a stack in the far region is set to 2.
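Using the two-region example above, the area-label assignment can be sketched as a comparison of each stack center against a region boundary; the coordinate convention and the boundary value are illustrative assumptions.

```python
def area_label(center_y, boundary_y):
    """Label 1 for a brand-paper stack whose center lies in the spatial
    region near the robot (y <= boundary), 2 for the far region."""
    return 1 if center_y <= boundary_y else 2
```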
S350, performing color unification processing on the identification layer paper stack mask group to obtain a paper stack uniform-color mask group, and processing the paper stack uniform-color mask group to obtain the orientation label corresponding to each paper stack.
It should be noted that each paper stack mask in the identification layer paper stack mask group corresponds to the top wall of a trademark paper stack. The top wall carries a pattern, and this pattern produces a corresponding texture in the mask; the shapes produced by the pattern would adversely affect the process of determining the orientation of the trademark paper stack, so:
after the identification layer paper stack mask group is obtained through step S200, the contour edge of each paper stack mask and the area inside the contour are uniformly set to white, and the area outside each contour is uniformly set to black, yielding the paper stack uniform-color mask group. This eliminates the adverse effect of the pattern-induced shapes on recognition and improves the accuracy of judging the orientation of each trademark paper stack.
Further, the paper stack uniform-color mask group is processed by a preset YOLOv5 model to obtain the direction of a specific end of each paper stack mask, from which the corresponding orientation label is generated. In this embodiment, if the specific end of a paper stack mask faces left, the orientation label is set to 0; if it faces right, the orientation label is set to 1.
S360, generating a quantity label corresponding to the paper stack based on the identification layer paper stack position information.
After the identification layer paper stack position information is obtained through step S330, the number of optimized edge feature center positions in the position information is counted to obtain the number of trademark paper stacks in the identification layer, thereby obtaining the corresponding quantity label. Assuming the number of trademark paper stacks in the identification layer is 16, the quantity label is 16; assuming the number is 6, the quantity label is 06.
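The zero-padded examples (16 → 16, 6 → 06) suggest a fixed two-digit label format. A sketch of the counting and formatting, with a hypothetical function name:

```python
def quantity_label(center_positions):
    """Count optimized edge-feature center positions and format the count
    as a two-digit quantity label."""
    return f"{len(center_positions):02d}"

quantity_label([(0, 0)] * 16)  # "16"
quantity_label([(0, 0)] * 6)   # "06"
```
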
S370, processing the paper stack to obtain the layer number label and the posture label corresponding to the paper stack.
To let the robot know the number of layers of the trademark paper stack to be grabbed, the paper stack is processed to obtain a layer number label that is sent to the robot. To prevent the robot from grabbing a trademark paper stack that is too inclined in the length direction, which could cause the robot's mechanical arm to exceed its stroke, the paper stack is also processed to obtain a posture label that is sent to the robot.
S380, generating a label group based on the region label, the orientation label, the quantity label, the layer number label and the posture label.
After the region label, orientation label, quantity label, layer number label and posture label are generated, they are combined into a label group, which is then sent to the robot; one label group corresponds to each trademark paper stack.
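The label group sent to the robot could be bundled and serialized per trademark paper stack as follows; the dictionary structure and field names are assumptions for illustration, since the patent does not specify a wire format:

```python
import json

def make_tag_group(region, orientation, quantity, layer, posture):
    """Bundle the five labels into one label group for a single brand stack."""
    return {"region": region, "orientation": orientation,
            "quantity": quantity, "layer": layer, "posture": posture}

tag_group = make_tag_group(region=1, orientation=0, quantity="16",
                           layer=8, posture=1)
payload = json.dumps(tag_group)  # one message per trademark paper stack
```
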
In one embodiment, processing the paper stack and the identification layer point cloud to obtain an area label corresponding to the paper stack includes:
S341, acquiring preset characteristic points in the point cloud of the identification layer, marking the characteristic points as characteristic base points, and dividing at least two grabbing areas based on preset area size data and the characteristic base points.
Specifically, a preset feature point in the point cloud of the identification layer is obtained, in this embodiment, the preset feature point is specifically a feature point corresponding to one corner point of the identification layer, and the feature point is marked as a feature base point; further, preset area size data is obtained, and in an embodiment, the area size data includes length data corresponding to a long side of a top wall of the paper stack and width data corresponding to a wide side of the top wall of the paper stack; and dividing the plane area where the top wall of the paper stack is positioned into at least two grabbing areas by taking the characteristic base point as a starting point and combining the length data and the width data.
S342, obtaining characteristic points corresponding to the paper pile, determining a grabbing area where the characteristic points are located, and generating an area label corresponding to the paper pile based on the determined grabbing area.
Acquiring at least one characteristic point corresponding to each trademark paper stack, determining the position of the corresponding characteristic point of each trademark paper stack, and determining the grabbing area of the corresponding trademark paper stack according to the position of the characteristic point; and then determining the area label of the corresponding trademark paper stack according to the grabbing area corresponding to each trademark paper stack.
Assuming that 2 grabbing areas are provided, if the trademark paper stack is positioned in the 1 st grabbing area, setting the area label of the corresponding trademark paper stack to be 1; if the trademark stack is in the 2 nd grabbing area, the area label of the corresponding trademark stack is set to be 2.
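Steps S341 and S342 can be sketched as follows. Here the top-wall plane is split into rectangles of the stack's top-wall length and width, stacked along the width axis starting at the feature base point; the strip geometry and function names are assumptions for illustration:

```python
def build_grab_regions(base_xy, length, width, n_regions=2):
    """Divide the top-wall plane into n_regions rectangles, stacked along
    the width axis, starting at the feature base point."""
    x0, y0 = base_xy
    return [(x0, y0 + i * width, x0 + length, y0 + (i + 1) * width)
            for i in range(n_regions)]

def region_label(feature_xy, regions):
    """Return the 1-based index of the grabbing area containing the point."""
    px, py = feature_xy
    for i, (x1, y1, x2, y2) in enumerate(regions, start=1):
        if x1 <= px < x2 and y1 <= py < y2:
            return i
    return None  # feature point falls outside every grabbing area

regions = build_grab_regions(base_xy=(0.0, 0.0), length=1.2, width=0.4)
region_label((0.5, 0.1), regions)  # 1: first grabbing area
region_label((0.5, 0.5), regions)  # 2: second grabbing area
```
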
An area label is set for each trademark paper stack so that the robot can control the gripper corresponding to that area label; when grabbing trademark paper, each gripper thus knows the grabbing area it is responsible for, which facilitates efficient and coordinated work among the different grippers.
In one embodiment, processing the paper stack to obtain a layer number label and a posture label corresponding to the paper stack includes:
S371, generating a layer number label corresponding to the paper stack based on the acquired height of the paper stack, the thickness of the material tray and the thickness of the paper stack.
The paper stack comprises a material tray on which multiple layers of trademark paper stacks are placed; the material tray supports the multiple layers of trademark paper stacks. Separator paper is arranged between two adjacent trademark paper stack layers to separate the layers; the separator paper is thin, and its thickness can be ignored in the subsequent calculation of the number of trademark paper stack layers.
In implementation, the relevant coordinates are converted into the robot coordinate system. The distance between the ground and the robot base is measured to obtain a ground height coordinate value. The height of the top wall of the highest trademark paper stack is then obtained by the 3D camera to give a paper stack pseudo-height coordinate value; both the ground height coordinate value and the paper stack pseudo-height coordinate value are converted into the robot coordinate system, and their difference is calculated to obtain the height of the paper stack. The thickness of the material tray is then measured and subtracted from the height of the paper stack, and the result is divided by the pre-measured height of a single trademark paper stack to obtain the number of layers of trademark paper stacks; a layer number label corresponding to the paper stack is generated from this number of layers.
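The layer-number arithmetic described above — express the heights in the robot frame, subtract the ground and tray contributions, divide by the per-layer height — can be sketched as follows (function name and sample values are illustrative assumptions):

```python
def layer_number(stack_top_z, ground_z, tray_thickness, brand_stack_height):
    """Number of trademark paper stack layers on the material tray.

    stack_top_z and ground_z are height coordinates already expressed in the
    robot coordinate system; separator paper is thin enough to ignore.
    """
    stack_height = stack_top_z - ground_z          # paper stack height
    layers = (stack_height - tray_thickness) / brand_stack_height
    return round(layers)

# e.g. top wall at 0.42 m, ground at -0.60 m relative to the robot base,
# 0.06 m tray, 0.12 m per trademark paper stack layer -> 8 layers
layer_number(stack_top_z=0.42, ground_z=-0.60,
             tray_thickness=0.06, brand_stack_height=0.12)
```
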
The purpose of generating the layer number label is that, when the layer number corresponding to the label is the lowest layer of the paper stack, the router transmits the layer number signal to the AGV trolley, so that the AGV trolley automatically pulls in a new paper stack.
S372, acquiring the length direction of a preset reference paper stack in the paper stack and recording it as the reference length direction; acquiring the length direction of each other trademark paper stack and recording it as the paper stack length direction.
Specifically, a reference paper stack is manually preset in the identification layer of the paper stack, the length direction of the reference paper stack is recorded as a reference length direction, and the reference length direction is the ideal length direction of the label paper when the label paper is fed; further, the length direction of the trademark stack to be grasped is obtained and is referred to as the stack length direction.
S373, calculating an included angle between the length direction of the paper stack and the reference length direction, and judging whether the included angle is larger than a preset included angle threshold value or not to obtain a judging result.
Specifically, calculating an included angle between the length direction of the paper stack and the reference length direction, judging whether the included angle is larger than a preset included angle threshold value, and obtaining a corresponding judgment result.
S374, generating a posture label based on the judgment result.
Specifically, if the judgment result is that the included angle is larger than the preset included angle threshold, the corresponding trademark paper stack is skewed in the horizontal direction and does not conform to the ideal feeding direction; a corresponding posture label is generated and set to 0. If the judgment result is that the included angle is not larger than the preset included angle threshold, the corresponding trademark paper stack is ideal in the horizontal direction and conforms to the ideal feeding direction; a corresponding posture label is generated and set to 1.
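Steps S372 to S374 reduce to an angle test between two direction vectors in the horizontal plane. A sketch, in which the 5-degree threshold is an assumed value (the patent only says "preset included angle threshold"):

```python
import math

def posture_label(stack_dir, ref_dir, threshold_deg=5.0):
    """0 = skewed beyond the threshold, 1 = within the ideal feed direction."""
    dot = stack_dir[0] * ref_dir[0] + stack_dir[1] * ref_dir[1]
    norm = math.hypot(*stack_dir) * math.hypot(*ref_dir)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    angle = min(angle, 180.0 - angle)  # a length axis has no sign
    return 0 if angle > threshold_deg else 1

posture_label((1.0, 0.0), (1.0, 0.0))  # 1: aligned with the reference
posture_label((1.0, 0.2), (1.0, 0.0))  # 0: ~11.3 deg skew exceeds 5 deg
```
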
S400, acquiring the label group corresponding to each trademark paper stack and sending it to the robot; the robot parses the label group to obtain the region label, orientation label, quantity label, layer number label and posture label, and, according to these labels, grabs the trademark paper in the corresponding trademark paper stack into the trademark paper processing device.
Specifically, the robot analyzes the regional label to obtain a grabbing area where the trademark paper stack corresponding to the label group is located, so that the robot can select a gripper corresponding to the grabbing area.
The robot analyzes the orientation labels to determine the orientation of the trademark paper stack corresponding to the label group, namely whether the appointed end of the trademark paper stack faces left or right; assuming that the preset orientation of the trademark paper stack is left, but the designated end of the trademark paper stack is right after the orientation label is analyzed, in this case, if the trademark paper in the trademark paper stack is grabbed to the trademark paper processing device, the trademark paper processing device cannot process the corresponding trademark paper into a cigarette case, and a temporary stop condition occurs; at this time, the robot will give an alarm prompting the staff to correct the orientation of the trademark stack.
The robot parses the quantity label to obtain the number of trademark paper stacks in the identification layer; each time feeding of one trademark paper stack is completed, the number corresponding to the quantity label is reduced by 1, until it reaches 0. When the number of trademark paper stacks corresponding to the quantity label is 0, all trademark paper stacks in the identification layer have been fed. The 3D camera is then triggered to emit structured light once toward the paper stack to obtain the point cloud of the separator paper between the identification layer and the second-from-top layer; size data of the separator paper are calculated from this point cloud, and it is judged whether the size data match the actual separator paper size. If so, the robot removes the separator paper between the identification layer and the second-from-top layer, the 3D camera is triggered to emit structured light toward the trademark paper stacks of the second-from-top layer and capture an image of them, and the second-from-top layer is taken as the new identification layer, on which the steps of the 3D vision-based feeding method of the above embodiments are executed.
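The countdown behavior of the quantity label can be sketched as a small generator; when the count reaches zero, the caller would trigger the 3D camera rescan and separator-paper removal described above (the generator shape is an illustration, not the patent's implementation):

```python
def feed_countdown(quantity_label):
    """Yield (remaining, layer_exhausted) after each brand-stack feed."""
    remaining = int(quantity_label)
    while remaining > 0:
        remaining -= 1
        yield remaining, remaining == 0

list(feed_countdown("02"))  # [(1, False), (0, True)] -> rescan on True
```
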
The robot parses the layer number label to obtain the layer number corresponding to the label group, that is, the layer on which the trademark paper stack sits; the topmost layer of the paper stack has the highest layer number, and when feeding of one trademark paper stack is completed, the layer number in the layer number label corresponding to the next trademark paper stack is reduced by 1.
The robot parses the posture label to learn whether the degree of horizontal skew of the corresponding trademark paper stack is acceptable; if not, the robot raises an alarm prompting the staff to adjust the trademark paper stack until its horizontal skew reaches the ideal degree. If the robot were to grab trademark paper with an unacceptable skew into the trademark paper processing device, the device could not process the paper into a cigarette case, and a temporary shutdown would occur.
In implementation, by acquiring the identification layer paper stack position information and the label groups, the robot can automatically feed the trademark paper stacks in the paper stack in sequence, and can also prompt a worker to adjust a trademark paper stack when it judges that the stack's orientation is reversed and/or its horizontal skew is large, which helps prevent shutdown of the trademark paper processing device. In summary, the above steps facilitate improving the feeding efficiency of the trademark paper and the manufacturing efficiency of cigarette cases.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include a plurality of sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their order of execution is not necessarily sequential, and they may be performed in turns or alternately with at least a part of the other steps or sub-steps.
Based on the same inventive concept, the embodiment of the invention also provides a feeding device based on 3D vision; fig. 4 is a structural block diagram of a feeding device based on 3D vision according to an embodiment of the present invention, and referring to fig. 4, the device includes:
the identification graph generation module is used for acquiring the recognition layer point cloud and the recognition layer image of the paper stack, and generating a recognition layer recognition graph based on the recognition layer point cloud and the recognition layer image;
the mask set acquisition module is used for processing the identification layer identification graph to obtain an identification layer paper stack mask set;
the label group generating module is used for processing the identification layer identification graph to obtain identification layer paper stack position information and generating a label group based on paper stacks, identification layer point clouds, identification layer paper stack mask groups and the identification layer paper stack position information;
and the feeding control module is used for feeding the trademark paper stacks in the paper stack based on the identification layer paper stack position information and the label group.
All or part of each module in the 3D vision-based feeding device can be realized by software, hardware and a combination thereof. The above modules may be embedded in hardware or independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
It should be noted that the technical solution by which the 3D vision-based feeding device solves the technical problem is similar to the technical solution defined by the 3D vision-based feeding method; the details of the solution provided by the 3D vision-based feeding device are therefore not repeated here.
In some embodiments, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 5. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer equipment is used for storing data such as identification layer images, identification layer identification diagrams, identification layer paper stack mask groups and the like. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements the steps in the 3D vision-based feeding method described above.
It will be appreciated by those skilled in the art that the structure shown in fig. 5 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In some embodiments, a computer device is provided, comprising a memory storing a computer program and a processor implementing the steps of the method embodiments described above when the computer program is executed.
In some embodiments, an internal structural diagram of a computer-readable storage medium is provided as shown in fig. 6, the computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the method embodiments described above.
In some embodiments, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program, which may be stored on a non-transitory computer readable storage medium and which, when executed, may comprise the steps of the above-described embodiments of the methods. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples represent only a few embodiments of the present application, which are described in more detail and are not thereby to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (11)

1. The 3D vision-based feeding method is characterized by comprising the following steps of:
acquiring a recognition layer point cloud and a recognition layer image of a paper stack, and generating a recognition layer recognition graph based on the recognition layer point cloud and the recognition layer image;
processing the identification layer identification graph to obtain an identification layer paper stack mask group;
processing the identification layer identification graph to obtain identification layer paper stack position information, and generating a label group based on the paper stack, the identification layer point cloud, the identification layer paper stack mask group and the identification layer paper stack position information;
and feeding the trademark paper stacks in the paper stack based on the identification layer paper stack position information and the label group.
2. A method according to claim 1, wherein generating a recognition layer recognition graph based on the recognition layer point cloud and the recognition layer image comprises:
filtering the point cloud of the identification layer to obtain a filtered point cloud corresponding to the paper stack in the identification layer;
acquiring the highest characteristic point in the filtered point cloud, recording the highest characteristic point, and calculating the height difference between other characteristic points in the filtered point cloud and the highest characteristic point to obtain a height difference group;
and processing the identification layer image into an identification layer identification map based on the height difference group and a preset height difference threshold.
3. A method according to claim 2, wherein said filtering said identification layer point cloud to obtain a filtered point cloud corresponding to said stack of sheets in an identification layer, comprises:
calculating a first virtual plane normal vector and a second virtual plane normal vector based on the identification layer point cloud, wherein the first virtual plane normal vector corresponds to the identification layer;
judging whether the normal vector of the second virtual surface is parallel to the normal vector of the first virtual surface, if not, deleting the characteristic points corresponding to the normal vector of the second virtual surface;
If yes, identifying the characteristic points corresponding to the normal vector of the first virtual surface and the normal vector of the second virtual surface to obtain identification characteristic points, and clustering the identification characteristic points with a preset distance to obtain a characteristic point identification group corresponding to the paper pile;
judging whether the number of the characteristic points in the characteristic point identification group is smaller than a preset characteristic point number threshold value, if so, deleting the corresponding characteristic point identification group.
4. A method according to claim 1, wherein said processing said identification layer identification map to obtain identification layer stack position information comprises:
performing edge feature extraction processing on the identification layer identification graph to obtain primary edge features corresponding to the paper pile;
performing expansion operation and contraction operation on the primary edge characteristics to obtain optimized edge characteristics;
and calculating the central position of each optimized edge feature to obtain the position information of the identification layer paper stack.
5. The method of claim 1, wherein the generating a label group based on the paper stack, the identification layer point cloud, the identification layer paper stack mask group and the identification layer paper stack position information comprises:
Processing the paper stack and the identification layer point cloud to obtain an area label corresponding to the paper stack;
performing color unification treatment on the paper pile mask group of the identification layer to obtain a paper pile unification mask group, and processing the paper pile unification mask group to obtain an orientation label corresponding to the paper pile;
generating a quantity label corresponding to the paper stack based on the identification layer paper stack position information;
processing the paper stack to obtain a layer number label and an attitude label corresponding to the paper stack;
and generating the label group based on the area label, the orientation label, the quantity label, the layer number label and the attitude label.
6. The method according to claim 5, wherein the processing the paper stack and the identification layer point cloud to obtain an area label corresponding to the paper stack comprises:
acquiring preset characteristic points in the identification layer point cloud, marking the characteristic points as characteristic base points, and dividing at least two grabbing areas based on preset area size data and the characteristic base points;
and acquiring the characteristic points corresponding to the paper pile, determining the grabbing areas where the characteristic points are located, and generating area labels corresponding to the paper pile based on the determined grabbing areas.
7. The method according to claim 5, wherein the processing the paper stack to obtain a layer number label and an attitude label corresponding to the paper stack comprises:
generating a layer number label corresponding to the paper stack based on the acquired height of the paper stack, the thickness of the material tray and the thickness of the paper stack;
acquiring the length direction of a preset reference paper stack in the paper stack, and marking the length direction as the reference length direction; acquiring the length direction of other paper stacks, and marking the length direction of the paper stacks as the length direction of the paper stacks;
calculating an included angle between the length direction of the paper stack and the reference length direction, and judging whether the included angle is larger than a preset included angle threshold value or not to obtain a judging result;
and generating the gesture label based on the judging result.
8. A 3D vision-based feeding device, characterized by comprising:
the identification graph generation module is used for acquiring the recognition layer point cloud and the recognition layer image of the paper stack, and generating a recognition layer recognition graph based on the recognition layer point cloud and the recognition layer image;
the mask set acquisition module is used for processing the identification layer identification graph to obtain an identification layer paper stack mask set;
the label group generating module is used for processing the identification layer identification graph to obtain identification layer paper stack position information, and generating a label group based on the paper stack, the identification layer point cloud, the identification layer paper stack mask group and the identification layer paper stack position information;
and the feeding control module is used for feeding the trademark paper stacks in the paper stack based on the identification layer paper stack position information and the tag group.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
11. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202311738089.7A 2023-12-18 2023-12-18 3D vision-based feeding method and device, computer equipment and storage medium Pending CN117690121A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311738089.7A CN117690121A (en) 2023-12-18 2023-12-18 3D vision-based feeding method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117690121A true CN117690121A (en) 2024-03-12

Family

ID=90136772


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination