CN117680977A - Robot feeding, splicing and aligning method, device, equipment and medium - Google Patents

Robot feeding, splicing and aligning method, device, equipment and medium

Info

Publication number
CN117680977A
Authority
CN
China
Prior art keywords
panel
spliced
coordinates
determining
corner
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410157432.7A
Other languages
Chinese (zh)
Other versions
CN117680977B (en)
Inventor
范朝龙
廖茂竹
吴宇君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ji Hua Laboratory
Original Assignee
Ji Hua Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ji Hua Laboratory
Priority to CN202410157432.7A
Publication of CN117680977A
Application granted
Publication of CN117680977B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to a robot feeding, splicing and aligning method, device, equipment and medium. The method includes: performing corner detection on the segmentation map of the last spliced panel among the spliced panels based on a preset corner detection algorithm, determining the corner coordinates of each corner of that panel, and determining its attitude rotation angle from those corner coordinates; determining a preset gap distance between the panel to be spliced and the last spliced panel, and determining, from the corner coordinates of each corner and the preset gap distance, the coordinate value of the center point of the panel to be spliced after splicing alignment; and having the feeding robot align and place the panel to be spliced according to that center-point coordinate value. The method and device can greatly improve the accuracy of solar panel production splicing alignment and markedly raise the product qualification rate of solar panels.

Description

Robot feeding, splicing and aligning method, device, equipment and medium
Technical Field
The present disclosure relates to the field of image processing, and in particular to a robot feeding, splicing and aligning method, a corresponding device, an electronic device, and a computer readable storage medium.
Background
In many industrial production processes, workpieces must be aligned and spliced, and some panels must be joined together; in solar panel production, for example, one of the most important operations is the alignment and placement between panels. If the alignment is not accurate enough or the error is large, defective products increase and raw material is wasted. At present, many production lines still rely on manual alignment and placement, which gives a high yield but low efficiency.
At present, robots use a traditional contact-type alignment and placement approach, which mainly leads to inaccurate placement: with contact-type alignment and splicing, the magnitude of the alignment error cannot be known, so once the alignment error is large the splicing error is also large, easily exceeding the product requirements and producing defective goods.
In summary, the prior art suffers from increased defective products, wasted raw material and low efficiency, and the contact-type alignment and placement approach used by robots cannot determine the magnitude of the alignment error during splicing, so product requirements are easily exceeded and defective goods are produced.
Disclosure of Invention
The present application aims to solve the above problems and provide a robot feeding, splicing and aligning method, a corresponding device, an electronic device and a computer readable storage medium.
In order to meet the purposes of the application, the application adopts the following technical scheme:
one of the purposes of the application is to provide a robot feeding, splicing and aligning method, which comprises the following steps:
acquiring spliced panel images, performing multi-scale encoding and decoding on the spliced panel images based on a pre-trained image segmentation model, determining a plurality of pieces of image feature information of contour features of each spliced panel, and extracting segmentation graphs corresponding to each spliced panel from the spliced panel images according to the plurality of pieces of image feature information;
performing corner detection on a segmentation graph corresponding to a spliced panel in each spliced panel based on a preset corner detection algorithm, determining corner coordinates of each corner of the spliced panel, and determining an attitude rotation angle of the spliced panel based on the corner coordinates of each corner;
determining a preset gap distance between a panel to be spliced and the last spliced panel, and determining a coordinate value corresponding to a panel center point of the panel to be spliced after splicing alignment according to the corner coordinates of each corner and the preset gap distance;
and the feeding robot performs alignment and placement on the panel to be spliced according to the attitude rotation angle and the coordinate value corresponding to the panel center point of the panel to be spliced after splicing alignment, so that the robot completes horizontal splicing alignment of the panel to be spliced.
Optionally, the step of performing multi-scale encoding and decoding on the spliced panel image based on a pre-trained image segmentation model, determining a plurality of image feature information capturing contour features of each spliced panel, and extracting a segmentation map corresponding to each spliced panel from the spliced panel image according to the plurality of image feature information includes:
invoking a pre-trained image segmentation model, and fully connecting all image feature information based on a full-connection layer in the image segmentation model to fuse and generate mask image data, wherein the mask image data inherits contour features of all spliced panels in the image feature information;
and extracting the original specification of the spliced panel images based on the mask image data to obtain a segmentation map corresponding to each spliced panel, wherein the segmentation map is formed by images in the outline corresponding to the outline characteristics of each spliced panel.
Optionally, the step of performing corner detection on the segmentation graph corresponding to the spliced panel in each spliced panel based on a preset corner detection algorithm, determining corner coordinates of each corner of the spliced panel, and determining the attitude rotation angle of the spliced panel based on the corner coordinates of each corner includes:
determining the coordinates of an upper right corner point and the coordinates of a lower right corner point in the coordinates of the corners of each corner of the last spliced panel, and calculating and determining the slope between the two points of the upper right corner point and the lower right corner point according to the coordinates of the upper right corner point and the coordinates of the lower right corner point;
and calculating and determining the attitude rotation angle of the last spliced panel by adopting an arctangent function according to the slope between the upper right corner point and the lower right corner point.
Optionally, determining a preset gap distance between the panel to be spliced and the last spliced panel, and determining a coordinate value corresponding to a panel center point of the panel to be spliced after splicing alignment according to the corner coordinates of each corner and the preset gap distance, where the step includes:
determining the coordinates of an upper right corner point and the coordinates of a lower right corner point in the coordinates of the corners of each corner of the last spliced panel, and calculating and determining the coordinates of the middle point of a straight line between the upper right corner point and the lower right corner point according to the coordinates of the upper right corner point and the coordinates of the lower right corner point;
And determining the panel width of the panel to be spliced, and calculating and determining the coordinate value corresponding to the panel center point of the panel to be spliced after splicing alignment according to the panel width of the panel to be spliced, the midpoint coordinate of the straight line between the upper right corner point and the lower right corner point and the preset gap distance between the panel to be spliced and the last spliced panel.
Optionally, the step of aligning and placing the panel to be spliced by the feeding robot according to the posture rotation angle and a coordinate value corresponding to the panel center point of the panel to be spliced after splicing and aligning includes:
acquiring the coordinates of the upper left corner point and the coordinates of the upper right corner point among the corner coordinates of each corner of the last spliced panel, and calculating and determining the slope between the upper left corner point and the upper right corner point according to those coordinates;
determining the placement posture of the panel to be spliced based on the slope between the left upper corner point and the right upper corner point, the posture rotation angle of the last spliced panel and the coordinate value corresponding to the panel center point of the panel to be spliced after splicing alignment;
And the feeding robot performs alignment and placement according to the placement posture of the panel to be spliced so as to finish horizontal splicing alignment of the robot on the panel to be spliced.
Optionally, the basic network architecture of the image segmentation model is the U2-Net model.
Optionally, the panel comprises one or more of a solar panel or a new energy automobile battery panel.
In accordance with another purpose of this application, a robot feeding, splicing and aligning device is provided, which includes:
the image segmentation module is used for acquiring spliced panel images, carrying out multi-scale encoding and decoding on the spliced panel images based on a pre-trained image segmentation model, determining a plurality of image feature information for capturing contour features of each spliced panel, and extracting segmentation graphs corresponding to each spliced panel from the spliced panel images according to the plurality of image feature information;
the attitude angle determining module is used for carrying out angle point detection on a segmentation graph corresponding to a spliced panel in each spliced panel based on a preset angle point detection algorithm, determining angle point coordinates of each angle of the spliced panel, and calculating and determining an attitude rotation angle of the spliced panel based on the angle point coordinates of each angle;
The panel position determining module is used for determining a preset gap distance between a panel to be spliced and the last spliced panel, and determining coordinate values corresponding to panel center points of the panel to be spliced after splicing alignment according to corner coordinates of each corner and the preset gap distance;
the panel splicing alignment module is configured so that the feeding robot aligns and places the panel to be spliced according to the attitude rotation angle and the coordinate value corresponding to the panel center point of the panel to be spliced after splicing alignment, so as to complete horizontal splicing alignment of the panel to be spliced by the robot.
An electronic device adapted to another object of the present application includes a central processing unit and a memory, where the central processing unit is configured to invoke and run a computer program stored in the memory to execute the steps of the robot feeding, splicing and aligning method described above.
A computer readable storage medium adapted to another object of the present application stores, in the form of computer readable instructions, a computer program implementing the robot feeding, splicing and aligning method; when invoked by a computer, the program executes the steps included in the corresponding method.
Compared with the prior art, the application addresses the problems that, in the prior art, defective products increase, raw material is wasted, efficiency is low, and the contact-type alignment and placement approach of robots cannot determine the magnitude of the alignment error, so product requirements are easily exceeded and defective goods are produced; the application provides the following beneficial effects:
firstly, the robot feeding, splicing and aligning method can markedly improve the accuracy of panel splicing alignment on an industrial production line and greatly save manpower and material resources, avoiding the increase in defective products and the waste of raw material caused by inaccurate alignment and placement or large errors between panels on the line;
secondly, splicing alignment of panels on the industrial production line is performed based on the image segmentation model, so no manual alignment and placement is needed and the feeding robot can perform automatic, non-contact placement, which greatly improves panel production efficiency and avoids producing defective goods that exceed product requirements and degrade the user experience;
further, the robot feeding, splicing and aligning method can greatly improve the accuracy of solar panel production splicing alignment, markedly raise the product qualification rate of solar panels, and improve the user experience while increasing enterprise benefits.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a schematic flow chart of a feeding, splicing and aligning method of a robot;
fig. 2 is a schematic flow chart of extracting a segmentation chart corresponding to each spliced panel from a spliced panel image according to a plurality of image feature information in an embodiment of the present application;
FIG. 3 is a schematic structural block diagram of the U2-Net image segmentation model in an embodiment of the present application;
FIG. 4 is a schematic flow chart of determining the attitude rotation angle of the last spliced panel according to the embodiment of the present application;
fig. 5 is a flowchart illustrating a process of determining coordinate values corresponding to a panel center point of a panel to be spliced after splicing alignment in the embodiment of the present application;
fig. 6 is a schematic flow chart of aligning and placing panels to be spliced in the embodiment of the application;
fig. 7 is a schematic block diagram of a robot feeding, splicing and aligning device in an embodiment of the present application;
fig. 8 is a schematic structural diagram of a computer device in an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of illustrating the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, "client," "terminal device," and "terminal device" are understood by those skilled in the art to include both devices that include only wireless signal receivers without transmitting capabilities and devices that include receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such a device may include: a cellular or other communication device such as a personal computer, tablet, or the like, having a single-line display or a multi-line display or a cellular or other communication device without a multi-line display; a PCS (Personal Communications Service, personal communication system) that may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant ) that can include a radio frequency receiver, pager, internet/intranet access, web browser, notepad, calendar and/or GPS (Global Positioning System ) receiver; a conventional laptop and/or palmtop computer or other appliance that has and/or includes a radio frequency receiver. As used herein, "client," "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or adapted and/or configured to operate locally and/or in a distributed fashion, at any other location(s) on earth and/or in space. As used herein, a "client," "terminal device," or "terminal device" may also be a communication terminal, an internet terminal, or a music/video playing terminal, for example, a PDA, a MID (Mobile Internet Device ), and/or a mobile phone with music/video playing function, or may also be a device such as a smart tv, a set top box, or the like.
The hardware referred to by the names "server", "client", "service node" and the like in the present application is essentially an electronic device having the performance of a personal computer, and is a hardware device having necessary components disclosed by von neumann's principle, such as a central processing unit (including an arithmetic unit and a controller), a memory, an input device, and an output device, and a computer program is stored in the memory, and the central processing unit calls the program stored in the external memory to run in the memory, executes instructions in the program, and interacts with the input/output device, thereby completing a specific function.
It should be noted that the concept of "server" as referred to in this application is equally applicable to the case of a server farm. The servers should be logically partitioned, physically separate from each other but interface-callable, or integrated into a physical computer or group of computers, according to network deployment principles understood by those skilled in the art. Those skilled in the art will appreciate this variation and should not be construed as limiting the implementation of the network deployment approach of the present application.
One or several technical features of the present application, unless specified in the plain text, may be deployed either on a server to implement access by remotely invoking an online service interface provided by the acquisition server by a client, or directly deployed and run on the client to implement access.
The neural network model cited or possibly cited in the application can be deployed on a remote server and used for implementing remote call on a client, or can be deployed on a client with sufficient equipment capability for direct call unless specified in a clear text, and in some embodiments, when the neural network model runs on the client, the corresponding intelligence can be obtained through migration learning so as to reduce the requirement on the running resources of the hardware of the client and avoid excessively occupying the running resources of the hardware of the client.
The various data referred to in the present application, unless specified in the plain text, may be stored either remotely in a server or in a local terminal device, as long as it is suitable for being invoked by the technical solution of the present application.
Those skilled in the art will appreciate that although the various methods of the present application are described based on the same concepts, so that they share common content, the methods may be performed independently unless otherwise indicated. Similarly, each embodiment disclosed herein is based on the same inventive concept; therefore expressions used for the same description, and expressions that differ only for convenience or through appropriate adaptation, should be understood in the same way.
The various embodiments to be disclosed herein, unless the plain text indicates a mutually exclusive relationship with each other, the technical features related to the various embodiments may be cross-combined to flexibly construct a new embodiment, so long as such combination does not depart from the inventive spirit of the present application and can satisfy the needs in the art or solve the deficiencies in the prior art. This variant will be known to the person skilled in the art.
In many industrial production processes, workpieces must be aligned and spliced, and some panels must be joined together; in solar panel production, for example, one of the most important operations is the alignment and placement between panels. If the alignment is not accurate enough or the error is large, defective products increase and raw material is wasted. At present, many production lines still rely on manual alignment and placement, which gives a high yield but low efficiency. Robots currently use a traditional contact-type alignment and placement approach, which mainly leads to inaccurate placement: with contact-type alignment and splicing, the magnitude of the alignment error cannot be known, so once the alignment error is large the splicing error is also large, easily exceeding the product requirements and producing defective goods.
With reference to the above exemplary scenario and to fig. 1, in one embodiment the robot feeding, splicing and aligning method of the present application includes:
s10, acquiring spliced panel images, performing multi-scale encoding and decoding on the spliced panel images based on a pre-trained image segmentation model, determining a plurality of pieces of image feature information of contour features of each spliced panel captured in the spliced panel images, and extracting segmentation graphs corresponding to each spliced panel from the spliced panel images according to the plurality of pieces of image feature information;
The panel comprises one or more of a solar panel and a new-energy vehicle battery panel. An industrial camera or other imaging device on the feeding robot can acquire the spliced panel image and transmit it to the corresponding terminal device for image segmentation processing; a spliced panel denotes a panel that has already been placed or spliced. Multi-scale encoding and decoding is performed on the spliced panel image based on a pre-trained image segmentation model, a plurality of pieces of image feature information capturing the contour features of each spliced panel are determined, and a segmentation map corresponding to each spliced panel is extracted from the spliced panel image according to the plurality of pieces of image feature information.
In some embodiments, the present application employs, by way of example, an image segmentation model based on the U2-Net infrastructure; the model is trained to convergence and then used to extract the segmentation map corresponding to each spliced panel from the spliced panel image.
Further, referring to fig. 2, the step of performing multi-scale encoding and decoding on the spliced panel image based on a pre-trained image segmentation model, determining a plurality of image feature information capturing contour features of each spliced panel, and extracting a segmentation map corresponding to each spliced panel from the spliced panel image according to the plurality of image feature information includes:
s101, invoking a pre-trained image segmentation model, and fully connecting all image feature information based on a full-connection layer in the image segmentation model to fuse and generate mask image data, wherein the mask image data inherits contour features of all spliced panels in the image feature information;
and step S103, extracting the original specification of the spliced panel images based on the mask image data to obtain a segmentation map corresponding to each spliced panel, wherein the segmentation map is formed by images in the contours corresponding to the contour features of each spliced panel.
Specifically, the U2-Net image segmentation model is constructed, based on the residual convolution principle, with an encoding path and a decoding path, each provided with the same number of stages; the number of encoder and decoder stages can be flexibly determined by a person skilled in the art according to the actual situation.
For example, six levels of encoders and decoders are shown in FIG. 3, in which the bottom encoder and decoder are drawn as a single block, mainly because this block directly transforms the intermediate feature information obtained by the encoding of the previous level through a 1×1 convolution kernel to form image feature information, and is therefore generally shown as a single block.
The six encoders (En_1 to En_5 and En_De) of the encoding path and the side branch path in FIG. 3 operate on an original image of the spliced panel image at the corresponding specification; that is, the spliced panel image is cropped as required into a spliced panel image of the specification expected by U2-Net as its input, and this original specification image is encoded stage by stage. The first-stage encoder at the top extracts the intermediate feature information of the first scale from the original specification image and passes it to the next-stage encoder, which extracts the intermediate feature information of the second scale; in the same way the scale of the original specification image is progressively reduced to extract the corresponding intermediate feature information, so that after the encoders have encoded the original specification image stage by stage, six pieces of intermediate feature information corresponding to the original specification image are obtained.
It can be understood that the intermediate feature information at each scale is a representation obtained after deep semantic understanding of the specification image at the corresponding scale, i.e., information extracted for the contour features of each spliced panel in the spliced panel image. This capability of the U2-Net image segmentation model is known to those skilled in the art: as long as the model is trained to convergence with a sufficient number of training samples, its encoding path acquires the deep semantic understanding capability needed to capture the contour features of each spliced panel from the spliced panel image.
Among the six decoders (De_1 to De_5 and En_De) of the right branch path in FIG. 3, the first decoder En_De applies a 1×1 convolution transformation to the intermediate feature information output by encoder En_5 to obtain the image feature information of the corresponding scale; every other, higher-level decoder takes the image feature information produced by the next lower stage as its basis, refers to the intermediate feature information produced by the encoder of the same stage, and restores the image feature information at the scale of its own stage, and so on up to the topmost decoder, which yields image feature information at the same scale as the original specification image.
It will also be appreciated that the intermediate feature information at each scale provides context for the decoding at its level: it complements the image feature information from the level below to yield image feature information at a larger scale (Mask 1 to Mask 6), which likewise contains representations of the contour features of each spliced panel captured from the spliced panel image. Unlike the intermediate feature information, the decoded and restored image feature information represents the contour features in mask form and is essentially mask image data. After progressive decoding along the decoding path, mask image data at six different scales, corresponding to the number of stages, is obtained.
As can be understood from the description of the structure and principle of the image segmentation model above, the spliced panel image of the present application is encoded and decoded stage by stage by the image segmentation model to obtain a plurality of pieces of image feature information; according to the U2-Net image segmentation model principle, these pieces of image feature information can be further integrated to obtain mask image data (Mask 7).
In the U2-Net image segmentation model, a full-connection layer is used to fully connect the plurality of pieces of image feature information obtained from the decoding path and process them into single mask image data (Mask 7), which is essentially a binary grayscale image. In this representation, when no spliced panel is present in the spliced panel image, the grayscale image is pure white; when spliced panels are present, the grayscale image is an image whose foreground is defined by the outline of each spliced panel against a solid background. It is easy to understand that, after the full connection, a mask image of the kind commonly used for "matting" is obtained, which can be used to perform image segmentation.
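For illustration only, the fusion of the six decoded side outputs into the single mask (Mask 7) can be sketched in code. The snippet below is a simplified structural sketch, not the patented model and not the full U2-Net (whose encoders and decoders are built from residual U-blocks); the channel counts, tensor sizes and the use of a 1×1 convolution over the concatenated side outputs (as in the public U2-Net reference implementation) are assumptions of this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SideOutputFusion(nn.Module):
    """Fuse six multi-scale side-output masks (Mask1..Mask6) into one mask (Mask7).

    Simplified sketch: each decoder output is reduced to one channel, upsampled to
    the input resolution, concatenated, and fused by a 1x1 convolution followed by
    a sigmoid, mirroring the fusion stage described for the segmentation model.
    """

    def __init__(self, side_channels):
        super().__init__()
        # One 3x3 conv per decoder stage producing a single-channel side mask.
        self.side_heads = nn.ModuleList(
            [nn.Conv2d(c, 1, kernel_size=3, padding=1) for c in side_channels]
        )
        # 1x1 convolution that fuses the concatenated side masks into Mask7.
        self.fuse = nn.Conv2d(len(side_channels), 1, kernel_size=1)

    def forward(self, decoder_features, out_size):
        side_masks = []
        for head, feat in zip(self.side_heads, decoder_features):
            mask = head(feat)                              # (B, 1, h, w) at this stage's scale
            mask = F.interpolate(mask, size=out_size,
                                 mode="bilinear", align_corners=False)
            side_masks.append(mask)
        fused = self.fuse(torch.cat(side_masks, dim=1))    # Mask7 logits
        return torch.sigmoid(fused), [torch.sigmoid(m) for m in side_masks]

# Minimal usage example with dummy decoder outputs at six scales (shapes assumed).
if __name__ == "__main__":
    chans = [64, 64, 128, 256, 512, 512]
    feats = [torch.randn(1, c, 288 // 2**i, 288 // 2**i) for i, c in enumerate(chans)]
    fusion = SideOutputFusion(chans)
    mask7, side_masks = fusion(feats, out_size=(288, 288))
    print(mask7.shape)  # torch.Size([1, 1, 288, 288])
```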
Accordingly, the U2-Net image segmentation model uses the mask image data (Mask 7), according to the contour information it represents, to segment the images delimited by the corresponding contours from the original specification image, forming the segmentation map corresponding to each spliced panel. Relative to the original spliced panel image, each panel's segmentation map has the background image information removed and retains only that spliced panel's image information, which is the effect colloquially called "matting".
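A minimal sketch of this "matting" step is given below, assuming the fused mask has already been produced as an 8-bit single-channel image at the original specification; separating the foreground into per-panel segmentation maps by connected components, and the small-area threshold, are assumptions of this example rather than limitations of the method.

```python
import cv2
import numpy as np

def extract_panel_segments(original_bgr: np.ndarray, mask7: np.ndarray):
    """Cut a segmentation map for each spliced panel out of the original image.

    original_bgr: spliced panel image at the original specification (H, W, 3).
    mask7: fused mask image data, single-channel uint8, same H x W (assumed).
    Returns a list of (segment_image, panel_mask) pairs, one per spliced panel.
    """
    # Binarize the mask. The panel foreground is assumed to be the bright region;
    # invert the threshold type if the model produces the opposite polarity.
    _, binary = cv2.threshold(mask7, 127, 255, cv2.THRESH_BINARY)

    # Separate the foreground into one connected component per spliced panel.
    num_labels, labels = cv2.connectedComponents(binary)

    segments = []
    for label in range(1, num_labels):                     # label 0 is the background
        panel_mask = np.uint8(labels == label) * 255
        if cv2.countNonZero(panel_mask) < 500:             # ignore tiny speckles (assumed threshold)
            continue
        # Keep only the pixels inside this panel's contour; delete the background.
        segment = cv2.bitwise_and(original_bgr, original_bgr, mask=panel_mask)
        segments.append((segment, panel_mask))
    return segments
```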
Step S20, performing corner detection on the segmentation map corresponding to the last spliced panel among the spliced panels based on a preset corner detection algorithm, determining the corner coordinates of each corner of the last spliced panel, and calculating and determining the attitude rotation angle of the last spliced panel based on the corner coordinates of each corner;
After the segmentation maps corresponding to each spliced panel have been extracted from the spliced panel image according to the image feature information, the segmentation map of the panel most recently placed or spliced by the feeding robot, i.e., the segmentation map corresponding to the last spliced panel, is also determined. Corner detection is performed on that segmentation map based on a preset corner detection algorithm, the corner coordinates of each corner of the last spliced panel are determined, and the attitude rotation angle of the last spliced panel is determined based on those corner coordinates.
The corner detection algorithm is the Harris corner detection algorithm. Corner detection is performed on the segmentation map corresponding to the last spliced panel based on the Harris corner detection algorithm, and the corner coordinates of each corner of the last spliced panel are determined; these corner coordinates are the intersection coordinates of the four corners. The four coordinate values of the last spliced panel are then compared in size to determine which corner each of the four coordinates corresponds to, which specifically includes the following steps (an illustrative code sketch follows the list):
1) first, the X values of the four corner coordinates are compared; the two coordinates with the larger X values must be the upper right and lower right corners, and the two with the smaller X values must be the upper left and lower left corners;
2) on the basis of the screening in step 1), the Y values of the two coordinates with larger X values are compared: the coordinate with the larger Y value is confirmed as the lower right corner and the one with the smaller Y value as the upper right corner; then, among the two coordinates with smaller X values, the one with the smaller Y value is confirmed as the upper left corner and the one with the larger Y value as the lower left corner. In this way the corner coordinates of each corner of the last spliced panel are determined.
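The Harris corner detection and the X/Y-comparison ordering of steps 1) and 2) can be illustrated with the following sketch; the use of OpenCV's goodFeaturesToTrack with the Harris detector enabled and the specific detector parameters are assumptions of this example.

```python
import cv2
import numpy as np

def detect_ordered_corners(panel_mask: np.ndarray):
    """Harris-based corner detection on one panel's segmentation mask, then ordering
    of the four corners as (top-left, top-right, bottom-right, bottom-left) by
    comparing X and Y values as described in steps 1) and 2).

    Image coordinates are assumed: X grows to the right, Y grows downward.
    """
    corners = cv2.goodFeaturesToTrack(
        panel_mask, maxCorners=4, qualityLevel=0.01, minDistance=50,
        useHarrisDetector=True, k=0.04)                    # detector parameters are assumed
    pts = corners.reshape(-1, 2)                           # four (x, y) points

    # Step 1): the two points with larger X are the right-hand corners,
    # the two with smaller X are the left-hand corners.
    by_x = pts[np.argsort(pts[:, 0])]
    left, right = by_x[:2], by_x[2:]

    # Step 2): within each pair, the smaller Y is the upper corner,
    # the larger Y is the lower corner.
    top_left, bottom_left = left[np.argsort(left[:, 1])]
    top_right, bottom_right = right[np.argsort(right[:, 1])]
    return top_left, top_right, bottom_right, bottom_left
```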
Referring to fig. 4, the step of performing corner detection on a segmentation graph corresponding to a spliced panel in each spliced panel based on a preset corner detection algorithm, determining corner coordinates of each corner of the spliced panel, and determining an attitude rotation angle of the spliced panel based on the corner coordinates of each corner includes:
Step S201, determining the coordinates of an upper right corner point and the coordinates of a lower right corner point in the coordinates of the corners of each corner of the last spliced panel, and calculating and determining the slope between the two points of the upper right corner point and the lower right corner point according to the coordinates of the upper right corner point and the coordinates of the lower right corner point;
and step 203, calculating and determining the attitude rotation angle of the last spliced panel by adopting an arctangent function according to the slope between the upper right corner point and the lower right corner point.
Specifically, let the coordinates of the upper right corner point of the last spliced panel be (X1, Y1) and the coordinates of the lower right corner point be (X2, Y2). The slope between the upper right and lower right corner points is calculated from these coordinates, and the attitude rotation angle a of the last spliced panel is then calculated from that slope using an arctangent function; the calculation is expressed as:
k1 = (Y2 - Y1) / (X2 - X1), a = arctan(k1) (1).
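A minimal numeric sketch of formula (1) is given below; using atan2 so that an exactly vertical right edge does not cause a division by zero is an implementation choice of this example, not something stated in the patent.

```python
import math

def attitude_rotation_angle(top_right, bottom_right):
    """Attitude rotation angle a of the last spliced panel from its right edge.

    top_right = (X1, Y1), bottom_right = (X2, Y2). Formula (1): k1 = (Y2 - Y1) / (X2 - X1),
    a = arctan(k1); atan2 is used here so a vertical edge (X2 == X1) does not divide by zero.
    """
    x1, y1 = top_right
    x2, y2 = bottom_right
    a = math.atan2(y2 - y1, x2 - x1)   # radians, equivalent to arctan(k1) up to quadrant
    return math.degrees(a)

# Example: a right edge leaning slightly off the vertical (values assumed).
print(attitude_rotation_angle((820.0, 100.0), (818.5, 600.0)))  # ~90.17 degrees
```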
step S30, determining a preset gap distance between the panel to be spliced and the last spliced panel, and determining a coordinate value corresponding to a panel center point of the panel to be spliced after splicing alignment according to the corner coordinates of each corner and the preset gap distance;
After the current attitude rotation angle of the last spliced panel has been determined from the corner coordinates of each corner, a preset gap distance D between the panel to be spliced and the last spliced panel is determined. The preset gap distance D may be, for example, 5 cm or 10 cm; it can be determined from parameters such as the product characteristics and the product structure of the panel, and a person skilled in the art can set it according to the actual situation as needed, which is not limited here. The coordinate value corresponding to the panel center point of the panel to be spliced after splicing alignment is then determined from the corner coordinates of each corner and the preset gap distance.
further, referring to fig. 5, determining a preset gap distance between a panel to be spliced and the last spliced panel, and determining a coordinate value corresponding to a panel center point of the panel to be spliced after splicing alignment according to the corner coordinates of each corner and the preset gap distance, where the step includes:
step S301, determining the coordinates of an upper right corner point and the coordinates of a lower right corner point in the corner point coordinates of each corner of the last spliced panel, and calculating and determining the midpoint coordinates of a straight line between the upper right corner point and the lower right corner point according to the coordinates of the upper right corner point and the coordinates of the lower right corner point;
Step 303, determining the panel width of the panel to be spliced, and calculating and determining the coordinate value corresponding to the panel center point of the panel to be spliced after splicing alignment according to the panel width of the panel to be spliced, the midpoint coordinate of the straight line between the upper right corner point and the lower right corner point and the preset gap distance between the panel to be spliced and the last spliced panel.
Specifically, the coordinates of the upper right corner point of the last spliced panel are (X1, Y1) and those of the lower right corner point are (X2, Y2); the midpoint coordinates q(X5, Y5) of the line segment between the upper right and lower right corner points are calculated from these coordinates as follows:
X5 = (X1 + X2) / 2 (2),
Y5 = (Y1 + Y2) / 2 (3),
and the midpoint coordinates q(X5, Y5) of the line segment between the upper right and lower right corner points of the last spliced panel are determined according to calculation formulas (2) and (3);
after the midpoint coordinates q(X5, Y5) of the line segment between the upper right and lower right corner points of the last spliced panel have been determined, the panel width L of the last spliced panel is determined, and the coordinate value O(x, y) corresponding to the panel center point of the panel to be spliced after splicing alignment is calculated from the panel width L, the midpoint coordinates q(X5, Y5) and the preset gap distance D; with the splicing direction taken as the positive X direction, the calculation can be written as:
x = X5 + D + L / 2 (4),
y = Y5 (5),
and the coordinate value O(x, y) corresponding to the panel center point of the panel to be spliced after splicing alignment is determined according to calculation formulas (4) and (5).
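The midpoint and center-point calculations of formulas (2) to (5) are sketched below. The sketch assumes image coordinates with the splicing direction along the positive X axis and a panel width L measured along that direction; the patent text refers both to the width of the panel to be spliced and to that of the last spliced panel, which are taken to be equal here.

```python
def splice_center_point(top_right, bottom_right, panel_width, gap_distance):
    """Coordinate value O(x, y) for the center point of the panel to be spliced.

    top_right = (X1, Y1) and bottom_right = (X2, Y2) of the last spliced panel,
    panel_width = L, gap_distance = D (all in the same units, e.g. pixels after calibration).
    """
    x1, y1 = top_right
    x2, y2 = bottom_right

    # Formulas (2) and (3): midpoint q(X5, Y5) of the segment between the two right corners.
    x5 = (x1 + x2) / 2.0
    y5 = (y1 + y2) / 2.0

    # Formulas (4) and (5) as reconstructed here: offset the midpoint by the preset
    # gap D plus half the panel width L along the splicing direction (assumed +X).
    x = x5 + gap_distance + panel_width / 2.0
    y = y5
    return x, y

# Example with the corner values used above and an assumed width and gap.
print(splice_center_point((820.0, 100.0), (818.5, 600.0), panel_width=400.0, gap_distance=20.0))
```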
Step S40, the feeding robot aligns and places the panel to be spliced according to the attitude rotation angle and the coordinate value corresponding to the panel center point of the panel to be spliced after splicing alignment, so as to complete horizontal splicing alignment of the panel to be spliced by the robot.
After the attitude rotation angle a of the last spliced panel and the coordinate value O(x, y) corresponding to the panel center point of the panel to be spliced after splicing alignment have been determined, these data are transmitted to the mechanical arm of the feeding robot, and the mechanical arm performs the splicing adjustment and aligned placement of the panel to be spliced according to the attitude rotation angle a and the coordinate value O(x, y), so as to complete the robot's feeding, alignment and splicing.
Referring to fig. 6, the step of aligning and placing the panel to be spliced by the feeding robot according to the posture rotation angle and the coordinate value corresponding to the panel center point of the panel to be spliced after the splicing and aligning includes:
Step S401, obtaining the coordinates of the upper left corner point and the coordinates of the upper right corner point among the corner coordinates of each corner of the last spliced panel, and calculating and determining the slope between the upper left corner point and the upper right corner point according to those coordinates;
step S403, determining the placement posture of the panel to be spliced based on the slope between the left upper corner point and the right upper corner point, the posture rotation angle of the last spliced panel and the coordinate value corresponding to the panel center point of the panel to be spliced after splicing alignment;
and step 405, the loading robot performs alignment and placement according to the placement posture of the panel to be spliced so as to finish horizontal splicing alignment of the robot on the panel to be spliced.
Specifically, the coordinates of the upper left corner point of the last spliced panel are (X4, Y4) and the coordinates of the upper right corner point are (X1, Y1); the slope between the upper left and upper right corner points of the last spliced panel is calculated from these coordinates, and the calculation is expressed as:
k2 = (Y1 - Y4) / (X1 - X4) (6),
After the slope between the upper left and upper right corner points of the last spliced panel has been calculated, the placement posture of the panel to be spliced is determined based on that slope, the attitude rotation angle a of the last spliced panel and the coordinate value O(x, y) corresponding to the panel center point of the panel to be spliced after splicing alignment, and the feeding robot performs the aligned placement according to this placement posture, so as to complete the robot's feeding, alignment and splicing.
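The quantities of step S40 can be tied together as in the sketch below. The patent does not specify how the placement posture is encoded or handed to the robot arm, so the returned pose tuple, the degree convention and the consistency check are assumptions of this example; in practice the pixel coordinates would also be mapped into the robot base frame through a hand-eye calibration, which is omitted here.

```python
import math

def placement_pose(top_left, top_right, attitude_angle_deg, center_point):
    """Placement posture for the panel to be spliced (step S40).

    top_left = (X4, Y4), top_right = (X1, Y1) of the last spliced panel.
    Formula (6): k2 = (Y1 - Y4) / (X1 - X4), the slope of the panel's top edge.
    Returns an assumed pose tuple (x, y, yaw_deg) for the feeding robot.
    """
    x4, y4 = top_left
    x1, y1 = top_right
    top_edge_angle = math.degrees(math.atan2(y1 - y4, x1 - x4))  # arctan of k2, vertical-safe

    # For a rectangular panel the right-edge angle (formula (1)) and the top-edge angle
    # should differ by roughly 90 degrees; the patent does not spell out how the two are
    # combined, so this sketch only uses the attitude angle as a cross-check.
    if abs(((attitude_angle_deg - top_edge_angle) % 180.0) - 90.0) > 5.0:
        raise ValueError("right-edge and top-edge angles are inconsistent")

    x, y = center_point
    # Assumed combination: lay the panel to be spliced parallel to the last panel's
    # top edge, at the center point computed from formulas (4) and (5).
    return x, y, top_edge_angle

# Example using the (assumed) values from the earlier sketches.
pose = placement_pose((420.0, 98.0), (820.0, 100.0),
                      attitude_angle_deg=90.17, center_point=(1039.25, 350.0))
print(pose)   # (1039.25, 350.0, ~0.286) -- yaw of the top edge in degrees
```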
Compared with the prior art, the application addresses the problems that, in the prior art, defective products increase, raw material is wasted, efficiency is low, and the contact-type alignment and placement approach of robots cannot determine the magnitude of the alignment error, so product requirements are easily exceeded and defective goods are produced; the application provides the following beneficial effects:
firstly, the robot feeding, splicing and aligning method can markedly improve the accuracy of panel splicing alignment on an industrial production line and greatly save manpower and material resources, avoiding the increase in defective products and the waste of raw material caused by inaccurate alignment and placement or large errors between panels on the line;
secondly, splicing alignment of panels on the industrial production line is performed based on the image segmentation model, so no manual alignment and placement is needed and the feeding robot can perform automatic, non-contact placement, which greatly improves panel production efficiency and avoids producing defective goods that exceed product requirements and degrade the user experience;
further, the robot feeding, splicing and aligning method can greatly improve the accuracy of solar panel production splicing alignment, markedly raise the product qualification rate of solar panels, and improve the user experience while increasing enterprise benefits.
Referring to fig. 7, a robot feeding, splicing and aligning device provided for one of the purposes of the present application includes an image segmentation module 1100, an attitude angle determination module 1200, a panel position determination module 1300 and a panel splicing alignment module 1400. The image segmentation module 1100 is configured to acquire a spliced panel image, perform multi-scale encoding and decoding on the spliced panel image based on a pre-trained image segmentation model, determine a plurality of pieces of image feature information capturing the contour features of each spliced panel, and extract the segmentation map corresponding to each spliced panel from the spliced panel image according to the plurality of pieces of image feature information. The attitude angle determination module 1200 is configured to perform corner detection on the segmentation map corresponding to the last spliced panel based on a preset corner detection algorithm, determine the corner coordinates of each corner of the last spliced panel, and determine the attitude rotation angle of the last spliced panel based on those corner coordinates. The panel position determination module 1300 is configured to determine a preset gap distance between the panel to be spliced and the last spliced panel, and to determine the coordinate value corresponding to the panel center point of the panel to be spliced after splicing alignment according to the corner coordinates of each corner and the preset gap distance. The panel splicing alignment module 1400 is configured so that the feeding robot aligns and places the panel to be spliced according to the attitude rotation angle and the coordinate value corresponding to the panel center point of the panel to be spliced after splicing alignment, so as to complete horizontal splicing alignment of the panel to be spliced by the robot.
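The four modules of the device map naturally onto a small pipeline. The sketch below only chains the helper functions from the earlier sketches; the function names, the assumption that the last extracted segment corresponds to the last spliced panel, and the send_to_robot callback standing in for the feeding robot's actual interface are all assumptions of this example.

```python
def splice_alignment_pipeline(spliced_image, mask7, panel_width, gap_distance, send_to_robot):
    """End-to-end sketch of the device in fig. 7: segmentation -> attitude angle ->
    panel position -> splicing alignment. Helper functions are the earlier sketches."""
    # Image segmentation module: per-panel segmentation maps from the fused mask.
    segments = extract_panel_segments(spliced_image, mask7)
    _, last_panel_mask = segments[-1]          # assumed: last element is the last spliced panel

    # Attitude angle determination module.
    tl, tr, br, bl = detect_ordered_corners(last_panel_mask)
    angle = attitude_rotation_angle(tr, br)

    # Panel position determination module.
    center = splice_center_point(tr, br, panel_width, gap_distance)

    # Panel splicing alignment module: hand the pose to the feeding robot.
    pose = placement_pose(tl, tr, angle, center)
    send_to_robot(pose)
    return pose
```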
On the basis of any embodiment of the present application, please refer to fig. 8, another embodiment of the present application further provides an electronic device, where the electronic device may be implemented by a computer device, and as shown in fig. 8, the internal structure of the computer device is schematically shown. The computer device includes a processor, a computer readable storage medium, a memory, and a network interface connected by a system bus. The computer readable storage medium of the computer equipment stores an operating system, a database and computer readable instructions, the database can store a control information sequence, and when the computer readable instructions are executed by a processor, the processor can realize a robot feeding splicing alignment method. The processor of the computer device is used to provide computing and control capabilities, supporting the operation of the entire computer device. The memory of the computer device may store computer readable instructions, which when executed by the processor, may cause the processor to execute the robot feeding splice alignment method of the present application. The network interface of the computer device is for communicating with a terminal connection. It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
The processor in this embodiment is configured to execute the specific functions of each module and its sub-modules in fig. 7, and the memory stores the program code and various data required for executing those modules or sub-modules. The network interface is used for data transmission with a user terminal or a server. The memory in this embodiment stores the program code and data required for executing all modules/sub-modules of the robot feeding, splicing and aligning device of the present application, and the server can invoke this program code and data to execute the functions of all sub-modules.
The application further provides a storage medium storing computer readable instructions, which when executed by one or more processors, cause the one or more processors to execute the steps of the robot feeding splice alignment method according to any embodiment of the application.
The application also provides a computer program product, which comprises a computer program/instruction, wherein the computer program/instruction realizes the steps of the robot feeding splicing alignment method according to any embodiment of the application when being executed by one or more processors.
Those skilled in the art will appreciate that implementing all or part of the above-described methods of the embodiments of the present application may be accomplished by way of a computer program stored on a computer readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. The storage medium may be a computer readable storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM) or a Random Access Memory (RAM).
The foregoing is only a partial embodiment of the present application, and it should be noted that, for a person skilled in the art, several improvements and modifications can be made without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.
In summary, the robot feeding, splicing and aligning method can greatly improve the accuracy of solar panel production, splicing and aligning, remarkably improve the product qualification rate of the solar panel, and remarkably improve the user experience while improving the enterprise benefit.

Claims (10)

1. A robot feeding, splicing and aligning method, characterized by comprising the following steps:
acquiring spliced panel images, performing multi-scale encoding and decoding on the spliced panel images based on a pre-trained image segmentation model, determining a plurality of pieces of image feature information of contour features of each spliced panel, and extracting segmentation graphs corresponding to each spliced panel from the spliced panel images according to the plurality of pieces of image feature information;
performing corner detection on a segmentation graph corresponding to a spliced panel in each spliced panel based on a preset corner detection algorithm, determining corner coordinates of each corner of the spliced panel, and determining an attitude rotation angle of the spliced panel based on the corner coordinates of each corner;
determining a preset gap distance between a panel to be spliced and the last spliced panel, and determining a coordinate value corresponding to a panel center point of the panel to be spliced after splicing alignment according to the corner coordinates of each corner and the preset gap distance;
and the feeding robot performs alignment and placement on the panel to be spliced according to the attitude rotation angle and the coordinate value corresponding to the panel center point of the panel to be spliced after splicing alignment, so that the robot completes horizontal splicing alignment of the panel to be spliced.
2. The method of claim 1, wherein the step of performing multi-scale encoding and decoding on the spliced panel image based on a pre-trained image segmentation model, determining a plurality of image feature information capturing contour features of each spliced panel, and extracting a segmentation map corresponding to each spliced panel from the spliced panel image according to the plurality of image feature information comprises:
invoking a pre-trained image segmentation model, and fully connecting all image feature information based on a full-connection layer in the image segmentation model to fuse and generate mask image data, wherein the mask image data inherits contour features of all spliced panels in the image feature information;
And extracting the original specification of the spliced panel images based on the mask image data to obtain a segmentation map corresponding to each spliced panel, wherein the segmentation map is formed by images in the outline corresponding to the outline characteristics of each spliced panel.
3. The robot feeding, splicing and aligning method according to claim 1, wherein the step of performing corner detection on the segmentation map corresponding to the last spliced panel among the spliced panels based on a preset corner detection algorithm, determining corner coordinates of each corner of the last spliced panel, and determining an attitude rotation angle of the last spliced panel based on the corner coordinates of each corner comprises:
determining the coordinates of an upper right corner point and the coordinates of a lower right corner point in the coordinates of the corners of each corner of the last spliced panel, and calculating and determining the slope between the two points of the upper right corner point and the lower right corner point according to the coordinates of the upper right corner point and the coordinates of the lower right corner point;
and calculating and determining the attitude rotation angle of the last spliced panel by adopting an arctangent function according to the slope between the upper right corner point and the lower right corner point.
4. The robot feeding, splicing and aligning method according to claim 1, wherein the step of determining a preset gap distance between the panel to be spliced and the last spliced panel, and determining the coordinate value corresponding to the panel center point of the panel to be spliced after splicing alignment according to the corner coordinates of each corner and the preset gap distance comprises:
determining the coordinates of the upper-right corner point and the coordinates of the lower-right corner point among the corner coordinates of the last spliced panel, and calculating the coordinates of the midpoint of the line segment between the upper-right corner point and the lower-right corner point according to these two coordinates;
and determining the panel width of the panel to be spliced, and calculating the coordinate value corresponding to the panel center point of the panel to be spliced after splicing alignment according to the panel width of the panel to be spliced, the midpoint coordinates of the line segment between the upper-right corner point and the lower-right corner point, and the preset gap distance between the panel to be spliced and the last spliced panel.
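For illustration only: a sketch of the centre-point calculation in claim 4, assuming panels are spliced side by side towards increasing image x and that the panel width and preset gap distance are expressed in the same units (e.g. pixels) as the corner coordinates.

```python
import numpy as np

def target_panel_centre(top_right: np.ndarray,
                        bottom_right: np.ndarray,
                        panel_width: float,
                        gap_distance: float) -> np.ndarray:
    """Centre point (x, y) of the panel to be spliced after splicing alignment.

    The centre sits half a panel width plus the preset gap beyond the midpoint
    of the last panel's right edge, measured along that edge's outward normal,
    so a slightly rotated last panel still yields the intended gap.
    """
    midpoint = (top_right + bottom_right) / 2.0            # midpoint of the right edge

    edge = bottom_right - top_right                        # right-edge direction vector
    normal = np.array([edge[1], -edge[0]], dtype=float)    # perpendicular to the edge
    normal /= np.linalg.norm(normal)
    if normal[0] < 0:                                      # point away from the spliced row
        normal = -normal

    return midpoint + normal * (gap_distance + panel_width / 2.0)
```

If the last panel is perfectly upright, this reduces to adding gap_distance plus half the panel width to the x coordinate of the right-edge midpoint.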
5. The robot feeding, splicing and aligning method according to claim 1, wherein the step of the feeding robot aligning and placing the panel to be spliced according to the attitude rotation angle and the coordinate value corresponding to the panel center point of the panel to be spliced after splicing alignment comprises:
acquiring the coordinates of the upper-left corner point and the coordinates of the upper-right corner point among the corner coordinates of the last spliced panel, and calculating the slope between the upper-left corner point and the upper-right corner point according to these two coordinates;
determining the placement pose of the panel to be spliced based on the slope between the upper-left corner point and the upper-right corner point, the attitude rotation angle of the last spliced panel, and the coordinate value corresponding to the panel center point of the panel to be spliced after splicing alignment;
and the feeding robot aligning and placing the panel to be spliced according to the placement pose, so as to complete the horizontal splicing alignment of the panel to be spliced.
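For illustration only: a sketch of assembling the placement pose described in claim 5. How the top-edge slope and the attitude rotation angle are combined is not specified in the claim, so the fusion below is only one plausible choice, and the robot interface shown in the trailing comment is a hypothetical API.

```python
import math
from dataclasses import dataclass

@dataclass
class PlacementPose:
    centre_x: float   # target centre of the panel to be spliced
    centre_y: float
    angle_deg: float  # in-plane rotation the gripper should apply before release

def placement_pose(top_left, top_right, last_panel_angle_deg, target_centre):
    """Fuse the top-edge slope, the last panel's attitude angle and the target
    centre point into a single pose for the feeding robot."""
    # Slope of the last panel's top edge; its arctangent gives the deviation of
    # the spliced row from the horizontal.
    dx = top_right[0] - top_left[0]
    dy = top_right[1] - top_left[1]
    top_edge_angle = math.degrees(math.atan2(dy, dx))

    # The top-edge angle is used as the placement rotation here; the attitude
    # rotation angle only serves as a plausibility check (one possible choice).
    if abs(abs(top_edge_angle) - abs(last_panel_angle_deg)) > 5.0:  # arbitrary tolerance
        print("warning: top-edge angle and attitude angle disagree noticeably")

    return PlacementPose(float(target_centre[0]), float(target_centre[1]), top_edge_angle)

# The pose would then be handed to the feeding robot, e.g.
# robot.place_panel(pose.centre_x, pose.centre_y, pose.angle_deg)  # hypothetical robot API
```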
6. The robot feeding, splicing and aligning method according to claim 1, wherein the basic network architecture of the image segmentation model is the U²-Net model.
7. The robot feeding, splicing and aligning method according to any one of claims 1 to 6, wherein the panel comprises one or more of a solar panel and a new energy vehicle battery panel.
8. A robot feeding, splicing and aligning device, characterized by comprising:
an image segmentation module, configured to acquire a spliced panel image, perform multi-scale encoding and decoding on the spliced panel image based on a pre-trained image segmentation model, determine a plurality of pieces of image feature information capturing the contour features of each spliced panel, and extract the segmentation map corresponding to each spliced panel from the spliced panel image according to the plurality of pieces of image feature information;
an attitude angle determining module, configured to perform corner detection on the segmentation map corresponding to the last spliced panel among the spliced panels based on a preset corner detection algorithm, determine the corner coordinates of each corner of the last spliced panel, and calculate the attitude rotation angle of the last spliced panel based on the corner coordinates of each corner;
a panel position determining module, configured to determine a preset gap distance between the panel to be spliced and the last spliced panel, and determine the coordinate value corresponding to the panel center point of the panel to be spliced after splicing alignment according to the corner coordinates of each corner and the preset gap distance;
and a panel splicing alignment module, configured so that the feeding robot aligns and places the panel to be spliced according to the attitude rotation angle and the coordinate value corresponding to the panel center point of the panel to be spliced after splicing alignment, so as to complete the horizontal splicing alignment of the panel to be spliced.
9. An electronic device, comprising a central processing unit and a memory, characterized in that the central processing unit is configured to invoke a computer program stored in the memory to perform the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that it stores, in the form of computer-readable instructions, a computer program implementing the method according to any one of claims 1 to 7, wherein the computer program, when invoked by a computer, performs the steps of the corresponding method.
CN202410157432.7A 2024-02-04 2024-02-04 Robot feeding, splicing and aligning method, device, equipment and medium Active CN117680977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410157432.7A CN117680977B (en) 2024-02-04 2024-02-04 Robot feeding, splicing and aligning method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN117680977A true CN117680977A (en) 2024-03-12
CN117680977B CN117680977B (en) 2024-04-16

Family

ID=90130573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410157432.7A Active CN117680977B (en) 2024-02-04 2024-02-04 Robot feeding, splicing and aligning method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN117680977B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4788440A (en) * 1981-05-11 1988-11-29 Diffracto Ltd. Electro-optical systems for control of robots, manipulator arms and coordinate measuring machines
JP2007185723A (en) * 2006-01-11 2007-07-26 Fujifilm Corp Apparatus and method for automatic alignment
CN112045397A (en) * 2020-09-07 2020-12-08 中铁工程装备集团有限公司 Steel arch splicing device and working method thereof
CN112053401A (en) * 2020-09-11 2020-12-08 北京半导体专用设备研究所(中国电子科技集团公司第四十五研究所) Chip splicing method, device, equipment and storage medium
WO2023024697A1 (en) * 2021-08-26 2023-03-02 北京旷视科技有限公司 Image stitching method and electronic device
CN113459109A (en) * 2021-09-03 2021-10-01 季华实验室 Mechanical arm path planning method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN117680977B (en) 2024-04-16

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant