CN115713547A - Motion trail generation method and device and processing equipment - Google Patents

Motion trail generation method and device and processing equipment

Info

Publication number
CN115713547A
CN115713547A (application CN202211427344.1A)
Authority
CN
China
Prior art keywords
dimensional
track
point cloud
target
motion
Prior art date
Legal status
Pending
Application number
CN202211427344.1A
Other languages
Chinese (zh)
Inventor
李文智
黄义亮
郎需林
姜宇
Current Assignee
Shenzhen Yuejiang Technology Co Ltd
Original Assignee
Shenzhen Yuejiang Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Yuejiang Technology Co Ltd
Priority to CN202211427344.1A
Publication of CN115713547A
Legal status: Pending

Classifications

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application is applicable to the technical field of motion path generation, and provides a motion trajectory generation method, a motion trajectory generation device and a processing device, wherein the method comprises: receiving a three-dimensional point cloud picture of a target body sent by image acquisition equipment; receiving a two-dimensional track map, wherein the two-dimensional track map is obtained by determining a first motion track in a two-dimensional image of the target body; determining the projection of the first motion track in the two-dimensional track map on the three-dimensional point cloud map according to the mapping relation between the two-dimensional image and the three-dimensional point cloud map to obtain a first target point cloud; determining track points of the mechanical arm and postures corresponding to the track points according to the first target point cloud; and generating a second motion trail of the mechanical arm according to the track points of the mechanical arm and the postures corresponding to the track points, wherein the second motion trail is a three-dimensional motion trail. By this method, both the speed and the accuracy of the obtained motion trail can be improved.

Description

Motion trail generation method and device and processing equipment
Technical Field
The present application relates to the field of motion path generation technologies, and in particular, to a method, an apparatus, a processing device, and a computer-readable storage medium for generating a motion trajectory.
Background
When an existing mechanical arm is used to polish or massage a certain area, the polishing or massaging must be carried out along a specified track path. The specified track path is usually determined by manually dragging the mechanical arm over the surface of the object to teach it; that is, the mechanical arm is dragged through the specified track motion in advance, and the motion path of the specified track is then generated.
However, since drag teaching requires a person to drag the mechanical arm through the motion, i.e., human-machine cooperation, the precision of the motion path generated by drag teaching varies from person to person, and if the required motion trajectory is complicated, the teaching process is time-consuming. In addition, when drag teaching is performed on an uneven curved surface, turns are hard to handle well during teaching, so the taught trajectory is not smooth; the teaching may then need to be repeated many times before it succeeds, taking twice the effort for half the result.
In conclusion, it is difficult to quickly and accurately determine the motion trajectory of the mechanical arm by a manual dragging teaching method.
Disclosure of Invention
The embodiment of the application provides a motion trail generation method, a motion trail generation device and processing equipment, and can solve the problems of low speed and low accuracy of motion trail generation of a mechanical arm.
In a first aspect, an embodiment of the present application provides a method for generating a motion trajectory, which is applied to a controller of a mechanical arm, and includes:
receiving a three-dimensional point cloud picture of a target body sent by image acquisition equipment;
receiving a two-dimensional track map, wherein the two-dimensional track map is obtained by determining a first motion track in a two-dimensional image of the target body, and the first motion track is a two-dimensional motion track;
determining the projection of the first motion track in the two-dimensional track map on the three-dimensional point cloud map according to the mapping relation between the two-dimensional image and the three-dimensional point cloud map to obtain a first target point cloud;
determining track points of the mechanical arm and postures corresponding to the track points according to the first target point cloud;
and generating a second motion trail of the mechanical arm according to the track points of the mechanical arm and the postures corresponding to the track points, wherein the second motion trail is a three-dimensional motion trail.
In a second aspect, an embodiment of the present application provides a motion trajectory generation apparatus, which is applied to a controller of a robot arm, and includes:
the three-dimensional point cloud picture acquisition module is used for receiving a three-dimensional point cloud picture of a target body sent by the image acquisition equipment;
the two-dimensional track map acquisition module is used for receiving a two-dimensional track map, and the two-dimensional track map is obtained by determining a first motion track in a two-dimensional image of the target body, wherein the first motion track is a two-dimensional motion track;
the first target point cloud determining module is used for determining the projection of the first motion track in the two-dimensional track map on the three-dimensional point cloud map according to the mapping relation between the two-dimensional image and the three-dimensional point cloud map to obtain a first target point cloud;
the gesture determining module is used for determining track points of the mechanical arm and gestures corresponding to the track points according to the first target point cloud;
and the second motion track determining module is used for generating a second motion track of the mechanical arm according to the track points of the mechanical arm and the postures corresponding to the track points, wherein the second motion track is a three-dimensional motion track.
In a third aspect, an embodiment of the present application provides a processing device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the method according to the first aspect is implemented.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on a processing device, causes the processing device to execute the method described in the first aspect.
Compared with the prior art, the embodiment of the application has the beneficial effects that:
in the embodiment of the application, the first motion track is a two-dimensional motion track determined in the two-dimensional image of the target body, so the first motion track can be determined quickly and accurately from the two-dimensional track map. Meanwhile, a specific mapping relation exists between the two-dimensional image of the target body and the three-dimensional point cloud image, so a second motion track (namely a three-dimensional motion track) corresponding to the first motion track can be determined according to the mapping relation. In addition, generating the second motion track does not require manually dragging the mechanical arm, which further improves the precision and accuracy of the generated second motion track.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the embodiments or the description of the prior art will be briefly described below.
Fig. 1 is a flowchart of a method for generating a motion trajectory according to an embodiment of the present application;
fig. 2 is an interaction flowchart of a 3D camera, an upper computer and a controller according to an embodiment of the present application;
fig. 3 is a schematic view of an application scenario of a 3D camera, an upper computer and a controller according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a motion trajectory generation apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a processing apparatus according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise.
The first embodiment is as follows:
when the mechanical arm is manually dragged to polish a certain area along a specified track path, the result depends on how well the person and the mechanical arm cooperate, and manually dragging the mechanical arm is slow, so neither the precision nor the speed of the obtained motion track is high.
In order to improve the precision and speed of the motion track obtained when a mechanical arm polishes a certain area, the embodiment of the application provides a motion track generation method. In this method, a two-dimensional image and a three-dimensional point cloud picture of the target body for which a motion track needs to be generated are obtained; after a two-dimensional motion track is determined on the two-dimensional image, it is projected into the point cloud of the three-dimensional point cloud picture to obtain a first target point cloud, and the track points and postures of the mechanical arm are then determined according to the first target point cloud to generate a three-dimensional motion track.
The following describes a method for generating a motion trajectory according to an embodiment of the present application with reference to the drawings.
Fig. 1 shows a flowchart of a method for generating a motion trajectory provided in an embodiment of the present application, which is applied to a controller of a robot arm, and is detailed as follows:
and S11, receiving the three-dimensional point cloud picture of the target body sent by the image acquisition equipment.
The target body is the object on which the mechanical arm's motion track needs to be generated, and it comprises one or more of a human body, a specified area and a specified type of object.
In this embodiment, an object may be photographed by an image capture device (e.g., a 3D camera), a two-dimensional image and a three-dimensional point cloud chart corresponding to the object are obtained (i.e., the three-dimensional point cloud chart is a three-dimensional image of the object, and the two-dimensional image is a two-dimensional image of the object), and the three-dimensional point cloud chart of the object is sent to a controller, or both the two-dimensional image and the three-dimensional point cloud chart of the object are sent to the controller. The three-dimensional point cloud picture of the target body comprises a point cloud of the target body.
And S12, receiving a two-dimensional track map, wherein the two-dimensional track map is obtained by determining a first motion track in a two-dimensional image of the target body, and the first motion track is a two-dimensional motion track.
In the embodiment of the application, the image acquisition device sends the two-dimensional image to an upper computer (such as a computer or a mobile phone). After the two-dimensional image is displayed through image editing software (such as a drawing board) installed on the upper computer, the user can directly draw the first motion track on the two-dimensional image in the image editing software; after the drawing is finished, the upper computer sends the two-dimensional image with the first motion track drawn on it (namely, the two-dimensional track map) to the controller. The first motion track is the two-dimensional motion track corresponding to how the user wants the mechanical arm to move on the target body. For example, if the mechanical arm is a massage-type mechanical arm, the target body is a human body, and the user wants the mechanical arm to move from the left shoulder to the right shoulder of the human body, the user draws a motion track from the left shoulder to the right shoulder in the two-dimensional image corresponding to the human body as the first motion track of the embodiment of the present application.
In some embodiments, the image acquisition device sends the two-dimensional image to an upper computer, a preset algorithm is stored in the upper computer, and the upper computer draws a first motion track on the two-dimensional image according to the algorithm after receiving the two-dimensional image, generates a two-dimensional track map, and then sends the two-dimensional track map to the controller. In some embodiments, a plurality of preset algorithms are stored in the upper computer, and the upper computer draws a first motion track on the two-dimensional image according to the algorithm selected by a user to generate a two-dimensional track map; in some embodiments, the upper computer selects an algorithm matched with the rule from a plurality of preset algorithms according to the rule set by the user, and then draws a first motion track on the two-dimensional image according to the algorithm to generate a two-dimensional track map.
In some embodiments, the image acquisition device sends the two-dimensional image to an upper computer in which a preset algorithm is stored; the upper computer sends the two-dimensional image and the preset algorithm to the controller, and after receiving them the controller draws a first motion track on the two-dimensional image according to the algorithm to generate a two-dimensional track map. In some embodiments, a plurality of preset algorithms are stored in the upper computer, and the upper computer sends the algorithm selected by the user together with the two-dimensional image to the controller; in some embodiments, the upper computer selects an algorithm matching a rule set by the user from the plurality of preset algorithms and then sends that algorithm and the two-dimensional image to the controller, so that the controller draws a first motion track on the two-dimensional image according to the algorithm to generate a two-dimensional track map. That is, what the controller receives as the two-dimensional track map is essentially a two-dimensional image plus a corresponding algorithm.
In some embodiments, a preset algorithm is stored in the controller, the image acquisition device sends the two-dimensional image to the controller, and the controller draws a first motion track on the two-dimensional image according to the preset algorithm to generate a two-dimensional track map.
And S13, determining the projection of the first motion track in the two-dimensional track map on the three-dimensional point cloud map according to the mapping relation between the two-dimensional image and the three-dimensional point cloud map to obtain a first target point cloud.
In the embodiment of the application, because a certain mapping relationship (or called projection relationship) exists between the two-dimensional image and the three-dimensional point cloud picture of the same target body, and the same mapping relationship also exists between the two-dimensional track picture and the three-dimensional point cloud picture, the point cloud corresponding to the first motion track can be searched in the three-dimensional point cloud picture according to the mapping relationship between the two-dimensional image and the three-dimensional point cloud picture.
In some embodiments, the image capturing device is a 3D camera (or called stereo camera), and the pixels of the two-dimensional image captured by the 3D camera are: w x h, where w is the width of the two-dimensional image and h is the height of the two-dimensional image. After the 3D camera collects the image, an alignment algorithm (the algorithm carried by the 3D camera) is used for aligning the collected two-dimensional image and the depth image, the pixel of the depth image after the alignment processing is w x h, and therefore the pixel point of the two-dimensional image and the pixel point of the depth image are in one-to-one correspondence. Then, the 3D camera calculates to obtain a three-dimensional point cloud picture according to internal and external parameters and the depth image of the 3D camera; the three-dimensional point cloud graph is a vector of w x h dimensions, each value of the vector is a three-dimensional coordinate and is used for representing the coordinate value of each point corresponding to the target body; each pixel point in the two-dimensional image corresponds to each vector value in the three-dimensional point cloud image one by one, and a mapping relation between the two-dimensional image and the three-dimensional point cloud image is formed. And the 3D camera sends the two-dimensional image, the three-dimensional point cloud picture and the mapping relation to the controller.
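As an illustration of this one-to-one correspondence, the following sketch back-projects an aligned depth image into a w × h point cloud with a pinhole model; the function name and the intrinsics fx, fy, cx, cy are illustrative assumptions, since the patent leaves the 3D camera's internal computation unspecified.

```python
# A minimal sketch of the pixel-to-point mapping described above.
# Assumed inputs: an aligned depth image of shape (h, w) and camera
# intrinsics fx, fy, cx, cy (names are illustrative, not from the patent).
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an aligned depth image into an (h, w, 3) point cloud,
    so that pixel (v, u) of the 2D image maps to cloud[v, u]."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grids, shape (h, w)
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx          # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)   # one 3D coordinate per pixel

# With this layout, the mapping relation is just index identity:
# the point corresponding to 2D pixel (v, u) is cloud[v, u].
```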
And S14, determining the track points of the mechanical arm and the corresponding postures of the track points according to the first target point cloud.
The first motion trail is the two-dimensional motion trail corresponding to how the user wants the mechanical arm to move on the target body, and the first target point cloud consists of the three-dimensional coordinate points determined from the first motion trail, so the three-dimensional track points of the mechanical arm and the posture corresponding to each track point can be determined from these three-dimensional coordinate points.
And S15, generating a second motion track of the mechanical arm according to the track points of the mechanical arm and the corresponding postures of the track points, wherein the second motion track is a three-dimensional motion track.
In the embodiment of the application, after the three-dimensional point cloud picture and the two-dimensional track picture of the target body are obtained, the point cloud corresponding to the first motion track in the two-dimensional track picture is determined according to the mapping relation between the two-dimensional image and the three-dimensional point cloud picture of the target body to obtain the first target point cloud, and the track points of the mechanical arm and the posture corresponding to each track point are then determined according to the first target point cloud, so as to determine the second motion track of the mechanical arm. Because the first motion track is determined in the two-dimensional image of the target body, it can be determined quickly and accurately from the two-dimensional track picture; because the two-dimensional image of the target body and the three-dimensional point cloud picture have a specific mapping relation, the second motion track corresponding to the first motion track can be determined according to that mapping relation; and because generating the second motion track does not require manually dragging the mechanical arm, the precision and accuracy of the generated second motion track are further improved.
In the application, if the image capturing device is a 3D camera and the image editing software is image editing software (such as a drawing board) of an upper computer, an interaction flowchart among the 3D camera, the upper computer, and the controller may be as shown in fig. 2.
In order to more clearly describe the method for generating the motion trail provided by the embodiment of the present application, the following description is made in conjunction with an application scenario.
As shown in fig. 3, an image capturing device photographs a human body (or a human body model) to obtain a three-dimensional point cloud image and a two-dimensional image of the human body (the human body is the target body of the present application). The image capturing device sends the three-dimensional point cloud image and the two-dimensional image to the controller, and the controller sends the two-dimensional image to an upper computer (in some embodiments, the image capturing device sends the two-dimensional image to the upper computer directly). The upper computer is provided with image editing software, and the user draws a first motion trajectory on the two-dimensional image displayed by the image editing software to obtain a two-dimensional image with the first motion trajectory (i.e., a two-dimensional trajectory image), which the upper computer sends to the controller. The controller determines, according to the mapping relationship between the two-dimensional image and the three-dimensional point cloud image, the projection of the first motion trajectory in the two-dimensional trajectory image on the three-dimensional point cloud image to obtain a first target point cloud, determines the track points of the mechanical arm and the posture corresponding to each track point according to the first target point cloud, and generates a second motion trajectory of the mechanical arm accordingly. When the mechanical arm subsequently needs to massage the human body, the controller controls the mechanical arm to massage the human body according to the second motion trajectory.
In some embodiments, in order to increase the obtaining speed of the first target point cloud, the step S13 includes:
a1, performing skeleton extraction processing on the first motion trajectory in the two-dimensional trajectory diagram to obtain a target skeleton.
Skeleton extraction, also called skeletonization, can be performed by existing methods, such as the grassfire (fire-front simulation) method or the maximal-disc method; the extracted skeleton highlights the main structure and shape information of the object and removes redundant information.
In the embodiment of the present application, the first motion track is the skeleton to be extracted (i.e., the target skeleton), and it is extracted from the two-dimensional track map through skeleton extraction processing. In other words, in the embodiment of the present application, only the first motion track is extracted as the target skeleton, so that subsequently only the pixel coordinates of the target skeleton need to be processed rather than the pixel coordinates of the whole two-dimensional track map, which effectively reduces the number of pixel coordinates to be processed.
And A2, sequencing the pixel coordinates of the target skeleton to obtain the sequenced pixel coordinates.
Since the target skeleton is a skeleton corresponding to the first motion trajectory, and the first motion trajectory has a sequence, for example, if the starting point of the first motion trajectory is a and the end point of the first motion trajectory is B, the sequence of the pixel coordinates corresponding to the first motion trajectory is that the closer to the point a, the earlier the sequence is, the closer to the point B, the later the sequence is.
And A3, projecting the sorted pixel coordinates to the point cloud of the three-dimensional point cloud picture according to the mapping relation between the two-dimensional image and the three-dimensional point cloud picture to obtain a first target point cloud.
And the target points in the first target point cloud also have a sequential relation with the sorted pixel coordinates.
In the embodiment of the application, the first motion track is extracted from the two-dimensional track map and then projected to the three-dimensional point cloud map, so that the number of pixel coordinates to be projected is reduced, and the speed of obtaining the first target point cloud is increased.
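A minimal sketch of steps A2 and A3 under the index-identity mapping sketched earlier; the greedy nearest-neighbour ordering is an assumption, since the patent only requires that pixels nearer the start point A come earlier in the sequence:

```python
# skeleton_pixels: (N, 2) array of (row, col) pixel coordinates of the
# target skeleton; cloud: the (h, w, 3) point cloud aligned with the image.
import numpy as np

def order_and_project(skeleton_pixels, start_pixel, cloud):
    # Step A2: greedy nearest-neighbour ordering from the start point A
    # (one possible ordering consistent with the description above).
    remaining = [tuple(p) for p in skeleton_pixels]
    first = min(remaining, key=lambda p: np.hypot(p[0] - start_pixel[0],
                                                  p[1] - start_pixel[1]))
    ordered = [first]
    remaining.remove(first)
    while remaining:
        last = ordered[-1]
        nxt = min(remaining, key=lambda p: np.hypot(p[0] - last[0],
                                                    p[1] - last[1]))
        ordered.append(nxt)
        remaining.remove(nxt)
    # Step A3: projecting is just indexing, thanks to the one-to-one mapping,
    # and the resulting target points keep the same ordering as the pixels.
    return np.array([cloud[r, c] for r, c in ordered])
```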
In some embodiments, in order to improve the accuracy of the obtained target trajectory, the step A1 includes:
and A11, converting the two-dimensional locus diagram into a gray scale map and carrying out binarization processing.
And A12, performing skeleton extraction processing on the first motion trail in the two-dimensional locus diagram after binarization processing to obtain the target skeleton.
In the embodiment of the application, when the two-dimensional track map is a Red Green Blue (RGB) color picture, the two-dimensional track map is converted into a gray map and then binarized to increase the difference between the first motion track and other objects in the two-dimensional track map, so that the target skeleton can be accurately extracted from the binarized two-dimensional track map.
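A possible implementation of steps A11 and A12 using OpenCV and scikit-image; Otsu thresholding and the inverted binarization (assuming the drawn track is darker than the background) are illustrative choices, not requirements of the patent:

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

trajectory_map = cv2.imread("trajectory_map.png")        # the 2D track map
gray = cv2.cvtColor(trajectory_map, cv2.COLOR_BGR2GRAY)  # A11: to grayscale
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # A11: binarize
skeleton = skeletonize(binary > 0)          # A12: one-pixel-wide skeleton
skeleton_pixels = np.argwhere(skeleton)     # (row, col) pixel coordinates
```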
In some embodiments, in order to increase the speed of obtaining the trace points, the step S14 includes:
b1, performing down-sampling processing on the first target point cloud to obtain a second target point cloud.
Specifically, after the first target point cloud is down-sampled, the number of the target points in the second target point cloud is less than the number of the target points in the first target point cloud. In some embodiments, to facilitate extraction, corresponding target points are extracted from the first target point cloud at equal intervals such that distances between adjacent target points in the second target point cloud are equal.
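Under the equal-interval scheme, the down-sampling of step B1 reduces to strided indexing, which also preserves the ordering of the target points; the step size below is an illustrative value:

```python
# first_target_cloud: (N, 3) ordered target points from step A3.
step = 10                                       # illustrative interval
second_target_cloud = first_target_cloud[::step]  # fewer, evenly spaced points
```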
And B2, determining the track points of the mechanical arm and the corresponding postures of the track points according to the second target point cloud.
In the embodiment of the application, the second target point cloud is obtained after the first target point cloud is subjected to downsampling, so that the number of the target points in the obtained second target point cloud is less than that of the target points in the first target point cloud, and the speed of the obtained track points and postures can be greatly improved when the track points and postures of the mechanical arm are determined according to the second target point cloud.
In some embodiments, the step B2 includes:
and B21, determining a matrix of the grabbing points according to the conversion relation between the camera coordinate system and the mechanical arm coordinate system and the second target point cloud, wherein the matrix of the grabbing points comprises the coordinates of the target points in the second target point cloud and information used for determining the posture of the mechanical arm at the track point.
The coordinates of the trajectory points of the robot arm may be determined according to the coordinates of the target points in the second target point cloud, for example, the coordinates of each target point are used as the coordinates of each trajectory point of the robot arm.
Because the second target point cloud is obtained by shooting with a camera, the coordinate system of the second target point cloud is the camera coordinate system, and the coordinates of the track point of the mechanical arm belong to the mechanical arm coordinate system, so that the second target point cloud needs to be converted into the mechanical arm coordinate system from the camera coordinate system before the track point of the mechanical arm is determined.
In the embodiment of the present application, the transformation relationship between the camera coordinate system and the robot arm coordinate system may be represented by a matrix.
And B22, determining the coordinates of the track points of the mechanical arm and the posture of the mechanical arm at the track points according to the matrix of the grabbing points.
Specifically, because the second target point cloud is determined according to the first motion trajectory, the trajectory points of the robotic arm include the target points in the second target point cloud, and correspondingly, the coordinates of the trajectory points of the robotic arm include the coordinates of the target points in the second target point cloud.
In the embodiment of the application, the coordinate conversion is carried out on the second target point cloud according to the conversion relation between the camera coordinate system and the mechanical arm coordinate system, so that the obtained coordinate of the track point is more adaptive to the mechanical arm, and the accuracy of the subsequently obtained second motion track is improved.
In some embodiments, the step B21 includes:
executing the following steps for any two adjacent target points in the second target point cloud, wherein the ordering relationship of each target point in the second target point cloud is determined according to the ordering relationship of the ordered pixel coordinates:
and B211, determining a first straight line (assumed as Lb) where a normal vector on the point cloud picture where the previously ordered target point is located.
For example, assume the second target point cloud includes three target points W1, W2 and W3; W1 and W2 are two adjacent target points, and W2 and W3 are also two adjacent target points. For the pair W1 and W2, W1 is the earlier-ordered target point and W2 the later-ordered one; for the pair W2 and W3, W2 is the earlier-ordered target point and W3 the later-ordered one.
In the embodiment of the application, Lb is perpendicular to the point cloud surface at the earlier-ordered target point; it is the straight line along the normal vector of the point cloud surface at that point.
And B212, determining a second straight line (call it La) which passes through the later-ordered target point and is perpendicular to Lb.
Wherein La passes through the next target point in the two adjacent target points, and La and Lb are perpendicular to each other.
And B213, determining an intermediate point according to the Lb and the La.
Wherein the intermediate point is typically a point between two adjacent target points. Assuming that W1 is the top-ranked target point, W2 is the bottom-ranked target point, and the middle point is denoted by P, then P is typically the point between W1 and W2.
In some embodiments, the intermediate point may be the intersection of Lb and La.
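In that case the intermediate point has a simple closed form (not spelled out in the patent): writing n for the unit vector along Lb, the intersection of Lb and La is the projection of the later-ordered target point onto Lb, i.e. P = W1 + ((W2 − W1) · n) n.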
And B214, determining a direction vector moving from the intermediate point to the later-ordered target point, wherein the direction vector is perpendicular to Lb.
In the embodiment of the application, the direction vector is taken from the intermediate point to the later-ordered target point because, when the mechanical arm operates, it moves from the earlier-ordered target point to the later-ordered one.
And B215, determining a track normal vector according to the normal vector and the direction vector, wherein the track normal vector is perpendicular to a plane formed by the Lb and the La.
The normal vector, the direction vector and the track normal vector conform to a cross-product right hand rule, and the direction of the track normal vector is the direction pointed by the thumb of the right hand.
And B216, determining a conversion matrix according to the direction vector, the normal vector, the track normal vector and the coordinates of the target points ranked in the front.
In some embodiments, assume the direction vector is denoted dir_v, the normal vector nor_v and the track normal vector res_v, and the components of each vector on the X axis, Y axis and Z axis are labelled by the corresponding index: index "0" denotes the X-axis component (for example, dir_v[0] is the X-axis component of the direction vector), index "1" denotes the Y-axis component (for example, dir_v[1] is the Y-axis component of the direction vector), and index "2" denotes the Z-axis component (for example, dir_v[2] is the Z-axis component of the direction vector). With x0, y0, z0 denoting the coordinates of the earlier-ordered target point W1, the conversion matrix can be represented in the form:
    [ dir_v[0]  nor_v[0]  res_v[0]  x0 ]
    [ dir_v[1]  nor_v[1]  res_v[1]  y0 ]
    [ dir_v[2]  nor_v[2]  res_v[2]  z0 ]
    [    0         0         0      1  ]

(the columns hold the direction vector, the normal vector, the track normal vector and the coordinates of W1, in the order they are introduced above; the last column matches the grabCordinate(0, 3), grabCordinate(1, 3), grabCordinate(2, 3) entries described below)
and B217, determining a matrix of the grabbing points corresponding to the two target points according to the conversion relation between the camera coordinate system and the mechanical arm coordinate system and the conversion matrix.
The matrix of the grabbing points can be obtained by matrix multiplication of the conversion relation between the camera coordinate system and the mechanical arm coordinate system with the conversion matrix. For example, assume the conversion relation between the camera coordinate system and the mechanical arm coordinate system is represented by matrix (usually set as a 4 × 4 homogeneous matrix for describing translation transformation and perspective projection transformation), the conversion matrix is represented by matrixPick (also usually a 4 × 4 homogeneous matrix), and the matrix of the grabbing points is represented by grabCordinate; then grabCordinate = matrix × matrixPick.
In the embodiment of the application, the conversion matrix is determined from the direction vector, the normal vector, the track normal vector and the coordinates of the earlier-ordered target point. The direction vector, the normal vector and the track normal vector are mutually perpendicular, and three mutually perpendicular vectors are exactly what is needed to determine the posture at a track point, so the conversion matrix contains both the information for determining the posture at the earlier-ordered target point and the coordinates of that point. Therefore, after the grabbing-point matrix grabCordinate is determined from the conversion relation between the camera coordinate system and the mechanical arm coordinate system together with the conversion matrix, grabCordinate is guaranteed to contain both the information for determining the posture of the mechanical arm at the earlier-ordered target point and the information for determining the coordinates of the track points of the mechanical arm (i.e., the coordinates of the target points in the second target point cloud). Euler angles can subsequently be calculated from the direction vector, normal vector and track normal vector contained in grabCordinate to obtain the posture of the mechanical arm at the track point corresponding to the earlier-ordered target point, and the coordinates of the corresponding track point are read from the last entry of each row of grabCordinate: grabCordinate(0, 3), grabCordinate(1, 3) and grabCordinate(2, 3) are x0, y0 and z0 of the track point, respectively.
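Putting steps B211 to B217 together for one adjacent pair, a sketch might look as follows; the cross-product order, the "xyz" Euler convention and the way the surface normal nor_v is supplied are assumptions consistent with, but not fixed by, the description above:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_matrix(W1, W2, nor_v, cam_to_arm):
    """W1, W2: adjacent target points, shape (3,), W1 ordered earlier;
    nor_v: unit surface normal at W1 (line Lb); cam_to_arm: 4x4 homogeneous
    camera-to-arm conversion (the 'matrix' of the text)."""
    # B213: intermediate point P = intersection of Lb and La,
    # i.e. the projection of W2 onto the normal line through W1.
    P = W1 + np.dot(W2 - W1, nor_v) * nor_v
    # B214: direction vector from P to the later point, perpendicular to Lb.
    dir_v = W2 - P
    dir_v = dir_v / np.linalg.norm(dir_v)
    # B215: track normal via the right-hand rule (operand order assumed).
    res_v = np.cross(nor_v, dir_v)
    # B216: conversion matrix with the three axes as columns, W1 as position.
    matrixPick = np.eye(4)
    matrixPick[:3, 0] = dir_v
    matrixPick[:3, 1] = nor_v
    matrixPick[:3, 2] = res_v
    matrixPick[:3, 3] = W1
    # B217: grabbing-point matrix in the mechanical arm coordinate system.
    grabCordinate = cam_to_arm @ matrixPick
    xyz = grabCordinate[:3, 3]                      # track point coordinates
    euler = Rotation.from_matrix(grabCordinate[:3, :3]).as_euler("xyz")  # posture
    return xyz, euler
```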
In some embodiments, the conversion relationship between the camera coordinate system and the robot arm coordinate system may be determined in the following manner, and at this time, the method for generating a motion trajectory provided by the embodiment of the present application further includes:
and C1, acquiring a two-dimensional image and a three-dimensional point cloud picture corresponding to a space with a motion track to be determined, wherein at least 4 designated graphs are placed in the space with the motion track to be determined in advance.
The designated graph comprises a two-dimensional code graph, such as an apriltag two-dimensional code picture.
In the embodiment of the application, at least 4 apriltag two-dimensional code pictures can be placed in a space with a motion track needing to be determined in advance, and then the space with the motion track needing to be determined is shot through a 3D camera to obtain a corresponding two-dimensional image and a corresponding three-dimensional point cloud picture.
In some embodiments, 9 apriltag two-dimensional code pictures can be placed in advance in a space where a motion trajectory needs to be determined, so as to improve the precision of subsequent calibration.
And C2, identifying the at least 4 designated graphs in the two-dimensional image corresponding to the space of the motion track to be determined to obtain two-dimensional coordinates of each designated graph.
Specifically, the coordinates of a certain point of a specified figure (such as a center point or a point at the upper left corner of the specified figure) may be taken as the two-dimensional coordinates of the specified figure.
In the embodiment of the present application, if there are 9 pre-placed designated patterns, positions of the 9 designated patterns in the two-dimensional image are respectively identified, and two-dimensional coordinates of the 9 designated patterns are obtained.
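A sketch of step C2 using the pupil-apriltags library; the tag family and the use of each tag's center as its two-dimensional coordinate follow the description above, while sorting by tag id is an assumed way to fix the recording order needed later in step C4:

```python
import cv2
from pupil_apriltags import Detector

image = cv2.imread("calibration_scene.png")          # 2D image of the space
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)       # detector needs grayscale
detector = Detector(families="tag36h11")             # assumed tag family
detections = detector.detect(gray)
# Record the tags in a fixed order (here by tag id) so the same order can be
# reproduced when moving the mechanical arm tip in step C4.
two_dim_coords = [d.center for d in sorted(detections, key=lambda d: d.tag_id)]
```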
And C3, aligning the two-dimensional image and the three-dimensional point cloud picture corresponding to the space of the motion track to be determined, and determining the three-dimensional point cloud coordinates corresponding to the two-dimensional coordinates of each designated graph.
And C4, acquiring the coordinates of the mechanical arm when the tail end of the mechanical arm points to the at least 4 designated graphs respectively.
Specifically, the position pointed to by the end of the mechanical arm should be the same point as the one selected as the two-dimensional coordinate of the designated figure; for example, if the coordinate of the center of the designated figure is selected as its two-dimensional coordinate, the coordinates of the mechanical arm are acquired when the end of the mechanical arm points to the center of the designated figure.
In the embodiment of the application, the tail ends of the mechanical arms can be moved to point to the designated graphs respectively according to the preset sequence, and the coordinates of the mechanical arms with the same number as the designated graphs are obtained. The preset order is the same as the order of determining the two-dimensional coordinates of each designated figure, and for example, if 4 designated figures are placed on the upper left, lower left, upper right, and lower right, respectively, and when the two-dimensional coordinates of the designated figures are determined, the two-dimensional coordinates of the designated figures are recorded in the order of upper left, lower left, upper right, and lower right, and when the end of the robot arm is moved, the movement is performed in the order of upper left, lower left, upper right, and lower right.
It should be noted that this step may be performed before step C1, and accordingly, the order of recording the two-dimensional coordinates of the designated graph needs to be ensured to be the same as the order of moving the end of the robot arm, which is not described herein again.
And C5, determining a conversion relation between a camera coordinate system and a mechanical arm coordinate system according to the three-dimensional point cloud coordinates and the mechanical arm coordinates.
Specifically, the coordinates of each three-dimensional point cloud and the coordinates of the corresponding mechanical arm are aligned to calculate the coordinate system conversion, so that the conversion relation between the camera coordinate system and the mechanical arm coordinate system is obtained, and the calibration is completed.
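The patent does not name the solver, but one standard way to compute the rigid camera-to-arm conversion from the paired coordinates is the SVD-based Kabsch method, sketched below:

```python
import numpy as np

def estimate_rigid_transform(cam_pts, arm_pts):
    """cam_pts, arm_pts: (N, 3) paired points (N >= 4 here).
    Returns a 4x4 homogeneous matrix T with arm_pts ~= T @ cam_pts."""
    cc, ca = cam_pts.mean(axis=0), arm_pts.mean(axis=0)
    H = (cam_pts - cc).T @ (arm_pts - ca)   # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ca - R @ cc
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```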
In the embodiment of the application, the designated graph is placed in the space where the motion trail needs to be determined, and the designated graph is easy to recognize, so that the mechanical arm can be calibrated quickly and accurately according to the recognition result.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Example two:
fig. 4 shows a block diagram of a motion trajectory generation device provided in the embodiment of the present application, and for convenience of description, only the parts related to the embodiment of the present application are shown.
Referring to fig. 4, the motion trajectory generation device 4 applied to a controller of a robot arm includes: a three-dimensional point cloud image acquisition module 41, a two-dimensional track image acquisition module 42, a first target point cloud determination module 43, an attitude determination module 44, and a second motion track determination module 45. Wherein:
and a three-dimensional point cloud image obtaining module 41, configured to receive a three-dimensional point cloud image of the target body sent by the image acquisition device.
The target body is a target body on which the mechanical arm needs to obtain a motion track, and the target body comprises one or more of a human body, a specified area and a specified kind of objects.
In the embodiment of the application, the target body can be shot through the 3D camera to obtain the two-dimensional image and the three-dimensional point cloud picture corresponding to the target body, and the three-dimensional point cloud picture of the target body is sent to the controller, or the two-dimensional image and the three-dimensional point cloud picture of the target body are both sent to the controller. The three-dimensional point cloud picture of the target body comprises a point cloud of the target body.
The two-dimensional trajectory graph acquiring module 42 is configured to receive a two-dimensional trajectory graph, where the two-dimensional trajectory graph is obtained by determining a first motion trajectory in the two-dimensional image of the target body, where the first motion trajectory is a two-dimensional motion trajectory.
In the embodiment of the application, the 3D camera can send the two-dimensional image to an upper computer (such as a computer or a mobile phone). After the two-dimensional image is displayed through image editing software (such as a drawing board) installed on the upper computer, the user can directly draw the first motion track on the two-dimensional image in the image editing software; after the drawing is finished, the upper computer sends the two-dimensional image with the first motion track drawn on it (i.e., the two-dimensional track map) to the controller. The first motion track is the two-dimensional motion track corresponding to how the user wants the mechanical arm to move on the target body.
A first target point cloud determining module 43, configured to determine, according to a mapping relationship between the two-dimensional image and the three-dimensional point cloud image, a projection of the first motion trajectory in the two-dimensional trajectory image on the three-dimensional point cloud image, so as to obtain a first target point cloud.
And the attitude determination module 44 is configured to determine the track points of the robot arm and the attitude corresponding to each track point according to the first target point cloud.
And a second motion trajectory determining module 45, configured to generate a second motion trajectory of the robot arm according to the track points of the robot arm and the corresponding postures of the track points, where the second motion trajectory is a three-dimensional motion trajectory.
In the embodiment of the application, after the three-dimensional point cloud image and the two-dimensional track image of the target body are obtained, the point cloud corresponding to the first motion track in the two-dimensional track image is determined according to the mapping relation between the two-dimensional image and the three-dimensional point cloud image of the target body to obtain the first target point cloud, and the track points and postures of the mechanical arm are then determined according to the first target point cloud so as to determine the second motion track of the mechanical arm. Because the first motion track is determined in the two-dimensional image of the target body, it can be determined quickly and accurately from the two-dimensional track image; because the two-dimensional image of the target body and the three-dimensional point cloud image have a specific mapping relation, the second motion track corresponding to the first motion track can be determined according to that mapping relation; and because generating the second motion track does not require manually dragging the mechanical arm, the precision and accuracy of the generated second motion track are further improved.
In some embodiments, the first target point cloud determining module 43 includes:
and the target skeleton determining unit is used for performing skeleton extraction processing on the first motion track in the two-dimensional track map to obtain the target skeleton.
In the embodiment of the application, only the first motion track is extracted as the target skeleton, so that the pixel coordinates of the target skeleton only need to be processed subsequently, and the pixel coordinates of the whole two-dimensional track map do not need to be processed, thereby effectively reducing the number of the pixel coordinates needing to be processed.
And the sequenced pixel coordinate determining unit is used for sequencing the pixel coordinates of the target skeleton to obtain the sequenced pixel coordinates.
And the first target point cloud determining unit is used for projecting the sorted pixel coordinates into the point cloud of the three-dimensional point cloud picture according to the mapping relation between the two-dimensional image and the three-dimensional point cloud picture to obtain a first target point cloud. And each target point in the first target point cloud also has a sequential relation with the sorted pixel coordinates.
In the embodiment of the application, the first motion track is extracted from the two-dimensional track map and then projected to the three-dimensional point cloud map, so that the number of pixel coordinates to be projected is reduced, and the speed of obtaining the first target point cloud is increased.
In some embodiments, the target skeleton determining unit includes:
and the binarization processing unit is used for converting the two-dimensional locus diagram into a gray map and performing binarization processing.
And a skeleton extraction processing unit, configured to perform skeleton extraction processing on the first motion trajectory in the binarized two-dimensional trajectory diagram to obtain the target skeleton.
In the embodiment of the application, when the two-dimensional track map is an RGB color picture, the two-dimensional track map is converted into a gray map and then subjected to binarization processing, so that the difference between the first motion track and other objects in the two-dimensional track map is increased, and the target skeleton can be accurately extracted from the two-dimensional track map subjected to binarization processing.
In some embodiments, the attitude determination module 44 includes:
and the down-sampling processing unit is used for performing down-sampling processing on the first target point cloud to obtain a second target point cloud.
Specifically, after the first target point cloud is down-sampled, the number of the target points in the second target point cloud is less than the number of the target points in the first target point cloud. In some embodiments, to facilitate extraction, corresponding target points are extracted from the first target point cloud at equal intervals such that the distances between adjacent target points in the second target point cloud are equal.
And the track point determining unit is used for determining the track points and the postures of the mechanical arm according to the second target point cloud.
In the embodiment of the application, the second target point cloud is obtained after the first target point cloud is subjected to downsampling, so that the number of the target points in the obtained second target point cloud is less than that of the target points in the first target point cloud, and the speed of the obtained track points and postures can be greatly improved when the track points and postures of the mechanical arm are determined according to the second target point cloud.
In some embodiments, the trajectory point determination unit includes:
and the matrix determining unit of the grabbing points is used for determining the matrix of the grabbing points according to the conversion relation between the camera coordinate system and the mechanical arm coordinate system and the second target point cloud, wherein the matrix of the grabbing points comprises the coordinates of the target point in the second target point cloud and information used for determining the posture of the mechanical arm at the track point.
And the coordinate determination unit of the track point is used for determining the coordinate of the track point of the mechanical arm and the posture of the mechanical arm at the track point according to the matrix of the grabbing point.
Specifically, since the second target point cloud is determined according to the first motion trajectory, the trajectory point of the robot arm includes the target point in the second target point cloud, and correspondingly, the coordinate of the trajectory point of the robot arm includes the coordinate of the target point in the second target point cloud.
In the embodiment of the application, the coordinate conversion is carried out on the second target point cloud according to the conversion relation between the camera coordinate system and the mechanical arm coordinate system, so that the obtained coordinate of the track point is more adaptive to the mechanical arm, and the accuracy of the subsequently obtained second motion track is improved.
In some embodiments, the above-mentioned matrix determination unit of the grab point is specifically configured to:
executing the following steps for any two adjacent target points in the second target point cloud, wherein the ordering relationship of each target point in the second target point cloud is determined according to the ordering relationship of the ordered pixel coordinates:
determining the straight line Lb along the normal vector of the point cloud surface at the earlier-ordered target point;
determining the straight line La which passes through the later-ordered target point and is perpendicular to Lb;
determining an intermediate point according to Lb and La;
determining a direction vector moving from the intermediate point to the later-ordered target point, the direction vector being perpendicular to Lb;
determining a track normal vector according to the normal vector and the direction vector, wherein the track normal vector is perpendicular to the plane formed by Lb and La;
determining a conversion matrix according to the direction vector, the normal vector, the track normal vector and the coordinates of the earlier-ordered target point;
and determining a matrix of the grabbing points corresponding to the two target points according to the conversion relation between the camera coordinate system and the mechanical arm coordinate system and the conversion matrix.
In the embodiment of the application, the conversion matrix is determined from the direction vector, the normal vector, the track normal vector and the coordinates of the earlier-ordered target point. The direction vector, the normal vector and the track normal vector are mutually perpendicular, and three mutually perpendicular vectors suffice to determine the posture at a track point, so the conversion matrix contains both the information for determining the posture at the earlier-ordered target point and the coordinates of that point. Therefore, after the grabbing-point matrix grabCordinate is determined from the conversion relation between the camera coordinate system and the mechanical arm coordinate system together with the conversion matrix, grabCordinate is guaranteed to contain both the information for determining the posture at the earlier-ordered target point and the coordinates of that point. Euler angles can subsequently be calculated from the direction vector, normal vector and track normal vector in grabCordinate to obtain the posture of the track point corresponding to the earlier-ordered target point, and the coordinates of the corresponding track point are read from the last entry of each row of grabCordinate: grabCordinate(0, 3), grabCordinate(1, 3) and grabCordinate(2, 3) are x0, y0 and z0 of the track point, respectively.
In some embodiments, the conversion relationship between the camera coordinate system and the mechanical arm coordinate system may be determined in the following manner; in this case, the motion trajectory generation apparatus 4 provided by the embodiment of the present application further includes:
the two-dimensional image acquisition module is used for acquiring a two-dimensional image and a three-dimensional point cloud picture corresponding to a space with a motion track to be determined, wherein at least 4 designated graphs are placed in the space with the motion track to be determined in advance.
And the two-dimensional coordinate identification module of the designated graphs is used for identifying the at least 4 designated graphs in the two-dimensional image corresponding to the space of the motion track to be determined to obtain the two-dimensional coordinates of each designated graph.
And the three-dimensional point cloud coordinate determining module is used for aligning the two-dimensional image and the three-dimensional point cloud picture corresponding to the space of the motion track to be determined and determining the three-dimensional point cloud coordinate corresponding to the two-dimensional coordinate of each designated graph.
And the coordinate acquisition module of the mechanical arm is used for acquiring the coordinates of the mechanical arm when the tail end of the mechanical arm points to the at least 4 designated graphs respectively.
And the conversion relation determining module is used for determining the conversion relation between the camera coordinate system and the mechanical arm coordinate system according to the three-dimensional point cloud coordinates and the coordinates of the mechanical arm.
In the embodiment of the application, the designated graphs are placed in the space where the motion trail needs to be determined, and because the designated graphs are easy to recognize, the mechanical arm can be calibrated quickly and accurately according to the recognition result.
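The patent does not name the fitting method used by the conversion relation determining module; one standard choice, sketched below under that assumption, is SVD-based rigid registration (the Kabsch algorithm) applied to the N >= 4 pairs of matching 3D positions of the designated graphs.

    import numpy as np

    def estimate_cam_to_arm(points_cam, points_arm):
        # points_cam, points_arm: Nx3 arrays (N >= 4) of the designated graphs'
        # positions in the camera frame and the mechanical arm frame, respectively
        c_cam, c_arm = points_cam.mean(axis=0), points_arm.mean(axis=0)
        H = (points_cam - c_cam).T @ (points_arm - c_arm)  # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                           # reject reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = c_arm - R @ c_cam
        return T                                           # 4x4 camera-to-arm transform

With more than the minimum of 4 correspondences, the same least-squares fit averages out measurement noise in the point cloud coordinates.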
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
Example three:
fig. 5 is a schematic structural diagram of a processing apparatus according to an embodiment of the present application. As shown in fig. 5, the processing apparatus 5 of this embodiment includes: at least one processor 50 (only one processor is shown in fig. 5), a memory 51, and a computer program 52 stored in the memory 51 and executable on the at least one processor 50, wherein the steps of any of the method embodiments described above are implemented when the processor 50 executes the computer program 52.
The processing device 5 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The processing device may include, but is not limited to, the processor 50 and the memory 51. Those skilled in the art will appreciate that fig. 5 is merely an example of the processing device 5 and does not constitute a limitation of it; the device may include more or fewer components than shown, combine certain components, or use different components, such as input/output devices, network access devices, etc.
The processor 50 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may be an internal storage unit of the processing device 5, such as a hard disk or internal memory of the processing device 5. In other embodiments, the memory 51 may be an external storage device of the processing device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the processing device 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the processing device 5. The memory 51 is used for storing an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer programs described above. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
An embodiment of the present application further provides a network device, where the network device includes: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, the processor implementing the steps of any of the method embodiments when executing the computer program.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above method embodiments.
Embodiments of the present application provide a computer program product, which when executed on a processing device, enables the processing device to implement the steps in the above method embodiments.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, all or part of the flow of the methods of the embodiments described above can be implemented by instructing relevant hardware with a computer program, which can be stored in a computer-readable storage medium; when executed by a processor, the computer program implements the steps of the method embodiments described above. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying computer program code to the photographing apparatus/processing device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB drive, a removable hard drive, a magnetic disk or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical function division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for generating a motion trail, characterized in that the method is applied to a controller of a mechanical arm and comprises the following steps:
receiving a three-dimensional point cloud picture of a target body sent by image acquisition equipment;
receiving a two-dimensional track map, wherein the two-dimensional track map is obtained by determining a first motion track in a two-dimensional image of the target body, and the first motion track is a two-dimensional motion track;
determining the projection of the first motion track in the two-dimensional track map on the three-dimensional point cloud map according to the mapping relation between the two-dimensional image and the three-dimensional point cloud map to obtain a first target point cloud;
determining track points of the mechanical arm and postures corresponding to the track points according to the first target point cloud;
and generating a second motion trail of the mechanical arm according to the track points of the mechanical arm and the postures corresponding to the track points, wherein the second motion trail is a three-dimensional motion trail.
2. The method for generating the motion trail according to claim 1, wherein the determining the projection of the first motion trail in the two-dimensional trail graph on the three-dimensional point cloud graph according to the mapping relationship between the two-dimensional image and the three-dimensional point cloud graph comprises:
performing skeleton extraction processing on the first motion trajectory in the two-dimensional trajectory graph to obtain a target skeleton;
sorting the pixel coordinates of the target skeleton to obtain sorted pixel coordinates;
and projecting the sorted pixel coordinates into the point cloud of the three-dimensional point cloud picture according to the mapping relation between the two-dimensional image and the three-dimensional point cloud picture to obtain a first target point cloud.
3. The method for generating a motion trail according to claim 2, wherein the performing skeleton extraction processing on the first motion trail in the two-dimensional trail graph to obtain the target skeleton comprises:
converting the two-dimensional track map into a gray map and carrying out binarization processing;
and performing skeleton extraction processing on the first motion trajectory in the two-dimensional trajectory graph after binarization processing to obtain the target skeleton.
4. The method for generating a motion trajectory according to claim 2 or 3, wherein the determining the trajectory points of the robot arm and the postures corresponding to the trajectory points according to the first target point cloud comprises:
performing down-sampling processing on the first target point cloud to obtain a second target point cloud;
and determining the track points of the mechanical arm and the corresponding postures of the track points according to the second target point cloud.
5. The method for generating a motion trail according to claim 4, wherein the determining the trajectory points of the robotic arm and the corresponding postures of the trajectory points according to the second target point cloud comprises:
determining a matrix of the grabbing points according to the conversion relation between the camera coordinate system and the mechanical arm coordinate system and the second target point cloud, wherein the matrix of the grabbing points comprises coordinates of target points in the second target point cloud and information used for determining the posture of the mechanical arm at the track points;
and determining the coordinates of the track points of the mechanical arm and the postures of the mechanical arm at the track points according to the matrix of the grabbing points.
6. The method for generating a motion trail according to claim 5, wherein the determining a matrix of the grabbing points according to the transformation relation between the camera coordinate system and the robot arm coordinate system and the second target point cloud comprises:
executing the following steps for any two adjacent target points in the second target point cloud, wherein the ordering relationship of each target point in the second target point cloud is determined according to the ordering relationship of the ordered pixel coordinates:
determining a first straight line along the normal vector of the point cloud at the earlier-ordered target point;
determining a second straight line which passes through the later-ordered target point and is perpendicular to the first straight line;
determining an intermediate point according to the first straight line and the second straight line;
determining a direction vector pointing from the intermediate point to the later-ordered target point, the direction vector being perpendicular to the first straight line;
determining a track normal vector according to the normal vector and the direction vector, wherein the track normal vector is perpendicular to a plane formed by the first straight line and the second straight line;
determining a conversion matrix according to the direction vector, the normal vector, the track normal vector and the coordinates of the earlier-ordered target point;
and determining a matrix of the grabbing points corresponding to the two target points according to the conversion relation between the camera coordinate system and the mechanical arm coordinate system and the conversion matrix.
7. The method for generating a motion trajectory according to claim 6, further comprising:
acquiring a two-dimensional image and a three-dimensional point cloud picture corresponding to a space with a motion track to be determined, wherein at least 4 designated graphs are placed in the space with the motion track to be determined in advance;
identifying the at least 4 designated graphs in the two-dimensional image corresponding to the space of the motion track to be determined to obtain two-dimensional coordinates of each designated graph;
aligning the two-dimensional image and the three-dimensional point cloud image corresponding to the space of the motion track to be determined, and determining a three-dimensional point cloud coordinate corresponding to the two-dimensional coordinate of each designated graph;
acquiring the coordinates of the mechanical arm when the tail end of the mechanical arm points to the at least 4 designated graphs respectively;
and determining the conversion relation between a camera coordinate system and a mechanical arm coordinate system according to the three-dimensional point cloud coordinates and the coordinates of the mechanical arms.
8. A motion trail generation device, characterized in that the device is applied to a controller of a mechanical arm and comprises:
the three-dimensional point cloud picture acquisition module is used for receiving a three-dimensional point cloud picture of a target body sent by the image acquisition equipment;
the two-dimensional track map acquisition module is used for receiving a two-dimensional track map, and the two-dimensional track map is obtained by determining a first motion track in a two-dimensional image of the target body, wherein the first motion track is a two-dimensional motion track;
the first target point cloud determining module is used for determining the projection of the first motion track in the two-dimensional track map on the three-dimensional point cloud map according to the mapping relation between the two-dimensional image and the three-dimensional point cloud map to obtain a first target point cloud;
the gesture determining module is used for determining track points of the mechanical arm and gestures corresponding to the track points according to the first target point cloud;
and the second motion track determining module is used for generating a second motion track of the mechanical arm according to the track points of the mechanical arm and the postures corresponding to the track points, wherein the second motion track is a three-dimensional motion track.
9. A processing device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202211427344.1A 2022-11-15 2022-11-15 Motion trail generation method and device and processing equipment Pending CN115713547A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211427344.1A CN115713547A (en) 2022-11-15 2022-11-15 Motion trail generation method and device and processing equipment

Publications (1)

Publication Number Publication Date
CN115713547A true CN115713547A (en) 2023-02-24

Family

ID=85233263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211427344.1A Pending CN115713547A (en) 2022-11-15 2022-11-15 Motion trail generation method and device and processing equipment

Country Status (1)

Country Link
CN (1) CN115713547A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117557599A (en) * 2024-01-12 2024-02-13 上海仙工智能科技有限公司 3D moving object tracking method and system and storage medium
CN117557599B (en) * 2024-01-12 2024-04-09 上海仙工智能科技有限公司 3D moving object tracking method and system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination