CN113327281A - Motion capture method and device, electronic equipment and flower drawing system - Google Patents


Info

Publication number
CN113327281A
CN113327281A (application number CN202110691108.XA)
Authority
CN
China
Prior art keywords
information
target object
mechanical arm
pose information
motion information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110691108.XA
Other languages
Chinese (zh)
Inventor
谭志强
Current Assignee
Guangdong Zhiyuan Robot Technology Co Ltd
Original Assignee
Guangdong Zhiyuan Robot Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Zhiyuan Robot Technology Co Ltd filed Critical Guangdong Zhiyuan Robot Technology Co Ltd
Priority to CN202110691108.XA priority Critical patent/CN113327281A/en
Publication of CN113327281A publication Critical patent/CN113327281A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/579: Depth or shape recovery from multiple images, from motion
    • B25J 9/1664: Programme controls characterised by motion, path, trajectory planning
    • B25J 9/1697: Vision controlled systems
    • G06K 7/1417: 2D bar codes
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/30108: Industrial image inspection
    • G06T 2207/30128: Food products


Abstract

The embodiments of the present application provide a motion capture method, a motion capture apparatus, an electronic device, and a flower drawing (latte art) system. The method includes: acquiring received multi-frame images, where the multi-frame images contain a target object; obtaining first motion information of the target object based on the multi-frame images, where the first motion information includes first pose information of the target object in each frame image, the first pose information is the pose information of the target object in a first coordinate system, and the first coordinate system is referenced to a camera; acquiring a coordinate conversion relationship, and performing coordinate conversion on the first pose information based on it to obtain a plurality of pieces of second pose information, where each piece of second pose information is the pose information of the mechanical arm end in a second coordinate system, and the second coordinate system is referenced to the mechanical arm base; and obtaining second motion information based on the plurality of pieces of second pose information, so that the mechanical arm end moves according to the second motion information.

Description

Motion capture method and device, electronic equipment and flower drawing system
Technical Field
The present disclosure relates to the field of motion capture technologies, and in particular, to a motion capture method, an apparatus, an electronic device, and a flower-drawing system.
Background
Catering services are increasingly popular, and the requirements on their production processes keep rising. Coffee flower drawing (latte art), for example, is a catering service with high process requirements. Traditional coffee flower drawing is performed by hand by a flower drawing artist and demands considerable experience and skill.
Coffee flower drawing equipment available on the market controls a mechanical arm to imitate manual flower drawing according to preset parameters and, after a large amount of debugging work, produces the required flower patterns. However, every added pattern requires new parameters to be added to the equipment, and debugging one parameter usually takes a long time, which is unfavorable for practical application. In addition, different flower drawing devices use different parameter settings, so the parameters cannot be migrated between devices.
Disclosure of Invention
The present application provides a motion capture method, a motion capture apparatus, an electronic device, and a flower drawing system, which can capture the motion information of a target object and obtain, through coordinate conversion, the motion information for controlling a mechanical arm, thereby reducing complicated operations, improving accuracy, and helping to solve the migration problem.
In a first aspect, the present application provides a motion capture method applied to a flower drawing system, where the flower drawing system includes a camera and a mechanical arm, and the mechanical arm includes a mechanical arm base and a mechanical arm end. The method includes:
acquiring received multi-frame images, where the multi-frame images contain a target object and are captured by the camera;
obtaining first motion information of the target object based on the multi-frame images, where the first motion information includes first pose information of the target object in each frame image, the first pose information is the pose information of the target object in a first coordinate system, and the first coordinate system is referenced to the camera;
acquiring a coordinate conversion relationship, and performing coordinate conversion on the first pose information based on the coordinate conversion relationship to obtain a plurality of pieces of second pose information, where each piece of second pose information is the pose information of the mechanical arm end in a second coordinate system, and the second coordinate system is referenced to the mechanical arm base; and
obtaining second motion information based on the plurality of pieces of second pose information, so that the mechanical arm end moves according to the second motion information.
In one possible implementation, the target object includes a two-dimensional code, and the camera is a black-and-white camera.
In one possible implementation manner, the obtaining first motion information of the target object based on the multiple frames of images includes:
and identifying the space state of the target object in each frame of image by using a visual identification model so as to obtain the first pose information of the target object in each frame of image.
In one possible implementation, the first motion information further includes a relative relationship between the first pose information of the target object in the first frame image and the first pose information of the target object in the other frame images, and the performing coordinate conversion on the first pose information based on the coordinate conversion relationship to obtain a plurality of pieces of second pose information includes:
converting the first pose information of the target object in the first frame image into second pose information based on the coordinate conversion relationship; and
obtaining, from the converted second pose information and the relative relationship, a plurality of pieces of second pose information corresponding to the first pose information of the target object in the other frame images.
In one possible implementation, the coordinate conversion relationship includes:

T_base→end = T_base→cam · T_cam→obj · T_obj→end

where T_base→end is the coordinate conversion relationship of the mechanical arm end relative to the mechanical arm base, T_base→cam is the coordinate conversion relationship of the camera relative to the mechanical arm base, T_cam→obj is the coordinate conversion relationship of the target object relative to the camera, and T_obj→end is the coordinate conversion relationship of the mechanical arm end relative to the target object.
In one possible implementation, the obtaining second motion information based on the plurality of pieces of second pose information includes:
performing inverse kinematics calculation on the second pose information to obtain motion information for the six joint axes of the mechanical arm.
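The patent does not spell out its inverse kinematics solver; as a hedged illustration of the step above, the following sketch solves closed-form inverse kinematics for a simplified two-link planar arm rather than the six-axis arm of the patent (the link lengths and the single elbow configuration are assumptions for illustration only).

```python
import math

def planar_ik(x, y, l1, l2):
    """Closed-form IK for a 2-link planar arm.

    Given a target tip position (x, y) and link lengths l1, l2, return the
    joint angles (q1, q2). A real 6-axis arm would solve all six joints.
    """
    d2 = x * x + y * y
    cos_q2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    cos_q2 = max(-1.0, min(1.0, cos_q2))  # clamp against rounding error
    q2 = math.acos(cos_q2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

def planar_fk(q1, q2, l1, l2):
    """Forward kinematics, used here only to verify the IK solution."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y
```

In the method above, each converted piece of second pose information would be fed through such a solver to obtain per-joint motion commands.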
In one possible implementation, after the first motion information of the target object is obtained based on the multi-frame images, the method further includes:
filtering the first pose information to obtain filtered first pose information.
In a second aspect, the present application provides a robot arm control method, including:
obtaining second motion information obtained by the method of the first aspect;
and controlling the mechanical arm according to the second motion information.
In a third aspect, the present application provides a motion capture device comprising a camera and a robot arm, the robot arm comprising a robot arm base and a robot arm tip, the motion capture device further comprising:
an acquisition module, configured to acquire received multi-frame images, where the multi-frame images contain a target object and are captured by the camera;
a first obtaining module, configured to obtain first motion information of the target object based on the multiple frames of images, where the first motion information includes first pose information of the target object in each frame of the image, the first pose information is pose information of the target object in a first coordinate system, and the first coordinate system is based on the camera;
the conversion module is used for acquiring a coordinate conversion relation, and performing coordinate conversion on the first pose information based on the coordinate conversion relation to obtain a plurality of second pose information, wherein the second pose information is pose information of the tail end of the mechanical arm in a second coordinate system, and the second coordinate system takes the mechanical arm base as a reference;
and the second obtaining module is used for obtaining second motion information based on the plurality of second position and posture information so as to enable the tail end of the mechanical arm to move according to the second motion information.
In a fourth aspect, the present application provides an electronic device, comprising:
one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the apparatus, cause the apparatus to perform the method of the first or second aspect.
In a fifth aspect, the present application provides a garland system, comprising:
the camera is used for shooting a target object to obtain a multi-frame image and sending the multi-frame image to the control device;
the control device includes:
one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the system, cause the system to perform the method of the first or second aspect;
and the mechanical arm is used for being controlled by the control device to perform movement operation.
In a sixth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when run on a computer, causes the computer to perform the method according to the first or second aspect.
In a seventh aspect, the present application provides a computer program for performing the method of the first or second aspect when the computer program is executed by a computer.
In a possible design, the program in the seventh aspect may be stored in whole or in part on a storage medium packaged with the processor, or in part or in whole on a memory not packaged with the processor.
Drawings
FIG. 1 is a schematic diagram of a motion capture method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram illustrating one embodiment of a motion capture method of the present application;
FIG. 3A is a schematic diagram of the application of a garlanding action by a garlanding artist in the present application;
fig. 3B is a schematic diagram illustrating a spatial state of a two-dimensional code reflected by a cube in the present application;
FIG. 4A is a schematic diagram of data of rotational motion around the X-axis in first pose information without filtering;
FIG. 4B is a schematic diagram of the data of the first pose information being filtered and rotating around the X-axis;
FIG. 5 is a schematic diagram illustrating coordinate transformation between a camera coordinate system and a robot base coordinate system according to an embodiment of the motion capture method of the present application;
FIG. 6 is a schematic diagram of a robotic arm according to an embodiment of the motion capture method of the present application;
FIG. 7 is a schematic illustration of a method of one embodiment of a robot arm control method of the present application;
FIG. 8 is a schematic block diagram of one embodiment of a motion capture device according to the present application;
FIG. 9 is a schematic structural view of an embodiment of the flower drawing system of the present application;
fig. 10 is a schematic structural diagram of an embodiment of an electronic device according to the present application.
Detailed Description
The terminology used in the description of the embodiments section of the present application is for the purpose of describing particular embodiments of the present application only and is not intended to be limiting of the present application.
In the prior art, coffee flower drawing equipment controls a mechanical arm to imitate manual flower drawing according to preset parameters and, after a large amount of debugging work, produces the required flower patterns. However, every added pattern requires new parameters, and debugging one parameter usually takes a long time, which is unfavorable for practical application. In addition, different flower drawing devices use different parameter settings, so the parameters cannot be migrated between devices.
Therefore, the application provides a motion capture method, a motion capture device, an electronic device and a garland system, which can capture motion information of a target object, obtain the motion information for controlling a mechanical arm through coordinate transformation, reduce complex transformation processes, improve accuracy and are beneficial to solving the problem of incapability of migration.
For example, the motion capture method can capture the motion information of the flower drawing cup held in the hand of a flower drawing artist during the flower drawing process, and then obtain the motion information for controlling the mechanical arm through coordinate conversion, so that the mechanical arm can imitate the artist's flower drawing action to perform the flower drawing operation.
In other words, the flower drawing system (or flower drawing equipment) provided by the present application captures the flower drawing action of the artist through the motion capture method, so as to control the mechanical arm to repeat that action without debugging parameters for each flower pattern, which reduces the difficulty of parameter debugging and saves time. In addition, the motion capture method can convert the captured motion information through different coordinate conversion relationships to obtain motion information applicable to different flower drawing devices, which helps to solve the migration problem.
In this embodiment, the motion capture method is applied to a flower drawing system. The flower drawing system may include a camera and a mechanical arm, and the mechanical arm may include a mechanical arm base and a mechanical arm end. The camera is used to photograph a target object (for example, a two-dimensional code) to obtain multi-frame images, so as to capture the first motion information of the target object (for example, the flower drawing action of an artist). The method converts the first motion information into second motion information for controlling the mechanical arm. The mechanical arm base can be fixed on a mounting platform, and the flower drawing system controls the mechanical arm according to the second motion information, so that the mechanical arm end moves relative to the base and thereby imitates the artist's flower drawing action. In practical applications, a flower drawing cup can be mounted on the mechanical arm end; as the end moves, the cup pours out the corresponding flower patterns, such as stars, moons, whales, or other patterns and lines.
FIG. 1 is a schematic diagram of a motion capture method according to an embodiment of the present application. As shown in fig. 1 and 2, the motion capture method may include:
s101, obtaining a received multi-frame image, wherein the multi-frame image comprises a target object and is obtained by shooting through a camera.
For example, as shown in fig. 3A, while the flower drawing artist holds the flower drawing cup and performs the flower drawing action, the target object moves (i.e., changes position and posture) along with the artist's hand and can be kept within the field of view of the camera at all times, so that the camera can capture a video of the target object's movement during the flower drawing process, where the video includes multiple frames of images. In this embodiment, the target object is fixed at the cup handle of the flower drawing cup held by the artist, so that the motion information of the target object better matches the artist's flower drawing action, reducing subsequent coordinate conversion steps and errors. In other embodiments, the target object may instead be fixed to the wall of the cup, which is not limited here.
Specifically, in step S101, a video of the artist's flower drawing action may be recorded by a high-frame-rate camera, and a target video of one complete flower drawing process (from the beginning to the end of flower drawing) is extracted from the video, where the target video includes multiple frames of images.
In one possible implementation, the target object may include a two-dimensional code. The two-dimensional code moves along with the artist's hand during the flower drawing process, so its motion information can represent the flower drawing action of the artist holding the cup; this motion information may include the pose information of the two-dimensional code in space, such as its position and posture (for example, rotation angle or direction). The camera is preferably a black-and-white camera, which has a higher frame rate, can capture more frames of images during the flower drawing process, and thus helps capture more motion information of the target object.
S102, obtaining first motion information of the target object based on the multi-frame image, wherein the first motion information comprises first pose information of the target object in each frame of the image, the first pose information is pose information of the target object in a first coordinate system, and the first coordinate system takes the camera as a reference.
In this embodiment, the first motion information of the target object may be used to represent a motion state of the target object in space. The first pose information of the target object may be used to represent a spatial state of the target object relative to the camera, such as a position and a pose of the target object in space.
In one possible implementation manner, step S102 may include: and identifying the space state of the target object in each frame of image by using a visual identification model so as to obtain the first pose information of the target object in each frame of image.
The visual recognition model may include an AprilTag visual positioning system, which can locate and recognize the pose information of a two-dimensional code (such as a tag) in an image. That is, the first pose information may include the position and posture of the target object in the first coordinate system (e.g., the camera coordinate system).
For example, as shown in fig. 3B, in step S102, the cube may be used to reflect the spatial state of the two-dimensional code, and the position and the rotation angle or direction of the cube are identified to determine the pose information of the two-dimensional code in the space.
It can be understood that, in one flower drawing process, according to multiple frames of images captured by the camera, in the capturing time sequence, multiple pieces of first pose information (e.g., a first pose information set) can be obtained to represent the spatial state of the target object in the flower drawing process.
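As a minimal sketch of how such a set of per-frame first pose information might be organized in code (the field names and units here are assumptions for illustration, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class Pose:
    t: float               # capture time of the frame, in seconds
    position: tuple        # (x, y, z) of the target object in the camera frame
    rotation_xyz: tuple    # rotation about the X, Y, Z axes, in degrees

def first_motion_information(frames):
    """Collect per-frame first pose information in capture-time order.

    `frames` is an iterable of (t, position, rotation_xyz) tuples, one per
    captured image; the result represents one flower drawing process.
    """
    return sorted((Pose(t, p, r) for t, p, r in frames),
                  key=lambda pose: pose.t)
```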
In one possible implementation, after step S102, the method further includes: filtering the first pose information to obtain filtered first pose information.
Fig. 4A is a schematic diagram of the rotation-about-the-X-axis data in the first pose information without filtering, and fig. 4B is a schematic diagram of the same data after filtering, where the horizontal axis is time in seconds (s) and the vertical axis is the rotation angle about the X axis in degrees (deg).
That is, because of the camera itself, there is a certain amount of noise in the process of photographing the target object, so the first pose information obtained in step S102 is not smooth enough; therefore, the first pose information needs to be filtered to improve data accuracy and reliability. Those skilled in the art will understand that various filtering methods may be used, such as filtering with a filter, which is not limited here.
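As one hedged example of such smoothing (the patent does not specify a filter; the window length and sample values below are invented), a simple moving average over the per-frame rotation angles:

```python
def moving_average(samples, window=5):
    """Smooth a sequence with a centered moving average.

    Edge samples are averaged over the part of the window that exists, so
    the output has the same length as the input.
    """
    half = window // 2
    smoothed = []
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        smoothed.append(sum(samples[lo:hi]) / (hi - lo))
    return smoothed

# Hypothetical noisy rotation-about-X readings (degrees), one per frame.
angles = [10.0, 10.4, 9.6, 10.2, 9.8, 10.1, 9.9]
print(moving_average(angles, window=3))
```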
S103, obtaining a coordinate conversion relation, and performing coordinate conversion on the first position and posture information based on the coordinate conversion relation to obtain a plurality of second position and posture information, wherein the second position and posture information is position and posture information of the tail end of the mechanical arm in a second coordinate system, and the second coordinate system takes the mechanical arm base as a reference.
In this embodiment, the coordinate conversion relationship may be determined according to a relative positional relationship between the robot arm tip and the robot arm base, a coordinate conversion relationship between the camera and the robot arm base, and the like. And for different garland equipment, the corresponding coordinate conversion relations are different. For example, the relative positional relationship between the end of the robot arm and the robot arm base may be determined according to the arm length of the robot arm, the degree of freedom of the joint axis of the robot arm, the mounting position of the robot arm base, and the like.
In step S103, the second position and orientation information may include a position and orientation of the robot arm end in a second coordinate system (e.g., a robot arm base coordinate system), and the like, for representing a spatial state of the robot arm end. That is, the spatial state of the target object with respect to the camera is converted into the spatial state of the robot arm tip with respect to the robot arm base based on the coordinate conversion relationship, so that the robot arm tip can move in imitation of the drawing action of the drawing artist.
In one possible implementation, the first motion information further includes a relative relationship between the first pose information of the target object in the first frame image and the first pose information of the target object in the other frame images, and step S103 may include:
S201, converting the first pose information of the target object in the first frame image into second pose information based on the coordinate conversion relationship;
S202, obtaining, from the converted second pose information and the relative relationship, a plurality of pieces of second pose information corresponding to the first pose information of the target object in the other frame images.
For example, the relative relationship between the first pose information of the target object in the first frame image and the first pose information of the target object in the other frame images may include a relative distance, a relative rotation angle, and the like. The relative distance may be determined according to a difference between position coordinates in the first position information, and the relative rotation angle may be determined according to a difference between angles in the first position information.
That is, in one flower drawing process, the first motion information of the target object may be determined from the first pose information of the target object in the first frame image together with the relative relationship. In order for the mechanical arm end to move in imitation of the target object's motion, the relative relationship among the pieces of first pose information in the first motion information and the relative relationship among the pieces of second pose information in the second motion information can be kept equal.
It can be understood that, in step S201, only the first pose information of the target object in the first frame image needs to be converted into the second pose information of the mechanical arm end; the remaining pieces of second pose information can then be obtained from the converted second pose information and the relative relationship. Therefore, the method does not need to convert every piece of first pose information through the coordinate conversion relationship, which reduces the number of coordinate conversion steps, reduces the accumulated error caused by coordinate conversion, and improves accuracy.
In some other embodiments, the first pose information of the target object in each frame of image may be transformed according to the coordinate transformation relationship, respectively, to obtain a plurality of second pose information of the end of the robot arm, which is not limited herein.
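The single-conversion idea above can be sketched with homogeneous 4x4 transforms as plain Python lists (the helper names and the pose values used for checking are illustrative assumptions, not the patent's code):

```python
def matmul4(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def invert_rigid(t):
    """Invert a rigid transform [R | p; 0 0 0 1] as [R^T | -R^T p; 0 0 0 1]."""
    r = [[t[j][i] for j in range(3)] for i in range(3)]  # R transposed
    p = [-sum(r[i][j] * t[j][3] for j in range(3)) for i in range(3)]
    return [r[0] + [p[0]], r[1] + [p[1]], r[2] + [p[2]], [0, 0, 0, 1]]

def replay(first_pose_base, camera_poses):
    """Convert only the first pose, then reuse camera-frame relative motion.

    camera_poses[i] is the target-object pose in the camera frame at frame
    i; first_pose_base is camera_poses[0] already converted into the arm
    base frame. Each later second pose is first_pose_base composed with the
    relative motion D_i = inv(T_0) * T_i observed by the camera, so only
    one coordinate conversion is ever performed.
    """
    t0_inv = invert_rigid(camera_poses[0])
    return [matmul4(first_pose_base, matmul4(t0_inv, t)) for t in camera_poses]
```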
In one possible implementation, the coordinate conversion relationship includes:

T_base→end = T_base→cam · T_cam→obj · T_obj→end

where T_base→end is the coordinate conversion relationship of the mechanical arm end relative to the mechanical arm base, T_base→cam is the coordinate conversion relationship of the camera relative to the mechanical arm base, T_cam→obj is the coordinate conversion relationship of the target object relative to the camera, and T_obj→end is the coordinate conversion relationship of the mechanical arm end relative to the target object.
For example, as shown in fig. 5, converting an arbitrary coordinate vector $S_1$ under the camera coordinate system (Camera Frame) into the mechanical arm base coordinate system (Base Frame) can be expressed by the following formula:

$$S_3 = {}^{B}R_{C}\,S_1 + S_2$$

expressed in matrix form as:

$$\begin{bmatrix} S_3 \\ 1 \end{bmatrix} = \begin{bmatrix} {}^{B}R_{C} & S_2 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} S_1 \\ 1 \end{bmatrix}$$

Thus, the coordinate transformation relationship of the camera relative to the mechanical arm base coordinate system may be expressed as:

$${}^{B}T_{C} = \begin{bmatrix} {}^{B}R_{C} & S_2 \\ 0 & 1 \end{bmatrix}$$

wherein $S_1$ is an arbitrary coordinate vector in the camera coordinate system, $S_2$ is the coordinate vector of the origin of the camera coordinate system under the mechanical arm base coordinate system, $S_3$ is $S_1$ converted into a coordinate vector under the mechanical arm base coordinate system, and ${}^{B}R_{C}$ is the rotation matrix of the camera coordinate system relative to the mechanical arm base coordinate system. The coordinate transformation relationship of the camera relative to the mechanical arm base is thereby determined.
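The two equivalent forms of this conversion (rotation plus translation versus a homogeneous matrix acting on an augmented vector) can be checked numerically. The rotation and vectors below are assumed values chosen only for illustration.

```python
import numpy as np

# Assumed rotation of the camera frame relative to the base frame (90 deg about z)
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
S2 = np.array([0.5, 0.0, 0.8])   # camera-frame origin expressed in the base frame
S1 = np.array([0.1, 0.2, 0.3])   # arbitrary point in the camera frame

# Direct form: rotate, then translate
S3 = R @ S1 + S2

# Equivalent homogeneous form
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = S2
S3_h = (T @ np.append(S1, 1.0))[:3]
```

Both forms yield the same base-frame coordinates, which is why the homogeneous matrix can stand in for the rotation-plus-translation pair throughout the chain.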
For example, as shown in fig. 6, the present embodiment further provides the coordinate transformation relationship among the cup-mouth coordinate system of the flower drawing cup, the coordinate system of the end of the mechanical arm, and the coordinate system of the base of the mechanical arm. Converting an arbitrary coordinate vector $S_6$ under the mechanical arm base coordinate system into the cup-mouth coordinate system can be expressed by the following formula:

$$S_6' = \left({}^{B}R_{E}\,{}^{E}R_{M}\right)^{\mathsf{T}} S_6 - S_4' - S_5'$$

expressed in matrix form as:

$$\begin{bmatrix} S_6' \\ 1 \end{bmatrix} = \begin{bmatrix} \left({}^{B}R_{E}\,{}^{E}R_{M}\right)^{\mathsf{T}} & -S_4' - S_5' \\ 0 & 1 \end{bmatrix} \begin{bmatrix} S_6 \\ 1 \end{bmatrix}$$

wherein

$$S_4' = \left({}^{B}R_{E}\,{}^{E}R_{M}\right)^{\mathsf{T}} S_4, \qquad S_5' = \left({}^{E}R_{M}\right)^{\mathsf{T}} S_5$$

wherein ${}^{B}R_{E}$ is the rotation matrix of the mechanical arm end coordinate system relative to the mechanical arm base coordinate system, ${}^{E}R_{M}$ is the rotation matrix of the cup-mouth coordinate system relative to the end coordinate system, $S_4$ is the coordinate vector of the origin of the mechanical arm end coordinate system under the mechanical arm base coordinate system, $S_5$ is the coordinate vector of the origin of the cup-mouth coordinate system under the mechanical arm end coordinate system, $S_6'$ is $S_6$ converted into a coordinate vector under the cup-mouth coordinate system, $S_5'$ is $S_5$ converted into a coordinate vector under the cup-mouth coordinate system, and $S_4'$ is $S_4$ converted into a coordinate vector under the cup-mouth coordinate system.
In this embodiment, the rotation matrix ${}^{B}R_{E}$ of the mechanical arm end coordinate system relative to the mechanical arm base coordinate system can be obtained by forward recursion from the coordinates of the joint axes of the mechanical arm in the initial pose, so as to determine the coordinate transformation relationship of the end of the mechanical arm relative to the base of the mechanical arm.
And S104, obtaining second motion information based on the plurality of second pose information, so that the end of the mechanical arm moves according to the second motion information.
That is, the flower drawing system controls the mechanical arm according to the second motion information, so that the end of the mechanical arm moves according to the second motion information, achieving the effect of imitating the motion of the target object (for example, imitating the flower drawing action of a flower drawing artist).
In one possible implementation manner, the second motion information may include motion information of six joint axes (e.g., six degrees of freedom) of the mechanical arm, and step S104 may include:
and S105, performing inverse kinematics calculation on the second position and posture information to obtain motion information of six joint axes of the mechanical arm.
For example, in step S105, the second pose information may be solved kinematically through a mechanical arm model (such as a DH model). For example, the rotational transformation relationship between adjacent joint axes of the mechanical arm may be represented by the following formula:

$${}^{i-1}T_{i} = \begin{bmatrix} C\theta_i & -S\theta_i C\alpha_i & S\theta_i S\alpha_i & a_i C\theta_i \\ S\theta_i & C\theta_i C\alpha_i & -C\theta_i S\alpha_i & a_i S\theta_i \\ 0 & S\alpha_i & C\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

wherein S and C are abbreviations for the trigonometric functions sin and cos, respectively, and $\theta_i$, $d_i$, $a_i$ and $\alpha_i$ are the modeling parameters of joint axis $i$.

Under the condition that the modeling parameters (such as a DH table) are obtained, the homogeneous coordinate transformation matrices between adjacent joint axes are respectively calculated, so as to obtain the pose matrix TCP of the end joint axis relative to the mechanical arm base coordinate system, which may be expressed as:

$$T_{\mathrm{TCP}} = {}^{0}T_{1}\,{}^{1}T_{2}\,{}^{2}T_{3}\,{}^{3}T_{4}\,{}^{4}T_{5}\,{}^{5}T_{6}$$
and then, carrying out inverse calculation on the pose matrix to obtain the motion information of six joint axes of the mechanical arm.
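A minimal sketch of the forward part of this computation, assuming a made-up DH table for a six-axis arm: each adjacent-axis matrix is built from the standard DH form and the six matrices are multiplied into the TCP pose matrix. A full inverse kinematics solver would then invert this mapping to recover the joint values and is omitted here.

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard DH homogeneous transform between adjacent joint axes."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ ct, -st * ca,  st * sa, a * ct],
        [ st,  ct * ca, -ct * sa, a * st],
        [0.0,       sa,       ca,      d],
        [0.0,      0.0,      0.0,    1.0],
    ])

# Hypothetical DH table for a six-axis arm: (d, a, alpha) per joint.
dh_table = [(0.20, 0.00,  np.pi / 2), (0.00, 0.40, 0.0), (0.00, 0.35, 0.0),
            (0.15, 0.00,  np.pi / 2), (0.10, 0.00, -np.pi / 2), (0.05, 0.00, 0.0)]

def forward_kinematics(joints):
    """Chain the six adjacent-axis transforms into the TCP pose matrix."""
    tcp = np.eye(4)
    for theta, (d, a, alpha) in zip(joints, dh_table):
        tcp = tcp @ dh_matrix(theta, d, a, alpha)
    return tcp

tcp = forward_kinematics([0.0] * 6)  # TCP pose at the zero joint configuration
```

The resulting matrix has an orthonormal rotation block and a homogeneous bottom row, as expected of a pose matrix.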
In this embodiment, the flower drawing system may perform motion planning according to the motion information of the six joint axes of the mechanical arm and then drive and control the mechanical arm, so that the mechanical arm performs the corresponding motion operations and the end of the mechanical arm moves in imitation of the flower drawing action of a flower drawing artist, allowing a flower drawing cup mounted at the end of the mechanical arm to draw out the desired flower drawing pattern.
It is to be understood that some or all of the steps or operations in the above-described embodiments are merely examples, and the embodiments of the present application may further perform other operations or variations of the various operations. Further, the various steps may be performed in a different order than presented in the above-described embodiments, and possibly not all of the operations in the above-described embodiments need to be performed.
Fig. 7 is a schematic diagram of an embodiment of a robot arm control method of the present application. As shown in fig. 7, the robot arm control method may include:
S301, acquiring second motion information obtained by the motion capture method provided in the method embodiment shown in fig. 1;
and S302, controlling the mechanical arm according to the second motion information.
The mechanical arm control method can be applied to a flower drawing system (or flower drawing equipment) for controlling a mechanical arm to perform flower drawing operation and the like.
In step S301, the step or principle of obtaining the second motion information may refer to a motion capture method provided in the embodiment of the method shown in fig. 1, and is not described herein again.
In step S302, the second motion information may include motion information of the six joint axes of the mechanical arm. The flower drawing system may perform motion planning according to the motion information of the six joint axes and then drive and control the mechanical arm, so that the mechanical arm performs the corresponding motion operations and the end of the mechanical arm moves in imitation of the flower drawing action of a flower drawing artist, allowing a flower drawing cup mounted at the end of the mechanical arm to draw out the desired flower drawing pattern.
Further, in step S301, the flower drawing system may receive a user order, where the user order includes a target pattern (e.g., a flower drawing pattern), and obtain the second motion information corresponding to the target pattern from a database (e.g., a local database or a cloud database). That is, different target patterns correspond to different second motion information, so as to satisfy diversified requirements.
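A toy version of that lookup, with an in-memory dict standing in for the local or cloud database; the pattern names and joint values below are invented for illustration.

```python
# Hypothetical mapping from an ordered pattern name to stored second motion
# information (per-frame values for the six joint axes). A real system would
# query a local or cloud database instead of an in-memory dict.
motion_db = {
    "heart":   [[0.0, 0.1, 0.2, 0.0, 0.0, 0.0],
                [0.1, 0.1, 0.2, 0.0, 0.0, 0.0]],
    "rosetta": [[0.0, 0.2, 0.1, 0.0, 0.1, 0.0]],
}

def fetch_motion(order_pattern):
    """Look up the joint-space trajectory for the pattern named in a user order."""
    try:
        return motion_db[order_pattern]
    except KeyError:
        raise ValueError(f"no stored motion for pattern {order_pattern!r}")

trajectory = fetch_motion("heart")
```

Keyed storage like this is what lets different target patterns map to different second motion information, as the paragraph above describes.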
It is to be understood that some or all of the steps or operations in the above-described embodiments are merely examples, and the embodiments of the present application may further perform other operations or variations of the various operations. Further, the various steps may be performed in a different order than presented in the above-described embodiments, and possibly not all of the operations in the above-described embodiments need to be performed.
Fig. 8 is a schematic diagram of an embodiment of the motion capture device 100 of the present application. As shown in fig. 8, the motion capture device 100 is applied to a flower drawing system including a camera and a mechanical arm, the mechanical arm including a mechanical arm base and a mechanical arm end, and the motion capture device 100 may include:
an obtaining module 10, configured to obtain a received multi-frame image, where the multi-frame image includes a target object, and the image is obtained by shooting with the camera;
a first obtaining module 20, configured to obtain first motion information of the target object based on the multiple frames of images, where the first motion information includes first pose information of the target object in each frame of the image, the first pose information is pose information of the target object in a first coordinate system, and the first coordinate system is based on the camera;
the conversion module 30 is configured to obtain a coordinate conversion relationship, perform coordinate conversion on the first pose information based on the coordinate conversion relationship, and obtain a plurality of second pose information, where the second pose information is pose information of the end of the mechanical arm in a second coordinate system, and the second coordinate system takes the mechanical arm base as a reference;
and a second obtaining module 40, configured to obtain second motion information based on the plurality of second position and orientation information, so that the end of the mechanical arm moves according to the second motion information.
In one possible implementation manner, the target object includes a two-dimensional code, and the camera is a black and white camera.
In one possible implementation manner, the first obtaining module 20 includes:
and identifying the space state of the target object in each frame of image by using a visual identification model so as to obtain the first pose information of the target object in each frame of image.
In one possible implementation manner, the first motion information further includes a relative relationship between the first pose information of the target object in the first frame image and the first pose information of the target object in the other frame images, and the conversion module 30 includes:
converting the first pose information of the target object in the first frame image into second pose information based on the coordinate conversion relationship;
and obtaining a plurality of second pose information corresponding to the first pose information of the target object in the other frame images according to the converted second pose information and the relative relationship.
In one possible implementation manner, the coordinate transformation relationship includes:

$${}^{B}T_{E} = {}^{B}T_{C}\;{}^{C}T_{O}\;{}^{O}T_{E}$$

wherein ${}^{B}T_{E}$ is the coordinate transformation relationship of the end of the mechanical arm relative to the base of the mechanical arm, ${}^{B}T_{C}$ is the coordinate transformation relationship of the camera relative to the base of the mechanical arm, ${}^{C}T_{O}$ is the coordinate transformation relationship of the target object relative to the camera, and ${}^{O}T_{E}$ is the coordinate transformation relationship of the end of the mechanical arm relative to the target object.
In one possible implementation manner, the second motion information includes motion information of six joint axes of the mechanical arm, and the second obtaining module 40 includes:
and performing inverse kinematics calculation on the second position and posture information to obtain motion information of six joint axes of the mechanical arm.
In one possible implementation manner, the apparatus 100 further includes:
and the filtering module 50 is configured to perform filtering processing on the first pose information to obtain filtered first pose information.
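The patent does not specify the filter; as one plausible choice, a moving average over the per-frame positions smooths jitter in the captured first pose information. The window size and the sample values below are assumptions made for this sketch.

```python
import numpy as np

def moving_average_filter(positions, window=3):
    """Smooth per-frame position samples with a simple moving average.
    A minimal stand-in for the filtering step, which the patent leaves open."""
    positions = np.asarray(positions, dtype=float)
    out = np.empty_like(positions)
    for i in range(len(positions)):
        lo = max(0, i - window + 1)          # shrink the window at the start
        out[i] = positions[lo:i + 1].mean(axis=0)
    return out

# Hypothetical noisy x/y/z positions of the target over four frames
noisy = [[0.00, 0.0, 0.5], [0.10, 0.0, 0.5], [0.05, 0.0, 0.5], [0.15, 0.0, 0.5]]
smoothed = moving_average_filter(noisy)
```

Orientation components would need a rotation-aware filter (e.g. averaging in quaternion space) rather than a per-component mean; this sketch covers positions only.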
It will be appreciated that the embodiment shown in fig. 8 provides a motion capture device that can be used to implement the solution of the method embodiment shown in fig. 1 of the present application, and that the implementation principles and technical effects thereof can be further referred to in the description of the method embodiment.
It should be understood that the division of the modules of the motion capture device shown in fig. 8 is merely a logical division, and the actual implementation may be wholly or partially integrated into one physical entity or may be physically separated. And these modules can be realized in the form of software called by processing element; or may be implemented entirely in hardware; and part of the modules can be realized in the form of calling by the processing element in software, and part of the modules can be realized in the form of hardware. For example, the conversion module may be a separate processing element, or may be integrated into a chip of the electronic device. Other modules are implemented similarly. In addition, all or part of the modules can be integrated together or can be independently realized. In implementation, each step of the above method or each module above may be implemented by an integrated logic circuit of hardware in a processor element or an instruction in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as: one or more Application Specific Integrated Circuits (ASICs), one or more digital signal processors (DSPs), one or more Field Programmable Gate Arrays (FPGAs), etc. For another example, these modules may be integrated together and implemented in the form of a System-On-a-Chip (SOC).
Fig. 9 is a schematic structural diagram of an embodiment of a flower drawing system 200 of the present application. As shown in fig. 9, the flower drawing system 200 may include: a camera 210 for shooting a target object to obtain a plurality of frames of images and sending the plurality of frames of images to a control device 220; the control device 220; and a robot arm 230 controlled by the control device 220 to perform motion operations.
The control device 220 includes:
one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the system, cause the system to perform the following steps;
acquiring a received multi-frame image, wherein the multi-frame image comprises a target object and is obtained by shooting through the camera;
obtaining first motion information of the target object based on the multi-frame images, wherein the first motion information comprises first pose information of the target object in each frame of the image, the first pose information is pose information of the target object in a first coordinate system, and the first coordinate system takes the camera as a reference;
acquiring a coordinate conversion relation, and performing coordinate conversion on the first pose information based on the coordinate conversion relation to obtain a plurality of second pose information, wherein the second pose information is pose information of the tail end of the mechanical arm in a second coordinate system, and the second coordinate system takes the mechanical arm base as a reference;
and obtaining second motion information based on a plurality of second position and posture information, so that the tail end of the mechanical arm moves according to the second motion information.
In one possible implementation manner, the target object includes a two-dimensional code, and the camera is a black and white camera.
In one possible implementation manner, when the instructions are executed by the system, the system executes the obtaining of the first motion information of the target object based on the multiple frames of images, including:
and identifying the space state of the target object in each frame of image by using a visual identification model so as to obtain the first pose information of the target object in each frame of image.
In one possible implementation manner, the first motion information further includes a relative relationship between first pose information of a target object in a first frame image and first pose information of a target object in other frame images, and when the instruction is executed by the system, the system executes the coordinate conversion on the first pose information based on the coordinate conversion relationship to obtain a plurality of second pose information, including:
converting the first pose information of the target object in the first frame image into second pose information based on the coordinate conversion relationship;
and obtaining a plurality of second pose information corresponding to the first pose information of the target object in the other frame images according to the converted second pose information and the relative relationship.
In one possible implementation manner, the coordinate transformation relationship includes:

$${}^{B}T_{E} = {}^{B}T_{C}\;{}^{C}T_{O}\;{}^{O}T_{E}$$

wherein ${}^{B}T_{E}$ is the coordinate transformation relationship of the end of the mechanical arm relative to the base of the mechanical arm, ${}^{B}T_{C}$ is the coordinate transformation relationship of the camera relative to the base of the mechanical arm, ${}^{C}T_{O}$ is the coordinate transformation relationship of the target object relative to the camera, and ${}^{O}T_{E}$ is the coordinate transformation relationship of the end of the mechanical arm relative to the target object.
In one possible implementation manner, the second motion information includes motion information of six joint axes of a mechanical arm, and when the instructions are executed by the system, the system executes the obtaining of the second motion information based on a plurality of second position and orientation information, including:
and performing inverse kinematics calculation on the second position and posture information to obtain motion information of six joint axes of the mechanical arm.
In one possible implementation manner, when the instructions are executed by the system, the system further performs, after the obtaining the first motion information of the target object based on the multiple frames of images is executed, the steps of:
and carrying out filtering processing on the first pose information to obtain filtered first pose information.
In one possible implementation, the instructions, when executed by the system, cause the system to further perform:
acquiring second motion information;
and controlling the mechanical arm according to the second motion information.
That is, the flower drawing system 200 may be used to execute the motion capture method in the embodiment shown in fig. 1 or the robot arm control method shown in fig. 7, and the functions or principles thereof may refer to the motion capture method in the embodiment shown in fig. 1 or the robot arm control method shown in fig. 7, which is not described herein again.
It is understood that the flower drawing system 200 may further include a communication module for information communication, an interaction module for interacting with a user (for example, to obtain a user order), and an alarm device controlled by the control device 220 to perform alarm operations.
It should be understood that the flower drawing system 200 of the present embodiment may include other different types of operating mechanisms to be controlled by the control module to perform different operations, and is not limited thereto.
It should be understood that the control device may be implemented as a control circuit, and the processor in the control device may be a system-on-chip (SOC). The processor may include a central processing unit (CPU), and may further include other types of processors, such as a graphics processing unit (GPU).
Fig. 10 is a schematic structural diagram of an embodiment of an electronic device according to the present application, and as shown in fig. 10, the electronic device may include: one or more processors; a memory; and one or more computer programs.
The electronic device may be a garland device or a robot arm control device.
Wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the apparatus, cause the apparatus to perform the steps of:
acquiring a received multi-frame image, wherein the multi-frame image comprises a target object and is obtained by shooting through the camera;
obtaining first motion information of the target object based on the multi-frame images, wherein the first motion information comprises first pose information of the target object in each frame of the image, the first pose information is pose information of the target object in a first coordinate system, and the first coordinate system takes the camera as a reference;
acquiring a coordinate conversion relation, and performing coordinate conversion on the first pose information based on the coordinate conversion relation to obtain a plurality of second pose information, wherein the second pose information is pose information of the tail end of the mechanical arm in a second coordinate system, and the second coordinate system takes the mechanical arm base as a reference;
and obtaining second motion information based on a plurality of second position and posture information, so that the tail end of the mechanical arm moves according to the second motion information.
In one possible implementation manner, the target object includes a two-dimensional code, and the camera is a black and white camera.
In one possible implementation manner, when the instructions are executed by the apparatus, the apparatus is caused to perform the obtaining of the first motion information of the target object based on the multiple frames of images, including:
and identifying the space state of the target object in each frame of image by using a visual identification model so as to obtain the first pose information of the target object in each frame of image.
In one possible implementation manner, the first motion information further includes a relative relationship between first pose information of a target object in a first frame image and first pose information of a target object in other frame images, and when the instruction is executed by the apparatus, the apparatus performs the coordinate transformation on the first pose information based on the coordinate transformation relationship to obtain a plurality of second pose information, including:
converting the first pose information of the target object in the first frame image into second pose information based on the coordinate conversion relationship;
and obtaining a plurality of second pose information corresponding to the first pose information of the target object in the other frame images according to the converted second pose information and the relative relationship.
In one possible implementation manner, the coordinate transformation relationship includes:

$${}^{B}T_{E} = {}^{B}T_{C}\;{}^{C}T_{O}\;{}^{O}T_{E}$$

wherein ${}^{B}T_{E}$ is the coordinate transformation relationship of the end of the mechanical arm relative to the base of the mechanical arm, ${}^{B}T_{C}$ is the coordinate transformation relationship of the camera relative to the base of the mechanical arm, ${}^{C}T_{O}$ is the coordinate transformation relationship of the target object relative to the camera, and ${}^{O}T_{E}$ is the coordinate transformation relationship of the end of the mechanical arm relative to the target object.
In one possible implementation manner, the second motion information includes motion information of six joint axes of a mechanical arm, and when the instruction is executed by the apparatus, the apparatus is caused to execute the obtaining of the second motion information based on a plurality of second position and orientation information, including:
and performing inverse kinematics calculation on the second position and posture information to obtain motion information of six joint axes of the mechanical arm.
In one possible implementation manner, when the instructions are executed by the apparatus, the apparatus further performs, after the obtaining the first motion information of the target object based on the multiple frames of images is executed, the steps of:
and carrying out filtering processing on the first pose information to obtain filtered first pose information.
When the instructions are executed by the device, the device is further caused to perform:
acquiring second motion information;
and controlling the mechanical arm according to the second motion information.
The electronic device shown in fig. 10 may be used to execute the motion capture method shown in the embodiment shown in fig. 1 or the robot arm control method shown in fig. 7, and the functions or principles thereof may refer to the motion capture method shown in the embodiment shown in fig. 1 or the robot arm control method shown in fig. 7, which is not described herein again.
As shown in fig. 10, the electronic device 900 includes a processor 910 and a memory 920. Wherein, the processor 910 and the memory 920 can communicate with each other through the internal connection path to transmit control and/or data signals, the memory 920 is used for storing computer programs, and the processor 910 is used for calling and running the computer programs from the memory 920.
The memory 920 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The processor 910 and the memory 920 may be combined into one processing device, or they may be independent components, and the processor 910 is configured to execute the program code stored in the memory 920 to realize the above functions. In a specific implementation, the memory 920 may be integrated in the processor 910 or may be independent of the processor 910.
In addition, in order to further improve the functions of the electronic apparatus 900, the electronic apparatus 900 may further include one or more of a camera 930, a power supply 940, an input unit 950, and the like.
Optionally, the power supply 940 is used to provide power to the various devices or circuits in the electronic device 900.
It should be understood that the electronic device 900 shown in fig. 10 is capable of implementing the processes of the methods provided by the embodiments shown in fig. 1 or fig. 7 of the present application. The operations and/or functions of the respective modules in the electronic device 900 are respectively for implementing the corresponding flows in the above-described method embodiments. Reference may be made specifically to the description of the embodiments of the method illustrated in fig. 1 or fig. 7 of the present application, and a detailed description is appropriately omitted herein to avoid redundancy.
It should be understood that the processor 910 in the electronic device 900 shown in fig. 10 may be a system-on-chip (SOC), and the processor 910 may include a central processing unit (CPU), and may further include other types of processors, such as a graphics processing unit (GPU).
In summary, various parts of the processors or processing units within the processor 910 may cooperate to implement the foregoing method flows, and corresponding software programs for the various parts of the processors or processing units may be stored in the memory 920.
The application also provides an electronic device, the device includes a storage medium and a central processing unit, the storage medium may be a non-volatile storage medium, a computer executable program is stored in the storage medium, and the central processing unit is connected with the non-volatile storage medium and executes the computer executable program to implement the method provided by the embodiment shown in fig. 1 or fig. 7 of the present application.
In the above embodiments, the processor may include, for example, a CPU, a microcontroller, or a digital signal processor (DSP), and may further include a GPU, an embedded neural-network processing unit (NPU), and an image signal processor (ISP). The processor may further include a necessary hardware accelerator or logic processing hardware circuit, such as an ASIC, or one or more integrated circuits for controlling the execution of the programs of the technical solution of the present application. Further, the processor may have the function of operating one or more software programs, and the software programs may be stored in the storage medium.
Embodiments of the present application further provide a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the method provided by the embodiments shown in fig. 1 or fig. 7 of the present application.
Embodiments of the present application also provide a computer program product, which includes a computer program, when the computer program runs on a computer, causing the computer to execute the method provided by the embodiments shown in fig. 1 or fig. 7 of the present application.
In the embodiments of the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, and means that there may be three relationships, for example, a and/or B, and may mean that a exists alone, a and B exist simultaneously, and B exists alone. Wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" and similar expressions refer to any combination of these items, including any combination of singular or plural items. For example, at least one of a, b, and c may represent: a, b, c, a and b, a and c, b and c or a and b and c, wherein a, b and c can be single or multiple.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, any function, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description covers only specific embodiments of the present application; any changes or substitutions that a person skilled in the art can readily conceive of within the technical scope disclosed herein shall fall within the protection scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A motion capture method, applied to a flower drawing system, wherein the flower drawing system comprises a camera and a mechanical arm, the mechanical arm comprises a mechanical arm base and a mechanical arm end, and the method comprises the following steps:
acquiring received multi-frame images, wherein the multi-frame images contain a target object and are captured by the camera;
obtaining first motion information of the target object based on the multi-frame images, wherein the first motion information comprises first pose information of the target object in each frame of image, the first pose information is the pose information of the target object in a first coordinate system, and the first coordinate system takes the camera as a reference;
acquiring a coordinate transformation relationship, and performing coordinate transformation on the first pose information based on the coordinate transformation relationship to obtain a plurality of pieces of second pose information, wherein the second pose information is the pose information of the mechanical arm end in a second coordinate system, and the second coordinate system takes the mechanical arm base as a reference;
and obtaining second motion information based on the plurality of pieces of second pose information, so that the mechanical arm end moves according to the second motion information.
2. The method of claim 1, wherein the target object comprises a two-dimensional code and the camera is a black-and-white camera.
3. The method according to claim 1, wherein the obtaining first motion information of the target object based on the multi-frame images comprises:
recognizing the spatial state of the target object in each frame of image by using a visual recognition model, so as to obtain the first pose information of the target object in each frame of image.
4. The method according to claim 1, wherein the first motion information further comprises a relative relationship between the first pose information of the target object in a first frame image and the first pose information of the target object in each other frame image, and the performing coordinate transformation on the first pose information based on the coordinate transformation relationship to obtain a plurality of pieces of second pose information comprises:
converting the first pose information of the target object in the first frame image into second pose information based on the coordinate transformation relationship;
and obtaining, according to the converted second pose information and the relative relationship, a plurality of pieces of second pose information corresponding to the first pose information of the target object in the other frame images.
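The propagation described in claim 4 — converting only the first frame's pose into the base frame and then carrying every later frame across via its stored relative relationship — can be sketched as follows. This is an illustrative sketch with 2-D poses (x, y, theta) and hypothetical values; the patent's poses are full 6-DOF:

```python
import math

def compose(p, d):
    """Compose pose p = (x, y, theta) with a relative pose d expressed in p's frame."""
    x, y, th = p
    dx, dy, dth = d
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# Second pose of the first frame, already converted into the arm-base frame
# (hypothetical values).
first_frame_in_base = (1.0, 2.0, math.pi / 2)

# Relative relationship of each later frame's target pose to the first
# frame's, as measured in the camera images (hypothetical values).
relative_to_first = [(0.1, 0.0, 0.0), (0.2, 0.0, 0.1)]

# Claim 4: one conversion, then pure composition for the remaining frames.
second_poses = [compose(first_frame_in_base, d) for d in relative_to_first]
```

Only one camera-to-base conversion is performed; the remaining poses follow from the relative relationships alone, which is the point of the claim.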
5. The method of claim 1, wherein the coordinate transformation relationship comprises:

^base T_end = ^base T_cam · ^cam T_obj · ^obj T_end

wherein ^base T_end is the coordinate transformation relationship of the mechanical arm end relative to the mechanical arm base, ^base T_cam is the coordinate transformation relationship of the camera relative to the mechanical arm base, ^cam T_obj is the coordinate transformation relationship of the target object relative to the camera, and ^obj T_end is the coordinate transformation relationship of the mechanical arm end relative to the target object.
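The chained transformation of claim 5 is a composition of homogeneous transforms: the end pose in the base frame is the camera-in-base transform, times the object-in-camera transform, times the end-relative-to-object transform. A minimal sketch with 4x4 matrices and hypothetical values (not values from the patent):

```python
import math

def make_transform(theta_z, tx, ty, tz):
    """Homogeneous 4x4 transform: rotation about z by theta_z plus a translation."""
    c, s = math.cos(theta_z), math.sin(theta_z)
    return [[c, -s, 0.0, tx],
            [s,  c, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def matmul(a, b):
    """4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Camera pose in the arm-base frame (fixed, from hand-eye calibration).
base_T_cam = make_transform(0.0, 0.5, 0.0, 1.0)
# Target-object pose in the camera frame (from the vision step).
cam_T_obj = make_transform(math.pi / 2, 0.1, 0.2, 0.8)
# Arm-end pose relative to the target object (task-defined).
obj_T_end = make_transform(0.0, 0.0, 0.0, 0.05)

# Claim 5: base_T_end = base_T_cam . cam_T_obj . obj_T_end
base_T_end = matmul(matmul(base_T_cam, cam_T_obj), obj_T_end)
```

Note how the intermediate frames cancel pairwise (cam with cam, obj with obj), which is what makes the chain valid.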
6. The method of claim 1, wherein the second motion information comprises motion information of six joint axes of the mechanical arm, and the obtaining second motion information based on the plurality of pieces of second pose information comprises:
performing inverse kinematics calculation on the second pose information to obtain the motion information of the six joint axes of the mechanical arm.
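The inverse kinematics step of claim 6 maps each end pose to joint angles. A full 6-axis solver is arm-specific, so the following sketch uses a closed-form planar 2-link arm purely to illustrate the pose-to-joint-angle mapping; the link lengths and target point are hypothetical:

```python
import math

def two_link_ik(x, y, l1=0.3, l2=0.25):
    """Closed-form inverse kinematics for a planar 2-link arm (one elbow branch).

    Illustrative only: the patent's arm has six joint axes and would use a
    6-DOF solver, but the principle of recovering joint angles from an end
    pose is the same.
    """
    d2 = x * x + y * y
    cos_q2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    q2 = math.acos(max(-1.0, min(1.0, cos_q2)))  # clamp against rounding
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

def forward(q1, q2, l1=0.3, l2=0.25):
    """Forward kinematics, used to verify the IK solution."""
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))

q1, q2 = two_link_ik(0.4, 0.2)
```

Running forward kinematics on the returned angles reproduces the requested end point, which is the standard sanity check for an IK result.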
7. The method according to any one of claims 1 to 6, further comprising, after the obtaining first motion information of the target object based on the multi-frame images:
filtering the first pose information to obtain filtered first pose information.
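The filtering step of claim 7 suppresses per-frame detection jitter before the poses drive the arm. The patent does not specify a filter, so this is a minimal sliding-window average over one scalar pose component as an illustration; a real system might instead use a Kalman or low-pass filter on the full 6-DOF pose:

```python
def moving_average(values, window=3):
    """Smooth a sequence of scalar pose components with a trailing window.

    A stand-in for the filtering of claim 7 (assumed filter, not from the
    patent); early samples use however many values are available.
    """
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        chunk = values[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Noisy x-coordinates of the target object across frames (hypothetical).
xs = [0.0, 0.1, 0.05, 0.15, 0.1]
smoothed = moving_average(xs)
```

The smoothed sequence varies less between consecutive frames, giving the downstream coordinate transformation and inverse kinematics steadier inputs.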
8. A mechanical arm control method, characterized by comprising:
obtaining the second motion information according to the method of any one of claims 1 to 7;
and controlling the mechanical arm according to the second motion information.
9. A motion capture device, comprising a camera and a mechanical arm, wherein the mechanical arm comprises a mechanical arm base and a mechanical arm end, and the motion capture device further comprises:
an acquisition module, configured to acquire received multi-frame images, wherein the multi-frame images contain a target object and are captured by the camera;
a first obtaining module, configured to obtain first motion information of the target object based on the multi-frame images, wherein the first motion information comprises first pose information of the target object in each frame of image, the first pose information is the pose information of the target object in a first coordinate system, and the first coordinate system takes the camera as a reference;
a conversion module, configured to acquire a coordinate transformation relationship and perform coordinate transformation on the first pose information based on the coordinate transformation relationship to obtain a plurality of pieces of second pose information, wherein the second pose information is the pose information of the mechanical arm end in a second coordinate system, and the second coordinate system takes the mechanical arm base as a reference;
and a second obtaining module, configured to obtain second motion information based on the plurality of pieces of second pose information, so that the mechanical arm end moves according to the second motion information.
10. An electronic device, comprising:
one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory and comprise instructions which, when executed by the electronic device, cause the electronic device to perform the method of any one of claims 1 to 7 or claim 8.
11. A computer-readable storage medium, in which a computer program is stored which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 7 or claim 8.
12. A flower drawing system, comprising:
a camera, configured to capture a target object to obtain multi-frame images and send the multi-frame images to a control device;
the control device, comprising:
one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the system, cause the system to perform the method of any of claims 1 to 7 or claim 8;
and a mechanical arm, configured to be controlled by the control device to perform motion operations.
CN202110691108.XA 2021-06-22 2021-06-22 Motion capture method and device, electronic equipment and flower drawing system Pending CN113327281A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110691108.XA CN113327281A (en) 2021-06-22 2021-06-22 Motion capture method and device, electronic equipment and flower drawing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110691108.XA CN113327281A (en) 2021-06-22 2021-06-22 Motion capture method and device, electronic equipment and flower drawing system

Publications (1)

Publication Number Publication Date
CN113327281A true CN113327281A (en) 2021-08-31

Family

ID=77424143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110691108.XA Pending CN113327281A (en) 2021-06-22 2021-06-22 Motion capture method and device, electronic equipment and flower drawing system

Country Status (1)

Country Link
CN (1) CN113327281A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113954064A (en) * 2021-09-27 2022-01-21 广东博智林机器人有限公司 Robot navigation control method, device and system, robot and storage medium
CN114067658A (en) * 2021-11-30 2022-02-18 深圳市越疆科技有限公司 Coffee flower teaching system
WO2023097797A1 (en) * 2021-11-30 2023-06-08 深圳市越疆科技有限公司 Coffee preparation method, device, and system
CN114067658B (en) * 2021-11-30 2023-08-04 深圳市越疆科技有限公司 Coffee draws colored teaching system
CN114147714A (en) * 2021-12-02 2022-03-08 浙江机电职业技术学院 Autonomous robot mechanical arm control parameter calculation method and system
CN114568942A (en) * 2021-12-10 2022-06-03 上海氦豚机器人科技有限公司 Method and system for garland track acquisition and garland control based on visual following
CN114568942B (en) * 2021-12-10 2024-06-18 上海氦豚机器人科技有限公司 Flower drawing track acquisition and flower drawing control method and system based on vision following
CN115008452A (en) * 2022-05-12 2022-09-06 兰州大学 Mechanical arm control method and system, electronic equipment and storage medium
CN115008452B (en) * 2022-05-12 2023-01-31 兰州大学 Mechanical arm control method and system, electronic equipment and storage medium
CN115530617A (en) * 2022-10-25 2022-12-30 深圳市越疆科技有限公司 Coffee making method, device and system
CN116509449A (en) * 2023-07-03 2023-08-01 深圳华大智造云影医疗科技有限公司 Pose information determining method and device of mechanical arm and electronic equipment
CN116509449B (en) * 2023-07-03 2023-12-01 深圳华大智造云影医疗科技有限公司 Pose information determining method and device of mechanical arm and electronic equipment

Similar Documents

Publication Publication Date Title
CN113327281A (en) Motion capture method and device, electronic equipment and flower drawing system
CN113352338A (en) Mechanical arm control method and device, electronic equipment and flower drawing system
CN111402290B (en) Action restoration method and device based on skeleton key points
CN108994832B (en) Robot eye system based on RGB-D camera and self-calibration method thereof
US11331806B2 (en) Robot control method and apparatus and robot using the same
CN111462154A (en) Target positioning method and device based on depth vision sensor and automatic grabbing robot
WO2019041900A1 (en) Method and device for recognizing assembly operation/simulating assembly in augmented reality environment
CN112862878B (en) Mechanical arm blank repairing method based on 3D vision
KR20110033235A (en) Method of teaching robotic system
WO2020190166A1 (en) Method and system for grasping an object by means of a robotic device
CN109840508A (en) One robot vision control method searched for automatically based on the depth network architecture, equipment and storage medium
CN113284192A (en) Motion capture method and device, electronic equipment and mechanical arm control system
CN105500370A (en) Robot offline teaching programming system and method based on somatosensory technology
JP6193135B2 (en) Information processing apparatus, information processing system, and information processing method
CN113246131B (en) Motion capture method and device, electronic equipment and mechanical arm control system
CN113219854A (en) Robot simulation control platform, method and computer storage medium
CN109531578B (en) Humanoid mechanical arm somatosensory control method and device
KR101936130B1 (en) System and method for assembling blocks using robot arm
EP4155036A1 (en) A method for controlling a grasping robot through a learning phase and a grasping phase
CN115713547A (en) Motion trail generation method and device and processing equipment
CN111360819B (en) Robot control method and device, computer device and storage medium
RU2756437C1 (en) Method and system for planning the movement of a manipulator robot by correcting the reference trajectories
JP7478848B2 (en) Teacher data generation device, machine learning device, and robot joint angle estimation device
Regal et al. Using single demonstrations to define autonomous manipulation contact tasks in unstructured environments via object affordances
US20200058135A1 (en) System and method of object positioning in space for virtual reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination