CN116690565A - Cooperative robot and control method


Info

Publication number
CN116690565A
CN116690565A (application CN202310732510.7A)
Authority
CN
China
Prior art keywords
stability
robot
cooperative
collaborative
grabbing
Prior art date
Legal status
Granted
Application number
CN202310732510.7A
Other languages
Chinese (zh)
Other versions
CN116690565B (en)
Inventor
邵春刚
鲁家威
Current Assignee
Suzhou Kuaijie Intelligent Technology Co ltd
Original Assignee
Suzhou Kuaijie Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Kuaijie Intelligent Technology Co., Ltd.
Priority to CN202410185007.9A (CN117885099A)
Priority to CN202310732510.7A (CN116690565B)
Publication of CN116690565A
Application granted
Publication of CN116690565B
Status: Active
Anticipated expiration

Classifications

    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J 9/1664 Programme controls characterised by programming, planning systems for manipulators; characterised by motion, path, trajectory planning
    • B25J 9/1679 Programme controls characterised by the tasks executed
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

An embodiment of this specification provides a cooperative robot and a control method. The cooperative robot comprises a movable base, a grabbing device, a driving device, an identification guiding device, and a processor; the identification guiding device comprises an electromagnetic identification guiding device and a visual identification guiding device. The processor is configured to: in response to receiving a collaboration instruction, extract a first location parameter and a second location parameter from the collaboration instruction, the first location parameter indicating an initial location of a target object and the second location parameter indicating a target location of the target object; determine at least one collaboration route based on the first location parameter and the second location parameter; and, based on the at least one collaboration route, control the movable base and the grabbing device to move under the guidance of the electromagnetic identification guiding device and/or the visual identification guiding device so as to complete the collaboration task.

Description

Cooperative robot and control method
Technical Field
This specification relates to the technical field of robots, and in particular to a cooperative robot and a control method.
Background
A cooperative robot is a robot that can directly interact with a person within a cooperative area (an area where the robot and the person can work at the same time). Compared with a traditional industrial robot, a cooperative robot is notable for its safety, ease of use, and flexibility of deployment.
To improve work efficiency, automated production lines have begun to use multiple cooperative robots to deliver goods simultaneously. However, after an existing collaborative robot completes positioning, slight shaking can occur, which affects the accuracy with which objects are positioned, grabbed, and conveyed. When multiple robots operate in linkage, a positioning error in a single robot easily interferes with the normal operation of the other robots and degrades overall coordination.
It is therefore necessary to provide a cooperative robot and a control method capable of improving the stability and coordination of cooperative robots.
Disclosure of Invention
One embodiment of the present specification provides a cooperative robot comprising a movable base, a grabbing device, a driving device, an identification guiding device, and a processor. The identification guiding device comprises an electromagnetic identification guiding device and a visual identification guiding device. The processor is configured to: in response to receiving a collaboration instruction, extract a first location parameter and a second location parameter from the collaboration instruction, the first location parameter indicating an initial location of a target object and the second location parameter indicating a target location of the target object; determine at least one collaboration route based on the first location parameter and the second location parameter; and, based on the at least one collaboration route, control the movable base and the grabbing device to move under the guidance of the electromagnetic identification guiding device and/or the visual identification guiding device so as to complete the collaboration task.
One embodiment of the present specification provides a collaborative robot control system, comprising: an extraction module configured to, in response to receiving a collaboration instruction, extract a first location parameter and a second location parameter from the collaboration instruction, the first location parameter indicating an initial location of a target object and the second location parameter indicating a target location of the target object; a determining module configured to determine at least one collaboration route based on the first location parameter and the second location parameter; and a control module configured to control, based on the at least one collaboration route, the movable base and the grabbing device to move under the guidance of the electromagnetic identification guiding device and/or the visual identification guiding device so as to complete the collaboration task.
One embodiment of the present specification provides a collaborative robot device comprising at least one memory and at least one processor; the at least one memory is configured to store computer instructions, and the at least one processor is configured to execute at least a portion of the computer instructions to implement the collaborative robot control method described in the above embodiments.
One embodiment of the present specification provides a computer-readable storage medium storing computer instructions that, when read by a computer, cause the computer to perform the collaborative robot control method described in the above embodiments.
Drawings
The present specification is further described by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in the drawings, like numerals represent like structures, wherein:
FIG. 1 is an exemplary flowchart of a collaborative robot control method according to some embodiments of the present specification;
FIG. 2A is an exemplary schematic diagram of a grabbing stability prediction model according to some embodiments of the present specification;
FIG. 2B is an exemplary schematic diagram of a movement stability prediction model according to some embodiments of the present specification;
FIG. 3 is an exemplary flowchart of correcting a collaborative robot according to some embodiments of the present specification.
Detailed Description
To more clearly illustrate the technical solutions of the embodiments of the present specification, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some examples or embodiments of the present specification, and those of ordinary skill in the art can apply the present specification to other similar situations according to these drawings without inventive effort. Unless otherwise apparent from the context or otherwise specified, like reference numerals in the figures refer to like structures or operations.
It will be appreciated that "system," "apparatus," "unit," and/or "module" as used herein is one way of distinguishing between different components, elements, parts, portions, or assemblies at different levels. These words may be replaced by other expressions that achieve the same purpose.
As used in this specification and the claims, the terms "a," "an," and/or "the" do not refer specifically to the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
A flowchart is used in this specification to describe the operations performed by the system according to embodiments of the present specification. It should be appreciated that these operations are not necessarily performed precisely in order; rather, steps may be processed in reverse order or simultaneously, and other operations may be added to or removed from these processes.
The running stability and positioning accuracy of a cooperative robot are critical to its normal operation. When multiple robots operate in linkage, an individual robot that runs unstably or mis-positions itself interferes with the normal operation of the others and degrades overall coordination. Moreover, when multiple cooperative robots run simultaneously, the cooperative route of each robot must be clearly planned so that operation is efficient and smooth. This specification therefore provides a cooperative robot and a control method: a collaboration route is determined for the cooperative robot, the robot is controlled to complete the collaborative task under the guidance of an identification guiding device, and the robot's operation parameters are corrected during operation, thereby ensuring that the collaborative task is completed successfully.
In some embodiments, the collaborative robot may include a movable base, a gripping device, a driving device, an identification guide device, and a processor.
A movable base may refer to a component used to move the robot. For example, the movable base may be wheels or tracks driven by motors.
In some embodiments, the movable base may include an AGV (Automated Guided Vehicle).
In some embodiments, the movable base may include at least one retractable support foot. Preferably, the movable base may have four retractable support feet.
A retractable support foot is a component for supporting and stabilizing the cooperative robot; it can adjust the robot's level and stability by extending or shortening.
A grabbing device may refer to a device for grabbing an article. For example, the grabbing device may be a gripper, an electromagnetic chuck, a robotic arm, or the like.
The driving device may refer to a device for driving the cooperative robot to move or work. For example, the driving device may be an electric motor, a hydraulic drive, a pneumatic drive, or the like. In some embodiments, the driving device may power the operation of the movable base, the grabbing device, etc.
In some embodiments, the driving device can fix the cooperative robot in place after positioning by means of a servo motor. The servo motor is described further below.
The identification guiding device may refer to a device for guiding the collaborative robot to complete a collaborative task. For example, the identification guiding device may guide the cooperative robot to travel along a predetermined path, or guide the collaborative robot to a specified location.
In some embodiments, the identification guiding device may include an electromagnetic identification guiding device, a visual identification guiding device, an inclination identification guiding device, and the like.
The electromagnetic identification guiding device may refer to a device that uses electromagnetic signals to guide the cooperative robot in completing a collaborative task. For example, the electromagnetic identification guiding device may use an electromagnetic field sensor or an RFID (Radio Frequency Identification) reader to read the surrounding spatial magnetic field information for guidance.
The visual identification guiding device may refer to a device that uses image information to guide the cooperative robot in completing a collaborative task. For example, the visual identification guiding device may capture images of the surrounding environment using a camera or an optical sensor and use them for guidance.
In some embodiments, the visual identification guiding device may guide the grabbing device to grab the target object using a vision-system 3D camera, described further below.
The inclination identification guiding device may refer to a device for detecting tilt angle data. For example, it may include an acceleration sensor, a gyroscope, or the like. The tilt angle data are described further below.
FIG. 1 is an exemplary flowchart of a collaborative robot completing a collaborative task according to some embodiments of the present specification. As shown in FIG. 1, the process 100 includes the following steps. In some embodiments, the process 100 may be performed by a processor.
Step 110: in response to receiving a collaboration instruction, extract a first location parameter and a second location parameter from the collaboration instruction.
The collaboration instruction may refer to an instruction that controls the collaborative robot to complete a collaborative task. For example, the collaboration instruction may include the relevant information of the target object, the first location parameter, the second location parameter, and so on. The target object is the object grabbed by the cooperative robot; for example, when the collaborative robot is applied on a production line, the target object may be equipment, goods, a processed product, or the like. The relevant information of the target object may include its size, weight, type (e.g., food, mechanical, chemical), properties (e.g., fragile, flammable), and the like. For a description of the collaborative task, see step 130.
The first location parameter may be used to indicate the initial location of the target object, i.e., the start position at which the target object is placed. For example, if the target object is goods to be transported, its initial location may include the position of the corresponding carrier and the specific position of the goods on that carrier.
The second location parameter may be used to indicate the target location of the target object, i.e., the end position to which the target object is conveyed. For example, if the target object is goods to be transported, the target location may include the destination of the goods.
In some embodiments, the processor may parse the collaboration instruction and extract from it the first location parameter and the second location parameter of the target object. For example, parsing the collaboration instruction may include data format identification, field identification, and the like.
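As a minimal illustration of this parsing step, the sketch below assumes a JSON-encoded instruction with hypothetical field names; the embodiments do not specify a message format.

```python
import json

def extract_location_params(raw_instruction: str):
    """Parse a collaboration instruction and pull out the two location
    parameters. The JSON layout and field names here are assumptions."""
    msg = json.loads(raw_instruction)        # data format identification
    first = tuple(msg["initial_position"])   # field identification: where the target object starts
    second = tuple(msg["target_position"])   # field identification: where it must end up
    return first, second

# Hypothetical instruction message
instruction = '{"object": "pallet-07", "initial_position": [2.0, 5.5], "target_position": [12.0, 1.5]}'
print(extract_location_params(instruction))  # ((2.0, 5.5), (12.0, 1.5))
```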
Step 120: determine at least one collaboration route based on the first location parameter and the second location parameter.
A collaboration route may refer to a route taken by the cooperative robot from the initial location of the target object to the target location. For example, when the target object is the goods to be transported in the foregoing example, the collaboration route may include at least one of: route 1, from the position of the cooperative robot to the position of the carrier corresponding to the goods; route 2, from the position of the carrier to the specific position of the goods on the carrier; and route 3, from the position of the goods to the target location of the goods.
In some embodiments, when a single collaborative robot performs the collaborative task, the processor may determine the collaboration route using a path planning algorithm based on the first location parameter and the second location parameter. For example, the path planning algorithm may include at least one of a particle swarm algorithm, a genetic algorithm, an ant colony algorithm, or any combination thereof.
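The embodiments name particle swarm, genetic, and ant colony algorithms; as a deliberately simpler stand-in, the sketch below plans a route with plain breadth-first search on an assumed occupancy grid.

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = obstacle).
    A simple substitute for the planners named in the embodiments; the grid
    layout and cell coordinates are assumptions."""
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path, node = [], goal          # walk the parent links back to start
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no route between the two positions

grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(plan_route(grid, (0, 0), (2, 3)))  # [(0,0), (0,1), (0,2), (1,2), (2,2), (2,3)]
```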
In some embodiments, when multiple cooperative robots perform a linked collaborative task, the processor may, after determining the collaboration routes in the foregoing manner, evaluate the abnormal risk of the linked collaborative task and correct the cooperative robots according to that risk. For more on this embodiment, see FIG. 3 and its associated description.
Step 130: based on the at least one collaboration route, control the movable base and the grabbing device to move under the guidance of the electromagnetic identification guiding device and/or the visual identification guiding device so as to complete the collaboration task.
A collaborative task is a task in which the collaborative robot carries the target object from its initial location to the target location.
In some embodiments, the processor may send control signals to the driving device to control the movement of the movable base and the grabbing device based on the at least one collaboration route and the relevant information of the target object. For example, the movable base may be controlled to rotate, accelerate, decelerate, or change travel direction; the grabbing device may be controlled to lift, grab, release, and so on.
In some embodiments, the processor may obtain real-time information from the electromagnetic identification guiding device and/or the visual identification guiding device to guide the movement of the movable base and the grabbing device and so complete the collaborative task. An exemplary automated guidance process includes the following steps S1-S4.
Step S1: the processor detects the surrounding magnetic field information (for example, deviation parameters of the magnetic field) through the electromagnetic identification guiding device to obtain the current position of the cooperative robot (i.e., positioning), and then controls the cooperative robot to move from the current position to the initial location of the target object along the collaboration route.
Step S2 (optional): after reaching the initial location of the target object, the processor acquires and recognizes image information of the target object through the visual identification guiding device to verify that the target object is correct; if it is, the processor enters the grabbing stage.
Step S3: in the grabbing stage, the relevant information of the target object (for example, size, shape, type) is obtained either by recognizing the image information acquired by the visual identification guiding device or by reading the target object's RFID tag with the electromagnetic identification guiding device (e.g., an RFID reader), and the grabbing device is controlled accordingly to perform the grabbing operation. During grabbing, the specific position of the target object is acquired through the electromagnetic identification guiding device and/or the visual identification guiding device so as to guide the grabbing device to a proper position to grab or release the target object.
Step S4: after grabbing is finished, the carrying stage begins; the real-time position of the cooperative robot is acquired through the electromagnetic identification guiding device and/or the visual identification guiding device, and the moving direction is guided in real time. In some embodiments, the visual identification guiding device may also capture images of the surrounding environment and recognize obstacles in it, providing obstacle-avoidance guidance for the movement of the collaborative robot.
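The S1-S4 sequence can be pictured as a small state machine. The sketch below is illustrative only; the `robot` facade and its methods (`move_to`, `verify_object`, `grab`, `carry_to`) are hypothetical stand-ins for the guiding devices.

```python
from enum import Enum, auto

class Phase(Enum):
    NAVIGATE_TO_OBJECT = auto()  # S1: electromagnetic positioning, move along the route
    VERIFY_OBJECT = auto()       # S2 (optional): visual check of the target object
    GRAB = auto()                # S3: grabbing guided by image/RFID information
    CARRY = auto()               # S4: guided carrying with obstacle avoidance
    DONE = auto()

def run_collaboration(robot, route):
    """Drive the robot through phases S1-S4; `robot` is a hypothetical facade
    over the electromagnetic/visual identification guiding devices."""
    phase = Phase.NAVIGATE_TO_OBJECT
    while phase is not Phase.DONE:
        if phase is Phase.NAVIGATE_TO_OBJECT:
            robot.move_to(route.initial_position)   # guided by magnetic field readings
            phase = Phase.VERIFY_OBJECT
        elif phase is Phase.VERIFY_OBJECT:
            if not robot.verify_object():           # optional visual verification
                raise RuntimeError("wrong target object at initial location")
            phase = Phase.GRAB
        elif phase is Phase.GRAB:
            robot.grab()                            # grabbing guided by image/RFID data
            phase = Phase.CARRY
        elif phase is Phase.CARRY:
            robot.carry_to(route.target_position)   # real-time guidance + obstacle avoidance
            phase = Phase.DONE
```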
In some embodiments, during movement, the processor may further determine the position of the collaborative robot by having the visual identification guiding device recognize positioning two-dimensional codes, and guide the collaborative robot to complete the collaborative task. For example, the visual identification guiding device may recognize a positioning two-dimensional code and calculate the spatial coordinates of the collaborative robot based on the vision-system 3D camera. Positioning two-dimensional codes are described below.
In some embodiments, the cooperative robot may control the grabbing device to grab the target object, and control the movable base to move along the collaboration route, under the guidance of the electromagnetic identification guiding device and/or the visual identification guiding device, so as to complete the collaborative task.
In some embodiments of the present specification, guidance by the electromagnetic and/or visual identification guiding devices along the collaboration route, together with the movement of the movable base and the grabbing device, enables the collaborative robot to transport the target object accurately and efficiently.
In some embodiments, the processor may further control the collaborative robot to achieve positioning based on positioning two-dimensional codes.
A positioning two-dimensional code is a code used to realize the positioning function. In some embodiments, a positioning two-dimensional code may encode the location information, direction information, environment information, and the like of the place where it is set. For example, the location information may include a floor, a room number, a shelf layer number, etc.; the direction information may guide the orientation of the collaborative robot during positioning; the environment information may describe the surrounding environment (e.g., the location of an obstacle). In some embodiments, a positioning two-dimensional code may be placed on a wall, a post, the ground, or a sign suspended in the air, and at least two positioning two-dimensional codes may be provided.
In some embodiments, the collaborative robot may combine AGV laser navigation with the positioning two-dimensional codes for compound navigation, thereby controlling the movable base to achieve positioning. In AGV laser navigation, a laser scanner senses the surrounding environment and locates the cooperative robot on a map, enabling autonomous navigation.
In some embodiments, navigating with positioning two-dimensional codes includes: the processor controls an external vision camera to recognize and determine the identification information of at least two positioning two-dimensional codes near the cooperative robot, and determines the parking position of the cooperative robot from that identification information. The identification information comprises the information pre-stored in the positioning two-dimensional code, such as the location information, direction information, and environment information of its setting position.
In some embodiments, the processor may further fine-tune the position of the collaborative robot based on its parking position and preset position to achieve accurate positioning. The preset position may be a position point on the collaboration route of the collaborative robot.
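A minimal sketch of this fine-tuning step, assuming planar coordinates in metres and a tolerance matched to the ±2 mm accuracy figure quoted further below; the function and its interface are assumptions.

```python
def fine_tune_position(parking_pos, preset_pos, tolerance_m=0.002):
    """Return the corrective move from the parking position toward the preset
    position, or (0, 0) if already within tolerance. The coordinate frame and
    the 2 mm tolerance are assumptions."""
    dx = preset_pos[0] - parking_pos[0]
    dy = preset_pos[1] - parking_pos[1]
    if abs(dx) <= tolerance_m and abs(dy) <= tolerance_m:
        return (0.0, 0.0)   # accurate enough; no fine-tuning needed
    return (dx, dy)         # small corrective move to issue to the movable base

print(fine_tune_position((4.103, 7.498), (4.100, 7.500)))  # roughly (-0.003, 0.002)
```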
In some embodiments, the processor may further keep the movable base stationary by means of the servo motor band-type brake after positioning the cooperative robot using the positioning two-dimensional codes.
It should be noted that, because the movable base is supported by a plurality of retractable support feet, the band-type brake of the servo motor causes slight shaking of the movable base and plastic deformation of the soft material on the surface of its wheels, producing positioning errors. In some embodiments, the processor may correct such positioning errors by adjusting the telescoping length of at least one retractable support foot. Further description of adjusting the telescoping length may be found in FIG. 1 and its related description.
In some embodiments of the present specification, controlling the movable base with positioning two-dimensional codes for positioning and positioning correction enables the cooperative robot to achieve precise positioning with an accuracy of ±2 mm.
In some embodiments, the processor is further configured to: determine the stability of the cooperative robot in real time; and, in response to the stability not meeting a first preset condition, adjust the operation parameters of the cooperative robot.
Stability may refer to the ability of the collaborative robot to remain stable while performing collaborative tasks. Stability may be represented by a number or a grade; the larger the value, the greater the stability.
In some embodiments, the stability of the collaborative robot may be determined from tilt angle data, which may include the tilt direction and the tilt magnitude. For example, stability may be measured by the tilt magnitude: the greater the tilt magnitude, the lower the stability. As another example, the movement stability may be taken as the value 100 minus the tilt magnitude and minus the variance of the tilt angle data, where the variance is computed from tilt angle data recorded over a period of time. The larger the tilt angle and the larger its variation, the lower the actual movement stability.
In some embodiments, the tilt angle data may be acquired by the inclination identification guiding device, described above.
The first preset condition is a judgment condition related to the stability of the cooperative robot and may be set based on historical experience.
In some embodiments, the processor may determine whether the stability of the collaborative robot satisfies the first preset condition based on the tilt angle data. In that case, the first preset condition may be that the tilt magnitude is less than or equal to a preset angle threshold: when the tilt angle exceeds the angle threshold, the stability of the cooperative robot is judged not to meet the first preset condition. The angle threshold may be preset by the system or manually.
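The tilt-based stability score and threshold check described above can be written out as follows; the 100-point scale comes from the example in this section, while the 5° angle threshold is an assumed placeholder.

```python
from statistics import pvariance

def movement_stability(tilt_samples_deg):
    """Example rule from above: 100 minus the current tilt magnitude minus the
    variance of tilt readings recorded over a period of time."""
    return 100.0 - abs(tilt_samples_deg[-1]) - pvariance(tilt_samples_deg)

def meets_first_preset_condition(current_tilt_deg, angle_threshold_deg=5.0):
    """First preset condition on raw tilt: tilt magnitude within the threshold.
    The 5-degree threshold is an assumed placeholder."""
    return abs(current_tilt_deg) <= angle_threshold_deg

samples = [1.2, 1.5, 0.9, 1.1, 1.3]            # recent tilt readings in degrees
print(movement_stability(samples))              # ~98.66 (variance 0.04, tilt 1.3)
print(meets_first_preset_condition(samples[-1]))  # True
```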
In some embodiments, stability may include movement stability and grabbing stability.
Movement stability may refer to the ability of the cooperative robot to remain stable while moving.
Grabbing stability may refer to the ability of the cooperative robot to remain stable while grabbing the target object.
Movement stability and grabbing stability can be determined in various ways, for example from the tilt angle data. In some embodiments, the processor may also determine them using machine learning models; see FIGS. 2A and 2B and their associated descriptions.
In some embodiments, stability may also include positioning stability, i.e., the ability of the cooperative robot to remain stable after positioning and brake fixation are completed.
In some embodiments, the first preset condition may include the grabbing stability being greater than or equal to a first threshold and the movement stability being greater than or equal to a second threshold.
The first threshold is a threshold condition related to grabbing stability; the second threshold is a threshold condition related to movement stability. The first threshold and the second threshold may be used to determine whether the operation parameters of the collaborative robot need to be adjusted.
In some embodiments, the first threshold and the second threshold may be determined based on the number of collaborative robots within the current space and the number of collaborative robots jointly executing the linked collaborative task in which the current collaborative robot participates.
The current space may refer to a space in which the collaborative robot is located when performing a collaborative task. For example, the current space may include a warehouse, a factory, a supermarket, and the like.
A linked collaborative task is a collaborative task executed jointly by at least two collaborative robots. See FIG. 3 for more description of linked collaborative tasks.
In some implementations, the greater the number of collaborative robots within the current space and the greater the number of collaborative robots in the linked collaborative task, the larger the first threshold and the second threshold.
In some embodiments of the present specification, monitoring movement stability and grabbing stability in real time against the first and second thresholds, and adjusting accordingly, ensures the stable operation of each collaborative robot and improves both the efficiency of multi-robot linkage and the completion efficiency of collaborative tasks.
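The embodiments fix only the direction of this relationship (more robots, higher thresholds). A sketch with assumed base values and step size:

```python
def stability_thresholds(n_robots_in_space, n_robots_in_task,
                         base_first=60.0, base_second=60.0, step=2.0):
    """Scale the first (grabbing) and second (movement) stability thresholds
    with the robot counts, per the monotone rule above. All constants are
    assumed placeholders, not values from the patent."""
    bump = step * (n_robots_in_space + n_robots_in_task)
    return base_first + bump, base_second + bump

print(stability_thresholds(n_robots_in_space=8, n_robots_in_task=3))  # (82.0, 82.0)
```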
In some embodiments, the processor may control an associated device to raise an alarm when the movement stability is less than a first alarm threshold or the grabbing stability is less than a second alarm threshold. The first alarm threshold and the second alarm threshold are used to judge whether an alarm needs to be raised, and may be preset by the system or manually.
In some embodiments, the processor may adjust the operating parameters of the collaborative robot in response to the stability not meeting the first preset condition.
Operation parameters are parameters that control the operation of the collaborative robot. In some embodiments, the operation parameters may include the telescoping parameter of at least one retractable support foot. In some embodiments, they may also include the grabbing parameters and the movement parameters of the collaborative robot, described below.
The telescoping parameter is a parameter related to the telescoping length of a retractable support foot. For example, the telescoping parameter may control the support foot to raise the movable base and/or the grabbing device to a certain height.
The grabbing parameters are parameters that control the movement of the grabbing device. In some embodiments, the grabbing parameters may include a preset level, a preset grabbing speed, a preset grabbing path, a preset clamping point, and the like.
The preset level is the horizontal level that the AGV needs to reach; it may be set from historical experience.
The preset grabbing path is the planned path along which the grabbing device grabs the material; it may be a straight line, a curve, or the like, preset manually or by the system, or determined in real time under the guidance of the electromagnetic and/or visual identification guiding devices. For example, the processor may generate the preset grabbing path for the grabbing device based on an RRT (Rapidly-exploring Random Tree) algorithm.
The preset grabbing speed is the planned speed at which the grabbing device grabs the material; it may be preset.
The preset clamping point is the planned position clamped when the grabbing device grabs the material, for example the center point or the two end points of the material. It may be preset manually or by the system, or determined in real time under the guidance of the electromagnetic and/or visual identification guiding devices.
The movement parameters are parameters that control the movement of the movable base. In some embodiments, the movement parameters may include a preset moving speed, a preset steering speed, and the like.
The preset moving speed is the planned moving speed of the movable base; the preset steering speed is its planned steering speed. Both may be preset manually or by the system.
In some embodiments, the processor may determine operation adjustment parameters and adjust the operation parameters of the collaborative robot based on them.
An operation adjustment parameter is a parameter value used to adjust an operation parameter; it may be an adjustment delta applied to the operation parameter, or the adjusted operation parameter itself.
In some embodiments, the operation adjustment parameters may include telescoping adjustment parameters, i.e., parameter values for adjusting the telescoping parameter.
In some embodiments, the processor may construct a horizontal feature vector from the tilt angle data, the AGV model, the current height of at least one retractable support foot, and the preset horizontal height, and determine the telescoping adjustment parameters by vector retrieval in a vector database. The vector database stores a plurality of reference horizontal feature vectors and their corresponding reference telescoping adjustment parameters. An exemplary horizontal feature vector is (x0, y0, z0, t0), where x0 represents the tilt angle data, y0 the AGV model, z0 the current height of at least one retractable support foot, and t0 the preset horizontal height.
Vector retrieval finds the reference horizontal feature vector in the vector database that meets a preset condition, and the reference telescoping adjustment parameter corresponding to that vector is taken as the telescoping adjustment parameter of the cooperative robot. The preset condition is the judgment condition for selecting the reference horizontal feature vector; in some embodiments, it may include the vector distance being less than a distance threshold, the vector distance being minimal, and the like.
In some embodiments, the processor may preset a correspondence between different telescoping lengths and telescoping speeds of the support feet, and determine the telescoping speed from the telescoping length corresponding to the telescoping adjustment parameter and that correspondence. Determining the telescoping speed for each telescoping length keeps the collaborative robot in a stable state during adjustment while minimizing the time the adjustment requires.
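A minimal sketch of the vector-retrieval step, using Euclidean distance and a nearest-match rule; the feature encoding, the example database entries, and the distance threshold are all assumptions.

```python
import math

def nearest_adjustment(query, database, distance_threshold=1.0):
    """Retrieve the telescoping adjustment whose reference horizontal feature
    vector lies nearest the query (Euclidean distance); return None if even
    the nearest vector exceeds the threshold."""
    best_params, best_dist = None, math.inf
    for ref_vec, ref_params in database:
        dist = math.dist(query, ref_vec)
        if dist < best_dist:
            best_params, best_dist = ref_params, dist
    return best_params if best_dist <= distance_threshold else None

# (tilt deg, AGV model id, current foot height m, preset level m) -> per-foot extension deltas (m)
vector_db = [
    ((0.5, 2, 0.10, 0.12), (+0.02, +0.02, +0.02, +0.02)),
    ((1.8, 2, 0.10, 0.12), (+0.03, +0.01, +0.03, +0.01)),
]
print(nearest_adjustment((1.6, 2, 0.10, 0.12), vector_db))  # second entry is nearest
```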
In some embodiments, the operation adjustment parameters may include grabbing adjustment parameters and movement adjustment parameters.
A grabbing adjustment parameter is a parameter value for adjusting a grabbing parameter. In some embodiments, the processor may determine the grabbing adjustment parameters in response to the grabbing stability of the collaborative robot being less than the first threshold.
In some embodiments, the processor may determine the grabbing adjustment parameters of the collaborative robot based on the grabbing stability prediction model.
In some embodiments, the processor may generate multiple sets of candidate grabbing adjustment parameters, input each set together with the other data into the grabbing stability prediction model, and obtain the grabbing stability corresponding to each candidate set. The other data are the input data, other than the grabbing parameters, that the grabbing stability prediction model requires; for more description of the model's inputs, see FIG. 2A and its associated description.
Multiple output results are obtained through repeated runs of the grabbing stability prediction model, and a candidate set whose output satisfies the condition (namely, grabbing stability greater than the first threshold) is taken as the final grabbing adjustment parameters.
The multiple sets of candidate grabbing adjustment parameters may be generated in several ways: randomly generated by the processor; randomly generated based on historical data of the collaborative robot grabbing the same kind of target object (e.g., the same goods to be transported); or preset manually.
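The generate-and-test loop described above might look like the following, with a stub standing in for the trained grabbing stability prediction model; the candidate encoding (grab speed, clamp-point offset) and the stub's coefficients are assumptions. The same pattern applies to the movement adjustment parameters discussed next.

```python
import random

def select_grab_adjustment(candidates, predict_stability, first_threshold):
    """Run each candidate parameter set through the prediction model and keep
    the first one whose predicted grabbing stability exceeds the threshold."""
    for params in candidates:
        if predict_stability(params) > first_threshold:
            return params
    return None  # no candidate passed; caller may regenerate candidates or raise an alarm

# Hypothetical candidates: (grab speed in m/s, clamp-point offset in m)
candidates = [(round(random.uniform(0.05, 0.3), 2), round(random.uniform(-0.02, 0.02), 3))
              for _ in range(20)]
stub_model = lambda p: 100.0 - 120.0 * p[0] - 400.0 * abs(p[1])  # stand-in for the trained model
print(select_grab_adjustment(candidates, stub_model, first_threshold=70.0))
```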
A movement adjustment parameter is a parameter value for adjusting a movement parameter. In some embodiments, the processor may determine the movement adjustment parameters in response to the movement stability of the collaborative robot being less than the second threshold.
In some embodiments, the processor may determine the movement adjustment parameters of the collaborative robot based on the movement stability prediction model.
In some embodiments, the processor may generate multiple sets of candidate movement adjustment parameters and input each set, the other data, and the collaboration route segment on which the movement stability was not greater than the second threshold into the movement stability prediction model, obtaining the movement stability corresponding to each candidate set. The other data are the input data, other than the movement parameters, that the movement stability prediction model requires; for more description of the model's inputs, see FIG. 2B and its associated description.
Multiple output results are obtained through repeated runs of the movement stability prediction model, and a candidate set whose output satisfies the condition (namely, movement stability greater than the second threshold) is taken as the final movement adjustment parameters.
Multiple sets of candidate movement adjustment parameters may be generated in various ways; for example, they may be randomly generated by the processor.
In some embodiments, the processor may gradually reduce the moving speed and the steering speed from the cooperative robot's current values to obtain multiple candidate movement adjustment parameters.
In some embodiments of the present specification, adjusting the operation parameters of the collaborative robot effectively improves its stability in complex environments, reduces accident risk during collaborative task execution, and improves the quality and efficiency of task completion. Using the grabbing stability prediction model and the movement stability prediction model to determine the grabbing and movement adjustment parameters allows the operation adjustment parameters to be determined accurately and effectively, ensuring the coordination and stability of both the grabbing and moving processes and effectively improving collaboration efficiency.
It should be noted that the above description of the process 100 is for illustration only and is not intended to limit the scope of the present specification. Those skilled in the art may make various modifications and changes to the process 100 under the guidance of this description; such modifications and changes remain within its scope.
FIG. 2A is an exemplary schematic diagram of a grabbing stability prediction model according to some embodiments of the present specification.
Referring to FIG. 2A, in some embodiments, the processor may predict the grabbing stability 230 of the collaborative robot based on the grabbing stability prediction model 220.
In some embodiments, the grabbing stability prediction model may be a machine learning model, such as a convolutional neural network (CNN) or another neural network model.
As shown in FIG. 2A, in some embodiments, the inputs to the grabbing stability prediction model may include AGV data 210, material morphology parameters 211, material weight parameters 212, material position 213, and grabbing device data 214; the output may include the grabbing stability 230 of the collaborative robot.
The AGV data may include the AGV model, AGV position, AGV tilt angle data, preset level, height of at least one retractable support foot, etc. See FIG. 1 for more description of the preset level.
The material is the target object. The material morphology parameters may include the shape, size, etc. of the material; the material weight parameters may include its weight, centroid, etc. Both may be pre-stored in a database and obtained by reading it.
The grabbing device data 214 may include the model 214-1 of the grabbing device, the preset grabbing path 214-2, the preset grabbing speed 214-3, the preset clamping point 214-4, and so on. The preset grabbing path, preset grabbing speed, and preset clamping point belong to the grabbing parameters of the grabbing device; see FIG. 1 for more description of the grabbing parameters.
In some embodiments, the grabbing stability prediction model may be trained on a plurality of first training samples with first labels.
In some embodiments, a first training sample includes sample AGV data, sample material morphology parameters, sample material weight parameters, sample material position, and sample grabbing device data. The first training samples may be obtained from grabbing experiments: AGVs of different models at different levels grab different materials at different positions with different clamping points. Each experiment first plans a preset grabbing path according to the experimental conditions (e.g., the relevant information of the target object and the clamping point positions; see FIG. 1 for the path planning method) and grabs along it; during grabbing, the actual grabbing path, actual grabbing speed, actual clamping point, etc. of the grabbing device are recorded by sensors (such as position and speed sensors) as the first training sample.
In some embodiments, the first label is the actual grabbing stability corresponding to the first training sample and may be marked manually according to the experiment result. For example, the movement amplitude of the AGV is obtained by a sensor on the AGV (such as a position sensor), and the actual grabbing stability is determined from the movement amplitude, the actual grabbing path, and the preset grabbing path, then used as the first label. For instance, the actual grabbing stability may be the sum of a movement-amplitude score and an actual-grabbing-path score: the larger the AGV's movement amplitude, the lower the former, and the larger the difference between the actual and preset grabbing paths, the lower the latter.
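One plausible reading of this label rule, with assumed 50-point sub-scores and linear falloff; the patent fixes only the direction of both relationships.

```python
def grab_stability_label(move_amplitude_mm, path_deviation_mm,
                         amp_full=50.0, dev_full=100.0):
    """Compose the first label as the sum of two sub-scores (0-50 each): the
    larger the AGV movement amplitude, and the larger the deviation of the
    actual grabbing path from the preset one, the lower each sub-score. The
    point scales and the full-deduction distances are assumptions."""
    amp_score = max(0.0, 50.0 * (1.0 - move_amplitude_mm / amp_full))
    dev_score = max(0.0, 50.0 * (1.0 - path_deviation_mm / dev_full))
    return amp_score + dev_score

print(grab_stability_label(move_amplitude_mm=5.0, path_deviation_mm=12.0))  # 89.0
```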
FIG. 2B is an exemplary schematic diagram of a movement stability prediction model according to some embodiments of the present specification.
Referring to FIG. 2B, in some embodiments, the processor may predict the movement stability 260 of the collaborative robot based on a movement stability prediction model 250.
In some embodiments, the movement stability prediction model may be a machine learning model, such as a CNN, or another neural network model.
As shown in FIG. 2B, in some embodiments, the inputs to the movement stability prediction model 250 may include at least one collaboration route 240, AGV data 241, a preset moving speed 242, a material weight parameter 243, the model 244 of the grabbing device, and a preset steering speed 245; the output may include the movement stability 260 corresponding to each route segment of the collaboration route. See FIG. 1 for more description of the preset moving speed and the preset steering speed.
In some embodiments, the movement stability prediction model may be trained on a plurality of second training samples with second labels.
In some embodiments, a second training sample includes a sample collaboration route, sample AGV data, sample moving speed, sample material weight parameters, sample grabbing device model, and sample steering speed. The second label is the actual movement stability corresponding to the second training sample.
In some embodiments, the second training samples and second labels may be obtained from movement experiments, in which AGVs at different levels carrying grabbing devices of different models run on different collaboration routes. Each experiment records the change in the AGV's tilt angle data as the cooperative robot moves along the different routes, and the movement stability corresponding to each route is determined from that tilt angle data. The different experimental conditions are taken as the second training samples, and the actual movement stability under each condition is labeled as the second label.
In some embodiments of the present specification, predicting the stability of the collaborative robot in real time with the grabbing stability prediction model and the movement stability prediction model helps ensure its stability and effectively improves the execution efficiency and safety of collaborative tasks.
FIG. 3 is an exemplary flowchart of correcting a collaborative robot according to some embodiments of the present specification. As shown in FIG. 3, the process 300 includes the following steps. In some embodiments, the process 300 may be performed by a processor.
Step 310: in response to the collaborative robot being engaged in a linked collaborative task, determine the current characteristics of at least one collaborative robot under that task.
In some embodiments, the processor may determine whether multiple collaborative robots are engaged in a linked collaborative task by reading their positioning and task information.
Current characteristics are the characteristics of a collaborative robot relevant to executing the collaborative task, including, for example, positioning information, the task currently being executed (including the collaboration route, etc.), execution progress, moving speed, and instructions to be executed.
In some embodiments, the current characteristics may be determined in various ways: positioning information, execution progress, moving speed, etc. may be determined through the identification guiding device, while the currently executed task and the instructions to be executed may be read from the processor.
Step 320: evaluate the abnormal risk of the linked collaborative task based on the current characteristics of the at least one collaborative robot.
Abnormal risk covers the various risks arising while the collaborative robots execute the collaborative task. In some embodiments, it may include the risk that a collaborative robot collides with other collaborative robots while performing the task; the abnormal risk is related to the probability of collision between the robots, and the higher that probability, the greater the abnormal risk.
In some embodiments, the processor may evaluate the abnormal risk in various ways. For example, the processor may analyze the current characteristics of the multiple cooperative robots performing the same linked collaborative task and, from their positioning information, moving speeds, collaboration routes, and so on, calculate the collision risk for each pair of robots, thereby evaluating the abnormal risk of the linked collaborative task.
In some embodiments, the collision risk between two cooperative robots may be determined from the distance between them: when the distance s is greater than a distance threshold x, the collision risk is 0; when s is 0 (a collision has occurred), the collision risk is 1; when s is between 0 and x, the collision risk is 1 - s/x, i.e., the smaller the distance s, the greater the collision risk.
In some embodiments, when only two cooperative robots in a linked collaborative task carry a collision risk, the abnormal risk of the task equals that collision risk; when multiple robots carry collision risks, the abnormal risk of the task equals a fusion value (e.g., a weighted fusion value) of the several collision risks.
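The piecewise collision-risk rule and one possible fusion are sketched below; the weighted mean is an assumption, since the embodiments say only "fusion value (e.g., weighted fusion value)".

```python
def collision_risk(s, x):
    """Piecewise rule from above: 0 beyond the distance threshold x, 1 on
    contact, and 1 - s/x in between (risk grows as the robots close in)."""
    if s <= 0:
        return 1.0
    if s >= x:
        return 0.0
    return 1.0 - s / x

def task_abnormal_risk(pair_risks, weights=None):
    """Fuse per-pair collision risks into one task-level abnormal risk; a
    weighted mean is one plausible fusion, not the patent's prescribed one."""
    weights = weights or [1.0] * len(pair_risks)
    return sum(r * w for r, w in zip(pair_risks, weights)) / sum(weights)

print(collision_risk(s=1.5, x=2.0))          # 0.25
print(task_abnormal_risk([0.25, 0.6, 0.0]))  # ~0.283
```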
In some embodiments, the processor may evaluate the abnormal risk of the linked collaborative task based on an abnormal risk assessment model.
In some embodiments, the processor may construct a graph (hereinafter, the linkage graph) whose nodes are the cooperative robots participating in the same linked collaborative task. A graph is a data structure made up of nodes and edges connecting them, both of which may carry attributes; an edge may connect any two nodes.
In some embodiments, the node attributes of the graph may reflect the current characteristics of the collaborative robot, e.g., positioning information, the task currently being executed (including the collaboration route, etc.), execution progress, moving speed, and instructions to be executed.
In some embodiments, the edge attributes may reflect the distance characteristics between the cooperative robots, including the straight-line distance and the level difference, i.e., the difference in horizontal level between the two robots. For more on the level, see FIG. 1 and its associated description.
In some embodiments, the abnormal risk assessment model may be a graph neural network (GNN) model. Its input may be the linkage graph constructed in the foregoing manner, and its output the collision risk between the cooperative robots, each edge of the GNN outputting the collision risk between the two robots it connects.
In some embodiments, the abnormal risk assessment model may also be another graph model, such as a graph convolutional neural network (GCN), or the graph neural network may be extended with other processing layers or modified processing methods.
In some embodiments, the abnormal risk assessment model may be trained on a plurality of third training samples with third labels. A third training sample comprises a sample linkage graph constructed from the sample cooperative robots' positioning information, currently executed tasks, and instructions to be executed; the third label is the collision risk between the sample cooperative robots. The third training samples and labels may be determined from historical data.
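A sketch of the linkage graph construction (nodes carry current characteristics, edges carry straight-line distance and level difference); the dataclass fields are a pared-down assumption, and actually running a GNN over this graph is outside the sketch.

```python
import math
from dataclasses import dataclass, field

@dataclass
class RobotNode:
    """Node attributes mirror the 'current characteristics' listed above."""
    robot_id: str
    position: tuple          # (x, y) in metres
    level: float             # horizontal level; see the FIG. 1 discussion
    speed: float             # moving speed in m/s
    pending_instructions: list = field(default_factory=list)

def build_linkage_graph(robots):
    """Build a fully connected linkage graph: one node per cooperative robot,
    one edge per robot pair with straight-line distance and level difference."""
    nodes = {r.robot_id: r for r in robots}
    edges = {}
    ids = sorted(nodes)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            ra, rb = nodes[a], nodes[b]
            edges[(a, b)] = {
                "distance": math.dist(ra.position, rb.position),
                "level_diff": abs(ra.level - rb.level),
            }
    return nodes, edges

robots = [RobotNode("r1", (0.0, 0.0), 0.10, 0.5),
          RobotNode("r2", (3.0, 4.0), 0.12, 0.4)]
_, edges = build_linkage_graph(robots)
print(edges[("r1", "r2")])  # distance 5.0, level_diff ~0.02
```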
In some embodiments of the present specification, evaluating the abnormal risk of the linked collaborative task with the abnormal risk assessment model makes it possible to effectively assess the risk of collision between the cooperative robots.
Step 330: in response to the abnormal risk meeting a second preset condition, correct the at least one cooperative robot.
The second preset condition relates to the abnormal risk and may be used to determine whether the collaborative robot needs correction. In some embodiments, the second preset condition includes the abnormal risk being greater than a risk threshold. The risk threshold may be a system preset value, an empirical value, a manually preset value, or any combination thereof, set according to actual requirements; this specification does not limit it.
Correction is the adjustment applied to the cooperative robot, for example adjusting its moving speed, moving direction, position, or collaboration route.
In some embodiments, the correction includes at least a positioning correction.
Positioning correction means correcting the position of the cooperative robot.
In some embodiments, for each of the plurality of collaborative robots to be corrected whose abnormal risk meets the second preset condition, the processor may determine at least one simulated position. For each set of simulated positions, the processor constructs a map from the data of the robots to be corrected (placed at their simulated positions) together with the data of the normal cooperative robots (i.e., those whose abnormal risk does not meet the second preset condition), and inputs the map into the GNN. When the abnormal risks among the cooperative robots in a given map are all below the risk threshold, the simulated positions corresponding to that map are taken as the final adjusted positions, and instructions are generated based on these positions to correct the positions of the robots to be corrected.
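A minimal sketch of this search over simulated positions, shown per robot for simplicity (the specification considers all robots to be corrected jointly); `build_graph`, the candidate-position generator, and the data layout are illustrative assumptions.

```python
def correct_position(robot, others, model, build_graph, risk_threshold,
                     candidate_positions):
    """Return the first simulated position whose rebuilt linkage map scores
    every pairwise risk below the threshold, or None if none qualifies."""
    for sim_pos in candidate_positions:
        trial = {**robot, "position": sim_pos}      # robot at the simulated position
        x, ei, ea = build_graph([trial] + others)   # map incl. normal robots
        risks = model(x, ei, ea)                    # GNN pairwise collision risks
        if risks.max().item() < risk_threshold:
            return sim_pos                          # adopt as final adjusted position
    return None                                     # fall back to other corrections
```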
In some embodiments, the correction further includes instruction correction.
Instruction correction means that the collaborative robot is corrected by generating an instruction. For example, the processor may correct the collaborative tasks performed by the collaborative robot.
In some embodiments, for each of the collaborative robots to be corrected whose abnormal risk satisfies the second preset condition, the processor may further search the robot's pending instructions for one or more instructions (referred to as replacement instructions) whose execution level (e.g., importance, priority) matches that of the currently executed task, and substitute a replacement instruction for the current task (the result being referred to as the reference current execution task). The processor then constructs a map from each group of robot data with a replacement instruction applied, together with the data of the normal cooperative robots, and inputs it into the GNN. When the abnormal risks among the cooperative robots in a given map are below the risk threshold, the robots to be corrected are corrected based on the reference current execution task corresponding to that map, i.e., the current task is replaced with the reference current execution task.
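A hedged sketch of the instruction-correction search; as above, `build_graph` and the dictionary layout (with `current_task`, `pending`, and `level` fields) are illustrative stand-ins.

```python
def correct_instruction(robot, others, model, build_graph, risk_threshold):
    """Try each pending instruction whose execution level matches the current
    task; keep the first swap that drives all pairwise risks below threshold."""
    level = robot["current_task"]["level"]
    replacements = [i for i in robot["pending"] if i["level"] == level]
    for ref_task in replacements:                   # "reference current execution task"
        trial = {**robot, "current_task": ref_task}
        x, ei, ea = build_graph([trial] + others)
        if model(x, ei, ea).max().item() < risk_threshold:
            return ref_task                         # replace the current task with it
    return None
```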
In some embodiments, the positioning correction and the instruction correction may be performed separately, sequentially, or in a mixed manner, which is not limited in this specification.
Through positioning correction and instruction correction, both the current position and the currently executed instruction of the collaborative robot can be adjusted, achieving flexible correction of the collaborative robot.
In some embodiments, corrections have priorities, and the processor may correct the cooperative robot with the higher priority first. In some embodiments, priority is ordered by abnormal risk: the greater the abnormal risk, the higher the priority.
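A small sketch of the priority rule, assuming per-robot abnormal risks are already available; the data layout and the correction call are placeholders.

```python
# Toy data: abnormal risks per robot; the structure is illustrative only.
robots = [{"id": "r1", "abnormal_risk": 0.9},
          {"id": "r2", "abnormal_risk": 0.4},
          {"id": "r3", "abnormal_risk": 0.7}]
risk_threshold = 0.5

to_correct = [r for r in robots if r["abnormal_risk"] > risk_threshold]
# Greater abnormal risk -> higher priority -> corrected first.
for robot in sorted(to_correct, key=lambda r: r["abnormal_risk"], reverse=True):
    print("correcting", robot["id"])   # stand-in for positioning/instruction correction
```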
By correcting the cooperative robots in priority order, those with a high probability of abnormality are corrected first, effectively reducing the risk of collision between cooperative robots.
In the embodiments above, the abnormal risk of the linked collaborative task is evaluated based on the current characteristics of the multiple cooperative robots, and when the abnormal risk meets the second preset condition, at least one robot is corrected, effectively ensuring that the linked collaborative task proceeds smoothly.
One of the embodiments of the present specification provides a collaborative robotic device including at least one memory and at least one processor; the at least one memory is configured to store computer instructions, and the at least one processor is configured to execute at least a portion of the computer instructions to implement the collaborative robot control method according to any one of the embodiments described above.
One of the embodiments of the present specification provides a computer-readable storage medium storing computer instructions; when a computer reads the computer instructions in the storage medium, the computer executes the cooperative robot control method according to any one of the embodiments described above.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly stated here, various modifications, improvements, and adaptations of this specification may occur to those skilled in the art. Such modifications, improvements, and adaptations are suggested by this specification and are therefore intended to fall within the spirit and scope of its exemplary embodiments.
Meanwhile, this specification uses specific words to describe its embodiments. References to "one embodiment," "an embodiment," and/or "some embodiments" mean that a particular feature, structure, or characteristic is included in at least one embodiment of this specification. Thus, it should be emphasized and appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places in this specification do not necessarily refer to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of this specification may be combined as appropriate.
Furthermore, the order in which elements and sequences are processed in this specification, the use of numbers and letters, or the use of other designations is not intended to limit the order of the processes and methods of this specification unless explicitly recited in the claims. While certain presently useful inventive embodiments have been discussed in the foregoing disclosure by way of various examples, it is to be understood that such details are merely illustrative and that the appended claims are not limited to the disclosed embodiments but, on the contrary, are intended to cover all modifications and equivalent arrangements within the spirit and scope of the embodiments of this specification. For example, while the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as by installing the described system on an existing server or mobile device.
Likewise, it should be noted that, to simplify the presentation of this disclosure and thereby aid understanding of one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, does not imply that the subject matter of this specification requires more features than are recited in the claims. Indeed, claimed subject matter may lie in less than all features of a single embodiment disclosed above.
In some embodiments, numbers are used to describe quantities of components and attributes; it should be understood that such numbers used in the description of the embodiments are, in some examples, modified by the words "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows for a variation of 20%. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending on the desired properties of individual embodiments. In some embodiments, numerical parameters should take into account the specified significant digits and employ ordinary rounding. Although the numerical ranges and parameters used to confirm the breadth of the ranges in some embodiments of this specification are approximations, in particular embodiments such numerical values are set as precisely as practicable.
Each patent, patent application publication, and other material, such as articles, books, specifications, publications, and documents, referred to in this specification is hereby incorporated by reference in its entirety, except for application history documents that are inconsistent with or conflict with the content of this specification, and except for documents (now or later associated with this specification) that limit the broadest scope of the claims of this specification. It is noted that if the description, definition, and/or use of a term in material incorporated by reference does not conform to or conflicts with the description of that term in this specification, the description, definition, and/or use of the term in this specification controls.
Finally, it should be understood that the embodiments described in this specification are merely illustrative of the principles of the embodiments of this specification. Other variations are possible within the scope of this description. Thus, by way of example, and not limitation, alternative configurations of embodiments of the present specification may be considered as consistent with the teachings of the present specification. Accordingly, the embodiments of the present specification are not limited to only the embodiments explicitly described and depicted in the present specification.

Claims (10)

1. A collaborative robot, characterized in that the collaborative robot comprises a movable base, a gripping device, a driving device, an identification guiding device, and a processor; the identification guide device comprises an electromagnetic identification guide device and a visual identification guide device;
the processor is configured to:
in response to receiving a collaboration instruction, extracting a first location parameter and a second location parameter from the collaboration instruction, the first location parameter indicating an initial location of a target object, the second location parameter indicating a target location of the target object;
determining at least one collaboration route based on the first location parameter and the second location parameter;
and controlling the movable base and the grabbing device to move under the guidance of the electromagnetic identification guiding device and/or the visual identification guiding device based on the at least one cooperation route so as to complete cooperation tasks.
2. The cooperative robot of claim 1, wherein the movable base includes at least one retractable foot, the processor further configured to:
determining the stability of the cooperative robot in real time;
and adjusting the operation parameters of the cooperative robot in response to the stability failing to meet a first preset condition, wherein the operation parameters at least comprise the telescopic parameters of the at least one telescopic supporting foot.
3. The cooperative robot of claim 2, wherein the identification guiding device further includes an inclination identification device, and the stability includes movement stability and grabbing stability;
the processor is further configured to:
predicting the grabbing stability of the collaborative robot based on a grabbing stability prediction model;
predicting the movement stability of the collaborative robot based on a movement stability prediction model;
the grabbing stability prediction model and the moving stability prediction model are machine learning models.
4. A collaborative robot according to claim 3, wherein the processor is further configured to:
determining grabbing adjustment parameters of the cooperative robot based on the grabbing stability prediction model;
and determining a movement adjustment parameter of the cooperative robot based on the movement stability prediction model.
5. The collaborative robot of claim 1, wherein the processor is further configured to:
determining the current characteristics of at least one cooperative robot under a linkage cooperative task in response to the cooperative robot being in the execution of the linkage cooperative task;
evaluating an abnormal risk of the linked collaborative task based on the current characteristics of at least one of the collaborative robots;
and responding to the abnormal risk meeting a second preset condition, and correcting at least one cooperative robot, wherein the correction at least comprises positioning correction.
6. A collaborative robotic control system, the system comprising:
an extraction module, configured to, in response to receiving a cooperation instruction, extract a first position parameter and a second position parameter from the cooperation instruction, the first position parameter indicating the initial position of a target object and the second position parameter indicating the target position of the target object;
a determining module for determining at least one collaboration route based on the first location parameter and the second location parameter;
and a control module, configured to control the movable base and the grabbing device to move under the guidance of the electromagnetic identification guiding device and/or the visual identification guiding device so as to complete the cooperation task.
7. The system of claim 6, wherein the determining module is further configured to: determine the stability of the cooperative robot in real time; and the control module is further configured to: adjust the operation parameters of the cooperative robot in response to the stability failing to meet a first preset condition, wherein the operation parameters at least comprise the telescopic parameters of the at least one telescopic supporting foot.
8. The system of claim 7, wherein the determining module is further configured to:
predicting the grabbing stability of the collaborative robot based on a grabbing stability prediction model;
predicting the movement stability of the collaborative robot based on a movement stability prediction model;
the grabbing stability prediction model and the moving stability prediction model are machine learning models.
9. A collaborative robotic device comprising at least one memory and at least one processor; the at least one memory stores computer instructions, and the at least one processor is configured to execute at least a portion of the computer instructions to implement the method of any one of claims 1-5.
10. A computer-readable storage medium storing computer instructions, wherein, when a computer reads the computer instructions in the storage medium, the computer performs the method of any one of claims 1-5.
CN202310732510.7A 2023-06-20 2023-06-20 Cooperative robot and control method Active CN116690565B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202410185007.9A CN117885099A (en) 2023-06-20 2023-06-20 Collaborative robot and operation management system
CN202310732510.7A CN116690565B (en) 2023-06-20 2023-06-20 Cooperative robot and control method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310732510.7A CN116690565B (en) 2023-06-20 2023-06-20 Cooperative robot and control method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202410185007.9A Division CN117885099A (en) 2023-06-20 2023-06-20 Collaborative robot and operation management system

Publications (2)

Publication Number Publication Date
CN116690565A true CN116690565A (en) 2023-09-05
CN116690565B CN116690565B (en) 2023-12-26

Family

ID=87843021

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202410185007.9A Pending CN117885099A (en) 2023-06-20 2023-06-20 Collaborative robot and operation management system
CN202310732510.7A Active CN116690565B (en) 2023-06-20 2023-06-20 Cooperative robot and control method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202410185007.9A Pending CN117885099A (en) 2023-06-20 2023-06-20 Collaborative robot and operation management system

Country Status (1)

Country Link
CN (2) CN117885099A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2347867A1 (en) * 2010-01-18 2011-07-27 Samsung Electronics Co., Ltd. Walking control apparatus of robot and method of controlling the same
US20160031087A1 (en) * 2002-04-15 2016-02-04 Qualcomm Incorporated Method and system for robotic positioning
CN105843230A (en) * 2016-06-04 2016-08-10 浙江侍维波机器人科技有限公司 Dynamic adaptive stability control system for mobile robot
CN106985145A (en) * 2017-04-24 2017-07-28 合肥工业大学 One kind carries transfer robot
CN107097227A (en) * 2017-04-17 2017-08-29 北京航空航天大学 A kind of man-machine collaboration robot system
CN109605371A (en) * 2018-12-17 2019-04-12 北京卫星制造厂有限公司 A kind of movable type series-parallel robot process system
CN112454352A (en) * 2020-10-30 2021-03-09 杨兴礼 Self-leveling and navigation and movement method and system, electronic equipment and medium
CN115122391A (en) * 2022-06-07 2022-09-30 深圳墨影科技有限公司 Mobile cooperative robot stabilizing system and control method thereof

Also Published As

Publication number Publication date
CN117885099A (en) 2024-04-16
CN116690565B (en) 2023-12-26

Similar Documents

Publication Publication Date Title
JP6822719B2 (en) Robot system with automatic package scanning and registration mechanism, and how it works
US20230124091A1 (en) Systems and methods for process tending with a robot arm
JP5503419B2 (en) Automated guided vehicle and travel control method
CN110673612A (en) Two-dimensional code guide control method for autonomous mobile robot
US9785911B2 (en) System and method for piece-picking or put-away with a mobile manipulation robot
KR102159847B1 (en) Determination of grip spaces related to object using a robot
JP2020534621A (en) Cost of Optimal Mutual Collision Avoidance-Dynamic Window Approach with Critics
US20150316925A1 (en) Automation system and a method for tending a production system
US6246931B1 (en) Method of and apparatus for determining optimum delivery route for articles
KR20180109107A (en) Intelligent cart robotic devices for use in stores and warehouses
US11345032B2 (en) Autonomous moving body and control program for autonomous moving body
CN109571408B (en) Robot, angle calibration method of inventory container and storage medium
CN114405866B (en) Visual guide steel plate sorting method, visual guide steel plate sorting device and system
US11117260B2 (en) Method for controlling a plurality of mobile driverless manipulator systems
Behrje et al. An autonomous forklift with 3d time-of-flight camera-based localization and navigation
CN112631269A (en) Autonomous mobile robot and control program for autonomous mobile robot
CN116690565B (en) Cooperative robot and control method
WO2022169772A1 (en) Automated unit load fulfillment methods and systems
CN112428248B (en) Robot system and control method
KR101933827B1 (en) The movable logistics transportation robot system having fork-type lifter and operation method thereof
US20190384307A1 (en) Autonomous moving body and control program for autonomous moving body
JP2019112236A (en) Movable projector type picking device
CN115157245A (en) Mechanical arm control system and method based on deep learning
WO2022014133A1 (en) Mobile manipulator, method for controlling same, program
JP6727159B2 (en) Movable projector type picking device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant