CN112847357A - File-fetching robot control method and system - Google Patents

File-fetching robot control method and system

Info

Publication number
CN112847357A
Authority
CN
China
Prior art keywords
robot
archive
information
file
target file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011640424.6A
Other languages
Chinese (zh)
Other versions
CN112847357B (en)
Inventor
Li Zhihong
Xu Tianxiang
Zhu Jun
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Zhixing Wulian Technology Co ltd
Original Assignee
Ningbo Zhixing Wulian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Zhixing Wulian Technology Co ltd filed Critical Ningbo Zhixing Wulian Technology Co ltd
Priority to CN202011640424.6A
Publication of CN112847357A
Application granted
Publication of CN112847357B
Active legal status (current)
Anticipated expiration

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: Programme controls characterised by motion, path, trajectory planning
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a file-fetching robot control method and system. The method comprises the following steps: collecting image information of a target file, a target file box, and a target file rack; acquiring the current position and state information of the file-fetching robot from the image information by visual positioning; generating a correction instruction from the robot's historical position and state information; sending the correction instruction to the robot's PLC system to correct the robot's current position and state information; and driving the robot gripper to grab or move the target file according to the corrected position and state information.

Description

File-fetching robot control method and system
Technical Field
The invention relates to the field of the Internet of Things, and in particular to a file-fetching robot control method and system.
Background
Archive-fetching robots in the prior art merely use an electric control system to drive the robot to grab the archive at a given position. Such robots have no sensing devices and cannot identify information such as the position of an archive or the form of an archive box. As a result, existing solutions cannot manage and operate archives at a fine-grained level, and cannot restore an archive to its correct position if it has shifted.
Disclosure of Invention
One object of the invention is to provide a file-fetching robot control method and system in which a visual recognition module is mounted on the robot body; the module identifies the condition of a file or file box through a deep learning model, enabling fine-grained control and management of files.
Another object of the invention is to provide a file-fetching robot control method and system with a laser range finder that accurately positions the robot, so that the movements and rotations of the robot gripper during file fetching are more precise.
Another object of the invention is to provide a file-fetching robot control method and system that adopts an adaptive scheme by which the robot continuously self-corrects using current and historical positioning data, improving the level of file control and management.
Another object of the invention is to provide a file-fetching robot control method and system that adopts dual positioning, combining visual and laser positioning, to make file management more accurate.
In order to achieve at least one of the above objects, the present invention provides a file-fetching robot control method comprising the steps of:
collecting image information of a target file, a target file box, and a target file rack;
acquiring the current position and state information of the file-fetching robot from the image information by visual positioning;
generating a correction instruction according to historical position and state information of the file-fetching robot;
sending the correction instruction to the file-fetching robot's PLC system to correct the robot's current position and state information;
and driving the robot gripper to grab or move the target file according to the corrected position and state information.
According to a preferred embodiment of the invention, images of the baffles on the two sides of the file rack are recognized, and the robot gripper and its rotation direction are finely positioned against the two baffles by visual positioning.
According to another preferred embodiment of the invention, an RFID signal of the file is acquired, the target file is coarsely positioned according to the RFID signal to obtain its coarse position information, and the robot gripper is driven toward the file according to the coarse position information.
According to another preferred embodiment of the invention, a laser signal is emitted toward the target file and the reflected laser signal is received, the distance between the target file and the robot gripper is calculated from the time difference between the emitted and reflected signals, and the PLC system generates pulses according to that distance to drive the robot gripper to grab or move the target file.
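As an illustration of this time-of-flight step, the sketch below computes the file-to-gripper distance from the emit/receive time difference and converts a travel distance into a PLC pulse count; the pulses-per-millimetre scaling is a hypothetical servo parameter, not a value from the patent.

```python
# Illustrative time-of-flight sketch; pulses_per_mm is an assumed servo constant.
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(emit_time_s: float, receive_time_s: float) -> float:
    """One-way distance derived from the round trip of a laser pulse."""
    return C * (receive_time_s - emit_time_s) / 2.0

def pulses_for_travel(distance_m: float, pulses_per_mm: float = 80.0) -> int:
    """Convert a travel distance into a pulse count for the gripper drive axis."""
    return round(distance_m * 1000.0 * pulses_per_mm)
```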
According to another preferred embodiment of the invention, the image information of the target file and of the target file rack is stored together with historical offsets; the compensation offset for the gripper's next approach to the file is calculated from the historical offsets, and the robot gripper is driven to move according to the compensation offset.
According to another preferred embodiment of the invention, an RFID signal is acquired to obtain coarse position information of the file box, a laser is emitted toward the box position and the reflection is received, whether the position of the file box is empty is judged from the time difference between emission and reception, and if the position is empty the grabbing operation is not executed.
According to another preferred embodiment of the invention, an offset curve is formed from the saved historical offsets; the positioning performance is analyzed from the shape of the curve, and the cause of positioning error is inferred from abrupt changes or inflection points in the curve.
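A minimal sketch of this offset-curve analysis might look as follows, flagging abrupt jumps (possible rack or file displacement) and inflection points (possible onset of drift) in the stored offset series; the jump threshold is an assumed value.

```python
# Hedged sketch of the offset-curve analysis; the jump threshold is assumed.
import numpy as np

def analyze_offset_curve(offsets, jump_threshold=5.0):
    x = np.asarray(offsets, dtype=float)
    slope = np.diff(x)                       # first difference: local slope
    curvature = np.diff(slope)               # second difference: local curvature
    jumps = (np.where(np.abs(slope) > jump_threshold)[0] + 1).tolist()
    inflections = (np.where(np.sign(curvature[:-1])
                            != np.sign(curvature[1:]))[0] + 2).tolist()
    return jumps, inflections                # indices into the offset series
```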
According to another preferred embodiment of the invention, a trained deep convolutional neural network is used to recognize the state information of the file box, where the state information includes changes in the appearance of the file box.
According to another preferred embodiment of the invention, the opening width of the robot gripper is adjusted according to the recognized appearance change; bulge and crack images among the appearance changes of the file box are recognized and uploaded so that alarm information can be issued.
In order to achieve at least one of the above objects, the present invention further provides a file-fetching robot control system comprising:
a grabbing module;
a visual recognition module;
a control module;
a storage module;
the robot gripper comprises a gripping module, a visual identification module and a storage module, wherein the gripping module comprises a camera, the camera is installed on the gripper, the visual identification module collects image information of archives, an archive frame and an archive box, the current position information and state information of the robot gripper are acquired by adopting a visual positioning technology according to the image information of the archive frame, the control module receives the recognized image information and then recognizes the appearance form and appearance change of a target in the image information by adopting a convolutional neural network, uploads the image information according to the recognized appearance form, and the gripping mode of the gripping module is adjusted according to the recognized appearance change, and the storage module stores the image information.
Drawings
FIG. 1 is a schematic flow chart of the file-fetching robot control method of the present invention;
fig. 2 is a schematic diagram of the file-fetching robot control system modules of the present invention.
Detailed Description
The following description is presented to disclose the invention so as to enable any person skilled in the art to practice the invention. The preferred embodiments in the following description are given by way of example only, and other obvious variations will occur to those skilled in the art. The basic principles of the invention, as defined in the following description, may be applied to other embodiments, variations, modifications, equivalents, and other technical solutions without departing from the spirit and scope of the invention.
It will be understood by those skilled in the art that in the present disclosure, the terms "longitudinal," "lateral," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are used in an orientation or positional relationship indicated in the drawings for ease of description and simplicity of description, and do not indicate or imply that the referenced devices or components must be in a particular orientation, constructed and operated in a particular orientation, and thus the above terms are not to be construed as limiting the present invention.
It should be understood that the term "a" or "an" means "at least one" or "one or more"; that is, in one embodiment the number of an element may be one, while in another embodiment the number of that element may be plural. The terms "a" and "an" are therefore not to be construed as limiting the quantity.
Referring to fig. 1 and 2, the present invention discloses a file-fetching robot control method and system. The file-fetching robot includes a grabbing module, a visual recognition module, a control module, and a storage module. The grabbing module includes a gripper, a rotating shaft, a moving rod, and the like; the gripper is connected to a driving device, including but not limited to a servo motor, which drives the gripper to grab a target file. The visual recognition module comprises at least one camera and is communicatively connected to the control module; the camera is mounted on the gripper to capture the gripper's grabbing direction and image information near the gripper, from which the gripper is visually positioned. The control module can be connected to a remote control center or the cloud; it sends gripping instructions to the gripper and corrects the positioning information acquired by the visual recognition module against historical positioning information. After acquiring the image information, the control module identifies the relevant features in it and uploads them to the control platform.
Specifically, the position of the robot gripper is determined by visual positioning: the visual recognition module identifies the baffles on the two sides of the file rack and determines the gripper's current position and state information from the baffle positions. The state information includes the gripper's angle relative to the file rack and file box and the gripper's rotation or clamping angle; the position information is the gripper's coordinates in a virtual coordinate system maintained by the control module. Visual positioning is a conventional technique that requires only two recognizable, non-parallel reference planes in the image; in the present invention the extension planes of the baffles on the two sides of the file rack are arranged to intersect, so the positioning procedure itself is not described in detail.
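Assuming that known corner points on the two baffles can be detected in the image and that the gripper camera is calibrated, this pose recovery could be done with a standard perspective-n-point solve, as in the hedged sketch below; the 3D point layout is invented for illustration.

```python
# Hedged sketch: gripper pose from known points on the two rack baffles via PnP.
# The 3D point layout and camera intrinsics here are illustrative assumptions.
import numpy as np
import cv2

# Non-coplanar reference points on the left and right baffles, rack frame (mm).
BAFFLE_POINTS = np.array([
    [0.0,   0.0,   0.0], [0.0,   120.0, 0.0], [0.0,   0.0, 250.0],   # left baffle
    [300.0, 0.0,   0.0], [300.0, 120.0, 0.0], [300.0, 0.0, 250.0],   # right baffle
], dtype=np.float32)

def gripper_pose(image_points: np.ndarray, camera_matrix: np.ndarray):
    """image_points: 6x2 detected baffle corners; camera_matrix: 3x3 intrinsics."""
    ok, rvec, tvec = cv2.solvePnP(BAFFLE_POINTS, image_points.astype(np.float32),
                                  camera_matrix, distCoeffs=None)
    if not ok:
        raise RuntimeError("visual positioning failed")
    rotation, _ = cv2.Rodrigues(rvec)        # 3x3 orientation matrix
    return rotation, tvec                    # pose of the rack frame in camera frame
```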
It should be noted that the position of the robot gripper may be inaccurate for the following reasons: 1. inaccurate initial positioning, i.e. the initial coordinates entered at first operation may differ substantially from the actual coordinates, so the gripper moves to the wrong place; 2. accumulated error from mechanical wear and other factors during the gripper's operation, which grows over time; 3. slight movement of the files or the file rack, caused by people or other factors, which offsets the files relative to the gripper. In each case the deviation manifests as the spatial offset between the robot gripper and the correct position.
To correct these deviations, the invention further collects an image of every grabbing operation the gripper performs on a file box, calculates the gripper position data and file-box position data of each operation by visual positioning, and stores them as historical data after each operation.
Further, the system modifies new operating instructions based on the historical data. For example, when the gripper is to grab a target file box, the box's coordinate position must first be input. Before it is input, the system reads the last n1 historical coordinate records of the target box, where n1 may be set in the range 100 ≥ n1 ≥ 1, computes the average of those historical coordinates, and calculates the difference between the historical average and the preset coordinate, the difference being a distance in the spatial coordinate system. A first difference threshold is set; if the difference exceeds it, the difference is compensated onto the preset coordinate value and a compensated coordinate position command is generated, which drives the gripper to the new coordinate. This method effectively prevents large gripper offsets caused by accumulated error and inaccurate initial positioning, and because it is self-adaptive and self-correcting it reduces the robot's maintenance cost. In one preferred embodiment of the present invention, the coordinate position of the file box is obtained through an RFID module: an RFID tag is fixed at a designated position on the file box, an RFID reader is mounted on the robot gripper and communicatively connected to the control module, the reader acquires the coarse position of the target box, and visual positioning then yields comparatively accurate position data. In another preferred embodiment, a laser range finder mounted on the robot gripper emits a laser beam outward and is communicatively connected to the control module; the control module senses positions on the file rack through the range finder and judges whether a file box is present at a given rack position from the time difference between emitting the beam and receiving its reflection. The laser range finder is further used to determine whether obstacles lie in the gripper's path, the distance and angle between the gripper and the file rack, and the depth at which the gripper should grab the file box.
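A compact sketch of this averaging-and-compensation step is given below; the bound 100 ≥ n1 ≥ 1 follows the text, while the threshold value is an assumption.

```python
# Sketch of the self-correcting coordinate compensation described above.
# The first difference threshold (mm) is an assumed value.
import numpy as np

def compensated_coordinate(preset_xyz, history_xyz, n1=100, first_threshold_mm=2.0):
    """Average the last n1 stored box coordinates (100 >= n1 >= 1) and move the
    preset coordinate onto that average when the drift exceeds the threshold."""
    preset = np.asarray(preset_xyz, dtype=float)
    recent = np.asarray(history_xyz[-n1:], dtype=float)
    mean = recent.mean(axis=0)
    drift = np.linalg.norm(mean - preset)    # distance in the spatial frame
    return mean if drift > first_threshold_mm else preset
```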
Furthermore, since visual positioning can determine the gripper's own position from two intersecting reference planes, and since the file rack is relatively stable and the file boxes are arranged according to the rack's position, the gripper can locate itself against the intersecting baffles on the two sides of the rack; the position data and state information from every visual positioning run are stored in the storage module as historical data. When the system generates a new operating instruction for a target file, it calculates the gripper's grasp coordinate from the position of the file box and stores it as historical data. The last n2 historical grasp coordinates are read, where n2 may be set in the range 100 ≥ n2 ≥ 1, and their average is computed. A second difference threshold is set on the difference between this average and the actual coordinate of the file box obtained from visual positioning; if the difference exceeds the threshold, the grasp position is compensated by adding part of the difference to the grasp coordinate, bringing it closer to the box's accurate grasp position. It should be noted that this compensation shrinks the spatial distance and may be implemented as an incremental or decremental modification of the coordinate data.
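The partial compensation could be sketched as follows, with the compensation gain (the fraction of the difference applied) as an assumed parameter.

```python
# Sketch of the partial grasp-position compensation; gain is an assumption.
import numpy as np

def compensate_grasp(grasp_history_mean, measured_box_xyz,
                     second_threshold_mm=1.5, gain=0.5):
    mean = np.asarray(grasp_history_mean, dtype=float)
    measured = np.asarray(measured_box_xyz, dtype=float)
    diff = measured - mean
    if np.linalg.norm(diff) <= second_threshold_mm:
        return mean                          # within tolerance: keep stored pose
    return mean + gain * diff                # move part of the way toward the box
```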
It is worth mentioning that the present invention further employs a deep convolutional neural network to recognize appearance changes of the file box, including color changes and surface structure changes such as bulges and cracks. The network is trained in advance; the training process and parameters are not detailed in this invention. When the collected image shows features such as bulges or cracks, the system issues alarm information so that maintenance personnel can repair or replace the box. The main causes of such appearance changes are: 1. the box is overfilled, deforming it under load; 2. the paper files inside are exposed to excessive ambient humidity, damaging the box; 3. the box is damaged during grabbing or use.
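Since the patent leaves the network architecture and training undisclosed, the following is only a minimal assumed example of a three-class appearance classifier (normal / bulge / crack), written in PyTorch.

```python
# Minimal assumed CNN for box-appearance classification; architecture and labels
# are illustrative only, not the patent's model.
import torch
import torch.nn as nn

class BoxAppearanceNet(nn.Module):
    """Tiny illustrative CNN; a real system would use a deeper, pre-trained model."""

    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

LABELS = ("normal", "bulge", "crack")        # assumed label set

def classify_box(model: BoxAppearanceNet, image_chw: torch.Tensor) -> str:
    model.eval()
    with torch.no_grad():
        logits = model(image_chw.unsqueeze(0))   # add batch dimension
    return LABELS[int(logits.argmax(dim=1))]
```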
Furthermore, in another preferred embodiment of the present invention, the trained deep convolutional neural network can recognize the text on the surface of the file box, determine whether the position of that text within the camera frame has changed, and estimate the box's displacement from the change in the text's position.
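A hedged sketch of this displacement estimate: compare the bounding box of the detected surface text between a stored reference frame and the current frame, and scale the pixel shift by an assumed millimetres-per-pixel factor.

```python
# Illustrative text-shift estimate; the mm-per-pixel scale is an assumption.
def text_shift_mm(ref_box: tuple, cur_box: tuple, mm_per_px: float = 0.4) -> tuple:
    """Boxes are (x, y, w, h) from any text detector; returns (dx, dy) in mm."""
    dx = (cur_box[0] - ref_box[0]) * mm_per_px
    dy = (cur_box[1] - ref_box[1]) * mm_per_px
    return dx, dy
```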
An RFID tag of the RFID module is attached to every file box, and the module's reader collects all tag signals and associates each signal with its corresponding file box, so that the boxes can be checked and analyzed while being moved.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion, and/or installed from a removable medium. The computer program, when executed by a central processing unit (CPU), performs the above-described functions defined in the method of the present application. It should be noted that the computer readable medium mentioned above in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium, by contrast, may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wireline, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It will be understood by those skilled in the art that the embodiments of the present invention described above and illustrated in the drawings are given by way of example only and not by way of limitation, the objects of the invention having been fully and effectively achieved, the functional and structural principles of the present invention having been shown and described in the embodiments, and that various changes or modifications may be made in the embodiments of the present invention without departing from such principles.

Claims (10)

1. A file-fetching robot control method, characterized by comprising the following steps:
collecting image information of a target file, a target file box, and a target file rack;
acquiring current position and state information of the file-fetching robot from the image information by visual positioning;
generating a correction instruction according to historical position and state information of the file-fetching robot;
sending the correction instruction to the file-fetching robot's PLC system to correct the robot's current position and state information;
and driving the robot gripper to grab or move the target file according to the corrected position and state information.
2. The file-fetching robot control method as claimed in claim 1, wherein an RFID signal of the file is acquired, the target file is coarsely positioned according to the RFID signal to obtain its coarse position information, and the robot gripper is driven toward the file according to the coarse position information.
3. The file-fetching robot control method as claimed in claim 2, wherein images of the baffles on the two sides of the file rack are recognized, and the robot gripper and its rotation direction are precisely positioned against the two baffles by visual positioning.
4. The file-fetching robot control method as claimed in claim 1, wherein a laser signal is emitted toward the target file and the reflected laser signal is received, the distance between the target file and the robot gripper is calculated from the time difference between the emitted and reflected signals, and the PLC system generates pulses according to that distance to drive the robot gripper to grab or move the target file.
5. The file-fetching robot control method as claimed in claim 3, wherein the image information of the target file and of the target file rack is stored together with historical offsets, the compensation offset for the gripper's next approach to the file is calculated from the historical offsets, and the robot gripper is driven to move according to the compensation offset.
6. The file-fetching robot control method according to claim 1, wherein an RFID signal is acquired to obtain coarse position information of the file box, a laser is emitted toward the box position and the reflection is received, whether the position of the file box is empty is judged from the time difference between emission and reception, and if the position is empty the grabbing operation is not executed.
7. The file-fetching robot control method of claim 5, wherein an offset curve is formed from the stored historical offsets, the positioning performance is analyzed from the shape of the curve, and the cause of positioning error is analyzed from abrupt changes or inflection points in the curve.
8. The file-fetching robot control method of claim 1, wherein a trained deep convolutional neural network is used to recognize the state information of the file box, wherein the state information comprises changes in the appearance of the file box.
9. The file-fetching robot control method according to claim 1, wherein the opening width of the robot gripper is adjusted according to the recognized appearance change, bulge and crack images among the appearance changes of the file box are recognized, and those images are uploaded so that alarm information can be issued.
10. A file-fetching robot control system, comprising:
a grabbing module;
a visual recognition module;
a control module;
a storage module;
the robot gripper comprises a gripping module, a visual identification module and a storage module, wherein the gripping module comprises a camera, the camera is installed on the gripper, the visual identification module collects image information of archives, an archive frame and an archive box, the current position information and state information of the robot gripper are acquired by adopting a visual positioning technology according to the image information of the archive frame, the control module receives the recognized image information and then recognizes the appearance form and appearance change of a target in the image information by adopting a convolutional neural network, uploads the image information according to the recognized appearance form, and the gripping mode of the gripping module is adjusted according to the recognized appearance change, and the storage module stores the image information.
CN202011640424.6A 2020-12-31 2020-12-31 File-fetching robot control method and system Active CN112847357B

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011640424.6A CN112847357B 2020-12-31 2020-12-31 File-fetching robot control method and system

Publications (2)

Publication Number Publication Date
CN112847357A 2021-05-28
CN112847357B 2022-04-19

Family

ID=76000668

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011640424.6A Active CN112847357B 2020-12-31 2020-12-31 File-fetching robot control method and system

Country Status (1)

Country Link
CN (1) CN112847357B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009151419A (en) * 2007-12-19 2009-07-09 Advanced Telecommunication Research Institute International Method and apparatus for specifying target
CN108177143A (en) * 2017-12-05 2018-06-19 上海工程技术大学 A kind of robot localization grasping means and system based on laser vision guiding
CN209063109U (en) * 2019-04-15 2019-07-05 南京航浦机械科技有限公司 A kind of file administration robot
CN110355736A (en) * 2019-08-05 2019-10-22 福建(泉州)哈工大工程技术研究院 A kind of file administration robot
CN210307790U (en) * 2019-04-28 2020-04-14 国家电网有限公司 Automatic addressing archives robot
CN111645066A (en) * 2020-04-30 2020-09-11 南京理工大学 File management system and method combining visual guidance grabbing robot with radio frequency monitoring
CN111823236A (en) * 2020-07-25 2020-10-27 湘潭大学 Library management robot and control method thereof

Also Published As

Publication number Publication date
CN112847357B 2022-04-19

Similar Documents

Publication Publication Date Title
CN112476434B (en) Visual 3D pick-and-place method and system based on cooperative robot
EP3590093A1 (en) Methods and systems for detecting, recognizing, and localizing pallets
CN103419944B (en) Air bridge and automatic abutting method therefor
WO2019028075A1 (en) Intelligent robots
US11331799B1 (en) Determining final grasp pose of robot end effector after traversing to pre-grasp pose
CN110378360B (en) Target calibration method and device, electronic equipment and readable storage medium
CN110640730A (en) Method and system for generating three-dimensional model for robot scene
KR101927132B1 (en) Learning-based Logistics Automation System, Device and Method
CN112849898B (en) Self-driven robot and carrying method thereof
CN113269085B (en) Linear conveyor belt tracking control method, system, device and storage medium
US11318612B2 (en) Control device, control method, and storage medium
EP3974124A1 (en) Closed loop solution for loading/unloading cartons by truck unloader
CN112605993B (en) Automatic file grabbing robot control system and method based on binocular vision guidance
EP4114622A1 (en) Imaging process for detecting failure modes
US20240005115A1 (en) System and method to determine whether an image contains a specific barcode
WO2022010681A1 (en) Autonomous robotic navigation in storage site
CN112847357B File-fetching robot control method and system
KR102119161B1 (en) Indoor position recognition system of transpotation robot
CN210546403U (en) Three-degree-of-freedom visual detection platform
CN114187312A (en) Target object grabbing method, device, system, storage medium and equipment
US11559888B2 (en) Annotation device
CN115446846A (en) Robot is checked to books based on bar code identification
CN112149687A (en) Method for object recognition
KR102555708B1 (en) Method of position recognition and driving control for an autonomous mobile robot that tracks tile grid pattern
CN114019977B (en) Path control method and device for mobile robot, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant