CN114237242B - Method and device for controlling robot based on optical encoder - Google Patents

Method and device for controlling robot based on optical encoder

Info

Publication number
CN114237242B
CN114237242B (application CN202111528080.4A)
Authority
CN
China
Prior art keywords
robot
phase signal
displacement
optical encoder
task information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111528080.4A
Other languages
Chinese (zh)
Other versions
CN114237242A (en)
Inventor
薛昊峰
支涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yunji Technology Co Ltd
Original Assignee
Beijing Yunji Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yunji Technology Co Ltd filed Critical Beijing Yunji Technology Co Ltd
Priority to CN202111528080.4A priority Critical patent/CN114237242B/en
Publication of CN114237242A publication Critical patent/CN114237242A/en
Application granted granted Critical
Publication of CN114237242B publication Critical patent/CN114237242B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The disclosure relates to the technical field of charging, and provides a method and a device for controlling a robot based on an optical encoder. The method comprises the following steps: acquiring an A-phase signal, a B-phase signal and a Z-phase signal of the optical encoder, and determining the line number of the optical encoder according to the A-phase signal, the B-phase signal and the Z-phase signal; calculating the speed of the robot, while the robot is traveling, through the optical encoder whose line number has been determined, wherein the optical encoder is provided on the robot; and acquiring task information of the robot, and controlling the robot according to the task information and the speed. By adopting these technical means, the problem in the prior art that a robot can only change how it executes a task through user instructions acquired in real time is solved.

Description

Method and device for controlling robot based on optical encoder
Technical Field
The disclosure relates to the technical field of charging, in particular to a method and a device for controlling a robot based on an optical encoder.
Background
Currently, a robot is controlled by issuing control instructions to it, either through voice control or through an operation interface. In either case the user must issue commands to the robot, yet in many scenarios the robot needs to work independently and the user cannot issue commands to it in real time. For example, in a goods-delivery scenario the robot must execute the delivery task by itself according to the information about the goods, that is, the task information it receives; in the prior art, however, when the robot executes a delivery task by itself it either follows a preset program or changes the delivery process only through user commands acquired in real time.
In the process of implementing the disclosed concept, the inventors found that at least the following technical problem exists in the related art: the robot can only change how it executes a task through user instructions acquired in real time.
Disclosure of Invention
In view of this, the embodiments of the present disclosure provide a method, an apparatus, an electronic device, and a computer-readable storage medium for controlling a robot based on an optical encoder, so as to solve the problem in the prior art that the robot can only change the process of executing a task by itself through a user instruction acquired in real time.
In a first aspect of the embodiments of the present disclosure, there is provided a method for controlling a robot based on an optical encoder, including: acquiring an A-phase signal, a B-phase signal and a Z-phase signal of the optical encoder, and determining the line number of the optical encoder according to the A-phase signal, the B-phase signal and the Z-phase signal; calculating the speed of the robot, while the robot is traveling, through the optical encoder whose line number has been determined, wherein the optical encoder is provided on the robot; and acquiring task information of the robot, and controlling the robot according to the task information and the speed.
In a second aspect of the embodiments of the present disclosure, there is provided an apparatus for controlling a robot based on an optical encoder, including: a determining module configured to acquire an A-phase signal, a B-phase signal and a Z-phase signal of the optical encoder, and determine the line number of the optical encoder based on the A-phase signal, the B-phase signal and the Z-phase signal; a calculation module configured to calculate the speed of the robot, while the robot is traveling, through the optical encoder whose line number has been determined, wherein the optical encoder is provided on the robot; and a control module configured to acquire task information of the robot and control the robot according to the task information and the speed.
In a third aspect of the disclosed embodiments, an electronic device is provided, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
In a fourth aspect of the disclosed embodiments, a computer-readable storage medium is provided, which stores a computer program which, when executed by a processor, implements the steps of the above-described method.
Compared with the prior art, the embodiments of the disclosure have the following beneficial effects: an A-phase signal, a B-phase signal and a Z-phase signal of the optical encoder are acquired, and the line number of the optical encoder is determined according to the A-phase signal, the B-phase signal and the Z-phase signal; the speed of the robot is calculated, while the robot is traveling, through the optical encoder whose line number has been determined, wherein the optical encoder is provided on the robot; and task information of the robot is acquired, and the robot is controlled according to the task information and the speed. By adopting these technical means, the problem in the prior art that a robot can only change how it executes a task through user instructions acquired in real time can be solved, and the task execution efficiency of the robot is further improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present disclosure, and a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a scene schematic diagram of an application scene of an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method for controlling a robot based on an optical encoder provided in an embodiment of the disclosure;
FIG. 3 is a schematic diagram of an apparatus for controlling a robot based on an optical encoder according to an embodiment of the disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
A method and apparatus for controlling a robot based on an optical encoder according to embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 1 is a scene diagram of an application scene of an embodiment of the present disclosure. The application scenario may include terminal devices 1, 2 and 3, a server 4 and a network 5.
The terminal devices 1, 2 and 3 may be hardware or software. When the terminal devices 1, 2 and 3 are hardware, they may be various electronic devices having a display screen and supporting communication with the server 4, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like; when the terminal apparatuses 1, 2, and 3 are software, they can be installed in the electronic apparatus as above. The terminal devices 1, 2 and 3 may be implemented as a plurality of software or software modules, or as a single software or software module, to which the embodiments of the present disclosure are not limited. Further, various applications, such as a data processing application, an instant messaging tool, social platform software, a search class application, a shopping class application, and the like, may be installed on the terminal devices 1, 2, and 3.
The server 4 may be a server that provides various services, for example a background server that receives requests transmitted from terminal devices with which a communication connection has been established; the background server may receive and analyze such requests and generate processing results. The server 4 may be a single server, a server cluster formed by a plurality of servers, or a cloud computing service center, which is not limited in the embodiments of the present disclosure.
The server 4 may be hardware or software. When the server 4 is hardware, it may be various electronic devices that provide various services to the terminal devices 1, 2, and 3. When the server 4 is software, it may be a plurality of software or software modules providing various services to the terminal devices 1, 2, and 3, or may be a single software or software module providing various services to the terminal devices 1, 2, and 3, which is not limited by the embodiments of the present disclosure.
The network 5 may be a wired network using coaxial cable, twisted pair wire, and optical fiber connection, or may be a wireless network that can implement interconnection of various communication devices without wiring, for example, bluetooth (Bluetooth), near field communication (Near Field Communication, NFC), infrared (Infrared), etc., which is not limited by the embodiment of the present disclosure.
The user can establish a communication connection with the server 4 via the network 5 through the terminal devices 1, 2, and 3 to receive or transmit information or the like. It should be noted that the specific types, numbers and combinations of the terminal devices 1, 2 and 3, the server 4 and the network 5 may be adjusted according to the actual requirements of the application scenario, which is not limited by the embodiment of the present disclosure.
Fig. 2 is a flow chart of a method for controlling a robot based on an optical encoder according to an embodiment of the disclosure. The method of fig. 2 for controlling a robot based on an optical encoder may be performed by the terminal device or the server of fig. 1. As shown in fig. 2, the method for controlling a robot based on an optical encoder includes:
S201, acquiring an A-phase signal, a B-phase signal and a Z-phase signal of an optical encoder, and determining the line number of the optical encoder according to the A-phase signal, the B-phase signal and the Z-phase signal;
S202, calculating the speed of the robot, while the robot is traveling, through the optical encoder whose line number has been determined, wherein the optical encoder is arranged on the robot;
s203, task information of the robot is acquired, and the robot is controlled according to the task information and the speed.
The photoelectric encoder is a sensor that converts the mechanical geometric displacement of an output shaft into pulses or digital quantities through photoelectric conversion. It consists of a grating disk and a photoelectric detection device. The grating disk is a circular plate of a certain diameter in which a number of rectangular holes are opened at equal intervals. Because the photoelectric encoder is coaxial with the motor, the grating disk rotates at the same speed as the motor when the motor rotates; a detection device consisting of electronic elements such as light emitting diodes detects and outputs a series of pulse signals, and the number of pulses output by the photoelectric encoder per second reflects the current rotating speed of the motor. In addition, in order to determine the rotation direction, the code wheel also provides two pulse signals, an A-phase signal and a B-phase signal, which are 90° out of phase. According to the detection principle, encoders can be classified into optical, magnetic, inductive and capacitive types; according to the scale method and the signal output form, they can be divided into incremental, absolute and hybrid types. They are widely used in precision machine manufacturing, automation engineering, and manufacturing engineering such as packaging and precision electronics manufacturing.
The line number of the encoder is its resolution, i.e. the number of pulses emitted in one revolution (one cycle). The Z-phase signal is emitted by the timer of the optical encoder, and one period of the optical encoder lies between two Z-phase pulses. Since the line number of the optical encoder has already been determined, i.e. the number of pulses the optical encoder emits in one period, the number of pulses detected per unit time divided by the number of pulses per period gives the number of revolutions of the optical encoder per unit time, from which the speed of the robot is obtained (the rotational speed of the optical encoder corresponds to the speed of the robot).
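As a minimal sketch (not part of the patent text) of the speed calculation described above: the pulse count per sampling interval is divided by the line number to obtain revolutions, which are then converted to a linear speed. The wheel_radius_m parameter and the conversion from rotational to linear speed are assumptions added here for concreteness; the patent simply equates the encoder's rotational speed with the robot's speed.

```python
import math

def robot_speed(pulse_count: int, line_number: int,
                interval_s: float, wheel_radius_m: float) -> float:
    """Estimate linear speed (m/s) from encoder pulses counted over one interval."""
    revolutions = pulse_count / line_number      # revolutions during the interval
    rev_per_s = revolutions / interval_s         # rotational speed
    angular_speed = 2.0 * math.pi * rev_per_s    # rad/s
    return angular_speed * wheel_radius_m        # linear speed, assuming a driven wheel

# Example: 200 pulses in 0.1 s on a 1000-line encoder with a 5 cm wheel radius
# robot_speed(200, 1000, 0.1, 0.05) ≈ 0.628 m/s
```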
According to the technical scheme provided by the embodiments of the disclosure, an A-phase signal, a B-phase signal and a Z-phase signal of the optical encoder are acquired, and the line number of the optical encoder is determined according to the A-phase signal, the B-phase signal and the Z-phase signal; the speed of the robot is calculated, while the robot is traveling, through the optical encoder whose line number has been determined, wherein the optical encoder is provided on the robot; and task information of the robot is acquired, and the robot is controlled according to the task information and the speed. By adopting these technical means, the problem in the prior art that a robot can only change how it executes a task through user instructions acquired in real time can be solved, and the task execution efficiency of the robot is further improved.
In step S201, acquiring the A-phase signal, the B-phase signal and the Z-phase signal of the optical encoder, and determining the line number of the optical encoder from the A-phase signal, the B-phase signal and the Z-phase signal includes: receiving the Z-phase signal through a first pin of the micro control unit, and, when a rising edge of the Z-phase signal is detected for the first time: receiving the A-phase signal through a second pin of the micro control unit, counting the rising edges and falling edges of the A-phase signal, and stopping the count of the rising and falling edges of the A-phase signal when a rising edge of the Z-phase signal is detected for the second time, to obtain the A-phase pulse number; receiving the B-phase signal through a third pin of the micro control unit, counting the rising edges and falling edges of the B-phase signal, and stopping the count of the rising and falling edges of the B-phase signal when a rising edge of the Z-phase signal is detected for the second time, to obtain the B-phase pulse number; and taking the sum of the A-phase pulse number and the B-phase pulse number divided by four as the line number of the optical encoder, wherein the micro control unit is arranged on the optical encoder.
From the first detected rising edge of the Z-phase signal to the second detected rising edge, the detection window spans two Z-phase pulses and corresponds to one operating period of the optical encoder. Counting the rising and falling edges of the A-phase signal within this period gives the A-phase pulse number; counting the rising and falling edges of the B-phase signal within the same period gives the B-phase pulse number. Because the sum of the A-phase and B-phase pulse numbers is accumulated within one period from two signals, and each pulse is counted twice (both its rising edge and its falling edge), this counting scheme is equivalent to quadruple-frequency sampling, so the value obtained by dividing the sum of the A-phase and B-phase pulse numbers by four is taken as the line number of the optical encoder.
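The edge-counting logic above can be sketched roughly as follows. This is only an illustration, assuming the micro control unit exposes edge-interrupt callbacks for the three pins; the callback names and software interface are hypothetical, not taken from the patent.

```python
class LineCounter:
    """Counts A/B edges between two Z rising edges to estimate the encoder line number."""

    def __init__(self):
        self.z_rising_edges = 0
        self.a_edges = 0
        self.b_edges = 0
        self.line_number = None

    def on_z_rising(self):
        # The first Z rising edge opens the counting window; the second one closes it.
        self.z_rising_edges += 1
        if self.z_rising_edges == 2:
            # Both edges of every pulse were counted on both channels (x4 counting),
            # so the combined edge count is divided by four.
            self.line_number = (self.a_edges + self.b_edges) // 4

    def on_a_edge(self):
        # Called on every rising AND falling edge of the A-phase signal.
        if self.z_rising_edges == 1:
            self.a_edges += 1

    def on_b_edge(self):
        # Called on every rising AND falling edge of the B-phase signal.
        if self.z_rising_edges == 1:
            self.b_edges += 1
```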
In step S203, task information of the robot is acquired, and the robot is controlled according to the task information and the speed, including: acquiring a current position of a robot and an area map of an area where the robot is located; determining a target address of the robot and the latest time for the robot to reach the target address according to the task information; planning a task path taking the current position as a starting point and a target address as an end point for the robot based on the region map; the robot is controlled according to the task path, the latest time and the speed.
The current position of the robot and the area map of the area where the robot is located can be obtained over the network; the area map can also be a map of the area constructed in advance by the robot with SLAM mapping technology and stored in a map database. The task information includes the target address of the robot (the target address is the address of the delivery recipient, for example when the robot delivers goods) and the latest time for arriving at the target address. Controlling the robot according to the task path, the latest time and the speed can be understood as determining in real time whether the robot, travelling at the current speed, can complete the task path before the latest time; if it cannot, the robot should be instructed to speed up. More intuitively, the latest time can be thought of as a deadline.
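A minimal sketch of the real-time check described above, assuming the remaining path length and a datetime deadline are available; the helper names and the "speed up only when needed" policy are assumptions added for illustration.

```python
from datetime import datetime

def required_speed(remaining_distance_m: float, latest_time: datetime, now: datetime) -> float:
    """Minimum average speed needed to finish the remaining path before the latest time."""
    remaining_s = (latest_time - now).total_seconds()
    if remaining_s <= 0:
        raise ValueError("the latest arrival time has already passed")
    return remaining_distance_m / remaining_s

def target_speed(current_speed: float, remaining_distance_m: float,
                 latest_time: datetime, now: datetime) -> float:
    """Keep the current speed if it is sufficient, otherwise instruct the robot to speed up."""
    return max(current_speed, required_speed(remaining_distance_m, latest_time, now))
```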
In step S203, task information of the robot is acquired, and the robot is controlled according to the task information and the speed, including: acquiring a starting position of a robot and an area map of an area where the robot is located; determining a target address of the robot according to the task information; planning a task path for the robot by taking a starting position as a starting point and a target address as an end point based on the region map; calculating the integral of the speed and time to obtain a first displacement of the robot, and determining the current position of the robot according to the first displacement and the initial position; and controlling the robot according to the task path and the current position.
Integrating speed with respect to time to obtain displacement is a standard integration calculation and is not described again. For example: the robot delivers goods, the starting position is point E, the target address is point F, and the second path is the path with E as the starting point and F as the end point. After the first displacement is obtained, the current position G of the robot is determined by advancing from the starting position along the second path by the first displacement. The robot is then controlled according to where the current position G lies on the second path. For example, if G lies in the front section of the second path (close to the start point and far from the end point), the remaining delivery path is still long and the speed should be increased.
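As a sketch of the integration step, a simple trapezoidal rule over timestamped speed samples could be used. The patent only states that speed is integrated over time, so the sampling scheme and the trapezoidal rule are assumptions.

```python
def integrate_speed(samples):
    """Trapezoidal integration of (timestamp_s, speed_m_s) samples into a displacement in metres."""
    displacement = 0.0
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        displacement += 0.5 * (v0 + v1) * (t1 - t0)
    return displacement

# Example: a constant 0.5 m/s for 10 s gives a first displacement of 5 m along the second path
# integrate_speed([(0.0, 0.5), (5.0, 0.5), (10.0, 0.5)]) == 5.0
```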
In an alternative embodiment, there is provided another method of controlling a robot, comprising: acquiring an area map of an area where the robot is located, and acquiring a plurality of images in a preset time period through an image acquisition device, wherein the image acquisition device is arranged on the robot, and the acquired images are marked with acquired time; determining a second displacement of the robot at preset time according to the region map and the plurality of images marked with the acquired time; and acquiring task information of the robot, and controlling the robot according to the task information and the second displacement.
The plurality of images acquired by the image acquisition device within the preset time period are images of the positions the robot passes through. The position of the robot can be judged from the area map and the image taken at each position. The position judged from the first acquired image is the robot's position at the start of the period, and the position judged from the last acquired image is its position at the end, so the second displacement of the robot over the preset time period can be determined from the area map and the plurality of images marked with the acquired time. Controlling the robot according to the task information and the second displacement can be understood as controlling the robot according to the task path corresponding to the current task in the task information and the second displacement.
Determining the second displacement of the robot at the preset time according to the region map and the plurality of images marked with the acquired time, comprising: extracting image features of a plurality of images; according to the time of acquiring each image in the plurality of images, constructing an optical flow field corresponding to the plurality of images according to the image characteristics of the plurality of images; and determining the second displacement of the robot at the preset time according to the optical flow field.
The image features of the plurality of images may be any commonly used features. The optical flow fields corresponding to the plurality of images are constructed from these image features in the time order in which the images were acquired. An optical flow field is the two-dimensional (2D) instantaneous velocity field formed by all pixel points in an image, where each two-dimensional velocity vector is the projection onto the imaging plane of the three-dimensional velocity vector of a visible point in the scene. Optical flow therefore contains not only the motion information of the observed object but also rich information about the three-dimensional structure of the scene. The study of optical flow is an important part of computer vision and related research; because optical flow plays an important role in computer vision, it has important applications in object segmentation, recognition, tracking, robot navigation, shape information retrieval, and so on. Recovering the three-dimensional structure and motion of objects from optical flow fields remains one of the most significant and challenging tasks in computer vision research.
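One way the second displacement could be estimated from an optical flow field is sketched below. It uses OpenCV's dense Farneback optical flow purely as an example, since the patent does not name a specific algorithm, and the metres_per_pixel scale relating pixel motion to robot motion is an assumed calibration parameter.

```python
import cv2
import numpy as np

def estimate_second_displacement(frames, metres_per_pixel: float) -> float:
    """Accumulate the mean dense optical flow across consecutive grayscale frames
    and convert the total pixel motion into a displacement in metres."""
    total_flow = np.zeros(2, dtype=np.float64)
    for prev, curr in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        total_flow += flow.reshape(-1, 2).mean(axis=0)   # average pixel motion per step
    return float(np.linalg.norm(total_flow)) * metres_per_pixel
```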
After the second displacement of the robot over the preset time period is determined from the area map and the plurality of images marked with the acquired time, the method further comprises: calculating the integral of the speed with respect to time to obtain the first displacement of the robot; adding the first displacement and the second displacement according to preset weights to obtain a third displacement; and acquiring the task information of the robot and controlling the robot according to the task information and the third displacement.
Adding the first displacement and the second displacement according to preset weights to obtain the third displacement means multiplying the first displacement by one weight and the second displacement by the other weight, and summing the results. Controlling the robot based on the task information and the third displacement in effect combines the first method and the second method, and this combination makes the calculated displacement of the robot more accurate. The first method is: calculating the speed of the robot with the optical encoder and obtaining the first displacement of the robot by integration. The second method is: constructing the optical flow fields corresponding to the plurality of images and determining the second displacement of the robot over the preset time period from the optical flow fields.
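A minimal sketch of the weighted fusion, assuming the two weights sum to one; the 0.6/0.4 split is an arbitrary placeholder, since the patent only speaks of preset weights.

```python
def fuse_displacements(first_displacement: float, second_displacement: float,
                       w_encoder: float = 0.6, w_optical_flow: float = 0.4) -> float:
    """Weighted combination of the encoder-derived and optical-flow-derived displacements."""
    assert abs(w_encoder + w_optical_flow - 1.0) < 1e-9, "weights should sum to 1"
    return w_encoder * first_displacement + w_optical_flow * second_displacement
```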
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present application, which is not described herein in detail.
The following are device embodiments of the present disclosure that may be used to perform method embodiments of the present disclosure. For details not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the method of the present disclosure.
Fig. 3 is a schematic diagram of an apparatus for controlling a robot based on an optical encoder according to an embodiment of the present disclosure. As shown in fig. 3, the apparatus for controlling a robot based on an optical encoder includes:
a determining module 301 configured to acquire an a-phase signal, a B-phase signal, and a Z-phase signal of the optical encoder, and determine the number of lines of the optical encoder from the a-phase signal, the B-phase signal, and the Z-phase signal;
a calculation module 302 configured to calculate the speed of the robot, while the robot is traveling, through the optical encoder whose line number has been determined, wherein the optical encoder is provided on the robot;
and a control module 303 configured to acquire task information of the robot and control the robot according to the task information and the speed.
The photoelectric encoder is a sensor that converts the mechanical geometric displacement of an output shaft into pulses or digital quantities through photoelectric conversion. It consists of a grating disk and a photoelectric detection device. The grating disk is a circular plate of a certain diameter in which a number of rectangular holes are opened at equal intervals. Because the photoelectric encoder is coaxial with the motor, the grating disk rotates at the same speed as the motor when the motor rotates; a detection device consisting of electronic elements such as light emitting diodes detects and outputs a series of pulse signals, and the number of pulses output by the photoelectric encoder per second reflects the current rotating speed of the motor. In addition, in order to determine the rotation direction, the code wheel also provides two pulse signals, an A-phase signal and a B-phase signal, which are 90° out of phase. According to the detection principle, encoders can be classified into optical, magnetic, inductive and capacitive types; according to the scale method and the signal output form, they can be divided into incremental, absolute and hybrid types. They are widely used in precision machine manufacturing, automation engineering, and manufacturing engineering such as packaging and precision electronics manufacturing.
The line number of the encoder is its resolution, i.e. the number of pulses emitted in one revolution (one cycle). The Z-phase signal is emitted by the timer of the optical encoder, and one period of the optical encoder lies between two Z-phase pulses. Since the line number of the optical encoder has already been determined, i.e. the number of pulses the optical encoder emits in one period, the number of pulses detected per unit time divided by the number of pulses per period gives the number of revolutions of the optical encoder per unit time, from which the speed of the robot is obtained (the rotational speed of the optical encoder corresponds to the speed of the robot).
According to the technical scheme provided by the embodiments of the disclosure, an A-phase signal, a B-phase signal and a Z-phase signal of the optical encoder are acquired, and the line number of the optical encoder is determined according to the A-phase signal, the B-phase signal and the Z-phase signal; the speed of the robot is calculated, while the robot is traveling, through the optical encoder whose line number has been determined, wherein the optical encoder is provided on the robot; and task information of the robot is acquired, and the robot is controlled according to the task information and the speed. By adopting these technical means, the problem in the prior art that a robot can only change how it executes a task through user instructions acquired in real time can be solved, and the task execution efficiency of the robot is further improved.
Optionally, the determining module 301 is further configured to receive the Z-phase signal through a first pin of the micro control unit, and, when a rising edge of the Z-phase signal is detected for the first time: receive the A-phase signal through a second pin of the micro control unit, count the rising edges and falling edges of the A-phase signal, and stop the count of the rising and falling edges of the A-phase signal when a rising edge of the Z-phase signal is detected for the second time, to obtain the A-phase pulse number; receive the B-phase signal through a third pin of the micro control unit, count the rising edges and falling edges of the B-phase signal, and stop the count of the rising and falling edges of the B-phase signal when a rising edge of the Z-phase signal is detected for the second time, to obtain the B-phase pulse number; and take the sum of the A-phase pulse number and the B-phase pulse number divided by four as the line number of the optical encoder, wherein the micro control unit is arranged on the optical encoder.
From the first detected rising edge of the Z-phase signal to the second detected rising edge, the detection window spans two Z-phase pulses and corresponds to one operating period of the optical encoder. Counting the rising and falling edges of the A-phase signal within this period gives the A-phase pulse number; counting the rising and falling edges of the B-phase signal within the same period gives the B-phase pulse number. Because the sum of the A-phase and B-phase pulse numbers is accumulated within one period from two signals, and each pulse is counted twice (both its rising edge and its falling edge), this counting scheme is equivalent to quadruple-frequency sampling, so the value obtained by dividing the sum of the A-phase and B-phase pulse numbers by four is taken as the line number of the optical encoder.
Optionally, the control module 303 is further configured to obtain a current position of the robot and an area map of an area where the robot is located; determining a target address of the robot and the latest time for the robot to reach the target address according to the task information; planning a task path taking the current position as a starting point and a target address as an end point for the robot based on the region map; the robot is controlled according to the task path, the latest time and the speed.
The current position of the robot and the area map of the area where the robot is located can be obtained over the network; the area map can also be a map of the area constructed in advance by the robot with SLAM mapping technology and stored in a map database. The task information includes the target address of the robot (the target address is the address of the delivery recipient, for example when the robot delivers goods) and the latest time for arriving at the target address. Controlling the robot according to the task path, the latest time and the speed can be understood as determining in real time whether the robot, travelling at the current speed, can complete the task path before the latest time; if it cannot, the robot should be instructed to speed up. More intuitively, the latest time can be thought of as a deadline.
Optionally, the control module 303 is further configured to obtain a starting position of the robot and an area map of an area where the robot is located; determining a target address of the robot according to the task information; planning a task path for the robot by taking a starting position as a starting point and a target address as an end point based on the region map; calculating the integral of the speed and time to obtain a first displacement of the robot, and determining the current position of the robot according to the first displacement and the initial position; and controlling the robot according to the task path and the current position.
Integrating speed with respect to time to obtain displacement is a standard integration calculation and is not described again. For example: the robot delivers goods, the starting position is point E, the target address is point F, and the second path is the path with E as the starting point and F as the end point. After the first displacement is obtained, the current position G of the robot is determined by advancing from the starting position along the second path by the first displacement. The robot is then controlled according to where the current position G lies on the second path. For example, if G lies in the front section of the second path (close to the start point and far from the end point), the remaining delivery path is still long and the speed should be increased.
Optionally, the control module 303 is further configured to acquire an area map of an area where the robot is located, and acquire a plurality of images within a preset time period through an image acquisition device, where the image acquisition device is disposed on the robot, and the acquired plurality of images are marked with the acquired time; determining a second displacement of the robot at preset time according to the region map and the plurality of images marked with the acquired time; and acquiring task information of the robot, and controlling the robot according to the task information and the second displacement.
The plurality of images acquired by the image acquisition device within the preset time period are images of the positions the robot passes through. The position of the robot can be judged from the area map and the image taken at each position. The position judged from the first acquired image is the robot's position at the start of the period, and the position judged from the last acquired image is its position at the end, so the second displacement of the robot over the preset time period can be determined from the area map and the plurality of images marked with the acquired time. Controlling the robot according to the task information and the second displacement can be understood as controlling the robot according to the task path corresponding to the current task in the task information and the second displacement.
Optionally, the control module 303 is further configured to extract image features of the plurality of images; according to the time of acquiring each image in the plurality of images, constructing an optical flow field corresponding to the plurality of images according to the image characteristics of the plurality of images; and determining the second displacement of the robot at the preset time according to the optical flow field.
The image features of the plurality of images may be any commonly used features. The optical flow fields corresponding to the plurality of images are constructed from these image features in the time order in which the images were acquired. An optical flow field is the two-dimensional (2D) instantaneous velocity field formed by all pixel points in an image, where each two-dimensional velocity vector is the projection onto the imaging plane of the three-dimensional velocity vector of a visible point in the scene. Optical flow therefore contains not only the motion information of the observed object but also rich information about the three-dimensional structure of the scene. The study of optical flow is an important part of computer vision and related research; because optical flow plays an important role in computer vision, it has important applications in object segmentation, recognition, tracking, robot navigation, shape information retrieval, and so on. Recovering the three-dimensional structure and motion of objects from optical flow fields remains one of the most significant and challenging tasks in computer vision research.
Optionally, the control module 303 is further configured to calculate an integral of the speed over time, resulting in a first displacement of the robot; adding the first displacement and the second displacement according to a preset weight to obtain a third displacement; and acquiring task information of the robot, and controlling the robot according to the task information and the third displacement.
Adding the first displacement and the second displacement according to preset weights to obtain the third displacement means multiplying the first displacement by one weight and the second displacement by the other weight, and summing the results. Controlling the robot based on the task information and the third displacement in effect combines the first method and the second method, and this combination makes the calculated displacement of the robot more accurate. The first method is: calculating the speed of the robot with the optical encoder and obtaining the first displacement of the robot by integration. The second method is: constructing the optical flow fields corresponding to the plurality of images and determining the second displacement of the robot over the preset time period from the optical flow fields.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the disclosure.
Fig. 4 is a schematic diagram of an electronic device 4 provided by an embodiment of the present disclosure. As shown in fig. 4, the electronic apparatus 4 of this embodiment includes: a processor 401, a memory 402 and a computer program 403 stored in the memory 402 and executable on the processor 401. The steps of the various method embodiments described above are implemented by processor 401 when executing computer program 403. Alternatively, the processor 401, when executing the computer program 403, performs the functions of the modules/units in the above-described apparatus embodiments.
Illustratively, the computer program 403 may be partitioned into one or more modules/units, which are stored in the memory 402 and executed by the processor 401 to complete the present disclosure. One or more of the modules/units may be a series of computer program instruction segments capable of performing a specific function for describing the execution of the computer program 403 in the electronic device 4.
The electronic device 4 may be a desktop computer, a notebook computer, a palm computer, a cloud server, or the like. The electronic device 4 may include, but is not limited to, a processor 401 and a memory 402. It will be appreciated by those skilled in the art that fig. 4 is merely an example of the electronic device 4 and is not meant to be limiting of the electronic device 4, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., the electronic device may also include an input-output device, a network access device, a bus, etc.
The processor 401 may be a central processing unit (Central Processing Unit, CPU) or other general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 402 may be an internal storage unit of the electronic device 4, for example, a hard disk or a memory of the electronic device 4. The memory 402 may also be an external storage device of the electronic device 4, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) or the like, which are provided on the electronic device 4. Further, the memory 402 may also include both internal storage units and external storage devices of the electronic device 4. The memory 402 is used to store computer programs and other programs and data required by the electronic device. The memory 402 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other manners. For example, the apparatus/electronic device embodiments described above are merely illustrative, e.g., the division of modules or elements is merely a logical functional division, and there may be additional divisions of actual implementations, multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present disclosure may implement all or part of the flow of the method of the above-described embodiments, or it may be implemented by a computer program instructing related hardware; the computer program may be stored in a computer readable storage medium, and when executed by a processor, may implement the steps of the method embodiments described above. The computer program may comprise computer program code, which may be in source code form, object code form, an executable file or some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are merely for illustrating the technical solution of the present disclosure, and are not limiting thereof; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the disclosure, and are intended to be included in the scope of the present disclosure.

Claims (8)

1. A method of controlling a robot based on an optical encoder, comprising:
acquiring an A-phase signal, a B-phase signal and a Z-phase signal of an optical encoder, and determining the line number of the optical encoder according to the A-phase signal, the B-phase signal and the Z-phase signal;
calculating a speed of the robot by the optical encoder having determined the number of lines while the robot is traveling, wherein the optical encoder is provided on the robot;
acquiring task information of the robot, and controlling the robot according to the task information and the speed;
The method comprises the steps of acquiring an area map of an area where the robot is located, and acquiring a plurality of images in a preset time period through an image acquisition device, wherein the image acquisition device is arranged on the robot, and the acquired images are marked with acquired time; determining a second displacement of the robot at the preset time according to the region map and the plurality of images marked with the acquired time; acquiring the task information of the robot, and controlling the robot according to the task information and the second displacement;
wherein after determining the second displacement of the robot at the preset time according to the region map and the plurality of images marked with the acquired time, the method further comprises:
calculating the integral of the speed to time to obtain a first displacement of the robot;
adding the first displacement and the second displacement according to a preset weight to obtain a third displacement;
and acquiring the task information of the robot, and controlling the robot according to the task information and the third displacement.
2. The method of claim 1, wherein the acquiring the a-phase signal, the B-phase signal, and the Z-phase signal of the optical encoder and determining the number of lines of the optical encoder based on the a-phase signal, the B-phase signal, and the Z-phase signal comprises:
The Z-phase signal is received through a first pin of the micro control unit, and when the rising edge of the Z-phase signal is detected for the first time:
receiving the A phase signal through a second pin of the micro control unit, counting rising edges and falling edges of the A phase signal, and stopping counting the rising edges and the falling edges of the A phase signal when the rising edges of the Z phase signal are detected for the second time to obtain the A phase pulse number;
the third pin of the micro control unit is used for receiving the B phase signal, counting the rising edge and the falling edge of the B phase signal, and stopping counting the rising edge and the falling edge of the B phase signal when the rising edge of the Z phase signal is detected for the second time to obtain the B phase pulse number;
and dividing the sum of the A phase pulse number and the B phase pulse number by four to obtain a value as the line number of the optical encoder, wherein the micro control unit is arranged on the optical encoder.
3. The method of claim 1, wherein the acquiring task information of the robot, controlling the robot based on the task information and the speed, comprises:
acquiring the current position of the robot and an area map of an area where the robot is located;
Determining a target address of the robot and the latest time for the robot to reach the target address according to the task information;
planning a first path taking the current position as a starting point and the target address as an end point for the robot based on the area map;
and controlling the robot according to the first path, the latest time and the speed.
4. The method of claim 1, wherein the acquiring task information of the robot, controlling the robot based on the task information and the speed, comprises:
acquiring a starting position of the robot and an area map of an area where the robot is located;
determining a target address of the robot according to the task information;
planning a second path taking the initial position as a starting point and the target address as an end point for the robot based on the area map;
calculating the integral of the speed to time to obtain a first displacement of the robot, and determining the current position of the robot according to the first displacement and the initial position;
and controlling the robot according to the second path and the current position.
5. The method of claim 1, wherein the determining a second displacement of the robot at the preset time from the region map and the plurality of images labeled with the acquired time comprises:
extracting image features of the plurality of images;
according to the time of acquiring each image in the plurality of images, constructing an optical flow field corresponding to the plurality of images according to the image characteristics of the plurality of images;
and determining a second displacement of the robot at the preset time according to the optical flow field.
6. An apparatus for controlling a robot based on an optical encoder, comprising:
a determining module configured to acquire an A-phase signal, a B-phase signal, and a Z-phase signal of an optical encoder, and determine a line number of the optical encoder from the A-phase signal, the B-phase signal, and the Z-phase signal;
a calculation module configured to calculate, during travel of the robot, a speed of the robot by means of the optical encoder whose line number has been determined, wherein the optical encoder is provided on the robot;
a control module configured to acquire task information of the robot and control the robot according to the task information and the speed;
the control module being further configured to: acquire an area map of an area where the robot is located, and acquire a plurality of images within a preset time period through an image acquisition device, wherein the image acquisition device is arranged on the robot and each acquired image is marked with its acquisition time; determine a second displacement of the robot within the preset time period according to the area map and the plurality of images marked with the acquisition time; and acquire the task information of the robot and control the robot according to the task information and the second displacement;
the control module being further configured to: calculate an integral of the speed with respect to time to obtain a first displacement of the robot; add the first displacement and the second displacement according to a preset weight to obtain a third displacement; and acquire the task information of the robot and control the robot according to the task information and the third displacement.
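The fusion performed by the control module in the last limb of claim 6 is a weighted combination of the encoder-derived first displacement and the optical-flow-derived second displacement. A minimal sketch; the 0.5 default weight is an assumption for illustration only, the claim states merely that a preset weight is used.

import numpy as np

def fused_displacement(first, second, weight=0.5):
    # Third displacement: preset-weighted combination of the two displacement estimates.
    return weight * np.asarray(first) + (1.0 - weight) * np.asarray(second)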
7. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 5.
8. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 5.
CN202111528080.4A 2021-12-14 2021-12-14 Method and device for controlling robot based on optical encoder Active CN114237242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111528080.4A CN114237242B (en) 2021-12-14 2021-12-14 Method and device for controlling robot based on optical encoder

Publications (2)

Publication Number Publication Date
CN114237242A CN114237242A (en) 2022-03-25
CN114237242B true CN114237242B (en) 2024-02-23

Family

ID=80755926

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111528080.4A Active CN114237242B (en) 2021-12-14 2021-12-14 Method and device for controlling robot based on optical encoder

Country Status (1)

Country Link
CN (1) CN114237242B (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0389881A (en) * 1989-09-01 1991-04-15 Canon Inc Speed controller for motor
JP2005059170A (en) * 2003-08-18 2005-03-10 Honda Motor Co Ltd Information collecting robot
JP2006236132A (en) * 2005-02-25 2006-09-07 Matsushita Electric Works Ltd Autonomous mobile robot
CN205051618U (en) * 2015-10-20 2016-02-24 威尔凯电气(上海)股份有限公司 Electric motor controller of electric automobile
CN105651280A (en) * 2016-01-17 2016-06-08 济南大学 Integrated positioning method for unmanned haulage motor in mine
CN106772739A (en) * 2017-03-03 2017-05-31 武汉理工大学 A kind of dim light grid array preparation method and control system
WO2018010458A1 (en) * 2016-07-10 2018-01-18 北京工业大学 Rat hippocampal space cell-based method for constructing navigation map using robot
CN108255181A (en) * 2018-01-29 2018-07-06 广州市君望机器人自动化有限公司 Reverse car seeking method and computer readable storage medium based on robot
CN108362284A (en) * 2018-01-22 2018-08-03 北京工业大学 A kind of air navigation aid based on bionical hippocampus cognitive map
CN108519615A (en) * 2018-04-19 2018-09-11 河南科技学院 Mobile robot autonomous navigation method based on integrated navigation and Feature Points Matching
CN108680177A (en) * 2018-05-31 2018-10-19 安徽工程大学 Synchronous superposition method and device based on rodent models
WO2019076044A1 (en) * 2017-10-20 2019-04-25 纳恩博(北京)科技有限公司 Mobile robot local motion planning method and apparatus and computer storage medium
CN110363470A (en) * 2019-06-21 2019-10-22 顺丰科技有限公司 A kind of object based on robot sends method, apparatus, system and robot with charge free
CN111552298A (en) * 2020-05-26 2020-08-18 北京工业大学 Bionic positioning method based on rat brain hippocampus spatial cells
CN112740274A (en) * 2018-09-15 2021-04-30 高通股份有限公司 System and method for VSLAM scale estimation on robotic devices using optical flow sensors
US11069082B1 (en) * 2015-08-23 2021-07-20 AI Incorporated Remote distance estimation system and method
CN113203409A (en) * 2021-07-05 2021-08-03 北京航空航天大学 Method for constructing navigation map of mobile robot in complex indoor environment
CN113255998A (en) * 2021-05-25 2021-08-13 北京理工大学 Expressway unmanned vehicle formation method based on multi-agent reinforcement learning

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10788836B2 (en) * 2016-02-29 2020-09-29 AI Incorporated Obstacle recognition method for autonomous robots
CN107368079B (en) * 2017-08-31 2019-09-06 珠海市一微半导体有限公司 The planing method and chip in robot cleaning path
KR102231922B1 (en) * 2019-07-30 2021-03-25 엘지전자 주식회사 Artificial intelligence server for controlling a plurality of robots using artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant