CN114237242A - Method and device for controlling robot based on optical encoder - Google Patents

Method and device for controlling robot based on optical encoder

Info

Publication number
CN114237242A
CN114237242A (application CN202111528080.4A)
Authority
CN
China
Prior art keywords
robot
phase signal
optical encoder
displacement
task information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111528080.4A
Other languages
Chinese (zh)
Other versions
CN114237242B (en)
Inventor
薛昊峰 (Xue Haofeng)
支涛 (Zhi Tao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yunji Technology Co Ltd
Original Assignee
Beijing Yunji Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yunji Technology Co Ltd
Priority to CN202111528080.4A
Publication of CN114237242A
Application granted
Publication of CN114237242B
Legal status: Active

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276: Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The disclosure relates to the technical field of robot control, and provides a method and a device for controlling a robot based on an optical encoder. The method includes: acquiring an A-phase signal, a B-phase signal, and a Z-phase signal of the optical encoder, and determining the number of lines of the optical encoder according to the A-phase signal, the B-phase signal, and the Z-phase signal; calculating the speed of the robot, while the robot travels, with the optical encoder whose number of lines has been determined, wherein the optical encoder is arranged on the robot; and acquiring task information of the robot, and controlling the robot according to the task information and the speed. These technical means solve the prior-art problem that a robot can change the process of executing a task by itself only through user instructions acquired in real time.

Description

Method and device for controlling robot based on optical encoder
Technical Field
The present disclosure relates to the field of robot control technologies, and in particular, to a method and an apparatus for controlling a robot based on an optical encoder.
Background
At present, a robot is controlled either by voice or by issuing control instructions through an operation interface. In either case, the user must issue a command to the robot, yet in many scenarios the robot needs to work autonomously and the user cannot issue commands in real time. For example, when a robot delivers goods, it must execute the delivery task by itself according to the information of the goods, that is, the task information it has received. In the prior art, a robot executing a delivery task by itself either follows a preset program or changes the delivery process only through user commands acquired in real time.
In the course of implementing the disclosed concept, the inventors found at least the following technical problem in the related art: the robot can change the process of executing a task by itself only through user instructions acquired in real time.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide a method, an apparatus, an electronic device, and a computer-readable storage medium for controlling a robot based on an optical encoder, so as to solve the prior-art problem that a robot can change the process of executing a task by itself only through user instructions acquired in real time.
In a first aspect, embodiments of the present disclosure provide a method for controlling a robot based on an optical encoder, including: acquiring an A-phase signal, a B-phase signal, and a Z-phase signal of the optical encoder, and determining the number of lines of the optical encoder according to the A-phase signal, the B-phase signal, and the Z-phase signal; calculating the speed of the robot, while the robot travels, with the optical encoder whose number of lines has been determined, wherein the optical encoder is arranged on the robot; and acquiring task information of the robot, and controlling the robot according to the task information and the speed.
In a second aspect, embodiments of the present disclosure provide an apparatus for controlling a robot based on an optical encoder, including: a determining module configured to acquire an A-phase signal, a B-phase signal, and a Z-phase signal of the optical encoder, and determine the number of lines of the optical encoder from the A-phase signal, the B-phase signal, and the Z-phase signal; a calculation module configured to calculate the speed of the robot, while the robot travels, with the optical encoder whose number of lines has been determined, wherein the optical encoder is arranged on the robot; and a control module configured to acquire task information of the robot and control the robot according to the task information and the speed.
In a third aspect of the embodiments of the present disclosure, an electronic device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the above method when executing the computer program.
In a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, which stores a computer program, which when executed by a processor, implements the steps of the above-mentioned method.
Compared with the prior art, embodiments of the present disclosure have the following beneficial effects: an A-phase signal, a B-phase signal, and a Z-phase signal of the optical encoder are acquired, and the number of lines of the optical encoder is determined from them; the speed of the robot is calculated, while the robot travels, with the optical encoder whose number of lines has been determined, the optical encoder being arranged on the robot; and task information of the robot is acquired and the robot is controlled according to the task information and the speed. These technical means solve the prior-art problem that the robot can change the process of executing a task by itself only through user instructions acquired in real time, and further improve the efficiency with which the robot executes tasks.
Drawings
To illustrate the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed for the embodiments or the description of the prior art are briefly introduced below. The drawings in the following description are only some embodiments of the present disclosure; those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a scenario diagram of an application scenario of an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a method for controlling a robot based on an optical encoder according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an apparatus for controlling a robot based on an optical encoder according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
A method and an apparatus for controlling a robot based on an optical encoder according to an embodiment of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 1 is a scene schematic diagram of an application scenario of an embodiment of the present disclosure. The application scenario may include terminal devices 1, 2, and 3, server 4, and network 5.
The terminal devices 1, 2, and 3 may be hardware or software. When they are hardware, they may be various electronic devices with a display screen that support communication with the server 4, including but not limited to smartphones, tablet computers, laptop computers, desktop computers, and the like; when they are software, they may be installed in the electronic devices above. They may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module, which is not limited by the embodiments of the present disclosure. Further, various applications may be installed on the terminal devices 1, 2, and 3, such as data processing applications, instant messaging tools, social platform software, search applications, shopping applications, and the like.
The server 4 may be a server providing various services, for example, a backend server receiving a request sent by a terminal device establishing a communication connection with the server, and the backend server may receive and analyze the request sent by the terminal device and generate a processing result. The server 4 may be one server, may also be a server cluster composed of a plurality of servers, or may also be a cloud computing service center, which is not limited in this disclosure.
The server 4 may be hardware or software. When the server 4 is hardware, it may be various electronic devices that provide various services to the terminal devices 1, 2, and 3. When the server 4 is software, it may be a plurality of software or software modules providing various services for the terminal devices 1, 2, and 3, or may be a single software or software module providing various services for the terminal devices 1, 2, and 3, which is not limited by the embodiment of the present disclosure.
The network 5 may be a wired network connected by coaxial cable, twisted pair, or optical fiber, or a wireless network that interconnects communication devices without wiring, for example Bluetooth, Near Field Communication (NFC), or infrared, which is not limited in the embodiments of the present disclosure.
A user can establish a communication connection with the server 4 via the network 5 through the terminal devices 1, 2, and 3 to receive or transmit information or the like. It should be noted that the specific types, numbers and combinations of the terminal devices 1, 2 and 3, the server 4 and the network 5 may be adjusted according to the actual requirements of the application scenarios, and the embodiment of the present disclosure does not limit this.
Fig. 2 is a schematic flowchart of a method for controlling a robot based on an optical encoder according to an embodiment of the present disclosure. The method of fig. 2 for controlling a robot based on an optical encoder may be performed by the terminal device or the server of fig. 1. As shown in fig. 2, the method for controlling a robot based on an optical encoder includes:
s201, acquiring an A-phase signal, a B-phase signal and a Z-phase signal of the optical encoder, and determining the number of lines of the optical encoder according to the A-phase signal, the B-phase signal and the Z-phase signal;
s202, calculating the speed of the robot through the optical encoder with the determined line number during the traveling of the robot, wherein the optical encoder is arranged on the robot;
and S203, acquiring task information of the robot, and controlling the robot according to the task information and the speed.
A photoelectric (optical) encoder is a sensor that converts the mechanical angular displacement of an output shaft into pulse or digital quantities through photoelectric conversion. It is currently one of the most widely used displacement sensors. A photoelectric encoder consists of a grating disk and a photoelectric detection device. The grating disk is a circular plate of a given diameter in which rectangular slots are evenly spaced. Because the code disk is coaxial with the motor, the grating disk rotates at the same speed as the motor, and a detection device composed of electronic elements such as light-emitting diodes detects and outputs a series of pulse signals; counting the pulses the encoder outputs per second reflects the current rotational speed of the motor. In addition, to determine the direction of rotation, the code disk also provides two pulse signals 90 degrees out of phase, namely the A-phase signal and the B-phase signal. According to the detection principle, encoders can be classified into optical, magnetic, inductive, and capacitive types; according to the scale method and the signal output form, they can be divided into incremental, absolute, and hybrid types. Photoelectric encoders are widely used in precision machining automation and in manufacturing processes such as precision electronic packaging.
The number of lines of an encoder is its resolution, that is, the number of pulses emitted per revolution (one revolution corresponding to one period). The Z-phase signal is the index signal of the optical encoder: the interval between two successive Z-phase pulses corresponds to one complete working cycle, that is, one revolution. Once the number of lines of the optical encoder is determined, that is, the number of pulses the encoder emits in one cycle, the number of revolutions per unit time is obtained by counting the pulses emitted in that unit time and dividing by the number of pulses per cycle; from this the speed of the robot is known (the rotational speed of the optical encoder determines the speed of the robot).
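For illustration only, the following is a minimal sketch in Python of the relation just described: pulses counted per second, divided by the number of lines, gives revolutions per second, and an assumed wheel circumference (not part of the disclosure) converts that into linear speed. All names are illustrative.

```python
def wheel_speed_mps(pulses_per_second: float, lines: int,
                    wheel_circumference_m: float) -> float:
    """Convert an encoder pulse rate into a linear wheel speed.

    pulses_per_second: pulses counted on one channel in one second
    lines: encoder line count (pulses per revolution), determined as above
    wheel_circumference_m: circumference of the driven wheel in metres
    """
    revolutions_per_second = pulses_per_second / lines
    return revolutions_per_second * wheel_circumference_m

# Example: a 1000-line encoder counting 2500 pulses/s on a 0.5 m wheel
print(wheel_speed_mps(2500, 1000, 0.5))  # 1.25 m/s
```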
According to the technical solution provided by the embodiments of the present disclosure, an A-phase signal, a B-phase signal, and a Z-phase signal of the optical encoder are acquired, and the number of lines of the optical encoder is determined from them; the speed of the robot is calculated, while the robot travels, with the optical encoder whose number of lines has been determined, the optical encoder being arranged on the robot; and task information of the robot is acquired and the robot is controlled according to the task information and the speed. These technical means solve the prior-art problem that the robot can change the process of executing a task by itself only through user instructions acquired in real time, and further improve the efficiency with which the robot executes tasks.
In step S201, acquiring an a-phase signal, a B-phase signal, and a Z-phase signal of an optical encoder, and determining the number of lines of the optical encoder according to the a-phase signal, the B-phase signal, and the Z-phase signal includes: receiving a Z-phase signal through a first pin of the micro control unit, and when a rising edge of the Z-phase signal is detected for the first time: receiving the A-phase signal through a second pin of the micro control unit, counting the rising edge and the falling edge of the A-phase signal, and stopping counting the rising edge and the falling edge of the A-phase signal when the rising edge of the Z-phase signal is detected for the second time to obtain the A-phase pulse number; receiving the B-phase signal through a third pin of the micro control unit, counting rising edges and falling edges of the B-phase signal, and stopping counting the rising edges and the falling edges of the B-phase signal when the rising edges of the Z-phase signal are detected for the second time to obtain the pulse number of the B-phase; and dividing the sum of the A-phase pulse number and the B-phase pulse number by four to obtain a value as the line number of the optical encoder, wherein the micro control unit is arranged on the optical encoder.
The detection range, from the first detected rising edge of the Z-phase signal to the second, lies between two Z-phase pulses and is one working cycle of the optical encoder. Counting the rising and falling edges of the A-phase signal within one cycle gives the A-phase pulse number; counting the rising and falling edges of the B-phase signal within one cycle gives the B-phase pulse number. Because the sum of the A-phase and B-phase pulse numbers counts two signals over one cycle, and each pulse is counted twice (both its rising edge and its falling edge), the count is effectively quadrupled; the sum of the A-phase pulse number and the B-phase pulse number is therefore divided by four to obtain the number of lines of the optical encoder.
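A software sketch of this counting scheme follows, under the assumption that the three signals are sampled fast enough that no edge is missed; on a real micro control unit this would typically use edge interrupts or a hardware quadrature counter, and all names here are illustrative.

```python
def count_lines(samples):
    """Determine the encoder line count from sampled (a, b, z) logic levels.

    Edges of the A and B signals are counted between the first and second
    rising edges of Z (one working cycle). Quadrature counting sees four
    edges per line, hence the final division by four.
    """
    prev = None
    counting = False
    edges = 0
    for a, b, z in samples:
        if prev is not None:
            pa, pb, pz = prev
            if pz == 0 and z == 1:      # rising edge of Z
                if counting:            # second Z rising edge: stop counting
                    return edges // 4
                counting = True         # first Z rising edge: start counting
            elif counting:
                edges += (a != pa) + (b != pb)  # rising or falling edges of A and B
        prev = (a, b, z)
    raise ValueError("fewer than two Z rising edges in the samples")
```

Counting both edges of both channels is exactly the quadruple-frequency evaluation described above, so dividing the edge total by four recovers the pulses per revolution.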
In step S203, acquiring task information of the robot, and controlling the robot according to the task information and the speed includes: acquiring the current position of the robot and a regional map of a region where the robot is located; determining a target address of the robot and the latest time for the robot to reach the target address according to the task information; planning a task path for the robot by taking the current position as a starting point and the target address as a terminal point based on the regional map; and controlling the robot according to the task path, the latest time and the speed.
The current position of the robot and the area map of the area where the robot is located are obtained over the network; the area map may also be a map that the robot built in advance for its area using SLAM mapping and stored in a map database. The task information includes the target address of the robot (when the robot delivers goods, the target address is the address of the delivery target) and the latest time for reaching it. Controlling the robot according to the task path, the latest time, and the speed means judging in real time whether the robot, travelling at the current speed, can finish the task path before the latest time; if it cannot, the robot is commanded to accelerate. Here, the latest time denotes a specific moment, that is, a deadline, rather than a duration.
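As a sketch of this real-time judgment (the disclosure does not specify how the check is implemented, and the helper names are assumptions), the remaining path length and the measured speed give an estimated time of arrival to compare against the deadline:

```python
from datetime import datetime, timedelta

def on_schedule(remaining_path_m: float, speed_mps: float,
                latest_time: datetime, now: datetime) -> bool:
    """Return True if the robot, at its current speed, can finish the
    remaining task path before the latest time; False means it should
    be commanded to accelerate."""
    if speed_mps <= 0:
        return False
    eta = now + timedelta(seconds=remaining_path_m / speed_mps)
    return eta <= latest_time
```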
In step S203, acquiring task information of the robot, and controlling the robot according to the task information and the speed includes: acquiring an initial position of the robot and a regional map of a region where the robot is located; determining a target address of the robot according to the task information; planning a task path for the robot by taking the initial position as a starting point and the target address as an end point based on the regional map; calculating the integral of the speed to the time to obtain a first displacement of the robot, and determining the current position of the robot according to the first displacement and the initial position; and controlling the robot according to the task path and the current position.
The displacement is obtained by integrating the speed with respect to time, a standard calculation in mathematical integration that is not described in detail here. As an example: the robot delivers goods, the starting position is point E, the target address is point F, and the task path runs from starting point E to end point F. After the first displacement is obtained, the current position G of the robot is determined by adding the first displacement to the starting position along the task path, and the robot is controlled according to where G lies on the path. For example, if the current position G lies in the early part of the path (near the starting point and far from the end point), the remaining delivery route is still long and the speed should be increased.
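A minimal numerical version of this integration is sketched below, using the trapezoidal rule over speed samples taken at a fixed interval; the sampling interval and names are illustrative assumptions.

```python
def first_displacement(speed_samples, dt_s: float) -> float:
    """Integrate speed over time with the trapezoidal rule.

    speed_samples: robot speeds in m/s, one sample every dt_s seconds
    returns the travelled distance (the first displacement) in metres
    """
    pairs = zip(speed_samples, speed_samples[1:])
    return sum(0.5 * (v0 + v1) * dt_s for v0, v1 in pairs)

# E.g. with samples [1.0, 1.2, 1.1] m/s taken 0.5 s apart:
# 0.5*(1.0+1.2)*0.5 + 0.5*(1.2+1.1)*0.5 = 0.55 + 0.575 = 1.125 m
```

Adding this distance to the starting position along the task path yields the current position G.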
In an alternative embodiment, another method of controlling a robot is provided, comprising: acquiring an area map of the area where the robot is located, and acquiring a plurality of images within a preset time period through an image acquisition device, wherein the image acquisition device is arranged on the robot and each acquired image is annotated with its acquisition time; determining a second displacement of the robot within the preset time period according to the area map and the plurality of time-annotated images; and acquiring task information of the robot, and controlling the robot according to the task information and the second displacement.
The plurality of images acquired by the image acquisition device within the preset time period are images taken at the successive positions of the robot. From the area map and the image taken at each position, the position of the robot can be determined. The position determined from the first acquired image is the initial position and the position determined from the last acquired image is the final position, so the second displacement of the robot within the preset time period can be determined from the area map and the plurality of time-annotated images. Controlling the robot according to the task information and the second displacement may mean controlling it according to the second displacement and the task path corresponding to the current task in the task information.
Determining the second displacement of the robot within the preset time period according to the area map and the plurality of time-annotated images includes: extracting image features of the plurality of images; constructing an optical flow field corresponding to the plurality of images according to the acquisition time of each image and the image features; and determining the second displacement of the robot within the preset time period from the optical flow field.
The image features of the plurality of images may be any commonly used features. The optical flow field corresponding to the plurality of images is constructed from their image features in the temporal order of acquisition. An optical flow field is the two-dimensional (2D) instantaneous velocity field formed by all pixel points in an image, where each two-dimensional velocity vector is the projection onto the imaging plane of the three-dimensional velocity vector of a visible point in the scene. Optical flow therefore contains not only the motion information of the observed object but also rich information about the three-dimensional structure of the scene. The study of optical flow has become an important part of computer vision and related research: because of its central role, optical flow has important applications in target object segmentation, recognition, tracking, robot navigation, shape recovery, and the like, and recovering the three-dimensional structure and motion of objects from an optical flow field remains one of the most meaningful and challenging tasks in computer vision.
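The disclosure does not name a particular optical-flow algorithm. As one plausible sketch, dense Farneback flow from OpenCV can be accumulated between consecutive frames, with a metres-per-pixel calibration (an assumption here, obtained in practice from the camera geometry and the area map) converting the mean image-plane motion into the second displacement; all names and parameter values are illustrative.

```python
import cv2
import numpy as np

def second_displacement(frames, metres_per_pixel: float) -> np.ndarray:
    """Estimate planar displacement from time-ordered grayscale frames.

    For each consecutive pair, a dense optical flow field is computed and
    its mean (dx, dy) is taken as the inter-frame image translation;
    summing over the whole preset time period and scaling by the assumed
    calibration gives the displacement in metres.
    """
    total = np.zeros(2)
    for prev, nxt in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(
            prev, nxt, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        total += flow.reshape(-1, 2).mean(axis=0)  # mean (dx, dy) in pixels
    return total * metres_per_pixel
```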
After the second displacement of the robot within the preset time period is determined according to the area map and the plurality of time-annotated images, the method further includes: calculating the integral of the speed with respect to time to obtain the first displacement of the robot; adding the first displacement and the second displacement according to preset weights to obtain a third displacement; and acquiring task information of the robot and controlling the robot according to the task information and the third displacement.
Adding the first displacement and the second displacement according to preset weights to obtain the third displacement means multiplying the first displacement by one weight, multiplying the second displacement by another weight, and summing the two products. Controlling the robot based on the task information and the third displacement in effect combines the first and second methods, which makes the calculated displacement of the robot more accurate. The first method: the optical encoder is used to calculate the speed of the robot, and the first displacement is obtained by integration. The second method: an optical flow field corresponding to the plurality of images is constructed, and the second displacement within the preset time period is determined from the optical flow field.
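A sketch of the weighted fusion follows; the weights are the preset values mentioned above, and the 0.5/0.5 defaults here are arbitrary placeholders rather than values from the disclosure.

```python
def third_displacement(d1: float, d2: float,
                       w1: float = 0.5, w2: float = 0.5) -> float:
    """Combine the encoder-derived first displacement d1 and the
    optical-flow second displacement d2 with preset weights."""
    return w1 * d1 + w2 * d2
```

In practice the weights would be tuned to reflect the relative reliability of the encoder odometry and the visual estimate.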
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 3 is a schematic diagram of an apparatus for controlling a robot based on an optical encoder according to an embodiment of the present disclosure. As shown in fig. 3, the apparatus for controlling a robot based on an optical encoder includes:
a determining module 301 configured to acquire an a-phase signal, a B-phase signal, and a Z-phase signal of the optical encoder, and determine the number of lines of the optical encoder according to the a-phase signal, the B-phase signal, and the Z-phase signal;
a calculation module 302 configured to calculate a velocity of the robot through the optical encoder having determined the number of lines while the robot is traveling, wherein the optical encoder is disposed on the robot;
and the control module 303 is configured to acquire task information of the robot and control the robot according to the task information and the speed.
A photoelectric (optical) encoder is a sensor that converts the mechanical angular displacement of an output shaft into pulse or digital quantities through photoelectric conversion. It is currently one of the most widely used displacement sensors. A photoelectric encoder consists of a grating disk and a photoelectric detection device. The grating disk is a circular plate of a given diameter in which rectangular slots are evenly spaced. Because the code disk is coaxial with the motor, the grating disk rotates at the same speed as the motor, and a detection device composed of electronic elements such as light-emitting diodes detects and outputs a series of pulse signals; counting the pulses the encoder outputs per second reflects the current rotational speed of the motor. In addition, to determine the direction of rotation, the code disk also provides two pulse signals 90 degrees out of phase, namely the A-phase signal and the B-phase signal. According to the detection principle, encoders can be classified into optical, magnetic, inductive, and capacitive types; according to the scale method and the signal output form, they can be divided into incremental, absolute, and hybrid types. Photoelectric encoders are widely used in precision machining automation and in manufacturing processes such as precision electronic packaging.
The number of lines of an encoder is its resolution, that is, the number of pulses emitted per revolution (one revolution corresponding to one period). The Z-phase signal is the index signal of the optical encoder: the interval between two successive Z-phase pulses corresponds to one complete working cycle, that is, one revolution. Once the number of lines of the optical encoder is determined, that is, the number of pulses the encoder emits in one cycle, the number of revolutions per unit time is obtained by counting the pulses emitted in that unit time and dividing by the number of pulses per cycle; from this the speed of the robot is known (the rotational speed of the optical encoder determines the speed of the robot).
According to the technical solution provided by the embodiments of the present disclosure, an A-phase signal, a B-phase signal, and a Z-phase signal of the optical encoder are acquired, and the number of lines of the optical encoder is determined from them; the speed of the robot is calculated, while the robot travels, with the optical encoder whose number of lines has been determined, the optical encoder being arranged on the robot; and task information of the robot is acquired and the robot is controlled according to the task information and the speed. These technical means solve the prior-art problem that the robot can change the process of executing a task by itself only through user instructions acquired in real time, and further improve the efficiency with which the robot executes tasks.
Optionally, the determining module 301 is further configured to receive the Z-phase signal through a first pin of the micro control unit and, when a rising edge of the Z-phase signal is detected for the first time: receive the A-phase signal through a second pin of the micro control unit and count the rising and falling edges of the A-phase signal, stopping the count when the rising edge of the Z-phase signal is detected for the second time, to obtain the A-phase pulse number; receive the B-phase signal through a third pin of the micro control unit and count the rising and falling edges of the B-phase signal, stopping the count when the rising edge of the Z-phase signal is detected for the second time, to obtain the B-phase pulse number; and take the sum of the A-phase pulse number and the B-phase pulse number divided by four as the number of lines of the optical encoder, where the micro control unit is arranged on the optical encoder.
The detection range, from the first detected rising edge of the Z-phase signal to the second, lies between two Z-phase pulses and is one working cycle of the optical encoder. Counting the rising and falling edges of the A-phase signal within one cycle gives the A-phase pulse number; counting the rising and falling edges of the B-phase signal within one cycle gives the B-phase pulse number. Because the sum of the A-phase and B-phase pulse numbers counts two signals over one cycle, and each pulse is counted twice (both its rising edge and its falling edge), the count is effectively quadrupled; the sum of the A-phase pulse number and the B-phase pulse number is therefore divided by four to obtain the number of lines of the optical encoder.
Optionally, the control module 303 is further configured to acquire the current position of the robot and an area map of the area where the robot is located; determine, according to the task information, the target address of the robot and the latest time for the robot to reach it; plan, based on the area map, a task path for the robot with the current position as the starting point and the target address as the end point; and control the robot according to the task path, the latest time, and the speed.
The current position of the robot and the area map of the area where the robot is located are obtained over the network; the area map may also be a map that the robot built in advance for its area using SLAM mapping and stored in a map database. The task information includes the target address of the robot (when the robot delivers goods, the target address is the address of the delivery target) and the latest time for reaching it. Controlling the robot according to the task path, the latest time, and the speed means judging in real time whether the robot, travelling at the current speed, can finish the task path before the latest time; if it cannot, the robot is commanded to accelerate. Here, the latest time denotes a specific moment, that is, a deadline, rather than a duration.
Optionally, the control module 303 is further configured to acquire the starting position of the robot and an area map of the area where the robot is located; determine the target address of the robot according to the task information; plan, based on the area map, a task path for the robot with the starting position as the starting point and the target address as the end point; calculate the integral of the speed over time to obtain the first displacement of the robot and determine the current position of the robot from the first displacement and the starting position; and control the robot according to the task path and the current position.
The displacement is obtained by integrating the speed with respect to time, a standard calculation in mathematical integration that is not described in detail here. As an example: the robot delivers goods, the starting position is point E, the target address is point F, and the task path runs from starting point E to end point F. After the first displacement is obtained, the current position G of the robot is determined by adding the first displacement to the starting position along the task path, and the robot is controlled according to where G lies on the path. For example, if the current position G lies in the early part of the path (near the starting point and far from the end point), the remaining delivery route is still long and the speed should be increased.
Optionally, the control module 303 is further configured to acquire an area map of the area where the robot is located and acquire, through an image acquisition device, a plurality of images within a preset time period, where the image acquisition device is disposed on the robot and each acquired image is annotated with its acquisition time; determine a second displacement of the robot within the preset time period according to the area map and the plurality of time-annotated images; and acquire task information of the robot and control the robot according to the task information and the second displacement.
The plurality of images acquired by the image acquisition device within the preset time period are images taken at the successive positions of the robot. From the area map and the image taken at each position, the position of the robot can be determined. The position determined from the first acquired image is the initial position and the position determined from the last acquired image is the final position, so the second displacement of the robot within the preset time period can be determined from the area map and the plurality of time-annotated images. Controlling the robot according to the task information and the second displacement may mean controlling it according to the second displacement and the task path corresponding to the current task in the task information.
Optionally, the control module 303 is further configured to extract image features of the plurality of images; construct an optical flow field corresponding to the plurality of images according to the acquisition time of each image and the image features of the plurality of images; and determine the second displacement of the robot within the preset time period from the optical flow field.
The image features of the plurality of images may be any commonly used features. The optical flow field corresponding to the plurality of images is constructed from their image features in the temporal order of acquisition. An optical flow field is the two-dimensional (2D) instantaneous velocity field formed by all pixel points in an image, where each two-dimensional velocity vector is the projection onto the imaging plane of the three-dimensional velocity vector of a visible point in the scene. Optical flow therefore contains not only the motion information of the observed object but also rich information about the three-dimensional structure of the scene. The study of optical flow has become an important part of computer vision and related research: because of its central role, optical flow has important applications in target object segmentation, recognition, tracking, robot navigation, shape recovery, and the like, and recovering the three-dimensional structure and motion of objects from an optical flow field remains one of the most meaningful and challenging tasks in computer vision.
Optionally, the control module 303 is further configured to calculate the integral of the speed over time to obtain the first displacement of the robot; add the first displacement and the second displacement according to preset weights to obtain a third displacement; and acquire task information of the robot and control the robot according to the task information and the third displacement.
Adding the first displacement and the second displacement according to preset weights to obtain the third displacement means multiplying the first displacement by one weight, multiplying the second displacement by another weight, and summing the two products. Controlling the robot based on the task information and the third displacement in effect combines the first and second methods, which makes the calculated displacement of the robot more accurate. The first method: the optical encoder is used to calculate the speed of the robot, and the first displacement is obtained by integration. The second method: an optical flow field corresponding to the plurality of images is constructed, and the second displacement within the preset time period is determined from the optical flow field.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure.
Fig. 4 is a schematic diagram of an electronic device 4 provided by the embodiment of the present disclosure. As shown in fig. 4, the electronic apparatus 4 of this embodiment includes: a processor 401, a memory 402 and a computer program 403 stored in the memory 402 and executable on the processor 401. The steps in the various method embodiments described above are implemented when the processor 401 executes the computer program 403. Alternatively, the processor 401 implements the functions of the respective modules/units in the above-described respective apparatus embodiments when executing the computer program 403.
Illustratively, the computer program 403 may be partitioned into one or more modules/units, which are stored in the memory 402 and executed by the processor 401 to carry out the present disclosure. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which describe the execution of the computer program 403 in the electronic device 4.
The electronic device 4 may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another electronic device. The electronic device 4 may include, but is not limited to, the processor 401 and the memory 402. Those skilled in the art will appreciate that fig. 4 is merely an example of the electronic device 4 and does not constitute a limitation of it; the device may include more or fewer components than shown, combine certain components, or use different components; for example, the electronic device may also include input-output devices, network access devices, buses, and the like.
The Processor 401 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 402 may be an internal storage unit of the electronic device 4, for example, a hard disk or memory of the electronic device 4. The memory 402 may also be an external storage device of the electronic device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device 4. Further, the memory 402 may include both an internal storage unit and an external storage device of the electronic device 4. The memory 402 is used to store the computer program and other programs and data required by the electronic device, and may also be used to temporarily store data that has been or will be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. For example, the above-described apparatus/electronic device embodiments are merely illustrative, and for example, a module or a unit may be divided into only one logical function, and may be implemented in other ways, and multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated modules/units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such understanding, the present disclosure may implement all or part of the flow of the methods in the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the above method embodiments. The computer program may comprise computer program code in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals or telecommunications signals.
The above examples are only intended to illustrate the technical solutions of the present disclosure, not to limit them; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present disclosure, and are intended to be included within the scope of the present disclosure.

Claims (10)

1. A method of controlling a robot based on an optical encoder, comprising:
acquiring an A-phase signal, a B-phase signal and a Z-phase signal of an optical encoder, and determining the number of lines of the optical encoder according to the A-phase signal, the B-phase signal and the Z-phase signal;
calculating the speed of the robot by the optical encoder having determined the number of lines while the robot travels, wherein the optical encoder is provided on the robot;
and acquiring task information of the robot, and controlling the robot according to the task information and the speed.
2. The method of claim 1, wherein obtaining the a-phase signal, the B-phase signal, and the Z-phase signal of the optical encoder and determining the number of lines of the optical encoder based on the a-phase signal, the B-phase signal, and the Z-phase signal comprises:
receiving the Z-phase signal through a first pin of a micro control unit, and when a rising edge of the Z-phase signal is detected for the first time:
receiving the A-phase signal through a second pin of the micro control unit, counting rising edges and falling edges of the A-phase signal, and stopping counting the rising edges and the falling edges of the A-phase signal when the rising edges of the Z-phase signal are detected for the second time to obtain the pulse number of the A-phase;
receiving the B-phase signal through a third pin of the micro control unit, counting rising edges and falling edges of the B-phase signal, and stopping counting the rising edges and the falling edges of the B-phase signal when the rising edges of the Z-phase signal are detected for the second time to obtain the pulse number of the B-phase;
and dividing the sum of the A-phase pulse number and the B-phase pulse number by four to obtain a value as the line number of the optical encoder, wherein the micro control unit is arranged on the optical encoder.
3. The method of claim 1, wherein the obtaining task information for the robot and controlling the robot based on the task information and the speed comprises:
acquiring the current position of the robot and a regional map of a region where the robot is located;
determining a target address of the robot and the latest time for the robot to reach the target address according to the task information;
planning a first path with the current position as a starting point and the target address as an end point for the robot based on the regional map;
controlling the robot according to the first path, the latest time and the speed.
4. The method of claim 1, wherein the obtaining task information for the robot and controlling the robot based on the task information and the speed comprises:
acquiring an initial position of the robot and a regional map of a region where the robot is located;
determining a target address of the robot according to the task information;
planning a second path with the starting position as a starting point and the target address as an end point for the robot based on the regional map;
calculating the integral of the speed to the time to obtain a first displacement of the robot, and determining the current position of the robot according to the first displacement and the initial position;
controlling the robot according to the second path and the current position.
5. The method of claim 1, further comprising:
acquiring a regional map of a region where the robot is located, and acquiring a plurality of images within a preset time period through an image acquisition device, wherein the image acquisition device is arranged on the robot, and the acquired images are all marked with the acquisition time;
determining a second displacement of the robot at the preset time according to the area map and the plurality of images marked with the acquired time;
and acquiring the task information of the robot, and controlling the robot according to the task information and the second displacement.
6. The method of claim 5, wherein determining the second displacement of the robot at the preset time from the area map and the plurality of images annotated with the acquired time comprises:
extracting image features of the multiple images;
according to the acquisition time of each image in the plurality of images and the image characteristics of the plurality of images, constructing an optical flow field corresponding to the plurality of images;
and determining the second displacement of the robot at the preset time according to the optical flow field.
7. The method of claim 5, wherein after the second displacement of the robot at the preset time is determined according to the area map and the plurality of images annotated with the acquisition time, the method further comprises:
calculating the integral of the speed to the time to obtain a first displacement of the robot;
according to a preset weight, adding the first displacement and the second displacement to obtain a third displacement;
and acquiring the task information of the robot, and controlling the robot according to the task information and the third displacement.
8. An apparatus for controlling a robot based on an optical encoder, comprising:
a determining module configured to acquire an A-phase signal, a B-phase signal and a Z-phase signal of an optical encoder and determine the number of lines of the optical encoder according to the A-phase signal, the B-phase signal and the Z-phase signal;
a calculation module configured to calculate, while the robot travels, a speed of the robot through the optical encoder whose number of lines has been determined, wherein the optical encoder is arranged on the robot;
a control module configured to acquire task information of the robot and control the robot according to the task information and the speed.
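The claim-8 apparatus maps naturally onto three components. A structural sketch only; the wheel-circumference speed formula and all method bodies are assumptions of this sketch, not the patent's implementation:

```python
class EncoderRobotController:
    """Determining, calculation and control modules of claim 8."""

    def __init__(self):
        self.lines = None

    def determine_lines(self, a_pulses, b_pulses):
        # Determining module: number of lines from A/B pulses per revolution.
        self.lines = (a_pulses + b_pulses) // 4

    def speed(self, counts_per_second, wheel_circumference):
        # Calculation module: quadrature counts -> revolutions/s -> m/s.
        return counts_per_second / (4 * self.lines) * wheel_circumference

    def control(self, task_info, speed):
        # Control module: drive the robot per task info and speed (stub).
        raise NotImplementedError
```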
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202111528080.4A 2021-12-14 2021-12-14 Method and device for controlling robot based on optical encoder Active CN114237242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111528080.4A CN114237242B (en) 2021-12-14 2021-12-14 Method and device for controlling robot based on optical encoder

Publications (2)

Publication Number Publication Date
CN114237242A (en) 2022-03-25
CN114237242B (en) 2024-02-23

Family

ID=80755926

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111528080.4A Active CN114237242B (en) 2021-12-14 2021-12-14 Method and device for controlling robot based on optical encoder

Country Status (1)

Country Link
CN (1) CN114237242B (en)

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0389881A (en) * 1989-09-01 1991-04-15 Canon Inc Speed controller for motor
JP2005059170A (en) * 2003-08-18 2005-03-10 Honda Motor Co Ltd Information collecting robot
JP2006236132A (en) * 2005-02-25 2006-09-07 Matsushita Electric Works Ltd Autonomous mobile robot
CN205051618U (en) * 2015-10-20 2016-02-24 威尔凯电气(上海)股份有限公司 Electric motor controller of electric automobile
CN105651280A (en) * 2016-01-17 2016-06-08 济南大学 Integrated positioning method for unmanned haulage motor in mine
CN106772739A * 2017-03-03 2017-05-31 武汉理工大学 Dim light grid array preparation method and control system
WO2018010458A1 (en) * 2016-07-10 2018-01-18 北京工业大学 Rat hippocampal space cell-based method for constructing navigation map using robot
CN108255181A (en) * 2018-01-29 2018-07-06 广州市君望机器人自动化有限公司 Reverse car seeking method and computer readable storage medium based on robot
CN108362284A * 2018-01-22 2018-08-03 北京工业大学 Navigation method based on bionic hippocampus cognitive map
CN108519615A (en) * 2018-04-19 2018-09-11 河南科技学院 Mobile robot autonomous navigation method based on integrated navigation and Feature Points Matching
CN108680177A (en) * 2018-05-31 2018-10-19 安徽工程大学 Synchronous superposition method and device based on rodent models
US20190061156A1 (en) * 2017-08-31 2019-02-28 Yongyong LI Method of planning a cleaning route for a cleaning robot and a chip for achieving the same
WO2019076044A1 (en) * 2017-10-20 2019-04-25 纳恩博(北京)科技有限公司 Mobile robot local motion planning method and apparatus and computer storage medium
CN110363470A * 2019-06-21 2019-10-22 顺丰科技有限公司 Robot-based object delivery method, apparatus, system, and robot
US20200005144A1 (en) * 2019-07-30 2020-01-02 Lg Electronics Inc. Artificial intelligence server for controlling a plurality of robots using artificial intelligence
US20200225673A1 (en) * 2016-02-29 2020-07-16 AI Incorporated Obstacle recognition method for autonomous robots
CN111552298A (en) * 2020-05-26 2020-08-18 北京工业大学 Bionic positioning method based on rat brain hippocampus spatial cells
CN112740274A (en) * 2018-09-15 2021-04-30 高通股份有限公司 System and method for VSLAM scale estimation on robotic devices using optical flow sensors
US11069082B1 (en) * 2015-08-23 2021-07-20 AI Incorporated Remote distance estimation system and method
CN113203409A (en) * 2021-07-05 2021-08-03 北京航空航天大学 Method for constructing navigation map of mobile robot in complex indoor environment
CN113255998A (en) * 2021-05-25 2021-08-13 北京理工大学 Expressway unmanned vehicle formation method based on multi-agent reinforcement learning

Also Published As

Publication number Publication date
CN114237242B (en) 2024-02-23

Similar Documents

Publication Publication Date Title
CN110689585B (en) Multi-phase external parameter combined calibration method, device, equipment and medium
CN110879400A (en) Method, equipment and storage medium for fusion positioning of laser radar and IMU
JP2021119507A (en) Traffic lane determination method, traffic lane positioning accuracy evaluation method, traffic lane determination apparatus, traffic lane positioning accuracy evaluation apparatus, electronic device, computer readable storage medium, and program
CN109949306B (en) Reflecting surface angle deviation detection method, terminal device and storage medium
CN107504917B (en) Three-dimensional size measuring method and device
CN109343037A (en) Optical detector installation error detection device, method and terminal device
CN113419233A (en) Method, device and equipment for testing perception effect
CN111353453A (en) Obstacle detection method and apparatus for vehicle
CN111145634B (en) Method and device for correcting map
CN112184914A (en) Method and device for determining three-dimensional position of target object and road side equipment
CN112184828B (en) Laser radar and camera external parameter calibration method and device and automatic driving vehicle
CN114237242B (en) Method and device for controlling robot based on optical encoder
CN114674328B (en) Map generation method, map generation device, electronic device, storage medium, and vehicle
CN109982074A Method and apparatus for obtaining tilt angle of TOF module, and assembling method
CN113628284B (en) Pose calibration data set generation method, device and system, electronic equipment and medium
CN112104292B (en) Motor control method, device, terminal equipment and storage medium
CN115131507A Image processing method, image processing apparatus, and three-dimensional reconstruction method of metaverse
EP3842757B1 (en) Verification method and device for modeling route, unmanned vehicle, and storage medium
CN111136689B (en) Self-checking method and device
CN110930455B (en) Positioning method, positioning device, terminal equipment and storage medium
CN110634159A (en) Target detection method and device
CN114199268A (en) Robot navigation and guidance method and device based on voice prompt and guidance robot
CN110675445B (en) Visual positioning method, device and storage medium
CN114036721A (en) Method and device for constructing three-dimensional temperature cloud field of micro-module
CN115342830A (en) Calibration method, program product and calibration device for a positioning device and a odometer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant