CN113804222B - Positioning accuracy testing method, device, equipment and storage medium


Info

Publication number
CN113804222B
CN113804222B (application CN202111354126.5A)
Authority
CN
China
Prior art keywords
point
positioning
measured
image
target robot
Legal status
Active
Application number
CN202111354126.5A
Other languages
Chinese (zh)
Other versions
CN113804222A
Inventor
Cui Peng (崔鹏)
Current Assignee
Zhejiang Sineva Intelligent Technology Co ltd
Original Assignee
Zhejiang Sineva Intelligent Technology Co ltd
Application filed by Zhejiang Sineva Intelligent Technology Co ltd
Priority to CN202111354126.5A
Publication of CN113804222A
Application granted
Publication of CN113804222B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C25/00 Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science
  • Physics & Mathematics
  • General Physics & Mathematics
  • Manufacturing & Machinery
  • Radar, Positioning & Navigation
  • Remote Sensing
  • Computer Vision & Pattern Recognition
  • Theoretical Computer Science
  • Length Measuring Devices By Optical Means

Abstract

The present disclosure relates to the field of robotics, and in particular to a method, an apparatus, a device, and a storage medium for testing positioning accuracy. The method addresses the high cost, complicated operation, and burdensome maintenance of prior-art positioning accuracy tests for mobile robots, and comprises the following steps: acquiring a test pose image of a target robot at a point to be measured, comparing the test pose image with a reference pose image corresponding to the point to be measured, and determining a positioning accuracy test result of the point to be measured based on the comparison result. The vision integration subsystem thus removes the need for the high-precision position measuring tool required by traditional testing, reducing the test cost; and because an image processing device post-processes the test pose image and the reference pose image, the post-processing flow is simplified, the positioning accuracy testing system is easy to extend and maintain, test scenes can be migrated and reused, the degree of automation is raised, and the application range is broadened.

Description

Positioning accuracy testing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of robotics, and in particular, to a method, an apparatus, a device, and a storage medium for testing positioning accuracy.
Background
With the development of science and technology, mobile robots have come into wide use worldwide. For mobile robot products, positioning accuracy is an important test and evaluation index: the quality of a mobile robot's positioning accuracy determines, to a great extent, its overall capability.
In the prior art, a high-precision position measuring tool is generally required to carry out a positioning test on a mobile robot. However, this test method places high demands on the test scene space, increases the test cost, and is complicated to operate and maintain, making it inconvenient to perform a large number of repeated positioning accuracy tests.
In summary, a new method needs to be devised to solve the above problems.
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment and a storage medium for testing positioning accuracy, and aims to solve the problems of high testing cost, complex testing operation and complex maintenance of a positioning accuracy testing mode of a mobile robot in the prior art.
The embodiment of the application provides the following specific technical scheme:
In a first aspect, a method for testing positioning accuracy is provided, applied to an image processing device in a positioning accuracy testing system, where the positioning accuracy testing system includes the image processing device and at least one vision integration subsystem, and the vision integration subsystem includes at least a wireless transceiver, a vision acquisition device, and a marking plate, and the method includes:
acquiring a test pose image of a target robot located at a point to be measured, wherein the target robot is a robot which executes a task to be measured and moves to the point to be measured, the test pose image is a first image of a marking plate corresponding to the point to be measured, which is acquired by the adjusted vision integration subsystem when the target robot moves to the point to be measured and is in a static state, and the first image comprises a first positioning mark emitted by a laser emitter on the body of the target robot;
comparing the test pose image with a reference pose image corresponding to the point to be measured, and determining a positioning precision test result of the point to be measured based on a comparison result; the reference pose image corresponding to the point to be measured is a second image of the marking plate corresponding to the point to be measured, which is acquired by the vision integration subsystem after adjustment when the target robot is placed at the point to be measured before the target robot executes the task to be measured, the center of a second positioning mark in the second image acquired by the vision integration subsystem after adjustment is coincident with the center of the marking plate, and the second positioning mark is emitted by a laser emitter on the body of the target robot.
According to the method, based on the first positioning mark of the first image and the second positioning mark of the second image acquired by the adjusted vision integration subsystem, the positioning precision of the target robot in the process of executing the task to be tested can be accurately determined, so that a more accurate positioning precision test result is obtained; meanwhile, the vision integration subsystem breaks through the limitation that a high-precision position measuring tool needs to be used in the traditional testing mode, and effectively reduces the testing cost; and the image processing device is adopted to carry out post-processing on the test pose image and the reference pose image, so that the post-processing process is simplified, the positioning precision testing system is convenient to expand and maintain, the system is suitable for multi-point and multi-type positioning precision testing, scene migration and multiplexing are convenient, the automation degree of the positioning precision testing system is further improved to a greater extent, and the application range of the positioning precision testing system is expanded.
Optionally, before the acquiring a test pose image of the target robot located at the point to be measured, the method further includes:
and if the task to be detected is a task which indicates that the target robot moves for one time or multiple times according to a preset path, and the preset path comprises the point to be detected, triggering the adjusted vision integration subsystem to acquire the test pose image when the target robot is determined to be positioned at the point to be detected.
In the method, the image processing device can trigger the adjusted vision integration subsystem to acquire the test pose image of the point to be measured when the target robot is determined to be positioned at the point to be measured, so that the vision integration subsystem can be triggered to acquire the corresponding test pose image and determine the positioning precision test result of the point to be measured, and the positioning precision test system is suitable for multi-point and multi-type positioning precision tests no matter whether the task to be measured indicates that the target robot moves once according to the preset path or indicates that the target robot moves for multiple times according to the preset path.
Optionally, determining that the target robot is located at the point to be measured by performing the following operations:
periodically reading the position information of the target robot in the process of executing the task to be detected by the target robot;
and if the position information read in the preset time range is the same, determining that the target robot is located at the point to be measured.
According to the method, the position information of the target robot is read through the image processing device, and whether the position information read in the preset time range is the same or not is compared, so that whether the target robot is located at the point to be tested is determined, the testing cost is reduced, the use range of the positioning precision testing system is expanded, and the automation degree of the positioning precision testing system is further improved to a greater extent.
Optionally, the reference pose image is obtained by performing the following operations:
when an adjustment completion instruction is received, triggering the vision integration subsystem corresponding to the point to be measured to acquire a reference pose image of the point to be measured; the adjustment completion instruction is triggered after the user places the target robot at the point to be measured and finishes adjusting the pose of the vision integration subsystem corresponding to the point to be measured.
According to the method, the vision integration subsystem is adopted, and a simple reference standard is obtained by adjusting the vision integration subsystem, so that the limitation that a high-precision position measuring tool needs to be used in a traditional testing mode is broken through, the testing cost is effectively reduced, the positioning precision testing system is convenient to expand and maintain, scene migration and multiplexing are facilitated, and the application range of the positioning precision testing system is expanded.
Optionally, the comparing the test pose image with the reference pose image includes:
determining first coordinate information of a first positioning mark according to the first positioning mark in the test pose image and the scale value on the mark plate; determining second coordinate information of a second positioning mark according to the second positioning mark in the reference pose image and the scale value on the mark plate;
determining a positioning deviation value of the point to be measured by adopting a Euclidean distance and/or a Manhattan distance according to the first coordinate information and the second coordinate information, wherein the positioning deviation value comprises a position deviation value and/or an angle deviation value;
determining a positioning precision test result of the point to be tested based on the comparison result, comprising:
and determining the positioning deviation value of the point to be measured as a positioning precision test result of the point to be measured.
According to the method, the acquired test pose image and the reference pose image of the point to be measured are compared by the image processing device, so that the post-processing process of the positioning precision testing system is simplified, the testing cost is effectively reduced, and the automation degree of the positioning precision testing system is improved.
Optionally, after determining the positioning accuracy test result of the point to be tested based on the comparison result, the method further includes:
determining a maximum positioning deviation value, a minimum positioning deviation value, an average positioning deviation value and a mean square error positioning deviation value from all the positioning deviation values corresponding to the same to-be-measured point;
determining a positioning precision range based on the maximum positioning deviation value and the minimum positioning deviation value;
and determining a positioning precision test statistical result of the point to be tested based on the positioning precision test result, the positioning precision range, the average positioning deviation value and the mean square error positioning deviation value.
The method can determine the positioning precision test statistical result of any one point to be tested in the task to be tested, so that the positioning precision test system is suitable for multi-point and multi-type positioning precision tests, and the application range of the positioning precision test system is expanded.
In a second aspect, a positioning accuracy testing apparatus is applied to an image processing apparatus in a positioning accuracy testing system, where the positioning accuracy testing system includes the image processing apparatus and at least one vision integration subsystem, the vision integration subsystem includes at least a wireless transceiver, a vision acquisition apparatus and a marking board, and the apparatus includes:
the system comprises an acquisition module, a detection module and a display module, wherein the acquisition module is used for acquiring a test pose image of a target robot positioned at a point to be detected, the target robot is a robot which executes a task to be detected and moves to the point to be detected, the test pose image is a first image of a marking plate corresponding to the point to be detected, which is acquired by an adjusted vision integration subsystem when the target robot moves to the point to be detected and is in a static state, and the first image comprises a first positioning mark emitted by a laser emitter on the body of the target robot;
the determining module is used for comparing the test pose image with a reference pose image corresponding to the point to be measured and determining a positioning precision test result of the point to be measured based on the comparison result; the reference pose image corresponding to the point to be measured is a second image of the marking plate corresponding to the point to be measured, which is acquired by the vision integration subsystem after adjustment when the target robot is placed at the point to be measured before the target robot executes the task to be measured, the center of a second positioning mark in the second image acquired by the vision integration subsystem after adjustment is coincident with the center of the marking plate, and the second positioning mark is emitted by a laser emitter on the body of the target robot.
Optionally, before the acquiring the test pose image of the target robot at the point to be measured, the acquiring module is further configured to:
and if the task to be detected is a task which indicates that the target robot moves for one time or multiple times according to a preset path, and the preset path comprises the point to be detected, triggering the adjusted vision integration subsystem to acquire the test pose image when the target robot is determined to be positioned at the point to be detected.
Optionally, the obtaining module is configured to determine that the target robot is located at the point to be measured by performing the following operations:
periodically reading the position information of the target robot in the process of executing the task to be detected by the target robot;
and if the position information read in the preset time range is the same, determining that the target robot is located at the point to be measured.
Optionally, the reference pose image is obtained by performing the following operations:
when an adjustment completion instruction is received, triggering the vision integration subsystem corresponding to the point to be measured to acquire a reference pose image of the point to be measured; the adjustment completion instruction is triggered after the user places the target robot at the point to be measured and finishes adjusting the pose of the vision integration subsystem corresponding to the point to be measured.
Optionally, when comparing the test pose image with the reference pose image, the determining module is configured to:
determining first coordinate information of a first positioning mark according to the first positioning mark in the test pose image and the scale value on the mark plate; determining second coordinate information of a second positioning mark according to the second positioning mark in the reference pose image and the scale value on the mark plate;
determining a positioning deviation value of the point to be measured by adopting a Euclidean distance and/or a Manhattan distance according to the first coordinate information and the second coordinate information, wherein the positioning deviation value comprises a position deviation value and/or an angle deviation value;
and determining a positioning precision test result of the point to be tested based on the comparison result, wherein the determination module is used for:
and determining the positioning deviation value of the point to be measured as a positioning precision test result of the point to be measured.
Optionally, after determining the positioning accuracy test result of the point to be tested based on the comparison result, the determining module is further configured to:
determining a maximum positioning deviation value, a minimum positioning deviation value, an average positioning deviation value and a mean square error positioning deviation value from all the positioning deviation values corresponding to the same to-be-measured point;
determining a positioning precision range based on the maximum positioning deviation value and the minimum positioning deviation value;
and determining a positioning precision test statistical result of the point to be tested based on the positioning precision test result, the positioning precision range, the average positioning deviation value and the mean square error positioning deviation value.
In a third aspect, an embodiment of the present application provides an intelligent device, including:
a memory for storing a computer program executable by a processor;
the processor is connected to the memory and configured to perform the method according to any of the above first aspects.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, where instructions, when executed by a processor, enable the processor to perform the method of any one of the above first aspects.
In addition, for technical effects brought by any one implementation manner of the second aspect to the fourth aspect, reference may be made to technical effects brought by different implementation manners of the first aspect, and details are not described here.
Drawings
Fig. 1 is a schematic structural diagram of a positioning accuracy testing system in an embodiment of the present application;
FIG. 2 is a schematic diagram of a reference pose image for determining a point to be measured in an embodiment of the present application;
FIG. 3 is a schematic diagram of a process for determining that a target robot is located at a point to be measured in an embodiment of the present application;
fig. 4 is a schematic flowchart of a method for testing positioning accuracy in an embodiment of the present application;
FIG. 5 is a schematic flow chart illustrating a statistical result of a positioning accuracy test for determining points to be tested according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a statistical result of a positioning accuracy test for determining points to be tested according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a logic structure of a positioning accuracy testing apparatus according to an embodiment of the present application;
fig. 8 is a schematic entity architecture diagram of an intelligent device in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," "third," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein.
In order to solve the problems of high test cost, complex test operation and complex maintenance of a positioning precision test mode of a mobile robot in the prior art, in the embodiment of the application, firstly, before the target robot executes a task to be tested, when the target robot is placed at a point to be tested, a second image of a marking plate corresponding to the point to be tested is acquired by an adjusted vision integration subsystem, and the second image is used as a reference pose image of the point to be tested, wherein the center of a second positioning mark in the second image acquired by the adjusted vision integration subsystem coincides with the center of the marking plate, and the second positioning mark is emitted by a laser emitter on the body of the target robot.
Then, the target robot executes a task to be tested, and a test pose image of the target robot at the point to be tested is obtained, wherein the test pose image is a first image of a marking plate corresponding to the point to be tested, which is acquired by the adjusted vision integration subsystem when the target robot moves to the point to be tested, and the first image comprises a first positioning mark emitted by a laser emitter on the body of the target robot; finally, the positioning precision test result of the point to be tested is determined based on the comparison result of the reference pose image and the test pose image, so that the test process is completed by using the vision integration subsystem.
In the following, preferred embodiments of the present application will be described in further detail with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are merely for purposes of illustration and explanation of the present application and are not intended to limit the present application, and that the features of the embodiments and examples of the present application may be combined with one another without conflict.
Fig. 1 shows a system structure diagram of a positioning accuracy testing system. Referring to fig. 1, in the embodiment of the present application, the positioning accuracy testing system 100 includes an image processing apparatus 110 and a vision integration subsystem 120.
The image processing device 110 is configured to, in an image acquisition link, read position information of the robot, send an image acquisition instruction to the vision integration subsystem when it is determined that the robot is located at a point to be measured, and receive a test pose image returned by the vision integration subsystem; the system is also used for acquiring a test pose image of the robot at the point to be measured in an image processing link, respectively comparing the test pose image with a reference pose image corresponding to the point to be measured, and determining a positioning precision test result of the point to be measured based on the comparison result;
the vision integration subsystem 120 comprises a wireless transceiver 1201, a vision acquisition device 1202 and a marking plate 1203, and the vision integration subsystem 120 is configured to receive an image acquisition instruction sent by the image processing device 110, acquire an image of the marking plate 1203 through the vision acquisition device 1202 based on the image acquisition instruction, and return the acquired image to the image processing device 110.
In the embodiment of the application, before specifically describing a method for testing the positioning accuracy of a robot, how to acquire a reference pose image of each point to be measured is described first.
In the embodiment of the application, before acquiring the reference pose image of each point to be measured, a test scene needs to be determined, a robot for executing the task to be measured in the test scene needs to be prepared (denoted as the target robot), and a laser emitter fixed relative to the target robot needs to be mounted on the target robot body.
Then, based on the test scene, a map consistent with the test scene is created in the control system of the target robot, N points to be tested are selected on a preset path in the map, and one vision integration subsystem is configured for each of the N points to be tested, where N is a positive integer greater than or equal to 1.
Optionally, the laser emitter on the target robot body may be a cross laser emitter, a ring laser emitter, or the like; the embodiment of the present application places no specific restriction on the laser emitter.
It should be noted that, in the embodiment of the present application, a specific manner of creating the map in accordance with the test scenario by the target robot is based on an implementation in the prior art, and is not described in detail herein.
In the embodiment of the application, after selecting N points to be measured on the preset path in the map, the user sequentially places the target robot at the N points to be measured, and obtains respective corresponding reference pose images by adjusting the poses of the vision integration subsystems corresponding to the N points to be measured, where the reference pose image of any point to be measured is a second image of the marking plate corresponding to the point to be measured, which is acquired by the vision integration subsystem of the point to be measured after adjustment, when the target robot is placed at the point to be measured before the target robot performs a task to be measured, the center of a second positioning mark in the second image coincides with the center of the marking plate, and the second positioning mark is emitted by a laser emitter on the body of the target robot.
In the embodiment of the present application, since the image processing apparatus obtains the reference pose images corresponding to the N to-be-measured points in the same manner, only a method for obtaining a reference pose image of any one to-be-measured point (hereinafter referred to as a to-be-measured point) among the N to-be-measured points is described below, and for other to-be-measured points, the corresponding reference pose images can be obtained by the same method as described below, which is not described herein again.
It should be noted that, before the target robot executes the task to be tested, the reference pose images corresponding to the N points to be tested need to be obtained in advance; in subsequent tests, it may then be agreed to test all of the N points to be tested, some of them, or any combination of them, thereby determining a positioning accuracy test result that meets the actual requirement.
In specific implementation, a user places a target robot on a point to be measured, and after the target robot stops stably (i.e., is in a static state), the pose of the vision integration subsystem corresponding to the point to be measured is adjusted until the center of the second positioning mark in the second image acquired by the vision integration subsystem coincides with the center of the mark plate of the point to be measured.
In the embodiment of the application, after the user finishes adjusting the pose of the vision integration subsystem corresponding to the point to be measured, an adjustment completion instruction is triggered; this instruction, issued once the user has placed the target robot at the point to be measured and completed the pose adjustment, instructs the image processing device to trigger the vision integration subsystem corresponding to the point to be measured to acquire the reference pose image of the point to be measured.
For example, a user may set the value of a register to a preset value through a User Interface (UI) configured on the body of robot A, where the preset value is used to instruct the image processing device to trigger the vision integration subsystem corresponding to the point to be measured to acquire the reference pose image of the point to be measured.
Then, when the image processing device reads that the value of robot A's register equals the preset value, it triggers the vision integration subsystem corresponding to the point to be measured to acquire the reference pose image of the point to be measured.
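As a rough illustration of this trigger, the sketch below polls the robot's register and fires the reference-image acquisition once the preset value is read; read_register, trigger_acquisition, and the value "Y" are assumed names following the example above, not an API defined by the patent.

```python
import time

PRESET_VALUE = "Y"  # value the user sets on the robot's UI after finishing adjustment

def wait_and_capture_reference(read_register, trigger_acquisition, poll_s=0.5):
    """Poll the robot's register; once it holds the preset value, trigger the
    vision integration subsystem of the point to be measured and return the
    reference pose image it acquires."""
    while read_register() != PRESET_VALUE:
        time.sleep(poll_s)
    return trigger_acquisition()

# Usage with stand-ins: the register already holds "Y", so capture fires at once.
image = wait_and_capture_reference(lambda: "Y", lambda: "reference-pose-image")
print(image)  # -> 'reference-pose-image'
```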
It should be noted that, in the embodiment of the present application, after the vision integration subsystem corresponding to the point to be measured is triggered to acquire the reference pose image of the point to be measured, the relative position between the marking plate of the vision integration subsystem corresponding to the point to be measured and the vision acquisition device needs to be maintained, so as to ensure the accuracy of the positioning accuracy test result obtained by the subsequent positioning accuracy test.
For example, referring to fig. 2, robot A is taken as an example.
Assume that the test scene is a warehouse, the task to be tested runs from point A to point B, and two points to be tested, point 1 to be measured and point 2 to be measured, are arranged between point A and point B.
Assume that a tester (i.e., the user) places robot A at point 1 to be measured and, after robot A has stopped stably, adjusts the pose of the vision integration subsystem corresponding to point 1 to be measured so that the center of the second positioning mark in the second image acquired by the subsystem coincides with the center of the marking plate of point 1 to be measured; the tester then sets the value of the register to Y on the UI of robot A.
The image processing device reads the position information of robot A (i.e., the value of the register), determines that robot A is located at point 1 to be measured, triggers the vision integration subsystem of point 1 to be measured to acquire the reference pose image of point 1 to be measured, and stores that reference pose image.
Then, the tester clears the value of robot A's register through the UI of robot A, places robot A at point 2 to be measured and, after robot A has stopped stably, adjusts the pose of the vision integration subsystem corresponding to point 2 to be measured so that the center of the second positioning mark in the second image acquired by the subsystem coincides with the center of the marking plate of point 2 to be measured; the tester then sets the value of the register to Y on the UI of robot A.
The image processing device reads the position information of robot A (i.e., the value of the register), determines that robot A is located at point 2 to be measured, triggers the vision integration subsystem of point 2 to be measured to acquire the reference pose image of point 2 to be measured, and stores that reference pose image.
In the embodiment of the application, based on the method for acquiring the reference pose images of the point to be measured 1 or the point to be measured 2, the reference pose images corresponding to the N points to be measured can be acquired, so that after the image processing device acquires the reference pose images corresponding to the N points to be measured, the target robot can be instructed to execute the task to be measured, and the positioning accuracy of the target robot can be tested.
In the embodiment of the application, for convenience of description, only one to-be-measured point (hereinafter referred to as a to-be-measured point) among the N to-be-measured points is taken as an example for description, and the other to-be-measured points may adopt the same method, which is not described herein again.
In specific implementation, the target robot executes a task to be measured, where the task to be measured may be a task that instructs the target robot to move once according to a preset path, or may be a task that instructs the target robot to move multiple times according to a preset path, where the preset path includes the point to be measured.
In the embodiment of the application, when the target robot is determined to be located at a point to be measured in the process of executing the task to be measured, the adjusted vision integration subsystem is triggered to acquire the image of the test pose.
In the embodiment of the present application, referring to fig. 3, the image processing apparatus may determine that the target robot is located at the point to be measured by the following steps:
step 300: and periodically reading the position information of the target robot in the process of executing the task to be measured by the target robot.
In the embodiment of the application, in the process of executing the task to be measured, the image processing device reads the position information of the target robot according to the preset period, and the position information is used for judging whether the target robot moves to the point to be measured.
For example, when robot A moves to the point to be measured, it sets the value of its own register to the preset value (for example, Y); otherwise, the value of the register is cleared.
Step 310: and if the plurality of pieces of position information read in the preset time range are the same, determining that the target robot is located at the point to be measured.
In the embodiment of the application, if the position information periodically read by the image processing device indicates that the target robot has moved to the point to be measured, and that position information does not change within the preset time range (i.e., the target robot is in a static state), it is determined that the target robot is located at the point to be measured.
Robot A is again taken as an example.
Assume that robot A moves to point 1 to be measured and sets the value of its own register to Y, and that the preset time range is 5 s.
The image processing apparatus reads the position information of robot A (i.e., the value of the register) and, when all the values read within the preset time range are the same, i.e., every value read within 5 s is Y, determines that robot A is located at point 1 to be measured.
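Steps 300 and 310 amount to a polling loop with a stability window. A possible Python rendering follows, under the assumption that read_position returns the register value from the example above and that identical reads throughout the window mean the robot is stationary:

```python
import time

def robot_is_at_point(read_position, expected="Y", window_s=5.0, period_s=1.0):
    """Steps 300/310 sketch: periodically read the robot's position information;
    if every value read within the preset time window is the same expected
    value, the target robot is taken to be stationary at the point to be measured."""
    deadline = time.monotonic() + window_s
    while time.monotonic() < deadline:
        if read_position() != expected:
            return False  # register cleared or robot moved on
        time.sleep(period_s)
    return True

# Usage with a fake register that always reads Y (shortened window for the demo):
print(robot_is_at_point(lambda: "Y", window_s=1.0, period_s=0.2))  # -> True
```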
Referring to fig. 4, in the embodiment of the present application, a positioning accuracy testing method is provided, which is applied to an image processing device in a positioning accuracy testing system, where the positioning accuracy testing system includes the image processing device and at least one vision integration subsystem, the vision integration subsystem includes at least a wireless transceiver, a vision acquisition device and a marking board, and the positioning accuracy testing method includes the following specific processes:
step 400: and acquiring a test pose image of the target robot positioned at the point to be measured, wherein the target robot is the robot which executes the task to be measured and moves to the point to be measured, the test pose image is a first image of a marking plate corresponding to the point to be measured, which is acquired by the adjusted vision integration subsystem when the target robot moves to the point to be measured and is in a static state, and the first image comprises a first positioning mark emitted by a laser emitter on the body of the target robot.
In the embodiment of the application, the image processing device acquires a test pose image of a target robot which executes a task to be tested and moves to a point to be tested, the test pose image is a first image of a marking plate corresponding to the point to be tested, which is acquired by the adjusted vision integration subsystem when the target robot moves to the point to be tested and is in a static state, and the first image comprises a first positioning mark emitted by a laser emitter on the body of the target robot.
It should be noted that the test pose image may be the image acquired by triggering the adjusted vision integration subsystem when the target robot executes a task to be tested that moves once along the preset path, or any one of the images acquired by triggering the adjusted vision integration subsystem when the target robot executes a task to be tested that moves multiple times along the preset path.
Step 410: comparing the test pose image with a reference pose image corresponding to the point to be tested; the reference pose image corresponding to the point to be measured is a second image of the marking plate corresponding to the point to be measured, which is acquired by the adjusted vision integration subsystem before the target robot executes the task to be measured, the center of a second positioning mark in the second image acquired by the adjusted vision integration subsystem coincides with the center of the marking plate, and the second positioning mark is emitted by a laser emitter on the body of the target robot.
In the embodiment of the present application, after the image processing apparatus executes step 400 and obtains a test pose image of the target robot at the point to be measured, the following operations are specifically executed when step 410 is performed:
the first operation is that the first coordinate information of the first positioning mark is determined according to the first positioning mark in the test pose image and the scale value on the mark plate.
And secondly, determining second coordinate information of the second positioning mark according to the second positioning mark in the reference pose image and the scale value on the mark plate.
And thirdly, determining a positioning deviation value of the point to be measured by adopting a Euclidean distance and/or a Manhattan distance according to the first coordinate information and the second coordinate information, wherein the positioning deviation value comprises a position deviation value and/or an angle deviation value.
Step 420: and determining a positioning precision test result of the point to be tested based on the comparison result.
In the embodiment of the present application, after the image processing apparatus performs the first to third operations, the positioning deviation value of the point to be measured is determined, and then, in step 420, the positioning deviation value of the point to be measured is determined as the positioning accuracy test result of the point to be measured.
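Operations one to three and step 420 reduce to reading two mark coordinates off the plate's scale and taking a distance between them. The following hedged sketch abstracts away coordinate extraction from the images; the (x, y, theta) tuples and degree units are assumptions for illustration.

```python
import math

def positioning_deviation(ref, test):
    """Compare the second (reference) and first (test) positioning marks, given
    as (x, y, theta) tuples read off the marking plate's scale values. Returns
    Euclidean and Manhattan position deviations plus the angle deviation."""
    (x0, y0, t0), (x1, y1, t1) = ref, test
    euclidean = math.hypot(x1 - x0, y1 - y0)  # position deviation value (Euclidean)
    manhattan = abs(x1 - x0) + abs(y1 - y0)   # position deviation value (Manhattan)
    angle = abs(t1 - t0)                      # angle deviation value
    return euclidean, manhattan, angle

# Usage: reference mark at the plate center, test mark offset by (3, 4) mm and 2 degrees.
print(positioning_deviation((0.0, 0.0, 90.0), (3.0, 4.0, 92.0)))  # -> (5.0, 7.0, 2.0)
```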
In this embodiment of the application, if the task to be tested is a task indicating that the target robot moves multiple times along the preset path, the image processing apparatus may determine multiple positioning accuracy test results for the point to be tested after performing steps 400 to 420. After the positioning accuracy test results of the point to be tested are determined based on the comparison results, referring to fig. 5, the following steps may further be performed to determine the positioning accuracy test statistical result of the point to be tested:
step 4201: and determining a maximum positioning deviation value, a minimum positioning deviation value, an average positioning deviation value and a mean square error positioning deviation value from all the positioning deviation values corresponding to the same point to be measured.
In this embodiment of the application, if the task to be measured is a task that instructs the target robot to move multiple times along the preset path, the image processing apparatus may obtain multiple positioning deviation values for the same point to be measured after performing step 420. When performing step 4201, it may further process all the positioning deviation values corresponding to the same point to be measured, so as to determine the maximum, minimum, average, and mean square error positioning deviation values of that point.
Specifically, a maximum position deviation value, a minimum position deviation value, an average position deviation value and a mean square error position deviation value are determined from all the position deviation values corresponding to the same point to be measured; and/or a maximum angle deviation value, a minimum angle deviation value, an average angle deviation value and a mean square error angle deviation value are determined from all the angle deviation values corresponding to the same point to be measured.
Step 4202: and determining a positioning precision range based on the maximum positioning deviation value and the minimum positioning deviation value.
In the embodiment of the present application, the corresponding positioning accuracy range is determined based on the maximum positioning deviation value and the minimum positioning deviation value determined in step 4201.
Step 4203: and determining a positioning precision test statistical result of the point to be tested based on the positioning precision test result, the positioning precision range, the average positioning deviation value and the mean square error positioning deviation value.
Therefore, the positioning accuracy of the target robot can be better evaluated through the positioning accuracy test statistical result.
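Steps 4201 to 4203 are plain descriptive statistics over the per-run deviation values. A minimal sketch is given below; the input list of deviations and the root-mean-square reading of "mean square error positioning deviation value" are assumptions consistent with the worked example that follows.

```python
import math

def deviation_statistics(deviations):
    """Steps 4201-4202 sketch: from all positioning deviation values recorded
    for the same point to be measured, compute the maximum, minimum, average,
    and root-mean-square deviations, plus the positioning precision range."""
    n = len(deviations)
    d_max, d_min = max(deviations), min(deviations)
    d_avg = sum(deviations) / n
    d_rms = math.sqrt(sum(d * d for d in deviations) / n)
    return {"max": d_max, "min": d_min, "avg": d_avg,
            "rms": d_rms, "range": (d_min, d_max)}

# Usage: position deviations (mm) from three runs of the task at the same point.
print(deviation_statistics([5.0, 3.0, 4.0]))
# -> {'max': 5.0, 'min': 3.0, 'avg': 4.0, 'rms': 4.0824..., 'range': (3.0, 5.0)}
```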
Referring to fig. 6, robot A is again taken as an example.
Assume that the task to be measured instructs robot A to move 3 times along a preset path, and that the preset path includes two points to be measured, namely point 1 to be measured and point 2 to be measured.
The determination method of the positioning accuracy test result for the point 1 to be tested is as follows:
assume that the reference pose image corresponding to the point to be measured 1 is image 0 before the robot a executes the task to be measured, and the three test pose images of the point to be measured 1 are image 1, image 2 and image 3 after the robot a completes the task to be measured.
Then, the image processing device acquires the test pose images (namely an image 1, an image 2 and an image 3) of the point to be measured; and respectively comparing the test pose image with a reference pose image (namely, image 0) corresponding to the point 1 to be measured.
Assume that the reference pose data corresponding to the reference pose image (image 0) is $(x_0, y_0, \theta_0)$, and that the test pose data corresponding to the test pose images are $(x_1, y_1, \theta_1)$ (image 1), $(x_2, y_2, \theta_2)$ (image 2), and $(x_3, y_3, \theta_3)$ (image 3).
Then, writing $d_i = \sqrt{(x_i - x_0)^2 + (y_i - y_0)^2}$ for the Euclidean position deviation and $\Delta\theta_i = |\theta_i - \theta_0|$ for the angle deviation of the $i$-th test ($i = 1, 2, 3$), the following positioning deviations in different dimensions are obtained:
(1) Position deviation values
Maximum position deviation value: $d_{\max} = \max(d_1, d_2, d_3)$
Minimum position deviation value: $d_{\min} = \min(d_1, d_2, d_3)$
Average position deviation value: $\bar{d} = \tfrac{1}{3}(d_1 + d_2 + d_3)$
Root mean square position deviation value: $d_{\mathrm{rms}} = \sqrt{\tfrac{1}{3}(d_1^2 + d_2^2 + d_3^2)}$
(2) Angle deviation values
Maximum angle deviation value: $\Delta\theta_{\max} = \max(\Delta\theta_1, \Delta\theta_2, \Delta\theta_3)$
Minimum angle deviation value: $\Delta\theta_{\min} = \min(\Delta\theta_1, \Delta\theta_2, \Delta\theta_3)$
Average angle deviation value: $\overline{\Delta\theta} = \tfrac{1}{3}(\Delta\theta_1 + \Delta\theta_2 + \Delta\theta_3)$
Root mean square angle deviation value: $\Delta\theta_{\mathrm{rms}} = \sqrt{\tfrac{1}{3}(\Delta\theta_1^2 + \Delta\theta_2^2 + \Delta\theta_3^2)}$
then, the image processing apparatus determines the 3 positioning accuracy test results of the point to be measured 1, respectively, based on the above positioning deviation values (i.e., the position deviation values and/or the angle deviation values).
Optionally, in this embodiment of the application, the image processing apparatus may further determine a positioning accuracy test statistical result of the corresponding point to be tested 1 based on the 3 positioning accuracy test results.
Optionally, in this embodiment of the application, the image processing apparatus may determine the positioning accuracy test statistical result of the point 2 to be tested in the above manner, which is not described herein again.
Based on the same inventive concept, referring to fig. 7, an embodiment of the present application provides a positioning accuracy testing apparatus, which is applied to an image processing apparatus in a positioning accuracy testing system, where the positioning accuracy testing system includes the image processing apparatus and at least one vision integration subsystem, the vision integration subsystem includes at least a wireless transceiver, a vision acquisition apparatus and a marking plate, and the image processing apparatus includes:
an obtaining module 710, configured to obtain a test pose image of a target robot located at a point to be measured, where the target robot is a robot that moves to the point to be measured to execute a task to be measured, the test pose image is a first image of a marking plate corresponding to the point to be measured, where the first image is acquired by the adjusted vision integration subsystem when the target robot moves to the point to be measured and is in a static state, and the first image includes a first positioning mark emitted by a laser emitter on a body of the target robot;
a determining module 720, configured to compare the test pose image with a reference pose image corresponding to the point to be measured, and determine a positioning accuracy test result of the point to be measured based on the comparison result; the reference pose image corresponding to the point to be measured is a second image of the marking plate corresponding to the point to be measured, which is acquired by the vision integration subsystem after adjustment when the target robot is placed at the point to be measured before the target robot executes the task to be measured, the center of a second positioning mark in the second image acquired by the vision integration subsystem after adjustment is coincident with the center of the marking plate, and the second positioning mark is emitted by a laser emitter on the body of the target robot.
Optionally, before acquiring the test pose image of the target robot at the point to be measured, the acquiring module 710 is further configured to:
and if the task to be detected is a task which indicates that the target robot moves for one time or multiple times according to a preset path, and the preset path comprises the point to be detected, triggering the adjusted vision integration subsystem to acquire the test pose image when the target robot is determined to be positioned at the point to be detected.
Optionally, the obtaining module 710 is configured to determine that the target robot is located at the point to be measured by performing the following operations:
periodically reading the position information of the target robot in the process of executing the task to be detected by the target robot;
and if the position information read in the preset time range is the same, determining that the target robot is located at the point to be measured.
Optionally, the reference pose image is obtained by performing the following operations:
when an adjustment completion instruction is received, triggering the vision integration subsystem corresponding to the point to be measured to acquire a reference pose image of the point to be measured; the adjustment completion instruction is triggered after the user places the target robot at the point to be measured and finishes adjusting the pose of the vision integration subsystem corresponding to the point to be measured.
Optionally, when comparing the test pose image with the reference pose image, the determining module 720 is configured to:
determining first coordinate information of a first positioning mark according to the first positioning mark in the test pose image and the scale value on the mark plate; determining second coordinate information of a second positioning mark according to the second positioning mark in the reference pose image and the scale value on the mark plate;
determining a positioning deviation value of the point to be measured by adopting a Euclidean distance and/or a Manhattan distance according to the first coordinate information and the second coordinate information, wherein the positioning deviation value comprises a position deviation value and/or an angle deviation value;
determining a positioning precision test result of the point to be tested based on the comparison result, wherein the determining module 720 is configured to:
and determining the positioning deviation value of the point to be measured as a positioning precision test result of the point to be measured.
Optionally, after determining the positioning accuracy test result of the point to be tested based on the comparison result, the determining module 720 is further configured to:
determining a maximum positioning deviation value, a minimum positioning deviation value, an average positioning deviation value and a mean square error positioning deviation value from all the positioning deviation values corresponding to the same to-be-measured point;
determining a positioning precision range based on the maximum positioning deviation value and the minimum positioning deviation value;
and determining a positioning precision test statistical result of the point to be tested based on the positioning precision test result, the positioning precision range, the average positioning deviation value and the mean square error positioning deviation value.
Referring to fig. 8, an embodiment of the present application provides an intelligent device, including:
a memory 801 for storing computer programs executable by the processor 802;
the processor 802 is connected to the memory 801 and is configured to execute any one of the methods executed by the testing apparatus for positioning accuracy in the above embodiments.
Based on the same inventive concept, the present application provides a computer-readable storage medium, and when instructions in the storage medium are executed by a processor, the processor is enabled to execute any one of the methods performed by the positioning accuracy testing apparatus in the foregoing embodiments.
In summary, in the embodiment of the present application, a test pose image of a target robot located at a point to be measured is obtained, the test pose image is compared with a reference pose image corresponding to the point to be measured, and a positioning accuracy test result of the point to be measured is determined based on the comparison result. Here, the target robot is the robot that executes the task to be measured and moves to the point to be measured; the reference pose image corresponding to the point to be measured is a second image of the marking plate corresponding to the point to be measured, acquired by the adjusted vision integration subsystem when the target robot is placed at the point to be measured before it executes the task to be measured, where the center of the second positioning mark in the second image coincides with the center of the marking plate and the second positioning mark is emitted by a laser emitter on the body of the target robot; the test pose image is a first image of the marking plate corresponding to the point to be measured, acquired by the adjusted vision integration subsystem when the target robot moves to the point to be measured and is in a static state, where the first image includes a first positioning mark emitted by the laser emitter on the body of the target robot. Therefore, based on the first positioning mark of the first image and the second positioning mark of the second image acquired by the adjusted vision integration subsystem, the positioning accuracy of the target robot in executing the task to be measured can be accurately determined, yielding a more accurate positioning accuracy test result. Meanwhile, the vision integration subsystem breaks through the limitation that a high-precision position measuring tool is needed in the traditional testing mode and effectively reduces the testing cost. Moreover, because the image processing device post-processes the test pose image and the reference pose image, the post-processing flow is simplified, the positioning accuracy testing system is easy to extend and maintain and is suitable for multi-point and multi-type positioning accuracy tests, scene migration and reuse are convenient, the degree of automation of the positioning accuracy testing system is further improved, and its application range is expanded.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (12)

1. A method for testing positioning accuracy, applied to an image processing device in a positioning accuracy testing system, wherein the positioning accuracy testing system comprises the image processing device and at least one vision integration subsystem, and the vision integration subsystem comprises at least a wireless transceiver, a vision acquisition device, and a marking plate, the method comprising:
acquiring a test pose image of a target robot located at a point to be measured, wherein the target robot is a robot that moves to the point to be measured while executing a task to be measured, the test pose image is a first image of the marking plate corresponding to the point to be measured, acquired by an adjusted vision integration subsystem when the target robot has moved to the point to be measured and is in a static state, and the first image comprises a first positioning mark emitted by a laser emitter on a body of the target robot; and
comparing the test pose image with a reference pose image corresponding to the point to be measured, and determining a positioning accuracy test result of the point to be measured based on a comparison result, wherein the reference pose image corresponding to the point to be measured is a second image of the marking plate corresponding to the point to be measured, acquired by the adjusted vision integration subsystem when the target robot is placed at the point to be measured before executing the task to be measured, a center of a second positioning mark in the second image coincides with a center of the marking plate, and the second positioning mark is emitted by the laser emitter on the body of the target robot;
wherein comparing the test pose image with the reference pose image comprises:
determining first coordinate information of the first positioning mark according to the first positioning mark in the test pose image and scale values on the marking plate, and determining second coordinate information of the second positioning mark according to the second positioning mark in the reference pose image and the scale values on the marking plate; and
determining a positioning deviation value of the point to be measured from the first coordinate information and the second coordinate information using a Euclidean distance and/or a Manhattan distance, wherein the positioning deviation value comprises a position deviation value and/or an angle deviation value (a computational sketch follows this claim);
and wherein determining the positioning accuracy test result of the point to be measured based on the comparison result comprises:
determining the positioning deviation value of the point to be measured as the positioning accuracy test result of the point to be measured.
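Read literally, the deviation step of this claim reduces to two standard metrics. The following is a minimal sketch, assuming planar (x, y) coordinates in millimetres read off the marking plate and, for the angle deviation, an orientation reading in degrees per image; the function names are illustrative and do not come from the patent.

    import math

    def position_deviation(ref_xy, test_xy, metric="euclidean"):
        """Position deviation between reference and test mark coordinates (mm)."""
        dx, dy = test_xy[0] - ref_xy[0], test_xy[1] - ref_xy[1]
        if metric == "euclidean":
            return math.hypot(dx, dy)
        if metric == "manhattan":
            return abs(dx) + abs(dy)
        raise ValueError(f"unknown metric: {metric}")

    def angle_deviation(ref_deg, test_deg):
        """Angle deviation, wrapped into [0, 180] so 359 deg vs 1 deg counts as 2 deg."""
        return abs((test_deg - ref_deg + 180.0) % 360.0 - 180.0)

For example, with a reference mark at (0.0, 0.0) and a test mark at (3.0, 4.0), the Euclidean deviation is 5.0 mm and the Manhattan deviation is 7.0 mm.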
2. The method according to claim 1, further comprising, before the acquiring of the test pose image of the target robot located at the point to be measured:
if the task to be measured is a task instructing the target robot to move one or more times along a preset path, and the preset path comprises the point to be measured, triggering the adjusted vision integration subsystem to acquire the test pose image when it is determined that the target robot is located at the point to be measured.
3. The method according to claim 1, wherein it is determined that the target robot is located at the point to be measured by performing the following operations:
periodically reading position information of the target robot while the target robot executes the task to be measured; and
if the position information read within a preset time range remains the same, determining that the target robot is located at the point to be measured.
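The check in claim 3 can be pictured as a polling loop that declares the robot stationary once its reported position stops changing for a preset window. This is a hedged sketch: read_position, the polling period, the window length, and the tolerance are all illustrative assumptions, since the patent specifies only periodic reads and an unchanged-within-a-window criterion.

    import time

    def wait_until_stationary(read_position, period_s=0.2, window_s=1.0, tol=1e-3):
        """Poll read_position() until it is unchanged (within tol) for window_s seconds."""
        stable_since = None
        last = read_position()
        while True:
            time.sleep(period_s)
            cur = read_position()
            if all(abs(c - l) <= tol for c, l in zip(cur, last)):
                if stable_since is None:
                    stable_since = time.monotonic()
                if time.monotonic() - stable_since >= window_s:
                    return cur  # unchanged for the whole window: robot is at the point
            else:
                stable_since = None  # moved again; restart the window
                last = cur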
4. The method according to claim 1, wherein the reference pose image is acquired by performing the following operations:
when an adjustment completion instruction is received, triggering the vision integration subsystem corresponding to the point to be measured to acquire the reference pose image of the point to be measured, wherein the adjustment completion instruction is triggered after a user places the target robot at the point to be measured and finishes adjusting a pose of the vision integration subsystem corresponding to the point to be measured.
5. The method according to any one of claims 1-4, further comprising, after the determining of the positioning accuracy test result of the point to be measured based on the comparison result:
determining a maximum positioning deviation value, a minimum positioning deviation value, an average positioning deviation value, and a mean-square-error positioning deviation value from all positioning deviation values corresponding to the same point to be measured;
determining a positioning accuracy range based on the maximum positioning deviation value and the minimum positioning deviation value; and
determining a positioning accuracy test statistical result of the point to be measured based on the positioning accuracy test result, the positioning accuracy range, the average positioning deviation value, and the mean-square-error positioning deviation value.
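Once per-run deviation values are collected for a point, the statistics in claim 5 follow directly. A minimal sketch; reading the "mean-square-error positioning deviation value" as the population standard deviation of the samples is an assumption on our part.

    import statistics

    def deviation_statistics(deviations):
        """Summarize repeated positioning deviation values (mm) for one point."""
        return {
            "max": max(deviations),
            "min": min(deviations),
            "range": (min(deviations), max(deviations)),  # the positioning accuracy range
            "mean": statistics.fmean(deviations),
            "mse_dev": statistics.pstdev(deviations),  # assumed reading of the MSE deviation
        }

For instance, deviation_statistics([4.8, 5.1, 5.0, 5.3]) reports a range of (4.8, 5.3) mm around a mean of 5.05 mm.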
6. A positioning accuracy testing apparatus, applied to an image processing device in a positioning accuracy testing system, wherein the positioning accuracy testing system comprises the image processing device and at least one vision integration subsystem, and the vision integration subsystem comprises at least a wireless transceiver, a vision acquisition device, and a marking plate, the apparatus comprising:
an acquisition module configured to acquire a test pose image of a target robot located at a point to be measured, wherein the target robot is a robot that moves to the point to be measured while executing a task to be measured, the test pose image is a first image of the marking plate corresponding to the point to be measured, acquired by an adjusted vision integration subsystem when the target robot has moved to the point to be measured and is in a static state, and the first image comprises a first positioning mark emitted by a laser emitter on a body of the target robot; and
a determining module configured to compare the test pose image with a reference pose image corresponding to the point to be measured and to determine a positioning accuracy test result of the point to be measured based on a comparison result, wherein the reference pose image corresponding to the point to be measured is a second image of the marking plate corresponding to the point to be measured, acquired by the adjusted vision integration subsystem when the target robot is placed at the point to be measured before executing the task to be measured, a center of a second positioning mark in the second image coincides with a center of the marking plate, and the second positioning mark is emitted by the laser emitter on the body of the target robot;
wherein, in comparing the test pose image with the reference pose image, the determining module is configured to:
determine first coordinate information of the first positioning mark according to the first positioning mark in the test pose image and scale values on the marking plate, and determine second coordinate information of the second positioning mark according to the second positioning mark in the reference pose image and the scale values on the marking plate; and
determine a positioning deviation value of the point to be measured from the first coordinate information and the second coordinate information using a Euclidean distance and/or a Manhattan distance, wherein the positioning deviation value comprises a position deviation value and/or an angle deviation value;
and wherein, in determining the positioning accuracy test result of the point to be measured based on the comparison result, the determining module is configured to:
determine the positioning deviation value of the point to be measured as the positioning accuracy test result of the point to be measured.
7. The apparatus according to claim 6, wherein, before the acquiring of the test pose image of the target robot located at the point to be measured, the acquisition module is further configured to:
if the task to be measured is a task instructing the target robot to move one or more times along a preset path, and the preset path comprises the point to be measured, trigger the adjusted vision integration subsystem to acquire the test pose image when it is determined that the target robot is located at the point to be measured.
8. The apparatus according to claim 6, wherein the acquisition module is configured to determine that the target robot is located at the point to be measured by:
periodically reading position information of the target robot while the target robot executes the task to be measured; and
if the position information read within a preset time range remains the same, determining that the target robot is located at the point to be measured.
9. The apparatus according to claim 6, wherein the reference pose image is acquired by performing the following operations:
when an adjustment completion instruction is received, triggering the vision integration subsystem corresponding to the point to be measured to acquire the reference pose image of the point to be measured, wherein the adjustment completion instruction is triggered after a user places the target robot at the point to be measured and finishes adjusting a pose of the vision integration subsystem corresponding to the point to be measured.
10. The apparatus according to any one of claims 6-9, wherein, after the determining of the positioning accuracy test result of the point to be measured based on the comparison result, the determining module is further configured to:
determine a maximum positioning deviation value, a minimum positioning deviation value, an average positioning deviation value, and a mean-square-error positioning deviation value from all positioning deviation values corresponding to the same point to be measured;
determine a positioning accuracy range based on the maximum positioning deviation value and the minimum positioning deviation value; and
determine a positioning accuracy test statistical result of the point to be measured based on the positioning accuracy test result, the positioning accuracy range, the average positioning deviation value, and the mean-square-error positioning deviation value.
11. A smart device, comprising:
a memory configured to store a computer program; and
a processor coupled to the memory and configured to perform the method of any one of claims 1-5.
12. A computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor, enable the processor to perform the method of any one of claims 1-5.
CN202111354126.5A 2021-11-16 2021-11-16 Positioning accuracy testing method, device, equipment and storage medium Active CN113804222B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111354126.5A CN113804222B (en) 2021-11-16 2021-11-16 Positioning accuracy testing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113804222A CN113804222A (en) 2021-12-17
CN113804222B (en) 2022-03-04

Family

Family ID: 78938329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111354126.5A Active CN113804222B (en) 2021-11-16 2021-11-16 Positioning accuracy testing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113804222B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115388911A (en) * 2022-08-24 2022-11-25 Oppo广东移动通信有限公司 Precision measurement method and device of optical motion capture system and electronic equipment
CN116499470B (en) * 2023-06-28 2023-09-05 苏州中德睿博智能科技有限公司 Optimal control method, device and system for positioning system of looking-around camera

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110116407A (en) * 2019-04-26 2019-08-13 哈尔滨工业大学(深圳) Flexible robot's pose measuring method and device
WO2019219077A1 (en) * 2018-05-18 2019-11-21 京东方科技集团股份有限公司 Positioning method, positioning apparatus, positioning system, storage medium, and method for constructing offline map database
CN111044046A (en) * 2019-12-09 2020-04-21 深圳市优必选科技股份有限公司 Method and device for testing positioning accuracy of robot
CN111046125A (en) * 2019-12-16 2020-04-21 视辰信息科技(上海)有限公司 Visual positioning method, system and computer readable storage medium
CN113259597A (en) * 2021-07-16 2021-08-13 上海豪承信息技术有限公司 Image processing method, apparatus, device, medium, and program product
CN113592946A (en) * 2021-07-27 2021-11-02 深圳甲壳虫智能有限公司 Pose positioning method and device, intelligent robot and storage medium
WO2021218683A1 (en) * 2020-04-30 2021-11-04 华为技术有限公司 Image processing method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of an industrial robot positioning system based on monocular vision; Huang Mingao; Modern Electronics Technique; 2017-09-30 (No. 18); pp. 114-119 *

Similar Documents

Publication Publication Date Title
CN113804222B (en) Positioning accuracy testing method, device, equipment and storage medium
CN107687855B (en) Robot positioning method and device and robot
US8700203B2 (en) Calibration method for a spherical measurement probe
US10386497B2 (en) Automated localization for GNSS device
CN110573832B (en) Machine vision system
US7830374B2 (en) System and method for integrating dispersed point-clouds of multiple scans of an object
de Araujo et al. Computer vision system for workpiece referencing in three-axis machining centers
CN108983001A (en) Curved screen touch-control performance test control method, device and test macro
CN104626205A (en) Method and device for detecting mechanical arm of robot
CN109633529B (en) Detection equipment, method and device for positioning accuracy of positioning system
CN117359135B (en) Galvanometer correction method, galvanometer correction device, computer apparatus, storage medium, and program product
CN114668415A (en) Method, device and equipment for testing displacement of teleoperation ultrasonic scanning robot
EP3693697A1 (en) Method for calibrating a 3d measurement arrangement
CN113554616A (en) Online measurement guiding method and system based on numerical control machine tool
CN111141217A (en) Object measuring method, device, terminal equipment and computer storage medium
CN112809668A (en) Method, system and terminal for automatic hand-eye calibration of mechanical arm
Krotova et al. Development of a trajectory planning algorithm for moving measuring instrument for binding a basic coordinate system based on a machine vision system
JP2002267438A (en) Free curved surface shape measuring method
CN108984833B (en) Tire mold-entering angle analysis method and device
CN112330737B (en) Parallel detection method, device, storage medium and apparatus
US8594970B2 (en) System and method for testing objects using a mechanical arm
US9002688B2 (en) System and method for simulating measuring process of workpiece
CN102166747A (en) System for testing object by mechanical arm and method thereof
CN112720074B (en) Method and device for processing workpiece information on numerical control machine tool
CN115816165A (en) Method for measuring a workpiece in a machine tool

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant