CN113504063B - Three-dimensional space touch screen equipment visualization test method based on multi-axis mechanical arm

Publication number: CN113504063B (application CN202110734355.3A; earlier publication CN113504063A)
Authority: CN (China); original language: Chinese (zh)
Inventors: 钱巨 (Qian Ju), 金苇 (Jin Wei)
Applicant and assignee: Nanjing University of Aeronautics and Astronautics
Legal status: Active (application granted)
Prior art keywords: touch screen, coordinate system, mechanical arm, calibration, touch


Classifications

    • G01M 99/008: testing by doing functionality tests (subject matter not provided for in other G01M groups)
    • B25J 9/16: programme controls of programme-controlled manipulators
    • B25J 9/1679: programme controls characterised by the tasks executed
    • B25J 11/00: manipulators not otherwise provided for
    • G01B 11/002: optical measuring arrangements for two or more coordinates
    • G01B 11/2433: optical measurement of contours or curvatures by shadow casting


Abstract

The invention discloses a visual test method for three-dimensional space touch screen equipment based on a multi-axis mechanical arm, comprising a robot device for testing three-dimensional space touch screen equipment and an execution method for visual test scripts that relies on the device. The robot device comprises a multi-axis mechanical arm, a depth camera, an operating table and a device under test. The test engine comprises a calibration mechanism for reducing touch errors, a positioning algorithm for determining the position of a target control in three-dimensional space, a touch motion planning algorithm, and a test script execution method composed of these. The testing device and method based on the multi-axis mechanical arm realize non-invasive testing of three-dimensional space equipment and solve the problem that graphical interface software on three-dimensional space equipment is difficult to test.

Description

Three-dimensional space touch screen equipment visualization test method based on multi-axis mechanical arm
Technical Field
The invention belongs to the technical field of computer peripheral equipment, and particularly relates to a visual test method for three-dimensional space touch screen equipment based on a multi-axis mechanical arm.
Background
With the widespread adoption of touch screen devices, large numbers of touch screen applications are being developed vigorously; while this builds a touch screen application ecosystem, it also brings challenges to software testing. Without good software testing techniques, testers spend great effort on repeated and tedious tests, wasting manpower, material resources, financial resources, and time.
Test automation refers to the process of automatically running the applications concerned under preset conditions, checking the running results, and converting human-driven test behavior into machine execution. Compared with traditional manual testing, automated testing of touch screen devices can greatly reduce time consumption and labor, avoid repetitive manual operation, lower test cost, and improve test efficiency.
In the field of automation of touch screen device testing, some techniques have been proposed and related tools have been devised. These test automation frameworks can be broadly divided into three categories: the traditional test automation technology, the visual test technology and the non-invasive robot test technology.
Conventional test automation techniques, such as UIAutomator, Robotium and Appium, perform GUI (graphical user interface) operations through control tags, indexes, coordinates, and the like. Because they need to manipulate the underlying controls of the operating system, conventional test automation techniques depend strongly on the interface framework of the system or application under test.
Visual testing techniques introduce computer vision and perform GUI tests using control images, thereby shedding the dependency on the interface framework; examples include Sikuli, JAutomate and AirTest. Visual GUI testing obtains the position of a control mainly by recognizing it in a captured screen image; it only needs to interact with the operating system and does not need to learn the interface layout through an interface framework, so it is more cross-platform. However, neither of the above two techniques escapes system constraints: both require a system interface provided by the device under test and are therefore intrusive, and closed systems that cannot provide an operating-system interface (such as a GoPro or an ATM) cannot be tested. Moreover, their test actions are triggered directly in the underlying system rather than through the real user interface, so they often fail to simulate the real user experience and their results may not be fully credible.
Non-invasive robot testing techniques introduce robotics to simulate the real interaction between a user and the device under test: test actions are no longer triggered by the underlying system but performed by a two-dimensional robot clicking the screen. This removes the dependence on system permissions of the device under test, is more cross-platform, and, because it simulates the user-device interaction process, yields more credible test results. However, the two-dimensional robot used in this technique is only suitable for automated testing of horizontally placed mobile devices; it cannot effectively test three-dimensional space devices with complex structures and postures (such as an ATM), so its application range is small and it still has limitations.
Disclosure of Invention
The invention aims to provide a three-dimensional space touch screen equipment visualization testing method based on a multi-axis mechanical arm, so as to solve the technical problems in the prior art.
The specific technical scheme of the invention is as follows:
a three-dimensional space touch screen equipment visualization test method based on a multi-axis mechanical arm comprises the following steps:
step 1, constructing a robot device for testing a three-dimensional space touch screen device;
the robot device comprises a multi-axis mechanical arm, a depth camera, an operation table and a tested device;
the device under test is provided with a touch screen; the device under test is arranged on the operating table, and the surface in which the touch screen lies is not level with the operating table;
the multi-axis mechanical arm and the depth camera are fixed on the operating platform, the depth camera is positioned right in front of the tested equipment, and the multi-axis mechanical arm is positioned on the side of the tested equipment;
the depth camera shoots the full screen of the tested equipment without shielding;
the multi-axis mechanical arm can reach any position on the screen of the tested device;
the touch pen is arranged at the tail end of the multi-axis mechanical arm and used for realizing the touch of the screen of the tested equipment;
and a conversion matrix of a camera coordinate system and a robot coordinate system is calculated between the depth camera and the multi-axis mechanical arm by using a hand-eye calibration technology.
Step 2, using an error calibration mechanism to perform touch attempts with feedback, fitting a touch position error correction function from the results, and using the correction function to overcome the long-range motion deviation caused by gravity on the multi-axis mechanical arm and to reduce the influence of depth-data calculation deviation of the depth camera;
step 3, calculating a plane equation of a plane where the screen is located and determining a screen area range by using an RGB-D image obtained by the depth camera after error calibration is completed, and constructing a three-dimensional space model of the screen of the tested device;
step 4, executing each test instruction in the script one by one, and when the test instruction is executed, firstly analyzing the touch screen action type and the screen target control primitive from the script; positioning the three-dimensional space coordinate of the target control under a camera coordinate system according to the control primitive and a three-dimensional control positioning algorithm; then converting the coordinates under the camera coordinate system into three-dimensional coordinates under the robot coordinate system by using the hand-eye calibration result of the robot device;
step 5, correcting the coordinates in the robot coordinate system with the error calibration result of step 2 to improve the accuracy of control positioning, and sending the control position and the parsed action type to the robot control program;
and step 6, the robot control program plans the basic motion steps of the robot with a touch motion planning algorithm according to the received action type and the three-dimensional coordinates of the target control in the robot coordinate system, decomposes one action in the script into several basic motion steps of the mechanical arm, converts each basic motion step into a joint angle sequence of the multi-axis mechanical arm with an inverse kinematics algorithm and a motion planning algorithm, and sends the joint angle sequence to the low-level drive of the mechanical arm, driving the multi-axis mechanical arm to click and slide at different positions of the screen.
Further, the error calibration mechanism in step 2 is realized with a calibration board formed by a tablet computer: a calibration program is loaded onto the tablet to form the calibration board; the calibration board is attached to the touch screen surface of the device under test, and the multi-axis mechanical arm is then automatically controlled to click the calibration points in the calibration program in sequence and collect feedback, completing the touch error calibration. The specific calibration steps are as follows:
step 2.1, starting the calibration program of the calibration board; the program displays S rows and S columns, in total M (M = S × S) calibration points, as the targets to be touched;
step 2.2, first identifying the calibration points with a Hough transform circle detection algorithm to obtain their coordinates in the RGB image shot by the depth camera; calculating the three-dimensional position of each calibration point on the calibration board in the camera coordinate system with the three-dimensional control positioning algorithm of step 4, and converting it with the hand-eye calibration result into the robot coordinate system as the target touch position A_i of the calibration point; finally, using the robot control program of step 6 to click the calibration points on the calibration board in sequence according to A_i;
step 2.3, recording, by the calibration program, the coordinate position and the touch contact time of each click in the two-dimensional coordinate system of the calibration board; if no touch is detected for a calibration point within the specified time, recalculating the three-dimensional coordinates of that calibration point with step 2.2 and gradually increasing the touch depth until the calibration board detects the touch; if the touch contact time exceeds a specified value, gradually decreasing the depth until the contact time falls within the specified range; for calibration point N_i, recording the corresponding target touch position A_i = (x_i, y_i, z_i) (i = 1...M) in the robot coordinate system, and the actual touch position in the two-dimensional coordinate system of the calibration board, whose origin is the upper left corner of the calibration board screen, with the x axis pointing right along the upper edge and the y axis pointing down along the left edge, in calibration board screen pixels; the screen front-view image coordinate system takes the upper left vertex of the screen front-view image as origin, with the x axis pointing right along the upper edge and the y axis pointing down along the left edge, in image pixels; according to the ratio between the calibration board coordinate system and the screen front-view image coordinate system, the actual touch position is converted into coordinates B_i in the screen front-view image coordinate system; then, with the three-dimensional control positioning algorithm of step 4 and the conversion matrix from hand-eye calibration, the coordinate of B_i in the robot coordinate system is calculated as C_i = (x'_i, y'_i, z'_i) (i = 1...M);
recording the deviation of the actual touch position C_i of each calibration point in the robot coordinate system from its target touch position A_i as d_i = (x'_i - x_i, y'_i - y_i, z'_i - z_i) (i = 1...M); then fitting an error correction function F(x, y) from B_i and d_i with a spline interpolation algorithm; when a target touch point has coordinates (x, y) in the screen front-view image coordinate system, the value F(x, y) = (dx, dy, dz) is the correction to apply to its coordinates (x', y', z') in the robot coordinate system.
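As an illustration of the spline fit in step 2.3, a minimal sketch in Python with SciPy follows; the function name `fit_correction`, the array layout and the use of SmoothBivariateSpline are assumptions for illustration, not the patent's own implementation.

```python
# Minimal sketch: fit the touch-error correction function F(x, y) of step 2.3.
# B: (M, 2) calibration-point coordinates in the screen front-view image
# coordinate system; d: (M, 3) deviations d_i in the robot coordinate system.
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def fit_correction(B, d):
    # One smoothing spline per deviation component (dx, dy, dz).
    splines = [SmoothBivariateSpline(B[:, 0], B[:, 1], d[:, k]) for k in range(3)]

    def F(x, y):
        return np.array([float(s.ev(x, y)) for s in splines])  # (dx, dy, dz)

    return F
```

With S = 6, the M = 36 samples are more than the 16 coefficients a default bicubic smoothing spline needs per component, so the fit is well determined.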
Further, in step 3 a screen three-dimensional space model is constructed to assist in locating the coordinates of the target control in three-dimensional space and improve positioning accuracy, by the following steps:
step 3.1, extracting the screen contour
The contour of the screen of the device under test is extracted from an RGB image shot by the depth camera by the following steps:
step 3.1.1, acquiring an RGB image of the device under test from the depth camera, and recognizing the polygonal area of the touch screen in the distorted image with a deep-learning target recognition algorithm; finding the minimum circumscribed rectangle of the polygonal area, then enlarging the rectangle by 20% so that the screen is completely contained in it while no complex background is included, and finally cropping this region;
step 3.1.2, recognizing the screen contour within the cropped rectangle with an edge detection technique: first finding the contour line enclosing the largest area, then taking from it the four points closest to the four corners of the image obtained in step 3.1.1 as the four vertices of the screen of the device under test;
step 3.1.3, projecting the area surrounded by the four vertexes to an orthographic plane by utilizing a perspective transformation technology to obtain a screen orthographic image S; constructing a screen front-view image coordinate system according to the screen front-view image S; extracting the screen outline of the tested equipment according to the steps so as to avoid the influence of a complex background on the subsequent test;
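One plausible OpenCV realization of steps 3.1.2 and 3.1.3 is sketched below; it assumes `crop` is the 20%-enlarged rectangular region from step 3.1.1, and the output size and Canny thresholds are illustrative values, not the patent's.

```python
# Sketch of steps 3.1.2-3.1.3: locate the screen contour in the cropped
# region and warp it to a front-view image S.
import cv2
import numpy as np

def extract_screen(crop, out_w=800, out_h=480):
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)       # contour enclosing the largest area
    pts = largest.reshape(-1, 2).astype(np.float32)
    h, w = crop.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # For each image corner, the closest contour point becomes a screen vertex.
    quad = np.float32([pts[np.argmin(((pts - c) ** 2).sum(axis=1))] for c in corners])
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    H = cv2.getPerspectiveTransform(quad, dst)
    S = cv2.warpPerspective(crop, H, (out_w, out_h))   # screen front-view image S
    return S, quad, H
```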
step 3.2, constructing a three-dimensional space model of the screen
The method for determining the specific position and size of the screen of the tested device in the three-dimensional space comprises the following steps:
step 3.2.1, randomly selecting n sampling points within the perspective-transformed screen range together with the two-dimensional coordinates of the 4 screen vertices, and calculating the positions of these n+4 points in the original camera image by inverse perspective transformation;
step 3.2.2, acquiring the three-dimensional coordinates of the n+4 sampling points in the camera coordinate system from the RGB-D image of the depth camera, and fitting a plane to the n+4 three-dimensional points to calculate the three-dimensional space equation P(x, y) of the plane in which the screen lies and its plane normal vector n in the camera coordinate system;
step 3.2.3, screen range correction: projecting the three-dimensional coordinates of the 4 screen vertices acquired by the depth camera onto the plane calculated in the previous step as the new four vertices of the screen, so as to finally determine the screen range; this finally yields the equation P(x, y) of the plane in which the screen lies and the plane normal vector n in the camera coordinate system, the coordinate positions of the four vertices, and the physical world length h_r and width w_r of the screen obtained from the distances between the screen vertices, completing the construction of the screen three-dimensional space model.
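The detailed description names RANSAC (random sample consensus) for this plane fit; a compact sketch under that assumption follows, with the iteration count and inlier tolerance as illustrative values.

```python
# Sketch of step 3.2.2: RANSAC plane fit over the n+4 sampled 3D points
# (camera coordinate system). points: (n+4, 3) array.
import numpy as np

def ransac_plane(points, iters=200, tol=0.003):
    rng = np.random.default_rng(0)
    best = None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:                 # degenerate sample
            continue
        n /= np.linalg.norm(n)
        inliers = points[np.abs((points - p0) @ n) < tol]
        if best is None or len(inliers) > len(best):
            best = inliers
    # Least-squares refit on the inliers: the plane passes through the
    # centroid, with the normal given by the smallest singular vector.
    c = best.mean(axis=0)
    normal = np.linalg.svd(best - c)[2][-1]
    return c, normal                                  # plane: normal · (x - c) = 0
```

The vertex correction of step 3.2.3 is then the orthogonal projection v - ((v - c)·normal)·normal of each screen vertex v onto the fitted plane.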
Further, in the step 4, the position of the target control under the camera coordinate system is determined by using the following three-dimensional control positioning algorithm, which specifically comprises the following steps:
step 4.1, first calculating the coordinates of the target control in the physical world screen two-dimensional coordinate system: obtaining, with an image matching algorithm, the coordinates (x_0, y_0) of the target control image in the screen front-view image coordinate system from step 3.1; then, from the length and width h_p, w_p of the screen front-view image and the physical world screen length and width h_r, w_r calculated in step 3.2, computing the coordinates (x_1, y_1) of the target control in the physical world screen coordinate system by proportion; the physical world screen coordinate system takes the upper left corner of the physical screen as origin, with the x axis pointing right along the upper edge and the y axis pointing down along the left edge, in physical world units:
x_1 = x_0 · (w_r / w_p)
y_1 = y_0 · (h_r / h_p)
step 4.2, calculating the three-dimensional position of the target control: after obtaining the coordinates (x_1, y_1) of the target control in the physical world screen coordinate system, constructing two orthogonal unit vectors u, v in the screen three-dimensional space model built in step 3.2 under the camera coordinate system; u and v are the unit vectors along the upper edge and the left edge of the screen in the screen space model respectively; then, taking the coordinate p of the upper left screen vertex in the screen space model as origin, the coordinate p_c = (x_c, y_c, z_c) of the target control in the camera coordinate system is calculated from u, v and p as
p_c = p + x_1·u + y_1·v.
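A short numpy sketch of the two formulas above, taking the screen vertices from the three-dimensional space model of step 3.2; the function and argument names are illustrative.

```python
# Sketch of the three-dimensional control positioning of step 4.
# tl, tr, bl: 3D top-left, top-right and bottom-left screen vertices in the
# camera coordinate system; (x0, y0): control position in the front-view
# image; w_p, h_p: front-view image width and height in pixels.
import numpy as np

def locate_control(x0, y0, tl, tr, bl, w_p, h_p):
    w_r = np.linalg.norm(tr - tl)            # physical screen width
    h_r = np.linalg.norm(bl - tl)            # physical screen height
    x1 = x0 * w_r / w_p                      # physical world screen coordinates
    y1 = y0 * h_r / h_p
    u = (tr - tl) / w_r                      # unit vector along the upper edge
    v = (bl - tl) / h_r                      # unit vector along the left edge
    return tl + x1 * u + y1 * v              # p_c = p + x1·u + y1·v
```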
Further, the touch motion planning algorithm in step 6 decomposes the action type of one instruction in the script into a plurality of basic motion steps of the mechanical arm, including the following steps:
step 6.1, computing the point at distance D_1 directly above the touch position along the screen normal as the first target position t_1 of the multi-axis mechanical arm's motion, and taking the touch position itself as the second target position t_2;
step 6.2, planning the third target position according to the action type: if the action type is click or long press, the third target position is the same as the first, and the multi-axis mechanical arm returns to it after reaching the touch point; the click and long-press actions then end with the movement sequence t_1 → t_2 → t_1;
step 6.3, if the action type is slide, moving a distance D_2 along the slide direction to obtain the third target position t_3, then computing the fourth target position t_4 at distance D_1 directly above t_3 as the last target position of the slide; the movement sequence is t_1 → t_2 → t_3 → t_4;
step 6.4, after all target positions of an action are obtained, determining the pose the stylus at the end of the multi-axis mechanical arm should take on reaching each target position; the stylus pose is represented by Euler angles (α, β, γ), where α, β and γ are the rotation radians of the stylus around the x, y and z axes of the robot coordinate system respectively; for a basic movement step whose target position is not on the screen, the pen tip pose is set perpendicular to the screen, pointing inwards; for a basic movement step whose target position is on the screen, the screen is divided vertically into k regions and the region containing the target position is determined; the pen tip pose is set to a different value for points in different regions: for screen regions close to the multi-axis mechanical arm the pen tip pose is set perpendicular to the screen pointing inwards, while for screen regions far from the multi-axis mechanical arm the angle between the vector represented by the pen tip pose and the screen plane is gradually reduced; if the touch position lies in region D_i (i = 0...k-1), the pen tip pose value is computed from the normal vector n of the screen plane in the robot coordinate system, the projection n_xy of n onto the z = 0 plane, and the unit vector z of the positive z axis of the robot coordinate system, with the angle between the pose vector and the screen plane decreasing as i increases;
step 6.5, given the coordinates of a target position and the corresponding pen tip pose on arrival, computing the motion path of the multi-axis mechanical arm from its current position to the target position with a path planning algorithm; the path consists of waypoints; for each waypoint, an inverse kinematics algorithm computes the angle each joint axis should rotate to when the arm moves to that point, so one waypoint corresponds to one group of joint angle values; the joint angle values for each waypoint are sent in sequence to the arm's low-level drive, and the multi-axis mechanical arm reaches the target position along the motion path in the specified pose, completing one motion;
step 6.6, before executing the next test action, the multi-axis mechanical arm performs an avoidance action: a fixed avoidance pose is set for the arm, and after each complete action it automatically enters this pose; in the avoidance pose the arm neither blocks the camera's view of the screen of the device under test nor moves far from the device, so it can respond faster when the next action is executed; meanwhile, during the path planning of step 6.5, the device under test is avoided as an obstacle.
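The decomposition of steps 6.1 to 6.3 can be condensed into a few lines; the sketch below assumes a unit screen normal n pointing away from the screen, and the names and clearance values are illustrative.

```python
# Sketch of steps 6.1-6.3: decompose a click, long press or slide into the
# target positions of the basic movement steps.
import numpy as np

def plan_targets(action, touch, n, D1, slide_dir=None, D2=None):
    t1 = touch + D1 * n                  # hover point above the touch position
    t2 = touch                           # the touch position itself
    if action in ("click", "long_press"):
        return [t1, t2, t1]              # t1 → t2 → t1
    if action == "slide":
        t3 = t2 + D2 * slide_dir         # slide endpoint on the screen
        t4 = t3 + D1 * n                 # lift-off point above the endpoint
        return [t1, t2, t3, t4]          # t1 → t2 → t3 → t4
    raise ValueError(action)
```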
The visual test method for the three-dimensional space touch screen equipment based on the multi-axis mechanical arm has the following advantages:
(1) The invention provides a set of robot devices for non-invasive testing of a three-dimensional space apparatus. In the test process, the device to be tested does not need to provide an interface, the access authority of the device to be tested does not need to be acquired, and the interface requirement on the software to be tested on the device to be tested is avoided. The method is not only suitable for mobile phones and flat panels, but also suitable for embedded three-dimensional space equipment such as ATM machines and the like, and has wide application range and strong universality.
(2) The method has high automation of error calibration, control positioning and multi-axis mechanical arm movement, does not need excessive user intervention, and can simulate various interaction modes of people and tested equipment, such as clicking, long pressing, sliding and the like.
(3) The method adopts a three-dimensional space positioning technique based on Mask-RCNN and RANSAC (random sample consensus) to obtain the position of the target control in three-dimensional space, with strong robustness.
(4) The touch motion planning algorithm provided by the method can decompose common actions (such as clicking, sliding and the like) of man-machine interaction into one or more mechanical arm basic motion steps, and enables the multi-axis mechanical arm to effectively complete touch motions of the edge area of a screen through self-adaptive pen point angle postures, so that the accuracy of test execution is improved.
Drawings
FIG. 1 is a diagram of a physical apparatus of a visual test method according to the present invention;
FIG. 2 is an exemplary diagram of a test script according to the present invention;
FIG. 3 is a flowchart illustrating a script execution method according to the present invention;
FIG. 4 is a flowchart of a screen contour extraction algorithm proposed by the present invention;
FIG. 5 is a flow chart of a screen three-dimensional space model construction algorithm proposed by the present invention;
FIG. 6 is a flow chart of a three-dimensional control positioning algorithm proposed by the present invention;
FIG. 7 is a schematic view of the hand-eye calibration of the present invention;
FIG. 8 is a flow chart of the error calibration mechanism of the present invention;
FIG. 9 is a schematic diagram of a calibration plate coordinate system according to the present invention;
FIG. 10 is a schematic diagram of a front view image coordinate system of the screen according to the present invention;
FIG. 11 is a schematic diagram of an error calibration mechanism according to the present invention;
FIG. 12 is a schematic diagram of the construction of a three-dimensional spatial model of a screen of a device under test according to the present invention;
FIG. 13 is a schematic view of target control positioning according to the present invention;
FIG. 14 is an exploded view of a clicking action according to the present invention;
FIG. 15 is an exploded view of the sliding motion of the present invention;
FIG. 16 is a diagram illustrating a tip attitude of a stylus according to the present invention.
The symbols in the figures: 1. multi-axis mechanical arm; 2. device under test; 3. stylus; 4. depth camera; 5. operating table; 6. actual touch position C_i of a calibration point; 7. target touch position A_i of a calibration point; 8. plane-fitting sampling point; 9. screen contour; 10. screen front-view image coordinate system; 11. physical world screen coordinate system; 12. camera coordinate system; 13. coordinates (x_0, y_0) of the target control in the screen front-view image coordinate system; 14. coordinates (x_1, y_1) of the target control in the physical world screen coordinate system; 15. coordinates (x_c, y_c, z_c) of the target control in the camera coordinate system; 16. screen of the device under test; 17. movement direction of the first step of the click action; 18. movement direction of the second step of the click action; 19. movement direction of the first step of the slide action; 20. movement direction of the second step of the slide action; 21. movement direction of the third step of the slide action; 22. first pen tip pose; 23. second pen tip pose.
Detailed Description
In order to better understand the purpose, structure and function of the present invention, a method for visually testing a stereoscopic space touch screen device based on a multi-axis mechanical arm according to the present invention is described in further detail below with reference to the accompanying drawings.
1. First stage (constructing the robot device for three-dimensional space touch screen equipment testing)
The invention firstly constructs a robot device for testing three-dimensional space touch screen equipment, which comprises a multi-axis mechanical arm 1, a depth camera 4, an operation table 5 and tested equipment 2.
The operation table 5 provides a stable operation space, and the schematic diagram of the construction is shown in fig. 1. The device under test 2 is disposed on or beside an operating table with a screen 16 of the device under test at an angle to the operating table. The multi-axis mechanical arm 1 and the depth camera 4 are fixed on an operation table. The depth camera 4 is located directly in front of the device under test 2, and the multi-axis robotic arm 1 is located to the side of the device under test 2. The depth camera 4 can shoot the whole appearance of the screen 16 of the tested device without shielding under the layout, and the multi-axis mechanical arm 1 can reach any position on the screen 16 of the tested device. The touch pen 3 is arranged at the tail end of the multi-axis mechanical arm 1 and used for realizing screen touch. The features and functions of the various components are as follows:
(1) A multi-axis robot arm 1. The multi-axis mechanical arm 1 is a six-axis mechanical arm, the touch pen 3 is arranged at the tail end of the multi-axis mechanical arm 1 and has a certain buffering effect, and the multi-axis mechanical arm 1 can reach an appointed position in a three-dimensional space in an appointed posture to complete touch operation on a screen of three-dimensional space equipment.
(2) A depth camera 4. The method is mainly used for shooting a screen 16 of the tested device, obtaining screen depth information, completing construction of a three-dimensional space model of the screen and matching of controls, and finally calculating the position of a target control in the three-dimensional space. Meanwhile, the test result can be judged according to the shot screen picture.
The robot device for testing the stereoscopic space touch screen equipment can be applied to testing of various mobile phones and tablet computers of models including android, apples and the like, and can meet testing requirements of stereoscopic embedded equipment with a touch screen, such as an ATM (automatic teller machine), a touch display and control desk and the like.
2. Second stage (hand and eye calibration)
After the robot device for testing three-dimensional space touch screen equipment is built, the multi-axis mechanical arm 1 must grip a calibration board while the camera shoots it for hand-eye calibration, that is, to determine the conversion relationship between the camera coordinate system 12 and the robot coordinate system; a schematic is shown in fig. 7. The transformations involved are as follows:
A: pose (coordinate position and attitude) of the end of the multi-axis mechanical arm 1 in the robot coordinate system;
B: pose of the calibration board in the coordinate system of the arm end;
C: pose of the depth camera 4 in the calibration board coordinate system;
D: pose of the depth camera 4 in the robot coordinate system;
wherein D is a conversion relationship between the camera coordinate system 12 and the robot coordinate system, which is finally obtained, and the following equation can be obtained through the schematic diagram of fig. 7:
D=A·B·C
Different postures are obtained by moving the multi-axis mechanical arm 1, and the depth camera 4 shoots the calibration board in each posture. Let the multi-axis mechanical arm 1 take two postures while the depth camera 4 stays fixed in the robot coordinate system, i.e. the camera pose is unchanged, D_1 = D_2; then
A_1·B_1·C_1 = A_2·B_2·C_2
where A_1 and A_2 are the poses of the end of the multi-axis mechanical arm 1 in the robot coordinate system under the two postures, B_1 and B_2 are the poses of the calibration board in the robot end coordinate system under the two postures, and C_1 and C_2 are the poses of the camera in the calibration board coordinate system under the two postures. Because the calibration board is always fixed relative to the end of the multi-axis mechanical arm, B_1 and B_2 are equal; writing B_1 = B_2 = B and rearranging gives
A_2^(-1)·A_1·B = B·C_2·C_1^(-1)
where B is a homogeneous transformation matrix of the form [R t; 0 1], R representing a rotation and t a translation. Transformation A can be obtained from the forward kinematics of the multi-axis mechanical arm 1, and transformation C from the camera images of the calibration board; B can then be solved from the equation above, and with A, B and C known, the hand-eye conversion relationship D is finally obtained.
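For reference, modern OpenCV ships a solver for exactly this AX = XB problem; the sketch below uses it under the eye-to-hand layout of this device (camera fixed, calibration board on the arm end), which is handled by inverting the gripper poses before the call. This is an illustrative stand-in, not the patent's own solver.

```python
# Sketch of the second-stage hand-eye calibration with OpenCV.
# R_g2b, t_g2b: rotation/translation of the arm end in the robot base frame
# for each posture (transformation A); R_t2c, t_t2c: calibration-board pose
# in the camera frame for each posture (the inverse of transformation C).
import cv2
import numpy as np

def hand_eye(R_g2b, t_g2b, R_t2c, t_t2c):
    # Eye-to-hand: pass base->gripper poses so the result is camera->base.
    R_b2g = [R.T for R in R_g2b]
    t_b2g = [-R.T @ t for R, t in zip(R_g2b, t_g2b)]
    R_cam, t_cam = cv2.calibrateHandEye(R_b2g, t_b2g, R_t2c, t_t2c,
                                        method=cv2.CALIB_HAND_EYE_TSAI)
    D = np.eye(4)                       # camera pose in the robot frame
    D[:3, :3], D[:3, 3] = R_cam, t_cam.ravel()
    return D
```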
3. Third stage (error calibration of the multi-axis mechanical arm 1, as shown in fig. 8)
Hand-eye calibration is followed by error calibration. Because of gravity acting on the multi-axis mechanical arm 1, when the end of the arm moves to a position far from the arm base, its actual position deviates from the preset coordinates. A calibration program is loaded onto a tablet computer to form a calibration board; the board is attached to the surface of the screen 16 of the device under test, and the multi-axis mechanical arm 1 is automatically controlled to click the calibration points in the calibration program in sequence and collect feedback, completing the touch error calibration and improving the accuracy with which the test method executes operations. The specific steps are:
step a, starting the calibration program of the calibration board; as shown in fig. 9 and fig. 10, the program displays S (e.g. 6) rows and S columns, in total M (M = S × S) calibration points, as the targets to be touched;
step b, recognizing the calibration points with a Hough transform circle detection algorithm to obtain their coordinates in the RGB image shot by the depth camera 4; then calculating the three-dimensional position of each calibration point on the calibration board in the camera coordinate system 12 with the three-dimensional control positioning algorithm, and converting it with the conversion matrix obtained from hand-eye calibration into the robot coordinate system as the target touch position A_i of the calibration point; finally, using the fourth-stage robot control program to click the calibration points on the calibration board in sequence according to A_i;
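The circle detection of step b can be done with OpenCV's Hough transform; a sketch follows, with all parameter values illustrative rather than the patent's.

```python
# Sketch of step b: detect the M calibration dots in the depth camera's RGB
# image with Hough circle detection.
import cv2
import numpy as np

def detect_calibration_points(rgb):
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)       # suppress screen noise and moiré
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                               param1=100, param2=30, minRadius=5, maxRadius=40)
    if circles is None:
        return np.empty((0, 2))
    return circles[0, :, :2]             # (x, y) centre of each detected dot
```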
step c, the calibration program records the coordinate position and touch contact time of each click in the two-dimensional coordinate system of the calibration board. If no touch is detected for a calibration point within the specified time, the three-dimensional coordinates of that calibration point are recalculated and the touch depth is gradually increased until the calibration board detects the touch; if the touch contact time exceeds a specified value, the depth is gradually decreased until the contact time falls within the specified range. For calibration point N_i, the corresponding target touch position in the robot coordinate system is recorded as A_i = (x_i, y_i, z_i) (i = 1...M), and the actual touch position is recorded in the two-dimensional coordinate system of the calibration board, whose origin is the upper left corner of the calibration board screen, with the x axis pointing right along the upper edge and the y axis pointing down along the left edge, in calibration board screen pixels. The screen front-view image coordinate system takes the upper left vertex of the screen front-view image as origin, with the x axis pointing right along the upper edge and the y axis pointing down along the left edge, in image pixels. According to the ratio between the calibration board coordinate system and the screen front-view image coordinate system, the actual touch position is converted into coordinates B_i in the screen front-view image coordinate system; then, with the three-dimensional control positioning algorithm and the conversion matrix from the second-stage hand-eye calibration, the coordinate of B_i in the robot coordinate system is calculated as C_i = (x'_i, y'_i, z'_i) (i = 1...M).
step d, the deviation of the actual touch position C_i (6) of each calibration point from its target touch position A_i (7) is recorded as d_i = (x'_i - x_i, y'_i - y_i, z'_i - z_i) (i = 1...M), as shown in fig. 11. A spline interpolation algorithm then computes the error correction function F(x, y) from B_i and d_i. When a target touch point has coordinates (x, y) in the screen front-view image coordinate system, the value F(x, y) = (dx, dy, dz) is the correction to apply to its coordinates (x', y', z') in the robot coordinate system.
4. Fourth stage (calculating target control position using script execution Engine)
The invention parses the script commands through the script execution method and executes the test script step by step, as shown in fig. 3.
Step a, firstly, analyzing screen control primitives and action types needing to be clicked from a script, wherein the script is shown in figure 2;
and b, acquiring a plane equation of a plane where the screen of the tested device is located and the spatial position of the vertex of the screen by using the depth camera 4, the perspective transformation technology of the image and the RANSAC random sampling consistency technology to complete the construction of the three-dimensional space model of the screen.
And c, calculating the three-dimensional coordinates of the target control under the camera coordinate system 12 by using the three-dimensional control positioning algorithm provided by the method.
As shown in fig. 4, the specific processing steps of the screen contour extraction technique based on deep-learning target recognition (algorithms such as Mask-RCNN), the screen three-dimensional space model construction technique based on RANSAC (random sample consensus), and the three-dimensional control positioning technique proposed by the method are as follows:
step 4.1, extracting the screen contour
Firstly, extracting the outline of the screen of the device to be tested from an RGB image shot by a depth camera 4, and the specific steps comprise:
step 4.1.1, acquiring an RGB image of the device under test from the depth camera 4, and recognizing the polygonal area of the touch screen in the distorted image with a deep-learning target recognition algorithm (such as Mask-RCNN); finding the minimum circumscribed rectangle of the polygonal area, then enlarging the rectangle by 20% so that the screen is completely contained in it while no complex background is included, and finally cropping this region.
step 4.1.2, recognizing the screen contour within the cropped rectangle with an edge detection technique: first finding the contour line enclosing the largest area, then taking from it the four points closest to the four corners of the image obtained in step 4.1.1 as the four vertices of the screen of the device under test.
And 4.1.3, projecting the area surrounded by the four vertexes to an orthographic plane by utilizing a perspective transformation technology to obtain a screen orthographic image S. And constructing a screen front-view image coordinate system according to the screen front-view image S. According to the steps, the screen outline of the tested device can be extracted, so that the influence of a complex background on subsequent testing is avoided.
And 4.2, constructing a three-dimensional space model of the screen, as shown in fig. 5.
Next, a three-dimensional space model of the screen is constructed to determine the specific position and size of the device under test screen 16 in the three-dimensional space, as shown in fig. 12, including the plane fitting sampling point 8 and the screen profile 9, and the main steps include:
4.2.1, randomly selecting n sampling points and two-dimensional coordinates of 4 screen vertexes in the screen range after perspective transformation, and calculating the positions of the n +4 points in an original image shot by a camera by utilizing reverse perspective transformation;
step 4.2.2, acquiring the three-dimensional coordinates of the n+4 sampling points in the camera coordinate system 12 from the depth camera, and fitting a plane to the n+4 three-dimensional points to calculate the three-dimensional space equation P(x, y) of the plane in which the screen lies and its plane normal vector n in the camera coordinate system 12;
step 4.2.3, screen range correction: projecting the three-dimensional coordinates of the 4 screen vertices acquired by the depth camera onto the plane calculated in the previous step as the new four vertices of the screen, so as to finally determine the screen range. This finally yields the equation P(x, y) of the plane in which the screen lies and the plane normal vector n in the camera coordinate system, the physical world length h_r and width w_r of the screen, and the coordinate positions of the four vertices, completing the construction of the screen three-dimensional space model.
And 4.3, positioning the target control, as shown in FIG. 6.
Step 4.3.1, the target control positioning process is shown in fig. 13. Firstly, calculating the coordinates of the target control in a two-dimensional coordinate system of a physical world screen: acquiring the coordinate (x) of the target control in the screen orthographic image coordinate system, which is obtained in the step 4.1, of the target control image by using an image matching algorithm 0 ,y 0 ) 13, looking forward at the length and width h of the image through the screen p ,w p The length and width h of the physical world screen calculated in the step 4.2 r ,w r The coordinate (x) of the target control in the coordinate system of the physical world screen is calculated according to the ratio 1 ,y 1 )14。
Figure BDA0003141013460000163
Figure BDA0003141013460000164
Step 4.3.2, calculating the three-dimensional position of the target control: obtaining the coordinate (x) of the target control in the coordinate system of the physical world screen 1 ,y 1 ) Then, two orthogonal unit vectors u, v are constructed in the screen three-dimensional space model under the camera coordinate system 12 constructed in step 3.2. u and v are unit vectors of the screen upper edge and the left edge in the screen space model (under the camera coordinate system 12), respectively, a coordinate p of a vertex at the upper left corner of the screen in the screen space model is used as an origin, and the coordinate p of the target control in the camera coordinate system is calculated by utilizing the orthogonal unit vectors u, v and the coordinate p c =(x c ,y c ,z c )15;
p c =p+x 1 ·u+y 1 ·v。
5. Fifth stage (robot executing the operation)
After the script execution engine calculates the coordinate position of the target control in the camera coordinate system, the coordinate is converted into the robot coordinate system with the hand-eye calibration result, and the position and the corresponding action type are sent to the robot control program; after receiving the target control position and action type from the host computer, the robot control program drives the multi-axis mechanical arm to execute the corresponding operation with the touch motion planning algorithm. The specific steps are as follows:
step 5.1, coordinate transformation
The script execution engine converts the target control position p_c in the camera coordinate system into the three-dimensional coordinate p_r in the robot coordinate system with the conversion matrix M obtained from hand-eye calibration:
p_r = M·p_c
Step 5.2, error calibration result application
After obtaining the coordinate p_r = (x_r, y_r, z_r) of the target control in the robot coordinate system, the coordinate needs to be revised with the calibration function F(x, y) obtained from the third-stage error calibration; if the coordinates of the target control in the screen front-view image coordinate system are (x_p, y_p), the calibrated coordinate of the target control in the robot coordinate system, i.e. the value actually sent to the robot control program, is
p' = (x'_r, y'_r, z'_r) = p_r + F(x_p, y_p)
Finally, p' is sent as the touch position to the robot control program together with the corresponding action type;
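Steps 5.1 and 5.2 amount to one matrix product and one correction lookup; a minimal sketch, reusing the correction function F fitted in the third stage (names illustrative):

```python
# Sketch of steps 5.1-5.2: hand-eye conversion plus error correction.
# M: 4x4 hand-eye matrix; p_c: control position in the camera frame;
# (x_p, y_p): control position in the screen front-view image.
import numpy as np

def touch_position(p_c, M, F, x_p, y_p):
    p_r = (M @ np.append(p_c, 1.0))[:3]   # p_r = M · p_c (homogeneous form)
    return p_r + F(x_p, y_p)              # p' = p_r + F(x_p, y_p)
```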
step 5.3, touch motion planning
Step 5.3.1, calculating the distance D right above the touch position according to the touch position 1 As a first target position t of the multi-axis robot arm to perform the motion 1 . Taking the touch position as a second target position t of the multi-axis mechanical arm to execute the action 2
And 5.3.2, planning a third touch position according to the action type, wherein if the action type is clicking and long pressing, the third target position is the same as the first target position, and the multi-axis mechanical arm returns to the position after reaching the touch point. So far, the click and long press actions are finished, and the operation is movedThe sequence of movements being t 1 →t 2 →t 1 As shown in fig. 14, the direction of movement 17 of the first step of the clicking action and the direction of movement 18 of the second step of the clicking action are included.
Step 5.3.3, if the action type is sliding, moving the distance D along the sliding direction 2 Calculating a third target position t 3 Then calculating t 3 Directly above distance D 1 Fourth target position t 4 It is taken as the last target position of the sliding motion, and the moving sequence is t 1 →t 2 →t 3 →t 4 As shown in fig. 15, the direction of movement 19 for the first step of the sliding motion, the direction of movement 20 for the second step of the sliding motion, and the direction of movement 21 for the third step of the sliding motion are included.
step 5.3.4, after all target positions of an action are obtained, determining the pose the stylus at the end of the multi-axis mechanical arm should take on reaching each target position; the stylus pose is represented by Euler angles (α, β, γ), where α, β and γ are the rotation radians of the stylus around the x, y and z axes of the robot coordinate system respectively. For a basic motion step whose target position is not on the screen, the pen tip pose is set perpendicular to the screen, pointing inwards. For a basic motion step whose target position is on the screen, the screen is divided vertically into k (e.g. 3) regions and the region containing the target position is determined; the pen tip pose is set to a different value for points in different regions: for screen regions close to the multi-axis mechanical arm the pen tip pose is set perpendicular to the screen pointing inwards, while for screen regions far from the arm the angle between the vector represented by the pen tip pose and the screen plane is gradually reduced. If the touch position lies in region D_i (i = 0...k-1), the pen tip pose value is computed from the normal vector n of the screen plane in the robot coordinate system, the projection n_xy of n onto the z = 0 plane, and the unit vector z of the positive z axis of the robot coordinate system, with the angle between the pose vector and the screen plane decreasing as i increases.
The first pen tip pose 22 and the second pen tip pose 23 in fig. 16 are schematic pen tip poses of the stylus in a screen area a near the multi-axis manipulator and a screen area B far from the multi-axis manipulator, respectively.
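The exact per-region pose formulas of step 5.3.4 appear in the original only as figures and are not reproduced here. Purely to illustrate the idea, the sketch below assumes a linear tilt schedule: the pen direction starts at the inward screen normal for region D_0 and tilts toward the projection of the normal onto the z = 0 plane as the region index grows; the 30° maximum tilt is likewise an assumption.

```python
# Illustrative stand-in for the pen tip direction of step 5.3.4.
# n: inward screen-plane normal in the robot frame (screen roughly upright,
# so its z = 0 projection is nonzero); i: region index; k: region count.
import numpy as np

def pen_direction(n, i, k, max_tilt=np.deg2rad(30)):
    n = n / np.linalg.norm(n)
    n_xy = np.array([n[0], n[1], 0.0])        # projection onto the z = 0 plane
    n_xy /= np.linalg.norm(n_xy)
    theta = max_tilt * i / max(k - 1, 1)      # tilt grows with region index
    d = np.cos(theta) * n + np.sin(theta) * n_xy
    return d / np.linalg.norm(d)              # unit pen direction vector
```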
step 5.3.5, given a target position and the corresponding pose on arrival, a path planning algorithm computes the motion path of the multi-axis mechanical arm from its current position to the target position; the path consists of several waypoints; for each waypoint, an inverse kinematics algorithm computes the angle to which each joint axis should rotate when the arm moves to that point, so one waypoint corresponds to one group of joint angle values; the joint angle values for each waypoint are sent in sequence to the arm's low-level drive, and the multi-axis mechanical arm reaches the target position along the motion path in the specified pose, completing one motion.
step 5.3.6, to execute test actions continuously while keeping the camera's view of the screen unobstructed, the multi-axis mechanical arm performs an avoidance action before the next test action. The method sets a fixed avoidance pose for the arm; after each complete action (such as a click or slide) the arm automatically enters this pose, in which it neither blocks the camera's view of the screen of the device under test nor moves far from the device, so that it can respond faster when the next action is executed. Meanwhile, obstacles must also be avoided during the path planning of step 5.3.5; to prevent the multi-axis mechanical arm from colliding with the device under test during motion, the method treats the device under test as an obstacle and keeps the route from passing through it.
In summary, the invention completes automated testing of three-dimensional space equipment through two parts: a robot device for testing three-dimensional space touch screen equipment, and an execution method for visual test scripts that relies on that device. Image processing, robot driving and related technologies ensure that the testing method achieves good results on different devices under test.
In the experiments, 60 different test scripts were designed for 12 different applications such as WeChat, Reminders and Taobao, covering application fields such as social networking and online shopping and about ten actions such as screen clicking, text input, sliding, long pressing and dragging. In actual checks, 95% of the test scripts executed the test steps as preset, which shows that the proposed technique can meet most test requirements on three-dimensional space equipment in a non-invasive way.
It is to be understood that the present invention has been described with reference to certain embodiments, and that various changes in the features and embodiments, or equivalent substitutions may be made therein by those skilled in the art without departing from the spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (5)

1. A three-dimensional space touch screen equipment visualization test method based on a multi-axis mechanical arm is characterized by comprising the following steps:
step 1, constructing a robot device for testing a three-dimensional space touch screen device;
the robot device comprises a multi-axis mechanical arm, a depth camera, an operating platform and tested equipment;
a touch screen is arranged on the tested equipment; the tested equipment is placed on the operating platform, and the surface on which the touch screen lies is not in the same horizontal plane as the operating platform;
the multi-axis mechanical arm and the depth camera are fixed on the operating platform, the depth camera is positioned right in front of the tested equipment, and the multi-axis mechanical arm is positioned on the side of the tested equipment;
the depth camera captures an unobstructed full view of the touch screen of the tested equipment;
the multi-axis mechanical arm can reach any position on a touch screen of the tested equipment;
a stylus is mounted at the end of the multi-axis mechanical arm to operate the touch screen of the tested equipment;
a transformation matrix between the camera coordinate system and the robot coordinate system is computed for the depth camera and the multi-axis mechanical arm using hand-eye calibration;
step 2, using an error calibration mechanism, perform touch attempts with feedback and fit a touch position error correction function from the results; the correction function compensates the long-reach motion deviation caused by the weight of the multi-axis mechanical arm and reduces the influence of depth computation deviations of the depth camera;
step 3, after error calibration is complete, use the depth camera to compute the equation of the plane in which the touch screen of the tested equipment lies, determine the extent of the touch screen, and construct a three-dimensional space model of the touch screen;
step 4, execute the test instructions in the script one by one; for each instruction, first parse from the script the type of action to perform on the touch screen and the target control primitive; then, from the control primitive and a three-dimensional control positioning algorithm, locate the three-dimensional coordinates of the target control in the camera coordinate system; finally, convert these coordinates into three-dimensional coordinates in the robot coordinate system using the hand-eye calibration result of the robot device;
step 5, correct the three-dimensional coordinates in the robot coordinate system with the error calibration result of step 2 to improve control positioning accuracy, and send the control position together with the parsed action type to a robot control program;
step 6, the robot control program uses a touch motion planning algorithm to plan the basic motion steps of the multi-axis mechanical arm from the received action type and the target control's three-dimensional coordinates in the robot coordinate system, decomposing one action in the script into several basic motion steps of the mechanical arm; each basic motion step is converted into a joint angle sequence of the multi-axis mechanical arm by an inverse kinematics algorithm and a motion planning algorithm, and the sequence is sent to the arm's low-level drive so that the multi-axis mechanical arm clicks and slides at different positions on the touch screen of the tested equipment.
2. The three-dimensional space touch screen equipment visualization test method based on a multi-axis mechanical arm according to claim 1, wherein the error calibration mechanism of step 2 is implemented with a calibration board formed by loading a calibration program onto a tablet computer; the calibration board is attached to the surface of the touch screen of the tested equipment, and the multi-axis mechanical arm is then automatically controlled to click the calibration points of the calibration program in sequence and collect feedback, completing the touch error calibration; with the calibration board in place, the specific calibration steps are as follows:
step 2.1, start the calibration program on the calibration board; the program displays M calibration points arranged in S rows and S columns as targets for touch attempts;
step 2.2, first identify the calibration points with a Hough-transform circle detection algorithm to obtain their coordinates in the RGB two-dimensional image captured by the depth camera; use the three-dimensional control positioning algorithm of step 4 to compute the three-dimensional position of each calibration point on the calibration board in the camera coordinate system, then convert it with the hand-eye calibration result into a three-dimensional coordinate in the robot coordinate system as the target touch position A_i of calibration point i; finally, use the robot control program of step 6 to click the calibration points on the calibration board in sequence according to A_i;
step 2.3, the calibration program records the position, in the calibration board's two-dimensional coordinate system, and the contact duration of each click; if no touch is detected at a calibration point within the specified time, its three-dimensional coordinates are recomputed per step 2.2 and the touch depth is increased gradually until the calibration board detects a touch; if the contact duration exceeds a specified value, the depth is decreased gradually until the duration falls within the specified range; for each calibration point N_i, record the target touch position A_i = (x_i, y_i, z_i) (i = 1...M) in the robot coordinate system; the actual touch position of N_i is recorded as coordinates B_i in the calibration board two-dimensional coordinate system, whose positive x-axis runs rightward along the top edge of the calibration board screen and whose positive y-axis runs downward along the left edge, in calibration-board screen pixels;
the calibration board screen front-view image coordinate system has its origin at the top-left vertex of the front-view image of the calibration board screen, with the positive x-axis rightward along the top edge of the image and the positive y-axis downward along the left edge, in image pixels; according to the scale between the calibration board two-dimensional coordinate system and the calibration board screen front-view image coordinate system, convert B_i into coordinates B'_i in the front-view image coordinate system; then, with the three-dimensional control positioning algorithm of step 4 and the transformation matrix from the hand-eye calibration, compute the coordinates of B'_i in the robot coordinate system as C_i = (x'_i, y'_i, z'_i) (i = 1...M);
record the deviation between the actual touch position C_i = (x'_i, y'_i, z'_i) of each calibration point in the robot coordinate system and its target touch position A_i = (x_i, y_i, z_i) as d_i = (x'_i - x_i, y'_i - y_i, z'_i - z_i) (i = 1...M), then use a spline interpolation algorithm to compute an error correction function F(x, y) from C_i and d_i; when the target touch point has coordinates (x, y) in the front-view image coordinate system of the tested equipment's touch screen, the value F(x, y) = (dx, dy, dz) is the correction applied to the corresponding position in the robot coordinate system.
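As a rough illustration of the correction-function fit at the end of claim 2, the sketch below interpolates per-axis deviations d_i over measured touch positions C_i; it uses SciPy's piecewise-cubic scattered-data interpolation as a stand-in for the spline interpolation named in the claim, and the calibration data is synthetic.

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic stand-ins for the calibration measurements of claim 2.
rng = np.random.default_rng(0)
M = 25
C = rng.uniform(0.0, 0.30, size=(M, 2))        # (x'_i, y'_i): actual touch positions (m)
d = 0.002 * rng.standard_normal(size=(M, 3))   # d_i: per-axis deviation at each C_i (m)

def F(x, y):
    """Interpolated correction (dx, dy, dz) at a target touch point (x, y)."""
    q = np.array([[x, y]])
    return np.array([griddata(C, d[:, k], q, method="cubic", fill_value=0.0)[0]
                     for k in range(3)])

dx, dy, dz = F(0.15, 0.12)   # correction to apply before driving the arm
```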
3. The three-dimensional space touch screen equipment visualization test method based on a multi-axis mechanical arm according to claim 2, wherein step 3 constructs the three-dimensional space model of the touch screen of the tested equipment to assist in locating the coordinates of the target control in three-dimensional space and to improve positioning accuracy, comprising the following steps:
step 3.1, extracting the outline of the touch screen of the tested equipment;
the outline of the touch screen is extracted from the RGB two-dimensional image of the tested equipment captured by the depth camera, as follows:
step 3.1.1, acquire an RGB two-dimensional image of the tested equipment from the depth camera and identify the polygonal region of the touch screen in the distorted image with a deep-learning-based target recognition algorithm; find the minimum bounding rectangle of the polygonal region, then enlarge the rectangle's area by 20% so that the touch screen is fully contained while complex background is excluded, and finally crop this region;
step 3.1.2, identify the outline of the touch screen within the cropped rectangle using edge detection: first find the contour line enclosing the largest area, then take from that contour the four points closest to the four corners of the image obtained in step 3.1.1 as the four vertices of the touch screen;
step 3.1.3, project the region enclosed by the four vertices onto a front-view plane by perspective transformation to obtain a front-view image S of the touch screen, and construct the screen front-view image coordinate system from S; extracting the screen outline of the tested equipment prevents complex background from interfering with subsequent testing;
step 3.2, constructing a three-dimensional space model of the touch screen;
the specific position and size of the touch screen in three-dimensional space are determined as follows:
step 3.2.1, randomly select n sampling points within the touch screen region after perspective transformation, together with the two-dimensional coordinates of the 4 touch screen vertices, and compute the positions of these n+4 points in the original RGB two-dimensional image captured by the depth camera using the inverse perspective transformation;
step 3.2.2, acquire the three-dimensional coordinates of the n+4 sampling points in the camera coordinate system from the depth camera, and fit a plane to the n+4 three-dimensional coordinates to obtain the three-dimensional space equation P(x, y) of the plane of the touch screen in the camera coordinate system and the plane normal vector n;
step 3.2.3, touch screen range correction: project the three-dimensional coordinates of the 4 touch screen vertices acquired by the depth camera onto the plane computed in the previous step and take the projections as the new four vertices of the touch screen, finally fixing the extent of the touch screen; this yields the equation P(x, y) of the touch screen plane and its normal vector n in the camera coordinate system, the coordinates of the four vertices, and the physical-world length h_r and width w_r of the touch screen computed from the distances between the vertices, completing the construction of the touch screen three-dimensional space model.
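A minimal sketch of the plane fit of steps 3.2.2-3.2.3, using an SVD least-squares plane through synthetic stand-ins for the depth camera samples; the point data and helper names are assumptions.

```python
import numpy as np

# Synthetic stand-ins for the n+4 sampled points in the camera frame (metres).
rng = np.random.default_rng(1)
xy = rng.uniform(-0.10, 0.10, size=(20, 2))
z = 0.50 + 0.20 * xy[:, 0] - 0.10 * xy[:, 1] + 0.0005 * rng.standard_normal(20)
pts = np.column_stack([xy, z])

# Least-squares plane via SVD: the singular vector with the smallest singular
# value of the centred points is the plane normal.
centroid = pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts - centroid)
normal = vt[2]
offset = -normal @ centroid            # plane equation: normal . x + offset = 0

def P(x, y):
    """Depth z on the fitted plane at lateral position (x, y) (step 3.2.2)."""
    nx, ny, nz = normal
    return -(nx * x + ny * y + offset) / nz

def snap_vertex(v):
    """Step 3.2.3: project a measured screen vertex onto the fitted plane."""
    return v - (normal @ v + offset) * normal
```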
4. The three-dimensional space touch screen equipment visualization test method based on a multi-axis mechanical arm according to claim 3, wherein step 4 determines the position of the target control in the camera coordinate system with a three-dimensional control positioning algorithm, specifically comprising the following steps:
step 4.1, first compute the coordinates of the target control in the real touch screen two-dimensional coordinate system: obtain, with an image matching algorithm, the coordinates (x_0, y_0) of the target control image in the touch screen front-view image coordinate system from step 3.1; then, from the length and width h_p, w_p of the front-view image and the physical-world length and width h_r, w_r of the touch screen computed in step 3.2, compute the coordinates (x_1, y_1) of the target control in the physical-world touch screen coordinate system by scale:
x_1 = x_0 · (w_r / w_p), y_1 = y_0 · (h_r / h_p);
the physical-world touch screen coordinate system has its origin at the top-left corner of the physical touch screen, with the positive x-axis rightward along the top edge and the positive y-axis downward along the left edge, in physical-world units;
step 4.2, compute the three-dimensional position of the target control: having obtained the coordinates (x_1, y_1) of the target control in the physical-world touch screen coordinate system, construct two orthogonal unit vectors u and v in the camera-coordinate-system touch screen three-dimensional space model built in step 3.2, where u and v are the unit vectors along the top edge and the left edge of the screen in the model; then, taking the coordinate p of the top-left vertex of the touch screen in the model as the origin, compute the coordinates p_c = (x_c, y_c, z_c) of the target control in the camera coordinate system from u, v, and p:
p_c = p + x_1 · u + y_1 · v.
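The positioning of claim 4 amounts to a scale change followed by a frame change; the sketch below walks through it with assumed image sizes, screen dimensions, and edge vectors.

```python
import numpy as np

h_p, w_p = 1200.0, 720.0     # front-view image height/width (px, assumed)
h_r, w_r = 0.140, 0.065      # physical screen height/width (m, assumed)
x0, y0 = 360.0, 600.0        # matched control centre in the front-view image (px)

# Step 4.1: scale into the physical-world touch screen coordinate system.
x1 = x0 * (w_r / w_p)
y1 = y0 * (h_r / h_p)

# Step 4.2: map into the camera frame along the screen edge unit vectors.
p = np.array([0.02, -0.05, 0.48])                            # top-left screen vertex
u = np.array([0.999, 0.020, 0.040]); u /= np.linalg.norm(u)  # top edge direction
v = np.array([0.010, 0.998, -0.060]); v /= np.linalg.norm(v) # left edge direction

p_c = p + x1 * u + y1 * v    # target control position in the camera coordinate system
```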
5. The three-dimensional space touch screen equipment visualization test method based on a multi-axis mechanical arm according to claim 4, wherein the touch motion planning algorithm of step 6 decomposes the action type of one script instruction into several basic motion steps of the multi-axis mechanical arm, as follows:
step 6.1, take the point at distance D_1 from the touch position, directly above it along the touch screen normal, as the first target position t_1 of the motion, and take the touch position itself as the second target position t_2;
step 6.2, plan the third target position according to the action type: for clicking and long pressing, the third target position coincides with the first, the multi-axis mechanical arm returning there after reaching the touch point, so the movement sequence for click and long-press actions is t_1 → t_2 → t_1;
step 6.3, for sliding, move the distance D_2 along the sliding direction to obtain the third target position t_3, then take the point at distance D_1 directly above t_3 as the fourth target position t_4, the final target position of the sliding motion; the movement sequence is t_1 → t_2 → t_3 → t_4;
step 6.4, once all target positions of an action are known, determine the posture the stylus at the end of the multi-axis mechanical arm should assume on reaching each target position; the stylus posture is represented by Euler angles (α, β, γ), where α, β, and γ are its rotation, in radians, about the x-, y-, and z-axes of the robot coordinate system respectively; for basic motion steps whose target position is not on the touch screen, the pen tip posture is set perpendicular to the touch screen, pointing inward; for basic motion steps whose target position is on the touch screen, divide the touch screen vertically into k regions and determine which region contains the target position, setting the pen tip posture to a different value for points in different regions: for regions close to the multi-axis mechanical arm, the pen tip posture is perpendicular to the touch screen pointing inward, while for regions farther from the arm, the angle between the vector of the pen tip posture and the touch screen plane is gradually reduced, computed as follows:
(the specific attitude formulas appear as images in the granted claim and are not reproduced here)
where n is the normal vector of the touch screen plane in the robot coordinate system, n_proj is the projection of n onto the z = 0 plane, and z is the unit vector in the positive z-axis direction of the robot coordinate system; if the touch position is located in region D_i, the pen tip attitude value is set by the corresponding formula, where α_0 is the rotation angle of the stylus about the x-axis of the robot coordinate system when the stylus is perpendicular to the screen, and β_0 is the rotation angle of the stylus about the z-axis of the robot coordinate system when the stylus is perpendicular to the screen;
step 6.5, with the coordinates of each target position and the corresponding pen tip posture on arrival, compute a motion path from the multi-axis mechanical arm's current position to the target position with a path planning algorithm; the path consists of waypoints, and for each waypoint an inverse kinematics algorithm computes the angle to which every joint axis should rotate when the arm passes through that point, one waypoint corresponding to one set of joint angle values; the joint angle values for each waypoint are sent in sequence to the multi-axis mechanical arm's low-level drive, and the arm reaches the target position along the motion path in the specified posture, completing one movement;
step 6.6, before executing the next test action, the multi-axis mechanical arm performs an avoidance move: a fixed avoidance posture is defined for the arm, which it automatically assumes after every complete action; in this posture the arm neither blocks the depth camera's view of the touch screen nor strays far from the tested equipment, allowing a faster response when the next action is executed; meanwhile, during the path planning of step 6.5, the tested equipment is avoided as an obstacle.
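For steps 6.1-6.3 above, a small sketch of how a tap and a swipe decompose into target positions along the screen normal; D_1, D_2, the normal, and the positions are assumed values.

```python
import numpy as np

D1, D2 = 0.03, 0.05          # hover height above the screen / swipe length (m, assumed)
n = np.array([0.0, -0.2, 0.98])
n /= np.linalg.norm(n)       # unit normal of the touch screen plane, robot frame

def tap_sequence(touch):
    """Click / long press: t1 -> t2 -> t1 (steps 6.1-6.2)."""
    t1 = touch + D1 * n      # first target: hover point above the touch position
    return [t1, touch, t1]

def swipe_sequence(touch, direction):
    """Slide: t1 -> t2 -> t3 -> t4 (steps 6.1 and 6.3)."""
    direction = direction / np.linalg.norm(direction)
    t1 = touch + D1 * n
    t3 = touch + D2 * direction   # slide endpoint on the screen plane
    t4 = t3 + D1 * n              # lift-off point above the endpoint
    return [t1, touch, t3, t4]
```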
CN202110734355.3A 2021-06-30 2021-06-30 Three-dimensional space touch screen equipment visualization test method based on multi-axis mechanical arm Active CN113504063B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110734355.3A CN113504063B (en) 2021-06-30 2021-06-30 Three-dimensional space touch screen equipment visualization test method based on multi-axis mechanical arm

Publications (2)

Publication Number Publication Date
CN113504063A CN113504063A (en) 2021-10-15
CN113504063B true CN113504063B (en) 2022-10-21

Family

ID=78009449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110734355.3A Active CN113504063B (en) 2021-06-30 2021-06-30 Three-dimensional space touch screen equipment visualization test method based on multi-axis mechanical arm

Country Status (1)

Country Link
CN (1) CN113504063B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114393576B (en) * 2021-12-27 2024-09-10 江苏明月智能科技有限公司 Method and system for clicking and calibrating position of four-axis mechanical arm based on artificial intelligence
CN114543669B (en) * 2022-01-27 2023-08-01 珠海亿智电子科技有限公司 Mechanical arm calibration method, device, equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104180753A (en) * 2014-07-31 2014-12-03 东莞市奥普特自动化科技有限公司 Rapid calibration method of robot visual system
CN111113414A (en) * 2019-12-19 2020-05-08 长安大学 Robot three-dimensional space scale prompting method and system based on screen identification

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5437992B2 (en) * 2007-04-13 2014-03-12 ナイキ インターナショナル リミテッド Visual ability inspection device and inspection method
JP6373722B2 (en) * 2014-10-29 2018-08-15 京セラ株式会社 Portable terminal and control method
CN105547119B (en) * 2015-12-15 2018-07-06 中国矿业大学 A kind of planar robot's method for detecting position and system based on electric resistance touch screen
CN205899515U (en) * 2016-06-28 2017-01-18 深圳市智致物联科技有限公司 Touch -sensitive screen test equipment
CN110238845B (en) * 2019-05-22 2021-12-10 湖南视比特机器人有限公司 Automatic hand-eye calibration method and device for optimal calibration point selection and error self-measurement
CN110619630B (en) * 2019-09-10 2023-04-07 南京知倍信息技术有限公司 Mobile equipment visual test system and test method based on robot
CN112306890B (en) * 2020-11-23 2024-01-23 国网北京市电力公司 Man-machine interaction test system, control method, control device and processor
CN112836603B (en) * 2021-01-21 2024-04-05 南京航空航天大学 Robot-based touch screen equipment rapid exploration testing method


Similar Documents

Publication Publication Date Title
Ong et al. Augmented reality-assisted robot programming system for industrial applications
US11440179B2 (en) System and method for robot teaching based on RGB-D images and teach pendant
CN111695562B (en) Autonomous robot grabbing method based on convolutional neural network
US10751877B2 (en) Industrial robot training using mixed reality
RU2700246C1 (en) Method and system for capturing an object using a robot device
CN113504063B (en) Three-dimensional space touch screen equipment visualization test method based on multi-axis mechanical arm
US9880553B1 (en) System and method for robot supervisory control with an augmented reality user interface
US10864633B2 (en) Automated personalized feedback for interactive learning applications
CN110900581A (en) Four-degree-of-freedom mechanical arm vision servo control method and device based on RealSense camera
WO2021218542A1 (en) Visual perception device based spatial calibration method and apparatus for robot body coordinate system, and storage medium
WO2022021156A1 (en) Method and apparatus for robot to grab three-dimensional object
JP2010042466A (en) Robot teaching system and method for displaying simulation result of operation of robot
US10437342B2 (en) Calibration systems and methods for depth-based interfaces with disparate fields of view
Zhang et al. Robot programming by demonstration: A novel system for robot trajectory programming based on robot operating system
CN112657176A (en) Binocular projection man-machine interaction method combined with portrait behavior information
CN110619630A (en) Mobile equipment visual test system and test method based on robot
Rodrigues et al. Robot trajectory planning using OLP and structured light 3D machine vision
Gong et al. Projection-based augmented reality interface for robot grasping tasks
CN210361314U (en) Robot teaching device based on augmented reality technology
Frank et al. Towards teleoperation-based interactive learning of robot kinematics using a mobile augmented reality interface on a tablet
Sreenath et al. Monocular tracking of human hand on a smart phone camera using mediapipe and its application in robotics
Khalil et al. Visual monitoring of surface deformations on objects manipulated with a robotic hand
CN114581632A (en) Method, equipment and device for detecting assembly error of part based on augmented reality technology
Barber et al. Sketch-based robot programming
CN113807191B (en) Non-invasive visual test script automatic recording method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant