CN111113414B - Robot three-dimensional space scale prompting method and system based on screen identification - Google Patents

Robot three-dimensional space scale prompting method and system based on screen identification

Info

Publication number
CN111113414B
CN111113414B CN201911316517.0A
Authority
CN
China
Prior art keywords
virtual
screen
robot model
tail end
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911316517.0A
Other languages
Chinese (zh)
Other versions
CN111113414A (en
Inventor
柳有权
陆勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changan University
Original Assignee
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changan University filed Critical Changan University
Priority to CN201911316517.0A priority Critical patent/CN111113414B/en
Publication of CN111113414A publication Critical patent/CN111113414A/en
Application granted granted Critical
Publication of CN111113414B publication Critical patent/CN111113414B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/1605Simulation of manipulator lay-out, design, modelling of manipulator

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a robot three-dimensional space scale prompting method and system based on screen identification. First, an interactive system is established: the adjusted virtual robot model is imported into three-dimensional game engine software, and a virtual camera and an operation interface are added. Next, a screen identifier is designed in the three-dimensional game engine software and bound at the tail end of the mechanical arm of the virtual robot model as a prompt of the spatial scale. Finally, a visual angle scale of the virtual camera is designed in the operation interface; the tail end coordinates of the mechanical arm of the virtual robot model are converted into mechanical arm joint angles by inverse kinematics, with the Euclidean distance between the virtual camera and the robot added to the inverse kinematics calculation; and the Euclidean distance between the tail end of the mechanical arm of the virtual robot model in space and the homogeneous coordinates of the tail end of the screen identifier is calculated and displayed at the tail end of the screen identifier. The invention is novel and reasonable in design, convenient to control and effective in use, facilitates accurate operation of the robot, and can be applied to the field of robot control.

Description

Robot three-dimensional space scale prompting method and system based on screen identification
Technical Field
The invention belongs to the field of human-computer interaction combined with virtual reality technology, and relates to a robot three-dimensional space scale prompting method and system based on screen identification.
Background
With the rapid development of robot technology, robots have been widely applied to industrial production and have gradually begun to replace human labour in many fields, transforming some labour-intensive industries. Currently, the most widely used robots are industrial robots, such as painting and handling robots in automobile workshops, space maintenance robots in the aerospace field, and teleoperation robots in the medical field. However, maintenance and operation robots must offer the operator flexible control, and especially when image data of the operation site cannot be obtained, achieving accurate control of the robot becomes a serious problem. Meanwhile, because the observation direction is uncertain, the observation view angle and the motion of the hand controller lack coordination: the motion direction of the controller is inconsistent with the visually perceived motion direction, which makes control difficult.
Current mainstream solutions for hand-eye coordination include: training a neural network so that the robot learns hand-eye coordinated operation, whose drawback is that training the network requires a large amount of training data; and dynamically adjusting the position of the robot hand according to feedback from the robot's vision sensor, which is also common, but limited real-time performance and the inability to switch the view angle at will are the main factors restricting this method.
Disclosure of Invention
Aiming at the defects and shortcomings in the prior art, the invention provides a robot three-dimensional space scale prompting method based on screen identification, so as to realize accurate control of the robot.
In order to achieve the purpose, the invention adopts the following technical scheme:
a robot three-dimensional space scale prompting method based on screen identification comprises the following steps:
step 1, importing and adjusting an original virtual robot model to obtain a virtual robot model, then importing the virtual robot model into three-dimensional game engine software, placing the virtual robot model at a world coordinate system origin in the three-dimensional game engine software, and adding a virtual camera and an operation interface in the three-dimensional game engine software;
step 2, importing a slider control into three-dimensional game engine software, drawing a screen identification original drawing and dragging the screen identification original drawing into the slider control to obtain a screen identification, and binding the screen identification at the tail end of a mechanical arm of the virtual robot model as a prompt of a spatial scale;
step 3, importing a Slider control into the three-dimensional game engine software as a display control of the virtual camera visual angle scale, and adding 'near' and 'far' marks at the two ends of the Slider control;
step 4, the scale slider of the virtual camera slides with the mouse wheel, and the distance between the virtual camera and the origin in the three-dimensional game engine software changes accordingly; an empty object is set at the origin of the world coordinate system in the three-dimensional game engine software, and the distance between the virtual camera and the empty object is changed by scrolling the mouse wheel; the Euclidean distance between the virtual camera and the empty object in three-dimensional space is calculated in real time and assigned to the value attribute of the Slider control;
step 5, obtaining the coordinates of the tail end of the mechanical arm of the virtual robot model in the screen space, and calculating the coordinates of the tail end of the prompt line in the screen space, wherein the coordinates of the tail end of the mechanical arm in the screen space are taken as a starting point, and the tail end of the screen mark is taken as an end point;
step 6, converting the coordinates of the tail end of the screen identifier in the screen into a three-dimensional space to obtain homogeneous coordinates of the point in the three-dimensional space;
step 7, acquiring three-dimensional point coordinates of the tail end of the mechanical arm of the virtual robot model in the space, and calculating the Euclidean distance between the homogeneous coordinate acquired in the step 6 and the three-dimensional point coordinates of the tail end of the mechanical arm in the space;
step 8, adding the Euclidean distance from step 4 into the calculation of the inverse kinematics offset, and converting the mechanical arm tail end coordinates of the virtual robot model into mechanical arm joint angles of the virtual robot model by using inverse kinematics; adjusting the visual angle of the virtual camera and the distance between the virtual camera and the virtual robot model with the mouse so that the distance calculated in step 7 is consistent with the expected moving distance, and controlling the mechanical arm motion of the virtual robot model with the mechanical arm joint angles of the virtual robot model;
and 9, displaying the distance calculated in the step 7 at the tail end of the screen mark, wherein the distance represents the distance moved by the length of the screen mark and corresponds to the distance moved by the tail end of the mechanical arm of the real robot in the actual space.
The invention also comprises the following technical characteristics:
specifically, the step 1 specifically includes the following steps:
step 1.1, establishing an interactive system, wherein the interactive system comprises a real robot, a computer for remotely controlling the real robot and an operating handle connected with the computer;
step 1.2, importing and adjusting an original virtual robot model: inputting a virtual robot model in a computer, installing three-dimensional modeling software, and performing axis adjustment operation on each joint of the original virtual robot model by using the three-dimensional modeling software to obtain the virtual robot model, so that the rotation directions of the joints of the virtual robot model and the real robot are consistent; the model of the original virtual robot model is consistent with that of the real robot;
step 1.3, importing a virtual robot model: installing three-dimensional game engine software in a computer, importing the virtual robot model obtained by the processing in the step 1.2 into the three-dimensional game engine software, and aligning the coordinate origin of the virtual robot model with the world coordinate system origin in the three-dimensional game engine software;
step 1.4, adding a virtual camera and an operation interface: adding a virtual camera in the three-dimensional game engine software as a global observation camera so that the virtual camera directly faces the virtual robot model; an operation interface is added in the three-dimensional game engine software, and three check boxes are arranged above the operation interface to switch between different visual angles.
Specifically, the adding of the virtual camera as the global observation camera in step 1.4 specifically includes:
a virtual camera is added in the three-dimensional game engine software so that it directly faces the virtual robot model, and the camera is always oriented toward the origin of the world coordinate system, so that the visual angle of the virtual camera can be switched at will.
Specifically, the step 2 specifically includes the following steps:
step 2.1, two Slider controls are imported into the three-dimensional game engine software as the basic controls of the X axis and the Y axis respectively, and the orientation of the two Slider controls is adjusted so that the X axis and the Y axis form a two-dimensional coordinate system; the FillArea attribute of the Slider control is hidden;
step 2.2, drawing the screen identification original image: drawing original pictures of screen marks in drawing software, wherein the colors of the original pictures are red, arrows are added to two ends of the screen marks, and the original pictures are stored as image files;
step 2.3, dragging the screen identification original image drawn in the step 2.2 to a Slider control corresponding to the X axis and the Y axis to be used as a background image, and obtaining a screen identification;
and 2.4, binding the screen identifier obtained in the step 2.3 at the tail end of the mechanical arm of the virtual robot model.
Specifically, scales are marked on the X axis and the Y axis of the screen identifier, and a numeric display is set at the end point of each coordinate axis; the number represents the distance that the tail end of the mechanical arm of the virtual robot model moves in screen space, corresponding to the distance that the tail end of the mechanical arm of the real robot moves in the real three-dimensional space; the transparency of the screen identifier is set to semi-transparent and its color is set to red.
Specifically, the step 8 of converting the mechanical arm end coordinates of the virtual robot model into the mechanical arm joint angle of the virtual robot model by using inverse kinematics includes:
in the conversion process from the camera space to the robot space, the terminal coordinates of the mechanical arm of the virtual robot model are converted into the joint angle of the mechanical arm of the virtual robot model by using inverse kinematics, the obtained joint angle is used for driving the terminal of the mechanical arm of the virtual robot model to move, and the Euclidean distance between the virtual camera and the virtual robot model is added into the solving process of the inverse kinematics, so that the movement amplitude of the terminal of the mechanical arm of the virtual robot model is related to the distance between the virtual camera and the virtual robot model, and the closer the distance is, the smaller the movement amplitude of the terminal of the mechanical arm of the virtual robot model is.
A robot three-dimensional space scale prompt system based on screen identification comprises:
the import module is used for importing and adjusting an original virtual robot model to obtain a virtual robot model, then importing the virtual robot model into three-dimensional game engine software, placing the virtual robot model at the world coordinate system origin in the three-dimensional game engine software, and adding a virtual camera and an operation interface in the three-dimensional game engine software;
the screen identification drawing and binding module is used for importing a slider control into three-dimensional game engine software, drawing a screen identification original image and dragging the screen identification original image into the slider control to obtain a screen identification, and binding the screen identification at the tail end of a mechanical arm of the virtual robot model as a prompt of a spatial scale;
the virtual camera visual angle scale design module is used for importing a Slider control into the three-dimensional game engine software as the display control of the virtual camera visual angle scale, and adding distance marks at the two ends of the Slider control;
the distance control and calculation module of the virtual camera and the origin of the world coordinate system is used for enabling the scale slider of the virtual camera to slide with the mouse wheel, the distance between the virtual camera and the origin in the three-dimensional game engine software changing accordingly; setting the empty object at the origin of the world coordinate system in the three-dimensional game engine software, and changing the distance between the virtual camera and the empty object by scrolling the mouse wheel; calculating the Euclidean distance between the virtual camera and the empty object in three-dimensional space in real time, and assigning the distance to the value attribute of the Slider control; because the origin itself cannot be selected, the empty object serves as a selectable marker of the world coordinate system origin for this calculation.
The coordinate calculation module of the tail end of the screen identifier in the screen space is used for acquiring the coordinates of the tail end of the mechanical arm of the virtual robot model in the screen space, and calculating the coordinates of the tail end of the prompt line in the screen space, wherein the coordinates of the tail end of the mechanical arm in the screen space are taken as a starting point, and the tail end of the screen identifier is taken as an end point;
the screen identification terminal coordinate conversion module is used for converting the coordinates of the screen identification terminal in the screen into a three-dimensional space to obtain homogeneous coordinates of the point in the three-dimensional space;
the screen identification tail end and mechanical arm tail end distance calculation module of the virtual robot model is used for acquiring three-dimensional point coordinates of the mechanical arm tail end of the virtual robot model in the space and calculating the Euclidean distance between the homogeneous coordinate and the three-dimensional point coordinates of the mechanical arm tail end in the space;
the virtual camera visual angle and distance adjusting and coordinate converting module is used for adding the Euclidean distance between the virtual camera and the empty object in three-dimensional space into the calculation of the inverse kinematics offset, and converting the mechanical arm tail end coordinates of the virtual robot model into the mechanical arm joint angles of the virtual robot model by using inverse kinematics; adjusting the visual angle of the virtual camera and the distance between the virtual camera and the virtual robot model with the mouse so that the distance calculated by the preceding module is consistent with the expected moving distance, so that the mechanical arm joint angles of the virtual robot model are used to control the mechanical arm motion of the virtual robot model;
and the moving distance display module is used for displaying, at the tail end of the screen identifier, the Euclidean distance between the homogeneous coordinates of the screen identifier tail end in three-dimensional space and the three-dimensional point coordinates of the mechanical arm tail end in space, wherein the distance represents the distance moved over the length of the screen identifier and corresponds to the distance moved by the tail end of the mechanical arm of the real robot in real space.
Compared with the prior art, the invention has the beneficial technical effects that:
1. The three-dimensional space scale prompting method is novel in design, exploiting the flexibility of operation in a virtual space in combination with a virtual camera.
2. No additional hardware is required, so the cost is very low.
3. An interactive mode based on the screen identifier is provided, which improves operation accuracy and offers strong operability.
4. The view angle and distance of the camera can be changed at will with the mouse wheel, intuitively reflecting the operator's operating state and providing a more user-friendly operation aid.
5. Combined with synchronous operation of the real robot, the virtual robot and the real robot can execute the same instruction code in parallel, giving strong extensibility and improved operation safety.
In conclusion, the invention is novel and reasonable in design, low in cost, high in operation precision, strong in operability and good in extensibility; it combines the operation of the virtual robot with that of the real robot, allows the observation view angle to be switched at will, and increases operation safety.
Drawings
FIG. 1 is a block diagram of an interactive system of the present invention.
FIG. 2 is a flow chart of the overall scheme of the present invention.
FIG. 3 is a block diagram of the spatial scale transformation process in step 3 of the method of the present invention;
fig. 4 is a schematic diagram of scale indication of the present invention, (a) is a virtual robot model with a screen identifier, and (b) is a schematic diagram after the screen identifier is enlarged, wherein a number represents a moving distance in a real space, and a coordinate axis length represents a moving distance in a screen space.
Detailed Description
As shown in fig. 1 to 4, a robot three-dimensional space scale prompting method based on screen identification includes the following steps:
step 1, as shown in fig. 1, establishing an interactive system, importing an original virtual robot model, adjusting the original virtual robot model to obtain a virtual robot model, importing the virtual robot model into three-dimensional game engine software, and adding a virtual camera and an operation interface into the three-dimensional game engine software;
step 1.1, establishing an interactive system, wherein the interactive system comprises a real robot (in this embodiment the real robot is of model ABB14000), a computer for remotely controlling the real robot and an operating handle connected with the computer; the real robot system is adjusted to a reset state and connected to the computer through a network cable, and the operating handle is connected to the computer through a USB cable;
step 1.2, importing and adjusting the original virtual robot model: importing the virtual robot model (in this embodiment of model ABB14000, matching the real robot) into the computer, installing three-dimensional modeling software (such as 3Dmax), and performing an axis adjustment operation on each joint of the original virtual robot model with the three-dimensional modeling software to obtain the virtual robot model, so that the joint rotation directions of the virtual robot model and the real robot are consistent; the model of the original virtual robot model is consistent with that of the real robot; in this embodiment, 3Dmax is used to set the parent-child relationships of the existing model and the rotation axis of each mechanical arm joint so that the joint rotation directions of the virtual robot and the real robot are consistent, a polygon-reduction operation is performed on the model to prevent the model file from becoming too large and hindering operation, and the model is exported in the FBX file format;
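The axis adjustment in step 1.2 matters because the same joint angles drive two chains with different joint rotation axes to different end positions. The sketch below illustrates this with a generic serial kinematic chain; the link lengths, joint axes and angles are placeholder values for illustration only, not parameters of the ABB14000 model, and the code is not part of the patented workflow.

```python
# Illustrative sketch only: a serial kinematic chain whose joints rotate about
# configurable local axes. The values below are placeholders; the point is that
# the end position depends on the chosen rotation axes, which is why the
# virtual joints must be adjusted to match the real robot.
import numpy as np

def axis_angle_matrix(axis, angle):
    """Rotation matrix for a rotation of `angle` radians about unit vector `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    x, y, z = axis
    c, s = np.cos(angle), np.sin(angle)
    C = 1.0 - c
    return np.array([
        [c + x * x * C,     x * y * C - z * s, x * z * C + y * s],
        [y * x * C + z * s, c + y * y * C,     y * z * C - x * s],
        [z * x * C - y * s, z * y * C + x * s, c + z * z * C],
    ])

def forward_kinematics(joint_axes, link_offsets, joint_angles):
    """Compose parent-to-child transforms down the chain and return the end position."""
    position = np.zeros(3)
    rotation = np.eye(3)
    for axis, offset, angle in zip(joint_axes, link_offsets, joint_angles):
        rotation = rotation @ axis_angle_matrix(axis, angle)          # rotate about the joint's local axis
        position = position + rotation @ np.asarray(offset, dtype=float)  # advance along the link
    return position

# Two chains that differ only in one joint axis end up in different places for
# the same joint angles -- the mismatch the axis-adjustment step prevents.
axes_a = [(0, 0, 1), (0, 1, 0), (0, 1, 0)]
axes_b = [(0, 0, 1), (1, 0, 0), (0, 1, 0)]
links = [(0, 0, 0.3), (0, 0, 0.25), (0, 0, 0.2)]
angles = [0.4, -0.6, 0.3]
print(forward_kinematics(axes_a, links, angles))
print(forward_kinematics(axes_b, links, angles))
```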
step 1.3, importing the virtual robot model: installing three-dimensional game engine software (such as the Unity game engine) in the computer, importing the virtual robot model obtained in step 1.2 into the three-dimensional game engine software, and aligning the coordinate origin of the virtual robot model with the world coordinate system origin in the three-dimensional game engine software; the coordinate scale of the robot model is adjusted to match the size of the real robot, and illumination is added to the virtual scene so that the virtual robot looks more realistic and clear;
step 1.4, adding a virtual camera and an operation interface: adding a virtual camera in the three-dimensional game engine software as a global observation camera so that the virtual camera directly faces the virtual robot model; an operation interface is added in the three-dimensional game engine software, and three check boxes are arranged above the operation interface to switch between different visual angles. Adding the virtual camera as the global observation camera specifically means: a virtual camera is added in the three-dimensional game engine software so that it directly faces the virtual robot model, and the camera is always oriented toward the origin of the world coordinate system, so that the visual angle of the virtual camera can be switched at will. In this embodiment, the operation interface is added using the Canvas component of Unity; three check boxes are added to the two-dimensional operation interface, each representing a preset camera view angle, and two buttons are then added, representing reset and sending instructions to the real robot system, respectively.
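The "camera always oriented toward the world origin" behaviour of step 1.4 amounts to standard look-at mathematics: the camera sits somewhere on a sphere around the origin and its forward axis is the unit vector from its position back to the origin. A minimal sketch of that math follows; the spherical angles, radius and helper names are illustrative assumptions, not engine API calls.

```python
# Sketch of the "camera always faces the world origin" behaviour described above.
# Plain look-at math, not the game-engine component; the example values are arbitrary.
import numpy as np

def camera_position(radius, azimuth, elevation):
    """Camera placed on a sphere of the given radius around the world origin."""
    return np.array([
        radius * np.cos(elevation) * np.cos(azimuth),
        radius * np.sin(elevation),
        radius * np.cos(elevation) * np.sin(azimuth),
    ])

def look_at_origin(eye, world_up=(0.0, 1.0, 0.0)):
    """Orthonormal camera basis (right, up, forward) with forward pointing at the origin."""
    forward = -eye / np.linalg.norm(eye)          # from the camera toward the origin
    right = np.cross(forward, world_up)
    right = right / np.linalg.norm(right)
    up = np.cross(right, forward)
    return right, up, forward

eye = camera_position(radius=2.5, azimuth=0.8, elevation=0.4)
right, up, forward = look_at_origin(eye)
print(eye, forward)  # forward is the unit vector from the camera to the origin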
Step 2, importing a Slider control into the three-dimensional game engine software, drawing the screen identifier original image and dragging it into the Slider control to obtain the screen identifier, and binding the screen identifier at the tail end of the mechanical arm of the virtual robot model as a prompt of the spatial scale; in this embodiment, scales are marked on the X axis and the Y axis of the screen identifier, and a numeric display is arranged at the end point of each coordinate axis; the number represents the distance that the tail end of the mechanical arm of the virtual robot model moves in screen space, corresponding to the distance that the tail end of the mechanical arm of the real robot moves in the actual three-dimensional space; so that the operator can observe the change of the number without affecting the operation, the transparency of the screen identifier is set to semi-transparent and its color is set to red. The screen identifier resembles a two-dimensional coordinate system, except that the on-screen length of each axis is fixed and does not change with the camera distance, while the number at the end point of each axis does change with the camera distance.
Step 2.1, two Slider controls are imported into the three-dimensional game engine software as the basic controls of the X axis and the Y axis respectively, and the orientation of the two Slider controls is adjusted so that the X axis and the Y axis form a two-dimensional coordinate system; the FillArea attribute of the Slider control is hidden;
step 2.2, drawing the screen identification original image: drawing original pictures of screen marks in drawing software, wherein the colors of the original pictures are red, arrows are added to two ends of the screen marks, and the original pictures are stored as image files;
step 2.3, dragging the screen identification original image drawn in the step 2.2 to a Slider control corresponding to the X axis and the Y axis to be used as a background image, and obtaining a screen identification;
and 2.4, because the attention of the operator is mainly focused on the tail end of the mechanical arm during operation, the screen identifier obtained in step 2.3 is bound at the tail end of the mechanical arm of the virtual robot model; because the parent object of the screen identifier is the tail end of the mechanical arm, switching the visual angle or operating the mechanical arm does not detach the identifier from that position.
Steps 3 to 9: designing a visual angle scale of the virtual camera in the operation interface, controlling the distance of the virtual camera with the mouse wheel, converting the mechanical arm tail end coordinates of the virtual robot model into mechanical arm joint angles of the virtual robot model by using inverse kinematics with the Euclidean distance between the virtual camera and the empty object in three-dimensional space added to the calculation, calculating the Euclidean distance between the three-dimensional point coordinates of the mechanical arm tail end of the virtual robot model in space and the homogeneous coordinates of the screen identifier tail end in three-dimensional space, and displaying the distance at the screen identifier tail end;
step 3, designing the virtual camera visual angle scale: importing a Slider control into the three-dimensional game engine software as the display control of the virtual camera visual angle scale, and adding distance marks at the two ends of the Slider control;
step 4, controlling the distance of the virtual camera with the mouse wheel: the scale slider of the virtual camera slides with the mouse wheel, and the distance between the virtual camera and the origin in the three-dimensional game engine software changes accordingly; the distance between the virtual camera and the empty object is changed by scrolling the mouse wheel; the Euclidean distance between the virtual camera and the empty object in three-dimensional space is calculated in real time and assigned to the value attribute of the Slider control; because the origin itself cannot be selected, the empty object serves as a selectable marker of the world coordinate system origin for this calculation.
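Step 4 reduces to two small computations: moving the camera along its viewing ray toward or away from the empty object by a wheel-dependent step, and mapping the resulting Euclidean distance onto the value range of the scale control. A sketch under assumed distance limits and wheel step (the patent gives no concrete values):

```python
# Sketch of step 4: wheel scrolls change the camera-to-origin distance, and the
# Euclidean distance is renormalized into a 0..1 slider value. The distance
# limits and step size are assumptions for illustration.
import numpy as np

MIN_DIST, MAX_DIST = 0.5, 6.0   # assumed camera distance limits (metres)
WHEEL_STEP = 0.25               # assumed distance change per wheel notch

def scroll_camera(camera_pos, origin, wheel_delta):
    """Move the camera along the ray to the empty object at `origin` by the wheel amount."""
    offset = camera_pos - origin
    distance = np.linalg.norm(offset)
    direction = offset / distance
    new_distance = np.clip(distance - wheel_delta * WHEEL_STEP, MIN_DIST, MAX_DIST)
    return origin + direction * new_distance

def slider_value(camera_pos, origin):
    """Normalize the camera/empty-object Euclidean distance into the slider's 0..1 value."""
    distance = np.linalg.norm(camera_pos - origin)
    return (distance - MIN_DIST) / (MAX_DIST - MIN_DIST)

origin = np.zeros(3)
cam = np.array([0.0, 1.5, 3.0])
cam = scroll_camera(cam, origin, wheel_delta=+2)   # two notches toward the robot
print(np.linalg.norm(cam - origin), slider_value(cam, origin))
```

Whether the raw distance or a normalized value is written to the control depends on the slider's configured range; the normalization here is one possible choice.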
Step 5, obtaining the coordinates of the tail end of the mechanical arm of the virtual robot model in screen space, and calculating the coordinates of the tail end of the prompt line in screen space, with the screen-space coordinates of the tail end of the mechanical arm as the starting point and the tail end of the screen identifier as the end point; in this embodiment, this step corresponds to fig. 3: the coordinate a of the mechanical arm tail end point A in screen space is obtained, a prompt line of fixed length is drawn with a as its starting point, and the coordinate b of the tail end of the prompt line in the screen is calculated;
step 6, converting the coordinates of the tail end of the screen identifier in the screen into three-dimensional space to obtain the homogeneous coordinates of the point in three-dimensional space; in this embodiment, this step corresponds to fig. 3: b is converted from screen space into three-dimensional space to obtain point B;
step 7, acquiring the three-dimensional point coordinates of the tail end of the mechanical arm of the virtual robot model in space, and calculating the Euclidean distance between the homogeneous coordinates obtained in step 6 and the three-dimensional point coordinates of the tail end of the mechanical arm in space; in this embodiment, this step corresponds to fig. 3: the three-dimensional coordinates of the mechanical arm tail end point A are obtained, and the Euclidean distance D between the three-dimensional space points A and B is calculated;
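Steps 5 to 7 together form the conversion of fig. 3: project the three-dimensional arm end point A to its screen point a, step a fixed prompt-line length in screen space to obtain b, lift b back into three-dimensional space at the same depth as A to obtain B, and take D as the Euclidean distance between A and B. Below is a self-contained sketch of that pipeline with an explicit view and projection matrix; the camera parameters, screen resolution and the 80-pixel prompt-line length are illustrative assumptions, not values from the patent.

```python
# Sketch of steps 5-7 (the conversion in fig. 3): A -> a -> b -> B -> D.
import numpy as np

WIDTH, HEIGHT = 1280, 720
PROMPT_LEN_PX = 80.0                      # assumed fixed on-screen length of the identifier axis

def perspective(fov_y_deg, aspect, near, far):
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ])

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    eye, target, up = map(np.asarray, (eye, target, up))
    fwd = target - eye
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, fwd)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = right, true_up, -fwd
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def world_to_screen(point, view_proj):
    clip = view_proj @ np.append(point, 1.0)
    ndc = clip[:3] / clip[3]                               # perspective divide
    sx = (ndc[0] + 1.0) * 0.5 * WIDTH
    sy = (ndc[1] + 1.0) * 0.5 * HEIGHT
    return np.array([sx, sy]), ndc[2]                      # pixel coords + NDC depth

def screen_to_world(pixel, ndc_depth, inv_view_proj):
    ndc = np.array([2.0 * pixel[0] / WIDTH - 1.0,
                    2.0 * pixel[1] / HEIGHT - 1.0,
                    ndc_depth, 1.0])
    homogeneous = inv_view_proj @ ndc                       # homogeneous world coordinates
    return homogeneous[:3] / homogeneous[3]

camera_eye = np.array([0.0, 1.2, 3.0])
view_proj = perspective(60.0, WIDTH / HEIGHT, 0.1, 100.0) @ look_at(camera_eye, (0, 0, 0))
inv_view_proj = np.linalg.inv(view_proj)

arm_end_A = np.array([0.35, 0.9, 0.1])                      # example 3D arm-end position
a_px, depth = world_to_screen(arm_end_A, view_proj)         # point a in fig. 3
b_px = a_px + np.array([PROMPT_LEN_PX, 0.0])                # prompt-line end b (fixed length)
B = screen_to_world(b_px, depth, inv_view_proj)             # point B lifted back at A's depth
D = np.linalg.norm(arm_end_A - B)                           # distance shown at the marker end
print(f"one identifier length corresponds to {D:.3f} units of real-space motion")
```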
step 8, converting the terminal coordinates of the mechanical arm of the virtual robot model into the joint angle of the mechanical arm of the virtual robot model by using inverse kinematics, adding the Euclidean distance in the step 4 into the calculation of the offset of the inverse kinematics, wherein the moving amplitude of the terminal of the mechanical arm of the virtual robot model depends on the distance between the virtual camera and the virtual robot model, and the closer the distance is, the smaller the moving amplitude of the terminal of the mechanical arm is; converting arm end coordinates of the virtual robot model into arm joint angles of the virtual robot model using inverse kinematics, comprising: in the conversion process from the camera space to the robot space, the mechanical arm tail end coordinate of the virtual robot model is converted into the mechanical arm joint angle of the virtual robot model by using inverse kinematics, the obtained joint angle is used for driving the mechanical arm tail end of the virtual robot model to move, and the Euclidean distance between the virtual camera and the virtual robot model is added into the inverse kinematics solving process, so that the mechanical arm tail end moving amplitude of the virtual robot model is related to the distance between the virtual camera and the virtual robot model, and the closer the distance is, the smaller the mechanical arm tail end moving amplitude of the virtual robot model is.
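Step 8 only changes where the commanded offset enters the inverse-kinematics solve: the Cartesian increment is scaled by the camera-to-robot distance before the target is handed to the solver, so a close-up view produces fine motion and a distant view produces coarse motion. The sketch below uses a planar two-link arm with an analytic solver as a stand-in for the real manipulator; the link lengths, the reference distance and the linear scaling law are assumptions for illustration.

```python
# Sketch of step 8: the commanded increment is scaled by the camera/robot distance
# before inverse kinematics, so closer camera -> smaller end-effector motion.
import numpy as np

L1, L2 = 0.4, 0.3                 # assumed link lengths (metres)
REFERENCE_CAMERA_DISTANCE = 2.0   # assumed distance at which the increment is passed through unscaled

def two_link_ik(target_xy):
    """Analytic inverse kinematics of a planar 2-link arm (one elbow solution)."""
    x, y = target_xy
    r2 = x * x + y * y
    cos_q2 = np.clip((r2 - L1 * L1 - L2 * L2) / (2.0 * L1 * L2), -1.0, 1.0)
    q2 = np.arccos(cos_q2)
    q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
    return np.array([q1, q2])

def forward(q):
    q1, q2 = q
    return np.array([L1 * np.cos(q1) + L2 * np.cos(q1 + q2),
                     L1 * np.sin(q1) + L2 * np.sin(q1 + q2)])

def step_end_effector(current_xy, controller_delta_xy, camera_distance):
    """Scale the commanded offset by camera distance, then solve IK for the new target."""
    scale = camera_distance / REFERENCE_CAMERA_DISTANCE   # closer camera -> smaller motion
    target = np.asarray(current_xy) + scale * np.asarray(controller_delta_xy)
    return two_link_ik(target)

start = forward(np.array([0.4, 1.4]))
for cam_dist in (0.5, 2.0, 4.0):
    q = step_end_effector(start, controller_delta_xy=(0.05, 0.0), camera_distance=cam_dist)
    moved = np.linalg.norm(forward(q) - start)
    print(f"camera distance {cam_dist:.1f} m -> end effector moved {moved:.3f} m")
```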
Step 9, displaying the calculated distance at the tail end of the screen identifier, wherein the distance represents how far the tail end moves when the operator moves it by one identifier length, and corresponds to the distance moved by the tail end of the mechanical arm of the real robot in actual space; in this embodiment, this step corresponds to fig. 3: D is displayed as a prompt at the tail end of the screen identifier.
Fig. 4 is a schematic diagram of scale hinting according to an embodiment of the present invention, (a) is a virtual robot model with a screen identifier, and (b) is a schematic diagram after the screen identifier is enlarged, where a number represents a moving distance in an actual space, and a length of a coordinate axis represents a moving distance in a screen space.
This embodiment also provides a robot three-dimensional space scale prompting system based on screen identification, comprising:
the import module is used for importing and adjusting an original virtual robot model to obtain a virtual robot model, then importing the virtual robot model into three-dimensional game engine software, placing the virtual robot model at the world coordinate system origin in the three-dimensional game engine software, and adding a virtual camera and an operation interface in the three-dimensional game engine software;
the screen identifier drawing and binding module is used for importing a Slider control into the three-dimensional game engine software, drawing the screen identifier original image and dragging it into the Slider control to obtain the screen identifier, and binding the screen identifier at the tail end of the mechanical arm of the virtual robot model as a prompt of the spatial scale;
the virtual camera visual angle scale design module is used for importing a Slider control into the three-dimensional game engine software as the display control of the virtual camera visual angle scale, and adding distance marks at the two ends of the Slider control;
the distance control and calculation module of the virtual camera and the origin of the world coordinate system is used for enabling the scale slider of the virtual camera to slide with the mouse wheel, the distance between the virtual camera and the origin in the three-dimensional game engine software changing accordingly; setting the empty object at the origin of the world coordinate system in the three-dimensional game engine software, and changing the distance between the virtual camera and the empty object by scrolling the mouse wheel; calculating the Euclidean distance between the virtual camera and the empty object in three-dimensional space in real time, and assigning the distance to the value attribute of the Slider control; because the origin itself cannot be selected, the empty object serves as a selectable marker of the world coordinate system origin for this calculation.
The coordinate calculation module of the tail end of the screen identifier in screen space is used for acquiring the coordinates of the tail end of the mechanical arm of the virtual robot model in screen space, and calculating the coordinates of the tail end of the prompt line in screen space, with the screen-space coordinates of the tail end of the mechanical arm as the starting point and the tail end of the screen identifier as the end point;
the screen identification terminal coordinate conversion module is used for converting the coordinates of the screen identification terminal in the screen into a three-dimensional space to obtain homogeneous coordinates of the point in the three-dimensional space;
the screen identification tail end and mechanical arm tail end distance calculation module of the virtual robot model is used for acquiring three-dimensional point coordinates of the mechanical arm tail end of the virtual robot model in the space and calculating the Euclidean distance between the homogeneous coordinate and the three-dimensional point coordinates of the mechanical arm tail end in the space;
the virtual camera visual angle and distance adjusting and coordinate converting module is used for adding the Euclidean distance between the virtual camera and the empty object in three-dimensional space into the calculation of the inverse kinematics offset, and converting the mechanical arm tail end coordinates of the virtual robot model into the mechanical arm joint angles of the virtual robot model by using inverse kinematics; adjusting the visual angle of the virtual camera and the distance between the virtual camera and the virtual robot model with the mouse so that the distance calculated by the preceding module is consistent with the expected moving distance, so as to control the mechanical arm motion of the virtual robot model with the mechanical arm joint angles of the virtual robot model;
and the moving distance display module is used for displaying, at the tail end of the screen identifier, the Euclidean distance between the homogeneous coordinates of the screen identifier tail end in three-dimensional space and the three-dimensional point coordinates of the mechanical arm tail end in space, wherein the distance represents the distance moved over the length of the screen identifier and corresponds to the distance moved by the tail end of the mechanical arm of the real robot in real space.

Claims (7)

1. A robot three-dimensional space scale prompting method based on screen identification is characterized by comprising the following steps:
step 1, importing and adjusting an original virtual robot model to obtain a virtual robot model, then importing the virtual robot model into three-dimensional game engine software, placing the virtual robot model at the origin of a world coordinate system in the three-dimensional game engine software, and adding a virtual camera and an operation interface in the three-dimensional game engine software;
step 2, importing a slider control into three-dimensional game engine software, drawing an original image of the screen identifier and dragging the original image into the slider control to obtain the screen identifier, and binding the screen identifier at the tail end of a mechanical arm of the virtual robot model as a prompt of a spatial scale;
step 3, importing a Slider control into the three-dimensional game engine software as a display control of the virtual camera visual angle scale, and adding 'near' and 'far' marks at the two ends of the Slider control;
step 4, the scale slider of the virtual camera slides with the mouse wheel, and the distance between the virtual camera and the origin in the three-dimensional game engine software changes accordingly; an empty object is set at the origin of the world coordinate system in the three-dimensional game engine software, and the distance between the virtual camera and the empty object is changed by scrolling the mouse wheel; the Euclidean distance between the virtual camera and the empty object in three-dimensional space is calculated in real time and assigned to the value attribute of the Slider control;
step 5, obtaining the coordinates of the tail end of the mechanical arm of the virtual robot model in the screen space, and calculating the coordinates of the tail end of the prompt line in the screen space, wherein the coordinates of the tail end of the mechanical arm in the screen space are taken as a starting point, and the tail end of the screen mark is taken as an end point;
step 6, converting the coordinates of the tail end of the screen identifier in the screen into a three-dimensional space to obtain homogeneous coordinates of the point in the three-dimensional space;
step 7, acquiring three-dimensional point coordinates of the tail end of the mechanical arm of the virtual robot model in the space, and calculating the Euclidean distance between the homogeneous coordinate acquired in the step 6 and the three-dimensional point coordinates of the tail end of the mechanical arm in the space;
step 8, adding the Euclidean distance from step 4 into the calculation of the inverse kinematics offset, and converting the mechanical arm tail end coordinates of the virtual robot model into the mechanical arm joint angles of the virtual robot model by using inverse kinematics; adjusting the visual angle of the virtual camera and the distance between the virtual camera and the virtual robot model with the mouse so that the distance calculated in step 7 is consistent with the expected moving distance, and controlling the mechanical arm motion of the virtual robot model with the mechanical arm joint angles of the virtual robot model;
and 9, displaying the distance calculated in the step 7 at the tail end of the screen mark, wherein the distance represents the distance moved by the length of the screen mark and corresponds to the distance moved by the tail end of the mechanical arm of the real robot in the actual space.
2. The screen identification-based robot three-dimensional space scale prompting method according to claim 1, wherein the step 1 specifically comprises the following steps:
step 1.1, establishing an interactive system, wherein the interactive system comprises a real robot, a computer for remotely controlling the real robot and an operating handle connected with the computer;
step 1.2, importing and adjusting an original virtual robot model: inputting a virtual robot model in a computer, installing three-dimensional modeling software, and performing axis adjustment operation on each joint of the original virtual robot model by using the three-dimensional modeling software to obtain the virtual robot model, so that the rotation directions of the virtual robot model and the joints of the real robot are consistent; the model of the original virtual robot model is consistent with that of the real robot;
step 1.3, importing a virtual robot model: installing three-dimensional game engine software in a computer, importing the virtual robot model obtained by the processing in the step 1.2 into the three-dimensional game engine software, and aligning the coordinate origin of the virtual robot model with the world coordinate system origin in the three-dimensional game engine software;
step 1.4, adding a virtual camera and an operation interface: adding a virtual camera in the three-dimensional game engine software as a global observation camera so that the virtual camera directly faces the virtual robot model; an operation interface is added in the three-dimensional game engine software, and three check boxes are arranged above the operation interface to switch between different visual angles.
3. The screen identification-based robot three-dimensional space scale prompting method of claim 2, wherein the adding of the virtual camera as a global observation camera in step 1.4 specifically comprises:
a virtual camera is added in the three-dimensional game engine software so that it directly faces the virtual robot model, and the camera is always oriented toward the origin of the world coordinate system, so that the visual angle of the virtual camera can be switched at will.
4. The screen identification-based robot three-dimensional space scale prompting method according to claim 3, wherein the step 2 specifically comprises the following steps:
step 2.1, two Slider controls are imported into the three-dimensional game engine software as the basic controls of the X axis and the Y axis respectively, and the orientation of the two Slider controls is adjusted so that the X axis and the Y axis form a two-dimensional coordinate system; the FillArea attribute of the Slider control is hidden;
step 2.2, drawing the screen identification original image: drawing original pictures of screen marks in drawing software, wherein the colors of the original pictures are red, arrows are added to two ends of each screen mark, and the original pictures are stored as image files;
step 2.3, dragging the screen identification original image drawn in the step 2.2 to a Slider control corresponding to the X axis and the Y axis to be used as a background image, and obtaining a screen identification;
and 2.4, binding the screen identifier obtained in the step 2.3 at the tail end of the mechanical arm of the virtual robot model.
5. The screen identification-based robot three-dimensional space scale prompting method of claim 4, wherein scales are marked on the X axis and the Y axis of the screen identifier, and a numeric display is set at the end point of each coordinate axis; the number represents the distance that the tail end of the mechanical arm of the virtual robot model moves in screen space, corresponding to the distance that the tail end of the mechanical arm of the real robot moves in the real three-dimensional space; and the transparency of the screen identifier is set to semi-transparent and its color is set to red.
6. The screen identification-based robot three-dimensional space scale prompting method according to claim 1, wherein the step 8 of converting the mechanical arm tail end coordinates of the virtual robot model into the mechanical arm joint angles of the virtual robot model by using inverse kinematics comprises:
in the conversion process from the camera space to the robot space, the mechanical arm tail end coordinate of the virtual robot model is converted into the mechanical arm joint angle of the virtual robot model by using inverse kinematics, the obtained joint angle is used for driving the mechanical arm tail end of the virtual robot model to move, and the Euclidean distance between the virtual camera and the virtual robot model is added into the inverse kinematics solving process, so that the mechanical arm tail end moving amplitude of the virtual robot model is related to the distance between the virtual camera and the virtual robot model, and the closer the distance is, the smaller the mechanical arm tail end moving amplitude of the virtual robot model is.
7. A robot three-dimensional space scale prompt system based on screen identification is characterized by comprising:
the import module is used for importing and adjusting the original virtual robot model to obtain a virtual robot model, then importing the virtual robot model into three-dimensional game engine software, placing the virtual robot model at the origin of a world coordinate system in the three-dimensional game engine software, and adding a virtual camera and an operation interface in the three-dimensional game engine software;
the screen identification drawing and binding module is used for importing a slider control into three-dimensional game engine software, drawing a screen identification original image and dragging the screen identification original image into the slider control to obtain a screen identification, and binding the screen identification at the tail end of a mechanical arm of the virtual robot model as a prompt of a spatial scale;
the virtual camera visual angle scale design module is used for importing a Slider control into the three-dimensional game engine software as the display control of the virtual camera visual angle scale, and adding distance marks at the two ends of the Slider control;
the distance control and calculation module of the virtual camera and the origin of the world coordinate system is used for enabling the scale slider of the virtual camera to slide with the mouse wheel, the distance between the virtual camera and the origin in the three-dimensional game engine software changing accordingly; setting an empty object at the origin of the world coordinate system in the three-dimensional game engine software, and changing the distance between the virtual camera and the empty object by scrolling the mouse wheel; and calculating the Euclidean distance between the virtual camera and the empty object in three-dimensional space in real time and assigning the distance to the value attribute of the Slider control;
the coordinate calculation module of the tail end of the screen identifier in the screen space is used for acquiring the coordinates of the tail end of the mechanical arm of the virtual robot model in the screen space, and calculating the coordinates of the tail end of the prompt line in the screen space, wherein the coordinates of the tail end of the mechanical arm in the screen space are taken as a starting point, and the tail end of the screen identifier is taken as an end point;
the screen mark tail end coordinate conversion module is used for converting the coordinates of the screen mark tail end in the screen into a three-dimensional space to obtain the homogeneous coordinates of the point in the three-dimensional space;
the screen identification tail end and mechanical arm tail end distance calculation module of the virtual robot model is used for acquiring three-dimensional point coordinates of the mechanical arm tail end of the virtual robot model in the space and calculating the Euclidean distance between the homogeneous coordinate and the three-dimensional point coordinates of the mechanical arm tail end in the space;
the virtual camera visual angle and distance adjusting and coordinate converting module is used for adding the Euclidean distance between the virtual camera and the empty object in three-dimensional space into the calculation of the inverse kinematics offset, and converting the mechanical arm tail end coordinates of the virtual robot model into the mechanical arm joint angles of the virtual robot model by using inverse kinematics; adjusting the visual angle of the virtual camera and the distance between the virtual camera and the virtual robot model with the mouse so that the distance calculated by the previous module is consistent with the expected moving distance, so as to control the mechanical arm motion of the virtual robot model with the mechanical arm joint angles of the virtual robot model;
and the moving distance display module is used for displaying, at the tail end of the screen identifier, the Euclidean distance between the homogeneous coordinates of the screen identifier tail end in three-dimensional space and the three-dimensional point coordinates of the mechanical arm tail end in space, wherein the distance represents the distance moved over the length of the screen identifier and corresponds to the distance moved by the tail end of the mechanical arm of the real robot in real space.
CN201911316517.0A 2019-12-19 2019-12-19 Robot three-dimensional space scale prompting method and system based on screen identification Active CN111113414B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911316517.0A CN111113414B (en) 2019-12-19 2019-12-19 Robot three-dimensional space scale prompting method and system based on screen identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911316517.0A CN111113414B (en) 2019-12-19 2019-12-19 Robot three-dimensional space scale prompting method and system based on screen identification

Publications (2)

Publication Number Publication Date
CN111113414A CN111113414A (en) 2020-05-08
CN111113414B true CN111113414B (en) 2022-08-30

Family

ID=70500577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911316517.0A Active CN111113414B (en) 2019-12-19 2019-12-19 Robot three-dimensional space scale prompting method and system based on screen identification

Country Status (1)

Country Link
CN (1) CN111113414B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113504063B (en) * 2021-06-30 2022-10-21 南京航空航天大学 Three-dimensional space touch screen equipment visualization test method based on multi-axis mechanical arm
CN116230167B (en) * 2023-02-21 2023-11-24 中国人民解放军海军军医大学第三附属医院 Prompting method and prompting system for interface operation of surgical robot
CN117140539B (en) * 2023-11-01 2024-01-23 成都交大光芒科技股份有限公司 Three-dimensional collaborative inspection method for robot based on space coordinate transformation matrix


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8840466B2 (en) * 2011-04-25 2014-09-23 Aquifi, Inc. Method and system to create three-dimensional mapping in a two-dimensional game

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06203166A (en) * 1993-01-06 1994-07-22 Fujitsu Ltd Measurement, controller and learning method for multi-dimensional position
US5675229A (en) * 1994-09-21 1997-10-07 Abb Robotics Inc. Apparatus and method for adjusting robot positioning
JP2001022495A (en) * 1999-07-12 2001-01-26 Hitachi Ltd Three-dimensional virtual desk-top management device and switching method
CN102448678A (en) * 2009-05-26 2012-05-09 奥尔德巴伦机器人公司 System and method for editing and controlling the behavior of a movable robot
CN102473035A (en) * 2009-07-22 2012-05-23 英默森公司 Interactive touch screen gaming metaphors with haptic feedback across platforms
JP2015147259A (en) * 2014-02-05 2015-08-20 株式会社デンソーウェーブ Teaching device for robot
CN109419554A (en) * 2017-09-04 2019-03-05 北京航空航天大学 A kind of carpal bone system of virtual operation and method based on unity3d
CN108553895A (en) * 2018-04-24 2018-09-21 网易(杭州)网络有限公司 User interface element and the associated method and apparatus of three-dimensional space model
CN108830894A (en) * 2018-06-19 2018-11-16 亮风台(上海)信息科技有限公司 Remote guide method, apparatus, terminal and storage medium based on augmented reality

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Scene roaming interaction technology for large-screen projection environments; Xu Chunyao et al.; Computer Engineering and Design (《计算机工程与设计》); No. 706 Institute, Second Academy of CASIC; 2013-07-04; vol. 34, no. 5; pp. 1729-1730 *

Also Published As

Publication number Publication date
CN111113414A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
CN110238831B (en) Robot teaching system and method based on RGB-D image and teaching device
CN111113414B (en) Robot three-dimensional space scale prompting method and system based on screen identification
CN106485780B (en) Method for realizing building information model experience based on virtual reality technology
US7236854B2 (en) Method and a system for programming an industrial robot
CN108196679B (en) Gesture capturing and texture fusion method and system based on video stream
CN110355750B (en) Interaction control method for hand-eye coordination of teleoperation
CN106313049A (en) Somatosensory control system and control method for apery mechanical arm
CN107030692B (en) Manipulator teleoperation method and system based on perception enhancement
JP2011110620A (en) Method of controlling action of robot, and robot system
CN107122045A (en) A kind of virtual man-machine teaching system and method based on mixed reality technology
CN108214445A (en) A kind of principal and subordinate's isomery remote operating control system based on ROS
CN102368810A (en) Semi-automatic aligning video fusion system and method thereof
CN110815189A (en) Robot rapid teaching system and method based on mixed reality
TWI659279B (en) Process planning apparatus based on augmented reality
CN108828996A (en) A kind of the mechanical arm remote control system and method for view-based access control model information
CN105319991A (en) Kinect visual information-based robot environment identification and operation control method
CN210361314U (en) Robot teaching device based on augmented reality technology
CN115328304A (en) 2D-3D fused virtual reality interaction method and device
CN110142769A (en) The online mechanical arm teaching system of ROS platform based on human body attitude identification
CN112732075B (en) Virtual-real fusion machine teacher teaching method and system for teaching experiments
CN104019761A (en) Three-dimensional configuration obtaining device and method of corn plant
CN206877277U (en) A kind of virtual man-machine teaching system based on mixed reality technology
Teng et al. Augmented-reality-based 3D Modeling system using tangible interface
CN112192563B (en) Painting control method and chip of intelligent painting robot and intelligent painting robot
Dinh et al. Augmented reality interface for taping robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant