CN108247633B - Robot control method and system

Robot control method and system

Info

Publication number
CN108247633B
Authority
CN
China
Prior art keywords
robot
voice command
unit
move
command
Prior art date
Legal status
Active
Application number
CN201711471267.9A
Other languages
Chinese (zh)
Other versions
CN108247633A (en)
Inventor
殷伟豪
Current Assignee
Gree Green Refrigeration Technology Center Co Ltd of Zhuhai
Original Assignee
Gree Green Refrigeration Technology Center Co Ltd of Zhuhai
Priority date
Filing date
Publication date
Application filed by Gree Green Refrigeration Technology Center Co Ltd of Zhuhai
Priority to CN201711471267.9A
Publication of CN108247633A
Application granted
Publication of CN108247633B

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 - Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1602 - Programme controls characterised by the control system, structure, architecture

Abstract

The invention discloses a robot control method and control system. The method comprises the following steps: acquiring position information of a laser spot emitted by a laser signal generator; generating a planned path according to the position information and the current position of the robot; and controlling the robot to move to a target area according to the planned path. The invention solves the technical problems of high control difficulty and low efficiency caused by the complex operation of online programming teaching methods.

Description

Robot control method and system
Technical Field
The invention relates to the field of robot control, in particular to a robot control method and system.
Background
At present, industrial robots are widely used in production, and as their range of application keeps expanding, the requirements on robot teaching technology keep rising. Most robots used in production at home and abroad are taught by online programming: an operator either directly moves the robot joints or indirectly guides the robot's planned motion with a teach pendant, while the robot collects and records the motion data of each joint to generate robot motion instructions, thereby learning the taught actions.
However, traditional online programming teaching is generally performed by operating the axis direction keys on the teach pendant. The operation is complex and tedious, and an operator needs a certain amount of training to become proficient, so the method is inconvenient and inefficient.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a robot control method and control system, which at least solve the technical problems of high control difficulty and low efficiency caused by the complex operation of online programming teaching methods.
According to one aspect of the embodiments of the present invention, there is provided a robot control method, including: acquiring position information of a laser spot emitted by a laser signal generator; generating a planned path according to the position information and the current position of the robot; and controlling the robot to move to a target area according to the planned path.
Optionally, before the position information of the laser spot emitted by the laser signal generator is acquired, the method further includes: receiving a first voice command, wherein the first voice command is used for instructing the robot to move to a preset position; acquiring a pre-stored first control instruction according to the first voice command; and executing the first control instruction according to set parameters so as to control the robot to move to the preset position.
Optionally, after the robot is controlled to move to the target area according to the planned path, the method further includes: receiving a second voice command, wherein the second voice command is used for instructing a motion action of the robot; parsing the second voice command to generate a second control instruction; and controlling the robot to execute the motion action according to the second control instruction.
Optionally, the first voice command comprises an action command keyword, and the action command keyword includes at least one of: return to origin, return to standby point, trial run, start automatic operation, pause, stop, and clear alarm. The second voice command includes at least one of: keyword variables, numeric variables, and unit variables; wherein the keyword variables include at least one of: direction, speed, distance, acceleration/deceleration, and delay; the numeric variables include at least one of: integer, floating-point number; and the unit variables include at least one of: direction unit, speed unit, distance unit, acceleration/deceleration unit, and time unit.
Optionally, after the robot is controlled to move to the target area according to the planned path, the method further includes: storing the planned path; and, when a run-start command is received, controlling the robot to move according to the stored planned path.
According to another aspect of the embodiments of the present invention, there is also provided a robot control system, including: a laser signal generator for emitting laser light; a camera device for acquiring position information of the laser spot emitted by the laser signal generator; and a motion controller connected with the camera device and used for generating a planned path according to the position information and the current position of the robot, and for controlling the robot to move to a target area according to the planned path.
Optionally, the robot control system further comprises a voice recognition device for receiving a first voice command, the first voice command being used for instructing the robot to move to a preset position. The motion controller includes: a voice command parsing module for parsing the first voice command; a language interpreter module connected with the voice command parsing module and used for converting the parsed first voice command into data recognizable by the robot; and a motion control algorithm module connected with the language interpreter module and used for acquiring a pre-stored first control instruction according to the recognizable data, and for executing the first control instruction according to set parameters so as to control the robot to move to the preset position.
Optionally, the voice recognition device is further configured to receive a second voice command, the second voice command being used for instructing a motion action of the robot; the voice command parsing module is further configured to parse the second voice command into parameters; the language interpreter module is further configured to convert the parameters into recognizable data; and the motion control algorithm module is further configured to parse the recognizable data to generate a second control instruction, and to control the robot to execute the motion action according to the second control instruction.
Optionally, the first voice command comprises an action command keyword, and the action command keyword includes at least one of: return to origin, return to standby point, trial run, start automatic operation, pause, stop, and clear alarm. The second voice command includes at least one of: keyword variables, numeric variables, and unit variables; wherein the keyword variables include at least one of: direction, speed, distance, acceleration/deceleration, and delay; the numeric variables include at least one of: integer, floating-point number; and the unit variables include at least one of: direction unit, speed unit, distance unit, acceleration/deceleration unit, and time unit.
Optionally, the robot control system further comprises a memory for storing the planned path, and the motion controller is further configured to control the robot to move according to the stored planned path when a run-start command is received.
In the embodiments of the invention, position information of a laser spot emitted by a laser signal generator is acquired, a planned path is generated according to the position information and the current position of the robot, and the robot is controlled to move to a target area according to the planned path. The guiding and positioning of the laser spot achieve quick preliminary positioning of the target area, so that the operator controls the robot more flexibly and simply, the positioning time of the robot is greatly shortened, and teaching efficiency is improved, thereby solving the technical problems of high control difficulty and low efficiency caused by the complex operation of online programming teaching methods.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic flow chart of an alternative robot control method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an alternative robot control method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an alternative robot control system according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an alternative motion controller according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of another alternative motion controller according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In accordance with an embodiment of the invention, a method embodiment of a robot control method is provided. It should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system as a set of computer-executable instructions and that, although a logical order is shown in the flowchart, in some cases the steps may be performed in an order different from the one described here.
Fig. 1 shows a robot control method according to an embodiment of the present invention. As shown in fig. 1, the method includes the following steps:
Step S102, acquiring position information of the laser spot emitted by the laser signal generator.
In step S102, the laser signal generator produces a laser spot and the camera device acquires its position information. Specifically, the operator holds the laser signal generator and points it at a position near the target; the camera device then acquires the position coordinates of the laser spot (i.e., the position information).
Step S104, generating a planned path according to the position information and the current position of the robot.
In step S104, after the position information is acquired, the camera device transmits it to the image processing and analyzing module of the motion controller; the language interpreter module of the motion controller parses it into data recognizable by the robot; and the motion control algorithm module generates the planned path and optimizes it.
Step S106, controlling the robot to move to a target area according to the planned path.
In step S106, after generating the planned path, the motion control algorithm module generates a robot control instruction so that the robot moves to the target area along the planned path. The robot executes the control instruction according to the set parameters and quickly moves to a position near the target (i.e., the target area). As a result, the operator controls the robot more flexibly and simply, the positioning time of the robot is greatly shortened, and the teaching efficiency of the robot is improved.
Through the above steps, the guiding and positioning of the laser spot achieve quick preliminary positioning of the target area, so that the operator controls the robot more flexibly and simply and the positioning time of the robot is greatly shortened, which improves teaching efficiency and thereby solves the technical problems of high control difficulty and low efficiency caused by the complex operation of online programming teaching methods.
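To make the flow of steps S102 to S106 concrete, the following Python sketch traces the same control flow. It is only an illustration under stated assumptions: the coordinate type, the straight-line plan_path placeholder, and the function names are hypothetical stand-ins for the camera device, the motion control algorithm module, and the robot controller, not the patented implementation.

    # Minimal sketch of laser-guided coarse positioning (steps S102-S106).
    # plan_path is a straight-line placeholder for the motion control
    # algorithm module; a real controller would also optimize the path.
    from typing import List, Tuple

    Point = Tuple[float, float, float]

    def plan_path(start: Point, goal: Point, steps: int = 10) -> List[Point]:
        return [tuple(s + (g - s) * i / steps for s, g in zip(start, goal))
                for i in range(steps + 1)]

    def guide_to_laser_spot(spot: Point, current: Point) -> List[Point]:
        # Step S102: 'spot' is the laser point position from the camera device.
        # Step S104: generate a planned path from the robot's current position.
        path = plan_path(current, spot)
        # Step S106: the controller would now stream 'path' to the robot.
        return path

    path = guide_to_laser_spot(spot=(0.40, 0.25, 0.10), current=(0.0, 0.0, 0.30))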
Optionally, before the position information of the laser spot emitted by the laser signal generator is acquired, the method further includes: receiving a first voice command, wherein the first voice command is used for instructing the robot to move to a preset position; acquiring a pre-stored first control instruction according to the first voice command; and executing the first control instruction according to set parameters so as to control the robot to move to the preset position.
In this embodiment, before the guiding and positioning by the laser spot, the robot may be controlled by voice to return to the origin/standby point (i.e., the preset position). Specifically, the operator issues a "return to origin" or "return to standby point" voice command (i.e., the first voice command) to the voice recognition device. The command is transmitted over the wireless network to the voice command parsing module of the motion controller, where it is recognized and parsed; it is then sent to the language interpreter module and converted into data recognizable by the robot. The motion control algorithm module plans and optimizes the path and finally generates a robot motion control instruction, the motion controller executes the command according to the pre-stored corresponding control instruction and the set parameters, and the robot moves to the origin/standby point.
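As an illustration of how pre-stored control instructions could be looked up by action keyword, here is a minimal Python sketch. The command strings and stored parameters are hypothetical; the patent only specifies that the corresponding commands and values are defined in advance and held in the memory module.

    # Hypothetical lookup table of pre-stored control instructions keyed by
    # action command keyword, as used for the first voice command.
    PRESTORED = {
        "return to origin":        {"target": "origin",  "speed_mm_s": 100.0},
        "return to standby point": {"target": "standby", "speed_mm_s": 100.0},
    }

    def execute_action_command(command: str) -> dict:
        instruction = PRESTORED.get(command)
        if instruction is None:
            raise ValueError(f"unknown action command: {command!r}")
        # A real motion controller would execute the instruction with its
        # set parameters here; the sketch just returns it.
        return instruction

    execute_action_command("return to origin")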
Optionally, after the robot is controlled to move to the target area according to the planned path, the method further includes: receiving a second voice command, wherein the second voice command is used for instructing a motion action of the robot; parsing the second voice command to generate a second control instruction; and controlling the robot to execute the motion action according to the second control instruction.
In this embodiment, after the guiding and positioning by the laser spot, fine positioning may be performed by controlling the robot by voice. Specifically, the operator may issue to the motion controller, through the voice recognition device, a voice command (i.e., the second voice command) consisting of keyword variables (Keywords) such as direction, speed, distance, acceleration, deceleration, and delay, together with specific numeric variables (Numbers) and unit variables (Units); the numeric variables may be precise floating-point values. The voice command parsing module of the motion controller recognizes and parses the command into specific parameters and sends them to the language interpreter module and the motion control algorithm module, which finally generates a robot motion control instruction.
Optionally, the first voice command comprises an action command keyword, and the action command keyword includes at least one of: return to origin, return to standby point, trial run, start automatic operation, pause, stop, and clear alarm. The second voice command includes at least one of: keyword variables, numeric variables, and unit variables; wherein the keyword variables include at least one of: direction, speed, distance, acceleration/deceleration, and delay; the numeric variables include at least one of: integer, floating-point number; and the unit variables include at least one of: direction unit, speed unit, distance unit, acceleration/deceleration unit, and time unit.
In this embodiment, a voice control command matching model needs to be established in advance so that the voice command parsing module can process and match commands. The format of a voice control command is defined as:
keyword variable (Keywords) + numeric variable (Numbers) + unit variable (Units)
First, a library of voice command keyword variables is defined: direction D, speed V, distance S, acceleration time Acc, deceleration time Dec, delay T, and so on; further keywords can be added later as required. The direction variable D may take the values: left and right; forward and backward; up and down.
Secondly, a library of voice command numeric variables is defined, namely integers and floating-point numbers (Float); it can likewise be extended later as required.
Then a library of voice command unit variables is defined: direction unit (forward "+" and reverse "-"), speed unit mm/s, distance unit mm, acceleration/deceleration unit mm/s², and time unit s; further units can be added later as required. The forward/reverse direction units are defined as: left (+), right (-); forward (+), backward (-); up (+), down (-).
In addition, libraries of fixed constants can be defined separately. For example, when the robot returns to the origin/standby point, performs a trial run, or runs automatically, it may move with a fixed speed, acceleration, deceleration, distance, and so on; the voice commands and numerical values corresponding to these fixed-constant libraries only need to be defined in advance and stored in the memory module. The abstract format of such a voice command is simply the action command keyword (Keywords). The voice commands include: return to origin, return to standby point, trial run, start automatic operation, pause, stop, clear alarm, and so on; more can be defined later as required.
With the variable libraries defined, a complete voice control command data frame can be assembled: the operator sends to the motion controller, through the voice recognition device, a data frame conforming to the Keywords + Numbers + Units format; the controller recognizes it and finally obtains the corresponding control instruction and outputs it to the robot. A voice command that omits a keyword is executed with default parameter values.
For example, the voice command "direction: left; speed: 60%; distance: 100 mm; delay: 1 s" is recognized and parsed by the voice command parsing module of the motion controller into specific parameters, namely: after a delay of 1 s, the robot moves 100 mm to the left at 60% speed. These parameters are sent to the language interpreter module and the motion control algorithm module, which finally generates a robot motion control instruction. It should be noted that the voice control command matching model can be modified and extended for the actual application.
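A minimal parser for such a Keywords + Numbers + Units data frame might look as follows in Python. The textual frame syntax ("direction: left; speed: 60 %; ...") is an assumption made for illustration; the keyword abbreviations D, V, S, Acc, Dec, T and the signed direction units follow the variable libraries defined above.

    import re

    # Keyword variable library (D, V, S, Acc, Dec, T) from the matching model.
    KEYWORDS = {"direction": "D", "speed": "V", "distance": "S",
                "acceleration": "Acc", "deceleration": "Dec", "delay": "T"}
    # Signed direction units: left/forward/up are "+", right/backward/down "-".
    DIRECTION_SIGN = {"left": +1, "right": -1, "forward": +1,
                      "backward": -1, "up": +1, "down": -1}

    def parse_frame(frame: str) -> dict:
        """Parse e.g. 'direction: left; speed: 60 %; distance: 100 mm; delay: 1 s'."""
        params = {}
        for field in frame.split(";"):
            key, _, value = (part.strip() for part in field.partition(":"))
            var = KEYWORDS[key.lower()]
            if var == "D":
                params[var] = DIRECTION_SIGN[value.lower()]
            else:
                number, unit = re.match(r"([-+]?\d+(?:\.\d+)?)\s*(.*)", value).groups()
                params[var] = (float(number), unit)  # numeric + unit variable
        return params

    parse_frame("direction: left; speed: 60 %; distance: 100 mm; delay: 1 s")
    # -> {'D': 1, 'V': (60.0, '%'), 'S': (100.0, 'mm'), 'T': (1.0, 's')}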
After the robot has been accurately positioned at the target position, it can execute specific process tasks with its end tool, such as pick-and-place, welding, dispensing, and assembly.
Optionally, after the robot is controlled to move to the target area according to the planned path, the method further includes: storing the planned path; and, when a run-start command is received, controlling the robot to move according to the stored planned path.
In this embodiment, the robot's global teaching path is planned, and the control parameters of each key point are stored in real time in the memory module of the motion controller; they can also be displayed in real time on the human-machine interface of the teach pendant for the operator to check.
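The store-and-replay behavior can be pictured with a short sketch; the class and method names below are hypothetical, and the robot interface is only assumed to accept key-point parameters.

    # Hypothetical store-and-replay of the global teaching path: each key
    # point's control parameters are appended during teaching and replayed
    # when the run-start command is received.
    from typing import List

    class TeachingPathStore:
        def __init__(self) -> None:
            self.key_points: List[dict] = []

        def record(self, point: dict) -> None:
            # Called in real time as the path is taught; this list is also
            # what the teach pendant's human-machine interface would display.
            self.key_points.append(point)

        def replay(self, robot) -> None:
            # On the run-start command, drive the robot through the stored path.
            for point in self.key_points:
                robot.move_to(**point)

    store = TeachingPathStore()
    store.record({"x": 0.40, "y": 0.25, "z": 0.10, "speed_mm_s": 120.0})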
In this embodiment, the operator can flexibly choose between different teaching modes, such as a normal teaching mode and an intelligent teaching mode, according to the actual production environment and process requirements, which gives a good user experience and creates good conditions for a variety of production environments. In the intelligent teaching mode, guided by the laser spot, the camera device and the image processing and analyzing module of the motion controller can quickly make a preliminary location of the position near the target, so that the operator controls the robot more flexibly and simply; moreover, the positioning time of the robot is greatly shortened and teaching efficiency is improved.
Furthermore, the voice control commands help the operator achieve more accurate and finer positioning of the target position: when the robot is near the target, the operator can use the voice recognition device to issue voice commands containing keywords and data to fine-tune the positioning. The invention also establishes a voice control command matching model, which makes processing and matching by the voice command parsing module convenient.
In addition, the control parameters of each key point of the robot's global teaching path can be stored in real time in the memory module of the motion controller and displayed in real time on the human-machine interface of the teach pendant for the operator to check. By locating the target position through the combination of laser guidance and voice, the robot can teach and plan paths more quickly and accurately in complex working environments.
The present invention will be described in further detail with reference to examples, but the embodiments of the present invention are not limited thereto.
The teaching device has two modes, normal teaching and intelligent teaching, and the field operator can select the teaching mode according to the actual production conditions and environment of the product. Being able to select and switch the function according to the actual production environment or process makes operation flexible and convenient, gives a good user experience, and creates good conditions for use in a variety of production environments.
If the operator selects the normal teaching mode, the teaching-mode knob on the teach pendant is manually switched to normal teaching, and the robot teaching path is planned using the axis direction keys on the teach pendant or by setting axis target position coordinates.
As shown in fig. 2, if the operator selects the intelligent teaching mode, the robot control method of this embodiment includes the following steps:
Step S1: selecting the intelligent teaching mode.
Step S1 requires the operator to manually switch the teaching-mode knob on the teach pendant to the intelligent teaching mode.
Step S2: teaching the path and storing it together with its data.
Step S2 includes the following sub-steps:
(1) Voice-controlled return to the origin/standby point. The operator speaks a "return to origin" or "return to standby point" voice command into the voice recognition device. The command is transmitted over the wireless network to the voice command parsing module of the controller, where it is recognized and parsed; it is then sent to the language interpreter module and converted into data recognizable by the robot. The motion control algorithm module plans and optimizes the path and finally generates a robot motion control instruction; the motion controller executes the command according to the pre-stored corresponding control instruction and set parameters, and the robot moves to the origin/standby point.
(2) Quick laser positioning of the target position. The invention guides and positions via the position of the laser spot emitted by the laser signal generator, so the camera device and the image processing and analyzing module of the motion controller can quickly make a preliminary location of the position near the target. Specifically, the operator holds the laser signal generator and points it at a position near the target. The camera device acquires the position coordinates of the laser spot and transmits them to the image processing and analyzing module of the motion controller; the language interpreter module of the motion controller parses them into data recognizable by the robot; the motion control algorithm module plans and optimizes the path; and a robot control instruction is then generated. The robot executes the instruction according to the set parameters and quickly moves to the position near the target. As a result, the operator controls the robot more flexibly and simply, the positioning time of the robot is greatly shortened, and teaching efficiency is improved.
(3) Voice-controlled fine positioning. The voice control commands achieve more accurate and finer positioning of the target position. The operator sends to the motion controller, through the voice recognition device, keyword variables such as direction, speed, distance, acceleration, deceleration, and delay, together with specific numeric variables (Numbers) and unit variables (Units), to control the motion of the robot; the numeric variables may be precise floating-point values. The voice command parsing module of the motion controller recognizes and parses the command into specific parameters and sends them to the language interpreter module and the motion control algorithm module, which finally generates a robot motion control instruction.
(4) The end tool executes a task. After being accurately positioned at the target position, the robot executes specific process tasks, such as pick-and-place with the end tool, welding, dispensing, and assembly.
(5) Quick and fine laser-and-voice positioning of the next target position. Sub-steps (2), (3), and (4) are repeated to carry out the subsequent robot control commands.
(6) Voice-controlled return to the origin/standby point. The operator repeats the voice command of sub-step (1) to move the robot to the origin/standby point.
(7) Storing the teaching path data. Through the above sub-steps, the robot's global teaching path is planned; the control parameters of each key point are stored in real time in the memory module of the motion controller and can be displayed in real time on the human-machine interface of the teach pendant for the operator to check.
Step S3: trial run of the teaching path.
Step S3 is a trial run of the globally taught planned path: the operator issues the voice command "trial run" to the voice recognition device, and the robot automatically runs once through the global teaching path stored in step S2.
Step S4: automatic production.
In step S4, after all teaching is completed, the robot formally starts the automatic production cycle. The operator issues the voice command "start automatic operation" to the voice recognition device, and the robot executes the corresponding control instructions to run production automatically. During operation, the voice commands "stop" and "pause" can be issued to stop or pause the robot's motion. If an abnormal alarm occurs while the robot control system is running, the voice command "clear alarm" can be issued, once the fault has been resolved, to clear the alarm state.
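The production voice commands of step S4 amount to a small run-state machine. The sketch below is one plausible reading of the behavior described above; the state names and the rule that a pending alarm blocks starting are assumptions.

    # Hypothetical run-state handling for the production voice commands.
    class RunState:
        def __init__(self) -> None:
            self.state = "idle"
            self.alarm = False

        def handle(self, command: str) -> str:
            if command == "start automatic operation" and not self.alarm:
                self.state = "running"
            elif command == "pause" and self.state == "running":
                self.state = "paused"
            elif command == "stop":
                self.state = "idle"
            elif command == "clear alarm":
                # Only meaningful after the underlying fault is resolved.
                self.alarm = False
            return self.state

    ctrl = RunState()
    ctrl.handle("start automatic operation")   # -> "running"
    ctrl.handle("pause")                       # -> "paused"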
The robot control method provided by the embodiments of the invention solves the problem that, in traditional online programming teaching, operating the axis direction keys on the teach pendant or setting axis target position coordinates makes teaching complex and tedious; with this method, robot teaching becomes simple, convenient, and easy to master. It also meets the demand for quick, flexible, small-batch, multi-variety production in the complex production environments of different applications, achieving flexibility, convenience, speed, accuracy, and efficiency. This control method greatly reduces the complexity of traditional teaching, lets operators quickly understand and teach robot actions, and suits quick, flexible, small-batch, multi-variety robot production in different settings, improving production efficiency.
Example 2
According to an embodiment of the present invention, there is provided an embodiment of a robot control system, as shown in fig. 3, including:
a laser signal generator 30 for emitting laser light;
a camera device 32 for acquiring position information of the laser spot emitted by the laser signal generator 30; and
a motion controller 34 connected with the camera device 32 and configured to generate a planned path according to the position information and the current position of the robot 36, and to control the robot 36 to move to a target area according to the planned path.
Optionally, as also shown in fig. 3, the control system of the robot further comprises:
a voice recognition device 38 for receiving a first voice command instructing the robot 36 to move to a preset position.
As shown in fig. 4, the motion controller 34 includes:
a voice command parsing module 340 for parsing the first voice command;
a language interpreter module 342 connected with the voice command parsing module 340 and used for converting the parsed first voice command into data recognizable by the robot 36; and
a motion control algorithm module 344 connected with the language interpreter module 342 and used for acquiring a pre-stored first control instruction according to the recognizable data, and for executing the first control instruction according to the set parameters so as to control the robot 36 to move to the preset position.
Optionally, the voice recognition device 38 is further configured to receive a second voice command, the second voice command being used for instructing a motion action of the robot 36;
the voice command parsing module 340 is further configured to parse the second voice command into parameters;
the language interpreter module 342 is further configured to convert the parameters into recognizable data; and
the motion control algorithm module 344 is further configured to parse the recognizable data to generate a second control instruction, and to control the robot 36 to execute the motion action according to the second control instruction.
As shown in fig. 5, the motion controller of this embodiment includes: a motion control algorithm module 344, a language interpreter module 342, a voice command parsing module 340, and an image processing and analyzing module 346.
The image processing and analyzing module is responsible for processing the image data transmitted by the binocular camera (i.e., the camera device), for example the position coordinates of the laser spot emitted by the laser signal generator. By using the binocular camera (for example, the one in a Kinect somatosensory device) to obtain RGB images and depth images of the laser spot, the laser spot can be located and tracked. The laser signal generator can be hand-held and serves as a device for quickly locating and guiding to the target position, while the Kinect device is installed on the production floor at a position from which the laser spot can conveniently be located anywhere in the workspace.
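The patent does not specify the detection algorithm, but a common way to locate a bright laser spot in an RGB image is to take the peak of the blurred red channel and read the depth image at that pixel. The following OpenCV sketch is such an assumption, with an arbitrary brightness threshold; it is not taken from the patent.

    # Hypothetical laser-spot detector: peak of the blurred red channel of
    # the color image, plus the depth value at that pixel.
    import cv2
    import numpy as np

    def locate_laser_spot(bgr: np.ndarray, depth: np.ndarray):
        red = cv2.GaussianBlur(bgr[:, :, 2].copy(), (9, 9), 0)  # OpenCV is BGR
        _, max_val, _, (u, v) = cv2.minMaxLoc(red)
        if max_val < 200:                  # arbitrary brightness threshold
            return None                    # no sufficiently bright spot
        return u, v, float(depth[v, u])    # pixel (u, v) and its depth

    # Usage with synthetic images: a bright 10x10 patch at (320, 240), 1.2 m deep.
    bgr = np.zeros((480, 640, 3), np.uint8); bgr[235:245, 315:325, 2] = 255
    depth = np.full((480, 640), 1.2, np.float32)
    print(locate_laser_spot(bgr, depth))   # -> a pixel near (320, 240) and 1.2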
The voice command parsing module is responsible for processing the voice data input by the voice recognition device. Voice recognition devices generally run speech recognition software whose principle is to model the speech signal using hidden Markov models (HMM), artificial neural networks (ANN), and the like, which copes to a great extent with the overall non-stationarity and local stationarity of speech signals. The voice recognition device can be hand-held, serves as the acquisition device for the operator's voice control commands, and communicates with the controller over Wi-Fi.
Optionally, the first voice command comprises an action command keyword, and the action command keyword includes at least one of: return to origin, return to standby point, trial run, start automatic operation, pause, stop, and clear alarm. The second voice command includes at least one of: keyword variables, numeric variables, and unit variables; wherein the keyword variables include at least one of: direction, speed, distance, acceleration/deceleration, and delay; the numeric variables include at least one of: integer, floating-point number; and the unit variables include at least one of: direction unit, speed unit, distance unit, acceleration/deceleration unit, and time unit.
Optionally, the robot control system further comprises a memory for storing the planned path, and the motion controller 34 is further configured to control the robot 36 to move according to the stored planned path when a run-start command is received.
The invention provides a robot control system that performs real-time, intelligent path planning and teaching by acquiring, in real time, globally effective position information designated by personnel. As also shown in fig. 3, the robot control system includes: the robot 36, the motion controller 34, a teach pendant, the laser signal generator 30, the binocular camera (i.e., the camera device 32), and the voice recognition device 38.
The teach pendant is a device for controlling the operation of the robot. In this embodiment, the teaching device has two modes, normal teaching and intelligent teaching, and the field operator can select the teaching mode according to the actual production conditions and environment of the product. Being able to select and switch the function according to the actual production environment or process makes operation flexible and convenient, gives a good user experience, and creates good conditions for use in a variety of production environments.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.

Claims (4)

1. A method for controlling a robot, comprising:
acquiring position information of a laser spot emitted by a laser signal generator;
generating a planned path according to the position information and the current position of the robot;
controlling the robot to move to a target area according to the planned path;
wherein, before the position information of the laser spot emitted by the laser signal generator is acquired, the method further includes:
receiving a first voice command, wherein the first voice command is used for instructing the robot to move to a preset position;
acquiring a pre-stored first control instruction according to the first voice command;
executing the first control instruction according to the set parameters to control the robot to move to the preset position;
wherein, after controlling the robot to move to the target area according to the planned path, the method further comprises:
receiving a second voice command, wherein the second voice command is used for instructing a motion action of the robot;
parsing the second voice command to generate a second control instruction;
controlling the robot to execute the motion action according to the second control instruction;
wherein the first voice command comprises an action command keyword, and the action command keyword includes at least one of: return to origin, return to standby point, trial run, start automatic operation, pause, stop, and clear alarm;
wherein, after controlling the robot to move to the target area according to the planned path, the method further comprises:
storing the planned path;
and, when a run-start command is received, controlling the robot to move according to the stored planned path.
2. The method of claim 1,
the second voice command includes at least one of: keyword variables, numeric variables, and unit variables; wherein the keyword variables include at least one of: direction, speed, distance, acceleration/deceleration, and delay; the numeric variables include at least one of: integer, floating-point number; and the unit variables include at least one of: direction unit, speed unit, distance unit, acceleration/deceleration unit, and time unit.
3. A control system for a robot, comprising:
a laser signal generator for emitting laser light;
a camera device for acquiring position information of the laser spot emitted by the laser signal generator;
a motion controller connected with the camera device and used for generating a planned path according to the position information and the current position of the robot, and for controlling the robot to move to a target area according to the planned path;
wherein the system further comprises:
a voice recognition device for receiving a first voice command, the first voice command being used for instructing the robot to move to a preset position;
wherein the motion controller includes:
a voice command parsing module for parsing the first voice command;
a language interpreter module connected with the voice command parsing module and used for converting the parsed first voice command into data recognizable by the robot;
a motion control algorithm module connected with the language interpreter module and used for acquiring a pre-stored first control instruction according to the recognizable data, and for executing the first control instruction according to the set parameters to control the robot to move to the preset position;
wherein the voice recognition device is further used for receiving a second voice command, the second voice command being used for instructing a motion action of the robot;
the voice command parsing module is further used for parsing the second voice command into parameters;
the language interpreter module is further used for converting the parameters into recognizable data;
and the motion control algorithm module is further used for parsing the recognizable data to generate a second control instruction, and for controlling the robot to execute the motion action according to the second control instruction;
wherein the first voice command comprises an action command keyword, and the action command keyword includes at least one of: return to origin, return to standby point, trial run, start automatic operation, pause, stop, and clear alarm;
wherein the system further comprises:
a memory for storing the planned path;
and the motion controller is further used for controlling the robot to move according to the stored planned path when a run-start command is received.
4. The system of claim 3,
the second voice command includes at least one of: keyword variables, numeric variables, and unit variables; wherein the keyword variables include at least one of: direction, speed, distance, acceleration/deceleration, and delay; the numeric variables include at least one of: integer, floating-point number; and the unit variables include at least one of: direction unit, speed unit, distance unit, acceleration/deceleration unit, and time unit.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711471267.9A CN108247633B (en) 2017-12-27 2017-12-27 Robot control method and system

Publications (2)

Publication Number Publication Date
CN108247633A (en) 2018-07-06
CN108247633B (en) 2021-09-03

Family

ID=62725119

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711471267.9A Active CN108247633B (en) 2017-12-27 2017-12-27 Robot control method and system

Country Status (1)

Country Link
CN (1) CN108247633B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108908336A (en) * 2018-07-20 2018-11-30 珠海智新自动化科技有限公司 A kind of manipulator command generating method and system
CN109062220B (en) * 2018-08-31 2021-06-29 创新先进技术有限公司 Method and device for controlling terminal movement
CN109917789B (en) * 2019-03-13 2021-07-20 珠海格力电器股份有限公司 Automatic transportation method and device for household appliances and storage medium
CN111819039B (en) * 2019-07-15 2023-08-15 深圳配天智能技术研究院有限公司 Robot control method, apparatus and readable storage medium
US20230384782A1 (en) * 2022-05-24 2023-11-30 International Business Machines Corporation Visual light-based direction to robotic system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3723329A1 (en) * 1986-07-16 1988-01-21 Tokico Ltd Industrial reproduction robot
CN1460050A (en) * 2001-03-27 2003-12-03 索尼公司 Action teaching apparatus and action teaching method for robot system, and storage medium
CN101947152A (en) * 2010-09-11 2011-01-19 山东科技大学 Electroencephalogram-voice control system and working method of humanoid artificial limb
CN103050118A (en) * 2012-11-29 2013-04-17 三一重工股份有限公司 Engineering machinery, control system and action voice control system thereof
CN103271784A (en) * 2013-06-06 2013-09-04 山东科技大学 Man-machine interactive manipulator control system and method based on binocular vision
CN106095109A (en) * 2016-06-20 2016-11-09 华南理工大学 The method carrying out robot on-line teaching based on gesture and voice
CN106125925A (en) * 2016-06-20 2016-11-16 华南理工大学 Method is arrested based on gesture and voice-operated intelligence
CN206105869U (en) * 2016-10-12 2017-04-19 华南理工大学 Quick teaching apparatus of robot
CN106826838A (en) * 2017-04-01 2017-06-13 西安交通大学 A kind of interactive biomimetic manipulator control method based on Kinect space or depth perception sensors
CN206711600U (en) * 2017-02-24 2017-12-05 广州幻境科技有限公司 The voice interactive system with emotive function based on reality environment

Also Published As

Publication number Publication date
CN108247633A (en) 2018-07-06

Similar Documents

Publication Publication Date Title
CN108247633B (en) Robot control method and system
US11654501B2 (en) Systems and methods for gesture control of a welding system
CN104936748B (en) Free-hand robot path teaching
EP3591521B1 (en) Assistance system, method, and program for assisting a user in fulfilling a task
KR20190056935A (en) Mobile terminal providing augmented reality based maintenance guidance, remote managing apparatus and method for remote guidance using the same
KR102586646B1 (en) Machine tool system
CN104875204A (en) Offline programming module and application method of plasma space cutting robot
CN1935470A (en) Method for optimizing robot program and robot control system
JP5141876B2 (en) Orbit search device
CN105511400B (en) A kind of pressing robot control system
Nuzzi et al. MEGURU: a gesture-based robot program builder for Meta-Collaborative workstations
CA2922792C (en) Cable processing machine monitoring with improved precision mechanism for cable processing
Matthaiakis et al. Flexible programming tool enabling synergy between human and robot
CN107199423B (en) Non-programming teaching-free intelligent welding robot
CN105955180A (en) Intelligent manufacturing adaptive dynamic generation robot real-time automatic programming method
CN109128439A (en) CAD diagram paper technology guided robot automatic soldering method
WO2019127024A1 (en) Method and apparatus for robotic machining
CN109531576B (en) Welding control method, device and system and welding robot
CN116021250B (en) Intelligent assembly system
CN116483977A (en) Method for realizing mobile robot-machine interaction inspection by using large language model
US11322147B2 (en) Voice control system for operating machinery
CN108000535A (en) A kind of six-joint robot intelligent controller
CN217475952U (en) Robot control system and robot
CN115033503B (en) Equipment control method based on augmented reality
CN116049425B (en) Robot interaction method based on RCS architecture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant