CN114211486B - Robot control method, robot and storage medium - Google Patents
- Publication number
- CN114211486B (application CN202111518577.8A)
- Authority
- CN
- China
- Prior art keywords
- robot
- information
- sensor
- data
- acquiring
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
Abstract
This application relates to the technical field of robots and provides a robot control method, a robot, and a storage medium. The method includes: acquiring first data collected by a sensor module in the robot, where the first data include human-machine interaction information, performance information of the robot, and environment information of the environment in which the robot is located; obtaining a control strategy of the robot based on the first data, where the control strategy includes at least one of an interaction strategy, an obstacle-avoidance strategy, and a motion strategy; and controlling the robot's actions based on the control strategy. In this method, at least one of the interaction strategy, the obstacle-avoidance strategy, and the motion strategy is obtained from the human-machine interaction information, the robot's performance information, and the environment information together rather than from a single type of information, which improves the degree of intelligence of the robot.
Description
Technical Field
The application belongs to the technical field of robots, and particularly relates to a control method of a robot, the robot and a storage medium.
Background
With the development of technology, robots are used more and more widely, for example floor-sweeping robots, meal-delivery robots, and reception robots.
Currently, a robot determines its control strategy from a single type of collected information: for example, it determines an interaction strategy from a user's interaction information, or a motion strategy from detected external environment information. A robot that determines its control strategy from a single type of information has a low degree of intelligence.
Disclosure of Invention
The embodiments of this application provide a robot control method, a robot, and a storage medium, which can address the problem that a robot has a low degree of intelligence.
In a first aspect, an embodiment of the present application provides a method for controlling a robot, including:
acquiring first data acquired by a sensor module in the robot, wherein the first data comprises man-machine interaction information, performance information of the robot and environment information of an environment where the robot is located;
based on the first data, a control strategy of the robot is obtained, wherein the control strategy comprises at least one of an interaction strategy, an obstacle avoidance strategy and a motion strategy;
and controlling the robot to act based on the control strategy.
In a second aspect, an embodiment of the present application provides a robot, including a sensor module and an information processing module, the information processing module includes:
a data acquisition module, configured to acquire first data collected by the sensor module in the robot, where the first data include human-machine interaction information, performance information of the robot, and environment information of the environment in which the robot is located;
the strategy determining module is used for obtaining a control strategy of the robot based on the first data, wherein the control strategy comprises at least one of an interaction strategy, an obstacle avoidance strategy and a motion strategy;
and the control module is used for controlling the robot to act based on the control strategy.
In a third aspect, an embodiment of the present application provides a robot, including: a sensor module, a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method of controlling a robot according to any of the above-mentioned first aspects when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method for controlling a robot according to any one of the first aspects.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on a terminal device, causes the terminal device to perform the method of controlling a robot according to any one of the first aspects above.
Compared with the prior art, the embodiment of the first aspect of this application has the following beneficial effects: first data collected by a sensor module in the robot are acquired first, where the first data include human-machine interaction information, performance information of the robot, and environment information of the environment in which the robot is located; a control strategy of the robot is obtained based on the first data, where the control strategy includes at least one of an interaction strategy, an obstacle-avoidance strategy, and a motion strategy; and the robot's actions are controlled based on the control strategy. In this method, at least one of the interaction strategy, the obstacle-avoidance strategy, and the motion strategy is obtained from the human-machine interaction information, the robot's performance information, and the environment information together rather than from a single type of information, which improves the degree of intelligence of the robot.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an application scenario of a control method of a robot according to an embodiment of the present application;
fig. 2 is a flow chart of a control method of a robot according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a method for obtaining a control strategy of a robot according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of a method for determining to obtain first data according to voice information according to an embodiment of the present application;
FIG. 5 is a flowchart of a method for determining to obtain first data according to touch information according to an embodiment of the present application;
FIG. 6 is a flow chart of a method for determining acquisition of first data according to video information according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an information processing module according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a robot according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining" or "in response to determining" or "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The robot provided in this application integrates multiple sensors of different types that collect different types of information, and the robot is controlled comprehensively according to these different types of information. This addresses the problem that a robot controlled according to a single type of information has a low degree of intelligence.
Fig. 1 schematically shows an application scenario of the robot control method provided in an embodiment of this application; the control method may be used to control the robot. The robot is equipped with a plurality of sensors, including: a microphone array, a light-sensing sensor, a radio-frequency sensor, a camera, a touch sensor, a limit sensor, a current sensor, an acceleration sensor, a temperature sensor, an infrared sensor, and an ultrasonic sensor. The microphone array collects the user's voice information. The light-sensing sensor collects light-sensing information. The radio-frequency sensor collects radio-frequency information. The camera collects video information. The touch sensor collects the user's touch information. The limit sensor collects motion position information of the robot. The current sensor collects the current signal of the power supply in the robot. The acceleration sensor collects acceleration information of the robot. The temperature sensor collects temperature information of the environment in which the robot is located. The infrared sensor collects first obstacle information of the environment in which the robot is located. The ultrasonic sensor collects second obstacle information of the environment in which the robot is located. The robot performs feature extraction and feature fusion on the information collected by these sensors to obtain a control strategy, and carries out the corresponding human-machine interaction, obstacle avoidance, and movement according to the control strategy.
Specifically, feature extraction and feature fusion are performed on the first voice information collected by the microphone array, the light-sensing information collected by the light-sensing sensor, the radio-frequency information collected by the radio-frequency sensor, the first video information collected by the camera, and the first touch information collected by the touch sensor.
Feature extraction and feature fusion are likewise performed on the motion position information collected by the limit sensor, the current signal collected by the current sensor, the acceleration information collected by the acceleration sensor, the temperature information collected by the temperature sensor, the first obstacle information collected by the infrared sensor, and the second obstacle information collected by the ultrasonic sensor.
Fig. 2 shows a schematic flow chart of a control method of a robot provided in the present application, in which a processor may be used to implement the following method. Referring to fig. 2, the method is described in detail as follows:
s101, acquiring first data acquired by a sensor module in the robot.
In this embodiment, the first data includes man-machine interaction information, performance information of the robot, and environment information of an environment in which the robot is located.
The human-machine interaction information includes instruction information issued by the user; for example, it may include a voice instruction and an action instruction issued by the user. The performance information of the robot may include the charge level of the robot's battery, the battery current, the rotation angle of the robot's head, the movement angle of the robot's hand, and so on. The environment information may include obstacles around the robot, and the temperature and humidity of the environment, among others.
In this embodiment, for collecting human-machine interaction information, the sensor module in the robot may include a microphone array, a light-sensing sensor, a radio-frequency sensor, a camera, a touch sensor, and the like.
For collecting performance information of the robot, the sensor module may include a limit sensor, a current sensor, an acceleration sensor, and the like.
For collecting environment information, the sensor module may include a temperature sensor, an infrared sensor, an ultrasonic sensor, and the like; both the infrared sensor and the ultrasonic sensor collect obstacle information.
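As an illustrative sketch only, the grouping of sensor readings into the three categories of first data described above could be organized as follows; the class and field names here are hypothetical and are not taken from the disclosure.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class InteractionInfo:
    """Human-machine interaction information."""
    voice: Optional[bytes] = None   # first voice information (microphone array)
    light: Optional[float] = None   # light-sensing information (light-sensing sensor)
    rfid: Optional[bytes] = None    # radio-frequency information (radio-frequency sensor)
    video: Optional[bytes] = None   # first video information (camera)
    touch: Optional[bool] = None    # first touch information (touch sensor)

@dataclass
class PerformanceInfo:
    """Performance information of the robot."""
    joint_positions: dict = field(default_factory=dict)  # motion position info (limit sensors)
    supply_current: float = 0.0                           # current signal of the power supply
    acceleration: tuple = (0.0, 0.0, 0.0)                 # acceleration information

@dataclass
class EnvironmentInfo:
    """Environment information of the environment in which the robot is located."""
    temperature: float = 0.0                              # temperature sensor
    ir_obstacles: list = field(default_factory=list)      # first obstacle info (infrared)
    us_obstacles: list = field(default_factory=list)      # second obstacle info (ultrasonic)

@dataclass
class FirstData:
    """The first data collected by the sensor module in step S101."""
    interaction: InteractionInfo
    performance: PerformanceInfo
    environment: EnvironmentInfo
```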
S102, obtaining the robot control strategy based on the first data.
In this embodiment, the control strategy includes at least one of an interaction strategy, an obstacle avoidance strategy, and a motion strategy.
Specifically, the first data is input into a neural network model to obtain a control strategy of the robot. A process of training the neural network model may also be included before the control strategy for the robot is derived using the neural network model.
Specifically, the training process for the neural network model includes: acquiring training parameters, which may include human-machine interaction information, performance information of the robot, and environment information of the environment in which the robot is located; inputting the training parameters into the neural network model to be trained to obtain training result data; comparing the training result data with preset result data to obtain difference data, and adjusting the parameters in the neural network model according to the difference data; and training the adjusted neural network model with the training parameters again until the training result data output by the model meet the preset requirement, thereby obtaining the trained neural network model.
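The training procedure just described (acquire training parameters, compare the model output with preset result data, adjust the parameters, and repeat until the preset requirement is met) can be sketched as a generic supervised-learning loop. This is only a sketch under the assumption of a PyTorch-style model; the disclosure does not specify the network architecture, loss function, or framework.

```python
import torch
import torch.nn as nn

def train_policy_model(model: nn.Module, dataset, epochs: int = 50,
                       lr: float = 1e-3, target_loss: float = 1e-2) -> nn.Module:
    """Train the neural network that maps first data to a control strategy.

    `dataset` is assumed to yield (training_parameters, preset_result) tensor pairs,
    where the training parameters encode interaction, performance and environment info.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # measures the "difference data" against the preset result data

    for _ in range(epochs):
        epoch_loss = 0.0
        for params, preset_result in dataset:
            optimizer.zero_grad()
            result = model(params)                  # training result data
            loss = loss_fn(result, preset_result)   # compare with preset result data
            loss.backward()                         # difference data -> gradients
            optimizer.step()                        # adjust the parameters in the model
            epoch_loss += loss.item()
        if epoch_loss / len(dataset) < target_loss: # preset requirement met
            break
    return model
```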
For example, if the human-machine interaction information is an instruction for the robot's head to turn 10 degrees to the left, and the performance information of the robot indicates that the head can turn 30 degrees to the left, the robot may determine the interaction strategy as "the robot announces: the head has been turned 10 degrees to the left", the motion strategy as "turn the head 10 degrees to the left", and the obstacle-avoidance strategy as "no obstacle avoidance needed".
If the human-machine interaction information is a go-straight instruction issued by the user, the environment information detected by the robot indicates an obstacle 2 meters ahead, and the performance information of the robot indicates that the robot's acceleration is B, then the motion strategy of the robot is "do not go straight", the obstacle-avoidance strategy is "avoid the obstacle 2 meters ahead", and the interaction strategy is "the robot announces: there is an obstacle ahead, I cannot go straight, and the route is being re-planned".
S103, controlling the robot to act based on the control strategy.
In this embodiment, according to the control strategy, the robot may be controlled to output corresponding response information, to move, to avoid an obstacle, and so on.
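As a hedged illustration of step S103, a control strategy that carries an interaction strategy, an obstacle-avoidance strategy, and a motion strategy might be dispatched to the corresponding subsystems as below; the `speaker`, `planner`, and `drive` interfaces are assumptions for the sketch, not part of the disclosure.

```python
def execute_control_strategy(robot, strategy: dict) -> None:
    """Dispatch each part of the control strategy to the matching robot subsystem."""
    if "interaction" in strategy:
        # e.g. play a response such as "the head has been turned 10 degrees to the left"
        robot.speaker.play(strategy["interaction"])
    if "obstacle_avoidance" in strategy:
        # e.g. re-plan the route around an obstacle 2 meters ahead
        robot.planner.avoid(strategy["obstacle_avoidance"])
    if "motion" in strategy:
        # e.g. turn the head 10 degrees to the left, or stop going straight
        robot.drive.execute(strategy["motion"])
```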
In the embodiments of this application, first data collected by a sensor module in the robot are acquired first, where the first data include human-machine interaction information, performance information of the robot, and environment information of the environment in which the robot is located; a control strategy of the robot is obtained based on the first data, where the control strategy includes at least one of an interaction strategy, an obstacle-avoidance strategy, and a motion strategy; and the robot's actions are controlled based on the control strategy. In this method, at least one of the interaction strategy, the obstacle-avoidance strategy, and the motion strategy is obtained from the human-machine interaction information, the robot's performance information, and the environment information together rather than from a single type of information, which improves the degree of intelligence of the robot.
In one possible implementation manner, the man-machine interaction information includes first voice information collected by the microphone array, light induction information collected by the light induction sensor, radio frequency information collected by the radio frequency sensor, first video information collected by the camera, and first touch information collected by the touch sensor.
Specifically, the implementation procedure of step S101 may include:
s1011, acquiring the first voice information acquired by the microphone array.
In this embodiment, the microphone array may collect voice information of a user, and perform denoising processing on the voice information to obtain first voice information.
By way of example, the first voice information may include: instructions such as singing, head lifting, dancing and the like sent by a user.
S1012, acquiring light sensing information acquired by the light sensing sensor, wherein the light sensing information is used for determining action instructions of a user.
In this embodiment, the light sensing sensor generates light sensing information through light change, and the light sensing information may be a level signal. The light sensing information can identify actions made by the user, such as a waving action of the user, and the like.
S1013, acquiring the radio frequency information acquired by the radio frequency sensor.
In this embodiment, the radio-frequency sensor is mainly used to read the information stored in the radio-frequency chip of any object equipped with such a chip; in this application, the information collected by the radio-frequency sensor is referred to as radio-frequency information. For example, the radio-frequency sensor can obtain the content of a book by scanning the radio-frequency chip in the book, and the robot can then display or play that content through its audio/video module.
S1014, acquiring first video information acquired by the camera.
In this embodiment, the camera is mainly used for collecting video information around the robot, and whether a person, a book, an obstacle, or the like is around the robot can be determined through the video information.
S1015, acquiring first touch information acquired by the touch sensor.
In this embodiment, the touch sensor is mainly used for collecting the touch of the user to the robot, and the robot can make corresponding actions according to the touch information, for example, the user touches the head of the robot, and the robot can make a shy expression, etc.
In one possible implementation, the performance information includes motion position information of a component associated with the limit sensor on the robot collected by the limit sensor, a current signal of a power supply in the robot collected by the current sensor, and acceleration information of the robot collected by the acceleration sensor.
Specifically, the implementation procedure of step S101 may include:
and acquiring action position information of a part, which is acquired by the limit sensor and is associated with the limit sensor, of the robot.
And acquiring a current signal of a power supply in the robot, which is acquired by the current sensor.
And acquiring acceleration information of the robot acquired by the acceleration sensor.
In this embodiment, the limit sensor may include a first limit sensor disposed at the head of the robot for collecting the movement amplitude of the head of the robot, a second limit sensor disposed at the arm of the robot for collecting the movement amplitude of the arm of the robot, a third limit sensor disposed at the leg of the robot for collecting the movement amplitude of the leg of the robot, and so on.
As an example, the first limit sensor provided at the robot's head may collect the angle through which the head moves leftward, rightward, upward, or downward.
In this embodiment, the current signal of the power supply in the robot reflects whether the motor in the robot can operate normally: if the current of the power supply is greater than a preset current, the motor current is too large, which damages the motor to some extent and indicates that the motor is in an abnormal operating state. A corresponding motion strategy and/or interaction strategy may be determined from the current signal.
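A minimal sketch of this current check; the threshold value and the strategy strings are hypothetical.

```python
PRESET_CURRENT = 2.5  # amperes; hypothetical threshold

def check_motor_current(supply_current: float) -> dict:
    """Derive a motion and/or interaction strategy from the power-supply current signal."""
    if supply_current > PRESET_CURRENT:
        # motor current too large -> abnormal operating state
        return {"motion": "stop",
                "interaction": "announce: motor current is abnormal, pausing motion"}
    return {}  # normal operation, no strategy change needed
```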
In this embodiment, the acceleration information reflects the motion of the robot at the current moment, and the motion strategy of the robot may be re-determined according to the acceleration, the surrounding environment information, and so on.
In one possible implementation manner, the environmental information includes temperature information of the environment where the robot is located, which is acquired by the temperature sensor, first obstacle information of the environment where the robot is located, which is acquired by the infrared sensor, and second obstacle information of the environment where the robot is located, which is acquired by the ultrasonic sensor.
Specifically, the implementation procedure of step S101 may include:
and acquiring temperature information of the environment where the robot is located, wherein the temperature information is acquired by the temperature sensor.
And acquiring first obstacle information of the environment where the robot is located, wherein the first obstacle information is acquired by the infrared sensor.
And acquiring second obstacle information of the environment where the robot is located, which is acquired by the ultrasonic sensor.
In this embodiment, the temperature information collected by the temperature sensor can be used to generate interaction information when the user asks for the current temperature, so as to meet the user's need.
In this embodiment, both the infrared sensor and the ultrasonic sensor can detect whether an obstacle is present around the robot, so that the robot can avoid the obstacle, interact with the user, and/or generate a motion strategy.
As shown in fig. 3, in one possible implementation, the implementation procedure of step S102 may include:
s1021, carrying out feature extraction and feature fusion on the man-machine interaction information to obtain first fusion data.
Specifically, the man-machine interaction information is input into a feature extraction model, and feature extraction is carried out on the man-machine interaction information to obtain feature data. After the feature data is obtained, the feature data can be input into a first feature fusion model to perform data combination and data fusion, so that first fusion data is obtained.
And S1022, performing feature extraction and feature fusion on the performance information and the environment information to obtain second fusion data.
Specifically, the performance information is input into a feature extraction model, and feature extraction is performed on the performance information to obtain first feature data. And inputting the environmental information into a feature extraction model, and carrying out feature extraction on the environmental information to obtain second feature data. After the first feature data and the second feature data are obtained, the first feature data and the second feature data can be input into a second feature fusion model to be subjected to data combination and data fusion, and second fusion data are obtained.
Specifically, the performance information and the environmental information can be input into the feature extraction model to perform feature extraction to obtain third feature data, and the third feature data is input into the feature fusion model to obtain second fusion data.
S1023, obtaining a control strategy of the robot based on the first fusion data and the second fusion data.
In this embodiment, the first fusion data and the second fusion data are input into the trained neural network model, so as to obtain a control strategy of the robot.
In the embodiment of the application, the control strategy is obtained by fusing the information acquired by the plurality of sensors, so that the intelligent degree of the robot can be improved.
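An illustrative sketch of the two-branch processing in steps S1021 to S1023, assuming a PyTorch-style model; the layer sizes and layer types are assumptions, since the disclosure only specifies that interaction features and performance-plus-environment features are separately extracted and fused before being combined into a control strategy.

```python
import torch
import torch.nn as nn

class FusionPolicyNet(nn.Module):
    """Two-branch feature extraction and fusion producing a control strategy."""

    def __init__(self, d_inter=64, d_perf_env=64, d_hidden=128, d_strategy=32):
        super().__init__()
        self.inter_extract = nn.Linear(d_inter, d_hidden)        # feature extraction (interaction)
        self.perf_env_extract = nn.Linear(d_perf_env, d_hidden)  # feature extraction (performance + environment)
        self.fuse1 = nn.Linear(d_hidden, d_hidden)               # -> first fusion data
        self.fuse2 = nn.Linear(d_hidden, d_hidden)               # -> second fusion data
        self.policy_head = nn.Linear(2 * d_hidden, d_strategy)   # trained model producing the control strategy

    def forward(self, interaction_feat, perf_env_feat):
        first_fusion = torch.relu(self.fuse1(torch.relu(self.inter_extract(interaction_feat))))
        second_fusion = torch.relu(self.fuse2(torch.relu(self.perf_env_extract(perf_env_feat))))
        combined = torch.cat([first_fusion, second_fusion], dim=-1)
        return self.policy_head(combined)
```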
Optionally, the implementation procedure of step S102 may include:
performing feature extraction and feature fusion on the man-machine interaction information to obtain first fusion data;
performing feature extraction and feature fusion on the performance information to obtain third fusion data;
performing feature extraction and feature fusion on the environment information to obtain fourth fusion data;
and obtaining a control strategy of the robot based on the first fusion data, the third fusion data and the fourth fusion data.
Optionally, the implementation procedure of step S102 may include:
performing feature extraction and feature fusion on the man-machine interaction information and the performance information to obtain fifth fusion data;
extracting features and fusing the features of the environment information to obtain sixth fused data;
And obtaining a control strategy of the robot based on the fifth fusion data and the sixth fusion data.
Optionally, the implementation procedure of step S102 may include:
performing feature extraction and feature fusion on the man-machine interaction information and the environment information to obtain seventh fusion data;
performing feature extraction and feature fusion on the performance information to obtain eighth fusion data;
and obtaining a control strategy of the robot based on the seventh fusion data and the eighth fusion data.
As shown in fig. 4, in one possible implementation, step S101 may further include:
s201, acquiring second voice information acquired by the microphone array.
In this embodiment, after the robot is turned on, the microphone array collects the voice information sent by the user in real time.
S202, based on a preset wake-up instruction, determining whether the second voice information is matched with the wake-up instruction.
In this embodiment, after the second voice information is obtained, denoising processing may be performed on the voice information, so as to extract key information of the voice information. And matching the key information of the second voice information with a preset wake-up instruction, and determining whether the robot needs to be switched from the dormant state to the working state. The preset wake-up instruction can be set according to the requirement.
And S203, if the second voice information is matched with the wake-up instruction, acquiring the first data acquired by the sensor module.
In this embodiment, if the second voice information matches with the wake-up instruction, the robot is switched from the sleep state to the working state. When the robot is in a working state, first data acquired by the sensor module can be acquired, and a control strategy is determined according to the first data.
For example, if the key information of the second voice information is "Xiao Q, Xiao Q" and the preset wake-up instruction is "Xiao Q, Xiao Q", it is determined that the second voice information matches the preset wake-up instruction.
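A minimal sketch of the wake-up matching in steps S201 to S203, assuming the denoised key information is already available as text; the speech front end is not shown, the wake word is only the example above, and the `robot` interface is hypothetical.

```python
WAKE_WORD = "xiao q xiao q"  # preset wake-up instruction, set as needed

def matches_wake_word(key_info: str) -> bool:
    """Return True if the key information of the second voice info matches the wake word."""
    return key_info.strip().lower() == WAKE_WORD

def on_second_voice(key_info: str, robot):
    """Switch to the working state and acquire first data when the wake word matches."""
    if matches_wake_word(key_info):
        robot.set_state("working")                # switch from sleep state to working state
        return robot.sensors.read_first_data()    # then acquire the first data
    return None                                   # stay in the sleep state
```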
As shown in fig. 5, in one possible implementation, step S101 may further include:
s301, second touch information acquired by the touch sensor is acquired, and the first duration of the second touch information is determined.
In this embodiment, after the robot is turned on, the touch sensor collects touch information of the user in real time.
S302, if the first duration is longer than a first preset time, acquiring first data acquired by the sensor module.
In this embodiment, after the second touch information is acquired, the touch duration of the user may be calculated, where the touch duration is denoted as the first duration in this application. When the touch time is longer than the first preset time, the robot can be switched from the dormant state to the working state. The first preset time may be set as needed, for example, 4 seconds, 5 seconds, 6 seconds, or the like.
In this embodiment, if the first duration is less than or equal to the first preset time, the first data collected by the sensor module are not acquired, and the touch sensor in the robot continues to monitor touch information. This prevents the robot from being started when a user touches it by mistake.
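A sketch of the touch-duration check in steps S301 and S302; the threshold follows the 4 to 6 second example above, and the `robot` interface is again hypothetical.

```python
import time

FIRST_PRESET_TIME = 5.0  # seconds; e.g. 4, 5 or 6 seconds, set as needed

class TouchWakeup:
    """Wake the robot only after a sustained touch, ignoring accidental touches."""

    def __init__(self):
        self._touch_start = None

    def update(self, touched: bool, robot):
        now = time.monotonic()
        if not touched:
            self._touch_start = None              # too short: likely an accidental touch
            return None
        if self._touch_start is None:
            self._touch_start = now               # second touch information begins
        elif now - self._touch_start > FIRST_PRESET_TIME:
            robot.set_state("working")            # first duration exceeds the first preset time
            return robot.sensors.read_first_data()
        return None
```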
As shown in fig. 6, in one possible implementation, step S101 may further include:
s401, acquiring second video information acquired by the camera, and determining whether face information exists in the second video information.
In this embodiment, after the robot is turned on, the camera collects video information around the robot in real time; in this application, this video information is referred to as the second video information. After acquiring the second video information, the processor analyzes it to determine whether face information exists in it, that is, whether a user is near the robot.
Specifically, the processor may input the second video information into the detection model to determine whether the face information exists in the second video information.
And S402, if the face information exists in the second video information, determining a second duration time when the face information exists in the second video information.
In this embodiment, if the face information exists in the second video information, the duration of the face information may be calculated so as to exclude a case where the user simply passes through the location, but does not want to interact with the robot.
S403, if the second duration is longer than a second preset time, acquiring the first data acquired by the sensor module.
In this embodiment, if the duration of the face information is longer than the second preset time, it may be determined that the user wants to interact with the robot, and the robot may be switched from the sleep state to the operation state. The second preset time may be set as needed.
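A comparable sketch for steps S401 to S403; the face detector and the `robot` interface are assumptions, and the second preset time is a configurable value.

```python
import time

SECOND_PRESET_TIME = 3.0  # seconds; set as needed

class FaceWakeup:
    """Wake the robot only when a face stays in view longer than the second preset time."""

    def __init__(self, face_detector):
        self.detect = face_detector   # any callable returning True if a face is in the frame
        self._face_since = None

    def update(self, frame, robot):
        now = time.monotonic()
        if not self.detect(frame):
            self._face_since = None               # the user merely passed by
            return None
        if self._face_since is None:
            self._face_since = now                # face information appears in the second video info
        elif now - self._face_since > SECOND_PRESET_TIME:
            robot.set_state("working")            # the user likely wants to interact
            return robot.sensors.read_first_data()
        return None
```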
In the embodiments of this application, trigger conditions are set for the robot to acquire the first data, and the first data are acquired only when a trigger condition is met. This prevents misoperation of the robot, reduces the amount of data processing in the robot, and prolongs the service life of the robot.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation process of the embodiments of this application.
Corresponding to the control method of the robot described in the above embodiments, the present application provides a robot, which includes a sensor module and an information processing module.
Referring to fig. 7, the information processing module 500 may include: a data acquisition module 510, a policy determination module 520, and a control module 530.
The data acquisition module 510 is configured to acquire first data acquired by the sensor module in the robot, where the first data includes man-machine interaction information, performance information of the robot, and environmental information of an environment where the robot is located;
a policy determining module 520, configured to obtain a control policy of the robot based on the first data, where the control policy includes at least one of an interaction policy, an obstacle avoidance policy, and a motion policy;
and a control module 530, configured to control the robot action based on the control strategy.
In one possible implementation, the sensor module includes a microphone array, a light sensing sensor, a radio frequency sensor, a camera, and a touch sensor;
the man-machine interaction information comprises first voice information collected by the microphone array, light induction information collected by the light induction sensor, radio frequency information collected by the radio frequency sensor, first video information collected by the camera and first touch information collected by the touch sensor;
The light sensing information is used for determining action instructions of a user.
In one possible implementation, the sensor module includes: the device comprises a limit sensor, a current sensor, an acceleration sensor, a temperature sensor, an infrared sensor and an ultrasonic sensor;
the performance information comprises action position information of a part, which is collected by the limit sensor and is associated with the limit sensor, of the robot, current signals of a power supply in the robot, which are collected by the current sensor, and acceleration information of the robot, which is collected by the acceleration sensor;
the environment information comprises temperature information of the environment where the robot is located, collected by the temperature sensor, first obstacle information of the environment where the robot is located, collected by the infrared sensor, and second obstacle information of the environment where the robot is located, collected by the ultrasonic sensor.
In one possible implementation, the policy determination module 520 may be specifically configured to:
performing feature extraction and feature fusion on the man-machine interaction information to obtain first fusion data;
performing feature extraction and feature fusion on the performance information and the environment information to obtain second fusion data;
And obtaining a control strategy of the robot based on the first fusion data and the second fusion data.
In one possible implementation, the data acquisition module 510 may be specifically configured to:
acquiring second voice information acquired by the microphone array;
determining whether the second voice information is matched with a wake-up instruction or not based on the preset wake-up instruction;
and if the second voice information is matched with the wake-up instruction, acquiring the first data acquired by the sensor module.
In one possible implementation, the data acquisition module 510 may be specifically configured to:
acquiring second touch information acquired by the touch sensor, and determining a first duration of the second touch information;
and if the first duration is longer than a first preset time, acquiring first data acquired by the sensor module.
In one possible implementation, the data acquisition module 510 may be specifically configured to:
acquiring second video information acquired by the camera, and determining whether face information exists in the second video information;
if the face information exists in the second video information, determining a second duration time when the face information exists in the second video information;
And if the second duration is longer than a second preset time, acquiring the first data acquired by the sensor module.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The embodiment of the present application also provides a robot, referring to fig. 8, the robot 700 may include: the sensor module, the at least one processor 710, the memory 720 and the computer program stored in the memory 720 and executable on the at least one processor 710, the processor 710 implementing the steps of any of the various method embodiments described above, such as steps S101 to S103 in the embodiment shown in fig. 2, when the computer program is executed. Alternatively, the processor 710 may perform the functions of the modules/units in the above-described apparatus embodiments, such as the functions of the modules 510 to 530 shown in fig. 7, when executing the computer program.
The sensor module includes: microphone array, light induction sensor, radio frequency sensor, camera, touch sensor, limit sensor, current sensor, acceleration sensor, temperature sensor, infrared sensor and ultrasonic sensor; the microphone array is used for collecting first voice information, the light induction sensor is used for collecting light induction information, the radio frequency sensor is used for collecting radio frequency information, the camera is used for collecting first video information, and the touch sensor is used for collecting first touch information; the limiting sensor is used for collecting action position information of a part, which is associated with the limiting sensor, on the robot, the current sensor is used for collecting current signals of a power supply in the robot, and the acceleration sensor is used for collecting acceleration information of the robot; the temperature sensor is used for acquiring temperature information of the environment where the robot is located, the infrared sensor is used for acquiring first obstacle information of the environment where the robot is located, and the ultrasonic sensor is used for acquiring second obstacle information of the environment where the robot is located.
By way of example, a computer program may be partitioned into one or more modules/units that are stored in memory 720 and executed by processor 710 to complete the present application. The one or more modules/units may be a series of computer program segments capable of performing specific functions for describing the execution of the computer program in the robot 700.
It will be appreciated by those skilled in the art that fig. 8 is merely an example of a robot and is not limiting of the robot and may include more or fewer components than shown, or may combine certain components, or different components, such as input and output devices, network access devices, buses, etc.
The processor 710 may be a central processing unit (Central Processing Unit, CPU), or may be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 720 may be an internal memory unit of the robot, or may be an external memory device of the robot, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card), or the like. The memory 720 is used for storing the computer program as well as other programs and data required by the robot. The memory 720 may also be used to temporarily store data that has been output or is to be output.
The bus may be an industry standard architecture (Industry Standard Architecture, ISA) bus, a peripheral component interconnect (Peripheral Component Interconnect, PCI) bus, an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. Buses may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the buses in the drawings of this application are not limited to only one bus or one type of bus.
The control method of the robot provided by the embodiment of the application can be applied to terminal equipment such as computers, tablet computers, notebook computers, netbooks, personal digital assistants (personal digital assistant, PDA) and the like, and the specific type of the terminal equipment is not limited.
In the foregoing embodiments, each embodiment is described with its own emphasis. For a part that is not described or detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed terminal device, apparatus and method may be implemented in other manners. For example, the above-described embodiments of the terminal device are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above-described embodiments, or may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by one or more processors, the computer program may implement the steps of each of the method embodiments described above.
This application also provides a computer program product; when the computer program product runs on a terminal device, the terminal device is caused to perform the steps of the method embodiments described above.
The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content included in the computer-readable medium may be added to or removed from as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.
Claims (10)
1. A control method of a robot, comprising:
acquiring first data acquired by a sensor module in the robot, wherein the first data comprises man-machine interaction information, performance information of the robot and environment information of the environment where the robot is located, and the man-machine interaction information comprises a motion instruction sent by a user;
based on the first data, a control strategy of the robot is obtained, wherein the control strategy comprises at least one of an interaction strategy, an obstacle avoidance strategy and a motion strategy;
controlling the robot action based on the control strategy, specifically, inputting first data into a neural network model to obtain a control strategy of the robot;
The training process of the neural network model comprises the following steps: acquiring training parameters, wherein the training parameters comprise man-machine interaction information, performance information of the robot and environment information of the environment where the robot is located; inputting training parameters into a neural network model to be trained to obtain training result data, comparing the training result data with preset result data to obtain difference data, adjusting parameters in the neural network model according to the difference data, and training the neural network model with the parameters adjusted by using the training parameters until the training result data output by the neural network model meets preset requirements to obtain a trained neural network model.
2. The method of controlling a robot of claim 1, wherein the sensor module comprises a microphone array, a light sensing sensor, a radio frequency sensor, a camera, and a touch sensor;
the man-machine interaction information comprises first voice information collected by the microphone array, light induction information collected by the light induction sensor, radio frequency information collected by the radio frequency sensor, first video information collected by the camera and first touch information collected by the touch sensor;
The light sensing information is used for determining action instructions of a user.
3. The method of controlling a robot according to claim 2, wherein the sensor module includes: the device comprises a limit sensor, a current sensor, an acceleration sensor, a temperature sensor, an infrared sensor and an ultrasonic sensor;
the performance information comprises action position information of a part, which is collected by the limit sensor and is associated with the limit sensor, of the robot, current signals of a power supply in the robot, which are collected by the current sensor, and acceleration information of the robot, which is collected by the acceleration sensor;
the environment information comprises temperature information of the environment where the robot is located, collected by the temperature sensor, first obstacle information of the environment where the robot is located, collected by the infrared sensor, and second obstacle information of the environment where the robot is located, collected by the ultrasonic sensor.
4. A control method of a robot according to any one of claims 1 to 3, characterized in that based on the first data, a control strategy of the robot is obtained, comprising:
performing feature extraction and feature fusion on the man-machine interaction information to obtain first fusion data;
Performing feature extraction and feature fusion on the performance information and the environment information to obtain second fusion data;
and obtaining a control strategy of the robot based on the first fusion data and the second fusion data.
5. The method for controlling a robot according to claim 2, wherein the acquiring the first data acquired by the sensor module in the robot includes:
acquiring second voice information acquired by the microphone array;
determining whether the second voice information is matched with a wake-up instruction or not based on the preset wake-up instruction;
and if the second voice information is matched with the wake-up instruction, acquiring the first data acquired by the sensor module.
6. The method of controlling a robot according to claim 2, wherein the acquiring the first data collected by the sensor module in the robot comprises:
acquiring second touch information collected by the touch sensor, and determining a first duration of the second touch information;
and if the first duration is longer than a first preset time, acquiring the first data collected by the sensor module.
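Claim 6 uses touch duration as the trigger. A small sketch follows; the threshold value and the timing source are assumptions.

```python
# Touch-duration gate; the threshold and timing helper are assumptions.
import time

FIRST_PRESET_TIME = 2.0  # seconds (assumed threshold)

def touch_gate(touch_started_at: float, acquire_first_data):
    """Trigger acquisition once the touch has lasted longer than the preset time."""
    first_duration = time.monotonic() - touch_started_at
    if first_duration > FIRST_PRESET_TIME:
        return acquire_first_data()
    return None
```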
7. The method of controlling a robot according to claim 2, wherein the acquiring the first data collected by the sensor module in the robot comprises:
acquiring second video information collected by the camera, and determining whether face information exists in the second video information;
if face information exists in the second video information, determining a second duration for which the face information exists in the second video information;
and if the second duration is longer than a second preset time, acquiring the first data collected by the sensor module.
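Claim 7 triggers on sustained face presence in the video stream. In the sketch below, face detection is stubbed with OpenCV's Haar cascade, which is purely an assumption (the claim does not name a detector), and the threshold is illustrative.

```python
# Face-presence gate; the detector choice and threshold are assumptions.
import cv2

SECOND_PRESET_TIME = 1.5  # seconds (assumed threshold)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_gate(frames, acquire_first_data):
    """frames: iterable of (timestamp_seconds, BGR image) from the second video."""
    face_since = None
    for ts, frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if len(detector.detectMultiScale(gray)) == 0:
            face_since = None                  # face lost, restart the timer
            continue
        face_since = ts if face_since is None else face_since
        if ts - face_since > SECOND_PRESET_TIME:
            return acquire_first_data()        # second duration exceeded
    return None
```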
8. A robot comprising a sensor module, a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method of controlling a robot according to any one of claims 1 to 7 when executing the computer program.
9. The robot of claim 8, wherein the sensor module comprises: a microphone array, a light-sensing sensor, a radio frequency sensor, a camera, a touch sensor, a limit sensor, a current sensor, an acceleration sensor, a temperature sensor, an infrared sensor and an ultrasonic sensor;
the microphone array is used for collecting first voice information, the light-sensing sensor is used for collecting light-sensing information, the radio frequency sensor is used for collecting radio frequency information, the camera is used for collecting first video information, and the touch sensor is used for collecting first touch information;
the limit sensor is used for collecting action position information of a part of the robot associated with the limit sensor, the current sensor is used for collecting current signals of a power supply in the robot, and the acceleration sensor is used for collecting acceleration information of the robot;
the temperature sensor is used for acquiring temperature information of the environment where the robot is located, the infrared sensor is used for acquiring first obstacle information of the environment where the robot is located, and the ultrasonic sensor is used for acquiring second obstacle information of the environment where the robot is located.
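Read together, claims 8 and 9 describe a processor that repeatedly reads the sensor module, groups the readings, and acts on the resulting strategy. A loose end-to-end sketch follows; the sensor names, the stand-in read_sensor driver, and the decide_strategy/execute callbacks are all hypothetical.

```python
# Loose polling sketch for the robot of claims 8-9; read_sensor,
# decide_strategy and execute are hypothetical stand-ins for real drivers.
import random
import time

SENSOR_GROUPS = {
    "interaction": ["microphone_array", "light_sensing", "rf", "camera", "touch"],
    "performance": ["limit", "current", "acceleration"],
    "environment": ["temperature", "infrared", "ultrasonic"],
}

def read_sensor(name: str) -> float:
    """Stand-in for a hardware driver call; returns a dummy reading."""
    return random.random()

def control_loop(decide_strategy, execute, cycles=10, period_s=0.1):
    for _ in range(cycles):
        first_data = {group: {s: read_sensor(s) for s in names}
                      for group, names in SENSOR_GROUPS.items()}
        execute(decide_strategy(first_data))   # control the robot's action
        time.sleep(period_s)
```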
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method of controlling a robot according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111518577.8A CN114211486B (en) | 2021-12-13 | 2021-12-13 | Robot control method, robot and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114211486A CN114211486A (en) | 2022-03-22 |
CN114211486B true CN114211486B (en) | 2024-03-22 |
Family
ID=80701295
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111518577.8A Active CN114211486B (en) | 2021-12-13 | 2021-12-13 | Robot control method, robot and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114211486B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118012031A (en) * | 2022-10-28 | 2024-05-10 | 苏州科瓴精密机械科技有限公司 | Control method and device of robot, robot and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103064416A (en) * | 2012-12-10 | 2013-04-24 | 江西洪都航空工业集团有限责任公司 | Indoor and outdoor autonomous navigation system for inspection robot |
CN104216505A (en) * | 2013-05-29 | 2014-12-17 | 腾讯科技(深圳)有限公司 | Control method and device of portable intelligent terminal |
WO2017157302A1 (en) * | 2016-03-17 | 2017-09-21 | 北京贝虎机器人技术有限公司 | Robot |
CN109739223A (en) * | 2018-12-17 | 2019-05-10 | 中国科学院深圳先进技术研究院 | Robot obstacle-avoiding control method, device and terminal device |
CN110154056A (en) * | 2019-06-17 | 2019-08-23 | 常州摩本智能科技有限公司 | Service robot and its man-machine interaction method |
CN210433409U (en) * | 2019-05-22 | 2020-05-01 | 合肥师范学院 | Robot of sweeping floor with speech control |
WO2021232933A1 (en) * | 2020-05-19 | 2021-11-25 | 华为技术有限公司 | Safety protection method and apparatus for robot, and robot |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11016729B2 (en) * | 2017-11-08 | 2021-05-25 | International Business Machines Corporation | Sensor fusion service to enhance human computer interactions |
US10969763B2 (en) * | 2018-08-07 | 2021-04-06 | Embodied, Inc. | Systems and methods to adapt and optimize human-machine interaction using multimodal user-feedback |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||