CN114488879A - Robot control method and robot

Robot control method and robot

Info

Publication number
CN114488879A
CN114488879A
Authority
CN
China
Prior art keywords
behavior information
information
target user
associated behavior
target object
Prior art date
Legal status
Granted
Application number
CN202111653094.9A
Other languages
Chinese (zh)
Other versions
CN114488879B (en)
Inventor
Wang Chenyang (王晨阳)
Current Assignee
Shenzhen Pengxing Intelligent Research Co Ltd
Original Assignee
Shenzhen Pengxing Intelligent Research Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Pengxing Intelligent Research Co Ltd
Priority to CN202111653094.9A
Priority claimed from CN202111653094.9A
Publication of CN114488879A
Application granted
Publication of CN114488879B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00 Programme-control systems
    • G05B 19/02 Programme-control systems electric
    • G05B 19/04 Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B 19/042 Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • G05B 19/0423 Input/output
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B62 LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62D MOTOR VEHICLES; TRAILERS
    • B62D 57/00 Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track
    • B62D 57/02 Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, with ground-engaging propulsion means, e.g. walking members
    • B62D 57/032 Vehicles with ground-engaging propulsion means, e.g. walking members, with alternately or sequentially lifted supporting base and legs; with alternately or sequentially lifted feet or skid
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 2219/00 Program-control systems
    • G05B 2219/20 Pc systems
    • G05B 2219/25 Pc structure of the system
    • G05B 2219/25257 Microcontroller


Abstract

The embodiments of the application disclose a robot control method and a robot for enhancing the interactivity between the robot and a user, thereby improving the user experience. The method of the embodiments is applied to a robot control system and comprises the following steps: acquiring current behavior information of a target user; searching a database for associated behavior information matching the current behavior information of the target user, in combination with current time information and/or current position information of the target user; if at least one piece of associated behavior information is found and does not include a corresponding target object to be executed, recommending the associated behavior information to the target user; and if at least one piece of associated behavior information is found and includes a corresponding target object to be executed, searching the acquired environment image information for that target object and, if it is found, recommending the associated behavior information to the target user.

Description

Robot control method and robot
Technical Field
The embodiment of the application relates to the technical field of robot control, in particular to a robot control method and a robot.
Background
An industrial robot arm is an electromechanical device that reproduces the functions of a human arm, wrist and hand. With continuing technological development, robot arms used in industrial production have gradually spread in the form of collaborative robot arms. Compared with traditional industrial robot arms, collaborative robot arms are compact, highly flexible and easy to install; they are mainly used for human-machine collaboration and can complete a piece of work together with a person.
A collaborative robot arm mounted on a robot assists a user in completing tasks by manipulating household articles or tools. In the prior art, the user must enter an operation instruction into the robot control system to make the robot perform daily tasks semi-autonomously or autonomously, and the control system cannot derive other related operation schemes from the instruction entered by the user for the user to choose from.
At present, while the robot control system controls the robot to execute an operation instruction entered by the user, it cannot feed back or adjust the operation scheme according to data such as the scene, the time and the articles used in the specific application, so the data loop is not fully closed, and the control system cannot output or recommend associated behavior information to the user based on the user's current behavior or position information, which degrades the user experience.
Disclosure of Invention
The embodiments of the application provide a robot control method and a robot that determine the user's needs from the user's current behavior information and output/recommend associated behavior information to the user, enhancing the interactivity between the robot and the user and thereby improving the user experience.
The present application provides, from a first aspect, a robot control method applied to a robot control system, including:
acquiring current behavior information of a target user;
searching associated behavior information matched with the current behavior information of the target user from a database according to the current behavior information of the target user in combination with current time information and/or current position information of the target user, wherein the associated behavior information comprises associated actions and/or a target object to be executed;
if at least one piece of associated behavior information is searched, and the at least one piece of associated behavior information does not comprise a corresponding target object to be executed, recommending the associated behavior information to the target user;
if at least one piece of associated behavior information is searched, and the at least one piece of associated behavior information comprises a corresponding target object to be executed, searching the target object to be executed from the acquired environment image information, and if the target object to be executed is searched, recommending the associated behavior information to the target user.
The present application provides, from a second aspect, a robot comprising:
the first acquisition unit is used for acquiring the current behavior information of a target user;
the first searching unit is used for searching associated behavior information matched with the current behavior information of the target user from a database according to the current behavior information of the target user in combination with current time information and/or current position information of the target user, wherein the associated behavior information comprises associated actions and/or a target object to be executed;
the first execution unit is used for recommending the associated behavior information to the target user when the first search unit searches at least one piece of associated behavior information and the at least one piece of associated behavior information does not comprise a corresponding target object to be executed;
the first execution unit is further configured to search the to-be-executed target object from the acquired environment image information when the first search unit searches for at least one piece of associated behavior information and the at least one piece of associated behavior information includes a corresponding to-be-executed target object, and recommend the associated behavior information to the target user if the to-be-executed target object is searched.
According to the technical scheme, the embodiment of the application has the following advantages:
the method comprises the steps of firstly obtaining current behavior information of a target user, searching from a behavior database according to the current behavior information, the current time information and/or the current position information, and recommending associated behavior information to the target user according to the searched associated behavior information when the associated behavior information matched with the current behavior information of the target user is searched. According to the method and the device, the technical means of searching the relevant behavior information matched with the relevant information in the database and the technical means of recommending the corresponding relevant behavior information to the user are adopted, so that the relevant behavior information is output/recommended to the user through the current behavior or position information of the user, the interaction between the robot and the user is enhanced, and the use experience of the user is improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a hardware structure of a quadruped robot provided by the present application;
fig. 2 is a schematic diagram of a mechanical structure of the quadruped robot provided by the present application;
fig. 3 is a schematic flowchart of an embodiment of a robot control method provided in the present application;
fig. 4 is a schematic flowchart of another embodiment of a robot control method provided in the present application;
fig. 5 is a schematic flowchart of another embodiment of a robot control method provided in the present application;
FIG. 6 is a schematic structural diagram of one embodiment of a robot provided by the present application;
FIG. 7 is a schematic structural diagram of another embodiment of a robot provided in the present application;
FIG. 8 is a schematic structural diagram of an embodiment of a robot control apparatus provided in the present application;
fig. 9 is a data structure diagram of an object recognition dictionary provided in the present application.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote components are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
Referring to fig. 1, fig. 1 is a schematic diagram of a hardware structure of a multi-legged robot 100 according to an embodiment of the present invention. In the embodiment shown in fig. 1, the multi-legged robot 100 includes a mechanical unit 101, a communication unit 102, a sensing unit 103, an interface unit 104, a storage unit 105, a control module 110, and a power supply 111. The various components of the multi-legged robot 100 can be connected in any manner, including wired or wireless connections. Those skilled in the art will appreciate that the specific structure of the multi-legged robot 100 shown in fig. 1 does not limit the multi-legged robot 100: the multi-legged robot 100 may include more or fewer components than those shown, some components are not essential to the multi-legged robot 100, and some components may be omitted or combined as necessary without changing the essence of the invention.
The following describes the components of the multi-legged robot 100 in detail with reference to fig. 1:
the mechanical unit 101 is the hardware of the multi-legged robot 100. As shown in fig. 1, the machine unit 101 may include a drive plate 1011, a motor 1012, a machine structure 1013, as shown in fig. 2, the machine structure 1013 may include a body 1014, extendable legs 1015, feet 1016, and in other embodiments, the machine structure 1013 may further include extendable robotic arms, a rotatable head structure, a swingable tail structure, a carrying structure, a saddle structure, a camera structure, etc. It should be noted that each component module of the mechanical unit 101 may be one or multiple, and may be configured according to specific situations, for example, the number of the legs 1015 may be 4, each leg 1015 may be configured with 3 motors 1012, and the number of the corresponding motors 1012 is 12.
The communication unit 102 can be used to receive and transmit signals and can also communicate with other devices through a network; for example, it can receive command information sent by a remote controller or another multi-legged robot 100 to move in a specific direction at a specific speed with a specific gait, and transmit the command information to the control module 110 for processing. The communication unit 102 includes, for example, a WiFi module, a 4G module, a 5G module, a Bluetooth module, and an infrared module.
The sensing unit 103 is used to acquire information about the environment around the multi-legged robot 100 and to monitor parameter data of the components inside the multi-legged robot 100, and sends the data to the control module 110. The sensing unit 103 includes various sensors. Sensors for acquiring surrounding environment information include: laser radar (for long-range object detection, distance determination, and/or velocity determination), millimeter wave radar (for short-range object detection, distance determination, and/or velocity determination), a camera, an infrared camera, a Global Navigation Satellite System (GNSS), and the like. Sensors for monitoring the components inside the multi-legged robot 100 include: an Inertial Measurement Unit (IMU) (for measuring velocity, acceleration and angular velocity values), sole sensors (for monitoring the sole impact point position, sole attitude, and the magnitude and direction of the ground contact force), and temperature sensors (for detecting component temperatures). Other sensors that can be configured in the multi-legged robot 100, such as load sensors, touch sensors, motor angle sensors, and torque sensors, are not described in detail here.
The interface unit 104 can be used to receive inputs (e.g., data information, power, etc.) from external devices and transmit them to one or more components within the multi-legged robot 100, or to output data or power to external devices. The interface unit 104 may include a power port, a data port (e.g., a USB port), a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, and the like.
The storage unit 105 is used to store software programs and various data. The storage unit 105 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system program, a motion control program, an application program (such as a text editor), and the like; the data storage area may store data generated by the multi-legged robot 100 in use (such as the various sensing data acquired by the sensing unit 103 and log file data), and the like. In addition, the storage unit 105 may include high-speed random access memory, and may also include non-volatile memory such as disk memory, flash memory, or other non-volatile solid-state memory.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The input unit 107 may be used to receive input numeric or character information. Specifically, the input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, can collect the user's touch operations (such as operations on or near the touch panel 1071 using a palm, a finger, or a suitable accessory) and drive the corresponding connection device according to a preset program. The touch panel 1071 may include a touch detection device 1073 and a touch controller 1074. The touch detection device 1073 detects the touch position of the user, detects the signal caused by the touch operation, and transmits the signal to the touch controller 1074; the touch controller 1074 receives the touch information from the touch detection device 1073, converts it into touch point coordinates, transmits the coordinates to the control module 110, and can receive and execute commands from the control module 110. In addition to the touch panel 1071, the input unit 107 may include other input devices 1072, which may include, but are not limited to, one or more of a remote control joystick and the like.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the control module 110 to determine the type of the touch event, and then the control module 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 1, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions, respectively, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions, which is not limited herein.
The control module 110 is a control center of the multi-legged robot 100, connects the respective components of the entire multi-legged robot 100 using various interfaces and lines, and performs overall control of the multi-legged robot 100 by operating or executing software programs stored in the storage unit 105 and calling up data stored in the storage unit 105.
The power supply 111 is used to supply power to various components, and the power supply 111 may include a battery and a power supply control board for controlling functions such as battery charging, discharging, and power consumption management. In the embodiment shown in fig. 1, the power source 111 is electrically connected to the control module 110, and in other embodiments, the power source 111 may be electrically connected to the sensing unit 103 (e.g., a camera, a radar, a sound box, etc.) and the motor 1012 respectively. It should be noted that each component may be connected to a different power source 111 or powered by the same power source 111.
On the basis of the above embodiments, in some embodiments a terminal device can establish a communication connection with the multi-legged robot 100. When the terminal device communicates with the multi-legged robot 100, command information can be transmitted to the multi-legged robot 100 through the terminal device; the multi-legged robot 100 receives the command information through the communication unit 102 and transmits it to the control module 110, so that the control module 110 can process a target velocity value according to the command information. Terminal devices include, but are not limited to: mobile phones, tablet computers, servers, personal computers, wearable intelligent devices and other electrical equipment with an image shooting function.
The instruction information may be determined according to preset conditions. In one embodiment, the multi-legged robot 100 can include the sensing unit 103, and the sensing unit 103 can generate instruction information according to the current environment in which the multi-legged robot 100 is located. The control module 110 can determine from the instruction information whether the current velocity value of the multi-legged robot 100 satisfies the corresponding preset condition. If so, the current velocity value and the current gait of the multi-legged robot 100 are maintained; if not, the target velocity value and the corresponding target gait are determined according to the corresponding preset condition, so that the multi-legged robot 100 can be controlled to move at the target velocity value with the corresponding target gait. The environmental sensors may include temperature sensors, air pressure sensors, visual sensors and sound sensors, and the instruction information may accordingly include temperature information, air pressure information, image information and sound information. The communication between the environmental sensors and the control module 110 may be wired or wireless; wireless communication includes, but is not limited to: wireless network, mobile communication network (3G, 4G, 5G, etc.), Bluetooth, and infrared.
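As an illustration of the velocity/gait decision just described, the following Python sketch keeps the current motion when the preset condition is satisfied and otherwise switches to the target velocity and gait; the names, thresholds and gait labels are assumptions for illustration, not values from the patent.

    from dataclasses import dataclass

    @dataclass
    class PresetCondition:
        min_speed: float     # assumed lower bound of acceptable speed, m/s
        max_speed: float     # assumed upper bound
        target_speed: float  # speed to switch to when the condition is not met
        target_gait: str     # gait to switch to when the condition is not met

    def decide_motion(current_speed, current_gait, cond):
        # Condition met: keep the current speed value and current gait.
        if cond.min_speed <= current_speed <= cond.max_speed:
            return current_speed, current_gait
        # Condition not met: move at the target velocity with the target gait.
        return cond.target_speed, cond.target_gait

    # Example: the robot is walking too slowly, so it switches to a trot at 1.0 m/s.
    print(decide_motion(0.2, "walk", PresetCondition(0.5, 1.5, 1.0, "trot")))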
The hardware configuration and the mechanical configuration of the robot according to the present application are explained above, and the robot control method and the robot control device according to the present application are explained below.
In the prior art, a collaborative robot arm mounted on a robot assists a user in completing tasks by manipulating household articles or tools. The user must enter an operation instruction into the robot control system to make the robot perform daily tasks semi-autonomously or autonomously, and the control system cannot derive other related operation schemes from the instruction entered by the user for the user to choose from. Therefore, when the robot control system controls the robot to execute the operation instruction entered by the user, it cannot feed back or adjust the operation scheme according to data such as the scene, the time and the articles used in the specific application, so the data loop is not fully closed, the control system cannot output or recommend associated behavior information to the user based on an identified object selected by the user, and the user experience suffers.
Based on this, the application provides a robot control method and a control device applied to a robot control system, so that the robot control system can output/recommend associated behavior information to the user based on the user's current behavior or position information. For convenience of description, the robot control system is taken as the execution subject in this embodiment.
Referring to fig. 3, fig. 3 provides an embodiment of a robot control method according to the present application, the method including:
301. acquiring current behavior information of a target user;
after receiving a running initiation request from the user, the robot control system can control a camera on the corresponding robot body to acquire an image of the current field of view, determine from the image the objects present in the field of view, such as the body movements of the target user, and determine the current behavior information of the target user by analyzing those body movements. The images can be acquired by an RGBD camera mounted on the robot body.
Specifically, the robot control system controls the camera on the robot, and the captured image contains the real objects in the camera's current field of view. For example, the image acquired by the robot includes objects such as a tool, an apple, a tea table and a television that is playing, as well as the target user sitting on a sofa. From these object characteristics the robot control system can then determine that the current position of the target user is the living room, and further determine that the current behavior information of the target user is: leaning back on the sofa.
302. Searching associated behavior information matched with the current behavior information of the target user from a database according to the current behavior information of the target user in combination with the current time information and/or the current position information of the target user, wherein the associated behavior information comprises associated actions and/or a target object to be executed;
the robot control system has a positioning function; it can locate the region where the robot is in real time and synchronously correct the local time information according to that region. The time information includes time point information or time period information. Time periods can be divided by season, month, week, day, morning, afternoon and so on, or according to periods defined by the user. For example: March, April and May are spring; June, July and August are summer; September, October and November are autumn; December, January and February are winter. The periods within a day can likewise be subdivided in advance, e.g. 1:00-5:00 is before dawn, 5:00-8:00 is early morning, 8:00-11:00 is forenoon, 11:00-13:00 is noon, 13:00-17:00 is afternoon, 17:00-19:00 is evening, 19:00-20:00 is night, and 20:00-24:00 is late night.
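A minimal Python sketch of this time-period division, using the example boundaries above; the labels and the season mapping are taken from this paragraph, and user-defined periods would simply replace these branches.

    def season_of(month):
        # Months 3-5 spring, 6-8 summer, 9-11 autumn, 12/1/2 winter, as in the example above.
        if month in (3, 4, 5):
            return "spring"
        if month in (6, 7, 8):
            return "summer"
        if month in (9, 10, 11):
            return "autumn"
        return "winter"

    def period_of(hour):
        # Map an hour of the day to the example period labels used above.
        if 1 <= hour < 5:
            return "before dawn"
        if 5 <= hour < 8:
            return "early morning"
        if 8 <= hour < 11:
            return "forenoon"
        if 11 <= hour < 13:
            return "noon"
        if 13 <= hour < 17:
            return "afternoon"
        if 17 <= hour < 19:
            return "evening"
        if 19 <= hour < 20:
            return "night"
        return "late night"  # 20:00-24:00 (0:00-1:00 also falls here by default)

    print(season_of(4), period_of(12))  # spring noon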
It should be noted that the current location information of the target user may be determined according to the robot control system itself having a positioning function and/or environmental information acquired by a camera of the robot.
In the embodiment of the application, the database of the target user includes, but is not limited to, records of the behavior data generated when the target user experiences the various services of the robot, the execution times of those behaviors, and analyses of the word characteristics of the behavior data; through a query and matching engine it then supports the control system and the corresponding service work of the robot. As the number of times the robot executes behavior information selected by the target user increases, the control system can continuously refine the database with the newly added behavior information. The data in the database include: the behavior information, the associated behavior information matched with the behavior information, the associated action corresponding to the associated behavior information, the target object to be executed, the operation information corresponding to the target object to be executed, and the auxiliary target object related to that operation information. The target object to be executed and the auxiliary target object may be represented in the database as images, characters, symbols and the like, which is not limited here.
In the embodiment of the present application, the behavior information of the user may include, but is not limited to, sitting, lying, walking, jogging, running, boxing, waving, clapping, and the like, and the location information may include, but is not limited to, outdoor and indoor scenes such as a natatorium, a living room, a kitchen, a bedroom, and the like. The robot control system can search relevant associated behavior information in the database according to the current behavior information of the target user and by combining with the current time information, and further, in order to improve the accuracy of the associated behavior information searched in the database, the robot control system can determine a current application scene by combining with the current position information, so that the associated behavior information is matched according to the application scene, the current time and the current behavior of the target user.
It should be noted that step 302 may be implemented in two orders. The control system may first search the database of the target user for associated behavior information matching the current behavior information and then determine the corresponding target object to be executed; or it may first determine, from the current behavior information, the target object that needs to be acted on and then determine the associated behavior information matched with that target object. The order is not limited. For example, when the acquired current behavior information of the target user is: sitting on the sofa, the current time information is: twelve noon, and the current position information is: the living room, the behavior characteristic "sitting on a sofa" is extracted, and the control system, combining the current time and position information, can match from the database the associated behavior information: eating an apple. It further determines that the target object to be executed is: the apple, and that the operation information corresponding to the target object to be executed is: washing, peeling and cutting into pieces.
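The lookup in step 302 can be pictured with the following Python sketch; the record layout and the example entries are assumptions chosen to mirror the example in the preceding paragraph, not the actual database schema.

    database = [
        {"behavior": "sitting on the sofa", "period": "noon", "location": "living room",
         "associated_action": "eat", "target_object": "apple",
         "operations": ["wash", "peel", "cut into pieces"]},
        {"behavior": "lying down", "period": "late night", "location": "bedroom",
         "associated_action": "turn off the lights", "target_object": None,
         "operations": []},
    ]

    def find_associated(behavior, period=None, location=None):
        # Match on current behavior, then narrow by current time and/or position if given.
        hits = [r for r in database if r["behavior"] == behavior]
        if period is not None:
            hits = [r for r in hits if r["period"] == period]
        if location is not None:
            hits = [r for r in hits if r["location"] == location]
        return hits

    print(find_associated("sitting on the sofa", "noon", "living room"))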
303. If at least one piece of associated behavior information is found and does not include a corresponding target object to be executed, the associated behavior information is recommended to the target user. If at least one piece of associated behavior information is found and includes a corresponding target object to be executed, that target object is searched for in the acquired environment image information, and if it is found, the associated behavior information is recommended to the target user.
In the embodiment of the application, there are two ways to search the acquired environment image information for the target object to be executed. In the first way, the environment image information is acquired, images of all target objects in it are recognized, the target object to be executed is compared with the images of all the target objects, and the positions in the environment of all matched target objects to be executed are determined. In the second way, the target object to be executed is acquired first and is looked for directly in the acquired environment image information; once it is found, no further search is made for other target objects.
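The two strategies can be sketched as follows in Python; detect_objects() is a placeholder for the recognition model discussed later, and the returned label/position pairs are invented for illustration.

    def detect_objects(environment_image):
        # Placeholder for the recognition pipeline; positions are made up for the example.
        return {"apple": (1.2, 0.4), "kitchen knife": (0.8, 0.3), "dish": (0.7, 0.2)}

    def search_all_then_match(environment_image, target_name):
        # Way 1: recognize every object first, then match the target against all of them.
        detected = detect_objects(environment_image)
        return {name: pos for name, pos in detected.items() if name == target_name}

    def search_target_directly(environment_image, target_name):
        # Way 2: look only for the target and stop at the first hit.
        for name, pos in detect_objects(environment_image).items():
            if name == target_name:
                return pos
        return None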
The target object to be executed is the object the robot acts on, and the auxiliary target object is an auxiliary tool for the robot. For example, if the associated behavior information is: the user eats an apple, then the associated action is eating, the target object to be executed is: the apple, the operation information corresponding to the target object to be executed is "cutting" and "peeling", and the auxiliary target object is: a "knife". For another example, if the associated behavior information is: back pounding, then the target object to be executed for this information is: the user's back, and the corresponding operation information is "pounding".
The robot can analyze which periods are unsuitable and which are suitable for recommending information to the user. Specifically, it needs to analyze the user's current behavior, which environment object the user is currently interacting with, the state of that object, and so on. For example, if the user's current behavior is sitting on the sofa watching television, and the robot judges from visual information that the user is watching attentively, it is not appropriate to recommend information immediately. The robot can obtain the television content from the network or through visual information, and it is suitable to recommend associated behavior information when the television is playing an advertisement or when the user is switching channels with the remote control.
In the embodiment of the application, the acquired environment image information may first be input into a pre-built recognition neural network model, which outputs a set; the data in the set are then processed to obtain the recognized objects. The set contains at least one piece of data; each piece of data may be an integer within a preset range, and each integer corresponds to the label of a different object, so each integer is unique.
The neural network model includes, but is not limited to, CNN-based models built with the YOLO series, transformer-based models, and the like.
The robot control system then matches the data in the set to the corresponding Chinese character strings through an object recognition dictionary; the object whose Chinese character string corresponds to the name of the target object to be executed is the robot's execution object. In the embodiment of the application, the object recognition dictionary is used to match data to target objects; it contains integers from 0 up to a preset limit and Chinese character strings, and each integer is matched with a unique Chinese character string. The data structure of the object recognition dictionary is shown in fig. 9, where each key is a non-repeating integer from 0 to N fixed during model training and represents a recognizable object, and each value is a Chinese character string; the integers output by the model are mapped one-to-one to the Chinese names of the corresponding objects as the final output d = {key1: value1, key2: value2, key3: value3}. After the control system obtains, through the recognition neural network model, a set containing at least one piece of data, the data in the set are matched through the object recognition dictionary to obtain at least one Chinese character string.
For example, when the control system obtains the set output by the recognition neural network model: "1, 5, 6", and the integer range preset in the object recognition dictionary is 0 to 1999, then "1", "5" and "6" are all valid recognizable data. Passing 1, 5 and 6 through the object recognition dictionary shows that "1" corresponds to "dish", "5" corresponds to "kitchen knife" and "6" corresponds to "apple", and a set containing three Chinese character strings is output: {1: dish, 5: kitchen knife, 6: apple}. If the associated behavior information is: the user eats an apple, then the associated action is eating, the target object to be executed is: the apple, the operation information corresponding to the target object to be executed is "cutting", "peeling" and "placing", and the auxiliary target object is: a kitchen knife.
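A small Python sketch of the dictionary lookup just described; only three entries are shown, whereas the real dictionary would cover the whole preset range 0-1999 fixed at training time.

    # key: non-repeating integer label fixed during model training; value: Chinese name.
    object_dict = {1: "盘子",   # dish
                   5: "菜刀",   # kitchen knife
                   6: "苹果"}   # apple

    def labels_to_names(model_output, max_key=1999):
        # Keep only valid recognizable labels and map each to its Chinese character string.
        return {k: object_dict[k]
                for k in model_output
                if 0 <= k <= max_key and k in object_dict}

    print(labels_to_names({1, 5, 6}))  # {1: '盘子', 5: '菜刀', 6: '苹果'}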
Optionally, the control system may recommend the associated behavior information in a manner of displaying on a screen interface of the robot itself, may recommend the associated behavior information in a manner of voice, and may recommend the associated behavior information in a manner of establishing a communication connection with a third-party device, which is not limited herein.
Optionally, since the amount of associated behavior information corresponding to the target object to be executed varies and sometimes not all of it can be recommended to the target user, the control system may, in order to recommend more accurately, use the usage frequency of the associated behavior information, the correlation between the associated action and the target object to be executed, and similar criteria as screening conditions for the recommended associated behavior information.
In the embodiment of the application, the control system searches the database for associated behavior information matching the current behavior information according to the current time information, the current behavior information and/or the current position information of the target user, recommends the found associated behavior information to the target user, and thereby enhances the interactivity between the robot and the user. It should be noted that the robot control method provided in the embodiment of the application extends the operation schemes the robot can execute according to the target user's current behavior, scene, time and similar data, improves the control system's ability to feed back and adjust the operation scheme according to the corresponding information, closes the relevant data loop, and recommends the extended operation schemes, i.e. the corresponding associated behavior information, to the user through the robot control system, thereby improving the user experience.
Referring to fig. 4, the robot control method provided in the present application further includes:
403. if at least one piece of associated behavior information is found, the N associated actions executed most often in a past preset time period are determined as the recommended associated behavior information;
404. if no associated behavior information is found, the current behavior information of the target user is retrieved through a networked third-party search engine, and, in combination with the current time information and/or current position information of the target user, the n pieces of information with the highest search frequency that match them are taken as the recommended associated behavior information;
step 403 follows step 302 of the above embodiment.
In this embodiment, the robot control system may search, according to the current behavior information, associated behavior information that matches the current behavior information from the database of the target user in combination with the current time information and/or the current location information.
Because the database records how many times each associated action has been executed in a past preset time period, the robot control system, in order to determine the recommendable associated behavior information more specifically, may search the database for associated behavior information matching the current behavior information in combination with the current time information and/or the current position information. If at least one piece of associated behavior information is found, the N associated actions executed the most times in the past preset time period are selected as the recommended associated behavior information, and the target object in the associated behavior information is determined as the target object to be executed. For example, when the control system obtains that the current behavior information of the target user is "leaning back on the sofa", the current time is "12:05 noon" and the current position information is "living room", this information can be matched in the database against the historical behavior data for the corresponding time and position: eating apples 5 times, eating plums 2 times, eating grapes 3 times, eating waxberries 1 time, and so on. Because there are many matching historical behavior records, the control system selects the 3 actions executed most often in the past preset time period as the recommended associated behavior information: eating apples, eating grapes and eating plums; the apples, grapes and plums are the target objects to be executed corresponding to the associated behavior information, and the associated action is eating.
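The top-N selection can be sketched in Python as follows; the history counts mirror the example in the preceding paragraph, and the flat counter format is an assumption.

    from collections import Counter

    # Execution counts of associated actions in the past preset time period (example values above).
    history = Counter({"eat apple": 5, "eat grapes": 3, "eat plum": 2, "eat waxberry": 1})

    def recommend_top_n(history, n=3):
        # Return the N associated actions executed the most times.
        return [action for action, _ in history.most_common(n)]

    print(recommend_top_n(history))  # ['eat apple', 'eat grapes', 'eat plum']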
When no associated behavior information is found in the database, a third-party search engine is queried instead, the n pieces of information with the highest search frequency are selected as the recommended associated behavior information, and if a target object exists in that associated behavior information, it is further determined as the target object to be executed.
405. If at least one piece of associated behavior information is found and does not include a corresponding target object to be executed, the associated behavior information is recommended to the target user. If at least one piece of associated behavior information is found and includes a corresponding target object to be executed, that target object is searched for in the acquired environment image information; if it is found, it is judged whether the distance between the found target object to be executed and the target user exceeds a preset length, and if it does, the associated behavior information is recommended to the target user;
in the embodiment of the application, when at least one piece of associated behavior information is found and includes a corresponding target object to be executed, the associated behavior information is not pushed to the target user if the target object to be executed is not found in the acquired environment image information; if it is found, the associated behavior information corresponding to the target object to be executed is recommended to the target user only after it has been determined that the distance between the target object to be executed and the target user exceeds the preset length.
In the embodiment of the application, the recommendation mode of the target user associated behavior information can be determined through specific scene characteristics. Two different types of recommendations are specifically introduced below:
1. recommending associated behavior information to a target user in a voice mode;
specifically, in any scene, the robot control system may recommend the associated behavior information to the target user in a voice manner. Before voice recommendation, the robot control system may number each piece of associated behavior information that needs to be pushed by voice in advance, for example, a eat an apple, B eat a plum, and the like, and then play the information in sequence by voice. Further, the target user can directly reply to the number "a", the control system determines that the corresponding robot executes the "washing" operation after receiving the reply of the target user, and can also directly reply to the recommended content "wash apple".
2. The associated behavior information or the character name or the picture of the target object to be executed is displayed on a display screen of the robot or a screen of third-party equipment which establishes communication connection;
specifically, when the target user is sitting on a sofa to watch television or play a mobile phone, the behavior information of the user may include but is not limited to sitting, lying, walking, jogging, running, boxing, waving hands, clapping hands and the like, the robot control system may acquire that the current behavior information of the target user is "sitting", and at this time, when the control system further identifies the mobile phone which is being on the screen or the television which is playing content according to an image shot by a camera on the robot, the bluetooth is turned on, and a receiving device corresponding to the mobile phone is found in the surrounding environment, so as to establish wireless communication with the mobile phone or the television, and associated behavior information which needs to be recommended is displayed on an associated screen for the target user to browse and select.
Optionally, the control system may further determine from the current time information whether the associated behavior information needs to be displayed on a screen. For example, when the control system finds that the current time is 23:00, it can determine that the current time information is "late night" and that it is not appropriate to disturb the target user or nearby users by voice; in that case the associated behavior information can be displayed on the screen the target user is currently watching, and if no action of the target user watching a screen is recognized and no brightly lit playing screen is recognized, the display screen carried by the robot is controlled to light up and the associated behavior information is displayed on it.
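For the voice recommendation mode described in point 1 above, the numbering and reply handling could look like the following Python sketch; the letter labelling follows the example given there, while the reply-matching rule is an assumption for illustration.

    import string

    def number_options(options):
        # Label each associated behavior with A, B, C, ... for voice playback.
        return dict(zip(string.ascii_uppercase, options))

    def resolve_reply(reply, numbered):
        # Accept either the option letter ("A") or the recommended content itself.
        reply = reply.strip()
        if reply.upper() in numbered:
            return numbered[reply.upper()]
        for option in numbered.values():
            if reply == option:
                return option
        return None  # no recognizable selection

    menu = number_options(["eat an apple", "eat a plum"])
    print(menu)                               # {'A': 'eat an apple', 'B': 'eat a plum'}
    print(resolve_reply("A", menu))           # eat an apple
    print(resolve_reply("eat a plum", menu))  # eat a plum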
406. If feedback information from the target user is received within a first preset time period, it is analyzed whether the feedback information indicates acceptance of the recommended associated behavior information; if so, step 407 is executed, and if not, the process ends;
407. generating a corresponding first operation instruction, and executing corresponding operation according to the first operation instruction;
in the embodiment of the application, if a target user has a demand, the target user can select the demanded related behavior information according to the related behavior information recommended by the robot control system within a preset time, and if the target user has no demand, the related behavior information recommended by the robot control system can be rejected.
Specifically, when the robot control system recommends the associated behavior information as: when the apple is eaten, the consent of the target user through voice reply is received within the preset time after the association behavior information is recommended, if the consent is good, ok, eating, no peeling is needed, and the like, the target user can be determined to agree with the recommendation of the system, and then a corresponding first operation instruction is generated to control the robot to execute corresponding operation; and receiving 'disagreement, NO use, NO eating, NO, i.e. the target user wants to eat pears' and the like replied by the voice of the target user within the preset time, determining that the target user refuses the recommendation of the system, and terminating the flow.
408. Acquiring target associated behavior information selected by a target user according to a first operation instruction;
409. generating a first data set from the target associated behavior information through a first model, wherein the first model is used for analyzing the part of speech of words in the target associated behavior information, and the first data set is a word and a corresponding part of speech data set in the target associated behavior information;
410. and storing the words and the corresponding part of speech data of the first data set in a database according to part of speech classification.
In the embodiment of the application, the control system takes the target associated behavior information selected by the target user each time as new data, analyzes it, and stores it in the database, so that the associated behavior information in the database is updated and iterated.
Specifically, suppose the associated behavior information selected by the user is "eat apple". After the control system controls the corresponding robot to execute this associated behavior information, it is processed by any natural language processing model with a part-of-speech analysis function (i.e. the first model), which outputs the first data set: "{apple-noun, eat-verb}"; the words in the first data set are then stored in the database classified by Chinese part of speech.
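For illustration, the part-of-speech step could look like the following Python sketch; the jieba library is only an assumed stand-in for "any natural language processing model with a part-of-speech analysis function", not a library named by the patent.

    import jieba.posseg as pseg         # assumed dependency: pip install jieba
    from collections import defaultdict

    def build_first_dataset(target_behavior):
        # Return {word: part_of_speech} for the selected associated behavior information.
        return {word: flag for word, flag in pseg.cut(target_behavior)}

    def store_by_pos(dataset, database):
        # Store the words in the database classified by part of speech.
        for word, pos in dataset.items():
            database[pos].append(word)

    db = defaultdict(list)
    first_dataset = build_first_dataset("吃苹果")  # e.g. {'吃': 'v', '苹果': 'n'}
    store_by_pos(first_dataset, db)
    print(dict(db))                                # e.g. {'v': ['吃'], 'n': ['苹果']}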
In the embodiment of the application, after the control system recommends to the target user the associated behavior information corresponding to the target object to be executed, it can generate the corresponding first operation instruction from feedback information sent by the target user that indicates agreement with the recommendation, and control the robot to execute the corresponding operation. It can also acquire, according to the first operation instruction, the target associated behavior information selected by the target user, perform part-of-speech analysis on it through the first model, output the first data set, and store the words in the first data set and their part-of-speech data in the database classified by part of speech, so that the database is updated iteratively to cope with new application scenarios and new user behaviors.
Referring to fig. 5, the robot control method provided in the present application further includes:
501. judging whether the operation needs to be assisted by an auxiliary target object or not according to the feedback information, if so, executing a step 502, and if not, executing a step 504;
502. searching for an auxiliary target object from the acquired environment image information;
503. if the auxiliary target object is found, a corresponding first operation instruction is generated and the corresponding operation is executed; if it is not found, the target user is notified, and if the auxiliary target object is then found in the acquired environment image information within a second preset time period after the notification, a corresponding first operation instruction is generated and the corresponding operation is executed;
504. generating a corresponding first operation instruction, and executing corresponding operation according to the first operation instruction;
step 501 follows step 406 in the above embodiment, and step 504 precedes step 408 in the above embodiment.
In the embodiment of the application, some of the associated behavior information executed by the robot can only be completed with the help of an auxiliary tool. Therefore, after receiving the feedback information sent by the target user, the robot control system needs to analyze it and judge whether the cooperation of an auxiliary target object is required. If it is, the auxiliary target object is searched for before the operation corresponding to the associated behavior information selected by the target user is executed; if it is not, a corresponding first operation instruction can be generated directly from the feedback information and the corresponding operation executed.
Specifically, suppose the control system learns from the feedback information that the associated behavior information selected by the target user is: eating an apple. The target object to be executed for this information is then the apple, and since cutting the apple requires a knife as an auxiliary tool, the auxiliary target object is: a knife. The control system then searches the acquired environment image information for an object matching the knife. If one is found, a corresponding first operation instruction is generated and the robot is controlled to execute the operation; if not, the user is asked, by voice prompt, by a prompt message sent to the user's mobile phone terminal or in a similar way, to place a knife within the shooting area of the robot's camera, so that the control system can search the second batch of environment image information captured by the camera within the preset time and find the corresponding knife.
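The flow of steps 501-503 can be sketched in Python as follows; the capture/prompt callbacks and the 30-second default for the second preset time period are assumptions for illustration.

    import time

    def ensure_auxiliary(capture_environment, auxiliary, prompt, wait_s=30.0):
        # capture_environment() is assumed to return the set of object names currently detected.
        if auxiliary in capture_environment():
            return True                      # found immediately: proceed with the operation
        prompt(f"Please place the {auxiliary} where the robot's camera can see it.")
        deadline = time.time() + wait_s      # second preset time period after notifying the user
        while time.time() < deadline:
            if auxiliary in capture_environment():
                return True                  # found after the user was notified
            time.sleep(1.0)
        return False                         # not found: do not execute the operation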
In the embodiment of the application, by analyzing the feedback information sent by the target user, the control system can judge whether an auxiliary target object is needed in addition to the target object to be executed; if it is, the control system can actively determine the position of the auxiliary target object and execute the corresponding operation, which strengthens the robot's ability to carry out the task.
The above embodiments describe the robot control method provided in the present application in detail, and the robot provided in the present application is described in detail below with reference to the accompanying drawings.
Referring to fig. 6, fig. 6 provides an embodiment of a robot according to the present application, including:
a first obtaining unit 601, configured to obtain current behavior information of a target user;
a first searching unit 602, configured to search, according to current behavior information of a target user, associated behavior information matched with the current behavior information of the target user from a database in combination with current time information and/or current location information of the target user, where the associated behavior information includes an associated action and/or a target object to be executed;
a first executing unit 603, configured to recommend relevant behavior information to a target user when the first searching unit 602 searches for at least one piece of relevant behavior information and the at least one piece of relevant behavior information does not include a corresponding target object to be executed; the first searching unit 602 is further configured to search the to-be-executed target object from the acquired environment image information when at least one piece of associated behavior information is searched for and includes the corresponding to-be-executed target object, and recommend the associated behavior information to the target user if the to-be-executed target object is searched for.
In this embodiment of the application, after the first obtaining unit 601 obtains the current behavior information of the target user, the first searching unit 602 searches the database, according to the current behavior information obtained by the first obtaining unit 601 in combination with the current time information and/or current location information, for associated behavior information matched with that current behavior information, and the first executing unit 603 then recommends the associated behavior information to the user, so that the interactivity between the robot and the user is enhanced and the user's experience is improved.
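As a rough illustration of this flow, the sketch below assumes the database is a simple list of dictionaries and that object detection over the environment image is available through an injected detect_objects callable; these names and structures are assumptions for this example, not part of the embodiment itself.

```python
def find_associated_behaviors(database, current_behavior, current_time, current_location):
    """Return database entries whose trigger matches the observed behavior and context."""
    return [
        entry for entry in database
        if entry["trigger"] == current_behavior
        and (entry.get("time") is None or entry["time"] == current_time)
        and (entry.get("location") is None or entry["location"] == current_location)
    ]

def search_and_recommend(database, current_behavior, current_time, current_location,
                         environment_image, detect_objects, recommend_to_user):
    matches = find_associated_behaviors(database, current_behavior, current_time, current_location)
    for entry in matches:
        target_object = entry.get("target_object")
        if target_object is None:
            # No target object involved: recommend the associated action directly.
            recommend_to_user(entry)
        elif target_object in detect_objects(environment_image):
            # A target object is involved and is visible in the environment: recommend it.
            recommend_to_user(entry)
```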
Referring to fig. 7, fig. 7 provides another embodiment of a robot control apparatus according to the present application, the apparatus including:
a first obtaining unit 701, configured to obtain current behavior information of a target user;
a first searching unit 702, configured to search, according to current behavior information of a target user, associated behavior information matched with the current behavior information of the target user from a database in combination with current time information and/or current location information of the target user, where the associated behavior information includes an associated action and/or a target object to be executed;
a recommendation determining unit 703, configured to determine, when at least one piece of associated behavior information is searched, the top N associated actions executed the most times within a past preset time period as the recommended associated behavior information; and further configured to, when no associated behavior information is searched, search the current behavior information of the target user through a networked third-party search engine and, in combination with the current time information and/or current location information of the target user, take the top n pieces of information with the highest search frequency matching that context as the recommended associated behavior information (a sketch of this logic is given after this list);
a first executing unit 704, configured to recommend the associated behavior information to the target user when the first searching unit 702 searches for at least one piece of associated behavior information and the at least one piece of associated behavior information does not include a corresponding target object to be executed; the first searching unit 702 is further configured to search for the target object to be executed in the acquired environment image information when at least one piece of associated behavior information is searched for and includes the corresponding target object to be executed, and to recommend the associated behavior information to the target user if the target object to be executed is found;
a first analyzing unit 705, configured to analyze, when the feedback information of the target user is received within a first preset time period, whether the feedback information includes an indication that the target user agrees to accept the recommended associated behavior information;
a second executing unit 706, configured to generate a corresponding first operation instruction when the first analyzing unit 705 determines that the feedback information includes such an indication, and to execute the corresponding operation according to the first operation instruction;
a second obtaining unit 707 for obtaining target associated behavior information selected by a target user according to the first operation instruction;
a set generating unit 708, configured to generate a first data set from the target associated behavior information through a first model, where the first model is used to analyze the parts of speech of the words in the target associated behavior information, and the first data set consists of the words in the target associated behavior information and their corresponding parts of speech;
a data storage unit 709, configured to store the words of the first data set and the corresponding part-of-speech data in the database according to part-of-speech classification.
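The following sketch outlines, under assumed data structures, how the recommendation determining unit 703 and the set generating and data storage units 708 and 709 might operate. The web_search callable, the pos_tag placeholder and the execution-count field are illustrative assumptions; the embodiment does not name a concrete search engine API or part-of-speech model.

```python
from collections import defaultdict

def determine_recommendations(matches, web_search, current_time, current_location, top_n=3):
    if matches:
        # Unit 703, first branch: top-N associated actions by execution count
        # within the past preset time period.
        ranked = sorted(matches, key=lambda e: e["executions_in_period"], reverse=True)
        return ranked[:top_n]
    # Unit 703, second branch: fall back to a networked third-party search engine
    # and keep the n most frequently searched results matching the time/location.
    results = web_search(current_time=current_time, current_location=current_location)
    return sorted(results, key=lambda r: r["search_frequency"], reverse=True)[:top_n]

def pos_tag(word):
    """Placeholder for the first model's part-of-speech analysis."""
    raise NotImplementedError

def store_selected_behavior(selected_behavior_text, database_by_pos):
    # Unit 708: build the first data set of (word, part of speech) pairs.
    first_data_set = [(word, pos_tag(word)) for word in selected_behavior_text.split()]
    # Unit 709: store the words in the database grouped by part of speech.
    for word, pos in first_data_set:
        database_by_pos[pos].append(word)
    return first_data_set

# Example container for the part-of-speech-classified storage.
database_by_pos = defaultdict(list)
```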
In this embodiment of the present application, the second executing unit 706 includes:
a second judging module 7061, configured to judge whether an operation needs to be assisted by an auxiliary target according to the feedback information;
a third searching module 7062, configured to search for an auxiliary target object from the acquired environment image information when the second determining module 7061 determines that the operation needs to be assisted by the auxiliary target object;
a third executing module 7063, configured to generate a corresponding operation instruction and execute the corresponding operation when the auxiliary target object is found by the third searching module 7062; the third executing module 7063 is further configured to notify the target user when the auxiliary target object is not found, and to generate a corresponding first operation instruction and execute the corresponding operation if the auxiliary target object is then found in the acquired environment image information within a second preset time period after the target user is notified.
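To make the timing explicit, the sketch below shows one possible arrangement of the searching, notification and retry behavior described for modules 7062 and 7063: look for the auxiliary object, notify the user if it is absent, and keep re-checking new camera frames until a second preset time period expires. The polling loop, the helper callables and the default durations are assumptions for illustration only.

```python
import time

def find_auxiliary_object(auxiliary_object, capture_image, detect_objects,
                          notify_user, second_preset_seconds=30.0, poll_interval=1.0):
    # Third searching module: look for the tool in the current environment image.
    if auxiliary_object in detect_objects(capture_image()):
        return True

    # Not found: notify the target user, then keep retrying until the second
    # preset time period expires, as the third executing module requires.
    notify_user(f"Please place the {auxiliary_object} where the camera can see it.")
    deadline = time.monotonic() + second_preset_seconds
    while time.monotonic() < deadline:
        if auxiliary_object in detect_objects(capture_image()):
            return True
        time.sleep(poll_interval)
    return False
```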
In this embodiment of the application, the first executing unit 704 is further configured to, when the associated behavior information includes a corresponding target object to be executed, determine whether a distance between the searched target object to be executed and the target user exceeds a preset length, and if the distance exceeds the preset length, recommend the associated behavior information to the target user.
In this embodiment of the application, the manner in which the first execution unit 704 recommends the associated behavior information includes:
recommending associated behavior information to a target user in a voice mode;
or,
displaying the associated behavior information, or the text name or picture of the target object to be executed, on a display screen of the robot or on the screen of a third-party device with which a communication connection has been established.
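A compact sketch of the distance check and the two recommendation channels is given below. The preset length, the distance estimate and the speak/show_on_screen interfaces are placeholders, since the embodiment leaves the measurement method and output devices unspecified.

```python
def maybe_recommend(entry, object_distance_m, preset_length_m,
                    speak, show_on_screen, use_voice=True):
    # Recommend only when the distance between the found target object and the
    # user exceeds the preset length, per the condition described above.
    if entry.get("target_object") is not None and object_distance_m <= preset_length_m:
        return False
    if use_voice:
        speak(f"Would you like to {entry['action']}?")
    else:
        # Show the behavior text, or the object's name or picture, on the robot's
        # display or on a connected third-party device.
        show_on_screen(text=entry["action"], image=entry.get("target_object_picture"))
    return True
```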
Referring to fig. 8, fig. 8 provides one embodiment of a robotic device for the present application, the robotic device comprising:
a processor 801, a memory 802, an input/output unit 803, a bus 804;
the processor 801 is connected to a memory 802, an input/output unit 803, and a bus 804;
the processor 801 specifically performs the following operations:
acquiring current behavior information of a target user;
searching associated behavior information matched with the current behavior information of the target user from a database according to the current behavior information of the target user in combination with the current time information and/or the current position information of the target user, wherein the associated behavior information comprises associated actions and/or a target object to be executed;
if at least one piece of associated behavior information is searched and the at least one piece of associated behavior information does not comprise a corresponding target object to be executed, recommending the associated behavior information to the target user;
and if at least one piece of associated behavior information is searched and the at least one piece of associated behavior information comprises a corresponding target object to be executed, searching the target object to be executed from the acquired environment image information, and if the target object to be executed is searched, recommending the associated behavior information to the target user.
In this embodiment, the functions of the processor 801 correspond to the steps in the embodiments shown in fig. 3 to fig. 5, which are not repeated herein.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Claims (12)

1. A robot control method is applied to a robot control system and is characterized by comprising the following steps:
acquiring current behavior information of a target user;
searching associated behavior information matched with the current behavior information of the target user from a database according to the current behavior information of the target user in combination with current time information and/or current position information of the target user, wherein the associated behavior information comprises associated actions and/or a target object to be executed;
if at least one piece of associated behavior information is searched, and the at least one piece of associated behavior information does not comprise a corresponding target object to be executed, recommending the associated behavior information to the target user;
if at least one piece of associated behavior information is searched, and the at least one piece of associated behavior information comprises a corresponding target object to be executed, searching the target object to be executed from the acquired environment image information, and if the target object to be executed is searched, recommending the associated behavior information to the target user.
2. The robot control method according to claim 1, wherein after the recommending the associated behavior information to the target user, the robot control method further comprises:
if the feedback information of the target user is received within a first preset time period, analyzing whether the feedback information comprises an indication that the target user agrees to accept the recommended associated behavior information;
and if so, generating a corresponding first operation instruction, and executing corresponding operation according to the first operation instruction.
3. The robot control method according to claim 2, wherein the step of recommending the associated behavior information to the target user further comprises:
if the associated behavior information comprises the corresponding target object to be executed, judging whether the distance between the searched target object to be executed and the target user exceeds a preset length, and if so, recommending the associated behavior information to the target user.
4. The robot control method according to claim 3, wherein the manner of recommending the associated behavior information includes:
recommending the associated behavior information to the target user in a voice mode;
or displaying the associated behavior information, or the text name or picture of the target object to be executed, on a display screen of the robot or on the screen of a third-party device with which a communication connection is established.
5. A robot control method according to any of claims 2-4, characterized in that before generating the respective first operation instruction, it further comprises:
judging whether the operation needs to be assisted by an auxiliary target object or not according to the feedback information, and if so, searching the auxiliary target object from the acquired environment image information;
if the auxiliary target object is searched, generating a corresponding first operation instruction, and executing a corresponding operation;
and if the auxiliary target object is not searched, informing the target user, and if the auxiliary target object is searched from the acquired environment image information within a second preset time period after the target user is informed, generating a corresponding first operation instruction and executing corresponding operation.
6. The robot control method according to claim 5, wherein the database includes the number of executions of the associated action within a past preset time period;
the step of searching the database for the associated behavior information matching with the current behavior information of the target user further comprises the following steps:
if at least one piece of associated behavior information is searched, determining the top N associated actions executed the most times within the past preset time period as the recommended associated behavior information;
and if the associated behavior information is not searched, searching the current behavior information of the target user through a networked third-party search engine, and, in combination with the current time information and/or the current position information of the target user, taking the top n pieces of information with the highest search frequency that match as the recommended associated behavior information.
7. A robot, comprising:
the first acquisition unit is used for acquiring the current behavior information of a target user;
the first searching unit is used for searching associated behavior information matched with the current behavior information of the target user from a database according to the current behavior information of the target user in combination with current time information and/or current position information of the target user, wherein the associated behavior information comprises associated actions and a target object to be executed;
the first execution unit is used for recommending the associated behavior information to the target user when the first search unit searches at least one piece of associated behavior information and the at least one piece of associated behavior information does not comprise a corresponding target object to be executed;
the first execution unit is further configured to search the to-be-executed target object from the acquired environment image information when the first search unit searches for at least one piece of associated behavior information and the at least one piece of associated behavior information includes a corresponding to-be-executed target object, and recommend the associated behavior information to the target user if the to-be-executed target object is searched.
8. The robot of claim 7, further comprising:
the first analysis unit is used for analyzing, when the feedback information of the target user is received within a first preset time period, whether the feedback information comprises an indication that the target user agrees to accept the recommended associated behavior information;
and the second execution unit is used for generating a corresponding first operation instruction when the first analysis unit determines that the feedback information comprises such an indication, and executing the corresponding operation according to the first operation instruction.
9. The robot of claim 8, wherein the first executing unit is further configured to, when the associated behavior information includes a corresponding target object to be executed, determine whether a distance between the searched target object to be executed and the target user exceeds a preset length, and if the distance exceeds the preset length, recommend the associated behavior information to the target user.
10. The robot according to claim 9, wherein the manner of recommending the associated behavior information in the first execution unit includes:
recommending the associated behavior information to the target user in a voice mode;
or,
displaying the associated behavior information, or the text name or picture of the target object to be executed, on a display screen of the robot or on the screen of a third-party device with which a communication connection has been established.
11. A robot as claimed in any of claims 8 to 10, wherein the second execution unit comprises:
the second judgment module is used for judging whether the operation needs to be assisted by an auxiliary target or not according to the feedback information;
a third searching module, configured to search for an auxiliary target object from the acquired environment image information when the second determining module determines that the operation needs to be assisted by the auxiliary target object;
the third execution module is used for generating a corresponding operation instruction and executing corresponding operation when the auxiliary target object is searched by the third search module;
and the third execution module is further configured to notify the target user when the auxiliary target object is not searched by the third search module, and generate a corresponding first operation instruction and execute a corresponding operation when the auxiliary target object is searched from the acquired environment image information within a second preset time period after the target user is notified.
12. A robot as claimed in claim 11, wherein the database includes a number of executions of the associated action within a past preset time period;
the robot further comprises:
the recommendation determining unit is used for determining the top N associated actions with the largest number of execution times in the past preset time period as recommended associated behavior information when at least one associated behavior information is searched; and when the associated behavior information is not searched, searching the current behavior information of the target user through a networked third-party search engine, and taking the first n pieces of information with the highest search frequency matched with the current time information and/or the current position information of the target user as the recommended associated behavior information.
CN202111653094.9A 2021-12-30 Robot control method and robot Active CN114488879B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111653094.9A CN114488879B (en) 2021-12-30 Robot control method and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111653094.9A CN114488879B (en) 2021-12-30 Robot control method and robot

Publications (2)

Publication Number Publication Date
CN114488879A true CN114488879A (en) 2022-05-13
CN114488879B CN114488879B (en) 2024-05-31




Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005131713A (en) * 2003-10-28 2005-05-26 Advanced Telecommunication Research Institute International Communication robot
WO2006080820A1 (en) * 2005-01-28 2006-08-03 Sk Telecom Co., Ltd. Method and system for recommending preferred service using mobile robot
WO2012008553A1 (en) * 2010-07-15 2012-01-19 日本電気株式会社 Robot system
CN106462256A (en) * 2016-07-07 2017-02-22 深圳狗尾草智能科技有限公司 A function recommendation method, system and robot based on positive wakeup
CN107498555A (en) * 2017-08-11 2017-12-22 上海思依暄机器人科技股份有限公司 One kind action transmitting method, device and robot
US20190058609A1 (en) * 2017-08-15 2019-02-21 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for pushing information based on artificial intelligence
CN108858207A (en) * 2018-09-06 2018-11-23 顺德职业技术学院 It is a kind of that Target Searching Method and system are cooperateed with based on the multirobot remotely controlled
US20210342124A1 (en) * 2018-09-28 2021-11-04 Element Ai Inc. Context-based recommendations for robotic process automation design
CN110174118A (en) * 2019-05-29 2019-08-27 北京洛必德科技有限公司 Robot multiple-objective search-path layout method and apparatus based on intensified learning
US20210162593A1 (en) * 2019-12-03 2021-06-03 Samsung Electronics Co., Ltd. Robot and method for controlling thereof
CN111680147A (en) * 2020-07-07 2020-09-18 腾讯科技(深圳)有限公司 Data processing method, device, equipment and readable storage medium
CN111975772A (en) * 2020-07-31 2020-11-24 深圳追一科技有限公司 Robot control method, device, electronic device and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116506816A (en) * 2023-04-18 2023-07-28 广州市小粤云科技有限公司 Telephone information recommendation device and recommendation system thereof
CN116506816B (en) * 2023-04-18 2023-10-17 广州市小粤云科技有限公司 Telephone information recommendation device and recommendation system thereof

Similar Documents

Publication Publication Date Title
CN106873773B (en) Robot interaction control method, server and robot
US11763599B2 (en) Model training method and apparatus, face recognition method and apparatus, device, and storage medium
US8321221B2 (en) Speech communication system and method, and robot apparatus
US11017609B1 (en) System and method for generating augmented reality objects
CN103717358A (en) Control system, display control method, and non-transitory computer readable storage medium
US20130167025A1 (en) System and method for online user assistance
CN111353299B (en) Dialog scene determining method based on artificial intelligence and related device
KR102466438B1 (en) Cognitive function assessment system and method of assessing cognitive funtion
US10347026B2 (en) Information processing apparatus with location based display
CN108572586B (en) Information processing apparatus and information processing system
US20210233529A1 (en) Imaging control method and apparatus, control device, and imaging device
KR20180072534A (en) Electronic device and method for providing image associated with text
CN110534109A (en) Audio recognition method, device, electronic equipment and storage medium
CN113821720A (en) Behavior prediction method and device and related product
KR20200085143A (en) Conversational control system and method for registering external apparatus
CN111515970B (en) Interaction method, mimicry robot and related device
JP2020091801A (en) Work analysis system and work analysis method
CN114005511A (en) Rehabilitation training method and system, training self-service equipment and storage medium
CN114488879B (en) Robot control method and robot
CN114488879A (en) Robot control method and robot
CN107230427A (en) Intelligent display platform based on laser navigation, Activity recognition
WO2006080820A1 (en) Method and system for recommending preferred service using mobile robot
CN115086094B (en) Equipment selection method and related device
WO2018090534A1 (en) Method and device for generating user state map
CN109284783A (en) Machine learning-based worship counting method and device, user equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant