CN116636774A - Robot, control method, device, equipment and storage medium thereof - Google Patents

Robot, control method, device, equipment and storage medium thereof

Info

Publication number
CN116636774A
Authority
CN
China
Prior art keywords
user
room
target
robot
environment map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310623952.8A
Other languages
Chinese (zh)
Inventor
孙境廷
王欣
李昂
王琴琴
钟锟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iFlytek Co Ltd filed Critical iFlytek Co Ltd
Priority to CN202310623952.8A priority Critical patent/CN116636774A/en
Publication of CN116636774A publication Critical patent/CN116636774A/en
Pending legal-status Critical Current


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Manipulator (AREA)

Abstract

The application discloses a robot and a control method, device, equipment and storage medium thereof. Unlike the prior art, which supports only system-level room type names and requires the user to additionally memorize the correspondence between those names and the actual rooms before issuing task instructions, the scheme lets the user customize room type names. Further, by adding the user-defined target names to a hotword lexicon, the accuracy with which the voice instruction analysis model recognizes the hotwords contained in the user's voice commands is improved.

Description

Robot, control method, device, equipment and storage medium thereof
Technical Field
The present application relates to the field of intelligent robot control technologies, and in particular, to a robot, a control method, a control device, a control apparatus, and a storage medium thereof.
Background
With social and economic development and advances in science and technology, the pursuit of a higher standard of material life keeps growing. Against this background, mobile intelligent robots such as home service robots increasingly appear in public view. Taking the cleaning robot as an example, it plays an important role among smart-home devices and has become increasingly popular with consumers in recent years.
While autonomously moving through its working area and executing specific tasks, the cleaning robot can construct an environment map, enabling more accurate navigation and path planning. By further partitioning the map into rooms and combining voice interaction technology, the user can issue voice commands to direct the robot to move precisely to a specific room area and perform a specific task there.
In the prior art, room partition types support only system-level naming: the user can only choose room type names from a fixed list, such as "bedroom", "kitchen" and "bathroom", and when multiple rooms share the same type name, a numeric identifier is automatically appended to distinguish them, e.g., "bedroom one", "bedroom two". With this naming scheme, when issuing a voice task for a specific area, the user has to additionally memorize which actual room corresponds to, for example, "bedroom one", "bathroom two" or "bedroom three", so interaction is neither natural nor convenient, and the user experience suffers.
Disclosure of Invention
In view of the above problems, the present application provides a robot and a control method, device, equipment and storage medium thereof, so as to support user-defined room type names, making it easier for the user to issue task instructions to the robot through voice interaction. The specific scheme is as follows:
in a first aspect, a robot control method is provided, including:
acquiring an environment map of the robot, wherein the environment map is partitioned according to rooms;
responding to an instruction of a user for defining a room type for a target room in the environment map, and modifying the room type name of the target room into a target name customized by the user in the environment map;
and adding the target name into a hotword lexicon corresponding to the user, so that the corresponding hotword lexicon is loaded when the user's voice instruction is parsed by a pre-configured voice instruction analysis model.
Preferably, the method further comprises:
and receiving the voice control instruction of the user, performing recognition and intention analysis on the voice control instruction by using a voice instruction analysis model loaded with the hotword lexicon corresponding to the user, and controlling the robot to execute the corresponding task operation according to the intention analysis result.
Preferably, in response to a user instruction for defining a room type for a target room in the environment map, modifying a room type name of the target room to a target name customized by the user in the environment map includes:
responding to the operation of editing the room type name of the target room in the displayed environment map on the terminal equipment by a user, and taking the edited target name as the updated room type name of the target room.
Preferably, in response to a user instruction for defining a room type for a target room in the environment map, modifying a room type name of the target room to a target name customized by the user in the environment map includes:
responding to a voice command of a user for defining a room type, and identifying and extracting a user-defined target name from the voice command;
determining a target room in which a user issues the voice instruction, or determining a target room in which a human body pointing point is located when the user issues the voice instruction;
and modifying the room type name of the target room into the target name in the environment map.
Preferably, the method further comprises:
after detecting that the user has deleted a target name customized for a room in the environment map, deleting the corresponding target name from the hotword lexicon corresponding to the user.
Preferably, the voice instruction analysis model comprises a voice recognition model and an intention analysis model; performing recognition and intention analysis on the voice control instruction by using a voice instruction analysis model loaded with the hotword lexicon corresponding to the user comprises:
recognizing the voice control instruction by using a voice recognition model loaded with the hotword lexicon corresponding to the user to obtain a recognition text, wherein the weight of the decoding path corresponding to each hotword in the hotword lexicon is raised in the voice recognition decoding stage;
and performing intention analysis on the recognition text by using the intention analysis model to obtain an intention analysis result.
Preferably, performing intention analysis on the recognition text by using the intention analysis model to obtain an intention analysis result comprises:
if it is detected that the recognition text contains a hotword in the hotword lexicon, replacing the hotword in the recognition text with a unified identifier to obtain a replaced recognition text;
performing intention analysis on the replaced recognition text by using the intention analysis model to obtain a preliminary intention analysis result;
and replacing the unified identifier in the preliminary intention analysis result with the hotword to obtain a final intention analysis result.
Preferably, the hotword lexicons corresponding to all users of the same robot are one and the same hotword lexicon.
Preferably, if the hotword lexicon corresponding to each user is a private hotword lexicon, adding the target name into the hotword lexicon corresponding to the user includes:
adding the target name into a private hotword lexicon corresponding to the identity of the user.
Preferably, the acquiring the environment map of the robot includes:
identifying the identity of the current user;
and calling the environment map in the map storage space corresponding to the identity of the current user, wherein the environment map created by the user corresponding to the identity is stored in the map storage space.
In a second aspect, there is provided a robot control device including:
the map acquisition unit is used for acquiring an environment map of the robot, and the environment map is partitioned according to rooms;
The name modifying unit is used for responding to an instruction of a user for defining a room type of a target room in the environment map, and modifying the room type name of the target room into the user-defined target name in the environment map;
and a lexicon adding unit, configured to add the target name into the hotword lexicon corresponding to the user, so that the corresponding hotword lexicon is loaded when the user's voice instruction is parsed by a pre-configured voice instruction analysis model.
In a third aspect, there is provided a robot control apparatus comprising: a memory and a processor;
the memory is used for storing programs;
the processor is configured to execute the program to implement the steps of the robot control method as described above.
In a fourth aspect, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the robot control method as described above.
In a fifth aspect, there is provided a robot comprising: the robot comprises a robot main body, and an environment sensing sensor, a microphone and control equipment which are arranged on the robot main body;
the environment sensing sensor is used for collecting environment data and constructing an environment map;
the microphone is used for receiving instructions issued by the user in voice form;
the control device is used for realizing the steps of the robot control method.
By means of the above technical scheme, the application supports user-defined room type names in the environment map of the robot. Specifically, in response to a user instruction defining a room type for a target room in the environment map, the room type name of the target room is modified to the user-defined target name in the environment map. The user can thus name room types according to personal preference, for example defining a room as "dad's room" or "Xiaobai's nest". Names defined this way better match the user's preferences and memory habits, and can be used directly when later issuing task instructions to the robot by voice, for example "please clean dad's room". Compared with the prior art, which supports only system-level room type names and requires the user to additionally memorize the correspondence between system-level names and actual rooms before issuing task instructions, interaction becomes more natural and convenient.
Furthermore, after the room type name of the target room in the environment map has been modified to the user-defined target name, the target name can be added to the hotword lexicon corresponding to the user. This hotword lexicon can be loaded before the user's voice instruction is parsed by the voice instruction analysis model; that is, the voice instruction is parsed by a model loaded with the user's hotword lexicon. Because the hotword lexicon is loaded, the model recognizes and interprets the hotwords contained in the user's voice instruction more accurately, so the resulting intention analysis is more accurate, the user's command intention is better understood, and the robot can be controlled to better execute the task corresponding to the user's command.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
Fig. 1 is a schematic flow chart of a robot control method according to an embodiment of the present application;
FIG. 2 illustrates an example of multiple users sharing one robot device, one environment map, and one namespace;
FIG. 3 illustrates an example of multiple users sharing one robot device and one environment map, each user having an independent namespace;
FIG. 4 illustrates an example of multiple users sharing one robot device, each user having an independent environment map and namespace;
fig. 5 is a schematic structural diagram of a robot control device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a robot control apparatus according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The application provides a robot control scheme applicable to the control process of a robot. The robot is a mobile robot that can leave its charging pile to move and work autonomously, and return to the charging pile for charging and other maintenance. Taking the cleaning robot among home service robots as an example, it can perform floor work such as mapping, sweeping and mopping, and return to the charging pile for maintenance work such as charging, mop washing, dust collection and water refilling.
The scheme of the application can be implemented on the robot itself, on terminal devices in communication with the robot (such as a mobile phone, a charging pile, or the cloud), or through cooperation between the robot and such terminal devices.
Next, as described in connection with fig. 1, the robot control method of the present application may include the steps of:
Step S100, acquiring an environment map of the robot, wherein the environment map is partitioned according to rooms.
Specifically, while moving autonomously through the working area, the robot can collect environment data through various onboard sensors (such as an LDS lidar or an RGB image sensor) and construct an environment map, generally according to a SLAM algorithm. After the environment map is constructed, it may be partitioned into a plurality of rooms. Partitioning can rely on image morphology alone, or on morphology combined with visual recognition information; for example, the positions of walls and doors identified in the map can be used to partition the rooms more accurately.
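As an illustration of the morphology-based partitioning mentioned above, the following minimal sketch separates rooms on a binary occupancy grid by eroding the free space until doorways disconnect, labeling the resulting components, and growing the labels back over the free cells. The use of scipy.ndimage, the door_width parameter and the grid representation are assumptions for illustration, not the algorithm prescribed by this application:

```python
# A minimal sketch of morphology-based room partitioning, assuming a
# binary occupancy grid where 1 = free space and 0 = wall/obstacle.
import numpy as np
from scipy import ndimage

def partition_rooms(free_space: np.ndarray, door_width: int = 6) -> np.ndarray:
    """Return an integer grid assigning one room label per free cell."""
    # Erode the free space so the narrow openings at doorways disconnect.
    eroded = ndimage.binary_erosion(free_space,
                                    iterations=max(1, door_width // 2))
    # Each connected component of the eroded map is one room core.
    labels, num_rooms = ndimage.label(eroded)
    # Assign every remaining free cell to its nearest room core.
    indices = ndimage.distance_transform_edt(
        labels == 0, return_distances=False, return_indices=True)
    grown = labels[tuple(indices)]
    return np.where(free_space.astype(bool), grown, 0)
```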
In some scenarios, after the mapping and partitioning described above are completed, the type of each room may also be determined from visual recognition information, e.g., from identified items such as sofas and other furniture. Room types matched in this way use system-level naming; for example, they may be drawn from a system-level preset template of room type names such as "bedroom", "kitchen" and "balcony".
In this step, an environment map of the robot is acquired, and the environment map is partitioned according to rooms. The type name of each room in the environment map may be empty or may be labeled with a system level type name.
Step S110, responding to an instruction of a user for defining a room type for a target room in the environment map, and modifying the room type name of the target room into the user-defined target name in the environment map.
Specifically, the user may issue an instruction for defining a room type for a target room in the environment map on the terminal device, or may issue an instruction for defining a room type for a target room in the environment map in a voice form.
The application can respond to the user's instruction and modify the room type name of the target room in the environment map into the user-defined target name.
For example, the original type name of room a in an environment map is "bedroom two", and the user may modify the type name of room a to be "dad's bedroom".
Step S120, adding the target name into the hotword lexicon corresponding to the user, so that the corresponding hotword lexicon is loaded when the user's voice instruction is parsed by a pre-configured voice instruction analysis model.
To facilitate subsequent voice interaction between the user and the robot and the issuing of tasks by voice, the application pre-configures a voice instruction analysis model for recognition and intention analysis of the user's voice control instructions. In this embodiment, to improve the model's accuracy in recognizing and interpreting user-defined room type names, the user-defined target name can be added as a hotword to the hotword lexicon corresponding to the user, and that lexicon can be loaded into the voice instruction analysis model before it processes the user's voice control instructions.
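A minimal sketch of this per-user hotword lexicon bookkeeping is given below. The in-memory storage and the method names are assumptions made for illustration; as discussed later, a real system might persist the lexicon locally on the robot, on the charging pile, or in the cloud:

```python
# Illustrative per-user hotword lexicon store (names and storage assumed).
class HotwordLexiconStore:
    def __init__(self) -> None:
        # user_id -> set of user-defined room type names used as hotwords
        self._lexicons: dict[str, set[str]] = {}

    def add(self, user_id: str, target_name: str) -> None:
        """Register a user-defined room name such as "dad's room"."""
        self._lexicons.setdefault(user_id, set()).add(target_name)

    def remove(self, user_id: str, target_name: str) -> None:
        """Drop a hotword after the user deletes or renames the room."""
        self._lexicons.get(user_id, set()).discard(target_name)

    def load_for(self, user_id: str) -> frozenset[str]:
        """Snapshot handed to the voice instruction analysis model."""
        return frozenset(self._lexicons.get(user_id, set()))
```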
The robot control method provided by this embodiment of the application supports user-defined room type names in the environment map of the robot. Specifically, in response to a user instruction defining a room type for a target room in the environment map, the room type name of the target room is modified to the user-defined target name in the environment map. The user can name room types according to personal preference, for example "dad's room" or "Xiaobai's nest"; such names better match the user's preferences and memory habits and can be used directly when issuing task instructions to the robot by voice, for example "please clean dad's room". Compared with the prior art, which supports only system-level room type names and requires the user to additionally memorize the correspondence between system-level names and actual rooms before issuing task instructions, interaction is more natural and convenient.
Furthermore, after the room type name of the target room in the environment map has been modified to the user-defined target name, the target name can be added to the hotword lexicon corresponding to the user. That lexicon can be loaded in advance, before the user's voice instruction is parsed by the voice instruction analysis model; in other words, the voice instruction is parsed by a model loaded with the user's hotword lexicon. Because the lexicon is loaded, the model recognizes and interprets the hotwords contained in the user's voice instruction more accurately, so the intention analysis result is more accurate, the user's command intention is better understood, and the robot can be controlled to better execute the task corresponding to the user's command.
Optionally, the user may delete an already customized room type name. In that case, to prevent the voice instruction analysis model from wrongly attending to the deleted room type name, the name can be removed from the hotword lexicon corresponding to the user. In a specific embodiment, after it is detected that the user has deleted a target name customized for a room in the environment map, the corresponding target name is deleted from the hotword lexicon corresponding to the user. Deleting a room's target name may include: changing the name, deleting it, or restoring the system's original name.
In some embodiments of the present application, the user may issue an instruction to define a room type name for a room in a number of different forms, and the following description will be made separately:
1. user issues instruction on terminal equipment
Specifically, through an APP on the terminal device, the user can view the environment map constructed by the robot, together with the partitions and room types in the map. The user can also edit the partitions (split and merge them) on the terminal device, select a target room whose type is to be customized, and edit the target name of that room, thereby issuing an instruction for defining the room type for the target room in the environment map.
Correspondingly, in the foregoing step S110, in response to a user instruction for defining a room type for a target room in the environment map, a process for modifying a room type name of the target room to the user-defined target name in the environment map may specifically include:
responding to the operation of editing the room type name of the target room in the displayed environment map on the terminal equipment by a user, and taking the edited target name as the updated room type name of the target room.
2. User issues instruction through voice
Specifically, the user may use a pure voice instruction, or a voice instruction combined with a body gesture.
2.1 For the voice-only form, the instruction needs to specify the original type name of the target room as well as the modified target name, for example, "please modify the name of bedroom two to dad's room".
Correspondingly, in the foregoing step S110, in response to a user instruction for defining a room type for a target room in the environment map, a process for modifying a room type name of the target room to the user-defined target name in the environment map may specifically include:
in response to the user's voice instruction for defining a room type, if the instruction is recognized as a room type name modification, recognizing and extracting the original type name of the target room and the modified target name from the instruction;
and searching the environment map for the target room bearing the original type name, and modifying its type name into the target name.
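As a rough sketch of the extraction step above, assuming the recognized text follows the English rename template used in the example (a deployed system would rely on the intention analysis model rather than a fixed pattern):

```python
# Illustrative extraction of the original and target names from a
# recognized rename command; the regex template is an assumption.
from __future__ import annotations
import re

RENAME_PATTERN = re.compile(
    r"modify the name of (?P<original>.+?) to (?P<target>.+)$")

def extract_rename(recognized_text: str) -> tuple[str, str] | None:
    match = RENAME_PATTERN.search(recognized_text)
    if match is None:
        return None
    return match.group("original").strip(), match.group("target").strip()

# extract_rename("please modify the name of bedroom two to dad's room")
# -> ("bedroom two", "dad's room")
```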
2.2 For the form combining a voice instruction with a body gesture, the voice instruction needs to give the modified target name, while the user indicates the target room to be renamed through a body gesture. For example, the user may move into the target room and then tell the robot: "please modify the name of the room I am currently in to dad's room"; or the user may point a finger at the target room while saying: "please modify the name of the room I am pointing at to dad's room".
Correspondingly, in the foregoing step S110, in response to a user instruction for defining a room type for a target room in the environment map, a process for modifying a room type name of the target room to the user-defined target name in the environment map may specifically include:
and responding to the voice command of the user for defining the room type, and identifying and extracting the user-defined target name from the voice command.
And determining a target room in which the user issues the voice instruction or determining a target room in which a human body pointing point is located when the user issues the voice instruction.
And modifying the room type name of the target room into the target name in the environment map.
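Resolving the target room in this variant amounts to finding which room partition contains the user's position, or the position the user points at. A minimal sketch under the simplifying assumption that room partitions are axis-aligned rectangles in map coordinates:

```python
# Illustrative point-in-room lookup; the rectangular room model is an
# assumption for brevity (real partitions may be arbitrary polygons).
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class RoomPartition:
    name: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def find_target_room(rooms: list[RoomPartition],
                     x: float, y: float) -> RoomPartition | None:
    """Return the room partition containing the given map coordinate."""
    for room in rooms:
        if room.contains(x, y):
            return room
    return None
```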
This embodiment thus provides several different ways of modifying the target room's type name, supporting user renaming through multiple channels and making the feature more convenient to use.
On the basis of the above embodiments, the application further describes the process of invoking the voice instruction analysis model to process the user's voice control instruction.
Specifically, the user may issue a control instruction to the robot by voice, directing the robot to perform a specific task in a specific area, for example, "please sweep dad's room". On this basis, the application receives the user's voice control instruction, performs recognition and intention analysis on it using a voice instruction analysis model loaded with the hotword lexicon corresponding to the user, and controls the robot to execute the corresponding task operation according to the intention analysis result.
The voice instruction analysis model in this embodiment may be an end-to-end neural network model, i.e., one that directly outputs the intention analysis result for an input voice control instruction.
Alternatively, the voice instruction analysis model may comprise a voice recognition model and an intention analysis model. In that case, the process of performing recognition and intention analysis on the voice control instruction using a model loaded with the user's hotword lexicon may include:
recognizing the voice control instruction with a voice recognition model loaded with the hotword lexicon corresponding to the user to obtain a recognition text, where in the voice recognition decoding stage the weight of the decoding path corresponding to each hotword in the lexicon is raised, improving the success rate of recognizing the hotwords in the lexicon.
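One common way to realize such re-weighting is shallow-fusion hotword boosting during beam-search decoding: hypotheses whose text matches or extends a hotword receive a score bonus, which keeps the hotword's decoding path alive. The sketch below is an assumption about how this could look; the bonus values and scoring interface are illustrative, not taken from this application:

```python
# Illustrative shallow-fusion hotword boosting for beam-search decoding.
def boosted_score(base_score: float, hypothesis_text: str,
                  hotwords: frozenset[str], bonus: float = 2.0) -> float:
    for hotword in hotwords:
        if hotword in hypothesis_text:
            return base_score + bonus        # full hotword matched
        if any(hypothesis_text.endswith(hotword[:k])
               for k in range(1, len(hotword))):
            # Partial prefix matched: smaller bonus keeps the path alive
            # until the full hotword can be decoded.
            return base_score + bonus * 0.5
    return base_score
```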
Further, the intention analysis model is used to perform intention analysis on the recognition text to obtain the intention analysis result.
In this embodiment of the application, a dynamic-entity processing scheme is provided to improve the accuracy of the intention analysis model.
Specifically, if the recognition text is detected to contain a hotword from the hotword lexicon, the hotword in the recognition text is treated as a dynamic entity and replaced with a unified identifier, yielding a replaced recognition text.
For example, suppose the recognition text obtained from the voice recognition model is "sweep dad's room". Since "dad's room" is a hotword in the hotword lexicon, it can be replaced with the unified identifier "[room]", giving the replaced recognition text "sweep [room]".
Further, the intention analysis model performs intention analysis on the replaced recognition text to obtain a preliminary intention analysis result, and the unified identifier in that result is then replaced with the corresponding hotword to obtain the final intention analysis result.
Continuing the example above, the intention analysis model parses the replaced recognition text into the preliminary result: task type "sweep", task work area "[room]". Replacing the unified identifier with the corresponding hotword yields the final result: task type "sweep", task work area "dad's room".
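The whole dynamic-entity round trip can be sketched as follows, where parse_intent stands in for the intention analysis model and is an assumed interface rather than a real API:

```python
# Illustrative dynamic-entity round trip: mask the hotword before intent
# parsing, then restore it in the parsed result.
from typing import Callable

PLACEHOLDER = "[room]"

def parse_with_dynamic_entities(
        text: str,
        hotwords: frozenset[str],
        parse_intent: Callable[[str], dict]) -> dict:
    matched = next((hw for hw in hotwords if hw in text), None)
    if matched is None:
        return parse_intent(text)
    # e.g. "sweep dad's room" -> "sweep [room]"
    preliminary = parse_intent(text.replace(matched, PLACEHOLDER))
    # Restore the hotword in string-valued slots, e.g. "[room]" -> "dad's room".
    return {key: (value.replace(PLACEHOLDER, matched)
                  if isinstance(value, str) else value)
            for key, value in preliminary.items()}
```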
With this scheme, replacing hotwords in the recognition text with a unified identifier reduces the influence of user-defined room type names on the intention analysis model, improving intent recognition accuracy when the user directs the robot by voice to perform a specific task in a specific area.
Some embodiments of the present application further consider the case where multiple users share the same robot device, and provide three alternative implementation schemes.
Scheme 1:
in this scheme, the hotword lexicons corresponding to all users of the same robot are one and the same hotword lexicon.
Referring to fig. 2, multiple users may share one robot device A and the same environment map A, which has a single unified namespace A, so different users (e.g., user A and user B) correspond to the same hotword lexicon A. For example, if user A custom-names one room in environment map A, say "dad's room", then "dad's room" is added to the hotword lexicon and the name also takes effect for user B. User A and user B share the unified hotword lexicon, which may be stored locally on the robot or on a terminal device in communication with the robot, such as the cloud or the charging pile.
When a voice control instruction is later received (the identity of its issuer need not be distinguished), the voice instruction analysis model loaded with the unified hotword lexicon can be invoked to perform recognition and intention analysis on the instruction.
Scheme 2:
in this scheme, the hotword lexicon corresponding to each user is a private hotword lexicon: when multiple users share the same robot device, they may share the same environment map, but each user has an independent namespace over that map, and independently defined room names are added to independent hotword lexicons.
Referring to fig. 3, a user a and a user B share one robot device a and may share the same environment map a, but each has an independent namespace, the user a performs custom naming on each room under the namespace a and adds a naming name to the private hotword lexicon a, and the user B performs custom naming on each room under the namespace B and adds a naming name to the private hotword lexicon B. User a's custom room type name is not validated for user B. For example, user A custom names one of the rooms as "dad's room". If the user B does not custom name the room, the default name is still adopted, for example, "bedroom one", or the user B may custom name the room, for example, "old man's room", and the secondary naming is not effective for the user a. When the robot is positioned on the map A, the user A can use the voice to clean the room of dad, and the user B can use the voice to clean the room of old man, so as to realize the same cleaning purpose of the same room.
Correspondingly, in step S120, the process of adding the user-defined target name to the hotword lexicon corresponding to the user may include: adding the target name into the private hotword lexicon corresponding to the identity of the user. The user's identity may be that of the user currently logged in on the terminal device, for example, the account of the currently logged-in user.
On this basis, when a voice control instruction is later received, the identity of its issuer needs to be identified; the hotword lexicon corresponding to that identity is then obtained and loaded into the voice instruction analysis model, which performs recognition and intention analysis on the instruction.
The issuer of a voice control instruction can be identified by means such as voiceprint recognition or face recognition from images. For example, the voice control instruction can be matched against the voiceprint information registered in advance by each user to determine the issuer's identity. Alternatively, an image sensor on the robot can capture a face image of the issuer and match it against the pre-registered face images of each user.
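A minimal sketch of the voiceprint route, assuming speaker embeddings are produced by some upstream model and compared by cosine similarity; the embedding interface and the 0.7 threshold are assumptions for illustration:

```python
# Illustrative voiceprint matching against pre-registered users.
from __future__ import annotations
import numpy as np

def identify_speaker(command_embedding: np.ndarray,
                     enrolled: dict[str, np.ndarray],
                     threshold: float = 0.7) -> str | None:
    """Return the best-matching enrolled user id, or None if none passes."""
    best_user, best_score = None, threshold
    for user_id, enrolled_embedding in enrolled.items():
        score = float(np.dot(command_embedding, enrolled_embedding)
                      / (np.linalg.norm(command_embedding)
                         * np.linalg.norm(enrolled_embedding)))
        if score > best_score:
            best_user, best_score = user_id, score
    return best_user
```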
Scheme 3:
in this scheme, the hotword lexicon corresponding to each user is likewise a private hotword lexicon, and when multiple users share the same robot device, each user additionally has an independent map and namespace.
Referring to fig. 4, user a and user B may share the same robot device a, but each of user a and user B has an independent map space and namespace. For example, user a has a map a stored on device a, and user a names one of the rooms by definition, e.g. "dad's room", and map a is not visible to user B; the user B stores another map B on the device A, the user B names one room as a ' old man ' room ' by self definition, and the map B is invisible to the user A; the robot can distinguish the user A from the user B through voiceprint recognition or image face recognition, and when the user A says "clean the father room", the robot is switched into the map A enjoyed by the user A to reposition and autonomously move to the corresponding area to start cleaning; when the user B says "clean the room of the old man", the robot is switched to the map B enjoyed by the user B for repositioning and cleaning; thus, in actual use, the map a and the map B may correspond to the same actual home environment, or may correspond to different home environments. The private hotword lexicons of each of user a and user B may be stored locally at the robotic device a or at a cloud server.
In scheme 3, the step S100 of acquiring the environment map of the robot may specifically include:
the identity of the current user is identified.
And calling the environment map in the map storage space corresponding to the identity of the current user, wherein the environment map created by the user corresponding to the identity is stored in the map storage space.
As described in the foregoing embodiments, the user may issue the instruction defining a room type name in several different forms, and the way the current user's identity is identified in this step may differ accordingly.
If the user issues the instruction in manner 1 above, i.e., on the terminal device, this step can identify the identity of the user currently logged in on the terminal device.
If the user issues the instruction in manner 2 above, i.e., by voice, this step can confirm the identity of the user issuing the voice instruction by means such as voiceprint or face recognition.
The embodiments above disclose three different strategies for robot control when multiple users share the same robot device; in practice, one or more of these schemes may be selected according to actual application requirements.
It should be noted that each robot control method described in the above embodiments may be implemented with the robot offline or networked. Understandably, if the environment map of the robot, the hotword lexicon corresponding to the user, and the voice instruction analysis model are all stored locally on the robot, the robot can carry out the control method offline; if they reside in the cloud (or on a server), the robot must obtain this information over the network to complete the control method.
The following describes a robot control device provided in an embodiment of the present application, and the robot control device described below and the robot control method described above may be referred to correspondingly to each other.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a robot control device according to an embodiment of the present application.
As shown in fig. 5, the apparatus may include:
a map acquisition unit 11 for acquiring an environment map of the robot, the environment map being partitioned by room;
a name modifying unit 12, configured to modify, in response to an instruction of a user to define a room type for a target room in the environment map, a room type name of the target room to a target name customized by the user in the environment map;
and a lexicon adding unit 13, configured to add the target name into the hotword lexicon corresponding to the user, so that the corresponding hotword lexicon is loaded when the user's voice instruction is parsed by a pre-configured voice instruction analysis model.
Optionally, the robot control device may further include:
a voice control unit, configured to receive the user's voice control instruction, perform recognition and intention analysis on it using a voice instruction analysis model loaded with the hotword lexicon corresponding to the user, and control the robot to execute the corresponding task operation according to the intention analysis result.
Optionally, the process of modifying, by the name modifying unit, the name of the room type of the target room to the user-defined target name in the environment map in response to the instruction of the user to define the room type of the target room in the environment map may include:
responding to the operation of editing the room type name of the target room in the displayed environment map on the terminal equipment by a user, and taking the edited target name as the updated room type name of the target room.
Optionally, the process of modifying, by the name modifying unit, the name of the room type of the target room to the user-defined target name in the environment map in response to the instruction of the user to define the room type of the target room in the environment map may include:
Responding to a voice command of a user for defining a room type, and identifying and extracting a user-defined target name from the voice command;
determining a target room in which a user issues the voice instruction, or determining a target room in which a human body pointing point is located when the user issues the voice instruction;
and modifying the room type name of the target room into the target name in the environment map.
Optionally, the robot control device may further include:
and a lexicon deleting unit, configured to delete the corresponding target name from the hotword lexicon corresponding to the user after it is detected that the user has deleted a target name customized for a room in the environment map.
Optionally, the voice instruction analysis model may include a voice recognition model and an intention analysis model, and the process by which the voice control unit performs recognition and intention analysis on the voice control instruction using a voice instruction analysis model loaded with the hotword lexicon corresponding to the user may include:
recognizing the voice control instruction by using a voice recognition model loaded with the hotword lexicon corresponding to the user to obtain a recognition text, wherein the weight of the decoding path corresponding to each hotword in the hotword lexicon is raised in the voice recognition decoding stage;
and performing intention analysis on the recognition text by using the intention analysis model to obtain an intention analysis result.
Optionally, the process by which the voice control unit performs intention analysis on the recognition text using the intention analysis model to obtain an intention analysis result may include:
if it is detected that the recognition text contains a hotword in the hotword lexicon, replacing the hotword in the recognition text with a unified identifier to obtain a replaced recognition text;
performing intention analysis on the replaced recognition text by using the intention analysis model to obtain a preliminary intention analysis result;
and replacing the unified identifier in the preliminary intention analysis result with the hotword to obtain a final intention analysis result.
Optionally, the hotword lexicons corresponding to all users of the same robot may be one and the same hotword lexicon. Alternatively, if the hotword lexicon corresponding to each user is a private hotword lexicon, the process by which the lexicon adding unit adds the target name to the hotword lexicon corresponding to the user may include:
adding the target name into the private hotword lexicon corresponding to the identity of the user.
Optionally, the process of the map obtaining unit obtaining the environment map of the robot may include:
identifying the identity of the current user;
and calling the environment map in the map storage space corresponding to the identity of the current user, wherein the environment map created by the user corresponding to the identity is stored in the map storage space.
The robot control device provided by this embodiment of the application may be applied in a robot control apparatus, which may be the central processing unit (CPU) of the robot, or the cloud, the charging pile, or another terminal device in communication with the robot. Optionally, fig. 6 shows a block diagram of the hardware structure of the robot control apparatus. Referring to fig. 6, the hardware structure may include: at least one processor 1, at least one communication interface 2, at least one memory 3 and at least one communication bus 4;
in this embodiment of the application, there is at least one of each of the processor 1, the communication interface 2, the memory 3 and the communication bus 4, and the processor 1, the communication interface 2 and the memory 3 communicate with one another through the communication bus 4;
the processor 1 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application;
the memory 3 may comprise a high-speed RAM memory, and may further comprise a non-volatile memory, such as at least one magnetic disk memory;
wherein the memory stores a program, the processor is operable to invoke the program stored in the memory, the program operable to:
acquiring an environment map of the robot, wherein the environment map is partitioned according to rooms;
responding to an instruction of a user for defining a room type for a target room in the environment map, and modifying the room type name of the target room into a target name customized by the user in the environment map;
and adding the target name into the hotword lexicon corresponding to the user, so that the corresponding hotword lexicon is loaded when the user's voice instruction is parsed by a pre-configured voice instruction analysis model.
Alternatively, the refinement function and the extension function of the program may be described with reference to the above.
The embodiment of the present application also provides a storage medium storing a program adapted to be executed by a processor, the program being configured to:
acquiring an environment map of the robot, wherein the environment map is partitioned according to rooms;
responding to an instruction of a user for defining a room type for a target room in the environment map, and modifying the room type name of the target room into a target name customized by the user in the environment map;
and adding the target name into the hotword lexicon corresponding to the user, so that the corresponding hotword lexicon is loaded when the user's voice instruction is parsed by a pre-configured voice instruction analysis model.
Alternatively, the refinement function and the extension function of the program may be described with reference to the above.
The embodiment of the application also provides a robot, which comprises: the robot comprises a robot main body, and an environment sensing sensor, a microphone and control equipment which are arranged on the robot main body;
the environment sensing sensor is used for collecting environment data and constructing an environment map. The environment sensing sensor can detect and identify the surrounding environment of the robot, can be regarded as an 'eye' of the robot, and captures external environment information. Environmental awareness sensors include, but are not limited to: laser radar, camera, millimeter wave radar, ultrasonic radar, infrared sensor, etc.
The microphone is configured to receive instructions issued by the user in voice form, for example a voice instruction defining a room type, or a voice control instruction directing the robot to execute a corresponding task operation.
The control device is used for realizing the steps of the robot control method, and the control device can be a Central Processing Unit (CPU) and the like.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the present specification, each embodiment is described in a progressive manner, and each embodiment focuses on the difference from other embodiments, and may be combined according to needs, and the same similar parts may be referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (14)

1. A robot control method, comprising:
acquiring an environment map of the robot, wherein the environment map is partitioned according to rooms;
responding to an instruction of a user for defining a room type for a target room in the environment map, and modifying the room type name of the target room into a target name customized by the user in the environment map;
and adding the target name into a hotword lexicon corresponding to the user, so that the corresponding hotword lexicon is loaded when the user's voice instruction is parsed by a pre-configured voice instruction analysis model.
2. The method as recited in claim 1, further comprising:
receiving the voice control instruction of the user, performing recognition and intention analysis on the voice control instruction by using a voice instruction analysis model loaded with the hotword lexicon corresponding to the user, and controlling the robot to execute the corresponding task operation according to the intention analysis result.
3. The method of claim 1, wherein modifying the room type name of the target room to the user-defined target name in the environment map in response to a user-defined room type instruction for the target room in the environment map comprises:
responding to the operation of editing the room type name of the target room in the displayed environment map on the terminal equipment by a user, and taking the edited target name as the updated room type name of the target room.
4. The method of claim 1, wherein modifying the room type name of the target room to the user-defined target name in the environment map in response to a user-defined room type instruction for the target room in the environment map comprises:
responding to a voice command of a user for defining a room type, and identifying and extracting a user-defined target name from the voice command;
Determining a target room in which a user issues the voice instruction, or determining a target room in which a human body pointing point is located when the user issues the voice instruction;
and modifying the room type name of the target room into the target name in the environment map.
5. The method as recited in claim 1, further comprising:
after detecting that the user has deleted a target name customized for a room in the environment map, deleting the corresponding target name from the hotword lexicon corresponding to the user.
6. The method of claim 2, wherein the voice instruction analysis model comprises a voice recognition model and an intention analysis model, and performing recognition and intention analysis on the voice control instruction by using a voice instruction analysis model loaded with the hotword lexicon corresponding to the user comprises:
recognizing the voice control instruction by using a voice recognition model loaded with the hotword lexicon corresponding to the user to obtain a recognition text, wherein the weight of the decoding path corresponding to each hotword in the hotword lexicon is raised in the voice recognition decoding stage;
and performing intention analysis on the recognition text by using the intention analysis model to obtain an intention analysis result.
7. The method of claim 6, wherein performing intention analysis on the recognition text by using the intention analysis model to obtain an intention analysis result comprises:
if it is detected that the recognition text contains a hotword in the hotword lexicon, replacing the hotword in the recognition text with a unified identifier to obtain a replaced recognition text;
performing intention analysis on the replaced recognition text by using the intention analysis model to obtain a preliminary intention analysis result;
and replacing the unified identifier in the preliminary intention analysis result with the hotword to obtain a final intention analysis result.
8. The method of claim 1, wherein the hotword lexicon corresponding to each user using the same robot is the same hotword lexicon.
9. The method of claim 1, wherein if the hotword lexicon corresponding to each user is a private hotword lexicon, adding the target name to the hotword lexicon corresponding to the user comprises:
and adding the target name into a private hot word lexicon corresponding to the identity of the user.
10. The method of claim 9, wherein the acquiring the environment map of the robot comprises:
Identifying the identity of the current user;
and calling the environment map in the map storage space corresponding to the identity of the current user, wherein the environment map created by the user corresponding to the identity is stored in the map storage space.
11. A robot control device, comprising:
the map acquisition unit is used for acquiring an environment map of the robot, and the environment map is partitioned according to rooms;
the name modifying unit is used for responding to an instruction of a user for defining a room type of a target room in the environment map, and modifying the room type name of the target room into the user-defined target name in the environment map;
and a lexicon adding unit, configured to add the target name into the hotword lexicon corresponding to the user, so that the corresponding hotword lexicon is loaded when the user's voice instruction is parsed by a pre-configured voice instruction analysis model.
12. A robot control apparatus, comprising: a memory and a processor;
the memory is used for storing programs;
the processor is configured to execute the program to implement the respective steps of the robot control method according to any one of claims 1 to 10.
13. A storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the robot control method according to any one of claims 1 to 10.
14. A robot, comprising: the robot comprises a robot main body, and an environment sensing sensor, a microphone and control equipment which are arranged on the robot main body;
the environment sensing sensor is used for collecting environment data and constructing an environment map;
the microphone is used for receiving instructions issued by the user in voice form;
the control device is used for performing the steps of the robot control method of any one of claims 1-10.
CN202310623952.8A 2023-05-29 2023-05-29 Robot, control method, device, equipment and storage medium thereof Pending CN116636774A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310623952.8A CN116636774A (en) 2023-05-29 2023-05-29 Robot, control method, device, equipment and storage medium thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310623952.8A CN116636774A (en) 2023-05-29 2023-05-29 Robot, control method, device, equipment and storage medium thereof

Publications (1)

Publication Number Publication Date
CN116636774A true CN116636774A (en) 2023-08-25

Family

ID=87622588

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310623952.8A Pending CN116636774A (en) 2023-05-29 2023-05-29 Robot, control method, device, equipment and storage medium thereof

Country Status (1)

Country Link
CN (1) CN116636774A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117351944A (en) * 2023-12-06 2024-01-05 科大讯飞股份有限公司 Speech recognition method, device, equipment and readable storage medium
CN117351944B (en) * 2023-12-06 2024-04-12 科大讯飞股份有限公司 Speech recognition method, device, equipment and readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination