CN113936651A - Positioning system and method for robot - Google Patents

Positioning system and method for robot

Info

Publication number
CN113936651A
CN113936651A
Authority
CN
China
Prior art keywords
user
voice
robot
groups
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110975418.4A
Other languages
Chinese (zh)
Inventor
陈春杰
吴新宇
叶鑫
陈少聪
陈灵星
张哲文
王卓
刘贻达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202110975418.4A priority Critical patent/CN113936651A/en
Publication of CN113936651A publication Critical patent/CN113936651A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)

Abstract

The application provides a positioning system and method for a robot. The positioning system of the robot includes: at least two groups of sound pickup devices and a controller. The at least two groups of sound pickup devices are used to collect, from at least two positions, the voice instruction issued by a user and to obtain, for each position, the azimuth information of the voice instruction collected there; the controller is connected with the at least two groups of sound pickup devices and determines the current position information of the user based on the at least two pieces of azimuth information. The robot positioning system can accurately locate the user's position and is not limited by complex scenes or privacy areas.

Description

Positioning system and method for robot
Technical Field
The invention relates to the technical field of robots, and in particular to a positioning system and a positioning method for a robot.
Background
With the development of network technology, robot technology has matured, and intelligent home robots are gradually entering everyday life.
To serve the user better, a robot generally needs to keep acquiring the user's position during service so that it can complete the corresponding instructions. At present, an intelligent home robot typically locates the user's specific position with one or more of a depth camera, an infrared camera, a high-definition camera, and the like. However, in a complex scene, for example when an obstruction stands between the camera and the user, the camera cannot accurately locate the user; and in privacy areas marked by the user, such as bedrooms and toilets, the camera stops working at the user's request, so the robot loses its positioning capability and cannot carry out its normal service.
Disclosure of Invention
The application provides a positioning system and a positioning method for a robot; the positioning system can solve the problem that an existing robot cannot accurately locate the user's position when facing a complex scene or a privacy area.
To solve this technical problem, the application adopts the following technical scheme: a positioning system for a robot is provided. The positioning system of the robot includes: at least two groups of sound pickup devices and a controller; the at least two groups of sound pickup devices are used to collect, from at least two positions, the voice instruction issued by a user and to obtain, for each position, the azimuth information of the voice instruction collected there; the controller is connected with the at least two groups of sound pickup devices and is used to determine the current position information of the user based on the at least two pieces of azimuth information.
Wherein, each sound pickup device comprises a microphone annular array; the microphone annular array is used for acquiring voice instructions sent by users in different spatial directions at the current position and acquiring azimuth angle values of the voice instructions acquired in different spatial directions.
Wherein, the annular array of microphones is in hexagonal distribution.
Wherein the annular array of microphones comprises: a pickup assembly and a processing assembly; the pickup assembly is used for acquiring voice instructions of different spatial directions of the current position; the processing component is connected with the pickup component and used for comparing the volume values of the voice instructions in different spatial directions collected by the pickup component and acquiring the azimuth value corresponding to the voice instruction with the maximum volume value.
The microphone annular array further comprises a wake-up assembly connected with the pickup assembly and used for acquiring a wake-up instruction of a user so as to start the pickup assembly to work according to the wake-up instruction.
The controller determines the current azimuth information of the user based on the linear distance of the two groups of microphone annular arrays and the azimuth angle values respectively collected by the two groups of microphone annular arrays.
The controller is further used for recognizing the voice command and controlling the robot to execute the voice command according to the recognized voice command and the current position information and according to a preset mode.
Wherein, the at least two groups of sound pickup devices correspond one-to-one to a plurality of regular hexagons and are each located at the center of its regular hexagon.
In order to solve the technical problem, the application adopts a technical scheme that: a robot positioning method is provided. The robot positioning method comprises the following steps: respectively acquiring voice instructions sent by a user from at least two positions through at least two groups of sound pickup devices, and respectively acquiring azimuth information of the voice instructions acquired at each position; determining, by the controller, current location information of the user based on the at least two location information.
Wherein, the sound pickup device comprises a microphone annular array; the voice instructions sent by the user are respectively collected from at least two positions through at least two groups of sound pickup devices, and the steps of respectively obtaining the azimuth information of the voice instructions collected at each position specifically comprise: respectively acquiring voice instructions sent by a user from at least two positions through at least two groups of microphone annular arrays, and respectively acquiring the azimuth angle value of the voice instructions acquired at each position; the step of determining, by the controller, current location information of the user based on the at least two location information specifically comprises: and determining the current azimuth information of the user by the controller based on the linear distance of the two groups of microphone annular arrays and the azimuth angle values respectively collected by the two groups of microphone annular arrays.
The microphone annular array comprises a pickup assembly and a processing assembly connected with the pickup assembly; the steps of respectively acquiring voice commands sent by a user from at least two positions through at least two groups of microphone annular arrays and respectively acquiring the azimuth angle value of the voice command acquired at each position specifically comprise: respectively acquiring voice instructions of a user from different spatial directions of at least two positions through at least two groups of pickup assemblies; and comparing the volume values of the voice commands in different spatial directions collected by the pickup assembly through the processing assembly, and acquiring the azimuth value corresponding to the voice command with the maximum volume value.
Wherein, before the step of collecting, through the at least two groups of sound pickup devices, the voice instruction issued by the user from at least two positions and respectively obtaining the azimuth information of the voice instruction collected at each position, the method further includes: acquiring a wake-up instruction of the user through the wake-up assembly, and starting the pickup assembly according to the wake-up instruction.
Wherein, the method further comprises: and recognizing the voice command through the controller, and controlling the robot to execute the voice command according to the recognized voice command and the current position information and the preset mode.
In the positioning system of the robot provided by the application, at least two groups of sound pickup devices are arranged, so that the voice instruction issued by a user is collected from at least two positions and the azimuth information of the voice instruction collected at each position is obtained; meanwhile, a controller connected with the at least two groups of sound pickup devices determines the current position information of the user based on the at least two pieces of azimuth information. Because the positioning system locates the user's current position from the user's voice instruction, it can locate the user accurately, is not limited by complex scenes or privacy areas, and has strong practicality. Moreover, because the current position is determined from at least two pieces of collected azimuth information, the system is more accurate than schemes that determine the user's position from a single piece of azimuth information, and it avoids the robot walking to the wrong place, or failing to reach the task place, because the user's voice instruction was spoken too fast or was too short.
Drawings
Fig. 1 is a schematic structural diagram of a positioning system of a robot according to an embodiment of the present disclosure;
fig. 2 is a schematic position diagram of at least two groups of sound pickup devices and a user according to an embodiment of the present disclosure;
fig. 3 is a schematic plan view of at least two groups of sound pickup devices arranged in a certain area according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a sound pickup apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a positioning system of a robot according to another embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a positioning system of a robot according to another embodiment of the present application;
fig. 7 is a flowchart of a positioning method of a robot according to an embodiment of the present application;
FIG. 8 is a sub-flowchart of step S11 of FIG. 7 according to an embodiment of the present application;
fig. 9 is a flowchart of a positioning method for a robot according to another embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and "third" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any indication of the number of technical features indicated. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one of the feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise. All directional indications (such as up, down, left, right, front, and rear … …) in the embodiments of the present application are only used to explain the relative positional relationship between the components, the movement, and the like in a specific posture (as shown in the drawings), and if the specific posture is changed, the directional indication is changed accordingly. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
During its use, a robot generally needs to perform a positioning operation in order to serve users. Patent CN201910391585.7 provides an intelligent home voice control system that realizes sound control with a voice recognition function, quickly opens and closes home appliances, and controls each room independently. However, that patent does not describe a specific voice recognition system or method; its built-in infrared module for sensing whether the user is indoors works only in open scenes and may misjudge when the user is behind an obstruction; and the system can only perform home control functions.
Patent CN202011006270.5 provides a voice positioning method and related device based on a sweeping robot: the microphone array carried by the robot determines the volume value of a first voice instruction collected by each microphone, obtains the user's direction from the voice instruction with the largest volume value, moves toward that direction while continuously collecting sound signals, and finally reaches the user's location to sweep. However, a single group of microphone arrays locates the user's specific position with low accuracy and can only determine the user's general direction; and if the user's instruction is spoken too fast or is too short, inaccurate positioning easily occurs, so that the sweeping robot goes to the wrong place or fails to reach the task place.
Patent CN201811277380.8 provides a microphone positioning method and device. The method includes: determining an initial position of a microphone directly in front of the light-emitting surface of a display screen; setting a target position for the microphone; determining, from the target position and the initial position, a first movement displacement of the microphone perpendicular to the light-emitting surface and a second movement displacement parallel to it; and controlling the microphone to move to the target position according to the two displacements. However, the method must rely on the light-emitting screen to reflect ultrasonic waves for positioning, and because the screen's extent is limited, in a real scene the sound waves emitted by the ultrasonic module may never be reflected by the screen back to the module, so its practicality is weak.
The application provides a positioning system and method for a robot: even with the camera turned off, or without any camera, a multi-microphone annular array and a background processor can still locate the user who issued a voice instruction, with high positioning accuracy; and the method is not limited by obstructions, other complex environments, or privacy areas.
The present application will be described in detail with reference to the accompanying drawings and examples.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a positioning system of a robot according to an embodiment of the present disclosure; in this embodiment, a positioning system for a robot is provided, where the system is applicable to a mobile robot, and the mobile robot may be a sweeping robot, a mopping robot, a sweeping and mopping integrated robot, a weeding robot, a window wiping robot, a patrol robot, a security robot, etc.
Specifically, the positioning system of the robot comprises at least two groups of sound pickup devices 11 and a controller 12.
Referring to fig. 2, fig. 2 is a schematic position diagram of at least two groups of sound pickup devices and a user according to an embodiment of the present application; the at least two groups of sound pickup devices 11 are located at different positions of a certain area and are used to collect, from at least two positions, the voice instruction issued by a user and to obtain the azimuth information of the voice instruction collected at each position. Taking the two groups of sound pickup devices 11 in fig. 2 as an example, the first group of sound pickup devices 11 is located at position O and the second group at position A, while the user is located at position B. The first group of sound pickup devices 11 collects, from position O, the voice instruction issued by the user at position B and obtains the azimuth information of the collected voice instruction; the second group of sound pickup devices 11 collects, from position A, the voice instruction issued by the user at position B and obtains the azimuth information of the collected voice instruction. It will be appreciated that the azimuth information obtained by the first and second groups of sound pickup devices 11 is not the same.
In an embodiment, referring to fig. 3, fig. 3 is a schematic plan view of at least two groups of sound pickup devices provided in an embodiment of the present application; for the best economy, the area to be served by the robot can be divided into hexagons of a certain fixed size. In an embodiment, the at least two groups of sound pickup devices 11 may be arranged in one-to-one correspondence with a plurality of hexagons, each located at the center of its hexagon. In this way, no matter where the user is in the area to be served, two or more sound pickup devices 11 can accurately collect the voice instruction issued by the user, realizing voice positioning of the whole service area without blind corners. The radius of the circle corresponding to each hexagon can be 3-5 meters.
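The hexagonal tiling described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes a pointy-top honeycomb with circumradius r and places one group of pickup devices at each cell center; the function name and layout parameters are hypothetical.

```python
import math

def hex_centers(rows, cols, r):
    """Centers of a pointy-top hexagonal tiling with circumradius r.

    One group of sound pickup devices would sit at each center;
    odd rows are offset by half the horizontal pitch (honeycomb).
    """
    w, h = math.sqrt(3) * r, 1.5 * r      # horizontal / vertical pitch
    return [(col * w + (row % 2) * w / 2, row * h)
            for row in range(rows) for col in range(cols)]

# r = 4 m lies inside the 3-5 m range suggested above
centers = hex_centers(2, 3, r=4.0)
```

In such a tiling all adjacent cell centers are sqrt(3)·r apart, which is what lets two or more devices pick up the same command anywhere in the area.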
The controller 12 is connected to at least two groups of sound pickup means 11 for determining current position information of the user based on at least two position information. The positioning system is used for positioning the current position of the user based on the voice command of the user, a camera is not needed, so that the accurate positioning of the position of the user can be realized, the limitation of a complex scene or a privacy area is avoided, the user can be accurately positioned under the condition that a shielding object exists, the human-computer interaction under the privacy environment can be realized, the specified function is completed, and the practicability is high. Meanwhile, the positioning system determines the current position information of the user based on the acquired at least two position information, so that the positioning system has higher accuracy compared with a scheme of determining the position of the user based on one position information, and can avoid the problem that the robot walks or does not walk to a task place due to the fact that a voice instruction output by the user is too fast or too short.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a sound pickup apparatus according to an embodiment of the present application; each sound pickup device 11 comprises an annular array of microphones, distributed hexagonally. In a specific embodiment, the annular array of microphones is used to collect the voice instruction issued by the user from different spatial directions at its own position, and to obtain the azimuth angle values of the voice instruction collected from the different spatial directions. Assume fig. 4 corresponds to the annular array of microphones of the first group of sound pickup devices 11 in fig. 2; this array collects, at position O, the voice instruction of the user at position B from six different spatial directions a, b, c, d, e, f. It is understood that in this embodiment each position, such as position O and position A, captures six voice instructions of different volume values; the azimuth angle values corresponding to those six voice instructions are then obtained at each position. Specifically, each sound pickup device 11 further includes a plurality of LED lamps 11a and a plurality of mounting holes 11b, arranged at intervals and distributed annularly. The LED lamps 11a are used for illumination; the sound pickup device 11 is fixed in place through the mounting holes 11b.
In a specific embodiment, referring to fig. 5, fig. 5 is a schematic structural diagram of a positioning system of a robot according to another embodiment of the present disclosure; the annular array of microphones includes a pickup assembly 111 and a processing assembly 112.
The sound pickup assembly 111 may include a plurality of microphones 11c, and the plurality of microphones 11c collect voice instructions of different spatial directions of the far field or the near field of the current position by using a microphone array technology. For example, referring to fig. 4, the sound pickup assembly 111 includes six microphones 11c for picking up voice commands of the user from six different spatial directions a, b, c, d, e, and f, respectively.
The processing component 112 is connected to the sound pickup component 111, and is configured to compare the volume values of the voice commands collected by the sound pickup component 111 in different spatial directions, and obtain an azimuth angle value corresponding to the voice command with the largest volume value. Specifically, the processing component 112 compares the collected volume value of each voice command, and then screens out the microphone 11c with the largest volume value; it is understood that the microphone 11c with the largest volume value is pointed to a position approximately closer to the user, and then the current azimuth angle value of the user is obtained based on the microphone 11c, and then the azimuth angle value and the voice command are transmitted to the controller 12.
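The volume comparison performed by the processing component 112 can be sketched as follows. This is a minimal illustration, not the patent's algorithm: it assumes the microphones are evenly spaced on the ring with microphone 0 pointing at 0°, and the function name and the per-microphone volume inputs are hypothetical.

```python
def dominant_azimuth(volumes):
    """Return the azimuth (degrees) of the loudest microphone.

    volumes: one volume value (e.g. an RMS level) per microphone,
    with the microphones evenly spaced around the ring.
    """
    loudest = max(range(len(volumes)), key=lambda i: volumes[i])
    return loudest * 360.0 / len(volumes)
```

With six microphones a-f, a reading like [0.2, 0.5, 0.9, 0.4, 0.1, 0.1] selects the third microphone's direction; that azimuth angle value, together with the voice instruction, is what would be passed on to the controller 12.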
In particular, the processing component 112 is further configured to perform noise reduction and/or enhancement processing on the voice instruction, which effectively strengthens the user's specific voice instruction in a noisy environment, suppressing the noise while enhancing the voice.
It will be appreciated that in particular embodiments the at least two groups of sound pickup devices 11 each obtain an azimuth angle value via their respective annular arrays of microphones. For example, the first group of sound pickup devices 11 in fig. 2 obtains the azimuth angle value ∠AOB between its position O and the user; the second group of sound pickup devices 11 obtains the azimuth angle value ∠OAB between its position A and the user.
In this embodiment, the controller 12 can specifically determine the current position information of the user based on the linear distance between the two sets of annular microphone arrays and the azimuth angle values they respectively collect. The linear distance between the two arrays can be set in advance according to the actual application scene. For example, if the at least two groups of annular microphone arrays are distributed as shown in fig. 2, the linear distance OA between the two arrays is (n + 1) × R, where n is the number of hexagons between the two arrays and R is the diameter of the circle corresponding to the hexagon.
Specifically, on a three-dimensional coordinate axis, according to the cosine theorem:
BO² = AO² + AB² - 2·AB·AO·cos∠OAB (1);
AB² = BO² + AO² - 2·BO·AO·cos∠AOB (2);
by combining equation (1) and equation (2), the distance OB from the first group of multi-microphone annular arrays to the user and the distance AB from the second group of multi-microphone annular arrays to the user can be calculated, and thus the coordinates of the user's specific position B in space can be determined.
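Solving (1) and (2) together amounts to solving triangle OAB from its base OA and the two base angles. A minimal planar sketch of that solution (here written in the equivalent law-of-sines form; the function name is illustrative, not from the patent):

```python
import math

def locate_user(oa, angle_aob_deg, angle_oab_deg):
    """Solve triangle OAB: O at the origin, A at (oa, 0).

    angle_aob_deg is the azimuth of the user B measured at O,
    angle_oab_deg the azimuth measured at A. Returns B's planar
    coordinates plus the distances OB and AB.
    """
    aob = math.radians(angle_aob_deg)
    oab = math.radians(angle_oab_deg)
    oba = math.pi - aob - oab                 # third angle of the triangle
    if oba <= 0:
        raise ValueError("azimuths do not form a valid triangle")
    ob = oa * math.sin(oab) / math.sin(oba)   # law of sines
    ab = oa * math.sin(aob) / math.sin(oba)
    return (ob * math.cos(aob), ob * math.sin(aob)), ob, ab

# equilateral check: both azimuths 60 degrees, OA = 2
(bx, by), ob, ab = locate_user(2.0, 60.0, 60.0)
```

In the equilateral check OB = AB = OA = 2 and B lands at (1, √3), which is easy to verify against either cosine relation above.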
Of course, in a specific embodiment, the current position information of the user may also be determined from the azimuth information of voice instructions collected by two groups of sound pickup devices 11 at other positions; the intermediate position of the repeatedly determined positions is then taken as the user's current position. This further improves positioning accuracy.
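Taking the "intermediate position" of several fixes can be read as averaging them; a minimal sketch under that assumption (the function name and sample fixes are illustrative):

```python
def fuse_estimates(points):
    """Average repeated (x, y) position fixes into one estimate."""
    n = len(points)
    return (sum(x for x, _ in points) / n,
            sum(y for _, y in points) / n)

# three fixes of the same user from different array pairs
estimate = fuse_estimates([(1.0, 2.0), (1.2, 1.8), (0.8, 2.2)])
```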
In a specific implementation, the controller 12 is further configured to recognize the voice instruction and, according to the recognized voice instruction and the current position information, control the robot to execute it in a preset mode. The preset mode may include directly completing the voice instruction, completing it after reaching the user, or first completing part of it and completing the rest after reaching the user, and so on. For example, if the user's voice instruction is "please play the song 'Duckling'", the controller 12 recognizes the instruction and, according to the instruction and the user's current position information, controls the robot to play the song directly. If the user's voice instruction is "please go to the kitchen and fetch the fruit tray", the controller 12 recognizes the instruction and, according to the instruction and the user's current position information, controls the robot to reach the user and complete it.
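The preset-mode dispatch can be sketched as below. RobotStub and its play/go_to/run methods are hypothetical stand-ins for whatever actuation API the robot exposes, and the keyword test is only an illustration of "complete directly" versus "go to the user first"; the patent does not specify either.

```python
class RobotStub:
    """Hypothetical robot API, recording actions for illustration."""
    def __init__(self):
        self.log = []
    def play(self, cmd):
        self.log.append(("play", cmd))
    def go_to(self, pos):
        self.log.append(("go_to", pos))
    def run(self, cmd):
        self.log.append(("run", cmd))

def execute(command, user_pos, robot):
    # "direct" commands finish in place; fetch-style commands
    # navigate to the user's located position first
    if command.startswith("play"):
        robot.play(command)
    else:
        robot.go_to(user_pos)
        robot.run(command)

robot = RobotStub()
execute("play Duckling", (1.0, 1.7), robot)
execute("fetch the fruit tray", (1.0, 1.7), robot)
```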
In an embodiment, referring to fig. 6, fig. 6 is a schematic structural diagram of a positioning system of a robot according to another embodiment of the present application; the microphone annular array further comprises a wake-up component 113, wherein the wake-up component 113 is connected with the sound pickup component 111 and is used for acquiring a wake-up instruction of a user so as to start the sound pickup component 111 to work according to the wake-up instruction.
In this embodiment, the robot needs to be woken before the sound pickup assembly 111 is turned on. Therefore, before the voice instruction is collected, a voice wake-up signal of the target user must be obtained: the user calls out a wake-up word such as "love, love"; after receiving the wake-up instruction, the robot automatically detects and recognizes the wake-up word and then starts the sound pickup assembly 111 according to the recognition result, so that the user's voice instruction can be collected through the sound pickup assembly 111.
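The wake-up flow above can be sketched as a small gate: the pickup assembly stays off until the wake word is detected, after which utterances are collected as commands. The class name and wake word are placeholders, not the patent's.

```python
class WakeGate:
    WAKE_WORD = "hello robot"   # placeholder wake word

    def __init__(self):
        self.listening = False  # pickup assembly 111 initially off
        self.commands = []

    def hear(self, utterance):
        text = utterance.strip().lower()
        if not self.listening:
            # while asleep, only the wake word is acted on
            self.listening = (text == self.WAKE_WORD)
        else:
            self.commands.append(utterance)

gate = WakeGate()
gate.hear("hello robot")           # wakes the pickup assembly
gate.hear("please play a song")    # now collected as a command
```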
In the positioning system of the robot provided in this embodiment, at least two groups of sound pickup devices 11 are arranged, so that the voice instruction issued by a user is collected from at least two positions and the azimuth information of the voice instruction collected at each position is obtained; meanwhile, the controller 12, connected with the at least two groups of sound pickup devices 11, determines the current position information of the user based on the at least two pieces of azimuth information. Because the positioning system locates the user's current position from the user's voice instruction, it can locate the user accurately, is not limited by complex scenes or privacy areas, and has strong practicality. Moreover, because the current position is determined from at least two pieces of collected azimuth information, the system is more accurate than schemes that determine the user's position from a single piece of azimuth information, and it avoids the robot walking to the wrong place, or failing to reach the task place, because the user's voice instruction was spoken too fast or was too short.
Referring to fig. 7, fig. 7 is a flowchart of a positioning method of a robot according to an embodiment of the present application. This embodiment provides a positioning method of a robot, which can be performed by the positioning system of a robot provided in any of the above embodiments. Specifically, the method includes the following steps:
step S11: voice instructions sent by a user are respectively collected from at least two positions through at least two groups of sound pickup devices, and azimuth information of the voice instructions collected at each position is respectively obtained.
Wherein each sound pickup device 11 includes an annular microphone array. Step S11 may therefore collect the voice instruction sent by the user from at least two positions through at least two groups of annular microphone arrays, and obtain the azimuth angle value of the voice instruction collected at each position.
In a specific embodiment, referring to fig. 8, fig. 8 is a sub-flowchart of step S11 in fig. 7 according to an embodiment of the present application; the annular array of microphones includes a pickup assembly 111 and a processing assembly 112. Step S11 specifically includes:
step S111: the voice instructions of the user are respectively collected from different spatial directions of at least two positions through at least two groups of pickup components.
Wherein each group of pickup assemblies 111 collects the user's voice command from different spatial directions at its position. In a specific embodiment, each group of pickup assemblies 111 may include multiple microphones 11c, which use microphone-array techniques to collect voice instructions from different far-field or near-field spatial directions at the current position.
Step S112: comparing, through the processing assembly, the volume values of the voice commands collected by the pickup assembly in different spatial directions, and acquiring the azimuth angle value corresponding to the voice command with the largest volume value.
The processing assembly 112 of the annular microphone array at each position compares the volume values of the voice commands collected in different spatial directions by the pickup assembly 111 connected to it, and takes the azimuth angle value of the loudest voice command as the azimuth angle value of that position. For example, the processing assembly of the first group of pickup assemblies 111 at position O compares the volume values of the voice commands collected in six different spatial directions by the pickup assembly 111 connected to it, and the azimuth angle value corresponding to the voice command with the largest volume value becomes the azimuth angle value of position O.
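The loudest-direction selection described above can be sketched as follows. This is a minimal illustration only: the function name and the use of RMS amplitude as the "volume value" are assumptions, not specified by the patent.

```python
import numpy as np

def estimate_azimuth(direction_signals, azimuth_values_deg):
    """Return the azimuth angle value of the spatial direction whose
    captured signal is loudest (highest RMS amplitude).

    direction_signals: one audio sample array per spatial direction
    azimuth_values_deg: the azimuth angle value of each direction
    """
    volumes = [float(np.sqrt(np.mean(np.square(s)))) for s in direction_signals]
    loudest = int(np.argmax(volumes))
    return azimuth_values_deg[loudest]
```

For a hexagonal array, `azimuth_values_deg` would hold six values spaced 60 degrees apart, matching the six spatial directions in the example above.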
Step S12: determining, by the controller, the current position information of the user based on the at least two pieces of azimuth information.
Specifically, the controller 12 may determine the current position information of the user based on the straight-line distance between the two groups of annular microphone arrays and the azimuth angle values collected by each of the two groups. For a specific implementation of this step, reference may be made to the corresponding description in the positioning system of the robot provided in the above embodiment; the same or similar technical effects can be achieved and are not repeated here. In a specific implementation, this determination may be performed multiple times to obtain multiple pieces of current position information, and the middle of these positions (for example, their median) may then be taken as the current position information of the user. This effectively improves the positioning accuracy.
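The two-array geometry can be illustrated with a small sketch. The coordinate convention (first array at the origin, second on the x-axis), the function names, and the use of the coordinate-wise median as the "middle position" are assumptions for illustration, not taken from the patent.

```python
import math
from statistics import median

def triangulate(baseline, theta1_deg, theta2_deg):
    """Intersect the two bearing lines to locate the sound source.

    The first array sits at (0, 0), the second at (baseline, 0); each
    angle is measured counter-clockwise from the line joining the arrays.
    """
    t1 = math.tan(math.radians(theta1_deg))
    t2 = math.tan(math.radians(theta2_deg))
    if math.isclose(t1, t2):
        raise ValueError("parallel bearings: no unique intersection")
    x = baseline * t2 / (t2 - t1)
    return x, t1 * x

def fuse_positions(estimates):
    """Take the coordinate-wise median of repeated position estimates,
    one way to realize 'taking the middle positions'."""
    xs, ys = zip(*estimates)
    return median(xs), median(ys)
```

For example, with the arrays 2 m apart and bearings of 45 and 135 degrees, the intersection lies 1 m in front of the midpoint of the baseline; the median fusion then discards an outlier estimate produced by a single noisy measurement.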
In a specific embodiment, step S12 further includes: recognizing the voice command through the controller 12, and controlling the robot, according to the recognized voice command and the current position information, to execute the voice command in a preset mode.
The preset mode may include completing the voice command directly, completing the command only after the robot reaches the user, or completing part of the voice command first and the remainder after reaching the user, among other options.
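One possible way to organize these preset modes is a small dispatch function that turns a mode, a command, and the located user position into an ordered action plan. All names below are hypothetical, since the patent does not specify an implementation:

```python
from enum import Enum, auto

class PresetMode(Enum):
    IMMEDIATE = auto()    # complete the voice command directly
    ON_ARRIVAL = auto()   # complete the command after reaching the user
    SPLIT = auto()        # do part now, finish the rest on arrival

def plan_actions(mode, command, user_position):
    """Return the ordered action plan for the given preset mode."""
    if mode is PresetMode.IMMEDIATE:
        return [("execute", command)]
    if mode is PresetMode.ON_ARRIVAL:
        return [("navigate", user_position), ("execute", command)]
    return [("execute_part", command),
            ("navigate", user_position),
            ("execute_rest", command)]
```

The plan is a plain list of (action, argument) pairs, so the robot's motion and speech subsystems can consume it step by step.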
In the positioning method of the robot provided in this embodiment, the voice instruction sent by the user is collected at each of at least two positions through at least two groups of sound pickup devices 11, and the azimuth information of the voice instruction collected at each position is obtained; the controller 12 then determines the current position information of the user based on the at least two pieces of azimuth information. The method can position the user accurately, is not limited by complex scenes or privacy-sensitive areas, and therefore has strong practicability. Moreover, because the current position information is determined from at least two pieces of azimuth information, the method is more accurate than a scheme that determines the user's position from a single piece of azimuth information, and it avoids the robot walking to the wrong place, or failing to reach the task location, when the user's voice instruction is spoken too fast or is too short.
Referring to fig. 9, fig. 9 is a flowchart of a positioning method of a robot according to another embodiment of the present application. This embodiment provides another positioning method of a robot. It differs from the positioning method of the first embodiment in that, before step S11, the method further includes: obtaining a wake-up instruction of the user through the wake-up component 113, and starting the sound pickup assembly 111 according to the wake-up instruction. Specifically, the method includes the following steps:
step S21: acquiring a wake-up instruction of the user through the wake-up component, and starting the pickup assembly according to the wake-up instruction.
In this embodiment, the robot must be woken up before the sound pickup assembly 111 is turned on. Therefore, before the voice instruction is collected, a voice wake-up signal of the target user needs to be obtained: the user calls out a wake-up word such as "love, love", the robot automatically detects and recognizes the wake-up word after receiving the wake-up instruction, and then starts the sound pickup assembly 111 according to the detection and recognition result.
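The wake-up gating can be sketched as a simple state holder that keeps the pickup assembly off until the wake word is detected. The class and method names are illustrative assumptions; the patent only states that a wake-up component detects the wake word and then starts the pickup assembly.

```python
class WakeUpGate:
    """Gate the pickup assembly behind wake-word detection."""

    def __init__(self, wake_word):
        self.wake_word = wake_word.lower()
        self.pickup_running = False

    def feed(self, recognized_text):
        # Start the pickup assembly once the wake word appears in the
        # recognized speech; afterwards it stays on to collect commands.
        if self.wake_word in recognized_text.lower():
            self.pickup_running = True
        return self.pickup_running
```

Any speech that arrives before the wake word leaves `pickup_running` false, so no voice instruction is collected until the robot is woken up.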
Step S22: voice instructions sent by a user are respectively collected from at least two positions through at least two groups of sound pickup devices, and azimuth information of the voice instructions collected at each position is respectively obtained.
Step S23: determining, by the controller, the current position information of the user based on the at least two pieces of azimuth information.
The specific implementation of step S22 and step S23 is the same as or similar to that of step S11 and step S12 in the first embodiment, achieves the same or similar technical effects, and is not repeated here.
The above embodiments are merely examples and are not intended to limit the scope of the present application; any equivalent structural or flow transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included in the patent protection scope of the present application.

Claims (13)

1. A positioning system for a robot, comprising:
at least two groups of sound pickup devices, wherein the sound pickup devices are used for respectively acquiring voice instructions sent by a user from at least two positions and respectively acquiring azimuth information of the voice instructions acquired at each position;
and a controller, connected with the at least two groups of sound pickup devices and used for determining the current position information of the user based on at least two pieces of the azimuth information.
2. The positioning system of a robot as claimed in claim 1, wherein each of said pickup means comprises an annular array of microphones; the microphone annular array is used for collecting voice instructions sent by users in different spatial directions at the current position and acquiring azimuth angle values of the voice instructions collected in different spatial directions.
3. The positioning system of a robot of claim 2, wherein the annular array of microphones is hexagonally distributed.
4. The positioning system of a robot of claim 2, wherein the annular array of microphones comprises:
the pickup assembly is used for acquiring voice instructions of different spatial directions of the current position;
and the processing component is connected with the pickup component and used for comparing the volume values of the voice instructions in different spatial directions acquired by the pickup component and acquiring the azimuth value corresponding to the voice instruction with the largest volume value.
5. The robot positioning system of claim 4, wherein the microphone ring array further comprises a wake-up component connected to the pickup component for obtaining a wake-up command from the user to activate the pickup component according to the wake-up command.
6. The positioning system of claim 2, wherein the controller determines the current position information of the user based on the linear distance between the two groups of annular microphone arrays and the azimuth angle values respectively acquired by the two groups of annular microphone arrays.
7. The positioning system of a robot according to any one of claims 1-6, wherein the controller is further configured to recognize the voice command and to control the robot, according to the recognized voice command and the current position information, to execute the voice command in a preset mode.
8. The positioning system of claim 1, wherein the at least two groups of sound pickup devices correspond one-to-one to a plurality of regular hexagons, each group of sound pickup devices being located at the center of its corresponding regular hexagon.
9. A method of positioning a robot, comprising:
respectively acquiring voice instructions sent by a user from at least two positions through at least two groups of sound pickup devices, and respectively acquiring azimuth information of the voice instructions acquired at each position;
determining, by a controller, current position information of the user based on at least two pieces of the azimuth information.
10. The method of claim 9, wherein the pickup device comprises an annular array of microphones; the step of respectively acquiring the voice commands sent by the user from at least two positions through at least two groups of sound pickup devices and respectively acquiring the azimuth information of the voice commands acquired at each position specifically comprises the following steps:
respectively acquiring voice instructions sent by a user from at least two positions through at least two groups of microphone annular arrays, and respectively acquiring the azimuth angle value of the voice instructions acquired at each position;
the step of determining, by the controller, current location information of the user based on at least two of the location information may specifically include:
and determining, by a controller, the current position information of the user based on the linear distance between the two groups of annular microphone arrays and the azimuth angle values respectively acquired by the two groups of annular microphone arrays.
11. The method of claim 10, wherein the annular array of microphones includes a pickup assembly and a processing assembly coupled to the pickup assembly; the step of respectively acquiring voice commands sent by a user from at least two positions through at least two groups of microphone annular arrays and respectively acquiring the azimuth angle value of the voice command acquired at each position specifically comprises the following steps:
respectively acquiring voice instructions of a user from different spatial directions of at least two positions through at least two groups of pickup assemblies;
and comparing the volume values of the voice commands in different spatial directions collected by the pickup assembly through the processing assembly, and acquiring the azimuth value corresponding to the voice command with the maximum volume value.
12. The method according to claim 11, wherein before the step of acquiring the voice commands sent by the user from at least two positions by at least two groups of sound pickup devices and acquiring the orientation information of the voice commands acquired at each position, the method further comprises:
and acquiring a wake-up instruction of the user through a wake-up component, and starting the pickup component to work according to the wake-up instruction.
13. The method of claim 9, further comprising:
and recognizing the voice command through the controller, and controlling the robot, according to the recognized voice command and the current position information, to execute the voice command in a preset mode.
CN202110975418.4A 2021-08-24 2021-08-24 Positioning system and method for robot Pending CN113936651A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110975418.4A CN113936651A (en) 2021-08-24 2021-08-24 Positioning system and method for robot


Publications (1)

Publication Number Publication Date
CN113936651A true CN113936651A (en) 2022-01-14

Family

ID=79274487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110975418.4A Pending CN113936651A (en) 2021-08-24 2021-08-24 Positioning system and method for robot

Country Status (1)

Country Link
CN (1) CN113936651A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination