CN113110410B - Service robot based on voice recognition and control method thereof


Info

Publication number
CN113110410B
Authority
CN
China
Prior art keywords
module
robot
taking
placing
central processing
Prior art date
Legal status
Active
Application number
CN202110249041.4A
Other languages
Chinese (zh)
Other versions
CN113110410A
Inventor
朱大昌
陈朝政
陈杰勇
Current Assignee
Guangzhou University
Original Assignee
Guangzhou University
Priority date
Filing date
Publication date
Application filed by Guangzhou University
Priority to CN202110249041.4A
Publication of CN113110410A
Application granted
Publication of CN113110410B
Status: Active

Classifications

    • G05D1/0236 — Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means, using optical markers or beacons in combination with a laser
    • G05D1/0214 — Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/024 — Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means, using obstacle or wall sensors in combination with a laser
    • G05D1/0276 — Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • Y02P90/02 — Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Optics & Photonics (AREA)
  • Electromagnetism (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a service robot based on voice recognition and a control method thereof. The service robot comprises: a body; a laser scanning module for constructing an indoor map through laser scanning; a voice recognition module for recognizing a voice control signal; a central processing module for planning a path according to the voice control signal and the indoor map and generating a moving track; a robot motion module for driving the body to move along the moving track; a robot lifting module for adjusting the height of the body; and a robot taking and placing module for taking or placing shoes. The robot can avoid obstacles while it moves, preventing collision damage, so that shoes are taken and placed efficiently while the robot operates safely; voice control makes operation very convenient for the user; and the robot lifting module and the robot taking and placing module take and place shoes automatically, saving the user time. The invention can be widely applied to the technical field of household robots.

Description

Service robot based on voice recognition and control method thereof
Technical Field
The invention relates to the technical field of household robots, in particular to a service robot based on voice recognition and a control method thereof.
Background
Although smart home robots cover an increasingly wide range of tasks, some areas remain unaddressed, such as intelligent shoe fetching and placing. Robots in other fields do offer similar functions: cloud-based intelligent delivery robots, for example, can autonomously plan routes, avoid obstacles, interact by voice and flexibly complete transport tasks, and are generally deployed in hospitals, hotels, shopping malls, office buildings and similar venues. However, they are too large and too expensive for household use, and they have no mechanism specifically designed for taking and placing shoes; even if an intelligent delivery robot transports shoes to a designated position, a person still has to put them on the shoe cabinet, which is very inconvenient.
It should also be recognized that in a home environment the range over which the robot must move is smaller, and the environment changes in real time as people and objects move about. The intelligent delivery robots of the prior art are therefore unsuited to the home, and there is an urgent need for a low-cost, easy-to-control service robot based on voice recognition that can avoid obstacles and collisions in real time, so as to realize intelligent shoe taking and placing in the home.
Disclosure of Invention
In order to solve the above technical problems, the present invention aims to provide a convenient and efficient service robot based on voice recognition and a control method thereof.
The first technical scheme adopted by the invention is as follows:
A service robot based on voice recognition, comprising a power assembly and a control assembly, the power assembly comprising:
a body;
the laser scanning module is used for constructing an indoor map through laser scanning;
a voice recognition module for recognizing a voice control signal;
the central processing module is used for planning a path according to the voice control signal and the indoor map and generating a moving track;
the robot motion module is used for driving the machine body to move according to the moving track;
the robot lifting module is used for adjusting the height of the machine body;
the robot taking and placing module is used for taking or placing shoes;
the central processing module, the laser scanning module, the voice recognition module, the robot motion module, the robot lifting module and the robot taking and placing module are all arranged on the machine body, and the laser scanning module, the voice recognition module, the robot motion module, the robot lifting module and the robot taking and placing module are all connected with the central processing module.
Further, the body includes a base and a housing; the base and the housing are connected through the robot lifting module; the laser scanning module, the voice recognition module and the robot taking and placing module are arranged on the housing; the robot motion module is arranged on the base; and the central processing module is arranged inside the housing.
Further, the robot taking and placing module includes a taking and placing table and at least two lead screw motors; the taking and placing table includes a first taking and placing assembly and a second taking and placing assembly, which are arranged side by side on one side surface of the housing; the lead screw motors are used to drive the first taking and placing assembly and the second taking and placing assembly to move in a horizontal plane; and the lead screw motors are connected with the central processing module.
Furthermore, the taking and placing table further comprises a first baffle and a second baffle; the first baffle is arranged on the side of the first taking and placing assembly away from the second taking and placing assembly, and the second baffle is arranged on the side of the second taking and placing assembly away from the first taking and placing assembly.
Further, the robot lifting module comprises a lifter and a connecting rod; the base and the housing are connected through the connecting rod; the lifter is used for driving the connecting rod to move up and down; and the lifter is connected with the central processing module.
Further, the robot motion module comprises a stepping motor and a wheel type motion structure, the wheel type motion structure is arranged on the base, the stepping motor is used for driving the wheel type motion structure to operate, and the stepping motor is connected with the central processing module.
Further, the service robot based on voice recognition further comprises a signal receiver, wherein the signal receiver is used for receiving first position information, and the first position information is position information of the shoe cabinet.
The second technical scheme adopted by the invention is as follows:
a control method of a service robot based on voice recognition is used for being executed by the service robot based on voice recognition, and comprises the following steps:
the voice recognition module recognizes a voice control signal of a user;
the laser scanning module performs laser scanning on an indoor environment and constructs an indoor map;
the central processing module carries out path planning according to the voice control signal and the indoor map to generate a moving track;
the robot motion module drives the machine body to move according to the movement track;
after the machine body moves to the end point position of the movement track, the central processing module controls the robot lifting module to adjust the height of the machine body to a preset height;
the central processing module controls the robot taking and placing module to finish taking or placing the shoes;
and the robot motion module drives the body to return to the starting point position according to the moving track.
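As an illustration, the seven steps above could be strung together roughly as follows; the helper names (listen, scan, plan_path, follow, set_height, pick, place) refer to the hypothetical stubs sketched earlier and are assumptions, not the patent's implementation.

# Hypothetical end-to-end sequence for the seven method steps above, built on
# the stub modules sketched earlier; not the patent's implementation.

def serve_once(cpu, shelf_height_m=0.4):
    command = cpu.voice.listen()                 # step 1: recognize the voice control signal
    indoor_map = cpu.scanner.scan()              # step 2: laser-scan the indoor environment
    track = cpu.plan_path(command, indoor_map)   # step 3: plan a path, generate the moving track
    cpu.motion.follow(track)                     # step 4: drive the body along the track
    cpu.lift.set_height(shelf_height_m)          # step 5: adjust the body to a preset height
    if command == "fetch shoes":                 # step 6: take or place the shoes
        cpu.pick_place.pick()
    else:
        cpu.pick_place.place()
    cpu.motion.follow(list(reversed(track)))     # step 7: return to the starting position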
Further, the step of the central processing module planning a path according to the voice control signal and the indoor map to generate a movement track specifically includes:
the central processing module determines user position information according to the voice control signal;
the method comprises the steps that a central processing module obtains first position information of a shoe cabinet;
and the central processing module performs path planning according to the user position information, the first position information and the indoor map to generate a moving track.
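By way of example only, this planning step could be realized with a grid-based A* search from the user's position to the shoe cabinet's position over the scanned indoor map; the occupancy-grid representation and the function below are assumptions, not the algorithm claimed by the patent.

import heapq

def plan_path(grid, start, goal):
    """A* search on an occupancy grid (0 = free cell, 1 = obstacle).

    start/goal are (row, col) cells, e.g. the user's position and the shoe
    cabinet's first position information. Returns the moving track as a list
    of cells, or None if no path exists.
    """
    rows, cols = len(grid), len(grid[0])

    def h(a, b):                      # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(h(start, goal), 0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, cur = heapq.heappop(open_set)
        if cur == goal:               # reconstruct the track back to the start
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g.get(cur, float("inf")):
            continue                  # stale queue entry, a better route was found
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = cost + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt, goal), ng, nxt))
    return None

For instance, plan_path([[0, 0], [1, 0]], (0, 0), (1, 1)) returns [(0, 0), (0, 1), (1, 1)], skirting the occupied cell.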
Further, the control method further comprises a step of adjusting the moving track during the movement of the body, specifically:
in the moving process of the robot body, the laser scanning module performs laser scanning on an indoor environment, when the obstacle is detected to exist on the moving track, the position information and the size information of the obstacle are determined through the laser scanning, the position information and the size information of the obstacle are fed back to the central processing module, the central processing module adjusts the moving track according to the position information and the size information of the obstacle, and the robot motion module drives the robot body to continue moving according to the adjusted moving track.
The invention has the beneficial effects that: in the service robot based on voice recognition and the control method thereof, the voice recognition module recognizes the user's voice control signal, the laser scanning module scans the indoor environment in real time to obtain a real-time indoor map, the central processing module plans a path according to the real-time indoor map and the voice control signal to generate a moving track, the robot motion module drives the robot body along the moving track, and the robot lifting module and the robot taking and placing module cooperate to complete the taking and placing of shoes. Because the laser scanning module scans the indoor environment in real time, the robot avoids obstacles while moving and is protected from collision damage, so that shoes are taken and placed efficiently while the robot operates safely; voice control through the voice recognition module is very convenient; and the robot lifting module and the robot taking and placing module take or place shoes automatically, saving the user time and realizing intelligent shoe taking and placing in the home.
Drawings
Fig. 1 is a block diagram of a service robot based on voice recognition according to an embodiment of the present invention;
Fig. 2 is a side view of a service robot based on voice recognition according to an embodiment of the present invention;
Fig. 3 is a top view of a service robot based on voice recognition according to an embodiment of the present invention;
Fig. 4 is a front view of a service robot based on voice recognition according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the connection between the housing and the base according to an embodiment of the present invention;
Fig. 6 is a flowchart illustrating the steps of a method for controlling a service robot based on voice recognition according to an embodiment of the present invention.
Reference numerals:
10. a base; 20. a housing; 30. a laser scanning module; 40. a voice recognition module; 50. a taking and placing table; 501. a first pick-and-place assembly; 502. a second pick-and-place assembly; 503. a first baffle plate; 504. a second baffle; 60. a screw motor; 70. a connecting rod; 80. a wheeled motion structure.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
In the description of the present invention, "a plurality" means two or more. Where "first" and "second" appear, they are used only to distinguish technical features and are not to be understood as indicating or implying relative importance, implicitly indicating the number of the indicated technical features, or implicitly indicating the precedence of the indicated technical features. Furthermore, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Referring to fig. 1, an embodiment of the present invention provides a service robot based on voice recognition, including:
a body;
the laser scanning module 30, the laser scanning module 30 is used for constructing an indoor map through laser scanning;
the voice recognition module 40, the voice recognition module 40 is used for recognizing the voice control signal;
the central processing module is used for planning a path according to the voice control signal and the indoor map and generating a moving track;
the robot motion module is used for driving the machine body to move according to the movement track;
the robot lifting module is used for adjusting the height of the machine body;
the robot taking and placing module is used for taking or placing shoes;
the central processing module, the laser scanning module 30, the voice recognition module 40, the robot motion module, the robot lifting module and the robot taking and placing module are all arranged on the machine body, and the laser scanning module 30, the voice recognition module 40, the robot motion module, the robot lifting module and the robot taking and placing module are all connected with the central processing module.
According to the embodiment of the invention, the voice recognition module 40 recognizes the user's voice control signal, the laser scanning module 30 scans the indoor environment in real time to obtain a real-time indoor map, the central processing module then plans a path according to the real-time indoor map and the voice control signal to generate a moving track, the robot motion module drives the robot body along the moving track, and the robot lifting module and the robot taking and placing module cooperate to complete the taking and placing of shoes. Because the laser scanning module 30 can scan the indoor environment in real time, the robot avoids obstacles while moving and is protected from collision damage, so that shoes are taken and placed efficiently while the robot operates safely; voice control through the voice recognition module 40 is very convenient; and the robot lifting module and the robot taking and placing module take or place shoes automatically, saving the user time and realizing intelligent shoe taking and placing in the home.
Referring to fig. 2, as a further alternative embodiment, the main body includes a base 10 and a housing 20, the base 10 and the housing 20 are connected through a robot lifting module, the laser scanning module 30, the voice recognition module 40 and the robot taking and placing module are disposed on the housing 20, the robot moving module is disposed on the base 10, and the central processing module is disposed inside the housing 20.
Specifically, the laser scanning module 30 and the voice recognition module 40 may be disposed on the top of the housing 20 to facilitate the collection of voice control signals and laser scanning.
Referring to fig. 3 and 4, as a further alternative embodiment, the robot taking and placing module includes a taking and placing table 50 and at least two lead screw motors 60, the taking and placing table 50 includes a first taking and placing assembly 501 and a second taking and placing assembly 502, the first taking and placing assembly 501 and the second taking and placing assembly 502 are disposed side by side on a side surface of the housing 20, the lead screw motors 60 are used for driving the first taking and placing assembly 501 and the second taking and placing assembly 502 to move in a horizontal plane, and the lead screw motors 60 are connected with the central processing module.
Specifically, in the embodiment of the present invention, the first taking and placing assembly 501 and the second taking and placing assembly 502 are controlled to take and place the shoes. It can be understood that when the first taking and placing assembly 501 and the second taking and placing assembly 502 are brought together, they form the taking and placing table 50 on which shoes can rest; when they are separated toward the two sides, a shoe placed on them falls down; and similarly, when the first taking and placing assembly 501 and the second taking and placing assembly 502 are closed from the two sides toward the middle, a shoe can be picked up.
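Purely as an illustration of this open/close behaviour, the two lead screw motors could be commanded as follows; the motor interface (move_to_mm) and the travel values are assumptions, not interfaces disclosed by the patent.

# Illustrative only: commanding the two lead screw motors that move the two
# halves of the taking and placing table. The move_to_mm API and the travel
# distances are assumed, not specified by the patent.

TABLE_CLOSED_MM = 0      # halves 501 and 502 pressed together: table formed
TABLE_OPEN_MM = 120      # halves driven apart: a shoe resting on them drops

def place_shoe(left_motor, right_motor):
    # separating the halves toward both sides lets the shoe on the table fall
    left_motor.move_to_mm(TABLE_OPEN_MM)
    right_motor.move_to_mm(TABLE_OPEN_MM)

def pick_shoe(left_motor, right_motor):
    # closing the halves from both sides toward the middle scoops the shoe up
    left_motor.move_to_mm(TABLE_CLOSED_MM)
    right_motor.move_to_mm(TABLE_CLOSED_MM)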
Optionally, the side of the first pick-and-place assembly 501 close to the second pick-and-place assembly 502 is provided with an inclined surface, and the side of the second pick-and-place assembly 502 close to the first pick-and-place assembly 501 is provided with an inclined surface. The inclined surfaces make it more convenient to take and place shoes.
Referring to fig. 4, as a further alternative embodiment, the pick-and-place table 50 further includes a first baffle 503 and a second baffle 504, the first baffle 503 is disposed on a side of the first pick-and-place assembly 501 far away from the second pick-and-place assembly 502, and the second baffle 504 is disposed on a side of the second pick-and-place assembly 502 far away from the first pick-and-place assembly 501.
In particular, the first and second baffles 503 and 504 are used to prevent shoes from falling off the pick-and-place table 50.
Referring to fig. 5, as a further alternative embodiment, the robot lift module includes a lift and a connecting rod 70, the base 10 and the housing 20 are connected by the connecting rod 70, the lift is used to drive the connecting rod 70 to move up and down, and the lift is connected with the central processing module.
It will be appreciated that portions of the housing 20 are hidden in fig. 5 in order to clearly show the connecting rod 70 by which the housing 20 is attached to the base 10.
Specifically, the robot lifting module drives the connecting rod 70 to move up and down by the lifter, so that the height of the pick-and-place table 50 can be adjusted, thereby facilitating the taking of shoes from the shoe cabinet or the placing of shoes on the shoe cabinet.
Alternatively, the height of the shoe cabinet may be acquired in advance, and the distance of the up-and-down movement of the connecting rod 70 may be controlled according to the height of the shoe cabinet.
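A small sketch of this height adjustment, assuming a lifter interface move_up_m and an illustrative rest height for the table; neither value nor name is specified by the patent.

# Sketch under assumptions: raise the taking and placing table to the level of
# a shoe-cabinet shelf whose height was acquired in advance.

TABLE_REST_HEIGHT_M = 0.20        # table height with the lift fully lowered (assumed)

def lift_to_shelf(lifter, shelf_height_m):
    """Drive the connecting rod upward by the difference between the shelf
    height and the table's rest height."""
    travel = max(0.0, shelf_height_m - TABLE_REST_HEIGHT_M)
    lifter.move_up_m(travel)      # hypothetical lifter call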
Referring to fig. 2, as a further alternative embodiment, the robot motion module includes a stepping motor and a wheel motion structure 80, the wheel motion structure 80 is disposed on the base 10, the stepping motor is used for driving the wheel motion structure 80 to operate, and the stepping motor is connected with the central processing module.
Further as an optional implementation manner, the service robot based on voice recognition further includes a signal receiver, and the signal receiver is configured to receive first location information, where the first location information is location information of the shoe cabinet.
Specifically, a signal transmitter can be arranged in the shoe cabinet in advance to provide the position information of the shoe cabinet to the service robot based on voice recognition, so that the robot can plan a path according to the position information of the shoe cabinet; the robot can also use this signal to return to the shoe cabinet, where it stands by.
The above is a description of the structure of the embodiment of the present invention, and the following is a description of the control method of the embodiment of the present invention.
Referring to fig. 6, an embodiment of the present invention provides a control method for a service robot based on voice recognition, which is executed by the service robot based on voice recognition and includes the following steps:
s101, a voice recognition module 40 recognizes a voice control signal of a user;
s102, the laser scanning module 30 performs laser scanning on the indoor environment and constructs an indoor map;
s103, the central processing module carries out path planning according to the voice control signal and the indoor map to generate a moving track;
s104, the robot motion module drives the machine body to move according to the moving track;
s105, after the body moves to the end position of the moving track, the central processing module controls the robot lifting module to adjust the height of the body to a preset height;
s106, the central processing module controls the robot taking and placing module to finish shoe taking or shoe placing;
and S107, the robot motion module drives the body to return to the starting point position according to the moving track.
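For step S102, one common way to turn laser range readings into an indoor map is an occupancy grid; the sketch below assumes polar scan data and a fixed grid resolution and is not the patent's mapping method.

import math

def scan_to_grid(ranges, angles, robot_xy, grid_size=100, resolution_m=0.05):
    """Mark laser returns as occupied cells in a square occupancy grid.

    ranges/angles: polar readings from the laser scanning module (assumed format).
    robot_xy: robot position in metres, with the grid centred on the map origin.
    """
    grid = [[0] * grid_size for _ in range(grid_size)]
    half = grid_size // 2
    for r, a in zip(ranges, angles):
        x = robot_xy[0] + r * math.cos(a)         # laser hit point in the plane
        y = robot_xy[1] + r * math.sin(a)
        col = int(x / resolution_m) + half
        row = int(y / resolution_m) + half
        if 0 <= row < grid_size and 0 <= col < grid_size:
            grid[row][col] = 1                     # cell hit by the laser -> obstacle
    return grid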
Further as an optional implementation manner, the step S103 of the central processing module planning a path according to the voice control signal and the indoor map and generating a movement track specifically includes:
s1031, the central processing module determines user position information according to the voice control signal;
s1032, the central processing module acquires first position information of the shoe cabinet;
s1033, the central processing module carries out path planning according to the user position information, the first position information and the indoor map, and a moving track is generated.
Further as an optional implementation manner, the control method further includes a step of adjusting a movement track in the movement process of the body, which specifically includes:
in the moving process of the robot body, the laser scanning module 30 performs laser scanning on an indoor environment, when an obstacle is detected to exist on a moving track, the position information and the size information of the obstacle are determined through the laser scanning, the position information and the size information of the obstacle are fed back to the central processing module, the central processing module adjusts the moving track according to the position information and the size information of the obstacle, and the robot motion module drives the robot body to continue moving according to the adjusted moving track.
Specifically, the service robot first scans the indoor environment with the laser scanning module 30 to construct the indoor map and then waits beside the shoe cabinet. After the user issues a voice control signal, the service robot based on voice recognition starts up and advances to the user's current position obtained from recognition. The voice recognition module 40 collects the user's voice control signal and the robot is controlled through the central processing module; the voice recognition module 40 comprises a voice collection module and a voice preprocessing module for processing the user's voice control signal. If the voice recognition module receives several valid voice control signals at the same time, the central processing module classifies them and then executes the corresponding control operations in the order in which the signals were received; if the voice recognition module 40 detects that voice control signals were issued by the same user, it executes the control operation according to the later signal.
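The queuing behaviour described above (execute in arrival order, but the same user's later command wins) could be modelled as follows; user identification is assumed to be available from the voice preprocessing module and is not detailed in the patent.

# Hypothetical scheduling of recognized voice control signals.

def schedule_commands(signals):
    """Order valid voice control signals for execution.

    signals: list of (user_id, command) tuples in the order received.
    Commands from different users are kept in arrival order; if the same
    user speaks again, only the later command is kept.
    """
    latest = {}                         # insertion-ordered (Python 3.7+)
    for user_id, command in signals:
        latest.pop(user_id, None)       # drop this user's earlier command
        latest[user_id] = command       # re-insert at the latest arrival position
    return list(latest.items())

For example, schedule_commands([("A", "fetch shoes"), ("B", "put shoes"), ("A", "put shoes")]) returns [("B", "put shoes"), ("A", "put shoes")].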
During the movement of the service robot, the indoor map is continuously updated through the laser scanning module 30, and the moving track is re-planned in real time when an obstacle is detected. A signal transmitter is arranged at the shoe cabinet to provide position information to the robot; when returning, the robot can autonomously plan its return path according to the transmitter's signal and the indoor map constructed by the laser scanning module 30.
The robot lifting module is controlled by the central processing module and can lift the robot taking and placing module to the same height as the shoe cabinet; the robot taking and placing module, consisting of the lead screw motors and the taking and placing table 50, puts the shoes into the shoe cabinet, and the positions where the shoes are placed are recorded by the central processing module.
Optionally, after the user issues a voice control signal for taking out shoes, the voice recognition module 40 converts it into an electric signal and transmits it to the central processing module. The central processing module sends a control signal to the robot motion module according to the received electric signal, so that the robot moves straight toward the shoe cabinet's signal source, continuously updating the indoor map through the laser scanning module 30 and re-planning the path whenever an obstacle is detected, until the preset shoe cabinet position is reached. The central processing module then signals the robot lifting module to rise to the preset height, and signals the robot taking and placing module to take the shoes onto the taking and placing table 50 of the robot, after which the robot moves to the position where the user is located.
Optionally, when the service robot receives no valid voice control signal, it automatically enters a standby state after 10 seconds, and it can stand by at the shoe cabinet.
Optionally, when the service robot detects an obstacle, it marks the position of the obstacle on the indoor map; if the obstacle lies on the robot's moving track, the robot adjusts its advancing direction by a preset angle according to the outline of the obstacle so as to avoid it.
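As a sketch of this avoidance rule, the advancing direction could be rotated in preset-angle steps until the obstacle's outline, as seen by the laser, no longer falls within a clearance cone around the heading; the angles below are illustrative assumptions.

import math

def avoid_heading(heading_rad, obstacle_bearings,
                  step_rad=math.radians(15), clearance_rad=math.radians(20)):
    """Turn the advancing direction in preset-angle steps until no point of
    the obstacle outline lies within the clearance cone around the heading.

    obstacle_bearings: bearings (rad) of the obstacle outline seen by the laser.
    """
    for _ in range(math.ceil(2 * math.pi / step_rad) + 1):
        blocked = any(
            abs(math.atan2(math.sin(b - heading_rad),
                           math.cos(b - heading_rad))) < clearance_rad
            for b in obstacle_bearings)
        if not blocked:
            return heading_rad
        heading_rad += step_rad        # adjust by the preset angle and re-check
    return heading_rad                 # fully surrounded: keep the last heading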
It should be recognized that embodiments of the present invention can be realized and implemented by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The above-described methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the above-described methods may be implemented in any type of computing platform operatively connected to a suitable connection, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computer-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein.
A computer program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
The above description is only a preferred embodiment of the present invention, and the present invention is not limited to the above embodiment, and any modifications, equivalent substitutions, improvements, etc. within the spirit and principle of the present invention should be included in the protection scope of the present invention as long as the technical effects of the present invention are achieved by the same means. The invention is capable of other modifications and variations in its technical solution and/or its implementation, within the scope of protection of the invention.

Claims (7)

1. A service robot based on speech recognition, comprising:
a body;
the laser scanning module is used for constructing an indoor map through laser scanning;
the voice recognition module is used for recognizing a voice control signal;
the central processing module is used for planning a path according to the voice control signal and the indoor map and generating a moving track;
the robot motion module is used for driving the machine body to move according to the moving track;
the robot lifting module is used for adjusting the height of the machine body;
the robot taking and placing module is used for taking or placing shoes;
the central processing module, the laser scanning module, the voice recognition module, the robot motion module, the robot lifting module and the robot taking and placing module are all arranged on the machine body, and the laser scanning module, the voice recognition module, the robot motion module, the robot lifting module and the robot taking and placing module are all connected with the central processing module;
the robot body comprises a base and a shell, the base is connected with the shell through the robot lifting module, the laser scanning module, the voice recognition module and the robot taking and placing module are arranged on the shell, the robot motion module is arranged on the base, and the central processing module is arranged in the shell;
the robot taking and placing module comprises a taking and placing table and at least two lead screw motors, the taking and placing table comprises a first taking and placing assembly and a second taking and placing assembly, the first taking and placing assembly and the second taking and placing assembly are arranged on the surface of one side of the shell side by side, an inclined plane is arranged on one side, close to the second taking and placing assembly, of the first taking and placing assembly, an inclined plane is arranged on one side, close to the first taking and placing assembly, of the second taking and placing assembly, the lead screw motors are used for driving the first taking and placing assembly and the second taking and placing assembly to move in a horizontal plane, and the lead screw motors are connected with the central processing module;
the taking and placing table further comprises a first baffle and a second baffle, the first baffle is arranged on one side, away from the second taking and placing assembly, of the first taking and placing assembly, and the second baffle is arranged on one side, away from the first taking and placing assembly, of the second taking and placing assembly.
2. The service robot based on voice recognition according to claim 1, wherein the robot lifting module comprises a lifter and a connecting rod, the base and the shell are connected through the connecting rod, the lifter is used for driving the connecting rod to move up and down, and the lifter is connected with the central processing module.
3. The service robot based on voice recognition according to claim 1, characterized in that: the robot motion module comprises a stepping motor and a wheel type motion structure, the wheel type motion structure is arranged on the base, the stepping motor is used for driving the wheel type motion structure to operate, and the stepping motor is connected with the central processing module.
4. The service robot based on voice recognition according to any one of claims 1 to 3, wherein: the service robot based on voice recognition further comprises a signal receiver, wherein the signal receiver is used for receiving first position information, and the first position information is position information of the shoe cabinet.
5. A control method of a service robot based on voice recognition, for execution by the service robot based on voice recognition according to any one of claims 1 to 4, characterized by comprising the steps of:
the voice recognition module recognizes a voice control signal of a user;
the laser scanning module carries out laser scanning on the indoor environment and constructs an indoor map;
the central processing module carries out path planning according to the voice control signal and the indoor map to generate a moving track;
the robot motion module drives the machine body to move according to the moving track;
after the machine body moves to the end point position of the moving track, the central processing module controls the robot lifting module to adjust the height of the machine body to a preset height;
the central processing module controls the robot taking and placing module to finish taking or placing the shoes;
and the robot motion module drives the machine body to return to the starting point position according to the movement track.
6. The control method according to claim 5, wherein the step of generating a movement trajectory by the central processing module performing path planning according to the voice control signal and the indoor map specifically includes:
the central processing module determines user position information according to the voice control signal;
the method comprises the steps that a central processing module obtains first position information of a shoe cabinet;
and the central processing module performs path planning according to the user position information, the first position information and the indoor map to generate a moving track.
7. The control method according to claim 5, further comprising the step of adjusting a movement trajectory during the movement of the body, specifically:
in the moving process of the robot body, the laser scanning module performs laser scanning on an indoor environment, when the obstacle is detected to exist on the moving track, the position information and the size information of the obstacle are determined through the laser scanning, the position information and the size information of the obstacle are fed back to the central processing module, the central processing module adjusts the moving track according to the position information and the size information of the obstacle, and the robot motion module drives the robot body to continue moving according to the adjusted moving track.
CN202110249041.4A 2021-03-08 2021-03-08 Service robot based on voice recognition and control method thereof Active CN113110410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110249041.4A CN113110410B (en) 2021-03-08 2021-03-08 Service robot based on voice recognition and control method thereof

Publications (2)

Publication Number Publication Date
CN113110410A CN113110410A (en) 2021-07-13
CN113110410B 2023-03-31

Family

ID=76710615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110249041.4A Active CN113110410B (en) 2021-03-08 2021-03-08 Service robot based on voice recognition and control method thereof

Country Status (1)

Country Link
CN (1) CN113110410B (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105058365A (en) * 2015-08-11 2015-11-18 山东建筑大学 Bionic type robot for automatically taking and putting books
CN105479458A (en) * 2016-02-29 2016-04-13 昆山—邦泰汽车零部件制造有限公司 Mechanical arm capable of conveniently taking steel plate
CN106426212B (en) * 2016-11-21 2019-07-23 肇庆学院 A kind of family doorway shoe chest puts shoes robot with autonomous
CN108634614A (en) * 2018-06-16 2018-10-12 佛山市豪洋电子有限公司 Smart home shoe chest robot
CN108927809A (en) * 2018-06-21 2018-12-04 佛山市豪洋电子有限公司 A kind of family's shoes robot
CN109469385A (en) * 2018-11-05 2019-03-15 深圳力侍技术有限公司 A kind of car carrying folder device of tyre
CN109557920A (en) * 2018-12-21 2019-04-02 华南理工大学广州学院 A kind of self-navigation Jian Tu robot and control method
CN109822596B (en) * 2019-04-02 2023-07-25 成都信息工程大学 Service robot and control system thereof

Also Published As

Publication number Publication date
CN113110410A (en) 2021-07-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant