CN110051289A - Robot voice control method and device, robot and medium - Google Patents

Robot voice control method and device, robot and medium

Info

Publication number
CN110051289A
CN110051289A (application CN201910265952.9A)
Authority
CN
China
Prior art keywords
robot
phonetic order
sound source
source position
class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910265952.9A
Other languages
Chinese (zh)
Other versions
CN110051289B (en)
Inventor
刘帅
刘洋
肖福建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Stone Innovation Technology Co ltd
Original Assignee
Beijing Rockrobo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Rockrobo Technology Co Ltd filed Critical Beijing Rockrobo Technology Co Ltd
Priority to CN201910265952.9A priority Critical patent/CN110051289B/en
Priority to CN202210225162.XA priority patent/CN114468898B/en
Publication of CN110051289A publication Critical patent/CN110051289A/en
Application granted granted Critical
Publication of CN110051289B publication Critical patent/CN110051289B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A - HUMAN NECESSITIES
    • A47 - FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4011Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)
  • Electric Vacuum Cleaner (AREA)

Abstract

The embodiments of the present application provide a robot voice control method and device, a robot, and a medium. The robot voice control method comprises the following steps: receiving a first voice instruction; recognizing the sound source direction of the first voice instruction and turning the robot toward that direction; receiving a second voice instruction; and recognizing the sound source position of the second voice instruction and moving the robot to the vicinity of that position. By designating locations through voice, the embodiments enable the robot to work accurately according to the user's instructions: the user can direct the robot by voice to clean a designated position, for example to clean the bedroom, to clean the living room, or to come over and clean here, so that the robot works purposefully according to the user's intention. In addition, a sensor is added as a positioning means during designated-position cleaning, which increases the accuracy of the robot's position recognition, improves working efficiency, and improves the user experience.

Description

Robot voice control method, device, robot and medium
Technical field
The present application relates to the field of control technology, and in particular to a robot voice control method and device, a robot, and a medium.
Background technique
With the development of technology, various robots with speech recognition systems have appeared, such as sweeping robots, mopping robots, vacuum cleaners, and weeding machines. Through their speech recognition systems, these robots can receive voice instructions input by the user and execute the operations indicated by those instructions, which not only frees up labor but also saves labor costs.
In the related art, such a robot can receive a voice instruction input by the user and recognize it with its own speech recognition system, so as to control the robot to execute the operation indicated by the instruction. However, when controlling the robot, the user still wants to be able to direct it accurately to a designated position to do the corresponding work, for example to instruct a sweeping robot to go and clean a specified place (go wherever it is pointed). The existing way of commanding a sweeping robot to perform spot cleaning from a mobile terminal is as follows: the robot first has to build an indoor map, the map is then mirrored to the mobile phone, and after seeing the indoor map on the phone the user taps the position to be cleaned according to its relative bearing, whereupon the robot moves to that position and performs local cleaning.
However, this approach has the following defects. On the one hand, the indoor map must be stored in the robot in advance; if the indoor layout changes (for example, the positions of tables, chairs, beds, or cabinets change), the robot has to re-recognize and save the house map before spot cleaning can be done, which requires the robot to update the map at any time and upload it to a server, and the new map must be delivered to the user's mobile phone before the user can tap the area to be cleaned according to its relative position in the new map. On the other hand, most current sweeping robots cannot generate a three-dimensional map; the maps of most sweepers are two-dimensional and rather abstract, so from the map on the phone it is difficult for the user to accurately locate the actual indoor area to be cleaned, and the position tapped on the map often differs considerably from the actual position, resulting in a poor user experience.
Summary of the invention
In view of this, the embodiments of the present application provide a robot voice control method and device, a robot, and a storage medium, so that the robot can work precisely at a designated position according to voice instructions.
In a first aspect, an embodiment of the present application provides a robot voice control method, the method comprising:
receiving a first voice instruction;
recognizing the sound source direction of the first voice instruction and turning the robot toward the sound source direction;
receiving a second voice instruction;
recognizing the sound source position of the second voice instruction and moving the robot to the vicinity of the sound source position.
In some possible implementations, recognizing the sound source direction of the first voice instruction and turning the robot toward the sound source direction comprises:
recognizing the sound source direction of the first voice instruction;
turning the robot toward the sound source direction without stopping the drive motor;
stopping the drive motor.
In some possible implementations, after recognizing the sound source position of the second voice instruction and moving the robot to the vicinity of the sound source position, the method further comprises:
receiving a third voice instruction;
recognizing the content of the third voice instruction and executing a corresponding action according to that content.
In some possible implementations, recognizing the content of the third voice instruction and executing a corresponding action according to that content comprises:
recognizing the third voice instruction as a position-correct instruction, whereupon the robot starts to execute a local cleaning action.
In some possible implementations, recognizing the content of the third voice instruction and executing a corresponding action according to that content comprises:
recognizing the third voice instruction as a position-wrong instruction, whereupon the robot continues to move toward the sound source position indicated by the position-wrong instruction;
until a position-correct instruction is received, whereupon the robot starts to execute the local cleaning action.
In some possible implementations, recognizing the sound source position of the second voice instruction and moving the robot to the vicinity of the sound source position comprises:
recognizing the sound source position of the second voice instruction;
confirming the sound source position by means of a sensor;
moving the robot to the vicinity of the sound source position.
In some possible implementations, moving the robot to the vicinity of the sound source position comprises: moving the robot to the vicinity of the sound source position at a speed faster than the cleaning speed.
In some possible implementations, the first voice instruction is a wake-up-class voice instruction and the second voice instruction is a control-class voice instruction.
In some possible implementations, the wake-up-class voice instruction and the control-class voice instruction are pre-stored in the robot or in a cloud connected to the robot.
In a second aspect, an embodiment of the present application provides a robot voice control device, comprising:
a first receiving unit for receiving a first voice instruction;
a first recognition unit for recognizing the sound source direction of the first voice instruction and turning the robot toward the sound source direction;
a second receiving unit for receiving a second voice instruction;
a second recognition unit for recognizing the sound source position of the second voice instruction and moving the robot to the vicinity of the sound source position.
In some possible implementations, the first recognition unit is further configured to:
recognize the sound source direction of the first voice instruction;
turn the robot toward the sound source direction without stopping the drive motor;
stop the drive motor.
In some possible implementations, the device further comprises:
a third receiving unit for receiving a third voice instruction;
a third recognition unit for recognizing the content of the third voice instruction and executing a corresponding action according to that content.
In some possible implementations, the third recognition unit is further configured to:
recognize the third voice instruction as a position-correct instruction, whereupon the robot starts to execute a local cleaning action.
In some possible implementations, the third recognition unit is further configured to:
recognize the third voice instruction as a position-wrong instruction, whereupon the robot continues to move toward the sound source position indicated by the position-wrong instruction;
until a position-correct instruction is received, whereupon the robot starts to execute the local cleaning action.
In some possible implementations, the second recognition unit is further configured to:
recognize the sound source position of the second voice instruction;
confirm the sound source position by means of a sensor;
move the robot to the vicinity of the sound source position.
In some possible implementations, moving the robot to the vicinity of the sound source position comprises:
moving the robot to the vicinity of the sound source position at a speed faster than the cleaning speed.
In some possible implementations, the first voice instruction is a wake-up-class voice instruction and the second voice instruction is a control-class voice instruction.
In some possible implementations, the wake-up-class voice instruction and the control-class voice instruction are pre-stored in the robot or in a cloud connected to the robot.
In a third aspect, an embodiment of the present application provides a robot voice control device comprising a processor and a memory, the memory storing computer program instructions executable by the processor, wherein the processor, when executing the computer program instructions, implements any of the method steps described above.
In a fourth aspect, an embodiment of the present application provides a robot comprising the device described in any of the above.
In a fifth aspect, an embodiment of the present application provides a non-transitory computer-readable storage medium storing computer program instructions which, when invoked and executed by a processor, implement any of the method steps described above.
Compared with the prior art, the present invention has at least the following technical effects:
By designating locations through voice, the embodiments of the present application enable the robot to work accurately according to the user's instructions. The user can direct the sweeping robot by voice to clean a designated position, for example to clean the bedroom, to clean the living room, or to come over here and clean, so that the robot works purposefully according to the user's wishes. At the same time, a sensor is added as a positioning means during designated-position cleaning, which increases the accuracy of the robot's position recognition, improves working efficiency, and improves the user experience.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below are only some embodiments of the present application, and that those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application;
Fig. 2 is a top view of the robot structure provided by an embodiment of the present application;
Fig. 3 is a bottom view of the robot structure provided by an embodiment of the present application;
Fig. 4 is a front view of the robot structure provided by an embodiment of the present application;
Fig. 5 is a perspective view of the robot structure provided by an embodiment of the present application;
Fig. 6 is a block diagram of the robot structure provided by an embodiment of the present application;
Fig. 7 is a schematic flowchart of the robot voice control method provided by one embodiment of the present application;
Fig. 8 is a schematic flowchart of the robot voice control method provided by another embodiment of the present application;
Fig. 9 is a schematic structural diagram of the robot voice control device provided by one embodiment of the present application;
Fig. 10 is a schematic structural diagram of the robot voice control device provided by another embodiment of the present application;
Fig. 11 is a schematic diagram of the electronic structure of the robot provided by an embodiment of the present application.
Detailed description of the embodiments
To make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in the present application without creative effort shall fall within the protection scope of the present application.
It should be understood that although the terms first, second, third, and so on may be used in the embodiments of the present application to describe various elements, those elements should not be limited by these terms. The terms are only used to distinguish elements from one another. For example, without departing from the scope of the embodiments of the present application, a first element could also be called a second element, and similarly, a second element could also be called a first element.
In order to describe the behavior of the robot clearly, the following direction definitions are made:
As shown in Fig. 5, the robot 100 can travel on the ground through various combinations of movements relative to the following three mutually perpendicular axes defined by the main body 110: the front-rear axis X, the lateral axis Y, and the central vertical axis Z. The forward drive direction along the front-rear axis X is denoted "forward", and the rearward drive direction along the front-rear axis X is denoted "backward". The lateral axis Y extends between the right wheel and the left wheel of the robot, substantially along the axle center defined by the center points of the drive wheel modules 141.
The robot 100 can rotate about the Y axis. When the forward portion of the robot 100 tilts upward and the backward portion tilts downward, this is "pitch up"; when the forward portion tilts downward and the backward portion tilts upward, this is "pitch down". In addition, the robot 100 can rotate about the Z axis. In the forward direction of the robot, turning toward the right side of the X axis is a "right turn", and turning toward the left side of the X axis is a "left turn".
Referring to Fig. 1, which shows a possible application scenario provided by an embodiment of the present application, the scenario includes a robot such as a sweeping robot, a mopping robot, a vacuum cleaner, or a weeding machine. In certain embodiments, the robot may specifically be a sweeping robot or a mopping robot. In an implementation, the robot may be provided with a speech recognition system to receive voice instructions issued by the user, and it rotates in the direction of the arrow according to the voice instruction so as to respond to the user's voice instruction. The robot may also be provided with a voice output device to output prompt speech. In other embodiments, the robot may be provided with a touch-sensitive display to receive operating instructions input by the user. The robot may also be provided with wireless communication modules such as a WIFI module or a Bluetooth module to connect with an intelligent terminal, and receive through the wireless communication module the operating instructions that the user transmits using the intelligent terminal.
The structure of the robot concerned is described as follows, as shown in Figs. 2-5:
The robot 100 comprises a machine body 110, a perception system 120, a control system, a drive system 140, a cleaning system, an energy system, and a human-machine interaction system 170, as shown in Fig. 2.
The machine body 110 comprises a forward portion 111 and a backward portion 112 and has an approximately circular shape (circular at both front and rear); it may also have other shapes, including but not limited to an approximate D shape with a straight front and a circular rear.
As shown in Fig. 4, the perception system 120 comprises a position determining device 121 located on the top of the machine body 110, a buffer 122 located at the forward portion 111 of the machine body 110, a cliff sensor 123, and sensing devices such as an ultrasonic sensor, an infrared sensor, a magnetometer, an accelerometer, a gyroscope, and an odometer, which provide the control system 130 with various position information and motion state information of the machine. The position determining device 121 includes but is not limited to a camera and a laser distance sensor (LDS). The laser distance sensor based on triangulation ranging is taken as an example below to explain how position determination is performed. The basic principle of triangulation ranging is the proportional relationship of similar triangles, which is not repeated here.
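As an illustration of the similar-triangles relationship mentioned above, the following sketch shows how a triangulation ranging unit could convert the offset of the reflected spot on the image sensor into a distance. The formula is the standard triangulation relation, and the focal length, baseline, and function names are assumptions for illustration; they are not specified in this application.

```python
def triangulation_distance(spot_offset_m, focal_length_m=0.004, baseline_m=0.02):
    """Estimate obstacle distance with a triangulation laser ranging unit.

    By similar triangles, distance is approximately focal_length * baseline / spot offset,
    where the offset is the displacement of the reflected spot on the image sensor.
    All numeric values here are illustrative assumptions.
    """
    if spot_offset_m <= 0:
        return float("inf")  # spot on the optical axis: target beyond the measurable range
    return focal_length_m * baseline_m / spot_offset_m
```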
The laser distance sensor comprises a light emitting unit and a light receiving unit. The light emitting unit may include a light source that emits light, and the light source may include a light-emitting element, for example an infrared or visible light emitting diode (LED) that emits infrared light or visible light. Preferably, the light source may be a light-emitting element that emits a laser beam. In this embodiment, a laser diode (LD) is taken as the example of the light source. Specifically, because of the monochromaticity, directionality, and collimation of a laser beam, a light source using a laser beam can make the measurement more accurate than other light. For example, compared with a laser beam, the infrared light or visible light emitted by a light emitting diode (LED) is affected by the surrounding environment (such as the color or texture of objects), which may reduce the measurement accuracy. The laser diode (LD) may be a point laser, which measures two-dimensional position information of an obstacle, or a line laser, which measures three-dimensional position information of the obstacle within a certain range.
The light receiving unit may include an image sensor on which the light spot reflected or scattered by an obstacle is formed. The image sensor may be a set of unit pixels in a single row or in multiple rows. These light receiving elements can convert optical signals into electrical signals. The image sensor may be a complementary metal oxide semiconductor (CMOS) sensor or a charge coupled device (CCD) sensor; a complementary metal oxide semiconductor (CMOS) sensor is preferred because of its cost advantage. Moreover, the light receiving unit may include a receiving lens assembly. The light reflected or scattered by the obstacle may travel via the receiving lens assembly to form an image on the image sensor. The receiving lens assembly may include a single lens or multiple lenses.
A base portion may support the light emitting unit and the light receiving unit, which are arranged on the base portion at a specific distance from each other. In order to measure the obstacle situation around the robot over 360 degrees, the base portion may be rotatably arranged on the main body 110; alternatively, the base portion itself does not rotate and the emitted and received light are rotated by providing a rotating element. The rotational angular velocity of the rotating element can be obtained by arranging an optocoupler element and a code disc: the optocoupler element senses the tooth gaps on the code disc, and dividing the angular distance between tooth gaps by the time taken for a gap to slip past gives the instantaneous angular velocity. The denser the tooth gaps on the code disc, the higher the accuracy and precision of the measurement, but the structure is also more precise and the amount of computation is higher; conversely, the sparser the tooth gaps, the lower the accuracy and precision of the measurement, but the structure can be relatively simple and the amount of computation is smaller, which can reduce some costs.
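A minimal sketch of the instantaneous angular velocity calculation described above, dividing the angular spacing between tooth gaps by the time the optocoupler measures for one gap to pass. The gap count is an assumed parameter, not a value given in this application.

```python
import math

def instantaneous_angular_velocity(gap_interval_s, gaps_per_revolution=72):
    """Instantaneous angular velocity of the rotating ranging base, in rad/s.

    gap_interval_s: time measured by the optocoupler for one tooth gap to slip past.
    gaps_per_revolution: number of tooth gaps on the code disc (assumed value);
    a denser code disc gives finer resolution at the cost of more computation.
    """
    angle_per_gap_rad = 2 * math.pi / gaps_per_revolution
    return angle_per_gap_rad / gap_interval_s
```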
A data processing device connected to the light receiving unit, such as a DSP, records the obstacle distance values at all angles relative to the 0-degree direction of the robot and sends them to a data processing unit in the control system 130, such as an application processor (AP) containing a CPU. The CPU runs a particle-filter-based localization algorithm to obtain the current position of the robot and builds a map from this position for navigation. Simultaneous localization and mapping (SLAM) is preferably used as the localization algorithm.
Although in principle a laser distance sensor based on triangulation can measure distance values at arbitrarily large distances beyond a certain range, long-distance measurement in practice, for example beyond 6 meters, is very difficult, mainly because of the size limit of the pixel units on the sensor of the light receiving unit; it is also affected by the photoelectric conversion rate of the sensor, the data transmission rate between the sensor and the DSP, and the computing speed of the DSP. The measured values of the laser distance sensor are also affected by temperature in ways the system cannot tolerate, mainly because thermal expansion of the structure between the light emitting unit and the light receiving unit deforms it and changes the angle between the incident and emergent light, and the light emitting and receiving units themselves also exhibit temperature drift. After the laser distance sensor has been used for a long time, deformation accumulated from many factors such as temperature changes and vibration will also seriously affect the measurement results. The accuracy of the measurement results directly determines the accuracy of mapping and is the basis for the robot's further policy implementation, so it is particularly important.
As shown in Fig. 3, the forward portion 111 of the machine body 110 may carry the buffer 122. During cleaning, while the drive wheel modules 141 propel the robot across the ground, the buffer 122 detects one or more events in the driving path of the robot 100 via a sensing system such as an infrared sensor, and the robot can respond to the events detected by the buffer 122, such as obstacles or walls, by controlling the drive wheel modules 141, for example by moving away from the obstacle.
The control system 130 is arranged on a circuit main board in the machine body 110 and comprises a non-transitory memory, such as a hard disk, flash memory, or random access memory, and a computation processor for communication, such as a central processing unit or an application processor. Using a localization algorithm such as SLAM, the application processor draws an instant map of the environment in which the robot is located according to the obstacle information fed back by the laser distance sensor. Combining the distance and speed information fed back by the buffer 122, the cliff sensor 123, and sensing devices such as the ultrasonic sensor, infrared sensor, magnetometer, accelerometer, gyroscope, and odometer, the control system comprehensively judges which working state the sweeper is currently in, such as crossing a threshold, getting onto a carpet, being at a cliff, being stuck above or below, having a full dust box, or being picked up, and can also give a specific next-step action policy for each situation, so that the work of the robot better meets the owner's requirements and gives a better user experience. Further, the control system 130 can plan the most efficient and reasonable cleaning path and cleaning method based on the map information drawn by SLAM, which greatly improves the sweeping efficiency of the robot.
The drive system 140 can manipulate the robot 100 to travel across the ground based on drive commands having distance and angle information, such as x, y, and θ components. The drive system 140 comprises drive wheel modules 141, which can control the left wheel and the right wheel simultaneously; in order to control the movement of the machine more precisely, the drive wheel modules 141 preferably comprise a left drive wheel module and a right drive wheel module. The left and right drive wheel modules are opposed along the lateral axis defined by the main body 110. In order for the robot to move more stably on the ground or to have stronger mobility, the robot may include one or more driven wheels 142, including but not limited to universal wheels. A drive wheel module comprises a travelling wheel, a drive motor, and a control circuit that controls the drive motor; it can also be connected to a circuit for measuring the drive current and to an odometer. The drive wheel modules 141 can be detachably connected to the main body 110 for easy disassembly and maintenance. A drive wheel may have an offset drop suspension system, movably fastened, for example rotatably attached, to the robot body 110, and receiving a spring bias directed downward and away from the robot body 110. The spring bias allows the drive wheel to maintain contact and traction with the ground with a certain force, while the cleaning elements of the robot 100 also contact the ground 10 with a certain pressure.
The cleaning system may be a dry cleaning system and/or a wet cleaning system. As a dry cleaning system, the main cleaning function comes from the sweeping system 151 formed by the rolling brush, the dust box, the fan, the air outlet, and the connecting components between the four. The rolling brush, which has a certain interference with the ground, sweeps the rubbish on the ground and rolls it up to the front of the suction inlet between the rolling brush and the dust box, and the rubbish is then drawn into the dust box by the suction airflow generated by the fan and passing through the dust box. The dust collection capability of the sweeper can be characterized by the dust pick-up efficiency DPU (Dust pick up efficiency). The sweeping efficiency DPU is affected by the structure and material of the rolling brush, by the wind utilization rate of the air duct formed by the suction inlet, the dust box, the fan, the air outlet, and the connecting components between the four, and by the type and power of the fan; it is a complex system design problem. Compared with an ordinary plug-in vacuum cleaner, improving the dust collection capability is more significant for a cleaning robot with limited energy, because an improvement in dust collection capability directly and effectively reduces the energy requirement: a machine that could clean 80 square meters of floor on one charge can evolve to clean 100 square meters or more on one charge. Moreover, with fewer charging cycles the service life of the battery also increases greatly, so the user does not need to replace the battery as often. More intuitively and importantly, dust collection capability is the most obvious and important part of the user experience: the user immediately reaches a conclusion about whether the sweeping or mopping is clean. The dry cleaning system may also comprise a side brush 152 with a rotating shaft that is at an angle relative to the ground, for moving debris into the rolling brush region of the cleaning system.
The energy system comprises a rechargeable battery, such as a nickel-metal hydride battery or a lithium battery. The rechargeable battery can be connected to a charging control circuit, a battery pack charging temperature detection circuit, and a battery under-voltage monitoring circuit, which in turn are connected to a single-chip microcomputer control circuit. The host charges by connecting to the charging pile through charging electrodes arranged on the side or underside of the fuselage. If dust adheres to an exposed charging electrode, the cumulative effect of charge during charging can cause the plastic body around the electrode to melt and deform, and can even deform the electrode itself, so that normal charging can no longer continue.
The human-machine interaction system 170 comprises keys on the host panel, which the user uses to select functions; it may also comprise a display screen and/or indicator lights and/or a loudspeaker, which show the user the current machine state or function options; and it may also comprise a mobile phone client program. For a path-navigation type cleaning device, the mobile phone client can show the user a map of the environment in which the device is located and the current position of the machine, and can provide the user with richer and more user-friendly function items.
Fig. 6 is a block diagram of the sweeping robot according to the present invention.
The sweeping robot according to the present example may comprise: a microphone array unit for recognizing the user's voice, a communication unit for communicating with a remote control device or other devices, a moving unit for driving the main body, a cleaning unit, and a memory unit for storing information. The input unit (keys of the sweeping robot, etc.), the object detection sensors, the charging unit, the microphone array unit, the direction detection unit, the position detection unit, the communication unit, the driving unit, and the memory unit can be connected to the control unit to transmit predetermined information to the control unit or receive predetermined information from it.
The microphone array unit can compare the voice input through the receiving units with the information stored in the memory unit to determine whether the input voice corresponds to a specific command. If it is determined that the input voice corresponds to a specific command, the corresponding command is transmitted to the control unit. If the detected voice cannot be matched against the information stored in the memory unit, the detected voice can be regarded as noise and ignored.
For example, the detected voice corresponds to the words "come, come here, come over here, over here", and there is a text control command corresponding to the words (come here) in the information stored in the memory unit. In this case, the corresponding command can be transmitted to the control unit.
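A minimal sketch of how the microphone array unit might match a recognized phrase against the stored command table and treat unmatched input as noise, as described above. The phrases and command labels are assumptions for illustration; the actual commands are user-configurable or system defaults.

```python
# Assumed command table: recognized phrase -> control command label.
COMMAND_TABLE = {
    "come over": "WAKE",                       # first voice instruction (wake-up class)
    "come here": "WAKE",
    "come here and clean": "GOTO_AND_CLEAN",   # second voice instruction (control class)
    "position correct": "CONFIRM",             # third voice instruction (confirmation class)
    "position wrong": "REJECT",
}

def match_command(recognized_text):
    """Return the control command for a recognized phrase, or None if it is treated as noise."""
    return COMMAND_TABLE.get(recognized_text.strip().lower())
```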
The direction detection unit can detect the direction of the voice by using the time difference or level of the voice input to the multiple receiving units. The direction detection unit transmits the detected direction of the voice to the control unit. The control unit can determine the movement path by using the voice direction detected by the direction detection unit.
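The direction detection described above can be illustrated with the standard far-field two-microphone time-difference relation; the following sketch is an assumption-laden example, with the microphone spacing and speed of sound as assumed values rather than figures from this application.

```python
import math

SPEED_OF_SOUND_M_S = 343.0   # assumed speed of sound in air
MIC_SPACING_M = 0.06         # assumed spacing between two receiving units

def sound_source_angle_deg(time_difference_s):
    """Angle of the sound source relative to the microphone axis, in degrees.

    For a far-field source, the path difference is c * dt, and
    sin(angle) = path difference / microphone spacing.
    """
    ratio = SPEED_OF_SOUND_M_S * time_difference_s / MIC_SPACING_M
    ratio = max(-1.0, min(1.0, ratio))  # clamp against measurement noise
    return math.degrees(math.asin(ratio))
```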
The position detection unit can detect the coordinates of the main body within predetermined map information. In one embodiment, the information detected by the camera and the map information stored in the memory unit can be compared with each other to detect the current position of the main body. Besides a camera, the position detection unit can also use a global positioning system (GPS).
In a broad sense, the position detection unit can detect whether the main body is located at a specific position. For example, the position detection unit may comprise a unit for detecting whether the main body is placed on the charging pile.
For example, in the method for detecting whether the main body is placed on the charging pile, whether the main body is at the charging position can be detected according to whether electric power is being input to the charging unit. As another example, whether the main body is at the charging position can be detected by a charging position detection unit arranged on the main body or on the charging pile.
The communication unit can transmit predetermined information to, and receive it from, the remote control device or other devices. The communication unit can update the map information of the sweeping robot.
The driving unit can operate the moving unit and the cleaning unit. The driving unit can move the moving unit along the movement path determined by the control unit.
Predetermined information related to the operation of the sweeping robot is stored in the memory unit. For example, the map information of the region in which the sweeping robot is arranged, the control command information corresponding to the voices recognized by the microphone array unit, the direction angle information detected by the direction detection unit, the position information detected by the position detection unit, and the obstacle information detected by the object detection sensors can be stored in the memory unit.
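A sketch of the kind of record the memory unit could keep for the information listed above (map, command table, detected direction, position, obstacles). The field names and types are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class RobotMemory:
    """Illustrative container for the stored information described above."""
    area_map: Dict = field(default_factory=dict)                  # map of the arranged region
    command_table: Dict[str, str] = field(default_factory=dict)   # voice phrase -> control command
    last_voice_angle_deg: Optional[float] = None                  # from the direction detection unit
    current_position: Optional[Tuple[float, float]] = None        # from the position detection unit
    obstacles: List[Tuple[float, float]] = field(default_factory=list)  # from the object detection sensors
```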
The control unit can receive the information detected by the receiving units, the camera, and the object detection sensors. Based on the transmitted information, the control unit can recognize the user's voice, detect the direction in which the voice occurs, and detect the position of the sweeping robot. In addition, the control unit can operate the moving unit and the cleaning unit.
In one embodiment, as shown in Fig. 7 and applied to the robot in the application scenario of Fig. 1, an embodiment of the present application provides a robot voice control method comprising the following steps:
Step S702: receiving a first voice instruction;
Under normal conditions, the speech recognition system of the robot can have a dormant state and an activated state. For example, when the robot is in a working state or an unused state, the speech recognition system is in the dormant state; in the dormant state, the speech recognition system hardly occupies any extra resources of the robot and will not recognize voice instructions other than the first voice instruction.
If the speech recognition system in the dormant state receives the first voice instruction, the speech recognition system can be switched from the dormant state to the activated state. In the activated state, the speech recognition system can recognize the voice instructions configured in the speech recognition system, such as the first voice instruction, the second voice instruction, and so on.
Specifically, the first voice instruction is used to wake up the speech recognition system, that is, it instructs the speech recognition system to be in the activated state. In an implementation, if the speech recognition system is in the dormant state, the speech recognition system can be switched from the dormant state to the activated state when the robot receives the first voice instruction. If the speech recognition system is already in the activated state, the speech recognition system is kept in the activated state, or nothing needs to be done. In specific embodiments, the first voice instruction can be user-defined or set by system default; for example, it can be a user-defined phrase such as "turn on voice", "power on", "come", "come here", "come over here", or "over here". The first voice instruction (the wake-up-class voice instruction) is pre-stored in the robot or in a cloud connected to the robot. For convenience of description, the first voice instruction "come over" is taken as an example below.
Step S704: recognizing the sound source direction of the first voice instruction and turning the robot toward the sound source direction;
After the robot has received the first activation instruction, for example "come over", the robot detects the direction of the voice through the direction detection unit, for example by using the time difference or level of the voice input to the multiple receiving units. The direction detection unit transmits the detected direction of the voice to the control unit. The control unit can control the drive system using the voice direction detected by the direction detection unit, making the robot rotate in place or perform similar movements, so that the advancing direction of the robot turns toward the user's sound source direction. Such human-computer interaction is similar to a working person who, on being called, stops the work at hand and turns to a conversational state, which makes the interaction more humanized.
In some possible implementations, recognizing the sound source direction of the first voice instruction and turning the robot toward the sound source direction specifically comprises the following steps:
Step S7042: recognizing the sound source direction of the first voice instruction;
Step S7044: turning the robot toward the sound source direction without stopping the drive motor;
After the robot has received the first activation instruction, for example "come over", the robot detects the direction of the voice through the direction detection unit, for example by using the time difference or level of the voice input to the multiple receiving units. The direction detection unit transmits the detected direction of the voice to the control unit. The control unit can control the drive system using the voice direction detected by the direction detection unit, making the robot rotate in place or perform similar movements, so that the advancing direction of the robot turns toward the user's sound source direction. During this process the robot does not stop its working state, and the cleaning motor remains powered on.
Step S7046: stopping the drive motor.
After the robot has turned to the sound source direction, of all the drive systems the robot keeps only the speech recognition system in the activated state; at this point the robot is in a fully standby state and detects in real time whether a control command has been issued.
When the first voice instruction is received, the embodiments of the present application can first put the speech recognition system of the robot into the activated state and control the robot to turn toward the sound source direction of the voice, placing it in a standby state. A control command for executing an operation is then received within a certain period of time, and the expected action is carried out according to the control command. Operations can thus be carried out precisely as instructed, the speech recognition control effect in noisy conditions is improved, the recognition rate of the user's voice instructions is increased, the robot can work relatively accurately according to the user's voice instructions, and the interest of human-computer interaction is also increased.
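A minimal control sketch, under assumed interfaces, of steps S7042 to S7046: detect the sound source direction, turn toward it without first stopping the motor, then stop the motor and stand by. The `robot` and `direction_detector` objects and their methods are assumptions, not part of the application text.

```python
def handle_wake_instruction(robot, direction_detector):
    """Steps S7042-S7046: turn toward the sound source, then stand by for the next instruction."""
    angle_deg = direction_detector.estimate_angle()   # S7042: sound source direction of the first instruction
    robot.rotate_in_place(angle_deg)                  # S7044: turn while the motor keeps running
    robot.stop_cleaning_motor()                       # S7046: stop the motor; only speech recognition stays active
```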
Step S706: receiving a second voice instruction;
The second voice instruction is used to indicate an operation, that is, it instructs the robot to execute an operation. The operation can be user-defined or set by system default, such as a sweeping operation, a mopping operation, or a weeding operation. In specific embodiments, the second voice instruction can be user-defined or set by system default; for example, it can be a user-defined phrase such as "come here and clean", "come and clean", or "here". The second voice instruction (the control-class voice instruction) is pre-stored in the robot or in a cloud connected to the robot. For convenience of description, the second voice instruction "come here and clean" is taken as an example below.
In an optional implementation, the process of receiving the second voice instruction distinguishes whether a second voice instruction indicating an operation is actually received. This can be judged within a preset period, for example 1 minute or 2 minutes, and the period can be preset through a touch device. Depending on what is monitored within the preset time range, the following two cases are executed respectively.
In the first case, if it is determined that the second voice instruction is received, the robot executes the operation indicated by the second voice instruction.
For example, the control command "come here and clean" is detected within 1 minute; the robot moves toward the sound source direction according to the user's command until a third voice control command is received.
In the second case, if it is determined that the second voice instruction is not received, the robot turns back to its former direction and continues to execute its original operation.
For example, the control command "come here and clean" is not detected within 1 minute; the robot continues sweeping according to its original cleaning direction or position until the first voice control command is received again.
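A sketch of the monitoring window covering the two cases above: wait a preset period for the control-class instruction, move toward the sound source if it arrives, otherwise turn back and resume the original work. The timeout value, command label, and interfaces are assumptions for illustration.

```python
import time

def await_second_instruction(robot, listener, timeout_s=60):
    """Wait up to `timeout_s` (e.g. 1 minute) for the control-class voice instruction."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        command = listener.poll()                 # assumed: returns "GOTO_AND_CLEAN" or None
        if command == "GOTO_AND_CLEAN":
            robot.move_toward_sound_source()      # first case: execute the indicated operation
            return True
        time.sleep(0.1)                           # avoid busy-waiting
    robot.turn_back_and_resume()                  # second case: return to the original direction and work
    return False
```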
Step S708: recognizing the sound source position of the second voice instruction and moving the robot to the vicinity of the sound source position.
For example, the control command "come here and clean" is recognized; the robot moves toward the sound source direction according to the user's command until it reaches the sound source position or a third voice control command is received.
By designating locations through voice, the embodiments of the present application enable the robot to work accurately according to the user's instructions. The user can direct the sweeping robot by voice to clean a designated position, for example to clean the bedroom, to clean the living room, or to come over here and clean, so that the robot works purposefully according to the user's wishes, working efficiency is improved, and the user experience is improved.
In a further embodiment, as shown in Fig. 8 and applied to the robot in the application scenario of Fig. 1, an embodiment of the present application provides a robot voice control method comprising the following steps:
Step S802: receiving a first voice instruction;
Under normal conditions, the speech recognition system of the robot can have a dormant state and an activated state. For example, when the robot is in a working state or an unused state, the speech recognition system is in the dormant state; in the dormant state, the speech recognition system hardly occupies any extra resources of the robot and will not recognize voice instructions other than the first voice instruction.
If the speech recognition system in the dormant state receives the first voice instruction, the speech recognition system can be switched from the dormant state to the activated state. In the activated state, the speech recognition system can recognize the voice instructions configured in the speech recognition system, such as the first voice instruction, the second voice instruction, and so on.
Specifically, the first voice instruction is used to wake up the speech recognition system, that is, it instructs the speech recognition system to be in the activated state. In an implementation, if the speech recognition system is in the dormant state, the speech recognition system can be switched from the dormant state to the activated state when the robot receives the first voice instruction. If the speech recognition system is already in the activated state, the speech recognition system is kept in the activated state, or nothing needs to be done. In specific embodiments, the first voice instruction can be user-defined or set by system default; for example, it can be a user-defined phrase such as "turn on voice", "power on", "come", "come here", "come over here", or "over here". The first voice instruction (the wake-up-class voice instruction) is pre-stored in the robot or in a cloud connected to the robot. For convenience of description, the first voice instruction "come over" is taken as an example below.
Step S804: recognizing the sound source direction of the first voice instruction and turning the robot toward the sound source direction;
After the robot has received the first activation instruction, for example "come over", the robot detects the direction of the voice through the direction detection unit, for example by using the time difference or level of the voice input to the multiple receiving units. The direction detection unit transmits the detected direction of the voice to the control unit. The control unit can control the drive system using the voice direction detected by the direction detection unit, making the robot rotate in place or perform similar movements, so that the advancing direction of the robot turns toward the user's sound source direction. Such human-computer interaction is similar to a working person who, on being called, stops the work at hand and turns to a conversational state, which makes the interaction more humanized.
In some possible implementations, recognizing the sound source direction of the first voice instruction and turning the robot toward the sound source direction specifically comprises the following steps:
Step S8042: recognizing the sound source direction of the first voice instruction;
Step S8044: turning the robot toward the sound source direction without stopping the drive motor;
After the robot has received the first activation instruction, for example "come over", the robot detects the direction of the voice through the direction detection unit, for example by using the time difference or level of the voice input to the multiple receiving units. The direction detection unit transmits the detected direction of the voice to the control unit. The control unit can control the drive system using the voice direction detected by the direction detection unit, making the robot rotate in place or perform similar movements, so that the advancing direction of the robot turns toward the user's sound source direction. During this process the robot does not stop its working state, and the cleaning motor remains powered on.
Step S8046: stopping the drive motor.
After the robot has turned to the sound source direction, of all the drive systems the robot keeps only the speech recognition system in the activated state; at this point the robot is in a fully standby state and detects in real time whether a control command has been issued.
When the first voice instruction is received, the embodiments of the present application can first put the speech recognition system of the robot into the activated state and control the robot to turn toward the sound source direction of the voice, placing it in a standby state. A control command for executing an operation is then received within a certain period of time, and the expected action is carried out according to the control command. Operations can thus be carried out precisely as instructed, the speech recognition control effect in noisy conditions is improved, the recognition rate of the user's voice instructions is increased, the robot can work relatively accurately according to the user's voice instructions, and the interest of human-computer interaction is also increased.
Step S806: receiving a second voice instruction;
The second voice instruction is used to indicate an operation, that is, it instructs the robot to execute an operation. The operation can be user-defined or set by system default, such as a sweeping operation, a mopping operation, or a weeding operation. In specific embodiments, the second voice instruction can be user-defined or set by system default; for example, it can be a user-defined phrase such as "come here and clean", "come and clean", or "here". The second voice instruction (the control-class voice instruction) is pre-stored in the robot or in a cloud connected to the robot. For convenience of description, the second voice instruction "come here and clean" is taken as an example below.
In an optional implementation, the process of receiving the second voice instruction distinguishes whether a second voice instruction indicating an operation is actually received. This can be judged within a preset period, for example 1 minute or 2 minutes, and the period can be preset through a touch device. Depending on what is monitored within the preset time range, the following two cases are executed respectively.
In the first case, step S808: if it is determined that the second voice instruction is received, the robot executes the operation indicated by the second voice instruction.
For example, the control command "come here and clean" is detected within 1 minute; the robot moves toward the sound source direction according to the user's command until a third voice control command is received.
In the second case, step S810: if it is determined that the second voice instruction is not received, the robot turns back to its former direction and continues to execute its original operation.
For example, the control command "come here and clean" is not detected within 1 minute; the robot continues sweeping according to its original cleaning direction or position until the first voice control command is received again.
Step S808: recognizing the sound source position of the second voice instruction and moving the robot to the vicinity of the sound source position.
In some possible implementations, in order to improve the user experience, the robot is moved to the vicinity of the sound source position at a speed faster than the cleaning speed, for example at 1.5-3 times (preferably 1.5-2 times) the normal movement speed. During this process, obstacle avoidance still works and the robot decelerates when it encounters an obstacle, preventing danger caused by excessive speed.
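A sketch of the speed selection described above: approach the sound source faster than the normal cleaning speed, and slow back down near obstacles. The 1.5-3x range comes from the text; the base speed value and the interface are assumptions.

```python
CLEANING_SPEED_M_S = 0.3   # assumed normal cleaning speed

def approach_speed(obstacle_ahead, factor=1.5):
    """Speed used while moving to the vicinity of the sound source position.

    `factor` is chosen in the 1.5-3x range (preferably 1.5-2x); drop back to the
    cleaning speed when the obstacle-avoidance sensors report something ahead,
    to avoid the risk of moving too fast.
    """
    if obstacle_ahead:
        return CLEANING_SPEED_M_S
    return CLEANING_SPEED_M_S * factor
```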
Because the distance is determined from the sound source, it is affected by many factors such as the decibel level of the sound source and the reflection of the voice signal by indoor obstacles, so a certain error can arise. In order to further improve the positioning accuracy of the sound source position, after the robot hears the sound source instruction, sensor devices carried by the robot, such as a camera or a displacement sensor, confirm the distance of the sound source through recognition, thereby substantially improving the positioning accuracy.
Specifically, in some possible implementations, recognizing the sound source position of the second voice instruction and moving the robot to the vicinity of the sound source position comprises: recognizing the sound source position of the second voice instruction; confirming the sound source position by means of a sensor; and moving the robot to the vicinity of the sound source position.
It should be noted that the above auxiliary sensors are not required for determining the distance from the sound source; machine learning can also be used. Before the robot leaves the factory, machine learning is carried out on the correspondence between ordinary indoor speech decibel levels and distance, and the learned model is written into the robot's memory. As long as the user issues voice instructions at an ordinary volume in a not-too-complex indoor environment, the robot can roughly reach the vicinity of the sound source position, and after arriving nearby it issues a request for secondary confirmation, which basically meets the requirements.
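A sketch, under assumed values, of the fallback just described: estimate distance from the sound level using a correspondence learned before leaving the factory, and let an on-board sensor reading take priority when one is available. The calibration points and nearest-point lookup are illustrative only.

```python
# Assumed calibration points: (sound level in dB at the microphones, distance in metres).
DECIBEL_DISTANCE_TABLE = [(70.0, 1.0), (65.0, 2.0), (60.0, 4.0), (55.0, 6.0)]

def estimate_source_distance(level_db, sensor_distance_m=None):
    """Estimate the distance to the speaker.

    A camera/displacement-sensor measurement takes priority when available, as
    described in the text; otherwise fall back to the learned decibel-distance
    correspondence (nearest calibration point, for illustration only).
    """
    if sensor_distance_m is not None:
        return sensor_distance_m
    return min(DECIBEL_DISTANCE_TABLE, key=lambda point: abs(point[0] - level_db))[1]
```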
Step S812: third phonetic order is received;
Third phonetic order: confirmation operation, the i.e. operation of confirmation robot the second phonetic order correctness of execution are used for. The operation can be the operation of customized setting, be also possible to system default setting, such as: position is correct, positional fault, OK, Continue to clean, relocate etc..In the particular embodiment, third phonetic order customized can be arranged, can also be silent with system Recognize setting, such as: it is customized " position is correct " that third phonetic order can be user, " positional fault ", " OK " etc..Third voice The cloud that instruction (confirmation class phonetic order) is pre-stored within the robot or connect with the robot.For convenience of description, Hereafter it is illustrated so that the second phonetic order is " position is correct ", " positional fault " two class as an example.
Step S814: recognizing the content of the third voice instruction and executing the corresponding action according to the content of the third voice instruction.
In some possible implementations, recognizing the content of the third voice instruction and executing the corresponding action according to its content includes the following two cases:
In the first case, the third voice instruction is recognized as a position-correct instruction: the user confirms with the voice command "position correct" that the position of the robot is correct, and the robot starts to execute a local cleaning action, for example performing cleaning at a spot near the user.
In the second case, the third voice instruction is recognized as a position-wrong instruction: the user denies the robot's position with the voice command "position wrong", and the robot continues to move toward the sound source position of the "position wrong" instruction, moving to the vicinity of that sound source position and carrying out the confirmation process again, until a position-correct instruction is received, whereupon the robot starts to execute the local cleaning action, performing cleaning at a spot near the user.
By adopting a voice-designation mode, the embodiment of the present application enables the robot to work accurately according to the user's instructions. The user controls the sweeping robot by voice to clean a designated position, for example "clean the bedroom", "please clean the living room", "come here and clean", etc., so that the robot can perform purposeful work according to the user's wishes. Meanwhile, a sensor is added as a positioning means in the designated-position cleaning process, which increases the position recognition accuracy of the robot, improves working efficiency, and enhances the user experience.
In a further embodiment, as shown in FIG. 9 and used in conjunction with the robot in the application scenario of FIG. 1, the embodiment of the present application provides a robot voice control device, including a first receiving unit 902, a first recognition unit 904, a second receiving unit 906, and a second recognition unit 908; each unit is described as follows. The device shown in FIG. 9 can execute the method of the embodiment shown in FIG. 7, and for the parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in FIG. 7. For the execution process and technical effects of this technical solution, refer to the description in the embodiment shown in FIG. 7, which is not repeated here.
The first receiving unit 902 is configured to receive the first voice instruction;
the first recognition unit 904 is configured to recognize the sound source direction of the first voice instruction and steer the robot to the sound source direction;
the second receiving unit 906 is configured to receive the second voice instruction;
the second recognition unit 908 is configured to recognize the sound source position of the second voice instruction and move the robot to the vicinity of the sound source position.
For example, upon recognizing the control command "come here and clean", the robot moves toward the sound source direction according to the user's order, until it reaches the sound source position or receives a third voice control command.
By adopting a voice-designation mode, the embodiment of the present application enables the robot to work accurately according to the user's instructions. The user controls the sweeping robot by voice to clean a designated position, for example "clean the bedroom", "please clean the living room", "come here and clean", etc., so that the robot can perform purposeful work according to the user's wishes, which improves working efficiency and enhances the user experience.
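For illustration only, a minimal composition sketch of the four units of the FIG. 9 device; the unit interfaces (`receive`, `steer_to_source`, `move_to_source`) are assumed names, not the patent's API.

```python
class RobotVoiceControlDevice:
    """Minimal sketch of the FIG. 9 device: four cooperating units.
    Unit implementations (microphone array access, motion control)
    are assumed and only stubbed here for illustration."""

    def __init__(self, first_receiving_unit, first_recognition_unit,
                 second_receiving_unit, second_recognition_unit):
        self.first_receiving_unit = first_receiving_unit        # receives the 1st (wake-up) instruction
        self.first_recognition_unit = first_recognition_unit    # finds the direction, steers the robot
        self.second_receiving_unit = second_receiving_unit      # receives the 2nd (manipulation) instruction
        self.second_recognition_unit = second_recognition_unit  # locates the source, moves the robot

    def run_once(self):
        # e.g. "come here and clean": turn toward the speaker, then approach.
        first = self.first_receiving_unit.receive()
        self.first_recognition_unit.steer_to_source(first)
        second = self.second_receiving_unit.receive()
        self.second_recognition_unit.move_to_source(second)
```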
In a further embodiment, as shown in FIG. 10 and used in conjunction with the robot in the application scenario of FIG. 1, the embodiment of the present application provides a robot voice control device, including a first receiving unit 1002, a first recognition unit 1004, a second receiving unit 1006, a second recognition unit 1008, a third receiving unit 1010, and a third recognition unit 1012; each unit is described as follows. The device shown in FIG. 10 can execute the method of the embodiment shown in FIG. 8, and for the parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in FIG. 8. For the execution process and technical effects of this technical solution, refer to the description in the embodiment shown in FIG. 8, which is not repeated here.
The first receiving unit 1002 is configured to receive the first voice instruction;
the first recognition unit 1004 is configured to recognize the sound source direction of the first voice instruction and steer the robot to the sound source direction;
the second receiving unit 1006 is configured to receive the second voice instruction;
the second recognition unit 1008 is configured to recognize the sound source position of the second voice instruction and move the robot to the vicinity of the sound source position;
the third receiving unit 1010 is configured to receive the third voice instruction;
the third recognition unit 1012 is configured to recognize the content of the third voice instruction and execute a corresponding action according to the content of the third voice instruction.
In some possible implementations, recognizing the content of the third voice instruction and executing a corresponding action according to the content of the third voice instruction includes the following two cases:
In the first case, the third voice instruction is recognized as a position-correct class instruction: the user confirms the robot's position through the "position correct" voice command, and the robot starts to execute a local cleaning action, for example executing a cleaning action at a position near the user.
In the second case, the third voice instruction is recognized as a position-wrong class instruction: the user denies the robot's position through the "position wrong" voice command, and the robot continues to move toward the sound source position of the "position wrong" instruction. It keeps moving to the vicinity of that sound source position and performs the reconfirmation process there, until a position-correct class instruction is received, at which point the robot starts to execute the local cleaning action, i.e., a cleaning action at a position near the user.
In some possible implementations, the first recognition unit 1004 is further configured to:
recognize the sound source direction of the first voice instruction;
steer the robot to the sound source direction without stopping the operation of the driving motor; and
stop the operation of the driving motor.
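A minimal sketch of this behavior, assuming an illustrative `robot` interface (none of these method names come from the patent):

```python
def steer_to_sound_source(robot, target_heading_rad, tolerance_rad=0.05):
    """Minimal sketch of the first recognition unit's behavior: turn the
    robot toward the recognized direction without stopping the driving
    motor, then stop the motor once the heading is reached."""
    while abs(robot.current_heading() - target_heading_rad) > tolerance_rad:
        # Keep the driving motor running and rotate toward the source.
        robot.rotate_step(toward=target_heading_rad)
    # Heading reached: stop the driving motor and wait for the second
    # (manipulation-class) voice instruction.
    robot.stop_driving_motor()
```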
In some possible implementations, the third recognition unit 1012 is further configured to:
recognize the third voice instruction as a position-correct class instruction, whereupon the robot starts to execute a local cleaning action.
In some possible implementations, the third recognition unit 1012 is further configured to:
recognize the third voice instruction as a position-wrong class instruction, whereupon the robot continues to move toward the sound source position indicated by the position-wrong class instruction,
until a position-correct class instruction is received, whereupon the robot starts to execute the local cleaning action.
In some possible implementations, the second recognition unit 1008 is further configured to:
recognize the sound source position of the second voice instruction;
confirm the sound source position by a sensor; and
move the robot to the vicinity of the sound source position.
In some possible implementations, moving the robot to the vicinity of the sound source position includes:
moving the robot to the vicinity of the sound source position at a movement speed faster than that used during cleaning.
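A minimal sketch of the faster-than-cleaning approach speed; the speed values and the `robot` interface are illustrative assumptions.

```python
CLEANING_SPEED_M_S = 0.25   # assumed typical sweeping speed
APPROACH_SPEED_M_S = 0.5    # assumed faster approach speed

def approach_sound_source(robot, sound_source_position):
    """Minimal sketch: approach the speaker faster than the normal
    cleaning speed, then fall back to cleaning speed on arrival."""
    robot.set_speed(APPROACH_SPEED_M_S)
    robot.move_to(sound_source_position)
    robot.set_speed(CLEANING_SPEED_M_S)
```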
In some possible implementations, the first voice instruction is a wake-up class voice instruction and the second voice instruction is a manipulation class voice instruction.
In some possible implementations, the wake-up class voice instruction and the manipulation class voice instruction are pre-stored in the robot or in a cloud connected to the robot.
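A minimal sketch of looking an instruction up first in the robot's local store and then in the connected cloud; the phrase examples and the `cloud_client.lookup` call are assumptions, not the patent's API.

```python
def match_instruction(audio_text, local_store, cloud_client=None):
    """Minimal sketch of instruction lookup: wake-up-class and
    manipulation-class phrases may be pre-stored on the robot or in a
    connected cloud. `local_store` is a dict of phrase -> class;
    `cloud_client` is an assumed optional client with a `lookup` method."""
    if audio_text in local_store:
        return local_store[audio_text]
    if cloud_client is not None:
        return cloud_client.lookup(audio_text)   # assumed cloud-side lookup
    return None

# Illustrative usage (phrases are examples only):
# local_store = {"hello robot": "wake-up", "come here and clean": "manipulation"}
# match_instruction("come here and clean", local_store)  # -> "manipulation"
```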
By adopting a voice-designation mode, the embodiment of the present application enables the robot to work accurately according to the user's instructions. The user controls the sweeping robot by voice to clean a designated position, for example "clean the bedroom", "please clean the living room", "come here and clean", etc., so that the robot can perform purposeful work according to the user's wishes. Meanwhile, a sensor is added as a positioning means in the designated-position cleaning process, which increases the position recognition accuracy of the robot, improves working efficiency, and enhances the user experience.
The embodiment of the present application provides a robot, including any of the robot voice control devices described above.
The embodiment of the present application provides a robot, including a processor and a memory, the memory storing computer program instructions executable by the processor; when the processor executes the computer program instructions, the method steps of any of the foregoing embodiments are implemented.
The embodiment of the present application provides a non-transitory computer-readable storage medium storing computer program instructions; when the computer program instructions are called and executed by a processor, the method steps of any of the foregoing embodiments are implemented.
As shown in FIG. 11, the robot 1100 may include a processing device (such as a central processing unit, a graphics processor, etc.) 1101, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1102 or a program loaded from a storage device 1108 into a random access memory (RAM) 1103. Various programs and data required for the operation of the electronic robot 1100 are also stored in the RAM 1103. The processing device 1101, the ROM 1102, and the RAM 1103 are connected to one another through a bus 1104. An input/output (I/O) interface 1105 is also connected to the bus 1104.
In general, the following devices may be connected to the I/O interface 1105: input devices 1106 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 1107 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 1108 including, for example, a magnetic tape and a hard disk; and a communication device 1109. The communication device 1109 may allow the electronic robot 1100 to communicate wirelessly or by wire with other robots to exchange data. Although FIG. 11 shows the electronic robot 1100 with various devices, it should be understood that it is not required to implement or provide all of the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 1109, or installed from the storage device 1108, or installed from the ROM 1102. When the computer program is executed by the processing device 1101, the above-described functions defined in the method of the embodiment of the present disclosure are executed.
It should be noted that the above computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program, where the program may be used by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to an electric wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
The above computer-readable medium may be included in the above robot, or may exist separately without being assembled into the robot.
The computer program code for executing the operations of the present disclosure may be written in one or more programming languages or a combination thereof. The above programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not, under certain circumstances, constitute a limitation on the unit itself; for example, a first acquiring unit may also be described as "a unit for acquiring at least two Internet Protocol addresses".
The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place, or they may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative labor.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application and not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or replace some of the technical features with equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (21)

1. A robot voice control method, characterized in that the method comprises:
receiving a first voice instruction;
recognizing the sound source direction of the first voice instruction and steering the robot to the sound source direction;
receiving a second voice instruction; and
recognizing the sound source position of the second voice instruction and moving the robot to the vicinity of the sound source position.
2. The method according to claim 1, wherein recognizing the sound source direction of the first voice instruction and steering the robot to the sound source direction comprises:
recognizing the sound source direction of the first voice instruction;
steering the robot to the sound source direction without stopping the operation of the driving motor; and
stopping the operation of the driving motor.
3. The method according to claim 1, wherein, after recognizing the sound source position of the second voice instruction and moving the robot to the vicinity of the sound source position, the method further comprises:
receiving a third voice instruction; and
recognizing the content of the third voice instruction and executing a corresponding action according to the content of the third voice instruction.
4. The method according to claim 3, wherein recognizing the content of the third voice instruction and executing a corresponding action according to the content of the third voice instruction comprises:
recognizing the third voice instruction as a position-correct class instruction, whereupon the robot starts to execute a local cleaning action.
5. The method according to claim 3, wherein recognizing the content of the third voice instruction and executing a corresponding action according to the content of the third voice instruction comprises:
recognizing the third voice instruction as a position-wrong class instruction, whereupon the robot continues to move toward the sound source position indicated by the position-wrong class instruction,
until a position-correct class instruction is received, whereupon the robot starts to execute the local cleaning action.
6. The method according to any one of claims 1-5, wherein recognizing the sound source position of the second voice instruction and moving the robot to the vicinity of the sound source position comprises:
recognizing the sound source position of the second voice instruction;
confirming the sound source position by a sensor; and
moving the robot to the vicinity of the sound source position.
7. The method according to claim 6, wherein moving the robot to the vicinity of the sound source position comprises:
moving the robot to the vicinity of the sound source position at a movement speed faster than that used during cleaning.
8. The method according to claim 6, wherein:
the first voice instruction is a wake-up class voice instruction, and the second voice instruction is a manipulation class voice instruction.
9. The method according to claim 8, wherein: the wake-up class voice instruction and the manipulation class voice instruction are pre-stored in the robot or in a cloud connected to the robot.
10. A robot voice control device, characterized by comprising:
a first receiving unit, configured to receive a first voice instruction;
a first recognition unit, configured to recognize the sound source direction of the first voice instruction and steer the robot to the sound source direction;
a second receiving unit, configured to receive a second voice instruction; and
a second recognition unit, configured to recognize the sound source position of the second voice instruction and move the robot to the vicinity of the sound source position.
11. The device according to claim 10, wherein the first recognition unit is further configured to:
recognize the sound source direction of the first voice instruction;
steer the robot to the sound source direction without stopping the operation of the driving motor; and
stop the operation of the driving motor.
12. The device according to claim 10, further comprising:
a third receiving unit, configured to receive a third voice instruction; and
a third recognition unit, configured to recognize the content of the third voice instruction and execute a corresponding action according to the content of the third voice instruction.
13. The device according to claim 12, wherein the third recognition unit is further configured to:
recognize the third voice instruction as a position-correct class instruction, whereupon the robot starts to execute a local cleaning action.
14. The device according to claim 12, wherein the third recognition unit is further configured to:
recognize the third voice instruction as a position-wrong class instruction, whereupon the robot continues to move toward the sound source position indicated by the position-wrong class instruction,
until a position-correct class instruction is received, whereupon the robot starts to execute the local cleaning action.
15. The device according to any one of claims 10-14, wherein the second recognition unit is further configured to:
recognize the sound source position of the second voice instruction;
confirm the sound source position by a sensor; and
move the robot to the vicinity of the sound source position.
16. The device according to claim 15, wherein moving the robot to the vicinity of the sound source position comprises:
moving the robot to the vicinity of the sound source position at a movement speed faster than that used during cleaning.
17. The device according to claim 15, wherein:
the first voice instruction is a wake-up class voice instruction, and the second voice instruction is a manipulation class voice instruction.
18. The device according to claim 17, wherein: the wake-up class voice instruction and the manipulation class voice instruction are pre-stored in the robot or in a cloud connected to the robot.
19. A robot voice control device, characterized by comprising a processor and a memory, the memory storing computer program instructions executable by the processor; when the processor executes the computer program instructions, the method steps of any one of claims 1-9 are implemented.
20. A robot, characterized by comprising the device according to any one of claims 10-19.
21. A non-transitory computer-readable storage medium, characterized in that it stores computer program instructions; when the computer program instructions are called and executed by a processor, the method steps of any one of claims 1-9 are implemented.
CN201910265952.9A 2019-04-03 2019-04-03 Voice control method and device for sweeping robot, robot and medium Active CN110051289B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910265952.9A CN110051289B (en) 2019-04-03 2019-04-03 Voice control method and device for sweeping robot, robot and medium
CN202210225162.XA CN114468898B (en) 2019-04-03 2019-04-03 Robot voice control method, device, robot and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910265952.9A CN110051289B (en) 2019-04-03 2019-04-03 Voice control method and device for sweeping robot, robot and medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210225162.XA Division CN114468898B (en) 2019-04-03 2019-04-03 Robot voice control method, device, robot and medium

Publications (2)

Publication Number Publication Date
CN110051289A true CN110051289A (en) 2019-07-26
CN110051289B CN110051289B (en) 2022-03-29

Family

ID=67318233

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210225162.XA Active CN114468898B (en) 2019-04-03 2019-04-03 Robot voice control method, device, robot and medium
CN201910265952.9A Active CN110051289B (en) 2019-04-03 2019-04-03 Voice control method and device for sweeping robot, robot and medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202210225162.XA Active CN114468898B (en) 2019-04-03 2019-04-03 Robot voice control method, device, robot and medium

Country Status (1)

Country Link
CN (2) CN114468898B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110379424A (en) * 2019-07-29 2019-10-25 方毅 A method of it is accurate to a little by voice control
CN110428850A (en) * 2019-08-02 2019-11-08 深圳市无限动力发展有限公司 Voice pick-up method, device, storage medium and mobile robot
CN110881909A (en) * 2019-12-20 2020-03-17 小狗电器互联网科技(北京)股份有限公司 Control method and device of sweeper
CN110946518A (en) * 2019-12-20 2020-04-03 小狗电器互联网科技(北京)股份有限公司 Control method and device of sweeper
CN111261012A (en) * 2020-01-19 2020-06-09 佛山科学技术学院 Pneumatic teaching trolley
CN111358368A (en) * 2020-03-05 2020-07-03 宁波大学 Manual guide type floor sweeping robot
CN112155485A (en) * 2020-09-14 2021-01-01 江苏美的清洁电器股份有限公司 Control method, control device, cleaning robot and storage medium
WO2021022420A1 (en) * 2019-08-02 2021-02-11 深圳市无限动力发展有限公司 Audio collection method, apparatus, and mobile robot
CN112890680A (en) * 2019-11-19 2021-06-04 科沃斯机器人股份有限公司 Follow-up cleaning operation method, control method, device, robot and storage medium
CN113739322A (en) * 2021-08-20 2021-12-03 科沃斯机器人股份有限公司 Purifier and control method thereof

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001057853A1 (en) * 2000-01-31 2001-08-09 Japan Science And Technology Corporation Robot auditory device
CN104934033A (en) * 2015-04-21 2015-09-23 深圳市锐曼智能装备有限公司 Control method of robot sound source positioning and awakening identification and control system of robot sound source positioning and awakening identification
CN106328132A (en) * 2016-08-15 2017-01-11 歌尔股份有限公司 Voice interaction control method and device for intelligent equipment
CN108814449A (en) * 2018-07-30 2018-11-16 马鞍山问鼎网络科技有限公司 A kind of artificial intelligence sweeping robot control method based on phonetic order
CN109202897A (en) * 2018-08-07 2019-01-15 北京云迹科技有限公司 Information transferring method and system
CN109346069A (en) * 2018-09-14 2019-02-15 北京赋睿智能科技有限公司 A kind of interactive system and device based on artificial intelligence
CN109358751A (en) * 2018-10-23 2019-02-19 北京猎户星空科技有限公司 A kind of wake-up control method of robot, device and equipment
CN109377991A (en) * 2018-09-30 2019-02-22 珠海格力电器股份有限公司 Intelligent equipment control method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3771812B2 (en) * 2001-05-28 2006-04-26 インターナショナル・ビジネス・マシーンズ・コーポレーション Robot and control method thereof
KR101356165B1 (en) * 2012-03-09 2014-01-24 엘지전자 주식회사 Robot cleaner and controlling method of the same
CN105957521B (en) * 2016-02-29 2020-07-10 青岛克路德机器人有限公司 Voice and image composite interaction execution method and system for robot
CN109093627A (en) * 2017-06-21 2018-12-28 富泰华工业(深圳)有限公司 intelligent robot
CN108831483A (en) * 2018-09-07 2018-11-16 马鞍山问鼎网络科技有限公司 A kind of artificial intelligent voice identifying system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001057853A1 (en) * 2000-01-31 2001-08-09 Japan Science And Technology Corporation Robot auditory device
CN104934033A (en) * 2015-04-21 2015-09-23 深圳市锐曼智能装备有限公司 Control method of robot sound source positioning and awakening identification and control system of robot sound source positioning and awakening identification
CN106328132A (en) * 2016-08-15 2017-01-11 歌尔股份有限公司 Voice interaction control method and device for intelligent equipment
CN108814449A (en) * 2018-07-30 2018-11-16 马鞍山问鼎网络科技有限公司 A kind of artificial intelligence sweeping robot control method based on phonetic order
CN109202897A (en) * 2018-08-07 2019-01-15 北京云迹科技有限公司 Information transferring method and system
CN109346069A (en) * 2018-09-14 2019-02-15 北京赋睿智能科技有限公司 A kind of interactive system and device based on artificial intelligence
CN109377991A (en) * 2018-09-30 2019-02-22 珠海格力电器股份有限公司 Intelligent equipment control method and device
CN109358751A (en) * 2018-10-23 2019-02-19 北京猎户星空科技有限公司 A kind of wake-up control method of robot, device and equipment

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110379424A (en) * 2019-07-29 2019-10-25 方毅 A method of it is accurate to a little by voice control
CN110379424B (en) * 2019-07-29 2021-11-02 方毅 Method for controlling accurate point reaching through voice
WO2021022420A1 (en) * 2019-08-02 2021-02-11 深圳市无限动力发展有限公司 Audio collection method, apparatus, and mobile robot
CN110428850A (en) * 2019-08-02 2019-11-08 深圳市无限动力发展有限公司 Voice pick-up method, device, storage medium and mobile robot
CN112890680B (en) * 2019-11-19 2023-12-12 科沃斯机器人股份有限公司 Follow-up cleaning operation method, control device, robot and storage medium
CN112890680A (en) * 2019-11-19 2021-06-04 科沃斯机器人股份有限公司 Follow-up cleaning operation method, control method, device, robot and storage medium
CN110946518A (en) * 2019-12-20 2020-04-03 小狗电器互联网科技(北京)股份有限公司 Control method and device of sweeper
CN110881909A (en) * 2019-12-20 2020-03-17 小狗电器互联网科技(北京)股份有限公司 Control method and device of sweeper
CN111261012A (en) * 2020-01-19 2020-06-09 佛山科学技术学院 Pneumatic teaching trolley
CN111261012B (en) * 2020-01-19 2022-01-28 佛山科学技术学院 Pneumatic teaching trolley
CN111358368A (en) * 2020-03-05 2020-07-03 宁波大学 Manual guide type floor sweeping robot
CN112155485A (en) * 2020-09-14 2021-01-01 江苏美的清洁电器股份有限公司 Control method, control device, cleaning robot and storage medium
CN113739322A (en) * 2021-08-20 2021-12-03 科沃斯机器人股份有限公司 Purifier and control method thereof

Also Published As

Publication number Publication date
CN114468898B (en) 2023-05-05
CN114468898A (en) 2022-05-13
CN110051289B (en) 2022-03-29

Similar Documents

Publication Publication Date Title
CN110051289A (en) Robot voice control method and device, robot and medium
US20220167820A1 (en) Method and Apparatus for Constructing Map of Working Region for Robot, Robot, and Medium
CN110495821B (en) Cleaning robot and control method thereof
CN109920424A (en) Robot voice control method and device, robot and medium
CN110623606B (en) Cleaning robot and control method thereof
TWI821992B (en) Cleaning robot and control method thereof
CN110136704A (en) Robot voice control method and device, robot and medium
US20220125270A1 (en) Method for controlling automatic cleaning device, automatic cleaning device, and non-transitory storage medium
JP2019505256A (en) Automatic cleaning equipment and cleaning method
CN109932726A (en) Robot ranging calibration method and device, robot and medium
CN109920425A (en) Robot voice control method and device, robot and medium
CN210931181U (en) Cleaning robot
CN113625700B (en) Self-walking robot control method, device, self-walking robot and storage medium
CN210931183U (en) Cleaning robot
CN117008148A (en) Method, apparatus and storage medium for detecting slip state
CN116942017A (en) Automatic cleaning device, control method, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220424

Address after: 102200 No. 8008, floor 8, building 16, yard 37, Chaoqian Road, Changping Park, Zhongguancun Science and Technology Park, Changping District, Beijing

Patentee after: Beijing Stone Innovation Technology Co.,Ltd.

Address before: No. 6016, 6017 and 6018, Block C, No. 8 Heiquan Road, Haidian District, Beijing 100085

Patentee before: Beijing Roborock Technology Co.,Ltd.