CN109920424A - Robot voice control method and device, robot and medium - Google Patents

Robot voice control method and device, robot and medium

Info

Publication number
CN109920424A
CN109920424A (application CN201910265953.3A)
Authority
CN
China
Prior art keywords
robot
unit
travel trajectory
voice instruction
three-dimensional map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910265953.3A
Other languages
Chinese (zh)
Inventor
刘帅
刘洋
肖福建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Stone Innovation Technology Co ltd
Original Assignee
Beijing Rockrobo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Rockrobo Technology Co Ltd filed Critical Beijing Rockrobo Technology Co Ltd
Priority to CN201910265953.3A priority Critical patent/CN109920424A/en
Publication of CN109920424A publication Critical patent/CN109920424A/en
Pending legal-status Critical Current

Abstract

The embodiments of the present application provide a robot voice control method, a device, a robot, and a medium. The robot voice control method comprises the following steps: receiving a voice instruction; recognizing the content of the voice instruction and determining the robot's working region according to that content; and executing the voice instruction within the working region. By designating regions by voice, the embodiments enable the robot to work precisely according to the user's instructions. In particular, when a sweeping robot cleans for the first time, it automatically partitions the indoor layout and records the partitions in internal memory or in the cloud; the next time the user wants a certain region cleaned, a voice command is enough to direct the robot there. The robot can thus work purposefully according to the user's wishes, which increases the precision of voice control over the robot and improves working efficiency.

Description

Robot voice control method, device, robot and medium
Technical field
The present application relates to the field of control technology, and in particular to a robot voice control method and device, a robot, and a medium.
Background technique
With the development of technology, various robots equipped with speech recognition systems have appeared, such as sweeping robots, floor-mopping robots, vacuum cleaners, and weeders. Through their speech recognition systems, these robots can receive voice instructions input by the user and execute the operations those instructions indicate, which not only frees up labor but also saves labor costs.
In the related art, such a robot can receive a voice instruction input by the user and recognize it with its own speech recognition system, which then controls the robot to execute the indicated operation. However, when controlling the robot, the user still wants to direct it precisely to a designated position to do the corresponding work, for example instructing a sweeping robot to clean a designated place (partitioned cleaning, e.g., cleaning a designated bedroom or a designated living room). The approach currently taken is that the user sends a control command through a two-dimensional indoor map on a mobile phone to specify the robot's cleaning region; the robot partitions the region to be cleaned according to relative bearings, moves to that region, and performs the cleaning action.
Since most current sweeping robots cannot generate three-dimensional maps, the maps of most sweepers are two-dimensional and rather abstract. From the map on the mobile phone, it is difficult for the user to accurately locate the actual indoor region to be cleaned, and because the phone screen is small, the region marked on the map often deviates from the region that actually needs cleaning. As a result, the robot cannot clean precisely as the user intends, leading to a poor user experience.
Summary of the invention
In view of this, the embodiments of the present application provide a robot voice control method and device, a robot, and a storage medium, so that the robot can work precisely at a designated position according to a voice instruction.
In a first aspect, an embodiment of the present application provides a robot voice control method, the method comprising:
receiving a voice instruction;
recognizing the content of the voice instruction, and determining the robot's working region according to the content;
executing the voice instruction in the working region.
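As an illustrative sketch only (not part of the patent text), the three steps above — receive, recognize, execute — can be outlined as follows. The function and region names here are invented for the example, and the "recognition" step is reduced to a simple substring match standing in for a real speech recognition system:

```python
def handle_voice_instruction(instruction, region_map):
    """Dispatch a recognized voice instruction to a working region.

    `region_map` maps region names to region identifiers; the names
    used below ("bedroom", "living room") are illustrative assumptions.
    """
    content = instruction.strip().lower()              # step 1: receive the instruction
    for region_name, region_id in region_map.items():  # step 2: recognize its content
        if region_name in content:
            return f"executing in region {region_id}"  # step 3: execute in that region
    return "no matching region"

print(handle_voice_instruction("Please sweep the bedroom",
                               {"bedroom": "R1", "living room": "R2"}))
# prints: executing in region R1
```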
In some possible implementations, determining the robot's working region according to the content comprises:
searching, according to the content of the voice instruction, the robot's prestored location information;
determining the robot's working region according to the location information.
In some possible implementations, the prestored location information is obtained by:
marking specific positions in the robot's travel trajectory;
drawing a three-dimensional map of the travel trajectory according to the specific positions;
storing the three-dimensional travel-trajectory map to obtain the prestored location information.
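The marking-and-storing flow above can be sketched as a minimal data-structure exercise. This is an assumption-laden illustration, not the patent's implementation: the point format, the `separator` flag (standing in for a laser- or vision-detected region boundary such as a doorway), and the region-naming scheme are all invented here:

```python
def build_prestored_locations(trajectory):
    """Split a travel trajectory into regions at marked separator points.

    Each trajectory point is a dict with a "pos" coordinate and an optional
    "separator" flag meaning laser scanning / visual recognition marked it
    as a boundary between regions. Field names are illustrative assumptions.
    """
    regions = {}
    region_id = 0
    current = []
    for point in trajectory:
        current.append(point["pos"])
        if point.get("separator"):            # a marked position closes a region
            regions[f"region_{region_id}"] = current
            region_id += 1
            current = []
    if current:                               # trailing points form the last region
        regions[f"region_{region_id}"] = current
    return regions                            # this table would be stored locally or in the cloud

track = [{"pos": (0, 0)}, {"pos": (1, 0), "separator": True}, {"pos": (2, 0)}]
print(build_prestored_locations(track))
# prints: {'region_0': [(0, 0), (1, 0)], 'region_1': [(2, 0)]}
```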
In some possible implementations, marking specific positions in the robot's travel trajectory comprises:
obtaining specific position information in the robot's travel trajectory by laser scanning or camera-based visual recognition;
marking the specific position information as separations between different regions.
In some possible implementations, after drawing the three-dimensional travel-trajectory map according to the specific positions, the method comprises:
sending the three-dimensional travel-trajectory map to a client;
receiving the user's manual labeling information for each region of the three-dimensional travel-trajectory map.
In some possible implementations, storing the three-dimensional travel-trajectory map to obtain the prestored location information comprises:
storing the three-dimensional travel-trajectory map in the robot, or in a cloud connected to the robot, to obtain the prestored location information.
In a second aspect, an embodiment of the present application provides a robot voice control device, comprising:
a receiving unit, configured to receive a voice instruction;
a recognition unit, configured to recognize the content of the voice instruction and determine the robot's working region according to the content;
an execution unit, configured to execute the voice instruction in the working region.
In some possible implementations, the recognition unit further comprises:
a searching unit, configured to search, according to the content of the voice instruction, the robot's prestored location information;
a determination unit, configured to determine the robot's working region according to the location information.
In some possible implementations, the searching unit comprises:
a marking unit, configured to mark specific positions in the robot's travel trajectory;
a drawing unit, configured to draw the three-dimensional travel-trajectory map according to the specific positions;
a storage unit, configured to store the three-dimensional travel-trajectory map to obtain the prestored location information.
In some possible implementations, the marking unit is further configured to:
obtain specific position information in the robot's travel trajectory by laser scanning or camera-based visual recognition;
mark the specific position information as separations between different regions.
In some possible implementations, the searching unit further comprises:
a transmission unit, configured to send the three-dimensional travel-trajectory map to a client;
a receiving unit, configured to receive the user's manual labeling information for each region of the three-dimensional travel-trajectory map.
In some possible implementations, the storage unit is further configured to:
store the three-dimensional travel-trajectory map in the robot, or in a cloud connected to the robot, to obtain the prestored location information.
In a third aspect, an embodiment of the present application provides a robot voice control device comprising a processor and a memory, the memory storing computer program instructions executable by the processor; when executing the computer program instructions, the processor implements any of the method steps described above.
In a fourth aspect, an embodiment of the present application provides a robot comprising any of the devices described above.
In a fifth aspect, an embodiment of the present application provides a non-transitory computer-readable storage medium storing computer program instructions which, when called and executed by a processor, implement any of the method steps described above.
Compared with the prior art, the present invention has at least the following technical effects:
By designating regions by voice, the embodiments of the present application enable the robot to work precisely according to the user's instructions. In particular, when a sweeping robot cleans for the first time, it automatically partitions the indoor layout and records the partitions in internal memory or in the cloud. The next time the user wants a certain region cleaned, the sweeping robot can be directed by voice (for example, "please sweep the bedroom", "sweep the living room", "sweep the living room twice"), so that the robot works purposefully according to the user's wishes, which increases the precision of voice control over the robot and improves working efficiency.
Detailed description of the invention
To explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application;
Fig. 2 is a top view of the robot structure provided by an embodiment of the present application;
Fig. 3 is a bottom view of the robot structure provided by an embodiment of the present application;
Fig. 4 is a front view of the robot structure provided by an embodiment of the present application;
Fig. 5 is a perspective view of the robot structure provided by an embodiment of the present application;
Fig. 6 is a block diagram of the robot structure provided by an embodiment of the present application;
Fig. 7 is a schematic flowchart of the robot voice control method provided by one embodiment of the present application;
Fig. 8 is a schematic flowchart of the robot voice control method provided by another embodiment of the present application;
Fig. 9 is a schematic structural diagram of the robot voice control device provided by one embodiment of the present application;
Fig. 10 is a schematic structural diagram of the robot voice control device provided by another embodiment of the present application;
Fig. 11 is a schematic structural diagram of the robot voice control device provided by yet another embodiment of the present application;
Fig. 12 is a schematic diagram of the electronic structure of the robot provided by an embodiment of the present application.
Specific embodiment
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
It should be understood that although the terms "first", "second", "third", and so on may be used in the embodiments of the present application to describe various elements, these elements should not be limited by the terms; the terms are only used to distinguish elements from one another. For example, without departing from the scope of the embodiments of the present application, a "first" element could also be termed a "second" element, and similarly a "second" element could be termed a "first" element.
To describe the behavior of the robot clearly, the following directions are defined:
As shown in Fig. 5, the robot 100 can travel on the ground through various combinations of movement relative to the following three mutually perpendicular axes defined by the main body 110: the front-rear axis X, the lateral axis Y, and the central vertical axis Z. The forward drive direction along the front-rear axis X is denoted "forward", and the rearward drive direction along the front-rear axis X is denoted "backward". The lateral axis Y extends between the right wheel and the left wheel of the robot, substantially along the axle defined by the center points of the driving wheel modules 141.
The robot 100 can rotate about the Y axis: when the forward portion of the robot 100 tilts upward and the backward portion tilts downward, this is "pitching up"; when the forward portion tilts downward and the backward portion tilts upward, this is "pitching down". In addition, the robot 100 can rotate about the Z axis: in the robot's forward direction, deviating toward the right side of the X axis is "turning right", and deviating toward the left side of the X axis is "turning left".
Referring to Fig. 1, a possible application scenario provided by an embodiment of the present application includes a robot, such as a sweeping robot, a floor-mopping robot, a vacuum cleaner, or a weeder. In certain embodiments, the robot may specifically be a sweeping robot or a floor-mopping robot. In an implementation, the robot may be provided with a speech recognition system to receive the voice instruction issued by the user and rotate in the direction of the arrow according to the voice instruction, so as to respond to it. The robot may also be provided with a speech output device to output prompt voices. In other embodiments, the robot may be provided with a touch-sensitive display to receive operation instructions input by the user. The robot may also be provided with wireless communication modules such as a WiFi module or a Bluetooth module to connect with an intelligent terminal and receive, through the wireless communication module, operation instructions transmitted by the user via the intelligent terminal.
The structure of the relevant robot is described as follows, as shown in Figs. 2-5:
The robot 100 includes a machine body 110, a perception system 120, a control system, a drive system 140, a cleaning system, an energy system, and a human-machine interaction system 170, as shown in Fig. 2.
The machine body 110 includes a forward portion 111 and a backward portion 112 and has an approximately circular shape (circular at both front and rear); it may also have other shapes, including but not limited to an approximate D shape with a flat front and a rounded rear.
As shown in Fig. 4, the perception system 120 includes a position determining device 121 on top of the machine body 110, a buffer 122 on the forward portion 111 of the machine body 110, a cliff sensor 123, and sensing devices such as an ultrasonic sensor, an infrared sensor, a magnetometer, an accelerometer, a gyroscope, and an odometer, which provide various position and motion state information of the machine to the control system 130. The position determining device 121 includes, but is not limited to, a camera or a laser distance sensor (LDS). Below, a laser distance sensor based on triangulation ranging is taken as an example to explain how position determination is performed. The basic principle of triangulation ranging is the proportional relationship of similar triangles, which is not elaborated here.
The laser distance sensor includes a light-emitting unit and a light-receiving unit. The light-emitting unit may include a light source that emits light; the light source may include a light-emitting element, such as an infrared or visible-light light-emitting diode (LED) that emits infrared light or visible light. Preferably, the light source may be an element that emits a laser beam. In this embodiment, a laser diode (LD) is taken as an example of the light source. Specifically, owing to the monochromatic, directional, and collimated nature of a laser beam, measurement with a laser light source can be more accurate than with other light. For example, compared with a laser beam, the infrared light or visible light emitted by an LED is affected by the surrounding environment (such as the color or texture of objects), which may reduce measurement accuracy. The laser diode (LD) may be a point laser, which measures two-dimensional position information of an obstacle, or a line laser, which measures three-dimensional position information of an obstacle within a certain range.
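For readers unfamiliar with the similar-triangles principle mentioned above, a minimal numerical sketch follows. It uses the textbook triangulation relation d = f·s / x (distance from focal length, emitter-sensor baseline, and pixel offset of the returned spot); the specific parameter values are illustrative assumptions, not taken from the patent:

```python
def triangulation_distance(baseline_m, focal_px, pixel_offset_px):
    """Obstacle distance by triangulation: d = f * s / x.

    baseline_m      -- separation between light-emitting and light-receiving units (m)
    focal_px        -- focal length of the sensing lens, in pixel units
    pixel_offset_px -- offset of the laser spot on the image sensor (px)
    """
    if pixel_offset_px <= 0:
        raise ValueError("laser spot not detected on the sensor")
    return focal_px * baseline_m / pixel_offset_px

# e.g. 5 cm baseline, 700 px focal length, 10 px spot offset:
print(triangulation_distance(0.05, 700.0, 10.0))  # prints 3.5 (meters)
```

Note how the distance grows as the pixel offset shrinks: at long range the spot moves by less than a pixel per meter, which is exactly the sensor-resolution limit on long-distance measurement discussed below.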
The light-receiving unit may include an image sensor on which a light spot reflected or scattered by an obstacle is formed. The image sensor may be a set of unit pixels in a single row or in multiple rows. These light-receiving elements can convert optical signals into electrical signals. The image sensor may be a complementary metal-oxide-semiconductor (CMOS) sensor or a charge-coupled device (CCD) sensor; a CMOS sensor is preferred for its cost advantage. Moreover, the light-receiving unit may include a sensing lens assembly; light reflected or scattered by an obstacle may travel through the sensing lens assembly to form an image on the image sensor. The sensing lens assembly may include a single lens or multiple lenses.
A base may support the light-emitting unit and the light-receiving unit, which are arranged on the base and separated from each other by a specific distance. To measure obstacles around the robot in all 360 degrees, the base can be rotatably arranged on the main body 110, or the base itself may remain stationary while a rotating element rotates the emitted and received light. The rotational angular velocity of the rotating element can be obtained by arranging an optocoupler and a code disc: the optocoupler senses the tooth gaps on the code disc, and the instantaneous angular velocity is obtained by dividing the angular distance between tooth gaps by the time taken to pass a gap. The denser the tooth gaps on the code disc, the higher the accuracy and precision of the measurement, but the more precise the structure must be and the greater the computational load; conversely, the sparser the tooth gaps, the lower the accuracy and precision of the measurement, but the simpler the structure can be and the smaller the computational load, which can reduce some cost.
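The gap-distance-over-gap-time calculation above reduces to one line; a small sketch (with an assumed 36-gap code disc, not a value from the patent) makes the density trade-off concrete:

```python
import math

def instantaneous_angular_velocity(gap_angle_rad, gap_time_s):
    """omega = (angular distance between adjacent tooth gaps) / (time to pass it)."""
    return gap_angle_rad / gap_time_s

# Assumed 36-gap disc: adjacent gaps are 2*pi/36 rad apart.
# If the optocoupler times 0.005 s between gaps:
omega = instantaneous_angular_velocity(2 * math.pi / 36, 0.005)
print(round(omega, 2))  # prints 34.91 (rad/s)
```

A denser disc (more gaps) shrinks `gap_angle_rad`, so each sample averages over a smaller arc and tracks speed changes more finely, at the cost of more samples per revolution to process, matching the trade-off described above.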
A data processing device connected to the light-receiving unit, such as a DSP, records the obstacle distance values at all angles relative to the robot's zero-degree direction and sends them to a data processing unit in the control system 130, such as an application processor (AP) containing a CPU. The CPU runs a particle-filter-based localization algorithm to obtain the robot's current position and draws a map from this position for navigation. Simultaneous localization and mapping (SLAM) is preferably used as the localization algorithm.
Although in principle a laser distance sensor based on triangulation ranging can measure distance values at unlimited range beyond a certain minimum distance, long-distance measurement in practice — e.g., beyond 6 meters — is very difficult to achieve, mainly because of the size limitation of the pixel units on the sensor of the light-receiving unit, and also because of the photoelectric conversion speed of the sensor, the data transmission speed between the sensor and the DSP, and the computing speed of the DSP. Measured values are also affected by temperature in ways the system cannot tolerate: thermal expansion of the structure between the light-emitting unit and the light-receiving unit changes the angle between the incident and emergent light, and the two units themselves exhibit temperature drift. After the laser distance sensor has been used for a long time, deformation accumulated from factors such as temperature change and vibration can also seriously affect the measurement results. The accuracy of the measurement results directly determines the accuracy of map building and is the basis for the robot's further strategy implementation, so it is particularly important.
As shown in Fig. 3, the forward portion 111 of the machine body 110 may carry a buffer 122. During cleaning, while the driving wheel modules 141 propel the robot across the ground, the buffer 122 detects, via a sensing system such as an infrared sensor, one or more events in the driving path of the robot 100, such as an obstacle or a wall; the robot can then control the driving wheel modules 141 to respond to the events detected by the buffer 122, for example by moving away from the obstacle.
The control system 130 is arranged on a circuit mainboard in the machine body 110 and includes a computation processor, such as a central processing unit or an application processor, communicating with non-transitory memory such as a hard disk, flash memory, or random-access memory. According to the obstacle information fed back by the laser distance sensor, the application processor uses a localization algorithm, such as SLAM, to draw an instant map of the environment in which the robot is located. Combining the distance and velocity information fed back by the buffer 122, the cliff sensor 123, and sensing devices such as the ultrasonic sensor, infrared sensor, magnetometer, accelerometer, gyroscope, and odometer, it comprehensively judges which working state the sweeper is currently in — crossing a threshold, climbing onto a carpet, at a cliff, stuck above or below, dust box full, picked up, and so on — and provides specific next-step action strategies for the different situations, so that the robot's work better meets the owner's requirements and delivers a better user experience. Further, the control system 130 can plan the most efficient and reasonable cleaning path and cleaning method based on the map information drawn by SLAM, greatly improving the robot's sweeping efficiency.
The drive system 140 can manipulate the robot 100 to travel across the ground based on drive commands containing distance and angle information, such as x, y, and θ components. The drive system 140 includes driving wheel modules 141, which can control the left and right wheels simultaneously; to control the machine's motion more precisely, the driving wheel modules 141 preferably include a left driving wheel module and a right driving wheel module, opposed along the lateral axis defined by the main body 110. In order for the robot to move more stably on the ground or to have stronger locomotion, the robot may include one or more driven wheels 142, including but not limited to universal wheels. A driving wheel module includes the traveling wheel, a drive motor, and a control circuit for the drive motor, and may also connect a driving-current measurement circuit and an odometer. The driving wheel modules 141 may be detachably connected to the main body 110 for easy disassembly and maintenance. A driving wheel may have a biased drop suspension system, movably fastened — for example rotatably attached — to the robot body 110, and receive a spring bias directed downward and away from the robot body 110. The spring bias allows the driving wheel to maintain contact and traction with the ground with a certain landing force, while the cleaning elements of the robot 100 also contact the ground 10 with a certain pressure.
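The patent only states that drive commands carry x, y, and θ components. As an illustrative aside (not the patent's method), a two-wheel drive layout like the one described is commonly modeled with differential-drive odometry, updating the pose from the distances each wheel travels as reported by the odometer:

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Differential-drive odometry update (illustrative sketch).

    d_left / d_right -- distance traveled by each wheel (m), e.g. from odometry
    wheel_base       -- left-right wheel separation along the lateral Y axis (m)
    Returns the new (x, y, theta) pose.
    """
    d_center = (d_left + d_right) / 2.0            # forward motion of the body center
    d_theta = (d_right - d_left) / wheel_base      # heading change about the Z axis
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

print(update_pose(0.0, 0.0, 0.0, 0.10, 0.10, 0.30))  # prints (0.1, 0.0, 0.0)
```

Equal wheel distances give pure translation along X; opposite distances give a turn in place, which is how the left/right turns defined earlier would be commanded.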
The cleaning system may be a dry cleaning system and/or a wet cleaning system. In a dry cleaning system, the main cleaning function is performed by the sweeping system 151 constituted by the roller brush, the dust box, the fan, the air outlet, and the connecting components among the four. The roller brush, which has a certain interference with the ground, sweeps the rubbish on the ground and rolls it to the front of the suction inlet between the roller brush and the dust box; the rubbish is then sucked into the dust box by the suctioning airflow that the fan generates through the dust box. The dust collection capability of the sweeper can be characterized by the dust pick-up efficiency (DPU), which is influenced by the structure and material of the roller brush, by the airflow utilization of the air duct constituted by the suction inlet, the dust box, the fan, the air outlet, and the connecting components among the four, and by the type and power of the fan — a demanding system design problem. Compared with an ordinary plug-in vacuum cleaner, improving dust collection capability matters more for an energy-limited cleaning robot, because it directly reduces the energy requirement: a machine that could clean 80 square meters of ground on one charge could evolve to clean 100 square meters or more on one charge. With fewer charging cycles, the service life of the battery also greatly increases, so the frequency at which the user must replace the battery decreases as well. Most intuitively and importantly, improved dust collection is the most obvious and important part of the user experience: the user immediately concludes whether the sweeping or wiping is clean. The dry cleaning system may also include a side brush 152 with a rotary shaft angled relative to the ground, for moving debris into the roller brush region of the cleaning system.
The energy system includes a rechargeable battery, such as a nickel-metal hydride battery or a lithium battery. The rechargeable battery can be connected with a charging control circuit, a battery pack charging temperature detection circuit, and a battery undervoltage monitoring circuit, which are in turn connected with a single-chip microcomputer control circuit. The host charges by connecting, through charging electrodes arranged on the side or underside of the fuselage, with the charging pile. If dust adheres to an exposed charging electrode, the cumulative effect of charge during charging can melt and deform the plastic body around the electrode, or even deform the electrode itself, so that charging can no longer proceed normally.
The human-machine interaction system 170 includes keys on the host panel, with which the user selects functions; it may also include a display screen and/or indicator lights and/or a loudspeaker, which show the user the current machine state or function options; and it may also include a mobile phone client program. For a path-navigation-type cleaning device, the mobile phone client can show the user a map of the environment where the device is located as well as the device's position, providing richer and more user-friendly function items.
Fig. 6 is a block diagram of the sweeping robot according to the present invention.
The sweeping robot according to the present example may include: a microphone array unit for recognizing the user's voice, a communication unit for communicating with a remote control device or other devices, a moving unit for driving the main body, a cleaning unit, and a memory unit for storing information. The input unit (the keys of the sweeping robot, etc.), object detection sensor, charging unit, microphone array unit, direction detection unit, position detection unit, communication unit, driving unit, and memory unit can be connected to the control unit to transmit predetermined information to, or receive predetermined information from, the control unit.
The microphone array unit can compare the voice input through the receiving unit with the information stored in the memory unit to determine whether the input voice corresponds to a specific command. If it is determined that the input voice corresponds to a specific command, the corresponding command is transmitted to the control unit. If the detected voice cannot be matched against the information stored in the memory unit, it can be regarded as noise and ignored.
For example, the detected voice corresponds to the words "come, come here, to here, over here", and a text control command corresponding to one of these words ("come here") exists in the information stored in the memory unit. In this case, the corresponding command can be transmitted to the control unit.
The direction detection unit can detect the direction of the voice by using the time difference or the sound level of the voice arriving at the multiple receiving units. The direction of the detected voice is transmitted to the control unit, and the control unit can determine the movement path using the voice direction detected by the direction detection unit.
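The time-difference approach mentioned above is commonly formulated as time-difference-of-arrival (TDOA) between a microphone pair: sin(θ) = c·Δt / d, where c is the speed of sound, Δt the inter-microphone delay, and d the microphone spacing. A minimal sketch, with illustrative values not taken from the patent:

```python
import math

def tdoa_angle_deg(delay_s, mic_spacing_m, speed_of_sound=343.0):
    """Angle of arrival (degrees, relative to the array broadside) for a
    two-microphone pair: sin(theta) = c * dt / d.  Values are illustrative."""
    ratio = speed_of_sound * delay_s / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp small numeric overshoot
    return math.degrees(math.asin(ratio))

# 10 cm spacing, 145.8 microsecond delay between the two microphones:
print(round(tdoa_angle_deg(1.458e-4, 0.10), 1))  # prints 30.0
```

A real microphone array would estimate Δt by cross-correlating the two channels and combine several pairs for a full 360-degree bearing; the formula above is only the geometric core of that computation.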
The position detection unit can detect the coordinates of the main body within predetermined map information. In one embodiment, the information detected by the camera and the map information stored in the memory unit can be compared with each other to detect the current position of the main body. Besides the camera, the position detection unit may also use a global positioning system (GPS).
In a broad sense, the position detection unit can detect whether the main body is arranged at a specific position. For example, the position detection unit may include a unit for detecting whether the main body is arranged on the charging pile.
For example, in one method of detecting whether the main body is arranged on the charging pile, whether the main body is at the charging position can be detected according to whether electric power is being input into the charging unit. As another example, whether the main body is at the charging position can be detected by a charging position detection unit arranged on the main body or on the charging pile.
The communication unit can transmit predetermined information to, or receive predetermined information from, the remote control device or other devices. The communication unit can also update the map information of the sweeping robot.
The driving unit can operate the moving unit and the cleaning unit. The driving unit can move the moving unit along the movement path determined by the control unit.
Predetermined information related with the operation of sweeping robot is stored in memory cell.For example, sweeping robot institute cloth The cartographic information in the region set, control command information corresponding with the voice that microphone array unit is identified, by angle detecting It direction angle information that unit detects, the location information detected by position detection unit and is detected by object detection sensors To obstacle information can store in a memory cell.
Control unit can receive the information detected by receiving unit, camera and object detection sensors.Control Unit can identify the direction and detection sweeping robot that the voice of user, detection voice occur based on the information transmitted Position.In addition, control unit can be with mobile unit operating and cleaning unit.
As shown in Fig. 7, applied to the robot in the application scenario of Fig. 1, an embodiment of the present application provides a robot voice control method, which includes the following steps:

Step S702: receive a voice instruction.

Definition of the voice instruction: it is used to indicate an operation, i.e., the instruction controls the robot to execute an operation. The operation may be custom-defined or set by system default, such as a sweeping operation, a mopping operation, or a weeding operation. In a specific embodiment, the voice instruction may be custom-defined by the user or set by system default; for example, it may be the user-defined "sweep the kitchen", "mop the floor twice", or "sweep the master bedroom". These voice instructions (manipulation-class voice instructions) form an instruction set that is pre-stored in the robot or in the cloud connected to the robot, against which subsequently received real-time commands are matched; if the match succeeds, the corresponding operation is executed. For convenience of description, the following takes the voice instruction "sweep the master bedroom" as an example.
Step S704: recognize the content of the voice instruction, and determine the working region of the robot according to the content.

After the voice instruction is received, it is judged whether the voice instruction is an operation-class voice instruction, such as "sweep the master bedroom", by looking it up in the pre-stored voice instruction set. If the lookup succeeds, the content of the voice instruction is semantically analyzed to determine the working region of the robot, such as the master bedroom, the kitchen, or the living room.

In some possible implementations, determining the working region of the robot according to the content is realized by the following steps:

Step S7042: look up the pre-stored position information of the robot according to the content of the voice instruction.

The pre-stored position information is obtained as follows:

Step S70422: mark the specific positions in the travel track of the robot.

In some possible implementations, the specific position information in the travel track of the robot can be obtained by laser scanning or by camera-based visual recognition, and the specific positions are marked as separation points between different regions.
In the laser scanning approach, a sweeping robot equipped with laser scanning can specially mark openings with the width of a common door and label them as separation points between regions. For example, if, while cleaning along a wall, the scan finds a door-width gap in the wall and the gap leads to another region, that location is determined to be a split point.
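A minimal sketch of the door-width test, assuming the wall-following scan yields evenly spaced distance readings; all thresholds and names below are illustrative assumptions, not the patent's parameters:

```python
DOOR_WIDTH_RANGE = (0.7, 1.1)   # metres; typical interior door widths (assumed)
SAMPLE_SPACING = 0.05           # metres between successive wall samples (assumed)
GAP_DEPTH = 0.5                 # a reading this much deeper than the wall counts as an opening

def find_door_gaps(wall_distances):
    """Return (start_index, end_index) pairs where the scanned wall opens up
    over roughly one door width -- candidate region split points."""
    baseline = min(wall_distances)
    gaps, start = [], None
    for i, d in enumerate(wall_distances):
        if d - baseline > GAP_DEPTH:
            if start is None:
                start = i          # opening begins
        elif start is not None:
            width = (i - start) * SAMPLE_SPACING
            if DOOR_WIDTH_RANGE[0] <= width <= DOOR_WIDTH_RANGE[1]:
                gaps.append((start, i))
            start = None           # opening ends (too wide/narrow gaps discarded)
    return gaps
```

A solid wall with one 0.8 m opening would yield exactly one candidate split point.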
In the camera-based visual recognition approach, machine learning performed before the robot leaves the factory gives it the ability to judge the region it is in according to visual features. For example, by recognizing a bed it judges the region to be a bedroom. A sweeping robot with camera-based visual recognition judges the doorframe region to be a region split point by recognizing the doorframe, and marks it. After the robot finishes cleaning, it divides the regions automatically and records the information in local memory or in the cloud, so that each room can be recognized the next time it cleans.

Step S70424: draw the three-dimensional map of the travel track according to the specific positions.

After the corresponding three-dimensional track information is obtained by the above laser scanning or camera-based visual recognition, an internal algorithm forms a three-dimensional map of the corresponding region, such as a room-type map of the whole dwelling, for later use when the sweeping robot executes commands.
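The patent does not specify the internal algorithm. As one hedged illustration, once the door openings are marked as separators on an occupancy grid, each room falls out as a connected component of the free cells:

```python
def label_regions(grid):
    """Assign a region id to every free cell (0) of an occupancy grid,
    treating walls and doorway markers (1) as separators -- a minimal
    stand-in for the internal algorithm that builds the room-type map."""
    rows, cols = len(grid), len(grid[0])
    labels = [[None] * cols for _ in range(rows)]
    region = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0 and labels[r][c] is None:
                stack = [(r, c)]            # flood-fill one connected room
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and grid[y][x] == 0 and labels[y][x] is None:
                        labels[y][x] = region
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
                region += 1
    return labels, region
```

Two free areas separated by a marked doorway column come back as two distinct regions.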
In some possible implementations, after the three-dimensional map of the travel track is drawn according to the specific positions, the method comprises:

sending the three-dimensional map of the travel track to a client; and receiving the user's manual labeling information for each region of the three-dimensional map of the travel track.

After a complete first scan of the dwelling, the sweeping robot sends the three-dimensional map of the travel track to a client such as a mobile phone. On the phone's map, the user can be prompted to manually label each of the divided regions. After the user finishes labeling, the robot records the information in local memory or in the cloud, so that each room can be recognized the next time it cleans. From then on, a voice instruction is enough to send the sweeper to clean any given region.
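How the user's hand labels might be attached to the auto-segmented regions can be sketched as follows; the class, field, and method names are all assumptions for illustration:

```python
class RegionStore:
    """Holds the user's hand labels for the auto-segmented map, so that a
    later voice instruction like 'sweep the master bedroom' can be resolved
    to a region id -- the 'pre-stored position information' of the method."""

    def __init__(self):
        self.labels = {}  # region name -> region id

    def apply_user_labels(self, user_labels):
        """user_labels: {region_id: name} as marked by the user on the phone map."""
        for region_id, name in user_labels.items():
            self.labels[name] = region_id

    def resolve(self, region_name):
        """Look up the region id for a spoken region name, or None if unknown."""
        return self.labels.get(region_name)
```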
Step S70426: store the three-dimensional map of the travel track to obtain the pre-stored position information.

In some possible implementations, the three-dimensional map of the travel track can be stored in the robot, or in the cloud connected to the robot, to obtain the pre-stored position information.

Step S7044: determine the working region of the robot according to the position information.

After the sweeping robot finds the address information corresponding to the voice instruction, it can quickly locate the working region according to the map position and enter the working region to perform the cleaning action.

Step S706: execute the voice instruction in the working region.

The embodiment of the present application can use voice designation to make the robot work precisely according to the user's instructions. In particular, when a sweeping robot cleans for the first time, it divides the indoor layout automatically and records it in memory or in the cloud. The next time the user wants a certain region cleaned, the user controls the sweeping robot by voice (for example, "please clean the bedroom", "please clean the living room", or "clean the living room twice"), so that the robot can work purposefully according to the user's wishes. This increases the precise voice controllability of the robot and improves its working efficiency.
In another embodiment, as shown in Fig. 8, the robot voice control method proposed by the present invention includes the following steps:

Step S802: receive a first voice instruction.

Under normal conditions, the speech recognition system of the robot has a dormant state and an activated state. For example, when the robot is working or idle, the speech recognition system is dormant; in the dormant state it occupies hardly any of the robot's resources and does not try to recognize any voice instruction other than the first voice instruction.

If the speech recognition system receives the first voice instruction while dormant, it switches from the dormant state to the activated state. In the activated state, the speech recognition system can recognize the voice instructions configured in it, such as the first voice instruction, the second voice instruction, and so on.

Specifically, the first voice instruction is used to wake up the speech recognition system, i.e., it instructs the speech recognition system to enter the activated state. In an implementation, if the speech recognition system is dormant when the robot receives the first voice instruction, the speech recognition system is switched from the dormant state to the activated state; if it is already activated, it is kept in the activated state, or nothing is done. In a specific embodiment, the first voice instruction may be custom-defined by the user or set by system default; for example, it may be the user-defined "enable voice", "power on", "come", "come here", "over here", or "to here". The first voice instruction (wake-class voice instruction) is pre-stored in the robot or in the cloud connected to the robot. For convenience of description, the following takes the first voice instruction "come here" as an example.
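The dormant/activated behaviour described above amounts to a two-state machine. The sketch below is illustrative; the wake-word list and return values are invented, not taken from the patent:

```python
WAKE_WORDS = {"come", "come here", "over here", "power on"}  # illustrative set

class SpeechRecognizer:
    """Dormant/activated handling: while dormant, only wake words are acted
    on; any other utterance is ignored. Once activated, the full instruction
    set can be recognized."""

    def __init__(self):
        self.state = "dormant"

    def hear(self, utterance):
        if self.state == "dormant":
            if utterance in WAKE_WORDS:
                self.state = "active"
                return "activated"
            return "ignored"      # dormant recognizer spends no effort on other speech
        return "recognized"       # active: match against the stored instruction set
```

An operation-class instruction heard before the wake word is ignored; the same instruction heard after it is recognized.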
Step S804: recognize the sound source direction of the first voice instruction and make the robot turn to that direction.

After the robot receives the first activation instruction, e.g. "come here", it detects the direction of the voice through the direction detecting unit, for example by using the time difference or level of the voice input to the multiple receiving units. The direction detecting unit transmits the detected voice direction to the control unit, and the control unit controls the drive system according to that direction, making the robot rotate in place so that its forward direction turns toward the user's sound source. This human-computer interaction resembles a working person who, on being called, stops what they are doing and turns toward the speaker, making the interaction more humanized.

Step S806: judge whether a second voice instruction indicating an operation is received.

The second voice instruction is used to indicate an operation, i.e., it instructs the robot to execute an operation. The operation may be custom-defined or set by system default, such as a sweeping operation, a mopping operation, or a weeding operation. In a specific embodiment, the second voice instruction may be custom-defined by the user or set by system default; for example, it may be the user-defined "sweep the kitchen", "mop the floor twice", or "sweep the master bedroom". These voice instructions (manipulation-class voice instructions) form an instruction set that is pre-stored in the robot or in the cloud connected to the robot, against which subsequently received real-time commands are matched; if the match succeeds, the corresponding operation is executed. For convenience of description, the following takes the voice instruction "sweep the master bedroom" as an example.
Judging whether the second voice instruction indicating an operation is received can be performed within a preset period, such as 1 minute or 2 minutes, which can be preset through a touch device. According to what is monitored within the preset period, one of the following two cases is executed.
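Waiting for the second voice instruction within a preset period can be sketched as a polling loop with a deadline. The function name, polling interface, and injectable clock are assumptions made for illustration and testability:

```python
import time

def wait_for_second_command(poll_fn, timeout_s=60.0, clock=time.monotonic):
    """Poll for a second voice instruction until the preset period elapses.
    Returns the instruction if one arrives in time, or None so the robot
    can turn back to its former direction and resume its original work."""
    deadline = clock() + timeout_s
    while clock() < deadline:
        command = poll_fn()      # e.g. non-blocking read from the recognizer
        if command is not None:
            return command
    return None
```

With a real clock this blocks up to the preset period; the test below injects a fake clock so it runs instantly.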
In the first case, step S808 is executed: if it is determined that the second voice instruction is received, the robot executes the operation indicated by the second voice instruction.

For example, if the control command "sweep the master bedroom" is monitored within 1 minute, the robot sweeps in the predetermined direction or position according to the user's order until another second voice control command is received.

In the second case, step S810 is executed: if it is determined that the second voice instruction is not received, the robot turns back to its former direction and continues its original operation.

For example, if the control command "sweep the master bedroom" is not monitored within 1 minute, the robot sweeps according to its original direction or position until a second voice control command is received.

In the embodiment of the present application, when the first voice instruction is received, the speech recognition system of the robot is first activated and the robot is controlled to turn toward the sound source of the voice, entering a standby state. A control command for executing an operation is then received within a certain period, and the expected action is carried out according to that command. The robot can thus operate precisely as instructed, which improves the effect of speech recognition control under noisy conditions, raises the recognition rate of the user's voice instructions, lets the robot work relatively accurately according to those instructions, and also makes human-computer interaction more engaging.
Step S812: recognize the content of the second voice instruction, and determine the working region of the robot according to the content.

After the voice instruction is received, it is judged whether the voice instruction is an operation-class voice instruction, such as "sweep the master bedroom", by looking it up in the pre-stored voice instruction set. If the lookup succeeds, the content of the voice instruction is semantically analyzed to determine the working region of the robot, such as the master bedroom, the kitchen, or the living room.

In some possible implementations, determining the working region of the robot according to the content is realized by the following steps:

Step S8122: look up the pre-stored position information of the robot according to the content of the voice instruction.

The pre-stored position information is obtained as follows:

Step S81222: mark the specific positions in the travel track of the robot.

In some possible implementations, the specific position information in the travel track of the robot can be obtained by laser scanning or by camera-based visual recognition, and the specific positions are marked as separation points between different regions.

In the laser scanning approach, a sweeping robot equipped with laser scanning can specially mark openings with the width of a common door and label them as separation points between regions. For example, if, while cleaning along a wall, the scan finds a door-width gap in the wall and the gap leads to another region, that location is determined to be a split point.

In the camera-based visual recognition approach, machine learning performed before the robot leaves the factory gives it the ability to judge the region it is in according to visual features. For example, by recognizing a bed it judges the region to be a bedroom. A sweeping robot with camera-based visual recognition judges the doorframe region to be a region split point by recognizing the doorframe, and marks it. After the robot finishes cleaning, it divides the regions automatically and records the information in local memory or in the cloud, so that each room can be recognized the next time it cleans.
Step S81224: draw the three-dimensional map of the travel track according to the specific positions.

After the corresponding three-dimensional track information is obtained by the above laser scanning or camera-based visual recognition, an internal algorithm forms a three-dimensional map of the corresponding region, such as a room-type map of the whole dwelling, for later use when the sweeping robot executes commands.

In some possible implementations, after the three-dimensional map of the travel track is drawn according to the specific positions, the method comprises:

sending the three-dimensional map of the travel track to a client; and receiving the user's manual labeling information for each region of the three-dimensional map of the travel track.

After a complete first scan of the dwelling, the sweeping robot sends the three-dimensional map of the travel track to a client such as a mobile phone. On the phone's map, the user can be prompted to manually label each of the divided regions. After the user finishes labeling, the robot records the information in local memory or in the cloud, so that each room can be recognized the next time it cleans. From then on, a voice instruction is enough to send the sweeper to clean any given region.

Step S81226: store the three-dimensional map of the travel track to obtain the pre-stored position information.

In some possible implementations, the three-dimensional map of the travel track can be stored in the robot, or in the cloud connected to the robot, to obtain the pre-stored position information.

Step S8124: determine the working region of the robot according to the position information.

After the sweeping robot finds the address information corresponding to the voice instruction, it can quickly locate the working region according to the map position and enter the working region to perform the cleaning action.

Step S814: execute the voice instruction in the working region.

The embodiment of the present application can receive the user's wake-class instruction and control-class instruction separately, and can flexibly control the robot's working state according to the actual instruction. Through voice designation, the robot works precisely according to the user's instructions. In particular, when a sweeping robot cleans for the first time, it divides the indoor layout automatically and records it in memory or in the cloud. The next time the user wants a certain region cleaned, the user controls the sweeping robot by voice (for example, "please clean the bedroom", "please clean the living room", or "clean the living room twice"), so that the robot can work purposefully according to the user's wishes. This increases the precise voice controllability of the robot and improves its working efficiency.
In a further embodiment, as shown in Fig. 9, applied in conjunction with the robot in the application scenario of Fig. 1, an embodiment of the present application provides a robot voice control device, which includes a receiving unit 902, a recognition unit 904, and an execution unit 906; each unit is described as follows. The device shown in Fig. 9 can execute the method of the embodiment shown in Fig. 7; for the parts not described in detail in this embodiment, reference can be made to the related description of the embodiment shown in Fig. 7. For the execution process and the technical effect of this technical solution, see the description in the embodiment shown in Fig. 7, which is not repeated here.

Specifically, the robot voice control device comprises:

a receiving unit 902, configured to receive a voice instruction;

As shown in Fig. 10, in some possible implementations, the recognition unit 904 further includes: a searching unit 9022, configured to look up the pre-stored position information of the robot according to the content of the voice instruction; and a determination unit 9024, configured to determine the working region of the robot according to the position information.

As shown in Fig. 11, in some possible implementations, the searching unit 9022 includes: a marking unit 90222, configured to mark the specific positions in the travel track of the robot; a drawing unit 90224, configured to draw the three-dimensional map of the travel track according to the specific positions; and a storage unit 90226, configured to store the three-dimensional map of the travel track to obtain the pre-stored position information.

In some possible implementations, the marking unit 90222 is further configured to obtain the specific position information in the travel track of the robot by laser scanning or camera-based visual recognition, and to mark the specific positions as separation points between different regions.

In some possible implementations, the searching unit 9022 further includes: a transmission unit 90228, configured to send the three-dimensional map of the travel track to a client; and a receiving unit 90229, configured to receive the user's manual labeling information for each region of the three-dimensional map of the travel track.

In some possible implementations, the storage unit 90226 is further configured to store the three-dimensional map of the travel track in the robot, or in the cloud connected to the robot, to obtain the pre-stored position information.

a recognition unit 904, configured to recognize the content of the voice instruction and determine the working region of the robot according to the content; and

an execution unit 906, configured to execute the voice instruction in the working region.
An embodiment of the present application provides a robot comprising the robot voice control device of any of Figs. 9-11.

An embodiment of the present application provides a robot comprising a processor and a memory, the memory storing computer program instructions executable by the processor; when the processor executes the computer program instructions, the method steps of any of the foregoing embodiments are realized.

An embodiment of the present application provides a non-transitory computer-readable storage medium storing computer program instructions which, when called and executed by a processor, realize the method steps of any of the foregoing embodiments.

As shown in Fig. 12, the robot 1200 may include a processing device (such as a central processing unit or a graphics processor) 1201, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1202 or a program loaded from a storage device 1208 into a random access memory (RAM) 1203. The RAM 1203 also stores various programs and data required for the operation of the electronic robot 1200. The processing device 1201, the ROM 1202, and the RAM 1203 are connected with each other through a bus 1204. An input/output (I/O) interface 1205 is also connected to the bus 1204.

In general, the following devices can be connected to the I/O interface 1205: input devices 1206 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 1207 including, for example, a liquid crystal display (LCD), a loudspeaker, and a vibrator; storage devices 1208 including, for example, a magnetic tape and a hard disk; and a communication device 1209. The communication device 1209 can allow the electronic robot 1200 to communicate wirelessly or by wire with other robots to exchange data. Although Fig. 12 shows the electronic robot 1200 with various devices, it should be understood that it is not required to implement or possess all the devices shown; more or fewer devices may alternatively be implemented or possessed.

In particular, according to the embodiments of the present disclosure, the process described above with reference to the flow chart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication device 1209, or installed from the storage device 1208, or installed from the ROM 1202. When the computer program is executed by the processing device 1201, the above functions defined in the method of the embodiment of the present disclosure are executed.
It should be noted that the above computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium can include, but are not limited to: an electrical connection with one or more conducting wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the present disclosure, a computer-readable storage medium can be any tangible medium containing or storing a program that can be used by, or in connection with, an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal can take various forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium can be transmitted with any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), or any appropriate combination of the above.
The above computer-readable medium may be included in the above robot; it may also exist separately without being assembled into the robot.

The above computer-readable medium carries one or more programs which, when executed by the robot, cause the robot to: obtain at least two internet protocol addresses; send to a node evaluation robot a node evaluation request containing the at least two internet protocol addresses, wherein the node evaluation robot chooses an internet protocol address from the at least two internet protocol addresses and returns it; and receive the internet protocol address returned by the node evaluation robot; wherein the obtained internet protocol address indicates an edge node in a content distribution network.

Alternatively, the above computer-readable medium carries one or more programs which, when executed by the robot, cause the robot to: receive a node evaluation request containing at least two internet protocol addresses; choose an internet protocol address from the at least two internet protocol addresses; and return the chosen internet protocol address; wherein the received internet protocol address indicates an edge node in a content distribution network.

The computer program code for executing the operations of the present disclosure can be written in one or more programming languages or combinations thereof. The above programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program code can execute entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In situations involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the internet using an internet service provider).
The flow charts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to the various embodiments of the present disclosure. In this regard, each box in a flow chart or block diagram can represent a module, program segment, or part of code that contains one or more executable instructions for realizing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes can occur in an order different from that marked in the drawings. For example, two boxes represented in succession can in fact be executed substantially in parallel, and they can sometimes be executed in the opposite order, depending on the functions involved. It should also be noted that each box in a block diagram and/or flow chart, and each combination of boxes in a block diagram and/or flow chart, can be realized by a dedicated hardware-based system executing the specified functions or operations, or by a combination of dedicated hardware and computer instructions.

The units involved in the described embodiments of the present disclosure can be realized by software or by hardware. The name of a unit does not, under certain conditions, constitute a limitation on the unit itself; for example, the first obtaining unit can also be described as "a unit for obtaining at least two internet protocol addresses".

The device embodiments described above are merely exemplary. The units described as separate members may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement them without creative labour.

Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they can still modify the technical solutions described in the foregoing embodiments, or replace some of the technical features by equivalents; these modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (15)

1. A robot voice control method, characterized in that the method comprises:

receiving a voice instruction;

recognizing the content of the voice instruction, and determining the working region of the robot according to the content; and

executing the voice instruction in the working region.
2. The method according to claim 1, characterized in that determining the working region of the robot according to the content comprises:

looking up the pre-stored position information of the robot according to the content of the voice instruction; and

determining the working region of the robot according to the position information.
3. The method according to claim 2, characterized in that the pre-stored position information is obtained by:

marking the specific positions in the travel track of the robot;

drawing the three-dimensional map of the travel track according to the specific positions; and

storing the three-dimensional map of the travel track to obtain the pre-stored position information.
4. The method according to claim 3, characterized in that marking the specific positions in the travel track of the robot comprises:

obtaining the specific position information in the travel track of the robot by laser scanning or camera-based visual recognition; and

marking the specific positions as separation points between different regions.
5. The method according to claim 3 or 4, characterized in that, after drawing the three-dimensional map of the driving trace according to the specific positions, the method further comprises:
sending the three-dimensional map of the driving trace to a client; and
receiving manual labeling information from a user for each region of the three-dimensional map of the driving trace.
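The round trip of claim 5 can be sketched as a merge of user labels into the map. The map shape, region identifiers, and label payload are hypothetical; the claim only requires that manual labels from the client be received per region.

```python
def apply_manual_labels(trace_map, labels):
    """Attach the user's manual labels to the matching regions of the map."""
    for region_id, label in labels.items():
        trace_map["regions"][region_id]["label"] = label
    return trace_map

# Map as sent to the client, then the labels the user typed back.
trace_map = {"regions": {"r1": {}, "r2": {}}}
trace_map = apply_manual_labels(trace_map, {"r1": "kitchen", "r2": "bedroom"})
```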
6. The method according to claim 3, characterized in that storing the three-dimensional map of the driving trace to obtain the pre-stored location information comprises:
storing the three-dimensional map of the driving trace in the robot, or in a cloud connected to the robot, to obtain the pre-stored location information.
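Claim 6 leaves the storage backend open (on-robot or cloud). A sketch under assumptions: both backends expose the same save/load interface, so the rest of the method is indifferent to where the map lives. The classes are illustrative stand-ins, not real storage APIs.

```python
class LocalStore:
    """On-robot storage (in-memory stand-in)."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data.get(key)

class CloudStore(LocalStore):
    """Stand-in for a cloud endpoint connected to the robot."""

def persist_trace_map(trace_map, backend):
    """Store the map via either backend and read it back."""
    backend.save("driving_trace_map", trace_map)
    return backend.load("driving_trace_map")
```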
7. A robot voice control device, characterized by comprising:
a receiving unit configured to receive a voice instruction;
a recognition unit configured to recognize the content of the voice instruction and to determine a working region of the robot according to the content; and
an execution unit configured to execute the voice instruction in the working region.
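The unit structure of claim 7 maps naturally onto composed objects. This is a toy composition: the unit behaviors (echoing audio, keyword matching, formatted output) are assumptions standing in for the unspecified real implementations.

```python
from typing import Optional

class ReceivingUnit:
    def receive(self, audio: str) -> str:
        return audio  # the voice instruction

class RecognitionUnit:
    def recognize(self, instruction: str) -> Optional[str]:
        # Toy content recognition; the real recognition is unspecified.
        return "kitchen" if "kitchen" in instruction.lower() else None

class ExecutionUnit:
    def execute(self, instruction: str, region: str) -> str:
        return f"executing '{instruction}' in {region}"

class RobotVoiceControlDevice:
    """Composes the three claimed units."""
    def __init__(self):
        self.receiving = ReceivingUnit()
        self.recognition = RecognitionUnit()
        self.execution = ExecutionUnit()

    def handle(self, audio: str) -> Optional[str]:
        instruction = self.receiving.receive(audio)
        region = self.recognition.recognize(instruction)
        return self.execution.execute(instruction, region) if region else None
```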
8. The device according to claim 7, characterized in that the recognition unit further comprises:
a searching unit configured to search for pre-stored location information of the robot according to the content of the voice instruction; and
a determination unit configured to determine the working region of the robot according to the location information.
9. The device according to claim 8, characterized in that the searching unit comprises:
a marking unit configured to mark specific positions in a driving trace of the robot;
a drawing unit configured to draw a three-dimensional map of the driving trace according to the specific positions; and
a storage unit configured to store the three-dimensional map of the driving trace to obtain the pre-stored location information.
10. The device according to claim 9, characterized in that the marking unit is further configured to:
obtain specific position information in the driving trace of the robot by laser scanning or camera-based visual recognition; and
mark the specific position information as separations between different regions.
11. The device according to claim 9 or 10, characterized in that the searching unit further comprises:
a transmission unit configured to send the three-dimensional map of the driving trace to a client; and
a receiving unit configured to receive manual labeling information from a user for each region of the three-dimensional map of the driving trace.
12. The device according to claim 9, characterized in that the storage unit is further configured to store the three-dimensional map of the driving trace in the robot, or in a cloud connected to the robot, to obtain the pre-stored location information.
13. A robot voice control device, characterized by comprising a processor and a memory, wherein the memory stores computer program instructions executable by the processor, and the method steps of any one of claims 1 to 6 are implemented when the processor executes the computer program instructions.
14. A robot, characterized by comprising the device according to any one of claims 7 to 12.
15. A non-transitory computer-readable storage medium, characterized in that it stores computer program instructions which, when called and executed by a processor, implement the method steps of any one of claims 1 to 6.
CN201910265953.3A 2019-04-03 2019-04-03 Robot voice control method and device, robot and medium Pending CN109920424A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910265953.3A CN109920424A (en) 2019-04-03 2019-04-03 Robot voice control method and device, robot and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910265953.3A CN109920424A (en) 2019-04-03 2019-04-03 Robot voice control method and device, robot and medium

Publications (1)

Publication Number Publication Date
CN109920424A true CN109920424A (en) 2019-06-21

Family

ID=66968363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910265953.3A Pending CN109920424A (en) 2019-04-03 2019-04-03 Robot voice control method and device, robot and medium

Country Status (1)

Country Link
CN (1) CN109920424A (en)


Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102359784A (en) * 2011-08-01 2012-02-22 东北大学 Autonomous navigation and obstacle avoidance system and method of indoor mobile robot
CN104615138A (en) * 2015-01-14 2015-05-13 上海物景智能科技有限公司 Dynamic indoor region coverage division method and device for mobile robot
CN104808671A (en) * 2015-05-19 2015-07-29 东南大学 Robot path planning method under home environment
CN105115498A (en) * 2015-09-30 2015-12-02 长沙开山斧智能科技有限公司 Robot location navigation system and navigation method
CN105216905A (en) * 2015-10-27 2016-01-06 北京林业大学 Instant location and map building survey search and rescue robot
CN105259898A (en) * 2015-10-13 2016-01-20 江苏拓新天机器人科技有限公司 Floor sweeping robot controlled by smart phone
CN105796002A (en) * 2016-03-31 2016-07-27 北京小米移动软件有限公司 Indoor cleaning method for cleaning robot, cleaning robot and mobile terminal
CN105869136A (en) * 2015-01-22 2016-08-17 北京雷动云合智能技术有限公司 Collaborative visual SLAM method based on multiple cameras
CN106780735A (en) * 2016-12-29 2017-05-31 深圳先进技术研究院 A kind of semantic map constructing method, device and a kind of robot
CN106940186A (en) * 2017-02-16 2017-07-11 华中科技大学 A kind of robot autonomous localization and air navigation aid and system
CN107181818A (en) * 2017-06-27 2017-09-19 华南师范大学 Robot remote control and management system and method based on cloud platform
CN107199572A (en) * 2017-06-16 2017-09-26 山东大学 A kind of robot system and method based on intelligent auditory localization and Voice command
CN107329476A (en) * 2017-08-02 2017-11-07 珊口(上海)智能科技有限公司 A kind of room topology map construction method, system, device and sweeping robot
CN107491070A (en) * 2017-08-31 2017-12-19 成都通甲优博科技有限责任公司 A kind of method for planning path for mobile robot and device
CN107684401A (en) * 2017-09-25 2018-02-13 北京石头世纪科技有限公司 The control method and control device of intelligent cleaning equipment
CN108231069A (en) * 2017-08-30 2018-06-29 深圳乐动机器人有限公司 Sound control method, Cloud Server, clean robot and its storage medium of clean robot
CN207804199U (en) * 2016-06-15 2018-09-04 美国iRobot公司 autonomous mobile robot
CN108594823A (en) * 2018-05-21 2018-09-28 珠海格力电器股份有限公司 The control method and its control system of sweeping robot
CN108733059A (en) * 2018-06-05 2018-11-02 湖南荣乐科技有限公司 A kind of guide method and robot
CN108733419A (en) * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 Lasting awakening method, device, smart machine and the storage medium of smart machine
CN109003303A (en) * 2018-06-15 2018-12-14 四川长虹电器股份有限公司 Apparatus control method and device based on voice and space object identification and positioning
CN109008815A (en) * 2018-07-31 2018-12-18 上海爱优威软件开发有限公司 A kind of control method and terminal of sweeping robot
CN109036392A (en) * 2018-05-31 2018-12-18 芜湖星途机器人科技有限公司 Robot interactive system
CN109443368A (en) * 2019-01-14 2019-03-08 轻客小觅智能科技(北京)有限公司 Navigation method and apparatus for unmanned robot, robot, and storage medium


Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112890680B (en) * 2019-11-19 2023-12-12 科沃斯机器人股份有限公司 Follow-up cleaning operation method, control device, robot and storage medium
CN112890680A (en) * 2019-11-19 2021-06-04 科沃斯机器人股份有限公司 Follow-up cleaning operation method, control method, device, robot and storage medium
CN111128158A (en) * 2019-12-17 2020-05-08 深圳拓邦股份有限公司 Floor sweeping robot parameter voice setting method and floor sweeping robot
CN110946519A (en) * 2019-12-20 2020-04-03 小狗电器互联网科技(北京)股份有限公司 Control method and device of sweeper
CN113116224B (en) * 2020-01-15 2022-07-05 科沃斯机器人股份有限公司 Robot and control method thereof
CN113116224A (en) * 2020-01-15 2021-07-16 科沃斯机器人股份有限公司 Robot and control method thereof
CN111739533A (en) * 2020-07-28 2020-10-02 睿住科技有限公司 Voice control system, method and device, storage medium and voice equipment
CN111857156A (en) * 2020-08-02 2020-10-30 珠海市一微半导体有限公司 Robot region dividing method based on laser, chip and robot
CN111897334B (en) * 2020-08-02 2022-06-14 珠海一微半导体股份有限公司 Robot region division method based on boundary, chip and robot
CN111897334A (en) * 2020-08-02 2020-11-06 珠海市一微半导体有限公司 Robot region division method based on boundary, chip and robot
CN111857156B (en) * 2020-08-02 2024-04-02 珠海一微半导体股份有限公司 Laser-based robot region division method, chip and robot
CN113243821A (en) * 2021-04-26 2021-08-13 深圳市酷客智能科技有限公司 Robot-based indoor environment interactive purification method and device and intelligent cleaning robot
CN113485335A (en) * 2021-07-02 2021-10-08 追觅创新科技(苏州)有限公司 Voice instruction execution method and device, storage medium and electronic device
WO2023273898A1 (en) * 2021-07-02 2023-01-05 追觅创新科技(苏州)有限公司 Method and apparatus for executing voice instruction, storage medium, and electronic apparatus
CN113854904A (en) * 2021-09-29 2021-12-31 北京石头世纪科技股份有限公司 Control method and device of cleaning equipment, cleaning equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109920424A (en) Robot voice control method and device, robot and medium
US20220167820A1 (en) Method and Apparatus for Constructing Map of Working Region for Robot, Robot, and Medium
CN110051289A (en) Robot voice control method and device, robot and medium
CN110495821B (en) Cleaning robot and control method thereof
US20230225576A1 (en) Obstacle avoidance method and apparatus for self-walking robot, robot, and storage medium
US20230305573A1 (en) Method for detecting obstacle, self-moving robot, and non-transitory computer readable storage medium
CN109431381A (en) Robot localization method and device, electronic equipment, and storage medium
TWI789625B (en) Cleaning robot and control method thereof
CN205671994U (en) Automatic cleaning equipment
CN110136704A (en) Robot voice control method and device, robot and medium
TWI821990B (en) Cleaning robot and control method thereof
CN109303521A (en) Dust detection level and laser backscatter dust detection
US20220125270A1 (en) Method for controlling automatic cleaning device, automatic cleaning device, and non-transitory storage medium
CN109932726A (en) Robot ranging calibration method and device, robot and medium
WO2022041737A1 (en) Distance measuring method and apparatus, robot, and storage medium
CN109920425A (en) Robot voice control method and device, robot and medium
CN110281236A (en) Mobile robot and its method for safety monitoring
CN210931183U (en) Cleaning robot
CN113625700A (en) Self-walking robot control method, device, self-walking robot and storage medium
CN217792839U (en) Automatic cleaning equipment
WO2022227876A1 (en) Distance measurement method and apparatus, and robot and storage medium
CN116942017A (en) Automatic cleaning device, control method, and storage medium
CN116977858A (en) Ground identification method, device, robot and storage medium
CN117008148A (en) Method, apparatus and storage medium for detecting slip state

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220419

Address after: 102200 No. 8008, floor 8, building 16, yard 37, Chaoqian Road, Changping Park, Zhongguancun Science and Technology Park, Changping District, Beijing

Applicant after: Beijing Stone Innovation Technology Co.,Ltd.

Address before: No. 6016, 6017 and 6018, Block C, No. 8 Heiquan Road, Haidian District, Beijing 100085

Applicant before: Beijing Roborock Technology Co.,Ltd.
