CN110434853A - Robot control method, device and storage medium - Google Patents
Robot control method, device and storage medium
- Publication number
- CN110434853A (application numbers CN201910719457.0A / CN201910719457A)
- Authority
- CN
- China
- Prior art keywords
- target object
- robot
- control instruction
- hand
- gesture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- User Interface Of Digital Computer (AREA)
- Manipulator (AREA)
Abstract
The application provides a robot control method, device and storage medium. The method includes: receiving voice information and extracting the character information in the voice information; judging whether the character information contains a preset character; if so, determining the position of a target object according to the voice information; performing gesture recognition on the target object to obtain a gesture recognition result; determining a corresponding control instruction by querying an instruction database according to the gesture recognition result, and controlling the robot to perform the corresponding operation according to the control instruction. The instruction database contains preset gesture recognition results and their corresponding control instructions.
Description
Technical field
This application relates to the field of automatic control technology, and in particular to a robot control method, device and storage medium.
Background technique
At present, most robots are operated through keys on the robot or through a remote controller. These operation modes are inconvenient for users and cannot improve the user's experience of the product.
Summary of the invention
The embodiments of the present application aim to provide a robot control method, device and storage medium, so as to solve the problems of inconvenience and poor user experience that exist when a robot is operated through keys or a remote controller.
To achieve the above goals, the present application provides the following technical schemes:
First aspect: the application provides a robot control method: receiving voice information and extracting the character information in the voice information; judging whether the character information contains a preset character; if so, determining the position of a target object according to the voice information; performing gesture recognition on the target object to obtain a gesture recognition result; determining a corresponding control instruction by querying an instruction database according to the gesture recognition result, and controlling the robot to move according to the control instruction. The instruction database contains preset gesture recognition results and their corresponding control instructions.
In the scheme designed above, the position of the target object is determined from the voice information, key characters in the voice information trigger gesture recognition of the target object, and a control instruction is looked up according to the gesture recognition result so that the robot is controlled to perform the corresponding operation. This solves the inconvenience and poor user experience of operating a robot through keys or a remote controller in the prior art: the robot is controlled by a combination of voice and gesture recognition, which makes controlling the robot more convenient and improves the user's experience of the product.
In an optional embodiment of the first aspect, performing gesture recognition on the target object to obtain a gesture recognition result includes: continuously acquiring multiple scene images containing the target object; identifying the gesture image of the target object in each scene image; and judging whether the hand swing amplitude of the target object exceeds a threshold, and if so, determining the waving direction of the target object's hand according to the multiple scene images.
In the embodiment designed above, the gesture images of the target object are identified from the scene images, the hand swing amplitude is determined from the gesture images, whether the gesture is valid is decided according to the swing amplitude, and the waving direction is determined only after the gesture has been judged valid; the corresponding control instruction is then looked up according to the waving direction to control the robot to move. This solves the problem of misjudgment caused by invalid gestures in the scene images and improves the accuracy of gesture recognition for the target object.
In an optional embodiment of the first aspect, determining the waving direction of the target object's hand according to the multiple scene images includes: analysing the waving trend of the target object's hand according to the gesture images of the target object in the multiple gesture images; and determining the waving direction of the hand according to the waving trend.
In the embodiment designed above, the waving gesture is identified from multiple gesture images ordered in time, and the waving trend is then used to determine the waving direction, which makes identifying the waving direction easier.
In an optional embodiment of the first aspect, identifying the gesture image of the target object in each scene image includes: extracting the gesture images in each scene image; judging whether the number of gesture images belonging to the same object across the multiple scene images exceeds a preset quantity threshold; and if so, determining the gesture images belonging to that object as the gesture images of the target object.
In the embodiment designed above, whether a gesture is valid is decided according to the number of gesture images of the same object, so that some invalid gestures are deleted before the amplitude judgment is carried out, which improves the accuracy of gesture recognition for the target object.
In an optional embodiment of the first aspect, determining the corresponding control instruction by querying the instruction database according to the gesture recognition result includes: querying the instruction database according to the waving direction of the target object's hand to determine the corresponding control instruction. The instruction database contains preset waving directions and their corresponding control instructions.
In an optional embodiment of the first aspect, controlling the robot to perform the corresponding operation according to the control instruction includes: judging, at preset time intervals, whether the target object has moved; and if so, tracking the target object and controlling the robot to complete the control instruction.
In the embodiment designed above, after the target object moves, the robot can follow the target object and still complete the control instruction, which improves the user's experience and the responsiveness of the system.
In an optional embodiment of the first aspect, the robot is equipped with multiple sensors, and determining the position of the target object according to the voice information includes: obtaining the receiving time of the voice information at each sensor; calculating the time difference between the earliest receiving time and each of the remaining receiving times; and determining the position of the target object according to the positions of the multiple sensors, the propagation speed of sound and the calculated time differences.
In the embodiment designed above, the position of the sound source is calculated from the time differences with which the voice information reaches different sensors, which improves the experience of human-computer interaction while making the determination of the target object's position more accurate.
Second aspect: the application provides a robot control device, which includes: a receiving and extraction module for receiving voice information and extracting the character information in the voice information; a judgment module for judging whether the character information contains a preset character; a determining module for determining the position of a target object according to the voice information after it is judged that the character information contains a preset character; a gesture recognition module for performing gesture recognition on the target object to obtain a gesture recognition result; and a query control module for determining a corresponding control instruction by querying an instruction database according to the gesture recognition result and controlling the robot to move according to the control instruction. The instruction database contains preset gesture recognition results and their corresponding control instructions.
In the embodiment designed above, the position of the target object is determined from the voice information, key characters in the voice information trigger gesture recognition of the target object, and a control instruction is looked up according to the gesture recognition result so that the robot is controlled to perform the corresponding operation. This solves the inconvenience and poor user experience of operating a robot through keys or a remote controller in the prior art: the robot is controlled by a combination of voice and gesture recognition, which makes controlling the robot more convenient and improves the user's experience of the product.
In an optional embodiment of the second aspect, the gesture recognition module is specifically configured to continuously acquire multiple scene images containing the target object; identify the gesture image of the target object in each scene image; and judge whether the hand swing amplitude of the target object exceeds a threshold, and if so, determine the waving direction of the target object's hand according to the multiple scene images.
In an optional embodiment of the second aspect, the query control module is specifically configured to query the instruction database according to the waving direction of the target object's hand to determine the corresponding control instruction. The instruction database contains preset waving directions and their corresponding control instructions.
In an optional embodiment of the second aspect, the judgment module is further configured to judge, at preset time intervals, whether the target object has moved; and the device further includes a tracking module for tracking the target object after it has moved and controlling the robot to complete the control instruction.
In an optional embodiment of the second aspect, the robot is equipped with multiple sensors, and the determining module is specifically configured to obtain the receiving time of the voice information at each sensor; calculate the time difference between the earliest receiving time and each of the remaining receiving times; and determine the position of the target object according to the positions of the multiple sensors, the propagation speed of sound and the calculated time differences.
Third aspect: the application further provides an electronic device, including a processor and a memory connected to the processor, the memory storing a computer program; when the device runs, the processor executes the computer program so as to perform the method of the first aspect or of any optional implementation of the first aspect.
Fourth aspect: the application provides a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the method of the first aspect or of any optional implementation of the first aspect is executed.
Fifth aspect: the application provides a computer program product which, when run on a computer, causes the computer to execute the method of the first aspect or of any optional implementation of the first aspect.
Other features and advantages of the application will be set out in the following specification and will in part become apparent from the specification or be understood by implementing the embodiments of the application. The purposes and other advantages of the application can be realised and obtained by the structures specifically pointed out in the written specification and the accompanying drawings.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the application and therefore should not be regarded as limiting its scope; for those of ordinary skill in the art, other relevant drawings can be obtained from these drawings without creative effort.
Fig. 1 is a first flow chart of the robot control method provided by the first embodiment of the application;
Fig. 2 is a second flow chart of the robot control method provided by the first embodiment of the application;
Fig. 3 is a third flow chart of the robot control method provided by the first embodiment of the application;
Fig. 4 is a fourth flow chart of the robot control method provided by the first embodiment of the application;
Fig. 5 is a schematic diagram of sensor voice reception provided by the first embodiment of the application;
Fig. 6 is a schematic structural diagram of the robot control device provided by the second embodiment of the application;
Fig. 7 is a schematic structural diagram of the electronic device provided by the third embodiment of the application.
Specific embodiment
The technical solutions in the embodiments of the present application are described below in conjunction with the drawings.
First embodiment
As shown in Fig. 1, the application provides a robot control method, which specifically includes the following steps:
Step S100: receive voice information and extract the character information in the voice information.
Step S102: judge whether the character information contains a preset character; if so, go to step S104.
Step S104: determine the position of the target object according to the voice information.
Step S106: perform gesture recognition on the target object to obtain a gesture recognition result.
Step S108: determine the corresponding control instruction by querying an instruction database according to the gesture recognition result; the instruction database contains preset gesture recognition results and their corresponding control instructions.
Step S110: control the robot to perform the corresponding operation according to the control instruction.
In step S100, the voice information can be received by a sound sensor or software recognition module arranged on the robot. Whenever the robot is in a working/powered state, the sound sensor or software recognition module remains on and receives voice information in real time. The robot can be, for example, a service robot in a hotel, and the voice information can be the sounds or speech issued by users such as hotel guests or staff. The character information in the voice information can be extracted by converting the speech into text.
After the speech has been converted into text in step S100, judging in step S102 whether the character information contains a preset character can be understood as judging whether the converted text contains a preset character or keyword: for example, judging whether the text contains phrases such as "Xiaoli (the robot's name), come here" or "Xiaoli, let's go". The content of the preset characters can be configured according to the specific application scenario.
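The keyword check of steps S100-S102 can be sketched as a substring match over the transcribed text. This is a minimal illustration only: the phrase list, the robot name "Xiaoli" and the function name are assumptions, not part of the patent.

```python
# Hedged sketch of the wake-phrase check (steps S100-S102): scan the
# speech-to-text transcript for any configured trigger phrase.
# The phrases below are illustrative placeholders.
PRESET_PHRASES = ("xiaoli, come here", "xiaoli, let's go")

def contains_preset_character(transcript: str) -> bool:
    """Return True if the transcript contains any preset trigger phrase."""
    text = transcript.lower()
    return any(phrase in text for phrase in PRESET_PHRASES)
```

In a real system the phrase set would be configured per deployment, as the paragraph above notes.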
After step S102 judges that the character information contains a preset character, step S104 is executed to determine the position of the target object according to the voice information. The target object in step S104 is the sender of the voice information in steps S100-S102, and its position can specifically be determined by sound localisation.
In step S106, after the position of the target object has been determined in step S104, gesture recognition is performed on the target object. Specifically, once the position of the target object is known, the camera on the robot can be controlled to aim at the direction of the target object and capture images, and gesture recognition analysis is performed on the captured images to obtain the gesture recognition result. The robot may be provided with one or more cameras. When there is one camera, the robot first rotates after determining the position of the target object so that the camera is aimed at that position, and then captures images; when there are multiple cameras, they can be arranged to face different directions, so that the robot can capture images of the target object's position without rotating.
In step S108, after step S106 has performed gesture recognition on the target object and obtained the gesture recognition result, the corresponding control instruction is queried in the instruction database using the gesture recognition result. Different gesture recognition results correspond to different control instructions: for example, waving the hand in different directions represents different control instructions, and different palm shapes represent different control instructions. Specifically, waving the hand towards oneself can represent an instruction for the robot to move towards the target object's position; waving from left to right can represent that the user wants the robot to move to the user's right, the corresponding control instruction being to move to the left from the robot's side; and a V-shaped palm can represent that the user wants to take a photo, the corresponding control instruction being a photographing instruction. These control instructions are stored in the instruction database after being associated one-to-one with gesture recognition results. When a gesture recognition result is obtained, the corresponding control instruction is looked up in the instruction database, and step S110 is then executed to control the robot to perform the corresponding operation, such as moving towards the user or away from the user.
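The one-to-one association between gesture recognition results and control instructions described for step S108 can be sketched as a simple lookup table. The gesture labels and instruction names below are illustrative assumptions, not identifiers from the patent.

```python
from typing import Optional

# Minimal sketch of the instruction database of step S108: each gesture
# recognition result is associated one-to-one with a control instruction.
# All labels are assumptions for illustration.
INSTRUCTION_DB = {
    "wave_right_to_left": "move_to_user_left",
    "wave_left_to_right": "move_to_user_right",
    "wave_toward_self": "approach_user",
    "palm_v_shape": "take_photo",
}

def lookup_instruction(gesture_result: str) -> Optional[str]:
    """Query the instruction database; None means no instruction matches."""
    return INSTRUCTION_DB.get(gesture_result)
```

A dictionary keeps the association queryable in constant time and easy to extend per deployment.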
In the scheme designed above, the position of the target object is determined from the voice information, key characters in the voice information trigger gesture recognition of the target object, and a control instruction is looked up according to the gesture recognition result so that the robot is controlled to perform the corresponding operation. This solves the inconvenience and poor user experience of operating a robot through keys or a remote controller in the prior art: the robot is controlled by a combination of voice and gesture recognition, which makes controlling the robot more convenient and improves the user's experience of the product.
In an optional embodiment of this embodiment, performing gesture recognition on the target object in step S106 to obtain the gesture recognition result can specifically be carried out as follows, as shown in Fig. 2:
Step S1060: continuously acquire multiple scene images containing the target object.
Step S1062: identify the gesture image of the target object in each scene image.
Step S1064: judge whether the hand swing amplitude of the target object exceeds a threshold; if so, go to step S1066.
Step S1066: determine the waving direction of the target object's hand according to the gesture images of the target object.
It has been stated in the foregoing description of step S106 that, after the position of the target object is determined, the camera of the robot captures images of the target object's direction. The scene images in step S1060 are the images obtained from this shooting; the term "scene image" reflects the actual application. For example, when a hotel service robot captures images of the target object's direction, there are inevitably obstacles between the target object and the robot, or other people passing by; the images obtained through the camera are therefore the scene images of step S1060. "Continuously acquiring" means that the camera of the robot continuously shoots the direction of the target object. The shooting frequency can be set freely, for example one image every 0.1 milliseconds, and the shooting duration can also be set freely, for example 30 seconds.
After the multiple scene images are obtained, step S1062 is executed: each scene image is processed to obtain the gesture images of the target object. The processing can take several forms; for example, images in which the target object is occluded can be deleted, so as to find multiple consecutive scene images in which the target object's gesture is clear. The gestures in these clear consecutive images are then extracted in time order for continuous gesture recognition analysis.
After the gesture image of the target object has been identified in each scene image in step S1062, step S1064 is executed. As described above, the gesture images extracted in step S1062 are multiple gesture images in time order. In step S1064, the hand swing amplitude of the target object can be analysed from these time-ordered gesture images, and it is then judged whether the amplitude exceeds a threshold. If it does, the gesture of the target object is judged to be valid, i.e. the target object has a need to use the robot. Step S1066 is then executed: the waving direction of the target object's hand is determined from the gesture images. Specifically, the waving trend of the hand is first analysed from the time-ordered gesture images, and the waving direction is then determined from the trend. For example, among multiple consecutive gesture images over a period of time, the first gesture image shows the user's hand on the right of the user's body, the second shows the hand at the middle of the body, and the third shows the hand on the left of the body; it is then judged that the user's hand is waving from the user's right to the user's left. After the waving direction has been identified, the corresponding control instruction can be looked up in the instruction database; for the waving direction from the user's right to the user's left described above, for example, the robot can move to the user's left. Further, the distance the robot moves can be configured according to the swing amplitude of the user's hand: for example, if the swing amplitude is 30 cm and the waving direction is from the user's right to the user's left, the robot can move 3 metres to the user's left. The above examples are only intended to facilitate understanding of the scheme of the application and do not limit its protection scope.
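The amplitude check and direction determination of steps S1064-S1066 can be sketched as follows, assuming each time-ordered gesture image has already been reduced to the hand's horizontal position relative to the user's body centre (positive towards the user's right). The threshold value and function name are illustrative assumptions.

```python
# Hedged sketch of steps S1064-S1066: reject low-amplitude (invalid) swings,
# then derive the waving direction from the time-ordered hand positions.
AMPLITUDE_THRESHOLD_CM = 20.0  # assumed validity threshold

def classify_wave(hand_x_cm, threshold=AMPLITUDE_THRESHOLD_CM):
    """hand_x_cm: time-ordered horizontal hand positions (cm, + = user's right).
    Returns the waving direction, or None if the swing amplitude is too small."""
    amplitude = max(hand_x_cm) - min(hand_x_cm)
    if amplitude <= threshold:
        return None  # treated as an invalid gesture
    # The trend from first to last frame gives the waving direction.
    return "right_to_left" if hand_x_cm[0] > hand_x_cm[-1] else "left_to_right"
```

For the worked example in the text (hand on the right of the body, then at the middle, then on the left), positions such as [30, 0, -30] yield "right_to_left".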
In the embodiment designed above, the gesture images of the target object are identified from the scene images, the hand swing amplitude is determined from the gesture images, whether the gesture is valid is decided according to the swing amplitude, and the waving direction is determined only after the gesture has been judged valid; the corresponding control instruction is then looked up according to the waving direction to control the robot to move. This solves the problem of misjudgment caused by invalid gestures in the scene images and improves the accuracy of gesture recognition for the target object.
In an optional embodiment of this embodiment, it has been mentioned in the foregoing description of step S1062 that a series of processing is performed on the scene images. Many gestures may appear in the acquired scene images, so before step S1064 judges by swing amplitude whether a gesture is a valid wave, some invalid gestures in the scene images can first be deleted. For example, identifying the gesture image of the target object in each scene image in step S1062 can be set as: extracting the gesture images in each scene image; judging whether the number of gesture images belonging to the same object across the multiple scene images exceeds a preset quantity threshold; and if so, determining the gesture images belonging to that object as the gesture images of the target object. In this scheme, the identified gesture images belonging to the same object are regarded as an initially valid gesture only if they are present in a certain number of gesture images. When a user really waves, the duration is generally long, so the gesture will be present for a long period across the multiple captured scene images; invalid gestures tend to last for a shorter time and will be present in fewer of the captured images. The above scheme can therefore be used to delete some invalid gestures in a preliminary way. In addition, whether gestures belong to the same object can be judged by the size of the gesture in the gesture image, the distance of the gesture from the robot, and so on.
In the embodiment designed above, whether a gesture is valid is decided according to the number of gesture images of the same object, so that some invalid gestures are deleted before the amplitude judgment is carried out, which improves the accuracy of gesture recognition for the target object.
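The preliminary quantity-threshold filter described above can be sketched by counting, per object, how many scene images contain that object's gesture. The count threshold and function name are assumed parameters for illustration.

```python
from collections import Counter

# Hedged sketch of the quantity-threshold filter: gestures that appear in too
# few of the captured scene images are discarded as transient/invalid.
MIN_IMAGE_COUNT = 5  # assumed preset quantity threshold

def filter_valid_objects(detections, min_count=MIN_IMAGE_COUNT):
    """detections: one object ID per scene image in which that object's
    gesture was found. Returns the object IDs kept as initially valid."""
    return {obj for obj, n in Counter(detections).items() if n > min_count}
```

A long genuine wave spans many frames and survives the filter; a brief passer-by gesture does not.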
In an optional embodiment of this embodiment, controlling the robot to perform the corresponding operation according to the control instruction in step S110 can also be executed according to the following steps, as shown in Fig. 3:
Step S1102: judge, at preset time intervals, whether the target object has moved; if so, go to step S1104.
Step S1104: track the target object and control the robot to complete the control instruction.
The scenario for the above scheme is that, after the robot has looked up the corresponding control instruction in the instruction database according to the gesture recognition result in step S108, the user's position may change before or while the robot performs the corresponding operation according to the control instruction. For example, the user walks away right after making the gesture (waving), or the user's position changes after the robot has performed part of the operation according to the control instruction. Therefore, step S1102 is executed to judge at preset time intervals whether the target object has moved; the specific judgment can use automatic recognition technology such as image recognition. After it is judged that the target has moved, step S1104 is executed to track the target object, which can specifically be done based on robot vision, until the control instruction corresponding to the target object's gesture recognition result has been completed.
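The periodic movement check of steps S1102-S1104 can be sketched as follows, under the simplifying assumption that the target's position has been reduced to a single coordinate sampled once per preset interval; the movement threshold and function name are illustrative assumptions.

```python
# Hedged sketch of steps S1102-S1104: once per preset interval, compare the
# target's position with its last known position; on movement, re-track.
MOVE_EPSILON = 0.1  # metres; smaller changes are not treated as movement

def retrack_positions(samples, epsilon=MOVE_EPSILON):
    """samples: target positions, one per preset interval. Returns the list of
    positions the robot re-tracks to (i.e. where movement was detected)."""
    moves, last = [], samples[0]
    for current in samples[1:]:
        if abs(current - last) > epsilon:  # target moved: follow it
            moves.append(current)
            last = current
    return moves
```

In a real robot each detected move would trigger the vision-based tracker before the control instruction resumes.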
In the embodiment designed above, after the target object moves, the robot can follow the target object and still complete the control instruction, which improves the user's experience and the responsiveness of the system.
In an optional embodiment of this embodiment, it has been mentioned in the foregoing description of step S104 that the position of the target object can be determined by sound localisation. Specifically, multiple sensors that can receive external voice information are arranged on the robot. On this basis, determining the position of the target object according to the voice information in step S104 can specifically be carried out as follows, as shown in Fig. 4:
Step S1040: obtain the receiving time of the voice information at each sensor.
Step S1042: calculate the time difference between the earliest receiving time and each of the remaining receiving times.
Step S1044: determine the position of the target object according to the positions of the multiple sensors, the propagation speed of sound and the calculated time differences.
As shown in Fig. 5, taking four sensors as an example, the above scheme can be understood as follows: the four sensors are arranged in a square array, one at each corner. If the side length of the square is 2K, the coordinates of sensor 1 are (-K, -K), those of sensor 2 are (K, -K), those of sensor 3 are (K, K), and those of sensor 4 are (-K, K). After the user utters the voice information, the time at which each sensor receives it is recorded. Suppose sensor 1 receives the voice information first, at time T1, and sensors 4, 2 and 3 receive it afterwards, at times T4, T2 and T3 respectively. The time differences between the earliest receiving time and the remaining receiving times in step S1042 are then: ΔT1-3 = T3 - T1 between sensors 1 and 3, ΔT1-2 = T2 - T1 between sensors 1 and 2, and ΔT1-4 = T4 - T1 between sensors 1 and 4. From these time differences, the propagation speed of sound and the sensor positions, the position (x, y) of the target satisfies the hyperbolic (time-difference-of-arrival) system:
√((x-K)² + (y+K)²) − √((x+K)² + (y+K)²) = c·ΔT1-2
√((x-K)² + (y-K)²) − √((x+K)² + (y+K)²) = c·ΔT1-3
√((x+K)² + (y-K)²) − √((x+K)² + (y+K)²) = c·ΔT1-4
where c is the propagation speed of sound.
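The three equations above can be solved numerically. Below is a minimal, dependency-free sketch (function and variable names are my own, not from the patent) that follows steps S1040–S1044: it picks the sensor with the earliest receiving time as reference, forms the time differences to the others, and recovers the source position by a coarse-to-fine grid search over the TDOA residuals. A real implementation would more likely use a hyperbolic least-squares solver.

```python
import math

C = 343.0  # assumed speed of sound in air (m/s); the patent leaves c unspecified


def arrival_times(src, sensors, c=C):
    """Time of flight from a source position to each sensor (for simulation)."""
    return [math.hypot(src[0] - sx, src[1] - sy) / c for sx, sy in sensors]


def locate_tdoa(times, sensors, c=C, span=5.0):
    """Estimate the sound-source position (x, y) from arrival times at known
    sensor positions: take the earliest-arriving sensor as reference, form
    the time differences, and minimise the TDOA residuals on a grid."""
    ref = min(range(len(times)), key=times.__getitem__)  # earliest arrival
    dts = [t - times[ref] for t in times]                # ΔT(ref -> i)

    def residual(x, y):
        # Sum of squared violations of d_i - d_ref = c * ΔT(ref -> i)
        d_ref = math.hypot(x - sensors[ref][0], y - sensors[ref][1])
        return sum(
            (math.hypot(x - sx, y - sy) - d_ref - c * dt) ** 2
            for (sx, sy), dt in zip(sensors, dts)
        )

    def grid_min(cx, cy, half, step):
        best = (float("inf"), cx, cy)
        n = int(round(2 * half / step)) + 1
        for iy in range(n):
            y = cy - half + iy * step
            for ix in range(n):
                x = cx - half + ix * step
                e = residual(x, y)
                if e < best[0]:
                    best = (e, x, y)
        return best

    _, x, y = grid_min(0.0, 0.0, span, 0.1)  # coarse pass over the room
    _, x, y = grid_min(x, y, 0.2, 0.01)      # refine around the best cell
    return x, y
```

For the square array of Fig. 5 with K = 1, a simulated source at (0.7, -0.3) is recovered to within the fine grid resolution.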
After the position of the target object has been determined by the above scheme, if the robot has only one camera as described above, the camera is rotated so that it faces the direction of the determined target position and captures images.
In the embodiment designed above, the position of the sound source is calculated from the differences in the times at which the voice information reaches the different sensors, which improves the human-computer interaction experience and, at the same time, makes the determination of the target object's position more accurate.
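Aiming the single camera at the determined position reduces to computing a pan angle from the located coordinates. A one-line sketch (the helper name is hypothetical, not from the patent), with the robot at the origin of the sensor array:

```python
import math


def pan_angle_deg(x, y):
    """Bearing from the robot (origin of the sensor array) to the located
    sound source, in degrees anticlockwise from the +x axis."""
    return math.degrees(math.atan2(y, x))
```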
Second embodiment
Fig. 6 shows a schematic block diagram of the robot control device 2 provided by the present application. It should be understood that the device corresponds to the method embodiments of Figs. 1 to 5 above and can perform the steps involved in the method of the first embodiment; for the specific functions of the device, reference may be made to the description above, and a detailed description is omitted here where appropriate to avoid repetition. The device includes at least one software functional module that can be stored in a memory in the form of software or firmware or solidified in the operating system (OS) of the device. Specifically, the device includes: a receiving and extraction module 200 for receiving voice information and extracting the character information in the voice information; a judgment module 202 for judging whether the character information contains a preset character; a determining module 204 for determining the position of the target object from the voice information after it is judged that the character information contains the preset character; a gesture recognition module 206 for performing gesture recognition on the target object to obtain a gesture recognition result; and a query control module 208 for querying an instruction database according to the gesture recognition result to determine the corresponding control instruction and controlling the robot to perform the corresponding operation according to the control instruction, where the instruction database contains preset gesture recognition results and their corresponding control instructions.
The device designed in the above embodiment determines the position of the target object from voice information, triggers gesture recognition of the target object by a key character in the voice information, looks up the control instruction according to the gesture recognition result, and controls the robot to perform the corresponding operation according to the control instruction. This solves the inconvenience and poor user experience of controlling a robot with buttons or a remote control in the prior art: by combining voice with gesture recognition, controlling the robot becomes more convenient and the user's experience of the product is improved.
In an optional implementation of this embodiment, the gesture recognition module 206 is specifically configured to continuously acquire multiple scene images containing the target object, identify the gesture image of the target object in each scene image, and judge whether the swing amplitude of the target object's hand exceeds a threshold; if it does, the direction in which the target object's hand is waved is determined from the multiple scene images.
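A minimal sketch of the amplitude-then-direction logic performed by module 206, assuming the hand centre's horizontal position has already been extracted from each scene image (the threshold value and all names are illustrative assumptions, not from the patent):

```python
def wave_direction(xs, threshold=0.15):
    """Classify a hand wave from the hand centre's horizontal position in
    consecutive scene images (normalised 0..1 image coordinates).

    Returns 'left' or 'right' once the swing amplitude exceeds the
    threshold, or None when the motion is too small to count as a wave."""
    if len(xs) < 2:
        return None
    amplitude = max(xs) - min(xs)
    if amplitude <= threshold:
        return None  # swing amplitude below the threshold: not a wave
    # Dominant swing direction: which extreme position was reached last
    return "right" if xs.index(max(xs)) > xs.index(min(xs)) else "left"
```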
In an optional implementation of this embodiment, the query control module 208 is specifically configured to query the instruction database according to the waving direction of the target object's hand to determine the corresponding control instruction, where the instruction database contains preset hand-waving directions and their corresponding control instructions.
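The instruction database queried by module 208 can be as simple as a lookup table keyed by waving direction. A hypothetical sketch (the direction strings and instruction names are invented for illustration; the patent does not specify the entries):

```python
# Hypothetical instruction database: the patent only says it maps preset
# gesture recognition results to control instructions.
INSTRUCTION_DB = {
    "left": "turn_left",
    "right": "turn_right",
}


def lookup_instruction(gesture_result, db=INSTRUCTION_DB):
    """Query the instruction database; an unrecognised gesture yields None,
    i.e. no control instruction is issued to the robot."""
    return db.get(gesture_result)
```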
In an optional implementation of this embodiment, the judgment module 202 is further configured to judge at preset intervals whether the target object has moved, and a tracking module 210 is configured to track the target object after it has moved and to control the robot to complete the control instruction.
In an optional implementation of this embodiment, the robot is equipped with multiple sensors, and the determining module 204 is specifically configured to obtain the time at which each sensor receives the voice information, calculate the time difference between the earliest receiving time and each of the remaining receiving times, and determine the position of the target object from the positions of the multiple sensors, the propagation speed of sound, and the calculated time differences.
Third embodiment
As shown in Fig. 7, the present application provides an electronic device 3 comprising a processor 301 and a memory 302, which are interconnected and communicate with each other through a communication bus 303 and/or another form of connection mechanism (not shown). The memory 302 stores a computer program executable by the processor 301; when the computing device runs, the processor 301 executes the computer program to perform the method of the first embodiment or of any optional implementation of the first embodiment.
The present application provides a non-transitory storage medium on which a computer program is stored; when the computer program is run by a processor, it performs the method of the first embodiment or of any optional implementation of the first embodiment. The storage medium may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disc.
The present application provides a computer program product which, when run on a computer, causes the computer to perform the method of the first embodiment or of any optional implementation of the first embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed device and method may also be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into modules is only a division by logical function, and other divisions are possible in actual implementation. For instance, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, each module may exist separately, or two or more modules may be integrated to form an independent part.
Herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations.
The above description is only an example of the present application and is not intended to limit its protection scope; for those skilled in the art, various changes and variations of the application are possible. Any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present application shall be included within its protection scope.
Claims (10)
1. A robot control method, characterized by comprising:
receiving voice information and extracting the character information in the voice information;
judging whether the character information contains a preset character;
if it does, determining the position of a target object according to the voice information;
performing gesture recognition on the target object to obtain a gesture recognition result; and
querying an instruction database according to the gesture recognition result to determine the corresponding control instruction, and controlling the robot to perform the corresponding operation according to the control instruction, wherein the instruction database contains preset gesture recognition results and their corresponding control instructions.
2. The method according to claim 1, characterized in that performing gesture recognition on the target object to obtain a gesture recognition result comprises:
continuously acquiring multiple scene images containing the target object;
identifying the gesture image of the target object in each scene image; and
judging whether the swing amplitude of the target object's hand exceeds a threshold, and if it does, determining the waving direction of the target object's hand according to the gesture images of the target object.
3. The method according to claim 2, characterized in that determining the waving direction of the target object's hand according to the gesture images of the target object comprises:
analysing the waving tendency of the target object's hand from the gesture images of the target object; and
determining the waving direction of the target object's hand according to the waving tendency.
4. The method according to claim 2, characterized in that identifying the gesture image of the target object in each scene image comprises:
extracting the gesture images in each scene image;
judging whether the number of gesture images belonging to the same object in the multiple scene images exceeds a preset number threshold; and
if it does, determining the gesture images belonging to the same object whose number exceeds the preset number threshold as the gesture images of the target object.
5. The method according to claim 2, characterized in that querying the instruction database according to the gesture recognition result to determine the corresponding control instruction comprises:
querying the instruction database according to the waving direction of the target object's hand to determine the corresponding control instruction, wherein the instruction database contains preset hand-waving directions and their corresponding control instructions.
6. The method according to claim 1, characterized in that controlling the robot to perform the corresponding operation according to the control instruction comprises:
judging at preset intervals whether the target object has moved; and
if it has, tracking the target object and controlling the robot to complete the control instruction.
7. The method according to claim 1, characterized in that the robot is equipped with multiple sensors, and determining the position of the target object according to the voice information comprises:
obtaining the time at which each sensor receives the voice information;
calculating the time difference between the earliest receiving time and each of the remaining receiving times; and
determining the position of the target object according to the positions of the multiple sensors, the propagation speed of sound, and the calculated time differences.
8. A robot control device, characterized in that the device comprises:
a receiving and extraction module for receiving voice information and extracting the character information in the voice information;
a judgment module for judging whether the character information contains a preset character;
a determining module for determining the position of a target object according to the voice information after it is judged that the character information contains the preset character;
a gesture recognition module for performing gesture recognition on the target object to obtain a gesture recognition result; and
a query control module for querying an instruction database according to the gesture recognition result to determine the corresponding control instruction, and controlling the robot to perform the corresponding operation according to the control instruction, wherein the instruction database contains preset gesture recognition results and their corresponding control instructions.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
10. A non-transitory readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910719457.0A CN110434853B (en) | 2019-08-05 | 2019-08-05 | Robot control method, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110434853A true CN110434853A (en) | 2019-11-12 |
CN110434853B CN110434853B (en) | 2021-05-14 |
Family
ID=68433338
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910719457.0A Active CN110434853B (en) | 2019-08-05 | 2019-08-05 | Robot control method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110434853B (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111994299A (en) * | 2020-08-25 | 2020-11-27 | 新石器慧义知行智驰(北京)科技有限公司 | Unmanned vehicle baggage consignment method, device and medium |
CN113303708A (en) * | 2020-02-27 | 2021-08-27 | 佛山市云米电器科技有限公司 | Control method for maintenance device, and storage medium |
CN113510707A (en) * | 2021-07-23 | 2021-10-19 | 上海擎朗智能科技有限公司 | Robot control method and device, electronic equipment and storage medium |
CN113552949A (en) * | 2021-07-30 | 2021-10-26 | 北京凯华美亚科技有限公司 | Multifunctional immersive audio-visual interaction method, device and system |
CN113779184A (en) * | 2020-06-09 | 2021-12-10 | 大众问问(北京)信息科技有限公司 | Information interaction method and device and electronic equipment |
CN113854904A (en) * | 2021-09-29 | 2021-12-31 | 北京石头世纪科技股份有限公司 | Control method and device of cleaning equipment, cleaning equipment and storage medium |
CN113909743A (en) * | 2021-09-30 | 2022-01-11 | 北京博清科技有限公司 | Welding control method, control device and welding system |
CN114237068A (en) * | 2021-12-20 | 2022-03-25 | 珠海格力电器股份有限公司 | Intelligent device control method, intelligent device control module, intelligent device and storage medium |
CN114327056A (en) * | 2021-12-23 | 2022-04-12 | 新疆爱华盈通信息技术有限公司 | Target object control method, device and storage medium |
CN114428506A (en) * | 2022-04-06 | 2022-05-03 | 北京云迹科技股份有限公司 | Control method and device of service robot |
WO2022142830A1 (en) * | 2020-12-28 | 2022-07-07 | 展讯通信(上海)有限公司 | Application device and air gesture recognition method thereof |
CN116098536A (en) * | 2021-11-08 | 2023-05-12 | 青岛海尔科技有限公司 | Robot control method and device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2367140A1 (en) * | 2010-03-15 | 2011-09-21 | OMRON Corporation, a corporation of Japan | Gesture recognition apparatus, method for controlling gesture recognition apparatus, and control program |
CN105867630A (en) * | 2016-04-21 | 2016-08-17 | 深圳前海勇艺达机器人有限公司 | Robot gesture recognition method and device and robot system |
CN106203259A (en) * | 2016-06-27 | 2016-12-07 | 旗瀚科技股份有限公司 | The mutual direction regulating method of robot and device |
CN107765855A (en) * | 2017-10-25 | 2018-03-06 | 电子科技大学 | A kind of method and system based on gesture identification control machine people motion |
CN108596092A (en) * | 2018-04-24 | 2018-09-28 | 亮风台(上海)信息科技有限公司 | Gesture identification method, device, equipment and storage medium |
CN109313485A (en) * | 2017-02-18 | 2019-02-05 | 广州艾若博机器人科技有限公司 | Robot control method, device and robot based on gesture identification |
CN110083243A (en) * | 2019-04-29 | 2019-08-02 | 深圳前海微众银行股份有限公司 | Camera-based interaction method, device, robot and readable storage medium |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113303708A (en) * | 2020-02-27 | 2021-08-27 | 佛山市云米电器科技有限公司 | Control method for maintenance device, and storage medium |
CN113779184A (en) * | 2020-06-09 | 2021-12-10 | 大众问问(北京)信息科技有限公司 | Information interaction method and device and electronic equipment |
CN111994299A (en) * | 2020-08-25 | 2020-11-27 | 新石器慧义知行智驰(北京)科技有限公司 | Unmanned vehicle baggage consignment method, device and medium |
WO2022142830A1 (en) * | 2020-12-28 | 2022-07-07 | 展讯通信(上海)有限公司 | Application device and air gesture recognition method thereof |
CN113510707A (en) * | 2021-07-23 | 2021-10-19 | 上海擎朗智能科技有限公司 | Robot control method and device, electronic equipment and storage medium |
CN113552949A (en) * | 2021-07-30 | 2021-10-26 | 北京凯华美亚科技有限公司 | Multifunctional immersive audio-visual interaction method, device and system |
CN113854904A (en) * | 2021-09-29 | 2021-12-31 | 北京石头世纪科技股份有限公司 | Control method and device of cleaning equipment, cleaning equipment and storage medium |
CN113909743A (en) * | 2021-09-30 | 2022-01-11 | 北京博清科技有限公司 | Welding control method, control device and welding system |
CN116098536A (en) * | 2021-11-08 | 2023-05-12 | 青岛海尔科技有限公司 | Robot control method and device |
CN114237068A (en) * | 2021-12-20 | 2022-03-25 | 珠海格力电器股份有限公司 | Intelligent device control method, intelligent device control module, intelligent device and storage medium |
CN114237068B (en) * | 2021-12-20 | 2024-05-03 | 珠海格力电器股份有限公司 | Intelligent device control method, module, intelligent device and storage medium |
CN114327056A (en) * | 2021-12-23 | 2022-04-12 | 新疆爱华盈通信息技术有限公司 | Target object control method, device and storage medium |
CN114428506A (en) * | 2022-04-06 | 2022-05-03 | 北京云迹科技股份有限公司 | Control method and device of service robot |
Also Published As
Publication number | Publication date |
---|---|
CN110434853B (en) | 2021-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110434853A (en) | A kind of robot control method, device and storage medium | |
CN103869814B (en) | Terminal positioning and navigation method and mobile terminal | |
CN106355604B (en) | Tracking image target method and system | |
CN107990899A (en) | A kind of localization method and system based on SLAM | |
CN109886078A (en) | The retrieval localization method and device of target object | |
CN106030610B (en) | The real-time 3D gesture recognition and tracking system of mobile device | |
CN109073385A (en) | A vision-based localization method and aircraft |
CN111680594A (en) | Augmented reality interaction method based on gesture recognition | |
CN110268225A (en) | The method of positioning device, server-side and mobile robot on map | |
CN103196430A (en) | Mapping navigation method and system based on flight path and visual information of unmanned aerial vehicle | |
CN113116224B (en) | Robot and control method thereof | |
CN110428449A (en) | Target detection tracking method, device, equipment and storage medium | |
CN110737798B (en) | Indoor inspection method and related product | |
CN109933061A (en) | Robot and control method based on artificial intelligence | |
EP2538372A1 (en) | Dynamic gesture recognition process and authoring system | |
CN110349212A (en) | Immediately optimization method and device, medium and the electronic equipment of positioning and map structuring | |
Wang et al. | Dynamic gesture recognition using 3D trajectory | |
CN109543644A (en) | A kind of recognition methods of multi-modal gesture | |
US8970479B1 (en) | Hand gesture detection | |
TWI739339B (en) | System for indoor positioning of personnel and tracking interactions with specific personnel by mobile robot and method thereof | |
CN115278014A (en) | Target tracking method, system, computer equipment and readable medium | |
WO2019014620A1 (en) | Capturing, connecting and using building interior data from mobile devices | |
TW202011772A (en) | Target function calling method and apparatus, mobile terminal and storage medium | |
WO2024078088A1 (en) | Interaction processing method and apparatus | |
Shell et al. | Planning coordinated event observation for structured narratives |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder | ||

Address after: Room 201, building 4, courtyard 8, Dongbeiwang West Road, Haidian District, Beijing
Patentee after: Beijing Yunji Technology Co.,Ltd.
Address before: Room 201, building 4, courtyard 8, Dongbeiwang West Road, Haidian District, Beijing
Patentee before: BEIJING YUNJI TECHNOLOGY Co.,Ltd.