CN110293554A - Control method, device and system for a robot - Google Patents

Control method, device and system for a robot

Info

Publication number
CN110293554A
CN110293554A (application CN201810236923.5A)
Authority
CN
China
Prior art keywords
image
robot
work scene
region
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810236923.5A
Other languages
Chinese (zh)
Inventor
龚耘
张彦刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Orion Star Technology Co Ltd
Original Assignee
Beijing Orion Star Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Orion Star Technology Co Ltd
Priority to CN201810236923.5A
Publication of CN110293554A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/161: Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1671: Programme controls characterised by programming, planning systems for manipulators characterised by simulation, either to verify existing program or to create and verify new program, CAD/CAM oriented, graphic oriented programming systems
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems

Abstract

The present invention proposes a control method, device and system for a robot. The method includes: obtaining and displaying a work scene image of the robot; in response to a user operation, determining a target image region from the work scene image; and controlling the robot to perform a target operation on the object corresponding to the target image region. After the work scene image is obtained, in cases where object recognition accuracy is low or an object cannot be recognized, the user operation assists in controlling the robot to perform an action such as grasping on the object in the target image region, improving the accuracy of the action and the probability of successful execution. This solves the problem in the prior art whereby, when the work scene image is recognized by an image recognition model and the robot is instructed to perform an action such as grasping according to the object recognition result, failure to accurately recognize the object in the image leads to low accuracy of the action performed by the robot, or a low probability of successful execution.

Description

Control method, device and system for a robot
Technical field
The present invention relates to the field of device control technology, and in particular to a control method, device and system for a robot.
Background technique
By obtaining corresponding instructions, a robot executes corresponding operations, assisting people with actions that are difficult to complete by hand and providing convenience in daily life.
In the related art, an image recognition model is used to recognize the work scene image of the robot, and the object recognition result corresponding to an image region is used to instruct the robot to perform a target action such as grasping. With this approach, when the recognition accuracy for an object in the image is not high, or when the image recognition module cannot identify the object at all, the robot performs the target action with low accuracy, and the operation may even fail.
Summary of the invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art.
To this end, the present invention proposes a control method for a robot in which, after a work scene image is obtained, in cases where object recognition accuracy is low or an object cannot be recognized, a user operation assists in controlling the robot to perform an action such as grasping on the object in a target image region, improving the accuracy of the action and the probability of successful execution.
The present invention further proposes a control device for a robot.
The present invention further proposes an electronic device.
The present invention further proposes a control system for a robot.
The present invention further proposes a computer-readable storage medium.
An embodiment of a first aspect of the present invention proposes a control method for a robot, comprising:
obtaining and displaying a work scene image of the robot;
in response to a user operation, determining a target image region from the work scene image; and
controlling the robot to perform a target operation on the object corresponding to the target image region.
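The three claimed steps can be sketched as follows; all class and method names are hypothetical illustrations chosen for this sketch, since the patent does not specify any API:

```python
# Minimal sketch of the claimed three-step control flow. The Camera,
# Terminal and Robot interfaces used here are assumed stand-ins for the
# image sensor, the user-facing terminal and the robot link.
from dataclasses import dataclass


@dataclass
class Region:
    """A target image region in pixel coordinates."""
    x: int
    y: int
    w: int
    h: int


class RobotController:
    def __init__(self, camera, terminal, robot):
        self.camera = camera
        self.terminal = terminal
        self.robot = robot

    def run(self):
        # Step 1: obtain and display the work scene image of the robot.
        image = self.camera.capture()
        self.terminal.display(image)
        # Step 2: in response to a user operation (voice or touch),
        # determine the target image region from the work scene image.
        region = self.terminal.await_user_selection(image)
        # Step 3: control the robot to perform the target operation
        # (e.g. grasping) on the object in the target image region.
        self.robot.execute(region)
        return region
```

In use, the terminal's `await_user_selection` would block until the user taps a region or issues a voice command, which is exactly the human-in-the-loop step the method claims.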
Optionally, as a first possible implementation of the first aspect, obtaining and displaying the work scene image of the robot comprises:
obtaining first image recognition information of the work scene image, the first image recognition information indicating the objects recognized from the work scene image; and
marking the recognized objects in the displayed work scene image.
Optionally, as a second possible implementation of the first aspect, determining the target image region from the work scene image in response to the user operation comprises:
in response to the user operation, determining that the user operation has selected an object from among the recognized objects; and
taking the image region of the work scene image in which the selected object is displayed as the target image region.
Optionally, as a third possible implementation of the first aspect, obtaining and displaying the work scene image of the robot comprises:
obtaining second image recognition information of the work scene image, the second image recognition information indicating the image regions of the work scene image in which objects are displayed; and
marking the image regions in which objects are displayed in the displayed work scene image.
Optionally, as a fourth possible implementation of the first aspect, determining the target image region from the work scene image in response to the user operation comprises:
in response to the user operation, determining, from among the image regions of the work scene image in which objects are displayed, the target image region selected by the user operation.
Optionally, as a fifth possible implementation of the first aspect, determining the target image region from the work scene image in response to the user operation comprises:
in response to the user operation, taking the image region of the work scene image selected by the user operation as the target image region.
Optionally, as a sixth possible implementation of the first aspect, controlling the robot to perform the target operation on the object corresponding to the target image region comprises:
obtaining the image position of the target image region in the work scene image and/or identification information of the object displayed in the target image region; and
sending a control instruction to the robot to control the robot to perform the target operation on the object at the real-space position corresponding to the image position, and/or on the object corresponding to the identification information.
Optionally, as a seventh possible implementation of the first aspect, the image position comprises the image coordinates of each reference point in the target image region;
wherein the image coordinates correspond to the coordinates of the real-space position in a world coordinate system.
Optionally, as an eighth possible implementation of the first aspect, the user operation comprises at least one of voice input and a touch operation.
In the robot control method of the embodiment of the present invention, a work scene image of the robot is obtained and displayed; in response to a user operation, a target image region is determined from the work scene image; and the robot is controlled to perform a target operation on the object corresponding to the target image region. After the work scene image is obtained, in cases where object recognition accuracy is low or an object cannot be recognized, the user operation assists in controlling the robot to perform an action such as grasping on the object in the target image region, improving the accuracy of the action and the probability of successful execution. This solves the prior-art problem that, when object recognition accuracy is low or an object cannot be recognized, the robot performs an action such as grasping with low accuracy or a low probability of success.
An embodiment of another aspect of the present invention proposes a control device for a robot, comprising:
an obtaining module, configured to obtain and display a work scene image of the robot;
a determining module, configured to determine, in response to a user operation, a target image region from the work scene image; and
a control module, configured to control the robot to perform a target operation on the object corresponding to the target image region.
In the robot control device of the embodiment of the present invention, the obtaining module obtains and displays a work scene image of the robot, the determining module determines a target image region from the work scene image in response to a user operation, and the control module controls the robot to perform a target operation on the object corresponding to the target image region. After the work scene image is obtained, in cases where object recognition accuracy is low or an object cannot be recognized, the user operation assists in controlling the robot to perform an action such as grasping on the object in the target image region, improving the accuracy of the action and the probability of successful execution. This solves the prior-art problem that, when object recognition accuracy is low or an object cannot be recognized, the robot performs an action such as grasping with low accuracy or a low probability of success.
An embodiment of another aspect of the present invention proposes an electronic device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the program, the robot control method described in the above method embodiments is implemented.
An embodiment of another aspect of the present invention proposes a control system for a robot, comprising: a terminal, an image sensor, a master control device and a robot; the terminal, the image sensor and the robot are each communicatively connected with the master control device;
wherein the terminal is configured to execute the robot control method described in the preceding method embodiments;
the image sensor is configured to capture the work scene image of the robot;
the master control device is configured to control the robot under the instruction of the terminal; and
the robot is configured to perform the target operation under the control of the master control device.
An embodiment of another aspect of the present invention proposes a computer-readable storage medium on which a computer program is stored, wherein when the program is executed by a processor, the robot control method described in the above method embodiments is implemented.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and will in part become apparent from the description or be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a robot control method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another robot control method provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of yet another robot control method provided by an embodiment of the present invention;
Fig. 4 is a schematic flowchart of still another robot control method provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a robot control device provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a robot control system provided by an embodiment of the present invention; and
Fig. 7 is a block diagram of an example electronic device suitable for implementing embodiments of the present application.
Specific embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements, or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended to explain the present invention; they are not to be construed as limiting the invention.
The control method, device and system for a robot of the embodiments of the present invention are described below with reference to the accompanying drawings.
In the related art, when an operation such as object grasping is performed by a robot, the robot is combined with image vision: an image recognition model acquires an image of the robot's work scene, recognizes the work scene image, and determines the target image region corresponding to the object the robot needs to grasp. However, the image recognition model is trained in advance, so the types and number of objects it can recognize in the work scene image are limited to a certain extent, and some image recognition models are only applicable to recognizing objects in a fixed work scene. When the work scene changes significantly, the image recognition model cannot accurately recognize the objects in the robot's work scene image based on its previous training, which leads to low accuracy of actions such as grasping performed by the robot, or even failure.
To solve this problem, an embodiment of the present invention proposes a possible implementation of a robot control method in which, after the work scene image is obtained, the target image region is determined from the work scene through a user operation, assisting the robot in performing an action such as grasping on the object in the target image region and improving the accuracy of the work and the probability of successful execution.
Fig. 1 is a schematic flowchart of a robot control method provided by an embodiment of the present invention. The method is executed by an electronic device that supports information interaction; the electronic device may be integrated inside the robot or may be independent of the robot. For example, the electronic device mentioned in this embodiment may be a terminal used by the user, where the terminal is independent of the robot; it may also be the master control device of the robot, which may be integrated inside the robot or independent of it.
For ease of description, this embodiment is described using the example in which the electronic device executing the control method is independent of the robot. As shown in Fig. 1, the method includes the following steps:
Step 101: obtain and display the work scene image of the robot.
Specifically, the electronic device executing the method of this embodiment interacts with the robot through a master control device that is independent of the robot. To obtain and display the work scene image of the robot, the electronic device may instruct the master control device to drive an image sensor to capture an image and send the resulting work scene image to the electronic device.
Further, after obtaining the work scene image, the master control device may also perform image recognition on the work scene image to obtain related identification information, and send the identification information together with the work scene image to the aforementioned electronic device.
As a possible implementation, when an operation in which the user controls the robot to perform a target operation is detected, the electronic device sends an instruction to obtain the work scene image of the robot to the master control device through an interactive interface with the master control device. After the master control device drives the image sensor to capture the work scene image of the robot, the captured work scene image is input into a pre-trained image recognition model. Based on the recognizable objects included in the database corresponding to the neural network model, the objects in the work scene image are recognized and marked; for example, the mark may be a text label or a bounding frame. A work scene image containing the recognition result is output, and the master control device then transmits the work scene image containing the identification information to the electronic device for display.
Because the number and types of objects included in the database are limited, and the work scenes of robots differ, the master control device's recognition results for objects in the work scene image vary, so the identification information contained in the obtained work scene image also differs. For ease of description, the identification information can be divided, according to its content, into first image recognition information and second image recognition information. The first image recognition information indicates the objects recognized from the work scene image; the second image recognition information indicates the image regions of the work scene image in which objects are displayed.
Depending on which image recognition information is contained in the obtained work scene image, the action performed by the electronic device also differs. Specifically, when the work scene image and the first image recognition information are obtained, the recognized objects are marked in the displayed work scene image. When the work scene image and the second image recognition information are obtained, the image regions in which objects are displayed are marked in the displayed work scene image.
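The two kinds of identification information lead to two different on-screen annotations, which can be sketched as follows. The dictionary shapes (`"box"`, `"label"` keys) are assumptions made for this illustration only:

```python
# Sketch of how a terminal might annotate the displayed work scene image,
# depending on which kind of recognition information the master control
# device returned. The data shapes here are assumed, not from the patent.

def annotate(recognition):
    """Return one annotation string per recognition entry.

    First image recognition info: entries carry a recognized object label,
    so the annotation names the object.
    Second image recognition info: entries carry only a region in which
    some object is displayed, so the annotation is an anonymous box.
    """
    marks = []
    for entry in recognition:
        box = entry["box"]                    # (x, y, w, h) in pixels
        label = entry.get("label")            # None => second-info case
        if label is not None:
            marks.append(f"{label} @ {box}")  # labelled marker
        else:
            marks.append(f"object? @ {box}")  # unlabelled region marker
    return marks
```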
Step 102: in response to a user operation, determine the target image region from the work scene image.
The user operation comprises at least one of voice input and a touch operation.
Specifically, in one possible scenario, when the electronic device obtains only the work scene image without identification information, then in response to the user operation, the image region of the work scene image selected by the user operation is taken as the target image region.
In another possible scenario, when the electronic device obtains not only the work scene image but also identification information, the target image region can be selected in combination with the identification information. As one possible implementation, when the identification information is the first image recognition information, then in response to the user operation, it is determined that the user operation has selected an object from among the recognized objects, and the image region of the work scene image in which the selected object is displayed is taken as the target image region. As another possible implementation, when the identification information is the second image recognition information, then in response to the user operation, the target image region selected by the user operation is determined from among the image regions of the work scene image in which objects are displayed.
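The three selection scenarios just described can be sketched as one dispatch function. The input shapes (a user-drawn `"region"`, a named `"object"`, or a tapped `"point"`) are assumptions for this illustration:

```python
# Sketch of the three ways the target image region may be resolved:
# no recognition info  -> the user-drawn region itself is the target;
# first info (labels)  -> the region of the object the user selected;
# second info (regions)-> the recognized region the user tapped inside.

def resolve_target_region(user_op, recognition=None):
    if recognition is None:
        # No identification info: the region selected by the user
        # operation is taken directly as the target image region.
        return user_op["region"]
    if "object" in user_op:
        # First image recognition info: match the selected object
        # against the labelled recognition results.
        for entry in recognition:
            if entry.get("label") == user_op["object"]:
                return entry["box"]
        return None
    # Second image recognition info: pick the displayed-object region
    # that contains the user's tap point.
    tx, ty = user_op["point"]
    for entry in recognition:
        x, y, w, h = entry["box"]
        if x <= tx < x + w and y <= ty < y + h:
            return entry["box"]
    return None
```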
Step 103: control the robot to perform the target operation on the object corresponding to the target image region.
Specifically, the electronic device obtains the image position of the target image region in the work scene image and/or identification information of the object displayed in the target image region, and generates a control instruction according to the image position or identification information. The control instruction is then sent to the robot through the master control device, controlling the robot to perform the target operation on the object at the real-space position corresponding to the image position, and/or on the object corresponding to the identification information.
The image position comprises the image coordinates of each reference point in the target image region, where the image coordinates correspond to the coordinates of the real-space position in a world coordinate system.
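The correspondence between image coordinates and real-space coordinates can be illustrated with a pinhole camera model. This is a sketch under assumed conditions: the camera intrinsics (fx, fy, cx, cy), the object's depth z, and a pure-translation camera-to-world transform are all taken as known, and none of this calibration detail is prescribed by the patent:

```python
# Sketch: back-project a pixel (u, v) at known depth z into camera
# coordinates via a pinhole model, then shift into world coordinates
# using an assumed pure-translation camera-to-world transform.

def pixel_to_world(u, v, z, fx, fy, cx, cy, cam_pos=(0.0, 0.0, 0.0)):
    # Camera-frame coordinates of the point imaged at (u, v).
    xc = (u - cx) * z / fx
    yc = (v - cy) * z / fy
    # Apply the (translation-only) camera-to-world transform.
    ox, oy, oz = cam_pos
    return (xc + ox, yc + oy, z + oz)
```

A real system would use a full rotation-plus-translation extrinsic matrix and a calibrated depth source (e.g. a depth camera) rather than this simplified translation.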
In the robot control method of the embodiment of the present invention, a work scene image of the robot is obtained and displayed; in response to a user operation, a target image region is determined from the work scene image; and the robot is controlled to perform a target operation on the object corresponding to the target image region. After the work scene image is obtained, in cases where object recognition accuracy is low or an object cannot be recognized, the user operation assists in controlling the robot to perform an action such as grasping on the object in the target image region, improving the accuracy of the action and the probability of successful execution. This solves the prior-art problem that, when object recognition accuracy is low or an object cannot be recognized, the robot performs an action such as grasping with low accuracy or a low probability of success.
Based on the analysis of the above embodiment, when a robot is applied in a relatively fixed scene, training the image recognition model can achieve accurate recognition of the objects included in the image. In practical applications, however, the robot's work scene is not fixed. When some of the objects in the robot's work scene change, the image recognition model cannot accurately recognize and mark the objects in the work scene image; the recognition results differ across work scenes, and the reduced recognition accuracy lowers the accuracy and success rate of the robot's target actions. For this reason, the following specific embodiments illustrate, for different recognition results of the image recognition model, methods for assisting the control of the robot through user operations to identify the target image region and instruct the robot to perform the target operation.
To this end, an embodiment of the present invention proposes a possible implementation of a robot control method. Fig. 2 is a schematic flowchart of another robot control method provided by an embodiment of the present invention, illustrating the operations performed by the user when the corresponding objects have been recognized in the obtained work scene image. As shown in Fig. 2, the method includes the following steps:
Step 201: obtain the first image recognition information of the work scene image, and mark the recognized objects in the displayed work scene image.
The first image recognition information indicates the objects recognized from the work scene image.
Specifically, before transmitting the work scene image of the robot to the terminal for display, the master control device may recognize the image using a pre-trained image recognition model, e.g. a neural network model. Because the application scene is relatively fixed, the objects included in the scene are also relatively fixed. By training the image recognition model, the model can learn the features of the objects included in the image; the trained image recognition model can then recognize the objects included in the image, classify and mark them, determine the objects that need to be recognized, and obtain the first image recognition information.
The terminal then obtains the work scene image and the first image recognition information sent by the master control device, and marks the obtained work scene image according to the first image recognition information; for example, a rectangular frame is used to mark the specific region of an object in the work scene image, and the marked work scene image is displayed.
Step 202: in response to a user operation, determine that the user operation has selected an object from among the recognized objects, and take the image region of the work scene image in which the selected object is displayed as the target image region.
The user operation comprises at least one of voice input and a touch operation.
Specifically, in the work scene image containing the first image recognition information displayed on the terminal, the user selects an object and marks it, thereby determining the target object beyond what the image recognition model has already determined. The mark information indicates the object selected by the user on which the robot needs to perform the target operation. The mark information of the selected object is sent to the master control device through the terminal; the master control device obtains the mark information, parses out the object selected by the user, parses the object's position, determines the region of the object in the image, and takes the image region in which the selected object is displayed as the target image region for the robot's target operation.
For example, in a work scene in which the robot makes coffee, the robot work scene image obtained by the terminal carries identification information, i.e., the first image recognition information: the objects in the scene image have been recognized, classified and marked, and the marked objects include those needed for making coffee. The user then selects, from among the recognized and marked objects in the work scene image, the object the robot needs for making coffee. As one possible implementation, the user may do so by voice: the terminal collects the user's speech, such as "grab the coffee cup" or "grab the cup on the left", and parses and recognizes the speech to determine that the object selected by the user is the coffee cup. As another possible implementation, the user clicks via a touch operation to select the coffee cup. On top of the objects already marked by the image recognition model in the work scene image, the manual operation further determines the object on which the robot needs to perform an action such as grasping, improving the accuracy and success rate of the robot's target action.
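The voice-selection path can be sketched with deliberately naive keyword matching; a real system would use speech recognition plus language understanding, and the data shapes here are assumptions for this illustration:

```python
# Sketch: resolve a spoken command like "grab the coffee cup" against
# the labelled objects in the displayed scene by simple substring
# matching on the recognized labels.

def select_by_voice(utterance, labelled_objects):
    text = utterance.lower()
    for obj in labelled_objects:
        if obj["label"].lower() in text:
            return obj  # first labelled object mentioned in the command
    return None         # nothing in the scene matches the command
```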
Step 203: control the robot to perform the target operation on the object corresponding to the target image region.
Specifically, as one possible implementation, the image position of the target image region in the work scene image is obtained, along with the image coordinates of each reference point in the target image region; the image coordinates correspond to the coordinates of the real-space position in a world coordinate system. Then, based on the coordinates of the real-space position of each reference point in the target image region and the known coordinates of the robot's real-space position, the relative coordinate relationship between each reference point in the target image region and the robot is determined. From this relative coordinate relationship, the position of the object in the target image region on which the robot needs to perform the target operation is determined, and the robot is controlled to perform the target operation on the object corresponding to the target image region. The reference points may be edge points and/or the center point of the target image region.
As another possible implementation, the identification information of the object displayed in the target image region is obtained. This identification information indicates the object selected by the user in the target image region, and the position of the object in the work scene image has been determined by recognition. The object's position information is converted into the coordinates of the real-space position in the world coordinate system, so that, based on the coordinates of the real-space position of the object displayed in the target image region and the known coordinates of the robot's real-space position, the relative coordinate relationship between the object in the target image region and the robot is determined, and the robot is controlled according to this relative coordinate relationship to perform the target operation on the object corresponding to the target image region.
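The reference-point computation can be sketched as follows, assuming the reference points have already been converted into world coordinates; averaging them to estimate the object position is one simple choice among many, not something the patent prescribes:

```python
# Sketch: estimate the grasp target from the reference points of the
# target image region (e.g. region corners and centre, already mapped
# to world coordinates), expressed relative to the robot's position.

def relative_grasp_target(reference_points_world, robot_pos):
    # Average the reference points to estimate the object's position.
    n = len(reference_points_world)
    ox = sum(p[0] for p in reference_points_world) / n
    oy = sum(p[1] for p in reference_points_world) / n
    oz = sum(p[2] for p in reference_points_world) / n
    rx, ry, rz = robot_pos
    # Relative coordinate of the object with respect to the robot.
    return (ox - rx, oy - ry, oz - rz)
```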
In the robot control method of the embodiment of the present invention, a work scene image of the robot containing the first image recognition information is obtained and displayed; in response to a user operation, a target image region is determined from the work scene image; and the robot is controlled to perform a target operation on the object corresponding to the target image region. After the work scene image is obtained, the objects recognized in the work scene image, as indicated by the first image recognition information, are further confirmed by the user's click, improving the accuracy of the robot's target action and the probability of successful execution.
Analysis based on the above embodiment, when robot application scene fixed compared with, by image recognition model Training, may be implemented to accurately identify the object for including in image, but when robot operative scenario in part When object changes, so that image recognition model can not accurately identify and mark the object in operative scenario image, it is only capable of The image-region comprising object is identified, for this purpose, the embodiment of the present invention proposes a kind of the possible of the control method of robot Implementation, Fig. 3 are the flow diagram of the control method of another robot provided by the embodiment of the present invention, illustrate to obtain The operation of the image-region comprising object and user's execution is only had identified in the operative scenario image taken, as shown in figure 3, Method includes the following steps:
Step 301: obtain the second image recognition information of the operative scenario image, and label, in the displayed operative scenario image, the image regions in which objects are displayed.
Here, the second image recognition information indicates the image regions of the operative scenario image in which objects are displayed.
Specifically, before transmitting the operative scenario image of the robot to the terminal for display, the main control device may identify the image with a pre-trained image recognition model, such as a neural network model. When, based on its previous learning experience, the model cannot accurately recognize the objects contained in the image and can only recognize the image regions of the operative scenario image in which objects are displayed, the recognized information is referred to as second image recognition information.
In turn, the terminal obtains the operative scenario image and the second image recognition information sent by the main control device, labels the operative scenario image according to the second image recognition information, and displays the labeled operative scenario image.
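The distinction between first and second image recognition information can be sketched as a post-processing step on a detector's output: regions whose class cannot be determined with confidence keep their bounding box but lose their label. The `Detection` structure, the `label_threshold` value, and the example detections below are all illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    box: tuple              # (x1, y1, x2, y2) in image pixels
    label: Optional[str]    # None when the object class is unknown
    score: float

def second_image_recognition_info(detections, label_threshold=0.8):
    """Keep every region that plausibly contains an object, but discard the
    class label when the model's confidence is too low, leaving only the
    'image region in which an object is displayed' described in step 301."""
    info = []
    for d in detections:
        if d.score < label_threshold:
            d = Detection(d.box, None, d.score)  # region kept, label dropped
        info.append(d)
    return info

# Hypothetical raw detections from a pretrained model.
raw = [Detection((12, 30, 88, 96), "cup", 0.95),
       Detection((120, 40, 180, 110), "unknown-tool", 0.35)]
info = second_image_recognition_info(raw)
```

The terminal would then draw every box in `info`, attaching a name only where a label survived; unlabeled boxes are the regions the user must disambiguate by selection.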
Step 302: in response to a user operation, determine the target image region that the user operation has selected from among the displayed image regions in which objects are shown.
Specifically, given the operative scenario image carrying the second image recognition information, the user selects, from the image regions in which objects are displayed, the image region whose object is to be the target of the operation, and marks it. The marking information indicates the image region selected by the user and the object within it; that is, the object corresponding to the selected image region is determined to be the object on which the robot executes the target operation. The terminal sends the marking information of the selected image region to the main control device, which parses it to obtain the image region selected by the user and its corresponding object, and takes the image region displaying the selected object as the target image region for the robot's target operation.
Step 303: control the robot to execute the target operation on the object corresponding to the target image region.
Specifically, as one possible implementation, the image position of the target image region in the operative scenario image is obtained, together with the image coordinates of each reference point in the target image region; each image coordinate corresponds to the coordinates of a real-space position in the world coordinate system. Then, according to the real-space coordinates of each reference point in the target image region and the known real-space coordinates of the robot, the relative coordinate relationship between each reference point and the robot is determined, and according to this relative coordinate relationship the robot is controlled to execute the target operation on the object corresponding to the target image region.
In the robot control method of this embodiment of the present invention, an operative scenario image carrying second image recognition information is obtained and displayed for the robot, a target image region is determined from the operative scenario image in response to a user operation, and the robot is controlled to execute the target operation on the object corresponding to the target image region. After the operative scenario image is obtained, the image regions in which objects are displayed, as indicated by the second image recognition information, are labeled, and the user selects among them to determine the target image region. In this way, when the objects in the operative scenario cannot be accurately recognized, the user operation assists in determining the target image region, and the robot is controlled to execute actions such as object grasping on that region, improving the accuracy of the action and the probability of successful execution.
Based on the above embodiments, when the robot's working scenario has changed greatly and is relatively complex, the image recognition module of the main control device may be completely unable to recognize the current operative scenario. For this case, an embodiment of the present invention proposes another possible implementation of the robot control method. Fig. 4 is a flow diagram of yet another robot control method provided by an embodiment of the present invention, illustrating the operations executed by the user when the obtained operative scenario image contains no recognition information at all. As shown in Fig. 4, the method includes the following steps:
Step 401: obtain and display an operative scenario image that contains no image recognition information.
Specifically, when the robot is applied in a new operative scenario, the objects contained in the scenario change substantially. When the main control device attempts to recognize the obtained operative scenario image of the robot, its image recognition module cannot identify the objects contained in the image from its previous learning experience, so no recognition information can be obtained. That is, the operative scenario image that the terminal obtains from the main control device contains no recognition information.
Step 402: in response to a user operation, take the image region selected by the user operation in the operative scenario image as the target image region.
Specifically, when the terminal obtains an operative scenario image containing no recognition information, the user selects, through an operation, the image region where the object is located, and this region is taken as the target image region. As one possible implementation, the user selects a point within the image region in which the object is displayed; the position information of the selected point is transmitted to the main control device, which parses it, delineates the contour of the object displayed at that position as the target image region (or delineates the image region near the point as the target image region), and displays the target image region on the terminal. As another possible implementation, if, after the user has selected a point, the main control device still cannot identify the contour of the touched object from the selected point, it feeds this back to the terminal, and the user then manually delineates a range within the image region in which the object is displayed to determine the target image region.
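One simple way to delineate "the image region near the point" is to grow a region outward from the user's selected pixel, accepting neighbors whose intensity is close to the seed's, and to report the bounding box of what was grown. This is only a stand-in sketch for whatever delineation method the main control device actually uses; the intensity tolerance `tol` and the grid below are illustrative assumptions.

```python
from collections import deque

def delineate_region(image, seed, tol=10):
    """Grow a region from the user-selected point `seed` = (row, col):
    a pixel joins the region when its intensity is within `tol` of the
    seed pixel's intensity.  Returns the bounding box of the grown
    region as (min_row, min_col, max_row, max_col)."""
    h, w = len(image), len(image[0])
    sr, sc = seed
    seed_val = image[sr][sc]
    region = {(sr, sc)}
    queue = deque([(sr, sc)])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    rows = [r for r, _ in region]
    cols = [c for _, c in region]
    # The bounding box is the candidate target image region.
    return (min(rows), min(cols), max(rows), max(cols))

# Toy 6x6 grayscale image with one bright 2x2 "object".
image = [[0] * 6 for _ in range(6)]
for r in (2, 3):
    for c in (2, 3):
        image[r][c] = 200
bbox = delineate_region(image, (2, 2))
```

If region growing fails (for instance, the object is not separable from the background by intensity alone), the system falls back to the second implementation above, where the user draws the boundary manually.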
Step 403: control the robot to execute the target operation on the object corresponding to the target image region.
Specifically, the image position of the target image region in the operative scenario image is obtained, together with the image coordinates of each reference point in the target image region; each image coordinate corresponds to the coordinates of a real-space position in the world coordinate system. Then, according to the real-space coordinates of each reference point in the target image region and the known real-space coordinates of the robot, the relative coordinate relationship between each reference point and the robot is determined, and according to this relative coordinate relationship the robot is controlled to execute the target operation on the object corresponding to the target image region.
In the robot control method of this embodiment of the present invention, an operative scenario image containing no image recognition information is obtained and displayed for the robot, a target image region is determined from the operative scenario image in response to a user operation, and the robot is controlled to execute the target operation on the object corresponding to the target image region. After the operative scenario image is obtained, because the robot's operative scenario is relatively complex and no recognition information can be obtained, the region where the object is located is selected by the user operation and taken as the target image region. In this way, when the objects in the operative scenario cannot be recognized at all, the user operation assists in determining the target image region, and the robot is controlled to execute actions such as object grasping on that region, expanding the applicable scenarios and improving the accuracy of the action and the probability of successful execution.
In order to implement the above embodiments, the present invention further proposes a robot control device.
Fig. 5 is a structural schematic diagram of a robot control device provided by an embodiment of the present invention.
As shown in Fig. 5, the device includes an obtaining module 51, a determining module 52, and a control module 53.
The obtaining module 51 is configured to obtain and display an operative scenario image of the robot.
The determining module 52 is configured to determine a target image region from the operative scenario image in response to a user operation.
The control module 53 is configured to control the robot to execute a target operation on the object corresponding to the target image region.
As one possible implementation, the obtaining module 51 is specifically configured to:
obtain first image recognition information of the operative scenario image, where the first image recognition information indicates the objects identified in the operative scenario image, and label the identified objects in the displayed operative scenario image.
As one possible implementation, the determining module 52 is specifically configured to:
in response to a user operation, determine the object that the user operation has selected from among the identified objects, and take the image region of the operative scenario image in which the selected object is displayed as the target image region.
As another possible implementation, the obtaining module 51 is specifically configured to:
obtain second image recognition information of the operative scenario image, where the second image recognition information indicates the image regions of the operative scenario image in which objects are displayed, and label those image regions in the displayed operative scenario image.
As another possible implementation, the determining module 52 is specifically configured to:
in response to a user operation, determine the target image region selected by the user operation from among the image regions of the operative scenario image in which objects are displayed.
As yet another possible implementation, the determining module 52 is further specifically configured to:
in response to a user operation, take the image region of the operative scenario image selected by the user operation as the target image region.
As one possible implementation, the control module 53 is specifically configured to:
obtain the image position of the target image region in the operative scenario image and/or the identification information of the object displayed in the target image region, and send a control instruction to the robot, controlling the robot to execute the target operation on the object corresponding to the real-space position that corresponds to the image position, and/or on the object corresponding to the identification information.
As one possible implementation, the image position includes the image coordinates of each reference point in the target image region, where each image coordinate corresponds to the coordinates of a real-space position in the world coordinate system.
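A control instruction carrying an image position and/or identification information, as the control module 53 describes, could be represented as a simple serializable message. The field names, the `"grasp"` action, and the JSON encoding below are illustrative assumptions; the patent does not specify a wire format.

```python
import json
from dataclasses import dataclass, asdict
from typing import List, Optional

@dataclass
class ControlInstruction:
    action: str                                  # e.g. "grasp"
    image_position: Optional[List[List[int]]]    # reference-point image coordinates, or None
    object_id: Optional[str]                     # identification info of the displayed object, or None

    def encode(self) -> str:
        # Serialized form sent from the main control device to the robot.
        return json.dumps(asdict(self))

# Either field may be omitted: image position only, identification info
# only, or both, matching the "and/or" in the description above.
inst = ControlInstruction("grasp", [[120, 80], [140, 95]], "cup-01")
payload = inst.encode()
```

On the robot side, the message would be decoded and the reference-point coordinates converted to real-space positions before motion planning.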
As one possible implementation, the user operation includes at least one of a voice input and a touch operation.
It should be noted that the foregoing explanation of the method embodiments also applies to the device of this embodiment and is not repeated here.
In the robot control device of this embodiment of the present invention, the obtaining module obtains and displays an operative scenario image of the robot, the determining module determines a target image region from the operative scenario image in response to a user operation, and the control module controls the robot to execute a target operation on the object corresponding to the target image region. After the operative scenario image is obtained, in situations where object recognition accuracy is low or objects cannot be recognized, the user operation assists in controlling the robot to execute actions such as object grasping on the target image region, improving the accuracy of the action and the probability of successful execution. This solves the prior-art problem that, when a robot is instructed to execute actions such as object grasping according to the result of recognizing the operative scenario image, the limited range of recognizable object types means that objects in the image cannot always be accurately recognized, so the accuracy of actions such as object grasping, or the probability of their successful execution, is low.
In order to implement the above embodiments, the present invention proposes a robot control system. Fig. 6 is a structural schematic diagram of a robot control system provided by an embodiment of the present invention. The system can be applied in the embodiments of the robot control method or the robot control device of the present application.
As shown in Fig. 6, the system includes a terminal 11, an image sensor 12, a main control device 13, and a robot 14.
The terminal 11 is configured to execute the control method of the robot 14 described in the foregoing method embodiments.
The image sensor 12 is configured to acquire the operative scenario image of the robot 14 and send the acquired operative scenario image to the main control device 13, so that the main control device 13 can recognize the operative scenario image.
The main control device 13 is configured to control the robot 14 under the instruction of the terminal 11.
The robot 14 is configured to execute the target operation under the control of the main control device 13.
In the robot control system of this embodiment of the present invention, the image sensor acquires the operative scenario image, and through the interaction between the terminal and the main control device, the user operation assists in controlling the robot to execute actions such as object grasping on the target image region, improving the accuracy of the action and the probability of successful execution.
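One full interaction cycle among the four components (sensor to main control device to terminal to robot) can be sketched with stub classes. The class and method names, the hard-coded region, and the `"grasp"` action are all placeholders for illustration, not names taken from the patent.

```python
class ImageSensor:
    def capture(self):
        return "scene-image"          # placeholder for a real camera frame

class MainControlDevice:
    def recognise(self, image):
        # Recognition result: the image plus the regions found in it.
        return {"image": image, "regions": [(10, 10, 50, 50)]}

    def command_robot(self, robot, region):
        robot.execute("grasp", region)

class Robot:
    def __init__(self):
        self.log = []
    def execute(self, action, region):
        self.log.append((action, region))

# One cycle: sensor -> main control device (recognition) -> terminal
# (user picks a region from the annotated image) -> main control
# device -> robot.
sensor, master, robot = ImageSensor(), MainControlDevice(), Robot()
annotated = master.recognise(sensor.capture())
chosen = annotated["regions"][0]      # stands in for the user's selection
master.command_robot(robot, chosen)
```

In a real deployment the terminal step would involve displaying the annotated image and waiting for the touch or voice input described earlier; here it is collapsed into picking the first region.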
In order to implement the above embodiments, the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the program, the robot control method described in the foregoing method embodiments is implemented.
In order to implement the above embodiments, the present invention further proposes a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the robot control method described in the foregoing method embodiments is implemented.
Fig. 7 shows a block diagram of an example electronic device suitable for implementing embodiments of the present application. The electronic device 12 shown in Fig. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 7, the electronic device 12 takes the form of a general-purpose computing device. The components of the electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 connecting the different system components (including the system memory 28 and the processing unit 16).
The bus 18 represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MAC) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnection (PCI) bus.
The electronic device 12 typically includes a variety of computer-system-readable media. These media may be any available media that the electronic device 12 can access, including volatile and non-volatile media, and removable and non-removable media.
The memory 28 may include computer-system-readable media in the form of volatile memory, such as a random access memory (RAM) 30 and/or a cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/non-volatile computer-system storage media. By way of example only, the storage system 34 may be used to read and write non-removable, non-volatile magnetic media (not shown in Fig. 7, commonly referred to as a "hard drive"). Although not shown in Fig. 7, a disk drive for reading and writing removable non-volatile magnetic disks (such as "floppy disks"), and an optical disc drive for reading and writing removable non-volatile optical discs (such as a Compact Disc Read Only Memory (CD-ROM), a Digital Video Disc Read Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to the bus 18 through one or more data media interfaces. The memory 28 may include at least one program product having a set of (for example, at least one) program modules configured to perform the functions of the embodiments of the present application.
A program/utility 40 having a set of (at least one) program modules 42 may be stored, for example, in the memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination of them, may include an implementation of a network environment. The program modules 42 generally perform the functions and/or methods of the embodiments described in the present application.
The electronic device 12 may also communicate with one or more external devices 14 (such as a keyboard, a pointing device, a display 24, and the like), with one or more devices that enable a user to interact with the electronic device 12, and/or with any device (such as a network card, a modem, and the like) that enables the electronic device 12 to communicate with one or more other computing devices. Such communication may be carried out through an input/output (I/O) interface 22. Moreover, the electronic device 12 may also communicate, through a network adapter 20, with one or more networks, such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet. As shown, the network adapter 20 communicates with the other modules of the electronic device 12 through the bus 18. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example implementing the methods mentioned in the foregoing embodiments.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine and integrate the different embodiments or examples, and the features of the different embodiments or examples, described in this specification, provided they do not contradict one another.
In addition, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality of" means at least two, such as two or three, unless specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing a custom logic function or steps of a process. The scope of the preferred embodiments of the present invention includes additional implementations in which the functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered list of executable instructions for implementing logic functions, may be embodied in any computer-readable medium for use by, or in conjunction with, an instruction execution system, device, or apparatus (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, device, or apparatus). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in conjunction with, an instruction execution system, device, or apparatus. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wirings, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that the various parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques well known in the art may be used: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
Those of ordinary skill in the art can understand that all or part of the steps carried by the methods of the above embodiments may be completed by instructing the relevant hardware through a program, which may be stored in a computer-readable storage medium; when executed, the program includes one of the steps of the method embodiments or a combination thereof.
In addition, each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and is sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.

Claims (10)

1. A robot control method, characterized in that the method comprises the following steps:
obtaining and displaying an operative scenario image of a robot;
in response to a user operation, determining a target image region from the operative scenario image; and
controlling the robot to execute a target operation on an object corresponding to the target image region.
2. The control method according to claim 1, characterized in that the obtaining and displaying of the operative scenario image of the robot comprises:
obtaining first image recognition information of the operative scenario image, the first image recognition information indicating objects identified in the operative scenario image; and
labeling the identified objects in the displayed operative scenario image.
3. The control method according to claim 2, characterized in that the determining, in response to a user operation, of a target image region from the operative scenario image comprises:
in response to the user operation, determining an object that the user operation has selected from among the identified objects; and
taking the image region of the operative scenario image in which the selected object is displayed as the target image region.
4. The control method according to claim 1, characterized in that the obtaining and displaying of the operative scenario image of the robot comprises:
obtaining second image recognition information of the operative scenario image, the second image recognition information indicating image regions of the operative scenario image in which objects are displayed; and
labeling, in the displayed operative scenario image, the image regions in which objects are displayed.
5. The control method according to claim 4, characterized in that the determining, in response to a user operation, of a target image region from the operative scenario image comprises:
in response to the user operation, determining the target image region selected by the user operation from among the image regions of the operative scenario image in which objects are displayed.
6. The control method according to claim 1, characterized in that the determining, in response to a user operation, of a target image region from the operative scenario image comprises:
in response to the user operation, taking the image region of the operative scenario image selected by the user operation as the target image region.
7. A robot control device, characterized by comprising:
an obtaining module, configured to obtain and display an operative scenario image of a robot;
a determining module, configured to determine a target image region from the operative scenario image in response to a user operation; and
a control module, configured to control the robot to execute a target operation on an object corresponding to the target image region.
8. An electronic device, characterized by comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein when the processor executes the program, the robot control method according to any one of claims 1 to 6 is implemented.
9. A robot control system, characterized by comprising a terminal, an image sensor, a main control device, and a robot, wherein the terminal, the image sensor, and the robot are each communicatively connected to the main control device;
the terminal is configured to execute the robot control method according to any one of claims 1 to 6;
the image sensor is configured to acquire an operative scenario image of the robot;
the main control device is configured to control the robot under the instruction of the terminal; and
the robot is configured to execute a target operation under the control of the main control device.
10. A computer-readable storage medium on which a computer program is stored, characterized in that when the program is executed by a processor, the robot control method according to any one of claims 1 to 6 is implemented.
CN201810236923.5A 2018-03-21 2018-03-21 Control method, the device and system of robot Pending CN110293554A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810236923.5A CN110293554A (en) 2018-03-21 2018-03-21 Control method, the device and system of robot


Publications (1)

Publication Number Publication Date
CN110293554A true CN110293554A (en) 2019-10-01

Family

ID=68025448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810236923.5A Pending CN110293554A (en) 2018-03-21 2018-03-21 Control method, the device and system of robot

Country Status (1)

Country Link
CN (1) CN110293554A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612837A (en) * 2020-05-27 2020-09-01 常州节卡智能装备有限公司 Material arrangement method and material arrangement equipment
CN112347890A (en) * 2020-10-30 2021-02-09 武汉理工大学 Insulator robot operation identification method, storage medium and system
CN112975940A (en) * 2019-12-12 2021-06-18 科沃斯商用机器人有限公司 Robot control method, information generation method and robot
WO2021217922A1 (en) * 2020-04-26 2021-11-04 广东弓叶科技有限公司 Human-robot collaboration sorting system and robot grabbing position obtaining method therefor
CN115476366A (en) * 2021-06-15 2022-12-16 北京小米移动软件有限公司 Control method, device, control equipment and storage medium for foot type robot
CN115706854A (en) * 2021-08-06 2023-02-17 北京小米移动软件有限公司 Camera control method and device for foot type robot and foot type robot


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129290A (en) * 2010-01-12 2011-07-20 Sony Corporation Image processing device, object selection method and program
US20140247261A1 (en) * 2010-02-17 2014-09-04 Irobot Corporation Situational Awareness for Teleoperation of a Remote Vehicle
JP2012022457A (en) * 2010-07-13 2012-02-02 Japan Science & Technology Agency Task instruction system
US9399294B1 (en) * 2011-05-06 2016-07-26 Google Inc. Displaying estimated image data in a user interface
CN103717358A (en) * 2011-08-02 2014-04-09 Sony Corporation Control system, display control method, and non-transitory computer readable storage medium
CN103302664A (en) * 2012-03-08 2013-09-18 Sony Corporation Robot apparatus, method for controlling the same, and computer program
CN106891341A (en) * 2016-12-06 2017-06-27 Beijing PowerVision Technology Co Ltd Underwater robot and capture method
CN106506975A (en) * 2016-12-29 2017-03-15 Shenzhen Gionee Communication Equipment Co Ltd Image capture method and terminal
CN106845425A (en) * 2017-01-25 2017-06-13 Maijike Technology (Beijing) Co Ltd Visual tracking method and tracking device
CN107515606A (en) * 2017-07-20 2017-12-26 Beijing DeepGlint Information Technology Co Ltd Robot implementation method, control method, robot, and electronic equipment

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112975940A (en) * 2019-12-12 2021-06-18 Ecovacs Commercial Robot Co Ltd Robot control method, information generation method and robot
WO2021217922A1 (en) * 2020-04-26 2021-11-04 Guangdong Gongye Technology Co Ltd Human-robot collaboration sorting system and robot grabbing position obtaining method therefor
CN111612837A (en) * 2020-05-27 2020-09-01 Changzhou Jieka Intelligent Equipment Co Ltd Material arrangement method and material arrangement equipment
CN111612837B (en) * 2020-05-27 2024-03-08 Changzhou Jieka Intelligent Equipment Co Ltd Material arrangement method and material arrangement equipment
CN112347890A (en) * 2020-10-30 2021-02-09 Wuhan University of Technology Insulator robot operation identification method, storage medium and system
CN115476366A (en) * 2021-06-15 2022-12-16 Beijing Xiaomi Mobile Software Co Ltd Control method, device, control equipment and storage medium for legged robot
CN115476366B (en) * 2021-06-15 2024-01-09 Beijing Xiaomi Mobile Software Co Ltd Control method, device, control equipment and storage medium for legged robot
CN115706854A (en) * 2021-08-06 2023-02-17 Beijing Xiaomi Mobile Software Co Ltd Camera control method and device for legged robot, and legged robot

Similar Documents

Publication Publication Date Title
CN110293554A (en) Control method, device and system for robot
KR102014377B1 (en) Method and apparatus for surgical action recognition based on learning
Kumar et al. A position and rotation invariant framework for sign language recognition (SLR) using Kinect
EP3712805B1 (en) Gesture recognition method, device, electronic device, and storage medium
Balomenos et al. Emotion analysis in man-machine interaction systems
Nickel et al. Visual recognition of pointing gestures for human–robot interaction
JP4481663B2 (en) Motion recognition device, motion recognition method, device control device, and computer program
CN110598576B (en) Sign language interaction method, device and computer medium
Madeo et al. Gesture phase segmentation using support vector machines
CN112470160A (en) Apparatus and method for personalized natural language understanding
CN109101228A (en) Execution method and apparatus for application program
CN108549643A (en) Translation processing method and device
Benoit et al. Audio-visual and multimodal speech systems
CN113835522A (en) Sign language video generation, translation and customer service method, device and readable medium
Russo et al. PARLOMA–a novel human-robot interaction system for deaf-blind remote communication
CN107992602A (en) Search result display method and device
CN108984679A (en) Training method and device for dialogue generation model
Ronchetti et al. Sign languague recognition without frame-sequencing constraints: A proof of concept on the argentinian sign language
CN109711285A (en) Training and testing method and device for recognition model
CN110188303A (en) Page fault recognition method and device
CN109446907A (en) Video chat method, apparatus, device and computer storage medium
CN110619252B (en) Method, device and equipment for identifying form data in picture and storage medium
CN109616101A (en) Acoustic model training method, apparatus, computer equipment and readable storage medium
CN110084230A (en) Vehicle body direction detection method and device based on image
Neidle et al. Computer-based tracking, analysis, and visualization of linguistically significant nonmanual events in American Sign Language (ASL)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191001