CN107515606A - Robot implementation method, control method, robot, and electronic device - Google Patents
Robot implementation method, control method, robot, and electronic device
- Publication number
- CN107515606A, CN201710595912.1A, CN201710595912A
- Authority
- CN
- China
- Prior art keywords
- robot
- scene image
- real scene
- target location
- coordinate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
Abstract
The present application provides a robot implementation method, a control method, a robot, and an electronic device. The robot sends a captured real-scene image together with its synchronization information; the control terminal determines a target position on the real-scene image and sends the target position and the synchronization information back to the robot; the robot determines its own target position from the target position and the synchronization information, generates a planned path from the obstacle information of the current scene and the robot's target position, and moves to the target position along the planned path. With the technical solution provided herein, the user only needs to specify a target point on the real-scene image uploaded by the robot, and the robot navigates autonomously to that point. The robot's motion control thus becomes more convenient, the control frequency is lower, the number of control instructions is reduced, and a friendly interactive experience can be provided even when the remote-control network environment is poor.
Description
Technical field
The present application relates to the field of robot technology, and in particular to a robot implementation method, a control method, a robot, and an electronic device.
Background art
Existing tele-operation technology for robots is commonly used in the motion-control link of products such as telepresence robots and service robots (e.g., security patrol robots and delivery robots).
The motion-control interaction of a telepresence robot works as follows: the user watches the video stream sent back by the robot on the remote-control terminal and adjusts the robot's motion by issuing direction and speed commands through a joystick (physical or virtual), keyboard, mouse, or touch screen. While steering the robot, the user must constantly watch the returned video stream and base the next motion adjustment on it.
Remote control by direction and speed commands in this way requires very low video-stream latency, demands the user's full concentration, and needs a large number of control instructions in complex scenes, for example scenes with people, obstacles, or many turns.
Summary of the invention
The embodiments of the present application propose a robot implementation method, a control method, a robot, and an electronic device, to solve the technical problems of the prior-art direction-and-speed remote control, namely its strict video-latency requirements, the high demands it places on the user, and the large number of control instructions it requires.
In a first aspect, an embodiment of the present application provides a robot implementation method, comprising the following steps:
sending a captured real-scene image and its synchronization information;
receiving a target position determined by a user from the real-scene image, together with the synchronization information;
determining the robot's target position according to the target position and the synchronization information;
generating a planned path according to the obstacle information of the current scene and the robot's target position;
moving to the target position along the planned path.
In a second aspect, an embodiment of the present application provides a robot control method, comprising the following steps:
receiving a real-scene image and synchronization information sent by a robot;
determining a target position selected by a user on the real-scene image;
sending the determined target position and the synchronization information.
In a third aspect, an embodiment of the present application provides a robot, comprising: a camera for capturing real-scene images, a motor, a moving device, a processor, a memory, and one or more modules;
the one or more modules are stored in the memory and configured to be executed by the processor, and include instructions for performing each step of the robot implementation method described above.
In a fourth aspect, an embodiment of the present application provides an electronic device, comprising: a display screen, a processor, a memory, and one or more modules;
the one or more modules are stored in the memory and configured to be executed by the processor, and include instructions for performing each step of the robot control method described above.
The beneficial effects are as follows:
With the technical solution provided by the embodiments of the present application, the user only needs to specify a target point on the real-scene image uploaded by the robot, and the robot navigates autonomously to that point. A single interaction thus covers a larger range of motion, the robot's motion control is simpler and more direct, and the control frequency is lower. The user does not have to watch the video stream in real time or continuously select directions and speeds, which greatly reduces the number of control instructions. A friendly interactive experience can be provided even when the remote-control network environment is poor, because control is not affected by video-stream delay.
Brief description of the drawings
Specific embodiments of the present application are described below with reference to the accompanying drawings, in which:
Fig. 1 is a flow diagram of the robot implementation method in an embodiment of the present application;
Fig. 2 is a flow diagram of the robot control method in an embodiment of the present application;
Fig. 3 is a schematic diagram of the data interaction between the robot and the control terminal in an embodiment of the present application;
Fig. 4 is a schematic diagram of the process by which the robot sends data in an embodiment of the present application;
Fig. 5 is a schematic diagram of the process after the robot receives data in an embodiment of the present application;
Fig. 6 is a schematic diagram of a usage scenario in an embodiment of the present application;
Fig. 7 is a schematic structural diagram of the robot in an embodiment of the present application;
Fig. 8 is a schematic structural diagram of the electronic device in an embodiment of the present application.
Detailed description of the embodiments
To make the technical solution and advantages of the present application clearer, exemplary embodiments of the present application are described in more detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application rather than an exhaustive list of all embodiments, and the features of the embodiments in this description may be combined with one another where no conflict arises.
The inventor noticed during the invention process:
Besides the remote-control interaction of telepresence robots, the prior art also contains remote-control interaction schemes in the service-robot field, for example patents CN106444780A and CN105652870A, in which the robot body carries a radar or vision sensor. Such a robot first requires the user to drive it around the region to build a global map by radar mapping or visual mapping, so that the user can later designate a motion target by selecting map coordinates; autonomous navigation is then achieved by combining localization methods such as radar or vision with path planning and motion planning.
This approach requires a professional to map the whole scene in advance, and the map must reach sufficient accuracy, which places high demands on computing performance; moreover, the scene must not contain too many dynamic objects (including people) while the map is built. After the global map is obtained, the user must compare the outline of the two-dimensional map with his own prior understanding of the whole region in order to select the target point. This mode of operation is not intuitive, and it is very difficult to use for users unfamiliar with the scene or in scenes with highly repetitive layouts.
In view of these shortcomings of the prior art, the embodiments of the present application propose a robot implementation method, a control method, a robot, and an electronic device, which are described below.
A mobile robot is a robot with the ability to move in two-dimensional space (e.g., on the ground) or in three-dimensional space, such as a sweeping robot, a telepresence robot, an unmanned aerial vehicle, or an unmanned vehicle.
Mobile-robot remote control is a human-machine interaction mode in which a remote-control terminal (a PC, a mobile device, etc.) controls the motion of a mobile robot equipped with a communication module.
Embodiment One
Fig. 1 is a flow diagram of the robot implementation method in an embodiment of the present application. As shown, the robot implementation method may comprise the following steps:
Step 101: sending a captured real-scene image and its synchronization information;
Step 102: receiving a target position determined by a user from the real-scene image, together with the synchronization information;
Step 103: determining the robot's target position according to the target position and the synchronization information;
Step 104: generating a planned path according to the obstacle information of the current scene and the robot's target position;
Step 105: moving to the target position along the planned path.
The synchronization information may include odometer information, and the odometer information may include information such as the number of wheel turns, the distance traveled, and the angle turned by the robot, measured from the position at which the robot was powered on.
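As an illustrative sketch, odometer information of this kind may be accumulated as follows for a differential-drive robot (the function and parameter names here are assumptions for illustration, not a prescribed implementation):

```python
import math

def update_odometry(pose, d_left, d_right, wheel_base):
    """Accumulate a differential-drive odometry pose from wheel travel.

    pose:            current (x, y, theta) since power-on.
    d_left, d_right: wheel travel (metres) since the last update,
                     e.g. derived from wheel turns.
    wheel_base:      distance between the two wheels.
    Returns the updated (x, y, theta).
    """
    x, y, theta = pose
    d = (d_left + d_right) / 2.0                # distance traveled
    d_theta = (d_right - d_left) / wheel_base   # angle turned
    # Midpoint approximation of the arc traveled during the update.
    x += d * math.cos(theta + d_theta / 2.0)
    y += d * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)
```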
In a specific implementation, the robot may first capture a real-scene image 001 of the current scene through a wide-angle camera; the real-scene image may be a two-dimensional image. The robot then sends the real-scene image 001 to the control terminal.
After the control terminal receives the real-scene image 001, the user may determine, by clicking or a similar operation on the real-scene image 001, the target position to which he wants the robot to go, and the control terminal sends the target position to the robot.
After receiving the target position, the robot may determine its own target position according to the synchronization information corresponding to the real-scene image 001 and the target position.
The robot then determines the obstacle information of the scene at the current moment, plans a moving path by combining the obstacle information with the robot's target position, and operates components such as the motor and wheels so that the robot moves to the target position.
With the robot implementation method provided by the embodiments of the present application, the robot can navigate autonomously to the target point the user selects on the image. A single interaction covers a larger range of motion, the robot's motion control is simpler and more direct, and the control frequency is lower; the user does not have to watch the video stream in real time or continuously select directions and speeds, which greatly reduces the number of control instructions. A friendly interactive experience can be provided even when the remote-control network environment is poor, because control is not affected by video-stream delay.
Because many images are exchanged in actual use, and to avoid a mismatch between the image on which the user selected the target point and the synchronization information, in the embodiments of the present application the robot may send each real-scene image together with its corresponding synchronization information. After the control terminal determines the target point on a real-scene image, it returns the target position together with that synchronization information to the robot. The robot thus knows on which frame the user's selection was made, and can combine it with the synchronization information of the current moment to determine the target position at the current moment.
The synchronization information may include odometer information, a frame number, a real-scene-image identifier, or similar information, as long as it identifies the image on which the user's target position was selected.
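One way to implement this bookkeeping on the robot side, sketched here under the assumption that frames are identified by a frame number (the class and method names are illustrative):

```python
from collections import deque

class SyncBuffer:
    """Pair each sent frame with the odometry pose at its capture moment,
    so a target returned for an older frame is interpreted against the
    correct synchronization information."""

    def __init__(self, maxlen=100):
        self._frames = deque(maxlen=maxlen)

    def record(self, frame_id, pose):
        # Called when a real-scene image is sent to the control terminal.
        self._frames.append((frame_id, pose))

    def pose_of(self, frame_id):
        # Called when a target position comes back; returns the pose at
        # the capture moment, or None if the frame is no longer buffered.
        for fid, pose in reversed(self._frames):
            if fid == frame_id:
                return pose
        return None
```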
In an implementation, before sending the real-scene image, the method may further comprise:
rendering the obstacle information of the moment at which the real-scene image was captured onto the real-scene image.
In a specific implementation, the robot may capture the real-scene image of the current scene through a wide-angle camera, obtain the obstacle information of the current scene through perception components such as ultrasonic sensors, radar, infrared sensors, or a depth camera, render the obstacle information onto the real-scene image, and finally send the rendered image to the control terminal, giving the user a better experience.
In an implementation, rendering the obstacle information of the capture moment onto the real-scene image may comprise:
obtaining the obstacle coordinates in the odometer coordinate system;
determining the obstacle coordinates in the camera coordinate system according to the predetermined relation between the camera coordinate system and the robot coordinate system and the synchronization information corresponding to the capture moment;
projecting the obstacle coordinates in the camera coordinate system onto the real-scene image.
In a specific implementation, the obstacle coordinates in the odometer coordinate system may first be obtained through perception components such as ultrasonic sensors, radar, infrared sensors, or a depth camera; the obstacle coordinates in the camera coordinate system are then determined from the predetermined relation between the camera coordinate system and the robot coordinate system and the synchronization information corresponding to the real-scene image; finally, the obstacle coordinates in the camera coordinate system are projected onto the real-scene image.
The coordinate systems involved in the present application are briefly described below.
The robot coordinate system may be a three-dimensional coordinate system with the center of the robot's two wheels as the origin, the forward direction as the X-axis, the left-hand direction as the Y-axis, and the upward direction as the Z-axis.
The odometer coordinate system may be the three-dimensional coordinate system determined by the robot coordinate system at the moment the odometer is switched on or reset: the robot's position (the origin of the robot coordinate system) at that moment is the origin of the odometer coordinate system, and the robot's coordinate axes at that moment are the axes of the odometer coordinate system; the coordinate system does not change until the odometer is next switched off or reset.
The camera coordinate system may be a three-dimensional coordinate system with the optical center of the camera as the origin, the forward direction perpendicular to the imaging plane as the Z-axis, the vertically downward direction as the Y-axis, and the horizontal rightward direction as the X-axis.
The image coordinate system may be a two-dimensional coordinate system with the upper-left corner of the image as the origin, the vertical direction as the Y-axis, and the horizontal direction as the X-axis.
To reduce the amount of computation on the robot side, the present application may also be implemented as follows.
In an implementation, the obstacle information of the capture moment may be sent together with the real-scene image.
That is, the embodiments of the present application may either render the obstacle information onto the real-scene image on the robot side, or send the obstacle information to the control terminal together with the real-scene image and let the control terminal render it onto the image.
In an implementation, determining the robot's target position according to the target position and the synchronization information may comprise:
solving the projection equation s·p = K·T_b^c·P to obtain the coordinates (x, y), in the robot coordinate system, of the target position determined by the user;
where p = (u, v, 1)^T, (u, v) is the target position determined by the user, K is the camera intrinsic matrix, T_b^c is the transformation matrix between the robot coordinate system b and the camera coordinate system c, P = (x, y, h, 1)^T, and s is the depth;
converting the coordinates (x, y) in the robot coordinate system into the coordinates of the user-determined target position in the odometer coordinate system, according to the synchronization information of the moment the real-scene image was sent and the synchronization information of the current moment.
According to visual geometry, each pixel in an image corresponds to a ray emitted from the optical center into the physical world. In the embodiments of the present application, the equation of this ray in the camera coordinate system can be obtained from the 2D pixel coordinates in the real-scene image and the camera intrinsics. Assuming that the target position the user specifies in the interaction lies at a fixed height relative to the robot chassis (e.g., on the ground), the 3D coordinates corresponding to the target point can be solved as the intersection of the ray with that plane.
In a specific implementation, the target position the user selects at the control terminal, expressed in homogeneous form, may be p = (u, v, 1)^T, where u is the x-axis coordinate and v the y-axis coordinate of the selected target position. The pose-transformation matrix between the robot coordinate system and the camera coordinate system, denoted T_b^c, may be obtained by calibration in advance, where b refers to the robot coordinate system and c to the camera coordinate system; computing the pose transformation between two coordinate systems can be realized with the prior art and is not described here.
Let the homogeneous coordinates of the target point in the robot coordinate system be P = (x, y, h, 1)^T; because the height of the target specified in the user interaction is assumed constant, h is known. The camera intrinsic matrix K may be obtained by calibration in advance. Visual geometry then gives the projection equation s·p = K·T_b^c·P. Solving this equation yields x, y, and the depth s, and thus the coordinates (x, y) of the user-selected target in the robot coordinate system; the synchronization information (odometer information) then gives the coordinates of the target in the odometer coordinate system.
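The following sketch solves the projection equation for (x, y) and the depth s, then applies the odometer pose of the capture moment, under the assumptions stated above (known target height h, pre-calibrated K and T_b^c); the names are illustrative:

```python
import math
import numpy as np

def pixel_to_robot_xy(u, v, K, T_b_c, h):
    """Solve s*(u, v, 1)^T = K * T_b^c * (x, y, h, 1)^T for (x, y, s).

    K:     3x3 camera intrinsic matrix.
    T_b_c: transform taking robot-frame points into the camera frame
           (3x4, or 4x4 of which the first three rows are used).
    h:     known height of the target relative to the robot chassis.
    """
    M = K @ np.asarray(T_b_c)[:3, :]           # 3x4 projection matrix
    c = M[:, 2] * h + M[:, 3]                  # constant terms from known h
    # Eliminate s = M[2,0]*x + M[2,1]*y + c[2] from the first two rows.
    A = np.array([[M[0, 0] - u * M[2, 0], M[0, 1] - u * M[2, 1]],
                  [M[1, 0] - v * M[2, 0], M[1, 1] - v * M[2, 1]]])
    b = np.array([u * c[2] - c[0], v * c[2] - c[1]])
    x, y = np.linalg.solve(A, b)
    s = M[2, 0] * x + M[2, 1] * y + c[2]       # recovered depth
    return x, y, s

def robot_to_odom(x, y, pose_at_capture):
    """Transform (x, y) from the capture-moment robot frame into the
    odometer frame, given the odometry pose (px, py, yaw) at capture."""
    px, py, yaw = pose_at_capture
    return (px + x * math.cos(yaw) - y * math.sin(yaw),
            py + x * math.sin(yaw) + y * math.cos(yaw))
```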
In the embodiments of the present application, a 2D real-scene image may be sent to the control terminal; after the user selects a target point on the image at the control terminal, the robot transforms the user's 2D coordinates into 3D coordinates through coordinate-system conversion. Compared with sending a depth image to the control terminal, this conversion scheme not only reduces the amount of data transmitted back and forth, but tests also show that it is more accurate.
In an implementation, generating a planned path according to the obstacle information of the current scene and the robot's target position may comprise:
converting the perceived obstacle coordinates into coordinates in the robot coordinate system, and generating an obstacle map of the region around the robot from the obstacle coordinates and the odometer information;
generating the planned path according to the robot's current coordinates and the coordinates of the robot's target position in the obstacle map.
In a specific implementation, the obstacles the robot perceives through components such as ultrasonic sensors, radar, infrared sensors, or a depth camera are typically short-range obstacles within a local area; the obstacle coordinates are combined with the odometer information to generate an obstacle map of the local region around the robot, and the planned path is generated from the robot's current coordinates, the coordinates of the target position in the obstacle map, and the obstacle map itself.
The obstacle map may change in real time, being updated continuously according to the actual scene as the robot moves.
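A minimal sketch of such a locally updated obstacle map, assuming an occupancy-grid representation (the grid geometry and names are illustrative):

```python
def update_obstacle_map(grid, origin, resolution, obstacles_odom):
    """Mark perceived obstacle points in a local occupancy grid.

    grid:           2D list of booleans, True = obstacle.
    origin:         (x, y) of the grid's cell (0, 0) in the odometer frame.
    resolution:     cell size in metres.
    obstacles_odom: iterable of (x, y, ...) obstacle points.
    """
    for x, y, *_ in obstacles_odom:
        col = int((x - origin[0]) / resolution)
        row = int((y - origin[1]) / resolution)
        if 0 <= row < len(grid) and 0 <= col < len(grid[0]):
            grid[row][col] = True              # re-run every perception cycle
```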
The planned path may be generated with a prior-art path-planning algorithm such as Dijkstra, A*, or RRT, as long as a reasonable moving path is computed from the target coordinates, the robot's current coordinates, and the obstacle information, so that the target position is reached while avoiding obstacles; a minimal sketch follows.
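As an illustration of the named algorithms, here is a minimal A* search on a 4-connected occupancy grid (one admissible choice among several; the representation matches the grid sketch above):

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 2D occupancy grid (True = obstacle).

    start, goal: (row, col) cells. Returns the cell path from start to
    goal, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                        # reconstruct the path
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt], came_from[nxt] = ng, cur
                    # Manhattan heuristic, admissible on a 4-connected grid.
                    f = ng + abs(goal[0] - nr) + abs(goal[1] - nc)
                    heapq.heappush(open_set, (f, nxt))
    return None
```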
To cope with obstacles changing within the scene and to better fit the actual motion, the present application may also be implemented as follows.
In an implementation, moving to the target position along the planned path may comprise:
generating a movement instruction according to the planned path, the obstacle information at each moment, and the robot's own kinematic parameters;
controlling the robot to move according to the movement instruction until it reaches the target position.
In a specific implementation, motion planning may be performed after the path has been planned: a reasonable movement instruction is computed from the planned path, the real-time obstacle map, the robot's own kinematic parameters, and similar information, and the movement instruction is sent to the motion platform (the components of the robot responsible for movement).
Motion planning may use prior-art algorithms such as path following, DWA, or RRT; the movement instruction may include information such as angular velocity and linear velocity. A minimal path-following sketch appears after the next paragraph.
Performing motion planning after path planning enables the robot to perceive and avoid obstacles dynamically and to adjust its speed autonomously, so that its velocity changes more smoothly, giving both the user at the remote-control terminal and the people around the robot a more friendly experience.
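A minimal path-following step in the spirit of the algorithms named above (gains, limits, and names here are illustrative assumptions, not the patent's parameters):

```python
import math

def follow_step(pose, path, v_max=1.2, w_max=2.0, lookahead=0.3):
    """One control step of a simple lookahead path follower.

    pose: (x, y, theta) of the robot in the odometer frame.
    path: list of (x, y) waypoints of the planned path.
    Returns a movement instruction (linear velocity, angular velocity).
    """
    x, y, theta = pose
    target = path[-1]
    for wx, wy in path:                        # first waypoint far enough ahead
        if math.hypot(wx - x, wy - y) >= lookahead:
            target = (wx, wy)
            break
    err = math.atan2(target[1] - y, target[0] - x) - theta
    err = math.atan2(math.sin(err), math.cos(err))   # wrap to (-pi, pi]
    w = max(-w_max, min(w_max, 2.0 * err))
    # Slow down while turning sharply so the velocity changes stay smooth.
    v = v_max * max(0.0, 1.0 - abs(err) / math.pi)
    return v, w
```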
In an implementation, the method may further comprise:
receiving a robot orientation determined by the user, together with the target position the user determines from the real-scene image;
converting the user-determined robot orientation into an orientation in the odometer coordinate system;
controlling the robot to turn from its current orientation to the converted user-determined orientation.
In a specific implementation, the user at the control terminal may select the orientation the robot should face through operations such as a left/right mouse click or a short/long press on a touch screen. After the robot receives the target position and the orientation determined by the user, it can, on the one hand, compute and plan a moving path from the target position, and on the other hand change its own orientation according to the user-determined orientation.
By configuring the motion target through visual geometry, the embodiments of the present application can also provide an accurate and fast orientation-adjustment interaction, solving the prior-art problem that map-based interactive motion control can only adjust the robot's position but cannot accurately adjust its orientation. Moreover, the present application does not depend on a global coordinate system and requires no prior mapping, which makes it more convenient to use and avoids the uncontrolled motion caused by global-localization errors.
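A minimal sketch of the resulting turn-in-place control, assuming the selected direction has already been converted into a point in the odometer frame (e.g., by the ray-plane intersection sketched earlier); the gain and limit are illustrative:

```python
import math

def heading_command(target_odom, pose, w_max=2.0, gain=1.5):
    """Angular-velocity command turning the robot toward a target direction.

    target_odom: (x, y) point defining the desired orientation, in the
                 odometer frame.
    pose:        current (x, y, yaw) of the robot in the odometer frame.
    """
    dx, dy = target_odom[0] - pose[0], target_odom[1] - pose[1]
    err = math.atan2(dy, dx) - pose[2]
    err = math.atan2(math.sin(err), math.cos(err))   # wrap to (-pi, pi]
    return max(-w_max, min(w_max, gain * err))       # simple P controller
```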
Embodiment Two
An embodiment of the present application further provides a robot control method, described below from the perspective of the control terminal.
Fig. 2 is a flow diagram of the robot control method in an embodiment of the present application. As shown, the robot control method may comprise the following steps:
Step 201: receiving a real-scene image and synchronization information sent by a robot;
Step 202: determining a target position selected by a user on the real-scene image;
Step 203: sending the determined target position and the synchronization information.
In the embodiments of the present application, the control terminal only needs to receive the real-scene image sent by the robot; selecting a target position on the image by clicking or a similar operation is enough to remotely move the robot to the specified location. The user no longer has to keep adjusting the robot's motion against the video stream to converge on the target, so operation is more convenient and faster, is not affected by network transmission, and a single interaction accomplishes the remote-control goal.
In an implementation, the method may further comprise:
receiving the obstacle information of the capture moment together with the real-scene image sent by the robot;
rendering the obstacle information of the capture moment onto the real-scene image.
In the embodiments of the present application, the control terminal may render the obstacle information of the capture moment onto the real-scene image according to the real-scene image and the corresponding obstacle information sent by the robot, improving the user's perception while reducing the robot's computational load.
In an implementation, rendering the obstacle information of the capture moment onto the real-scene image may comprise:
obtaining the obstacle coordinates in the odometer coordinate system;
determining the obstacle coordinates in the camera coordinate system according to the predetermined relation between the camera coordinate system and the robot coordinate system and the odometer information corresponding to the capture moment;
projecting the obstacle coordinates in the camera coordinate system onto the real-scene image.
In an implementation, the method may further comprise:
determining a robot orientation selected by the user on the real-scene image;
sending the determined robot orientation to the robot.
In the embodiments of the present application, the user may determine the robot orientation by right-clicking or long-pressing a position on the real-scene image at the control terminal; after the orientation is sent to the robot, the robot computes the corresponding direction from the user-determined orientation and turns to it, so the robot's orientation can be adjusted accurately and quickly.
Having described the environments in which they are used, the methods above can be implemented on the robot side and on the control side respectively. In the description, the implementations on the robot and on the control terminal are illustrated separately, but this does not mean that the two must be implemented together; in fact, when the robot and the control terminal are implemented separately, each solves the problems on its own side, and combining the two simply yields a better technical effect.
The present application combines obstacle-perception technology, path-planning technology, and motion-planning technology to realize a "what you see is what you get, what you select is where you go" remote motion-control interaction: the user selects a point on the real-scene image captured by the robot's camera, the robot automatically computes the physical spatial location corresponding to that point on the image and, taking that point as the target, moves autonomously, accurately, and stably to that physical location, planning its path and avoiding obstacles autonomously along the way. At the same time, at the remote-control terminal, information such as the currently planned path, obstacles, and the predicted motion trajectory is rendered onto the real-scene image using augmented reality, giving the user better immersion and a better interactive experience.
To facilitate implementation of the present application, an example is given below.
Fig. 3 is a schematic diagram of the data interaction between the robot and the control terminal in an embodiment of the present application. As shown, a cloud server may act as the intermediary that relays data, realizing the data interaction between the control terminal and the robot side.
Fig. 4 is a schematic diagram of the process by which the robot sends data in an embodiment of the present application. The robot may first obtain the current planned path, the current speed, the obstacle information of the region, the wide-angle-camera real-scene image, the odometer information, and so on, render them after synchronization, and, once a connection with the control terminal is confirmed, send the data to the control terminal or the cloud.
Fig. 5 is a schematic diagram of the process after the robot receives data in an embodiment of the present application. The robot parses the interaction instruction issued by the cloud or the control terminal, performs path planning in combination with the obstacle map, then performs motion planning and moves toward the target position; if the target position has not been reached, it returns to motion planning and continues to generate movement instructions until the target position is finally reached.
In the embodiments of the present application, the robot may include components such as a housing, an internal processor, a motor, rollers, a camera, an odometer, and various sensors; the internal processor may in turn include a first sending module, a first receiving module, a synchronization module, a planning module, a mobile module, and so on, where the first sending module and the first receiving module can communicate with the cloud server, and through the cloud server with the control terminal (a mobile phone, a computer, etc.).
Embodiment Three
Assume that the robot, while traveling along the previously planned path, captures one real-scene image every 10 s.
Fig. 6 is a schematic diagram of a usage scenario in an embodiment of the present application. As shown, the robot in three-dimensional real space interacts with the control terminal through the cloud, and the user at the control end views the displayed two-dimensional image, specifies target points, and so on.
The robot captures a real-scene image (or video) of the current scene with its camera; assume this is the first frame, captured at time 00:50, and that the image is an RGB two-dimensional image. The robot obtains its current attitude information from the odometer, for example position (0 m, 0 m, 0 m) and orientation 0 rad, and perceives obstacle information with its obstacle-sensing modules (e.g., ultrasonic sensors, radar, infrared sensors, or a depth camera), including the positions of obstacles in the odometer coordinate system at time 00:50, for example (1.0 m, 0 m, 0 m), (1.0 m, 0.1 m, 0 m), and so on. The real-scene image, the odometer information, and the obstacle information are sent to the cloud server through the first sending module of the processor.
The cloud server sends the real-scene image, the odometer information, and the obstacle information to the control terminal (assumed here to be a computer).
After the control-terminal computer receives this information, it can convert the obstacle position coordinates (e.g., (1.0 m, 0 m, 0 m), (1.0 m, 0.1 m, 0 m), ...) into obstacle coordinates in the camera coordinate system (e.g., (0.0 m, -1.14 m, 1.37 m), (-0.1 m, -1.14 m, 1.37 m), ...) and project them onto the real-scene image. The user (e.g., an administrator or an individual user) can then view the real-scene image with the rendered obstacle information on the computer display and understand the robot's current scene. The user may select a position in the real-scene image as the target point with a single left click, or right-click in some direction to set a target orientation. After the user determines the target point, the computer sends the user-determined target point and the odometer information to the cloud server.
The cloud server sends the user-determined target point (e.g., the target point selected by the user at the center (320 pixel, 240 pixel) of a 640x480 image) and the odometer information to the robot.
While the above cloud communication is in progress, the robot is still walking and capturing images, continuously perceiving obstacle information and updating the obstacle map in real time. Now assume the robot has captured the real-scene image of time 02:00 and that its current (02:00) attitude in the odometer coordinate system is position (2.0 m, 1.0 m, 0.0 m) with orientation 0 rad.
After receiving the user-determined target point and the odometer information of time 00:50, the robot computes, from the odometer information of time 02:00, the target position of the user-determined target point in the odometer coordinate system (e.g., (4.0 m, 2.0 m, 0 m)). The robot's processor can then generate a planned path from the robot's current coordinates in the odometer coordinate system (2.0 m, 1.0 m, 0.0 m), the robot's target position (4.0 m, 2.0 m, 0 m), and the obstacle map. The planned path is a sequence of poses in the odometer coordinate system, for example:
(position 2.0 m, 1.0 m, 0.0 m, orientation 0 rad), (position 2.1 m, 1.05 m, 0.0 m, orientation 0.1 rad), (position 2.2 m, 1.1 m, 0.0 m, orientation 0.15 rad), (position 2.3 m, 1.15 m, 0.0 m, orientation 0.2 rad) ... (position 4.0 m, 2.0 m, 0 m, orientation 0.46 rad).
After the planned path is generated, the robot can also generate movement instructions in real time from the planned path, the obstacle information at each moment, and its own kinematic parameters, and control itself to move to the specified target position. For example:
the planned path is (position 2.0 m, 1.0 m, 0.0 m, orientation 0 rad), (position 2.1 m, 1.05 m, 0.0 m, orientation 0.1 rad), (position 2.2 m, 1.1 m, 0.0 m, orientation 0.15 rad), (position 2.3 m, 1.15 m, 0.0 m, orientation 0.2 rad) ... (position 4.0 m, 2.0 m, 0 m, orientation 0.46 rad);
the robot's own kinematic parameters may be: maximum linear velocity 1.2 m/s, maximum angular velocity 2.0 rad/s, maximum linear acceleration 2.5 m/s², maximum angular acceleration 3.2 rad/s².
At time 02:00 there is no obstacle on the planned path, so the generated movement instruction is: linear velocity 0.5 m/s, angular velocity 1.0 rad/s; the predicted trajectory is the remaining path excluding the first position point.
At time 02:01 there is no obstacle before the second position point on the predicted trajectory, so the generated movement instruction is: linear velocity 1.0 m/s, angular velocity 1.0 rad/s, moving to the second position point of the planned path; the predicted trajectory may be shown as the remaining path excluding the first and second position points.
At time 02:02 there is an obstacle before the third position point on the predicted trajectory (assume a person is standing between the second and third position points at that moment); the moving direction can then be changed by adjusting the angular velocity to avoid the obstacle, for example by generating the movement instruction linear velocity 0.5 m/s, angular velocity 1.5 rad/s, moving to the third position point of the planned path.
Proceeding in this way, and so on, the robot eventually moves to the target position.
In a specific implementation, when at time 03:00 there is an obstacle before the third position point on the predicted trajectory (assume a person is staying between the second and third position points at that moment), one possible situation is that the obstacle stops there only briefly and may be gone before the robot reaches it; the robot can therefore generate a movement instruction to slow down at time 03:00 and keep monitoring the obstacle:
if the obstacle is removed and no longer present before the robot reaches its position, the robot can generate a movement instruction to restore its moving speed and move to that position point;
if the obstacle remains there the whole time, the robot can generate a movement instruction to change the direction of motion and go around the obstacle.
Further, the embodiments of the present application may also send the real-time real-scene image, the obstacle information, the current planned path, the predicted trajectory, and so on to the cloud server, which forwards them to the control terminal; the control terminal renders the obstacle information, the current planned path, and the predicted trajectory onto the real-scene image for the user to view.
Embodiment Four
The robot captures a real-scene image (or video) of the current scene with its camera; assume this is the first frame, captured at time 10:50, and that the image is an RGB two-dimensional image. The robot obtains its current odometer information from the odometer and perceives obstacle information with its obstacle-sensing modules (e.g., ultrasonic sensors, radar, infrared sensors, or a depth camera), including the positions and depth information of obstacles at time 10:50. The current planned path, the predicted trajectory, the obstacle information, and so on are rendered onto the real-scene image, and the rendered image and the odometer information are then sent to the cloud server by the processor.
The cloud server sends the real-scene image and the odometer information to the control terminal (assumed here to be a touch-screen mobile phone/pad or the like).
The user can view the real-scene image with the rendered obstacle information, planned path, and predicted trajectory on the phone/pad display and understand the robot's current scene. The user may short-press a position in the real-scene image as the next target point, or long-press in a direction as the target orientation. After the user determines the target point and the target orientation, the phone/pad sends the user-determined target point, the target orientation, and the odometer information to the cloud server.
The cloud server sends the user-determined target point, the target orientation, and the odometer information to the robot.
After receiving the user-determined target point, the target orientation, and the odometer information of time 10:50, the robot can convert the user-determined target orientation into the orientation in the robot's view at the current moment, according to the target orientation, the odometer information of time 10:50, and the odometer information of the current moment, and turn to that orientation. It can also compute, from the odometer information of the current moment, the target position of the user-determined target point in the robot's view at the current moment, and the robot's processor generates a planned path from the robot's current position coordinates, the robot's target position, and the obstacle map.
After the planned path is generated, the robot can also generate movement instructions in real time from the planned path, the obstacle information at each moment, and its own kinematic parameters, and control itself to move to the specified target position.
Based on the same inventive concept, the embodiments of the present application further provide a robot and an electronic device. Because the principle by which these devices solve the problem is similar to that of the robot implementation method and the control method, their implementation can refer to the implementation of the methods, and repeated parts are not described again.
Embodiment Five
Fig. 7 is a schematic structural diagram of the robot in an embodiment of the present application. As shown, the robot may include: a camera 701 for capturing real-scene images, a motor 702, a moving device 703, a processor 704, a memory 705, and one or more modules;
the one or more modules are stored in the memory and configured to be executed by the processor, and include instructions for performing each step of the robot implementation method described above.
In a specific implementation, the one or more modules may be:
a first sending module, for sending a real-scene image and its synchronization information;
a first receiving module, for receiving the target position determined by the user from the real-scene image, together with the synchronization information;
a synchronization module, for determining the robot's target position according to the target position and the synchronization information;
a planning module, for generating a planned path according to the obstacle information of the current scene and the robot's target position;
a mobile module, for controlling the motor to rotate according to the planned path, the moving device moving the robot to the target position under the drive of the motor.
In an implementation, the robot may further include:
a first rendering module, for rendering the obstacle information of the capture moment onto the real-scene image before the real-scene image is sent.
In an implementation, the first rendering module may include:
a first acquisition unit, for obtaining the obstacle coordinates in the odometer coordinate system;
a first converting unit, for determining the obstacle coordinates in the camera coordinate system according to the predetermined relation between the camera coordinate system and the robot coordinate system and the synchronization information corresponding to the capture moment;
a first projecting unit, for projecting the obstacle coordinates in the camera coordinate system onto the real-scene image.
In an implementation, the first sending module may be used to send the real-scene image and the obstacle information of the capture moment.
In an implementation, the synchronization module may include:
a first coordinate unit, for solving the projection equation s·p = K·T_b^c·P to obtain the coordinates (x, y), in the robot coordinate system, of the target position determined by the user;
where p = (u, v, 1)^T, (u, v) is the target position determined by the user, K is the camera intrinsic matrix, T_b^c is the transformation matrix between the robot coordinate system b and the camera coordinate system c, and P = (x, y, h, 1)^T;
a second coordinate unit, for converting the coordinates (x, y) in the robot coordinate system into the coordinates of the user-determined target position in the odometer coordinate system, according to the synchronization information of the moment the real-scene image was sent and the synchronization information of the current moment.
In an implementation, the planning module may include:
a map generation unit, for converting the perceived obstacle coordinates into coordinates in the robot coordinate system and generating an obstacle map of the region around the robot from the obstacle coordinates and the odometer information;
a planning unit, for generating the planned path according to the robot's current coordinates and the coordinates of the target position in the obstacle map.
In an implementation, the mobile module may include:
an instruction generation unit, for generating a movement instruction according to the planned path, the obstacle information at each moment, and the robot's own kinematic parameters;
a control unit, for controlling the motor to rotate according to the movement instruction, the motor driving the moving device to move until the target position is reached.
In an implementation, the first receiving module may be further used to receive the robot orientation determined by the user; the robot may further include:
an orientation control module, for converting the user-determined robot orientation into an orientation in the odometer coordinate system, and controlling the robot to turn from its current orientation to the converted user-determined orientation.
Embodiment Six
Fig. 8 is a schematic structural diagram of the electronic device in an embodiment of the present application. As shown, the electronic device may include: a display screen 801, a processor 802, a memory 803, and one or more modules;
the one or more modules are stored in the memory and configured to be executed by the processor, and include instructions for performing each step of the robot control method described above.
In a specific implementation, the one or more modules may be:
a second receiving module, for receiving the real-scene image and its synchronization information sent by the robot;
a target position determining module, for determining the target position selected by the user on the real-scene image;
a second sending module, for sending the determined target position and the synchronization information.
In an implementation, the second receiving module is used to receive the real-scene image sent by the robot and the obstacle information of the capture moment; the electronic device may further include:
a second rendering module, for rendering the obstacle information of the capture moment onto the real-scene image.
In an implementation, the second rendering module may include:
a second acquisition unit, for obtaining the obstacle coordinates in the odometer coordinate system;
a second converting unit, for determining the obstacle coordinates in the camera coordinate system according to the predetermined relation between the camera coordinate system and the robot coordinate system and the odometer information corresponding to the capture moment;
a second projecting unit, for projecting the obstacle coordinates in the camera coordinate system onto the real-scene image.
In an implementation, the electronic device may further include:
an orientation determining module, for determining the robot orientation selected by the user on the real-scene image;
the second sending module may be further used to send the determined robot orientation.
In the above embodiments, existing functional component modules may be used for implementation. For example, the planning module may adopt an existing path-planning component; at the very least, the path-planning servers used in existing robot technology possess the functional components to realize it. The receiving module and the sending module are components possessed by any device with signal-transmission functions. Meanwhile, operations such as the computation of the target position in the robot's view performed by the synchronization module are existing technical means that those skilled in the art can realize through corresponding design and development. The mobile module may be prior-art components such as the robot's motor and wheels, or a combination thereof, realizing the purpose of movement.
For convenience of description, each part of the apparatus described above is divided by function into modules or units and described separately. Of course, when implementing the present application, the functions of the modules or units may be realized in one or more pieces of software or hardware.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical memory, etc.) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data-processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data-processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data-processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data-processing device, so that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present application have been described, those skilled in the art, once they grasp the basic inventive concept, can make other changes and modifications to these embodiments. The appended claims are therefore intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the present application.
Claims (14)
1. A robot implementation method, characterized by comprising the following steps:
sending a captured real-scene image and its synchronization information;
receiving a target position determined by a user from the real-scene image, together with the synchronization information;
determining the robot's target position according to the target position and the synchronization information;
generating a planned path according to the obstacle information of the current scene and the robot's target position;
moving to the target position along the planned path.
2. The method according to claim 1, characterized in that, before sending the real-scene image, the method further comprises:
rendering the obstacle information of the moment at which the real-scene image was captured onto the real-scene image.
3. The method according to claim 2, characterized in that rendering the obstacle information of the capture moment onto the real-scene image comprises:
obtaining the obstacle coordinates in the odometer coordinate system;
determining the obstacle coordinates in the camera coordinate system according to the predetermined relation between the camera coordinate system and the robot coordinate system and the synchronization information corresponding to the capture moment;
projecting the obstacle coordinates in the camera coordinate system onto the real-scene image.
4. The method according to claim 1, characterized in that the obstacle information of the moment at which the real-scene image was captured is sent together with the real-scene image.
5. The method according to claim 1, characterized in that determining the target location of the robot according to the target location and the synchronization information comprises:
solving the projection equation λp = K·T_c^b·P to obtain the coordinates (x, y) of the user-determined target location in the robot coordinate system,
wherein p = (u, v, 1)^T, (u, v) is the target location determined by the user, K is the camera intrinsic parameter matrix, T_c^b is the transformation matrix between the robot coordinate system b and the camera coordinate system c, P = (x, y, h, 1)^T, and λ is a scale factor;
converting the coordinates (x, y) in the robot coordinate system into the coordinates of the user-determined target location in the odometer coordinate system, according to the synchronization information at the moment the real scene image was sent and the synchronization information at the current moment.
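Because the selected point is constrained to a known height h, inverting the projection equation λp = K·T_c^b·P of claim 5 reduces to a 3×3 linear system in (λ, x, y); a minimal sketch under that assumption:

```python
import numpy as np

def pixel_to_robot_xy(u, v, h, K, T_cam_robot):
    """Back-project a selected pixel (u, v) onto the plane z = h in the
    robot frame by inverting lambda * p = K * T * P.

    K:           3x3 camera intrinsic matrix
    T_cam_robot: 4x4 robot-to-camera transform with rotation R, translation t
    Returns the coordinates (x, y) in the robot coordinate system.
    """
    R = T_cam_robot[:3, :3]
    t = T_cam_robot[:3, 3]
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray, camera frame
    # lambda * d = R @ (x, y, h) + t  ->  lambda*d - x*r1 - y*r2 = h*r3 + t
    A = np.column_stack([d, -R[:, 0], -R[:, 1]])
    b = h * R[:, 2] + t
    lam, x, y = np.linalg.solve(A, b)
    return x, y
```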
6. The method according to claim 1, characterized in that generating the planned path according to the obstacle information of the current scene and the target location of the robot comprises:
converting the perceived obstacle coordinates into coordinates in the robot coordinate system, and generating an obstacle map of the region where the robot is located according to the obstacle coordinates and the odometer information;
generating the planned path according to the current coordinates of the robot and the coordinates of the robot's target location in the obstacle map.
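Claim 6 does not name a planning algorithm. As one hedged possibility, the obstacle coordinates can be rasterized into an occupancy grid and the grid searched, here with a plain breadth-first search; the grid construction and cell size are assumptions:

```python
from collections import deque
import numpy as np

def plan_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = obstacle).

    `grid` stands in for the obstacle map of claim 6, i.e. obstacle
    coordinates rasterized into cells around the robot's odometer pose.
    Returns a list of (row, col) cells from start to goal, or None.
    """
    rows, cols = grid.shape
    prev = {start: None}                  # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk predecessors back to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt] == 0 and nxt not in prev):
                prev[nxt] = cell
                queue.append(nxt)
    return None                           # goal unreachable
```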
7. The method according to claim 1, characterized in that moving to the target location according to the planned path comprises:
generating motion instructions according to the planned path, the obstacle information at each moment, and the kinematic parameters of the robot itself;
controlling the robot to move according to the motion instructions until the robot reaches the target location.
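One simple way to turn the planned path and the robot's kinematic limits into motion instructions is a heading controller toward the next waypoint; the gains and limits below are invented placeholders, and obstacle information would additionally clamp or veto the command:

```python
import math

def motion_command(pose, waypoint, v_max=0.5, w_max=1.0, k_ang=1.5):
    """Compute a (linear, angular) velocity command toward a waypoint.

    pose:         (x, y, yaw) of the robot in the odometer frame
    waypoint:     (x, y) of the next point on the planned path
    v_max, w_max: assumed kinematic limits of the robot
    """
    x, y, yaw = pose
    heading = math.atan2(waypoint[1] - y, waypoint[0] - x)
    err = (heading - yaw + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    w = max(-w_max, min(w_max, k_ang * err))   # turn toward the waypoint
    v = v_max * max(0.0, math.cos(err))        # slow down while turning
    return v, w
```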
8. The method according to claim 1, characterized in that it further comprises:
receiving a robot heading determined by the user while receiving the target location determined by the user according to the real scene image;
converting the user-determined robot heading into a heading in the odometer coordinate system;
controlling the robot to turn from its current heading to the converted user-determined heading.
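For planar motion, converting the user-selected heading into the odometer frame (claim 8) amounts to composing yaw angles using the pose recorded in the synchronization information; a minimal sketch, assuming yaw-only rotation:

```python
import math

def heading_to_odom(yaw_user_in_robot, yaw_robot_in_odom_at_capture):
    """Compose the heading the user picked (relative to the robot frame at
    the frame's capture time) with the robot's recorded yaw in the odometer
    frame, then normalize the result to [-pi, pi)."""
    yaw = yaw_user_in_robot + yaw_robot_in_odom_at_capture
    return (yaw + math.pi) % (2 * math.pi) - math.pi
```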
9. A robot control method, characterized in that it comprises the following steps:
receiving a real scene image and synchronization information sent by a robot;
determining a target location selected by a user on the real scene image;
sending the determined target location and the synchronization information.
10. The method according to claim 9, characterized in that it further comprises:
receiving the obstacle information at the capture time of the real scene image while receiving the real scene image sent by the robot;
rendering the obstacle information at the capture time onto the real scene image.
11. The method according to claim 10, characterized in that rendering the obstacle information at the capture time onto the real scene image comprises:
obtaining the obstacle coordinates in the odometer coordinate system;
determining the obstacle coordinates in the camera coordinate system according to the predetermined relation between the camera coordinate system and the robot coordinate system and the odometer information corresponding to the capture time of the real scene image;
projecting the obstacle coordinates in the camera coordinate system onto the real scene image.
12. The method according to claim 9, characterized in that it further comprises:
determining a robot heading selected by the user on the real scene image;
sending the determined robot heading to the robot.
13. A robot, characterized in that it comprises: a camera, a motor, a mobile device, a processor, a memory, and one or more modules;
the one or more modules are stored in the memory and configured to be executed by the processor, and the one or more modules include instructions for performing each step of the robot implementation method according to any one of claims 1 to 8.
14. An electronic device, characterized in that it comprises: a display screen, a processor, a memory, and one or more modules;
the one or more modules are stored in the memory and configured to be executed by the processor, and the one or more modules include instructions for performing each step of the robot control method according to any one of claims 9 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710595912.1A CN107515606A (en) | 2017-07-20 | 2017-07-20 | Robot implementation method, control method and robot, electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710595912.1A CN107515606A (en) | 2017-07-20 | 2017-07-20 | Robot implementation method, control method and robot, electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107515606A true CN107515606A (en) | 2017-12-26 |
Family
ID=60721702
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710595912.1A Pending CN107515606A (en) | 2017-07-20 | 2017-07-20 | Robot implementation method, control method and robot, electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107515606A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20070113939A (en) * | 2006-05-26 | 2007-11-29 | Fujitsu Ltd. | Mobile robot, and control method and program for the same |
CN104898652A (en) * | 2011-01-28 | 2015-09-09 | InTouch Technologies, Inc. | Interfacing with a mobile telepresence robot |
CN105751230A (en) * | 2016-03-31 | 2016-07-13 | Ninebot (Beijing) Tech Co., Ltd. | Path control method, path planning method, first equipment and second equipment |
CN105922262A (en) * | 2016-06-08 | 2016-09-07 | Beijing Xingyun Shikong Technology Co., Ltd. | Robot and remote control equipment and remote control method thereof |
Cited By (49)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108121347A (en) * | 2017-12-29 | 2018-06-05 | 北京三快在线科技有限公司 | For the method, apparatus and electronic equipment of control device movement |
CN109990889B (en) * | 2017-12-29 | 2021-06-29 | 深圳市优必选科技有限公司 | Control method and device of recording robot |
CN109990889A (en) * | 2017-12-29 | 2019-07-09 | 深圳市优必选科技有限公司 | A kind of control method and device for robot of recording |
WO2019136808A1 (en) * | 2018-01-15 | 2019-07-18 | 深圳市沃特沃德股份有限公司 | Robot moving method, robot moving device, floor sweeping robot |
CN110147091B (en) * | 2018-02-13 | 2022-06-28 | 深圳市优必选科技有限公司 | Robot motion control method and device and robot |
CN110147091A (en) * | 2018-02-13 | 2019-08-20 | 深圳市优必选科技有限公司 | Motion planning and robot control method, apparatus and robot |
CN110293554A (en) * | 2018-03-21 | 2019-10-01 | 北京猎户星空科技有限公司 | Control method, the device and system of robot |
CN108759822A (en) * | 2018-04-12 | 2018-11-06 | 江南大学 | A kind of mobile robot 3D positioning systems |
CN108759822B (en) * | 2018-04-12 | 2021-04-30 | 江南大学 | Mobile robot 3D positioning system |
CN110398954A (en) * | 2018-04-24 | 2019-11-01 | 北京京东尚科信息技术有限公司 | A kind of path planning, storage method and its device |
WO2019223159A1 (en) * | 2018-05-23 | 2019-11-28 | 平安科技(深圳)有限公司 | Method and apparatus for controlling live broadcast of unmanned device, computer device, and storage medium |
CN108805928B (en) * | 2018-05-23 | 2023-04-18 | 平安科技(深圳)有限公司 | Method and device for controlling live broadcast of unmanned equipment, computer equipment and storage medium |
CN108805928A (en) * | 2018-05-23 | 2018-11-13 | 平安科技(深圳)有限公司 | Control method, apparatus, computer equipment and the storage medium of unmanned machine live streaming |
US11969896B2 (en) | 2018-06-21 | 2024-04-30 | Beijing Geekplus Technology Co., Ltd. | Robot scheduling and robot path control method, server and storage medium |
CN108958241A (en) * | 2018-06-21 | 2018-12-07 | 北京极智嘉科技有限公司 | Control method, device, server and the storage medium of robot path |
US11934188B2 (en) | 2018-09-06 | 2024-03-19 | Volkswagen Aktiengesellschaft | Monitoring and planning a movement of a transportation device |
CN112601693B (en) * | 2018-09-06 | 2023-04-18 | 大众汽车股份公司 | Solution for monitoring and planning the movement of a vehicle |
CN112601693A (en) * | 2018-09-06 | 2021-04-02 | 大众汽车股份公司 | Solution for monitoring and planning the movement of a vehicle |
CN111103875A (en) * | 2018-10-26 | 2020-05-05 | 科沃斯机器人股份有限公司 | Method, apparatus and storage medium for avoiding |
CN111103875B (en) * | 2018-10-26 | 2021-12-03 | 科沃斯机器人股份有限公司 | Method, apparatus and storage medium for avoiding |
CN109725580A (en) * | 2019-01-17 | 2019-05-07 | 深圳市锐曼智能装备有限公司 | The long-range control method of robot |
CN111457923B (en) * | 2019-01-22 | 2022-08-12 | 北京京东乾石科技有限公司 | Path planning method, device and storage medium |
CN111457923A (en) * | 2019-01-22 | 2020-07-28 | 北京京东尚科信息技术有限公司 | Path planning method, device and storage medium |
CN114503042A (en) * | 2019-08-07 | 2022-05-13 | 波士顿动力公司 | Navigation mobile robot |
CN110909585A (en) * | 2019-08-15 | 2020-03-24 | 北京致行慕远科技有限公司 | Route determining method, travelable device and storage medium |
CN110909585B (en) * | 2019-08-15 | 2022-09-06 | 纳恩博(常州)科技有限公司 | Route determining method, travelable device and storage medium |
CN110897557A (en) * | 2019-12-05 | 2020-03-24 | 西安广源机电技术有限公司 | Floor sweeping robot system |
CN111208738A (en) * | 2020-01-17 | 2020-05-29 | 上海高仙自动化科技发展有限公司 | Controller, intelligent robot and intelligent robot system |
CN111319041A (en) * | 2020-01-17 | 2020-06-23 | 深圳市优必选科技股份有限公司 | Robot pose determining method and device, readable storage medium and robot |
CN111309024A (en) * | 2020-03-04 | 2020-06-19 | 北京小狗智能机器人技术有限公司 | Robot positioning navigation method and device based on real-time visual data |
CN111624997A (en) * | 2020-05-12 | 2020-09-04 | 珠海市一微半导体有限公司 | Robot control method and system based on TOF camera module and robot |
CN111625001B (en) * | 2020-05-28 | 2024-02-02 | 珠海格力智能装备有限公司 | Robot control method and device and industrial robot |
CN111625001A (en) * | 2020-05-28 | 2020-09-04 | 珠海格力智能装备有限公司 | Robot control method and device and industrial robot |
CN111752161B (en) * | 2020-06-18 | 2023-06-30 | 格力电器(重庆)有限公司 | Electrical appliance control method, system and storage medium |
CN111752161A (en) * | 2020-06-18 | 2020-10-09 | 格力电器(重庆)有限公司 | Electric appliance control method, system and storage medium |
CN111781924A (en) * | 2020-06-21 | 2020-10-16 | 珠海市一微半导体有限公司 | Boundary crossing control system based on mowing robot and boundary crossing control method thereof |
CN111872935A (en) * | 2020-06-21 | 2020-11-03 | 珠海市一微半导体有限公司 | Robot control system and control method thereof |
CN114077243B (en) * | 2020-08-07 | 2023-12-05 | 上海联影医疗科技股份有限公司 | Motion control method and system for medical auxiliary equipment |
CN114077243A (en) * | 2020-08-07 | 2022-02-22 | 上海联影医疗科技股份有限公司 | Motion control method and system for medical auxiliary equipment |
CN112329530B (en) * | 2020-09-30 | 2023-03-21 | 北京航空航天大学 | Method, device and system for detecting mounting state of bracket |
CN112329530A (en) * | 2020-09-30 | 2021-02-05 | 北京航空航天大学 | Method, device and system for detecting mounting state of bracket |
CN112613469A (en) * | 2020-12-30 | 2021-04-06 | 深圳市优必选科技股份有限公司 | Motion control method of target object and related equipment |
CN112613469B (en) * | 2020-12-30 | 2023-12-19 | 深圳市优必选科技股份有限公司 | Target object motion control method and related equipment |
CN112806905A (en) * | 2020-12-31 | 2021-05-18 | 广州极飞科技股份有限公司 | Method and device for multi-equipment cooperative operation, unmanned aerial vehicle and sweeping robot |
CN113070882A (en) * | 2021-04-28 | 2021-07-06 | 北京格灵深瞳信息技术股份有限公司 | Maintenance robot control system, method and device and electronic equipment |
CN113362709A (en) * | 2021-05-31 | 2021-09-07 | 南京信息工程大学 | Mobile robot type sign system |
CN113534810A (en) * | 2021-07-22 | 2021-10-22 | 乐聚(深圳)机器人技术有限公司 | Logistics robot and logistics robot system |
CN117146828A (en) * | 2023-10-30 | 2023-12-01 | 网思科技股份有限公司 | Method and device for guiding picking path, storage medium and computer equipment |
CN117146828B (en) * | 2023-10-30 | 2024-03-19 | 网思科技股份有限公司 | Method and device for guiding picking path, storage medium and computer equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107515606A (en) | Robot implementation method, control method and robot, electronic equipment | |
CN103996322B (en) | A kind of welding operation training simulation method and system based on augmented reality | |
CN107037880A (en) | Space orientation attitude determination system and its method based on virtual reality technology | |
CN107478214A (en) | A kind of indoor orientation method and system based on Multi-sensor Fusion | |
US20110087371A1 (en) | Responsive control method and system for a telepresence robot | |
US20080267450A1 (en) | Position Tracking Device, Position Tracking Method, Position Tracking Program and Mixed Reality Providing System | |
US20100149337A1 (en) | Controlling Robotic Motion of Camera | |
CN105931263A (en) | Target tracking method and electronic equipment | |
JP3343682B2 (en) | Robot operation teaching device and operation teaching method | |
US9154769B2 (en) | Parallel online-offline reconstruction for three-dimensional space measurement | |
JP2013061937A (en) | Combined stereo camera and stereo display interaction | |
CN107253192A (en) | It is a kind of based on Kinect without demarcation human-computer interactive control system and method | |
JPH09153146A (en) | Virtual space display method | |
CN104391578A (en) | Real-time gesture control method of three-dimensional images | |
CN112634318A (en) | Teleoperation system and method for underwater maintenance robot | |
CN105797378A (en) | Game video realizing method based on virtual reality technology | |
WO2020218368A1 (en) | Exercise equipment | |
EP3913478A1 (en) | Systems and methods for facilitating shared rendering | |
Liu et al. | Mobile delivery robots: Mixed reality-based simulation relying on ros and unity 3D | |
Almeida et al. | Be the robot: Human embodiment in tele-operation driving tasks | |
CN108830944A (en) | Optical perspective formula three-dimensional near-eye display system and display methods | |
CN105824417A (en) | Method for combining people and objects through virtual reality technology | |
JPH11338532A (en) | Teaching device | |
CN107247424A (en) | A kind of AR virtual switches and its method based on laser distance sensor | |
WO2022008490A1 (en) | Method of aligning virtual and real objects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | CB02 | Change of applicant information | Address after: 100192 Block B, Building 1, Tiandi Linfeng Industrial Park, No. 1 North Yongtaizhuang Road, Haidian District, Beijing. Applicant after: BEIJING DEEPGLINT INFORMATION TECHNOLOGY CO., LTD. Address before: 100192 8th floor, south of Building 20, Aobei Science Park, No. 1 Baosheng South Road, Haidian District, Beijing. Applicant before: BEIJING DEEPGLINT INFORMATION TECHNOLOGY CO., LTD. |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20171226 |