CN108748184A - Robot patrol method based on regional map identification and robot equipment - Google Patents

Robot patrol method based on regional map identification and robot equipment

Info

Publication number
CN108748184A
Authority
CN
China
Prior art keywords
patrol
area map
robot
mark
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810608998.1A
Other languages
Chinese (zh)
Other versions
CN108748184B (English)
Inventor
刘孟红 (Liu Menghong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Changhong Electric Co Ltd
Original Assignee
Sichuan Changhong Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Changhong Electric Co Ltd filed Critical Sichuan Changhong Electric Co Ltd
Priority to CN201810608998.1A priority Critical patent/CN108748184B/en
Publication of CN108748184A publication Critical patent/CN108748184A/en
Application granted granted Critical
Publication of CN108748184B publication Critical patent/CN108748184B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 - Manipulators not otherwise provided for
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 - Sensing devices
    • B25J19/04 - Viewing devices
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors; perception control; multi-sensor controlled systems; sensor fusion
    • B25J9/1697 - Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a robot patrol method based on area map identification, comprising the steps of: acquiring 2D images and/or depth images with a camera; estimating the robot's motion and a 3D map of the space from the 2D images and/or depth images; displaying the 3D space map to the user and receiving user-specified area map information and area map identifiers; and receiving and parsing the patrol instruction returned by the user and controlling the robot to patrol according to that instruction. With the robot patrol method and robot device based on area map identification of the present invention, on the basis of the robot estimating its own motion and a 3D space map from the camera, the user assists in constructing area map information and area map identifiers, so that the patrol regions, the patrol order, the patrol times and the patrol period can be configured flexibly. The robot can also find the best patrol position according to the position and size of the patrol region, making patrols more efficient.

Description

Robot patrol method and robot device based on area map identification
Technical field
The present invention relates to the technical field of indoor mobile robot patrol, and in particular to a robot patrol method and robot device based on area map identification.
Background technology
With the rapid development of robot-related technologies, people's demands on robots are growing, especially for robot security patrol functions. At home, users want a robot that can patrol intelligently while they are away. In large factories or workshops that are unattended at night, intelligent robot patrol is likewise desirable. Robot security patrol technology has therefore increasingly become a research focus for vendors.
In the prior art, the application numbered CN201710417927, entitled "An autonomous robot patrol method", discloses a 2D-map patrol method based on lidar. However, that method cannot build a 3D map and cannot perform fixed-point patrol of fixed regions in space; for example, it cannot perform fixed-point patrol of elevated places that require special attention, such as the window positions of a factory. Autonomous robot patrol methods in the prior art, including the above, also have many other technical deficiencies: the robot cannot automatically locate and identify the regions the user needs patrolled, the user cannot flexibly configure the patrol regions, the patrol order, the patrol times and the patrol period, and patrol efficiency is low.
Invention content
The purpose of the present invention is to overcome the above deficiencies in the background art by providing a robot patrol method and robot device based on area map identification. On the basis of the robot estimating its own motion and a 3D space map from the camera, the user assists in constructing area map information and area map identifiers, so that the patrol regions (both 2D and 3D regions), the patrol order, the patrol times and the patrol period can be configured flexibly. Moreover, after the user has configured a 2D or 3D patrol region, the robot can find the best patrol position according to the region's position and size, making patrols more efficient.
To achieve the above technical effect, the present invention adopts the following technical scheme:
A robot patrol method based on area map identification, comprising the steps of:
A. acquiring 2D images and/or depth images with a camera;
B. estimating the robot's motion and a 3D space map from the 2D images and/or depth images;
C. displaying the 3D space map to the user, and receiving user-specified area map information and area map identifiers;
D. receiving and parsing the patrol instruction returned by the user, and controlling the robot to patrol according to that instruction, wherein the patrol instruction includes at least the identifiers of the area maps to be patrolled.
Further, step B specifically comprises:
B1. visual odometry based on ORB features: extracting ORB features, matching them with an algorithm appropriate to the scene, and estimating the camera motion from the matched points;
B2. back-end optimization based on a pose graph: building a graph optimization over the trajectory only, with the edge between two pose nodes initialized by the camera motion estimated from feature matching between the two keyframes;
B3. loop-closure detection based on a bag-of-words model: detecting loop closures by similarity computation over keyframes and the bag-of-words model, and estimating the robot's motion;
B4. dense mapping: building the map with triangular meshes or surfels to estimate object surfaces, or building an occupancy grid map or octree map for navigation, to obtain the 3D space map.
Further, the area map information in step C is a 2D planar region or a 3D solid region.
Further, step C also comprises, after the robot receives the user-specified area map information and area map identifiers, the step of the robot determining the best patrol position from its own attributes and the attributes of the 2D planar region or 3D solid region.
Further, the robot's own attributes include at least the robot's field of view and height, the position of its camera, and whether the camera can rotate.
Further, the 2D planar regions are divided into regions lying in a horizontal plane and regions perpendicular to the horizontal plane.
When the area map information is a 2D planar region lying in a horizontal plane, or a 3D solid region, the robot judges from the region's size and height whether it needs to enter the region to patrol, and determines the best patrol position; when the area map information is a 2D planar region perpendicular to the horizontal plane, the robot determines the best patrol position from the region's size, orientation and height.
Further, the criterion for the best patrol position is the observation point at which the robot's field of view just covers the 2D planar region or 3D solid region.
Further, the patrol instruction in step D includes at least the identifiers of the area maps to be patrolled, the order in which they are to be patrolled, patrol time information and/or patrol period information.
Meanwhile the invention also discloses a kind of robot devices for being identified and being gone on patrol based on area map, including:For Acquire camera module, the 2D images and/or depth map for being obtained according to camera module of 2D images and/or depth image As estimating the movement of robot and the 3D map structurings of space 3D maps and locating module, being used for 3D map structurings and positioning mould The space 3D map denotations that block obtains receive the region of area map information and area map mark that user specifies to user Cartographic information and identifier acquisition module are instructed and control robot for receiving and parse the patrol of user and instructed by the patrol The patrol information gone on patrol receives and processing module, wherein the patrol instruction includes at least the area map for needing to go on patrol Mark.
Further, also include communication module, the communication module respectively with 3D map structurings and locating module, region Figure information and identifier acquisition module are connected, and communication module is used to send out in the space 3D maps that 3D map structurings and locating module obtain The user terminal for communicating connection is given, and receives the specified area map information and area map mark of user terminal return Know, and the specified area map information and area map that receive mark are transferred to area map information and identifier acquisition module.
Compared with the prior art, the present invention has the following advantageous effects:
In the robot patrol method and robot device based on area map identification of the present invention, on the basis of the robot estimating its own motion and a 3D space map from the camera, the user assists in constructing area map information and area map identifiers, so that the patrol regions, the patrol order, the patrol times and the patrol period can be configured flexibly. Moreover, after the user has configured a 2D or 3D patrol region, the robot can find the best patrol position according to the region's position and size, making patrols more efficient.
Description of the drawings
Fig. 1 is a flow diagram of the robot patrol method based on area map identification of one embodiment of the present invention.
Fig. 2 is a structural diagram of the robot device patrolling based on area map identification of one embodiment of the present invention.
Specific implementation mode
The invention will be further elaborated below with reference to embodiments of the present invention.
Embodiment one:
As shown in Fig. 1, a robot patrol method based on area map identification specifically includes the following steps.
Step 1: acquire 2D images and/or depth images with a camera.
Step 2: estimate the robot's motion and a 3D space map from the 2D images and/or depth images.
Estimating the robot's motion and the 3D space map specifically comprises the following steps.
First step: visual odometry based on ORB features.
ORB features keep rotation and scale invariance while offering a clear speed advantage, so they can satisfy simultaneous localization and mapping (SLAM) under demanding real-time requirements. Extracting an ORB feature requires:
- FAST corner extraction: find the "corners" in the image, which only requires comparing pixel brightness;
- BRIEF descriptor computation: describe the image region around each extracted corner.
After the ORB features are obtained, feature matching is performed using brute-force matching, fast approximate nearest neighbours, or similar algorithms depending on the scene, to establish correspondences between the landmarks currently seen and those seen before; the camera motion is then estimated from the matched points.
If an RGB-D camera is used, the optimization can adaptively mix PnP and ICP according to the availability of depth data at each pixel.
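The matching step above reduces to comparing binary ORB/BRIEF descriptors by Hamming distance with a cross-check. A minimal sketch in pure NumPy follows; the patent names no library, and the toy 256-bit descriptors below are invented for illustration:

```python
import numpy as np

def hamming_match(des1, des2):
    """Brute-force Hamming matching with cross-check, as used for the
    binary ORB/BRIEF descriptors of the first step.
    des1, des2: arrays of shape (N, 32), dtype uint8 (256-bit descriptors)."""
    # Hamming distance = popcount of the XOR over all descriptor bytes.
    d = np.unpackbits(des1[:, None, :] ^ des2[None, :, :], axis=2).sum(axis=2)
    fwd = d.argmin(axis=1)  # best match in des2 for each row of des1
    bwd = d.argmin(axis=0)  # best match in des1 for each row of des2
    # Keep only mutual (cross-checked) best matches.
    return [(i, int(j)) for i, j in enumerate(fwd) if bwd[j] == i]

# Invented demo: the "second frame" sees the same descriptors, shuffled.
rng = np.random.default_rng(42)
des = rng.integers(0, 256, size=(8, 32), dtype=np.uint8)
perm = rng.permutation(8)
matches = hamming_match(des, des[perm])
print(len(matches))
```

With identical descriptors the cross-check recovers every correspondence; a real front end would feed the matched keypoint coordinates into the motion estimation (PnP/ICP) mentioned above.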
Second step: back-end optimization based on a pose graph.
The front-end visual odometry provides a trajectory and map over a short window, but because error accumulation is unavoidable, that map is inaccurate over long periods. In this embodiment, a larger-scale optimization problem is therefore built on top of the odometry to obtain the optimal long-term trajectory and map. To keep the computation efficient, a pose-graph back end is used: a graph optimization is built over the trajectory only, with the edge between two pose nodes initialized by the camera motion estimated from feature matching between the two keyframes.
The pose graph can be optimized with solvers such as Gauss-Newton or Levenberg-Marquardt; optimization over a factor graph is also contemplated.
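The pose-graph back end can be illustrated with a deliberately tiny example: a 1-D pose graph solved by Gauss-Newton, where a loop-closure edge corrects accumulated odometry drift. All measurements are invented; a real system would use full 3-D poses and a solver library such as g2o or Ceres:

```python
import numpy as np

def optimize_pose_graph(n, edges, iters=10):
    """Gauss-Newton over a 1-D pose graph: poses x_0..x_{n-1}, each edge
    (i, j, z) measures x_j - x_i = z.  Pose 0 is anchored as the origin."""
    x = np.zeros(n)
    for _ in range(iters):
        H = np.zeros((n, n))
        b = np.zeros(n)
        for i, j, z in edges:
            e = (x[j] - x[i]) - z  # residual of this edge
            # Jacobian of e is -1 w.r.t. x_i and +1 w.r.t. x_j.
            H[i, i] += 1; H[j, j] += 1; H[i, j] -= 1; H[j, i] -= 1
            b[i] += e; b[j] -= e   # b accumulates -gradient
        H[0, 0] += 1e6             # gauge fix: pin pose 0 near zero
        x += np.linalg.solve(H, b)
    return x

# Odometry claims each step moves +1.0, but a loop-closure edge reports
# that pose 4 is actually 3.6 from pose 0; the drift is spread evenly.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0), (0, 4, 3.6)]
x = optimize_pose_graph(5, edges)
print(np.round(x, 2))
```

Each step shrinks from 1.0 to 0.92, reconciling the chain with the loop-closure measurement; this is the mechanism by which the back end distributes accumulated error over the trajectory.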
Third step: loop-closure detection based on a bag-of-words model.
The front end provides feature extraction and initial values for the trajectory and map, and the back end optimizes all of this data. However, prior-art methods normally consider only associations between adjacent times when building the robot's trajectory and map. The defect of such methods is that errors generated earlier inevitably accumulate into every later moment, so the whole SLAM system accumulates error, long-term estimates become unreliable, and a globally consistent trajectory and map cannot be built.
Loop-closure detection is introduced in this embodiment to solve this problem. The key to loop closure is detecting effectively that the camera has passed through the same place, that is, computing the similarity between images.
This is the purpose of introducing the bag-of-words model: an image is described by "which features appear in it", the features are summarized as words, and many words form a dictionary. Building the dictionary is similar to a clustering problem.
To improve loop-closure efficiency, this embodiment also optimizes the keyframe selection mechanism. If keyframes are selected too close together, the similarity between two keyframes becomes excessively high and loop closures against historical data become hard to detect. Keyframes should therefore be sparse and mutually dissimilar, while together covering the whole environment.
Based on the keyframes and the bag-of-words model, loop closures can be detected by similarity computation. To prevent perceptual aliasing, loop-closure detection preferably also includes a verification step.
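A sketch of the similarity computation the loop-closure step relies on: cosine similarity between bag-of-words histograms of two keyframes. The word IDs below are invented stand-ins for a real vocabulary:

```python
import numpy as np

def bow_similarity(words_a, words_b, vocab_size):
    """Cosine similarity between bag-of-words histograms of two keyframes,
    the score the loop-closure step thresholds."""
    ha = np.bincount(words_a, minlength=vocab_size).astype(float)
    hb = np.bincount(words_b, minlength=vocab_size).astype(float)
    return float(ha @ hb / (np.linalg.norm(ha) * np.linalg.norm(hb)))

# Invented word IDs: frame C revisits the place seen in frame A,
# while frame B shows an unrelated place.
frame_a = [3, 3, 7, 12, 12, 12, 25]
frame_b = [1, 4, 4, 9, 18, 30, 30]
frame_c = [3, 7, 12, 12, 25, 25, 12]
same = bow_similarity(frame_a, frame_c, 32)
other = bow_similarity(frame_a, frame_b, 32)
print(same, other)  # the revisited place scores far higher
```

A real system (e.g. DBoW-style) would add TF-IDF weighting and the verification step mentioned above before accepting a closure.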
Fourth step: dense mapping.
The map is built with triangular meshes (Mesh) or surfels (Surfel) to estimate object surfaces, or an occupancy grid map or octree map is built for navigation.
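As a minimal, assumption-laden stand-in for the occupancy-grid variant of this step (a real system updates cell probabilities along sensor rays; here observed surface points simply mark cells occupied):

```python
import numpy as np

def build_occupancy_grid(points, size=10, res=1.0):
    """Mark grid cells containing observed surface points as occupied,
    a toy version of the occupancy-grid map used for navigation."""
    grid = np.zeros((size, size), dtype=bool)
    for x, y in points:
        i, j = int(x / res), int(y / res)
        if 0 <= i < size and 0 <= j < size:
            grid[i, j] = True
    return grid

# Invented surface points along a "wall" at x ≈ 2.2, y from 0 to 5.5 m.
wall = [(2.2, y * 0.5) for y in range(12)]
grid = build_occupancy_grid(wall)
print(int(grid.sum()))  # number of occupied cells
```

An octree map follows the same idea but subdivides occupied space hierarchically in 3D, which is what makes it suitable for the 3D space map of this method.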
Step 3: display the 3D space map to the user, and receive the user-specified area map information and area map identifiers.
The area map information is a 2D planar region or a 3D solid region. For example, the area map information format may be (map type, area type, centre point, radius or side length, direction), where specifically:
Map type: 0 denotes a 2D map, 1 denotes a 3D map.
Area type: 0 denotes a square or cube, 1 denotes a circle or sphere.
Centre point: a value (x, y, z) giving the three-dimensional coordinates of the centre.
Radius or side length: a number giving the side length of the square or cube, or the radius of the circle or sphere.
Direction: expressed as an (x, y, z) vector.
The area map information may also cover other area types and use different information formats, designed as needed in practice. An area map identifier can be anything that logically identifies area map information, such as a number or a name.
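The region format just described maps naturally onto a small record type. The sketch below is illustrative only; the field names, the `AreaMap` class and the example regions are invented, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class AreaMap:
    """The region format described above:
    (map type, area type, centre point, radius or side length, direction)."""
    map_type: int    # 0 = 2D map, 1 = 3D map
    area_type: int   # 0 = square/cube, 1 = circle/sphere
    center: tuple    # (x, y, z) centre point
    extent: float    # side length, or radius for circle/sphere
    direction: tuple # (x, y, z) orientation vector

# Identifiers are any names the user chooses, mapped to region records.
regions = {
    "factory_window": AreaMap(0, 0, (4.0, 0.0, 2.5), 1.2, (0.0, 1.0, 0.0)),
    "warehouse_gate": AreaMap(1, 1, (10.0, 3.0, 1.0), 2.0, (1.0, 0.0, 0.0)),
}
print(regions["factory_window"].map_type, regions["warehouse_gate"].extent)
```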
Meanwhile robot by space 3D map denotations to the method for user include robot itself carry display screen side Method, that is, support the interaction of user's touch screen, user can on robot display screen defined area cartographic information, and setting area map Mark.
Meanwhile space 3D map denotations are further included that space 3D maps are sent to user's end to the method for user by robot End specifies area map information and area map to identify, is then forwarded to robot, such method is wanted on mobile terminals by user It asks robot that there is communications network functionality, can interconnect with user terminal.User terminal can be mobile phone, PC etc..
Step 4: receive and parse the user's patrol instruction, which includes the identifiers of the area maps to be patrolled, and control the robot to patrol according to the instruction.
The patrol instruction may also include the order of the area map identifiers to be patrolled, patrol time information, patrol period information, etc. Patrol time information can be anything that logically specifies a time interval, such as absolute interval information, relative interval information, or interval length information.
An example of an absolute interval: start time 15:30:00 on 15 August 2015, end time 16:00:00 on 15 August 2015.
An example of a relative interval: start time 0 s, end time 40 min 0 s, where the relative reference point can be agreed to be the moment the user's patrol instruction is received.
Patrol period information can be anything that logically specifies a period, such as patrolling once every 10 minutes.
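A sketch of how such a patrol instruction might be represented and parsed. The `ids=...;start=...;end=...;period=...` wire format is invented (the patent fixes only what the instruction must contain, not its syntax); the numbers reuse the relative-time and 10-minute-period examples from the text:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatrolInstruction:
    """A patrol instruction as described above: the identifiers to visit
    (in order) are mandatory; time window and period are optional."""
    area_ids: list               # ordered area map identifiers
    start: Optional[int] = None  # relative start, seconds
    end: Optional[int] = None    # relative end, seconds
    period: Optional[int] = None # repeat interval, seconds

def parse_patrol(cmd: str) -> PatrolInstruction:
    """Parse a hypothetical 'ids=a,b;start=0;end=2400;period=600' string."""
    fields = dict(part.split("=", 1) for part in cmd.split(";") if part)
    return PatrolInstruction(
        area_ids=fields["ids"].split(","),
        start=int(fields["start"]) if "start" in fields else None,
        end=int(fields["end"]) if "end" in fields else None,
        period=int(fields["period"]) if "period" in fields else None,
    )

instr = parse_patrol("ids=factory_window,warehouse_gate;start=0;end=2400;period=600")
print(instr.area_ids, instr.period)
```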
Meanwhile robot is instructed by patrol when being gone on patrol, and for the patrol efficiency of hoisting machine people, saves time and province Electricity.In the present embodiment, after user gives the regions 2D or 3D, robot needs to combine the attribute in self attributes and the regions 2D or 3D Judge best patrol place, the self attributes of robot include camera view and the height of itself, the position of camera, camera shooting Can head rotate.
And in the present embodiment, it is 2D plane domains and perpendicular to level by being divided into positioned at horizontal plane for 2D plane domains The 2D plane domains in face.When area map information is 2D plane domains and positioned at horizontal plane or when area map information is 3D When solid region, then robot according to the size and height of the sizes of 2D plane domains or 3D solid regions judge whether to need into Enter in region and go on patrol, and judges best patrol place;And work as area map information for 2D plane domains and perpendicular to horizontal plane, then Robot judges best patrol place according to the size of 2D plane domains, direction and height.
Generally when judging that robot most preferably goes on patrol place, the field range of camera is obtained just with the internal reference of camera The observation point position for covering 2D plane domains is best patrol place.
Such as when area map information is for 2D plane domains and perpendicular to horizontal plane, height that can be according to 2D plane domains and court To the angle of looking up at of robot camera being determined, and the field range of camera is obtained according to the internal reference of camera, just to cover The observation point position for covering 2D plane domains is best patrol place.
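The "field of view just covers the region" criterion can be sketched as a standoff-distance computation. The assumptions here are invented simplifications: known horizontal and vertical FOV angles stand in for the camera intrinsics, and a pan/tilt head is assumed able to aim at the region centre, so the best patrol point lies along the region's normal:

```python
import math

def best_patrol_point(center, normal, width, height, hfov_deg, vfov_deg):
    """Observation point whose field of view just covers a vertical planar
    region: stand back along the region's normal far enough that both the
    horizontal and vertical FOV span it."""
    d_w = (width / 2) / math.tan(math.radians(hfov_deg) / 2)
    d_h = (height / 2) / math.tan(math.radians(vfov_deg) / 2)
    d = max(d_w, d_h)  # the "just covers" standoff distance
    nx, ny, nz = normal
    n = math.sqrt(nx * nx + ny * ny + nz * nz)
    cx, cy, cz = center
    return (cx + d * nx / n, cy + d * ny / n, cz + d * nz / n), d

# Invented numbers: a 2 m x 1.5 m window, 90° horizontal / 60° vertical FOV.
point, dist = best_patrol_point((4.0, 0.0, 2.0), (0.0, 1.0, 0.0),
                                2.0, 1.5, 90.0, 60.0)
print(round(dist, 3), tuple(round(c, 3) for c in point))
```

Here the vertical extent dominates, so the robot should stand about 1.3 m back from the window along its normal; a real implementation would also clip the point against the map's free space.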
Therefore, in the robot patrol method based on area map identification of the present invention, on the basis of the robot estimating its own motion and a 3D space map from the camera, the user assists in constructing area map information and area map identifiers, so that the patrol regions (both 2D and 3D), the patrol order, the patrol times and the patrol period can be configured flexibly; and after the user has configured a 2D or 3D patrol region, the robot can find the best patrol position according to the region's position and size, making patrols more efficient.
Embodiment two
As shown in Fig. 2, a robot device patrolling based on area map identification includes the following modules, connected in sequence:
a camera module, a 3D map building and localization module, an area map information and identifier acquisition module, and a patrol information receiving and processing module.
The camera module acquires 2D images and/or depth images; the 3D map building and localization module estimates the robot's motion and a 3D space map from the 2D images and/or depth images obtained by the camera module; the area map information and identifier acquisition module displays the 3D space map obtained by the 3D map building and localization module to the user and receives the user-specified area map information and area map identifiers; and the patrol information receiving and processing module receives and parses the user's patrol instruction and controls the robot to patrol according to it.
In operation, the camera module first acquires 2D images and/or depth images and passes them to the 3D map building and localization module, which estimates the robot's motion and the 3D space map from them; the area map information and identifier acquisition module then displays that map to the user and receives the user-specified area map information and area map identifiers; finally, the patrol information receiving and processing module receives and parses the user's patrol instruction and controls the robot to patrol according to it.
Specifically, when estimating the robot's motion and the 3D space map from the 2D images and/or depth images, the 3D map building and localization module uses the steps of visual odometry based on ORB features, back-end optimization based on a pose graph, loop-closure detection based on a bag-of-words model, and dense mapping.
The user-specified area map information received by the area map information and identifier acquisition module is a 2D planar region or a 3D solid region, and the patrol instruction received by the patrol information receiving and processing module includes the identifiers of the area maps to be patrolled, the order in which they are to be patrolled, patrol time information, patrol period information, etc.
When the robot patrols according to the instruction, to improve patrol efficiency and save time and power, after the user gives a 2D or 3D region the robot determines the best patrol position by combining its own attributes with the region's attributes. The relevant attributes include the robot's field of view and its own height, the position of the camera, and whether the camera can rotate.
Specifically, when the area map information is a 2D planar region lying in a horizontal plane, or a 3D solid region, the robot judges from the region's size and height whether it needs to enter the region to patrol, and determines the best patrol position; when the area map information is a 2D planar region perpendicular to the horizontal plane, the robot determines the best patrol position from the region's size, orientation and height.
Preferably, the robot can also display the 3D space map by sending it to a user terminal, on which the user specifies area map information and area map identifiers for forwarding back to the robot. The device of this embodiment is therefore further provided with a communication module, connected to the 3D map building and localization module and to the area map information and identifier acquisition module. The communication module sends the 3D space map obtained by the 3D map building and localization module to a communicatively connected user terminal, receives the specified area map information and area map identifiers returned by the terminal, and passes them to the area map information and identifier acquisition module. The user terminal may be a mobile phone, a PC, etc.
It should be noted that the modules (or units) in this embodiment are logical divisions; in a concrete implementation, multiple modules (or units) may be merged into one module (or unit), and one module (or unit) may also be split into multiple modules (or units).
Those skilled in the art will appreciate that all or part of the flow of the above method embodiments can be completed by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, may include the flow of each of the method embodiments above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), etc.
It is to be understood that the above embodiments are merely exemplary implementations used to illustrate the principle of the present invention, and the invention is not limited thereto. For those skilled in the art, various variations and improvements can be made without departing from the spirit and essence of the invention, and these variations and improvements are also considered within the protection scope of the present invention.

Claims (10)

1. A robot patrol method based on area map identification, characterized by comprising the steps of:
A. acquiring 2D images and/or depth images with a camera;
B. estimating the robot's motion and a 3D space map from the 2D images and/or depth images;
C. displaying the 3D space map to the user, and receiving user-specified area map information and area map identifiers;
D. receiving and parsing the patrol instruction returned by the user, and controlling the robot to patrol according to that instruction, wherein the patrol instruction includes at least the identifiers of the area maps to be patrolled.
2. The robot patrol method based on area map identification according to claim 1, characterized in that step B specifically comprises:
B1. visual odometry based on ORB features: extracting ORB features, matching them with an algorithm appropriate to the scene, and estimating the camera motion from the matched points;
B2. back-end optimization based on a pose graph: building a graph optimization over the trajectory only, with the edge between two pose nodes initialized by the camera motion estimated from feature matching between the two keyframes;
B3. loop-closure detection based on a bag-of-words model: detecting loop closures by similarity computation over keyframes and the bag-of-words model, and estimating the robot's motion;
B4. dense mapping: building the map with triangular meshes or surfels to estimate object surfaces, or building an occupancy grid map or octree map for navigation, to obtain the 3D space map.
3. The robot patrol method based on area map identification according to claim 1, characterized in that the area map information in step C is a 2D planar region or a 3D solid region.
4. The robot patrol method based on area map identification according to claim 3, characterized in that step C further comprises, after the robot receives the user-specified area map information and area map identifiers, the step of the robot determining the best patrol position from its own attributes and the attributes of the 2D planar region or 3D solid region.
5. The robot patrol method based on area map identification according to claim 4, characterized in that the robot's own attributes include at least the robot's field of view and height, the position of its camera, and whether the camera can rotate.
6. The robot patrol method based on area map identification according to claim 4, characterized in that the 2D planar regions are divided into regions lying in a horizontal plane and regions perpendicular to the horizontal plane,
and when the area map information is a 2D planar region lying in a horizontal plane, or a 3D solid region, the robot judges from the region's size and height whether it needs to enter the region to patrol, and determines the best patrol position; when the area map information is a 2D planar region perpendicular to the horizontal plane, the robot determines the best patrol position from the region's size, orientation and height.
7. The robot patrol method based on area map identification according to any one of claims 4 to 6, characterized in that the criterion for the best patrol position is the observation point at which the robot's field of view just covers the 2D planar region or 3D solid region.
8. The robot patrol method based on area map identification according to claim 1, characterized in that the patrol instruction in step D includes at least the identifiers of the area maps to be patrolled, the order in which they are to be patrolled, patrol time information and/or patrol period information.
9. A robot device that patrols based on area map identifiers, comprising: a camera module for capturing 2D images and/or depth images; a 3D map construction and localization module for estimating the robot's motion and building a 3D map of the space from the 2D images and/or depth images captured by the camera module; an area map information and identifier acquisition module for presenting the space 3D map obtained by the 3D map construction and localization module to the user and for receiving the area map information and area map identifiers specified by the user; and a patrol information reception and processing module for receiving and parsing the user's patrol instruction and controlling the robot to patrol according to that instruction, wherein the patrol instruction includes at least the area map identifiers of the areas to be patrolled.
10. The robot device that patrols based on area map identifiers according to claim 9, further comprising a communication module connected respectively to the 3D map construction and localization module and to the area map information and identifier acquisition module; the communication module is configured to send the space 3D map obtained by the 3D map construction and localization module to a communicatively connected user terminal, to receive the area map information and area map identifiers specified by the user terminal in return, and to forward the received area map information and area map identifiers to the area map information and identifier acquisition module.
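The module decomposition of claims 9 and 10 can be illustrated with a hypothetical skeleton; every class and method name here is an assumption, and real modules would wrap actual camera drivers, SLAM code and a user-interaction layer:

```python
class CameraModule:
    """Captures 2D images and/or depth images (stubbed for illustration)."""
    def capture(self):
        return ("rgb_frame", "depth_frame")


class MappingModule:
    """3D map construction and localization module: builds a space 3D map
    from the camera module's 2D/depth images."""
    def __init__(self, camera):
        self.camera = camera
        self.space_map = {}

    def build_map(self):
        rgb, depth = self.camera.capture()
        # Real code would run SLAM here; we just record the raw frame.
        self.space_map["frame_0"] = (rgb, depth)
        return self.space_map


class RegionModule:
    """Area map information and identifier acquisition module: stores the
    regions and identifiers the user marks on the displayed 3D map."""
    def __init__(self):
        self.regions = {}

    def register(self, identifier, region_info):
        self.regions[identifier] = region_info


class PatrolModule:
    """Patrol information reception and processing module: parses a patrol
    instruction (an ordered list of area map identifiers) and yields the
    corresponding regions in patrol order."""
    def __init__(self, region_module):
        self.region_module = region_module

    def execute(self, instruction):
        return [self.region_module.regions[i] for i in instruction]
```

The communication module of claim 10 would sit between `MappingModule`/`RegionModule` and a remote user terminal, relaying the map out and the user's marked regions back.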
CN201810608998.1A 2018-06-13 2018-06-13 Robot patrol method based on regional map identification and robot equipment Active CN108748184B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810608998.1A CN108748184B (en) 2018-06-13 2018-06-13 Robot patrol method based on regional map identification and robot equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810608998.1A CN108748184B (en) 2018-06-13 2018-06-13 Robot patrol method based on regional map identification and robot equipment

Publications (2)

Publication Number Publication Date
CN108748184A true CN108748184A (en) 2018-11-06
CN108748184B CN108748184B (en) 2020-04-28

Family

ID=64021628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810608998.1A Active CN108748184B (en) 2018-06-13 2018-06-13 Robot patrol method based on regional map identification and robot equipment

Country Status (1)

Country Link
CN (1) CN108748184B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109822568A (en) * 2019-01-30 2019-05-31 北京镁伽机器人科技有限公司 Robot control method, system and storage medium
CN110779528A (en) * 2019-11-07 2020-02-11 四川长虹电器股份有限公司 Particle filter-based positioning recovery method and robot equipment
CN110796706A (en) * 2019-11-08 2020-02-14 四川长虹电器股份有限公司 Visual positioning method and system
CN111256689A (en) * 2020-01-15 2020-06-09 北京智华机器人科技有限公司 Robot positioning method, robot and storage medium
CN111673758A (en) * 2020-05-28 2020-09-18 广州高新兴机器人有限公司 Multi-region defense deploying method, system, storage medium and equipment for patrol robot

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105352508A (en) * 2015-10-22 2016-02-24 深圳创想未来机器人有限公司 Method and device of robot positioning and navigation
WO2016209029A1 (en) * 2015-06-26 2016-12-29 (주)유진로봇 Optical homing system using stereoscopic camera and logo and method thereof
CN106296812A (en) * 2016-08-18 2017-01-04 宁波傲视智绘光电科技有限公司 Synchronize location and build drawing method
CN106895847A (en) * 2015-12-17 2017-06-27 大陆汽车投资(上海)有限公司 A kind of air navigation aid smeared based on map and guider
CN107214700A (en) * 2017-06-06 2017-09-29 青岛克路德机器人有限公司 A kind of robot autonomous patrol method
CN107741234A (en) * 2017-10-11 2018-02-27 深圳勇艺达机器人有限公司 The offline map structuring and localization method of a kind of view-based access control model
CN108072370A (en) * 2016-11-18 2018-05-25 中国科学院电子学研究所 Robot navigation method based on global map and the robot with this method navigation


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
丁洁琼: "Research on RGB-D-based SLAM Algorithms", China Master's Theses Full-text Database, Information Science and Technology *
李同: "Research on a SLAM Loop Closure Detection Device Based on the ORB Bag-of-Words Model", Information & Communications *


Also Published As

Publication number Publication date
CN108748184B (en) 2020-04-28

Similar Documents

Publication Publication Date Title
CN108748184A (en) Robot patrol method and robot device based on area map identification
CN110349250B (en) RGBD camera-based three-dimensional reconstruction method for indoor dynamic scene
EP3961485A1 (en) Image processing method, apparatus and device, and storage medium
US8933966B2 (en) Image processing device, image processing method and program
Cornelis et al. 3d urban scene modeling integrating recognition and reconstruction
Perez et al. Data fusion for visual tracking with particles
CN108733420A (en) Wake-up method and apparatus for smart device, smart device and storage medium
CN108168539A (en) Computer-vision-based navigation method, apparatus and system for the blind
CN110006343A (en) Method, device and terminal for measuring geometric parameters of an object
CN111260661B (en) Visual semantic SLAM system and method based on neural network technology
CN105074691A (en) Context aware localization, mapping, and tracking
AU2012244275A1 (en) Method, apparatus and system for determining a boundary of an obstacle which occludes an object in an image
CN113108771A (en) Movement pose estimation method based on closed-loop direct sparse visual odometer
CN112734765A (en) Mobile robot positioning method, system and medium based on example segmentation and multi-sensor fusion
Guan et al. 3d occlusion inference from silhouette cues
CN112991534B (en) Indoor semantic map construction method and system based on multi-granularity object model
CN111476089B (en) Pedestrian detection method, system and terminal for multi-mode information fusion in image
CN111161334A (en) Semantic map construction method based on deep learning
Shalaby et al. Algorithms and applications of structure from motion (SFM): A survey
Koch et al. Wide-area egomotion estimation from known 3d structure
Singh et al. Fusing semantics and motion state detection for robust visual SLAM
Cordea et al. Real-time 2 (1/2)-D head pose recovery for model-based video-coding
CN110587602A (en) Fish tank cleaning robot motion control device and control method based on three-dimensional vision
CN111368883B (en) Obstacle avoidance method based on monocular camera, computing device and storage device
CN117274036A (en) Parking scene detection method based on multi-view and time sequence fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant