CN105856243A - Movable intelligent robot - Google Patents
- Publication number
- CN105856243A (application CN201610486485.9A / CN201610486485A)
- Authority
- CN
- China
- Prior art keywords
- robot
- intelligent robot
- module
- mobile intelligent
- map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
- B25J19/021—Optical sensing devices
- B25J19/023—Optical sensing devices including video camera means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J5/00—Manipulators mounted on wheels or on carriages
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1679—Programme controls characterised by the tasks executed
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Automation & Control Theory (AREA)
- Multimedia (AREA)
- Manipulator (AREA)
- Image Analysis (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention discloses a mobile intelligent robot comprising an acquisition module, a localization and map-building module, and a navigation module. The acquisition module obtains three-dimensional image information; the localization and map-building module identifies targets according to the three-dimensional image information, builds a map, and localizes the robot; the navigation module navigates the robot autonomously according to the built map. The mobile intelligent robot can serve both as a finished intelligent-robot product and as a secondary-development platform; it offers high processing performance and mobility at a relatively low price and is applicable to occasions such as daily services, object delivery, navigation, and remote operation.
Description
Technical field
The present invention relates to the field of intelligent robot technology, and in particular to a mobile intelligent robot.
Background technology
A mobile robot is a device that performs work automatically. It can accept human commands, run pre-programmed routines, and act according to principles formulated with artificial-intelligence techniques. Its task is to assist or replace human labor, for example in manufacturing, construction, or hazardous work.
In recent years, with the progress of robotics research, robot technology has steadily permeated every field of human activity. Combining the application characteristics of these fields, robots with a variety of functions have been developed and widely deployed in different applications.
At present, the domestic market demand for mobile intelligent robot platforms is large, so it is highly desirable to provide a mobile intelligent robot with relatively high processing performance and mobility at a relatively low price.
Summary of the invention
The object of the present invention is to provide a mobile intelligent robot, with the aim of solving the problems of existing robots: limited processing performance and mobility, and high price.
To solve the above technical problem, the present invention provides a mobile intelligent robot, comprising:
an acquisition module for obtaining three-dimensional image information;
a localization and map-building module for identifying targets according to the three-dimensional image information, building a map, and determining the position of the robot;
a navigation module for navigating the robot autonomously using the built map.
Optionally, the acquisition module is specifically configured to obtain color image information and depth image information.
Optionally, the acquisition module further comprises:
a gradient and texture processing unit for suppressing and eliminating gradients and textures in the three-dimensional image using a region-suppression weighting method;
a moiré processing unit for generating an adaptive filtering kernel from the moiré frequency and local gradient features, obtaining a preliminary estimate of the original image by adaptive filtering, and obtaining the processed image by joint edge-preserving filtering.
Optionally, the localization and map-building module comprises:
a two-dimensional position information acquiring unit for obtaining the two-dimensional position of the robot from Wi-Fi signals, GPS signals, or cellular base-station signals, and/or obtaining the coordinates of the robot after it moves from on-board sensors;
a vertical position information acquiring unit for obtaining the position of the robot in the vertical (height) direction using a rangefinder.
Optionally, the navigation module is configured to plan the path of the robot according to the map and to control the robot's motion so as to avoid the obstacles around the path.
Optionally, the robot further comprises:
a tracking module for analyzing the contextual relevance of targets with an attribute-relation graph, labelling fragments, segmenting target regions, and tracking multiple moving targets simultaneously.
Optionally, the robot further comprises:
a gesture recognition module for projecting a dot pattern from the infrared window of a sensor and recognizing the user's gesture according to changes in the dot-matrix image.
Optionally, the robot further comprises:
a face recognition module for judging whether a face image exists in a dynamic or static background, and separating out the face image.
Optionally, the robot further comprises:
a skeleton tracking module for scanning the human body with a 3D scanning device and obtaining skeleton data.
Optionally, the robot further comprises:
a speech recognition and interaction module for recognizing voice information, with cloud support, by extending and training pre-saved simulation data.
The mobile intelligent robot provided by the present invention comprises an acquisition module for obtaining three-dimensional image information; a localization and map-building module that identifies targets from the three-dimensional image information, builds a map, and localizes the robot; and a navigation module that navigates the robot autonomously using the built map. The robot can serve both as a finished intelligent-robot product and as a secondary-development platform. It offers relatively high processing performance and mobility at a relatively low price and can be applied to occasions such as daily services, object delivery, navigation, and remote operation.
Brief description of the drawings
To explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the accompanying drawings required by the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a structural block diagram of an embodiment of the mobile intelligent robot provided by the present invention;
Fig. 2 is a schematic diagram of the process of eliminating the gradients and moiré of a static non-reflective object provided by the present invention;
Fig. 3 is a schematic diagram of the process of eliminating the moiré of a static reflective object provided by the present invention;
Fig. 4 is a schematic diagram of one working process of the two-dimensional position information acquiring unit provided by the present invention;
Fig. 5 is a schematic diagram of another working process of the two-dimensional position information acquiring unit provided by the present invention;
Fig. 6 is a schematic diagram of the localization process provided by this embodiment;
Fig. 7 is a schematic diagram of the tracking process provided by this embodiment;
Fig. 8 is a schematic diagram of the face recognition process provided by this embodiment;
Fig. 9 is a schematic diagram of the convolution computation carried out during the face recognition provided by this embodiment;
Fig. 10 is a working diagram of the skeleton tracking provided herein;
Fig. 11 is a working diagram of the speech recognition process provided herein.
Detailed description of the invention
To help those skilled in the art better understand the solution of the present invention, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
A structural block diagram of an embodiment of the mobile intelligent robot provided by the present invention is shown in Fig. 1. The robot includes:
an acquisition module 100 for obtaining three-dimensional image information;
a localization and map-building module 200 for identifying targets according to the three-dimensional image information, building a map, and localizing the robot;
a navigation module 300 for navigating the robot autonomously using the built map.
As described above, the mobile intelligent robot provided by the present invention comprises an acquisition module for obtaining three-dimensional image information, a localization and map-building module, and a navigation module for autonomous navigation on the built map. It can serve both as a finished intelligent-robot product and as a secondary-development platform, offering relatively high processing performance and mobility at a relatively low price, and is applicable to daily services, object delivery, navigation, and remote operation.
On the basis of the above embodiment, the acquisition module of the mobile intelligent robot provided by the present invention is specifically configured to obtain color image information and depth image information.
A traditional camera based on visible light captures two-dimensional images and therefore cannot accurately identify and localize three-dimensional objects, whereas a camera with 3D capability can quickly obtain a three-dimensional model of the subject from the subject's features and from depth cues such as defocus blur.
By obtaining color images together with depth images (hereinafter RGBD images), the robot provided by the present invention solves a series of problems in three-dimensional reconstruction, such as image preprocessing, depth-image alignment, and point-cloud data acquisition, and achieves a simple, high-quality three-dimensional image reconstruction method.
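The patent does not detail the point-cloud step; as an illustrative sketch (not from the patent), the core operation of turning an aligned RGBD image into colored 3D points is the standard pinhole back-projection, where the intrinsics `fx`, `fy`, `cx`, `cy` below are hypothetical toy values:

```python
# Back-project an aligned RGBD image into a colored point cloud (pinhole model).
# All intrinsics and pixel values are illustrative placeholders, not patent data.

def rgbd_to_points(depth, color, fx, fy, cx, cy):
    """depth: rows of depth values in meters; color: same-shape rows of (r,g,b)."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:                     # invalid / missing depth sample
                continue
            x = (u - cx) * z / fx          # pinhole back-projection
            y = (v - cy) * z / fy
            points.append(((x, y, z), color[v][u]))
    return points

depth = [[0.0, 2.0],
         [1.0, 0.0]]
color = [[(0, 0, 0), (255, 0, 0)],
         [(0, 255, 0), (0, 0, 0)]]
cloud = rgbd_to_points(depth, color, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Each valid depth pixel yields one colored 3D point; invalid (zero) depth samples are skipped, which mirrors the missing-data handling any real RGBD pipeline needs.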
Further, the acquisition module of the mobile intelligent robot provided by the present invention may also include:
a gradient and texture processing unit for suppressing and eliminating gradients and textures in the three-dimensional image using a region-suppression weighting method;
a moiré processing unit for generating an adaptive filtering kernel from the moiré frequency and local gradient features, obtaining a preliminary estimate of the original image by adaptive filtering, and obtaining the processed image by joint edge-preserving filtering.
When the camera photographs a stationary object in a complex scene, a large number of textured regions are produced, and these regions generate many spurious ghost peaks in the electronic picture. If these textured regions are not eliminated, the robot will produce results and actions that do not match reality; they are therefore suppressed and eliminated in software by the region-suppression weighting method. This is illustrated in Fig. 2, the schematic diagram of eliminating the gradients and moiré of a static non-reflective object provided by the present invention.
In this embodiment, the robot system uses the redundancy of the moiré texture to remove random image noise: an adaptive filtering kernel is generated from the moiré frequency and local gradient features, a preliminary estimate of the original image is obtained by adaptive filtering, and the final high-quality full-color image is obtained by joint edge-preserving filtering. It should be pointed out that here "edge" refers to the extension of a reflective object over certain pixels in the camera image, which is minimally affected by lighting. This is illustrated in Fig. 3, the schematic diagram of eliminating the moiré of a static reflective object provided by the present invention.
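The patent names "joint edge-preserving filtering" without giving the filter. As a hedged one-dimensional illustration of the edge-preserving idea, the bilateral-style filter below averages each sample with its neighbors, weighted both by spatial distance and by intensity similarity, so texture noise is smoothed while sharp edges survive; all parameter values are assumptions:

```python
import math

# Minimal 1-D edge-preserving (bilateral-style) filter.
# sigma_s / sigma_r and the sample signal are illustrative assumptions.

def edge_preserving_filter(signal, radius=2, sigma_s=1.0, sigma_r=10.0):
    out = []
    for i, center in enumerate(signal):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            ds = (i - j) ** 2 / (2 * sigma_s ** 2)               # spatial term
            dr = (signal[j] - center) ** 2 / (2 * sigma_r ** 2)  # range term
            w = math.exp(-(ds + dr))
            wsum += w
            vsum += w * signal[j]
        out.append(vsum / wsum)
    return out

# A noisy step edge: filtering flattens the noise but keeps the step.
noisy = [10, 11, 9, 10, 100, 101, 99, 100]
smoothed = edge_preserving_filter(noisy)
```

Because dissimilar intensities get near-zero range weights, samples on the low side of the step barely influence the high side, which is exactly the property that distinguishes this family of filters from a plain Gaussian blur.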
On the basis of any of the above embodiments, the localization and map-building module of the mobile intelligent robot provided by the present invention may specifically include:
a two-dimensional position information acquiring unit for obtaining the two-dimensional position of the robot from Wi-Fi signals, GPS signals, or cellular base-station signals, and/or obtaining the coordinates of the robot after it moves from on-board sensors;
a vertical position information acquiring unit for obtaining the position of the robot in the vertical (height) direction using a rangefinder.
The specific working process of the two-dimensional position information acquiring unit is as follows.
If there is signal coverage indoors, all sensors and instruments are started for 2D (XY) position measurement. The latitude and longitude of the robot's position can be obtained from GPS signals; a position signal can be obtained from cellular base-station signals; distance can be measured from Wi-Fi signals; the gyroscope and accelerometer work together to compute the direction and speed of movement; and the magnetometer measures the magnitude and direction of the Earth's magnetic field. From these sensors, the XY coordinates of the robot after it moves can be obtained. This process is shown in Fig. 4, a schematic diagram of one working process of the two-dimensional position information acquiring unit provided by the present invention.
If there is no signal indoors, only the lidar scanning rangefinder, gyroscope, accelerometer, magnetometer, and acoustic rangefinder are started, and the XY coordinates of the robot after it moves are obtained in the same way. This process is shown in Fig. 5, a schematic diagram of another working process of the two-dimensional position information acquiring unit provided by the present invention.
The 3D (Z-dimension) position can be obtained as follows: while the XY coordinates are being acquired, the lidar scanning rangefinder and the acoustic rangefinder measure range in the Z dimension and record the Z-dimension information.
The robot provided by this embodiment carries a cellular base-station signal meter, Wi-Fi, GPS, a lidar scanning rangefinder, a gyroscope, an accelerometer, a magnetometer (for determining the direction of the magnetic field), and an acoustic rangefinder. The robot can build a map in a completely unknown environment while its own position is uncertain, and can simultaneously use the map to localize and navigate autonomously.
The precision of any single sensor or instrument is not high, and the measured x, y, z coordinates are not individually accurate. However, when the robot moves through multiple positions, the different sensors and instruments upload their data to the robot host simultaneously; the on-board system software computes, by probability statistics, a "high-probability distance" that carries a bounded error with respect to the actual distance, and synthesizes it with machine-learning and pattern-recognition algorithms. The robot system can then compute distances within a certain confidence interval and obtain an accurate position. In this way, even in an unknown indoor environment without reference landmarks, the robot can correct its own position from the sensor data as it moves; the system software can draw an accurate indoor map, express the environment information correctly, and output indoor-map SLAM construction data. The localization process provided by this embodiment is shown in Fig. 6.
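The patent invokes "probability statistics" without a formula. One common way to realize the described fusion of several imprecise sensors into one tighter estimate, offered here purely as a hedged sketch, is to weight each sensor's reading by the inverse of its variance; the sensor readings and variances below are made-up numbers:

```python
# Inverse-variance fusion of independent position estimates along one axis.
# Readings and variances are invented illustrative values, not patent data.

def fuse(estimates):
    """estimates: list of (measured_position, variance) pairs."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * x for (x, _), w in zip(estimates, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)   # fused estimate is tighter than any input
    return fused, fused_var

# GPS (coarse), Wi-Fi ranging (medium), lidar (fine) measuring the same x:
readings = [(10.8, 4.0), (10.2, 1.0), (10.05, 0.04)]
x, var = fuse(readings)
```

The fused variance is always smaller than the best single sensor's variance, which is the statistical justification for combining many individually imprecise instruments, as the text describes.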
On the basis of any of the above embodiments, the navigation module of the mobile intelligent robot provided by the present invention is configured to plan the path of the robot according to the map and to control the robot's motion so as to avoid the obstacles around the path.
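The patent does not specify the planner. As an assumed minimal illustration of planning an obstacle-avoiding path on a built occupancy-grid map, a breadth-first search returns the shortest 4-connected route; the grid is a toy map, not one produced by the described SLAM process:

```python
from collections import deque

# Shortest path on an occupancy grid via BFS (4-connected).
# The grid is a toy map: 0 = free cell, 1 = obstacle.

def plan_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                  # reconstruct path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                           # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
```

A real planner would add costs and smoothing (e.g. A* with a distance heuristic), but the obstacle-avoidance property is the same: blocked cells never enter the frontier, so the returned path routes around them.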
In addition, the mobile intelligent robot provided by the present invention may further include:
a tracking module for analyzing the contextual relevance of targets with an attribute-relation graph, labelling fragments, segmenting target regions, and tracking multiple moving targets simultaneously.
Traditional tracking methods cannot fully segment the target region, which leads to tracking failure. The robot provided by the present invention instead divides the different moving targets in the foreground into "fragments" with consistent features, analyzes the contextual relevance of the targets with an attribute-relation graph, labels the fragments, and recognizes and tracks the targets while segmenting the target regions. This tracking technique adopts a joint-data, improved-association multi-object tracking method and, according to the association probabilities, analyzes multi-target tracking under complex situations such as targets appearing, disappearing, being occluded, and separating. The tracking process provided by this embodiment is shown in Fig. 7.
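The patent's association method is only named, not specified. As a hedged stand-in, the greedy nearest-neighbor association below matches existing tracks to new detections, declares unmatched detections as newly appearing targets and unmatched tracks as disappeared; it is the simplest member of the data-association family the text alludes to, and all coordinates are toy values:

```python
# Greedy nearest-neighbor association between tracks and detections.
# Gate distance and the coordinates are illustrative assumptions.

def associate(tracks, detections, gate=2.0):
    """tracks: {id: (x, y)}; detections: [(x, y)]. Returns matches/leftovers."""
    pairs = sorted(
        (abs(tx - dx) + abs(ty - dy), ti, di)   # Manhattan distance
        for ti, (tx, ty) in tracks.items()
        for di, (dx, dy) in enumerate(detections)
    )
    matched_t, matched_d, matches = set(), set(), {}
    for dist, ti, di in pairs:
        if dist <= gate and ti not in matched_t and di not in matched_d:
            matches[ti] = di
            matched_t.add(ti)
            matched_d.add(di)
    appeared = [di for di in range(len(detections)) if di not in matched_d]
    disappeared = [ti for ti in tracks if ti not in matched_t]
    return matches, appeared, disappeared

tracks = {"A": (0.0, 0.0), "B": (5.0, 5.0)}
detections = [(0.5, 0.2), (9.0, 9.0)]   # A moved slightly; B lost; new target
matches, appeared, disappeared = associate(tracks, detections)
```

Probabilistic methods such as JPDA replace the hard gate with association probabilities, which is closer to what the text describes, but the appear/disappear bookkeeping is the same.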
Further, the present application may also include:
a gesture recognition module for projecting a dot pattern from the infrared window of a sensor and recognizing the user's gesture according to changes in the dot-matrix image.
The principle of gesture recognition is as follows: a large-area pattern of infrared dots is projected onto the hand from the infrared window of the sensor; the dot-matrix image changes its pattern with the light reflected by the hand, and also changes with the distance between the object and the emitter.
Gestures are a natural, intuitive means of human-machine interaction that is well suited to machine learning. A static gesture corresponds to a set of static points in space, while a dynamic gesture corresponds to a trajectory in space and must be described with time-varying spatial features. Gesture recognition on the mobile intelligent robot is divided into four stages: gesture segmentation, gesture modelling, gesture analysis, and gesture recognition, all of which run on the control-system motherboard.
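As a hedged sketch of the structured-light principle described above (the dot pattern shifting with distance), depth can be recovered from the displacement of each projected dot relative to its reference position via the standard triangulation relation z = f·b / d; the focal length and baseline below are invented for illustration, not sensor specifications:

```python
# Structured-light depth from dot displacement (triangulation sketch).
# Focal length f (pixels) and baseline b (meters) are illustrative values.

def depth_from_disparity(disparities, f=580.0, b=0.075):
    """disparities: observed dot shifts in pixels; returns depths in meters."""
    return [f * b / d if d > 0 else None for d in disparities]

# Larger shift -> closer surface (e.g. a hand in front of the background).
shifts = [87.0, 43.5, 0.0]
depths = depth_from_disparity(shifts)
```

A gesture pipeline would then segment the hand as the connected region of small depth and feed its shape or trajectory into the four stages listed above.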
The application may also include:
a face recognition module for judging whether a face image exists in a dynamic or static background, and separating out the face image.
Because of variations in shooting angle, illumination intensity, expression, age, make-up, occlusion, and picture quality, different pictures of the same person's face can differ greatly; at the same time, as the volume of identity data to be recognized increases sharply, the probability of encountering look-alike pictures grows steadily. The main difficulty that robot face recognition has to solve is how to let the computer still recognize, under the interference of many variations across different faces, the possibly faint variations of a particular individual's face.
The control-system software of the mobile service robot uses face big data, and with this "value network" computes within-class face variation; the "convolutional neural network" method, together with a new deformation layer, selects the individual face to be recognized, and the two different "networks" cooperate to carry out the final face recognition.
The face recognition process is shown in Fig. 8, and the convolution computation carried out during face recognition is shown in Fig. 9.
The face recognition provided by the present invention has the following advantages: (1) it creatively establishes a correspondence between each layer of the object model to be recognized and each key step of a traditional object-detection system; (2) on the basis of the convolutional network it proposes a new deformation layer, through which objects of different classes can share component models and deformation models.
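The patent presents the convolution computation only as a figure (Fig. 9). The elementary operation behind it, sketched here with made-up numbers, is a 2-D "valid" convolution (cross-correlation, as used in CNN layers) that slides a small kernel over the image and sums the element-wise products:

```python
# 2-D "valid" cross-correlation, the core computation of a convolutional layer.
# Image and kernel values are toy numbers for illustration.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
edge_kernel = [[1, -1]]        # horizontal difference detector
feature_map = conv2d(image, edge_kernel)
```

A network stacks many such kernels with nonlinearities between layers; the deformation layer the patent proposes would sit on top of feature maps produced this way.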
As a specific embodiment, the robot provided by the present invention may further include:
a skeleton tracking module for scanning the human body with a 3D scanning device and obtaining skeleton data.
Skeleton tracking obtains the three-dimensional positions of the body's joints, providing rich information for human-posture recognition; the intention is to build a real-time human-posture recognition system on the basis of the three-dimensional joint positions. After the human body is recognized as a skeleton, the system software stores the skeleton and automatically produces 22 joints.
The skeleton tracking technique provided by the present invention has the following advantage: skeleton recognition does not need to call any skeleton database in the cloud. It only needs the 3D scanning device carried by the robot to scan the human body quickly and feed the data into the robot system software. Because the proportions between a person's bone lengths and their attachment positions are essentially constant within an age bracket, a local static skeleton database built into the system software, allowing a certain error, is sufficient. The working diagram of the skeleton tracking provided herein is shown in Fig. 10.
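A hedged sketch of the local matching step implied above: because bone-length ratios are roughly constant, a scanned skeleton can be compared to a small built-in table of reference ratios with an allowed error, without any cloud lookup. The reference ratios, bone names, and tolerance are invented for illustration:

```python
# Match scanned bone-length ratios against a built-in static table.
# Reference ratios, bone names, and the tolerance are illustrative assumptions.

REFERENCE_RATIOS = {          # bone length / spine length (hypothetical table)
    "upper_arm": 0.55,
    "forearm": 0.45,
    "thigh": 0.85,
}

def matches_reference(bones, spine_length, tolerance=0.08):
    """bones: measured lengths by name; True if all ratios fall within tolerance."""
    for name, ref in REFERENCE_RATIOS.items():
        ratio = bones[name] / spine_length
        if abs(ratio - ref) > tolerance:   # outside the allowed error
            return False
    return True

scan = {"upper_arm": 28.0, "forearm": 23.0, "thigh": 43.0}
ok = matches_reference(scan, spine_length=50.0)
```

Normalizing by a reference bone (here the spine) is what makes the check scale-invariant, so the same static table serves people of different heights, consistent with the "allow certain error" wording above.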
The mobile intelligent robot provided by the present invention may also include:
a speech recognition and interaction module for recognizing voice information, with cloud support, by extending and training pre-saved simulation data. The working diagram of this process is shown in Fig. 11.
For a mobile intelligent robot, calling the cloud speech service raises two problems to be solved.
The first is the need to set a language baseline. The locally saved simulation data are extended and the training is strengthened so that the robot has a basic language recognition ability. That is, the data of the language cloud are placed into the robot system software, and the system software automatically identifies them and marks them with "labels" (such as category and speech quality); in this way, the robot acquires a basic language recognition ability.
The second is a self-upgrade capability. Within the language set of the robot system software, software techniques are applied to expand the cloud language recognition "database" and to correct the errors of the first-stage "language baseline model"; the results of the two models are combined to form a new language baseline, which is kept in the robot system software. This is a combined language self-upgrade technique that balances the interpretability of the language model against performance requirements: technical rationality comes from the first part, the language baseline, while the reasonableness of multi-functionality and extension is guaranteed by the self-upgrade capability.
In summary, the mobile intelligent robot platform provided by the present invention can, under intelligent control, build indoor maps, navigate autonomously, and interact with people. With the support of the ROS operating system, the robot can perform indoor real-time localization and map building, recognize, understand, and localize 3D objects, and carry out fully optimized autonomous navigation around obstacles; the robot can also simultaneously realize human-machine interaction such as gesture recognition, skeleton tracking, face recognition, and speech recognition. ROS is currently the most complete open-source robot operating system; the platform can be applied in occasions such as daily services, object delivery, navigation, and remote operation, as well as in related research institutes, laying a solid foundation for the study of various practical intelligent service robots.
The embodiments in this specification are described progressively, with each embodiment focusing on its differences from the others; for identical or similar parts, the embodiments may refer to each other. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief, and the relevant parts refer to the description of the method.
Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally by function. Whether these functions are executed in hardware or in software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementations should not be regarded as going beyond the scope of the present invention.
The steps of the method or algorithm described in connection with the embodiments disclosed herein can be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random-access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
The mobile intelligent robot provided by the present invention has been described in detail above. Specific examples have been used herein to explain the principles and embodiments of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. It should be pointed out that those skilled in the art can make several improvements and modifications to the present invention without departing from its principles, and such improvements and modifications also fall within the scope of the claims of the present invention.
Claims (10)
1. a Mobile Intelligent Robot, it is characterised in that including:
Acquisition module, is used for obtaining three-dimensional image information;
Location and map structuring module, for knowing target according to described three-dimensional image information
Not, and build map, robot is carried out location positioning;
Navigation module, for utilizing the described map of structure to carry out described robot from leading
Boat.
2. Mobile Intelligent Robot as claimed in claim 1, it is characterised in that described in obtain
Delivery block is specifically for obtaining color image information and deep image information.
3. Mobile Intelligent Robot as claimed in claim 2, it is characterised in that described in obtain
Delivery block also includes:
Gradient and texture processing unit, face territory suppression method of weighting suppression for employing and eliminate three
Gradient in dimension image and texture;
Reticulate pattern processing unit, is used for utilizing reticulate pattern frequency and partial gradient feature to be produced from applicable
Filtering core, by self application filtering obtain original image according to a preliminary estimate, use associating edge guarantor
Hold the image after filtering acquisition processes.
4. Mobile Intelligent Robot as claimed in claim 1, it is characterised in that described fixed
Position includes with map structuring module:
Two-dimensional position information acquiring unit, for by wifi signal, gps signal, mobile phone base
Stand the two-dimensional position information of robot described in signal acquisition;And/or obtain described machine by sensor
Device people move after position coordinates;
Vertical position information acquiring unit, for obtaining the vertical of described robot by diastimeter
The positional information of short transverse.
5. The mobile intelligent robot of any one of claims 1 to 4, wherein the navigation module is configured to plan the path of the robot according to the map, and to control the motion of the robot so as to avoid obstacles along the path.
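Claim 5 names path planning with obstacle avoidance but no algorithm; one minimal, assumed realization is breadth-first search over an occupancy grid, where obstacle cells are simply never expanded:

```python
from collections import deque


def plan_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (1 = obstacle).

    Returns a list of (row, col) cells from start to goal, or None if the
    goal is unreachable. Obstacle cells are skipped during expansion, so
    the returned path keeps clear of them.
    """
    h, w = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk predecessors back to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None
```

The motion-control side of claim 5 would then track this cell sequence while reacting to newly sensed obstacles.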
6. The mobile intelligent robot of claim 5, characterized by further comprising:
a tracking module, configured to analyze the contextual relevance of targets using an attributed relational graph, perform fragment labeling, segment out target regions, and track multiple moving targets simultaneously.
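The multi-target tracking of claim 6 needs some association step between existing tracks and new detections; the greedy nearest-neighbour rule below is one assumed, much-simplified stand-in for the attributed-relational-graph analysis the claim describes:

```python
def associate_targets(tracks, detections, max_dist=5.0):
    """Match each track to its closest unclaimed detection within max_dist.

    tracks:     {track_id: (x, y)} last known positions
    detections: [(x, y), ...] positions in the new frame
    Returns {track_id: detection} for every successfully matched track,
    letting several moving targets be followed simultaneously.
    """
    assignments = {}
    free = list(detections)
    for tid, (tx, ty) in tracks.items():
        best, best_d = None, max_dist
        for det in free:
            d = ((det[0] - tx) ** 2 + (det[1] - ty) ** 2) ** 0.5
            if d < best_d:
                best, best_d = det, d
        if best is not None:
            assignments[tid] = best
            free.remove(best)       # each detection feeds at most one track
    return assignments
```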
7. The mobile intelligent robot of claim 6, characterized by further comprising:
a gesture recognition module, configured to project a dot pattern from the infrared window of the sensor and to recognize the gestures of the user from changes in the dot-matrix image.
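The core signal in claim 7 is change between successive dot-matrix readings; a crude illustrative reduction (depth values per dot and a fixed threshold are assumptions) is simple frame differencing:

```python
def motion_mask(prev_dots, curr_dots, threshold=2.0):
    """Flag projected dots whose measured depth changed by more than
    `threshold` between frames; the flagged dots trace the moving hand
    that a gesture classifier would then interpret."""
    return [abs(a - b) > threshold for a, b in zip(prev_dots, curr_dots)]
```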
8. The mobile intelligent robot of claim 7, characterized by further comprising:
a face recognition module, configured to determine whether a face image is present in dynamic and static backgrounds, and to segment out the face image.
9. The mobile intelligent robot of claim 8, characterized by further comprising:
a skeleton tracking module, configured to scan the human body with a 3D scanning device to obtain skeleton data.
10. The mobile intelligent robot of claim 9, characterized by further comprising:
a speech recognition and interaction module, configured to recognize voice information, with cloud support, by expanding and training pre-saved simulation data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610486485.9A CN105856243A (en) | 2016-06-28 | 2016-06-28 | Movable intelligent robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105856243A true CN105856243A (en) | 2016-08-17 |
Family
ID=56655786
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610486485.9A Pending CN105856243A (en) | 2016-06-28 | 2016-06-28 | Movable intelligent robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105856243A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103353758A (en) * | 2013-08-05 | 2013-10-16 | 青岛海通机器人系统有限公司 | Indoor robot navigation device and navigation technology thereof |
CN103561194A (en) * | 2013-09-16 | 2014-02-05 | 湖南大学 | Scanned image descreening method based on adaptive filtering |
CN104751491A (en) * | 2015-04-10 | 2015-07-01 | 中国科学院宁波材料技术与工程研究所 | Method and device for tracking crowds and counting pedestrian flow |
CN105058389A (en) * | 2015-07-15 | 2015-11-18 | 深圳乐行天下科技有限公司 | Robot system, robot control method, and robot |
2016-06-28: CN CN201610486485.9A patent application filed (status: Pending)
Non-Patent Citations (2)
Title |
---|
YANG Hong et al.: "Three-dimensional map creation of indoor environments for mobile robots based on the Kinect sensor", Journal of Southeast University * |
JIA Songmin et al.: "Three-dimensional SLAM for mobile robots based on an RGB-D camera", Journal of Huazhong University of Science and Technology * |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106653015A (en) * | 2016-10-28 | 2017-05-10 | 海南双猴科技有限公司 | Speech recognition method by and apparatus for robot |
US11119722B2 (en) | 2016-11-08 | 2021-09-14 | Sharp Kabushiki Kaisha | Movable body control apparatus and recording medium |
EP3540591A4 (en) * | 2016-11-08 | 2019-12-18 | Sharp Kabushiki Kaisha | Movable body control apparatus and movable body control program |
CN109906435A (en) * | 2016-11-08 | 2019-06-18 | 夏普株式会社 | Mobile member control apparatus and moving body control program |
CN106767750A (en) * | 2016-11-18 | 2017-05-31 | 北京光年无限科技有限公司 | A kind of air navigation aid and system for intelligent robot |
CN106780631A (en) * | 2017-01-11 | 2017-05-31 | 山东大学 | A kind of robot closed loop detection method based on deep learning |
CN106931978A (en) * | 2017-02-23 | 2017-07-07 | 湖南天特智能科技有限公司 | The method of the automatic indoor map generation for building road network |
CN106931978B (en) * | 2017-02-23 | 2020-05-22 | 湖南傲派自动化设备有限公司 | Indoor map generation method for automatically constructing road network |
CN107229903A (en) * | 2017-04-17 | 2017-10-03 | 深圳奥比中光科技有限公司 | Method, device and the storage device of robot obstacle-avoiding |
CN108803588A (en) * | 2017-04-28 | 2018-11-13 | 深圳乐动机器人有限公司 | The control system of robot |
CN108107884A (en) * | 2017-11-20 | 2018-06-01 | 北京理工华汇智能科技有限公司 | Robot follows the data processing method and its intelligent apparatus of navigation |
CN108170166A (en) * | 2017-11-20 | 2018-06-15 | 北京理工华汇智能科技有限公司 | The follow-up control method and its intelligent apparatus of robot |
CN107844118A (en) * | 2017-11-22 | 2018-03-27 | 江苏清投视讯科技有限公司 | A kind of robot of view-based access control model navigation AGV technologies |
CN107738852B (en) * | 2017-11-30 | 2020-03-31 | 歌尔科技有限公司 | Positioning method, positioning map construction method and robot |
CN107738852A (en) * | 2017-11-30 | 2018-02-27 | 歌尔科技有限公司 | Localization method, positioning map construction method and robot |
CN107932481A (en) * | 2017-12-04 | 2018-04-20 | 湖南瑞森可机器人科技有限公司 | A kind of composite machine people and its control method |
CN108789435A (en) * | 2018-06-15 | 2018-11-13 | 廊坊瑞立达智能机器有限公司 | Multifunction speech controls service robot |
CN108858195A (en) * | 2018-07-16 | 2018-11-23 | 睿尔曼智能科技(北京)有限公司 | A kind of Triple distribution control system of biped robot |
CN110728684B (en) * | 2018-07-17 | 2021-02-02 | 北京三快在线科技有限公司 | Map construction method and device, storage medium and electronic equipment |
CN110728684A (en) * | 2018-07-17 | 2020-01-24 | 北京三快在线科技有限公司 | Map construction method and device, storage medium and electronic equipment |
CN109040677A (en) * | 2018-07-27 | 2018-12-18 | 中山火炬高新企业孵化器有限公司 | Garden security protection early warning system of defense |
CN108789421A (en) * | 2018-09-05 | 2018-11-13 | 厦门理工学院 | Cloud robot interactive method and cloud robot based on cloud platform and cloud platform |
CN109191499A (en) * | 2018-09-05 | 2019-01-11 | 顺德职业技术学院 | A kind of robotic tracking's route update method and system based on motion target tracking |
CN109507677A (en) * | 2018-11-05 | 2019-03-22 | 浙江工业大学 | A kind of SLAM method of combination GPS and radar odometer |
CN109507677B (en) * | 2018-11-05 | 2020-08-18 | 浙江工业大学 | SLAM method combining GPS and radar odometer |
WO2020151429A1 (en) * | 2019-01-21 | 2020-07-30 | 广东康云科技有限公司 | Robot dog system and implementation method therefor |
CN110047150A (en) * | 2019-04-24 | 2019-07-23 | 大唐环境产业集团股份有限公司 | It is a kind of based on augmented reality complex device operation operate in bit emulator system |
CN110047150B (en) * | 2019-04-24 | 2023-06-16 | 大唐环境产业集团股份有限公司 | Complex equipment operation on-site simulation system based on augmented reality |
CN110298919A (en) * | 2019-05-13 | 2019-10-01 | 广东领盛装配式建筑科技有限公司 | A kind of cloud service system applied to building intelligence measurement |
CN110298919B (en) * | 2019-05-13 | 2023-08-29 | 广东领慧数字空间科技有限公司 | Cloud service system applied to intelligent measurement of building |
CN110065076A (en) * | 2019-06-04 | 2019-07-30 | 佛山今甲机器人有限公司 | A kind of robot secondary development editing system |
CN111688526A (en) * | 2020-06-18 | 2020-09-22 | 福建百城新能源科技有限公司 | User side new energy automobile energy storage charging station |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105856243A (en) | Movable intelligent robot | |
US11030525B2 (en) | Systems and methods for deep localization and segmentation with a 3D semantic map | |
CN103680291B (en) | The method synchronizing location and mapping based on ceiling vision | |
CN102313547B (en) | Vision navigation method of mobile robot based on hand-drawn outline semantic map | |
US10086955B2 (en) | Pattern-based camera pose estimation system | |
CN106840148A (en) | Wearable positioning and path guide method based on binocular camera under outdoor work environment | |
CN108810473B (en) | Method and system for realizing GPS mapping camera picture coordinate on mobile platform | |
CN109931939A (en) | Localization method, device, equipment and the computer readable storage medium of vehicle | |
US10451403B2 (en) | Structure-based camera pose estimation system | |
CN104704384A (en) | Image processing method, particularly used in a vision-based localization of a device | |
CN105279750A (en) | Equipment display guiding system based on IR-UWB and image moment | |
WO2021082745A1 (en) | Information completion method, lane line recognition method, intelligent driving method and related product | |
CN104484868B (en) | The moving target of a kind of combination template matches and image outline is taken photo by plane tracking | |
CN103605978A (en) | Urban illegal building identification system and method based on three-dimensional live-action data | |
US9858669B2 (en) | Optimized camera pose estimation system | |
CN108955682A (en) | Mobile phone indoor positioning air navigation aid | |
CN105608417A (en) | Traffic signal lamp detection method and device | |
US20230236280A1 (en) | Method and system for positioning indoor autonomous mobile robot | |
AU2021255130B2 (en) | Artificial intelligence and computer vision powered driving-performance assessment | |
CN110260866A (en) | A kind of robot localization and barrier-avoiding method of view-based access control model sensor | |
CN113914407B (en) | Excavator excavation tunnel accurate control system based on BIM and machine vision | |
CN112833892B (en) | Semantic mapping method based on track alignment | |
CN111400423B (en) | Smart city CIM three-dimensional vehicle pose modeling system based on multi-view geometry | |
CN103759724B (en) | A kind of indoor navigation method based on lamp decoration feature and system | |
CN111161334A (en) | Semantic map construction method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | |
Application publication date: 20160817 |