CN110109479A - Navigation processing method, device, intelligent robot and computer readable storage medium - Google Patents
- Publication number
- CN110109479A (application number CN201910332895.1A)
- Authority
- CN
- China
- Prior art keywords: image, intelligent robot, information, service object, state
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/12—Target-seeking control
Abstract
The present invention provides a navigation processing method and apparatus, an intelligent robot, and a computer-readable storage medium. The method is applied to an intelligent robot and comprises: in a navigation mode, acquiring a first image; identifying state information of a service object according to the first image; and executing a corresponding navigation processing operation according to the state information. Thus, in the embodiments of the present invention, when providing a navigation service, the intelligent robot does not serve the user with a completely fixed strategy; instead, the strategy it uses can be adjusted flexibly according to the actual state of the service object. Compared with the prior art, the embodiments of the present invention can therefore effectively improve the service effectiveness of the intelligent robot when providing a navigation service.
Description
Technical field
The embodiments of the present invention relate to the field of robot technology, and in particular to a navigation processing method and apparatus, an intelligent robot, and a computer-readable storage medium.
Background technique
With the continuous improvement of speech-recognition accuracy and semantic-understanding capability, intelligent robots are increasingly favored by the market, and their use is becoming more and more common.

An intelligent robot may include a navigation module, so that when a user asks for a location, the robot can provide a navigation service. Specifically, the robot can plan a path according to the destination provided by the user and, after planning the path, automatically turn around and guide the user toward the destination. In general, however, existing intelligent robots serve users with a fixed strategy in all cases; for example, during navigation the robot advances on its own regardless of whether the user has paused. As a result, in the prior art, the service effectiveness of an intelligent robot providing a navigation service is poor.
Summary of the invention
The embodiments of the present invention provide a navigation processing method and apparatus, an intelligent robot, and a computer-readable storage medium, to solve the prior-art problem that the service effectiveness of an intelligent robot is poor when providing a navigation service.
In order to solve the above-mentioned technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a navigation processing method applied to an intelligent robot, the method comprising:

acquiring a first image in a navigation mode;

identifying state information of a service object according to the first image; and

executing a corresponding navigation processing operation according to the state information.
In a second aspect, an embodiment of the present invention provides a navigation processing apparatus applied to an intelligent robot, the apparatus comprising:

an acquisition module, configured to acquire a first image in a navigation mode;

an identification module, configured to identify state information of a service object according to the first image; and

an execution module, configured to execute a corresponding navigation processing operation according to the state information.
In a third aspect, an embodiment of the present invention provides an intelligent robot, comprising a processor, a memory, and a computer program stored on the memory and runnable on the processor, wherein the computer program, when executed by the processor, implements the steps of the above navigation processing method.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the above navigation processing method.
In the embodiments of the present invention, in the navigation mode, the intelligent robot can identify the state information of the service object according to the acquired first image, and execute a corresponding navigation processing operation according to the state information. Thus, when providing a navigation service, the intelligent robot does not serve the user with a completely fixed strategy; the strategy it uses can be adjusted flexibly according to the actual state of the service object. Compared with the prior art, the embodiments of the present invention can therefore effectively improve the service effectiveness of the intelligent robot when providing a navigation service.
Detailed description of the invention
In order to describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without any creative effort.
Fig. 1 is a flowchart of a navigation processing method provided by an embodiment of the present invention;

Fig. 2 is another flowchart of the navigation processing method provided by an embodiment of the present invention;

Fig. 3 is a structural block diagram of a navigation processing apparatus provided by an embodiment of the present invention;

Fig. 4 is a structural schematic diagram of an intelligent robot provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, a flowchart of a navigation processing method provided by an embodiment of the present invention is shown. As shown in Fig. 1, the method is applied to an intelligent robot and comprises the following steps:
Step 101: in a navigation mode, acquire a first image.

Here, the intelligent robot may include a camera, and the first image may be acquired by the intelligent robot calling the camera.
Step 102: identify state information of a service object according to the first image.

Here, the service object may also be referred to as a guided object or a guided person.

Specifically, the state information may be used to characterize whether the service object is in a detached-from-following state; alternatively, the state information may be used to characterize whether the service object is in a decelerating or paused state. Of course, the state information may also be used to characterize whether the service object is in an accelerating state, and so on; the possibilities are not enumerated exhaustively here.
Step 103: execute a corresponding navigation processing operation according to the state information.

It should be noted that step 103 has many specific implementation forms; two of them are introduced by way of example below.

In a first implementation form, step 103 comprises:

if the state information characterizes that the service object is in the detached-from-following state, switching from a guide-service state to an idle state; otherwise, keeping the guide-service state.
When the state information characterizes that the service object is in the detached-from-following state, it can be considered that the service object is no longer following the intelligent robot. This may be because the service object already knows how to reach the destination and has actively left the intelligent robot, or because the service object has actively left in order to attend to other matters. In either case, it can be considered that the service object no longer needs the intelligent robot to provide the navigation service. Therefore, the intelligent robot can actively end the current navigation task and switch from the guide-service state to the idle state; in this way, if another service object needs navigation, the intelligent robot can provide the navigation service for that object.

When the state information characterizes that the service object is not in the detached-from-following state, it can be considered that the service object is following the intelligent robot toward the destination, so the intelligent robot can keep the guide-service state and continue the navigation task.

Thus, in this implementation form, in the navigation mode the intelligent robot continues the navigation task only when the service object needs the navigation service. This saves the resource consumption and power consumption of the intelligent robot and improves its utilization rate.
In a second implementation form, step 103 comprises:

if the state information characterizes that the service object is in a decelerating or paused state, switching from the guide-service state to a waiting state, and outputting waiting prompt information; otherwise, keeping the guide-service state.

Here, the waiting prompt information may be voice prompt information, text prompt information, or the like.

When the state information characterizes that the service object is in the decelerating or paused state, it can be considered that the service object has not kept up with the moving speed of the intelligent robot, possibly because the service object has slowed down to attend to other matters or has paused briefly. In this case, it can be considered that the service object temporarily does not need the navigation service, so the intelligent robot can suspend the navigation task and switch from the guide-service state to the waiting state; in addition, the intelligent robot can make a voice broadcast to tell the service object that it is waiting.

When the state information characterizes that the service object is not in the decelerating or paused state, for example when it characterizes a constant-speed state, it can be considered that the service object is following the intelligent robot toward the destination, so the intelligent robot can keep the guide-service state and continue the navigation task.

Thus, in this implementation form, in the navigation mode the intelligent robot can suspend the navigation task and wait when the service object needs it to. This not only saves the resource consumption and power consumption of the intelligent robot, but also makes it convenient for the service object to handle other matters, thereby guaranteeing the user experience of the service object during navigation.
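The two implementation forms of step 103 can be summarized as a single dispatch over the recognized state. The following is a minimal illustrative sketch, not the patent's implementation; the state names and the `handle_status` function are assumptions made here.

```python
# Illustrative state names; the description only names these states informally.
GUIDE_SERVICE = "guide_service"
IDLE = "idle"
WAITING = "waiting"

def handle_status(status):
    """Map a recognized service-object status to (next robot state, prompt).

    'detached'                -> first form: end the task and go idle.
    'decelerating' / 'paused' -> second form: suspend and output a waiting prompt.
    anything else             -> keep the guide-service state.
    """
    if status == "detached":
        return IDLE, None
    if status in ("decelerating", "paused"):
        return WAITING, "Please take your time, I will wait here."
    return GUIDE_SERVICE, None
```

For example, `handle_status("paused")` would switch the robot to the waiting state and trigger the voice broadcast described above, while any other status keeps the robot guiding.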
In the embodiments of the present invention, in the navigation mode, the intelligent robot can identify the state information of the service object according to the acquired first image, and execute a corresponding navigation processing operation according to the state information. Thus, when providing a navigation service, the intelligent robot does not serve the user with a completely fixed strategy; the strategy it uses can be adjusted flexibly according to the actual state of the service object. Compared with the prior art, the embodiments of the present invention can therefore effectively improve the service effectiveness of the intelligent robot when providing a navigation service.
It should be noted that when step 103 adopts the first implementation form above, the intelligent robot may include a front camera and a rear camera, and the first image is acquired by calling the rear camera. Here, the front camera and the rear camera may both be depth cameras; the front camera may be located on the front of the intelligent robot, and the rear camera may be located on the back of the intelligent robot.

Before the first image is acquired in the navigation mode, the method further comprises:

before entering the navigation mode, calling the front camera to acquire a second image of the service object, and obtaining first object feature information of the service object according to the second image.

Identifying the state information of the service object according to the first image comprises:

identifying, according to the first image and the first object feature information, whether the service object is in the detached-from-following state.

Here, before entering the navigation mode, the intelligent robot can acquire the second image of the service object and perform image analysis on it, so as to obtain and store the first object feature information of the service object. The first object feature information may include at least one of face feature information and human-body feature information of the service object. Specifically, the human-body feature information may include at least one of the following: color information of the clothes worn by the service object, style information of the clothes worn by the service object, and information characterizing whether the service object is wearing a hat.

After the rear camera is called to acquire the first image, the intelligent robot can identify, according to the first image and the first object feature information, whether the service object is in the detached-from-following state. A specific identification method is introduced by way of example below.
Optionally, acquiring the first image comprises:

calling the rear camera to acquire first images at a preset first time interval.

Identifying, according to the first image and the first object feature information, whether the service object is in the detached-from-following state comprises:

if no object with second object feature information exists in N consecutive frames of first images, outputting prompt information for prompting the service object to stand behind the intelligent robot; and

if, within a preset duration after the prompt information is output, there is still no object with the second object feature information in any first image, determining that the service object is in the detached-from-following state;

wherein N is an integer greater than or equal to 2, and the second object feature information matches the first object feature information.

Here, the first time interval may be 0.5 seconds, 1 second, or 2 seconds; N may be 2, 3, 4, or 5; and the preset duration may be 10, 12, 15, or 20 seconds. Of course, the values of the first time interval, N, and the preset duration are not limited to these and are not enumerated exhaustively here. In addition, the prompt information for prompting the service object to stand behind the intelligent robot may be a voice prompt, a text prompt, or the like.

Here, the second object feature information and the first object feature information may both be face feature information. In that case, the second object feature information matching the first object feature information may mean that the similarity between the second object feature information and the first object feature information is greater than a preset similarity; an image containing an object with the second object feature information can then be regarded as an image of the service object. Specifically, the preset similarity may be 70%, 80%, 90%, or the like, and is not enumerated exhaustively here.
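The "greater than a preset similarity" test can be sketched as follows, assuming the face or body features are plain numeric vectors and using cosine similarity as the metric; both the metric and the 0.8 default threshold are illustrative choices of this sketch, not mandated by the text.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length numeric feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def features_match(first_feature, second_feature, preset_similarity=0.8):
    """True when the similarity exceeds the preset similarity threshold."""
    return cosine_similarity(first_feature, second_feature) > preset_similarity
```

A frame would then be treated as containing the service object when `features_match` returns `True` for some detected object in that frame.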
It should be noted that, since the intelligent robot acquires first images at the preset first time interval, it can obtain multiple frames of first images. Each time a frame is obtained, the intelligent robot can judge whether an object with the second object feature information exists in it. If the judgment result for N consecutive frames of first images is that no such object exists, the service object has not been detected in N consecutive frames and is likely no longer following the intelligent robot.

In order to determine accurately whether the service object is no longer following, the intelligent robot can make a voice broadcast prompting the service object to stand behind it, and then continue to judge whether an object with the second object feature information exists in the first images. If, within the preset duration after the prompt information is output, the judgment result for every first image is still that no such object exists, the intelligent robot can determine that the service object is in the detached-from-following state.

Of course, when the service object is not detected in N consecutive frames, the intelligent robot may also skip outputting the prompt information and directly determine that the service object is in the detached-from-following state; this is also feasible.

Thus, in this embodiment, based on the first image and the first object feature information, it can be determined very conveniently whether the service object is in the detached-from-following state.
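The frame-by-frame check described above (N consecutive misses, then a prompt, then a grace period) can be sketched as the following loop. The `object_present` predicate stands in for the feature-matching step, and the parameter defaults are illustrative; both are assumptions of this sketch.

```python
def detect_detachment(frames, object_present, n=3, grace=10, prompt=print):
    """Return True if the service object is judged detached from following.

    frames: first images in acquisition order.
    object_present(img): True if an object matching the stored features is found.
    n: consecutive misses before prompting (N >= 2 in the text).
    grace: further frames allowed after the prompt before deciding 'detached'.
    """
    misses = 0
    prompted_at = None
    for i, img in enumerate(frames):
        if object_present(img):
            misses = 0
            prompted_at = None          # object reappeared: reset everything
            continue
        misses += 1
        if misses == n and prompted_at is None:
            prompt("Please stand behind the robot.")
            prompted_at = i
        if prompted_at is not None and i - prompted_at >= grace:
            return True                 # still absent after the grace period
    return False
```

With frames measured at the first time interval, `n=3` and `grace=10` roughly correspond to the example values given above (three missed frames, then about a ten-frame wait).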
It should be noted that when step 103 adopts the second implementation form above, the intelligent robot may include a rear depth camera, which may be located on the back of the intelligent robot.

Acquiring the first image comprises:

calling the rear depth camera to acquire first images at a preset second time interval.

Identifying the state information of the service object according to the first image comprises:

determining, according to the first images, change information of the distance between the service object and the intelligent robot; and

determining, according to the change information, whether the service object is in the decelerating or paused state.

Here, the second time interval may be 0.5 seconds, 1 second, or 2 seconds; of course, its value is not limited to these and is not enumerated exhaustively here. In addition, the second time interval may be the same as or different from the first time interval.

It should be noted that, since the intelligent robot acquires first images at the preset second time interval, it can obtain multiple frames of first images. Each time a frame is obtained, the intelligent robot can use the parallax principle to calculate the distance between the service object and the intelligent robot. In this way, multiple distances can be calculated from the multiple frames of first images, the change information of the distance can be calculated from those distances, and it can be further determined whether the service object is in the decelerating or paused state.
Specifically, suppose that when the service object normally follows the intelligent robot, both travel at a speed V, and the intelligent robot acquires five frames of first images, which, ordered from earliest to latest acquisition time, are P1, P2, P3, P4, and P5. Then, using the parallax principle, the distance S1 between the service object and the intelligent robot can be calculated from P1; the distance S2 from P2; the distance S3 from P3; the distance S4 from P4; and the distance S5 from P5. Next, S1 through S5 can be compared with one another to determine the change information of the distance between the service object and the intelligent robot.

It should be noted that when the service object normally follows the intelligent robot, the distance between them should remain essentially constant. If S1 through S5 increase successively, it can be considered that the distance between the service object and the intelligent robot is gradually increasing, and it can be determined that the service object is in the decelerating or paused state. If S1 through S5 are all equal, it can be considered that the distance between the service object and the intelligent robot is constant, and it can be determined that the service object is not in the decelerating or paused state.

Thus, in this embodiment, based on the rear depth camera, it can be determined very conveniently whether the service object is in the decelerating or paused state.
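The comparison of S1 through S5 can be sketched as follows. Treating a consistently growing gap as deceleration follows the interpretation given above; the small tolerance used to absorb depth-measurement noise is an assumption of this sketch.

```python
def is_decelerating_or_paused(distances, tolerance=0.05):
    """distances: parallax-derived object-to-robot distances, oldest first (m).

    Returns True when every successive distance grows by more than the
    tolerance, i.e. the service object is steadily falling behind.
    """
    deltas = [later - earlier for earlier, later in zip(distances, distances[1:])]
    return len(deltas) > 0 and all(d > tolerance for d in deltas)
```

With S1..S5 = 1.0, 1.2, 1.5, 1.9, 2.4 the gap grows on every step, so the sketch reports deceleration; a flat sequence leaves the robot in the guide-service state.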
Optionally, the intelligent robot includes a front camera and a rear camera, and the first image is acquired by calling the rear camera. Here, the front camera and the rear camera of the intelligent robot may both be depth cameras; the front camera may be located on the front of the intelligent robot, and the rear camera may be located on the back of the intelligent robot.

Before the first image is acquired in the navigation mode, the method further comprises:

before entering the navigation mode, calling the front camera to acquire a third image of the service object, and obtaining third object feature information of the service object according to the third image;

if the navigation function of the intelligent robot is triggered, calling the rear camera to acquire a fourth image; and

if an object with fourth object feature information exists in the fourth image, entering the navigation mode; otherwise, outputting prompt information for prompting the service object to stand behind the intelligent robot;

wherein the fourth object feature information matches the third object feature information.

Here, the third object feature information and the fourth object feature information may both be face feature information. In that case, the third object feature information matching the fourth object feature information may mean that the similarity between the third object feature information and the fourth object feature information is greater than a preset similarity; an image containing an object with the fourth object feature information can then be regarded as an image of the service object. Specifically, the preset similarity may be 70%, 80%, 90%, or the like, and is not enumerated exhaustively here.
In this embodiment, the front of the intelligent robot may be equipped with an obstacle-detection radar. When the obstacle-detection radar detects an approaching obstacle, the intelligent robot can start the front camera and determine whether a user is approaching by detecting whether a face exists within the front camera's range.

If a user is indeed approaching, the intelligent robot can start a greeting process and begin to guide the user. If the user starts to interact with the intelligent robot (for example, through voice interaction), the user can serve as the service object of the intelligent robot. The intelligent robot can then call the front camera to acquire the third image and perform image analysis on it, so as to obtain and store the third object feature information of the service object; the third object feature information may include face feature information of the service object.
If the user triggers the navigation function of the intelligent robot during the interaction between the service object and the intelligent robot, the intelligent robot can turn around and start the rear camera while turning, so as to acquire the fourth image with the rear camera. The intelligent robot can then judge whether an object with the fourth object feature information exists in the fourth image, where the fourth object feature information matches the third object feature information.

If the judgment result is that no such object exists, the intelligent robot can make a voice broadcast prompting the service object to stand directly behind it in preparation for navigation. After the voice broadcast, once the service object is standing directly behind the intelligent robot, the navigation mode can be entered.
During navigation, the intelligent robot can acquire first images periodically to determine whether the service object is still following, and to detect the change information of the distance between the service object and the intelligent robot, so as to determine whether the service object has slowed down or paused. Specifically, when the service object slows down or pauses, the intelligent robot can pause and use a voice broadcast to tell the service object that it is waiting; when the service object approaches the intelligent robot again, the intelligent robot can continue navigating so that the service object can reach the destination smoothly. When the service object is not detected in three consecutive frames of first images acquired by the rear camera, the intelligent robot can use a voice broadcast to prompt the service object to stand directly behind it; if the service object is still not detected within 10 seconds after the broadcast, the current navigation is deemed ended by default and the intelligent robot switches to the idle state.

Thus, in this embodiment, when the navigation function is triggered, the intelligent robot does not start to advance on its own. Instead, it uses the acquired images to determine whether the service object is really ready to start walking, and enters the navigation state and provides the navigation service only when the determination result is yes. In this way, this embodiment saves the resource consumption and power consumption of the intelligent robot while improving the user experience.
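The entry check described above (a fourth-image match before navigation starts) can be sketched as follows; the injected `matches_service_object` predicate stands in for the feature comparison, and the returned state names are assumptions of this sketch.

```python
def try_enter_navigation(fourth_image_objects, matches_service_object,
                         prompt=print):
    """Decide whether to enter the navigation mode after turning around.

    fourth_image_objects: object features detected in the fourth image.
    matches_service_object(obj): True if obj matches the stored third-image
    features of the service object.
    """
    if any(matches_service_object(obj) for obj in fourth_image_objects):
        return "navigation"             # service object is behind the robot
    prompt("Please stand directly behind the robot to start navigation.")
    return "preparing"                  # stay in the preparation state
```

The robot would call this once the navigation function is triggered, repeating the prompt until the service object appears in the rear camera's view.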
The specific implementation process of this embodiment is described in detail below with a specific example, with reference to Fig. 2.

First, when a user approaches the intelligent robot, the intelligent robot starts serving, for example by conducting a voice interaction with the user; at this point, the user can serve as the service object. Next, the intelligent robot can call the front camera to acquire the third image and perform face recognition and human-body recognition on it, so as to obtain the third object feature information of the service object.

When the navigation function is triggered, which indicates that the service object has a navigation intention, the intelligent robot can enter a navigation-preparation state and turn around in preparation for navigating. In addition, the intelligent robot can call the rear camera to acquire the fourth image and identify whether the object in the fourth image and the object in the third image are the same person. If they are the same person, it can output a navigation-start voice prompt and begin navigating; if the recognition result is no, it can use a voice prompt to ask the user to follow.

While navigation is in progress, based on the rear camera, it can be identified whether the object following the intelligent robot and the service object are the same person, and whether the distance between the following object and the intelligent robot exceeds a distance threshold. When both determination results are yes, the robot can pause and wait.

Finally, after guiding the service object to the destination, the navigation service can be ended.

In summary, compared with the prior art, this embodiment can effectively improve the service effectiveness of the intelligent robot when providing a navigation service.
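The Fig. 2 flow can be summarized as a session loop. All perception and motion steps are injected as callables because the text fixes no concrete vision or motion API; every name below is an assumption of this sketch.

```python
def navigation_session(same_person, at_destination, following_ok, advance,
                       wait):
    """Run one guided-navigation session as described for Fig. 2.

    same_person(): fourth-image object matches the third-image features.
    at_destination(): True once the destination is reached.
    following_ok(): True while the service object keeps up.
    advance() / wait(): move one step, or pause and broadcast a waiting prompt.
    """
    if not same_person():
        return "prompt_follow"          # ask the user to follow first
    while not at_destination():
        if following_ok():
            advance()
        else:
            wait()                      # pause until the object catches up
    return "done"
```

A real controller would also implement the detachment timeout (ending the session and returning to idle), which is omitted here to keep the sketch short.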
Referring to Fig. 3, a structural block diagram of a navigation processing unit 300 provided by an embodiment of the present invention is shown. As shown in Fig. 3, the navigation processing unit 300 is applied to an intelligent robot and includes:
Acquisition module 301, for acquiring the first image under navigation mode;
Identification module 302, for identifying the status information of service object according to the first image;
Execution module 303, for executing corresponding navigation processing operation according to status information.
Optionally, the execution module 303 is specifically configured to:
switch from a guide service state to an idle state if the status information characterizes that the service object is in a detached-following state; otherwise, keep the guide service state.
Optionally, the intelligent robot includes a front camera and a rear camera, and the first image is acquired by calling the rear camera.
The navigation processing unit 300 further includes:
a first processing module, configured to, before the first image is acquired in navigation mode, that is, before navigation mode is entered, call the front camera to acquire a second image of the service object and obtain first object feature information of the service object according to the second image.
The identification module 302 is specifically configured to:
identify, according to the first image and the first object feature information, whether the service object is in the detached-following state.
Optionally, the acquisition module 301 is specifically configured to:
call the rear camera to acquire the first image according to a set first time interval.
The identification module 302 includes:
an output unit, configured to output, if no object having second object feature information is present in N consecutive frames of the first image, prompt information for prompting the service object to stand behind the intelligent robot;
a first determination unit, configured to determine that the service object is in the detached-following state if, within a preset duration after the prompt information is output, no object having the second object feature information is present in any of the first images;
wherein N is an integer greater than or equal to 2, and the second object feature information matches the first object feature information.
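The two-stage check just described — N consecutive misses trigger the prompt, and continued absence for a preset duration confirms detachment — can be sketched as follows. Frame counts stand in for the patent's first time interval and preset duration, and all names are hypothetical.

```python
def detect_detachment(frames, n=2, grace_frames=3):
    """Scan per-frame match results (True = the enrolled service object
    was found in that rear-camera frame).

    Returns 'following', 'prompted' (n consecutive misses, so the
    "please stand behind the robot" prompt was issued), or 'detached'
    (still absent for grace_frames frames after the prompt)."""
    misses = 0
    prompted_at = None
    for i, found in enumerate(frames):
        if found:
            # The service object reappeared: reset both counters.
            misses = 0
            prompted_at = None
            continue
        misses += 1
        if prompted_at is None and misses >= n:
            prompted_at = i  # issue the stand-behind prompt here
        elif prompted_at is not None and i - prompted_at >= grace_frames:
            return "detached"
    return "prompted" if prompted_at is not None else "following"
```

A real implementation would drive this from the rear camera sampled at the first time interval and compare object features rather than booleans.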
Optionally, the execution module 303 is specifically configured to:
switch from a guide service state to a waiting state and output waiting prompt information if the status information characterizes that the service object is in a deceleration state or a stopped state; otherwise, keep the guide service state.
Optionally, the intelligent robot includes a rear depth camera.
The acquisition module 301 is specifically configured to:
call the rear depth camera to acquire the first image according to a set second time interval.
The identification module 302 includes:
a second determination unit, configured to determine, according to each first image, change information of the distance between the service object and the intelligent robot;
a third determination unit, configured to determine, according to the change information, whether the service object is in the deceleration state or the stopped state.
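The distance-change classification above can be sketched as follows. The widening threshold is an illustrative value, not one given in the patent; in practice the gap readings would come from the rear depth camera sampled at the second time interval.

```python
def classify_motion(gaps, widen_eps=0.10):
    """Classify the service object's state from successive robot-to-user
    gap readings (metres). widen_eps is an illustrative threshold.

    A steadily widening gap means the user is decelerating or has stopped
    relative to the guiding robot, so the robot should switch to waiting."""
    deltas = [b - a for a, b in zip(gaps, gaps[1:])]
    if deltas and all(d > widen_eps for d in deltas):
        return "decelerating_or_stopped"  # switch to the waiting state
    return "keeping_up"                   # keep guiding
```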
Optionally, the intelligent robot includes a front camera and a rear camera, and the first image is acquired by calling the rear camera.
The navigation processing unit 300 further includes:
a second processing module, configured to, before the first image is acquired in navigation mode, that is, before navigation mode is entered, call the front camera to acquire a third image of the service object and obtain third object feature information of the service object according to the third image;
a calling module, configured to call the rear camera to acquire a fourth image if the navigation function of the intelligent robot is triggered;
a third processing module, configured to enter navigation mode if an object having fourth object feature information is present in the fourth image, and otherwise output prompt information for prompting the service object to stand behind the intelligent robot;
wherein the fourth object feature information matches the third object feature information.
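The enrollment-then-verification step above (third image from the front camera, fourth image from the rear camera) can be sketched with a cosine-similarity match. The metric, the 0.8 threshold, and all names are illustrative assumptions; the patent only requires that the fourth object feature information match the third.

```python
import math


def features_match(front_feat, rear_feat, threshold=0.8):
    """Cosine similarity between the feature vector extracted from the
    front-camera enrollment image and the one from the rear-camera check
    image. The plain cosine metric and 0.8 threshold are illustrative."""
    dot = sum(a * b for a, b in zip(front_feat, rear_feat))
    norm = (math.sqrt(sum(a * a for a in front_feat))
            * math.sqrt(sum(b * b for b in rear_feat)))
    return norm > 0 and dot / norm >= threshold


def on_navigation_triggered(front_feat, rear_feat):
    """Enter navigation mode only if the person now behind the robot is
    the enrolled service object; otherwise prompt them to stand behind."""
    if features_match(front_feat, rear_feat):
        return "enter_navigation_mode"
    return "prompt_stand_behind_robot"
```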
In the embodiment of the present invention, in navigation mode the intelligent robot can identify the status information of the service object according to the acquired first image and execute a corresponding navigation processing operation according to the status information. It can be seen that, when providing a navigation service, the intelligent robot does not serve the user with a completely fixed strategy; instead, the strategy it uses can be flexibly adjusted according to the actual state of the service object. Therefore, compared with the prior art, the embodiment of the present invention can effectively improve the service effectiveness of the intelligent robot when providing a navigation service.
Referring to Fig. 4, a structural schematic diagram of an intelligent robot 400 provided by an embodiment of the present invention is shown. As shown in Fig. 4, the intelligent robot 400 includes a processor 401, a memory 403, a user interface 404 and a bus interface.
The processor 401 is configured to read the program in the memory 403 and execute the following process:
Under navigation mode, the first image is acquired;
According to the first image, the status information of service object is identified;
According to status information, corresponding navigation processing operation is executed.
In Fig. 4, the bus architecture may include any number of interconnected buses and bridges, specifically linking together various circuits of one or more processors represented by the processor 401 and the memory represented by the memory 403. The bus architecture may also link together various other circuits such as peripheral devices, voltage regulators and power management circuits, all of which are well known in the art and therefore are not further described herein. The bus interface provides an interface. For different user equipment, the user interface 404 may also be an interface capable of connecting the required equipment externally or internally; the connected equipment includes, but is not limited to, a keypad, a display, a loudspeaker, a microphone, a joystick, and the like.
The processor 401 is responsible for managing the bus architecture and for general processing, and the memory 403 can store the data used by the processor 401 when performing operations.
Optionally, the processor 401 is specifically configured to:
switch from a guide service state to an idle state if the status information characterizes that the service object is in a detached-following state; otherwise, keep the guide service state.
Optionally, the intelligent robot includes a front camera and a rear camera, and the first image is acquired by calling the rear camera.
The processor 401 is further configured to:
in navigation mode, before the first image is acquired, that is, before navigation mode is entered, call the front camera to acquire a second image of the service object and obtain first object feature information of the service object according to the second image.
The processor 401 is specifically configured to:
identify, according to the first image and the first object feature information, whether the service object is in the detached-following state.
Optionally, the processor 401 is specifically configured to:
call the rear camera to acquire the first image according to a set first time interval;
output, if no object having second object feature information is present in N consecutive frames of the first image, prompt information for prompting the service object to stand behind the intelligent robot;
determine that the service object is in the detached-following state if, within a preset duration after the prompt information is output, no object having the second object feature information is present in any of the first images;
wherein N is an integer greater than or equal to 2, and the second object feature information matches the first object feature information.
Optionally, the processor 401 is specifically configured to:
switch from a guide service state to a waiting state and output waiting prompt information if the status information characterizes that the service object is in a deceleration state or a stopped state; otherwise, keep the guide service state.
Optionally, the intelligent robot includes a rear depth camera.
The processor 401 is specifically configured to:
call the rear depth camera to acquire the first image according to a set second time interval;
determine, according to each first image, change information of the distance between the service object and the intelligent robot;
determine, according to the change information, whether the service object is in the deceleration state or the stopped state.
Optionally, the intelligent robot includes a front camera and a rear camera, and the first image is acquired by calling the rear camera.
The processor 401 is further configured to:
in navigation mode, before the first image is acquired, that is, before navigation mode is entered, call the front camera to acquire a third image of the service object and obtain third object feature information of the service object according to the third image;
call the rear camera to acquire a fourth image if the navigation function of the intelligent robot is triggered;
enter navigation mode if an object having fourth object feature information is present in the fourth image, and otherwise output prompt information for prompting the service object to stand behind the intelligent robot;
wherein the fourth object feature information matches the third object feature information.
In the embodiment of the present invention, in navigation mode the intelligent robot 400 can identify the status information of the service object according to the acquired first image and execute a corresponding navigation processing operation according to the status information. It can be seen that, when providing a navigation service, the intelligent robot 400 does not serve the user with a completely fixed strategy; instead, the strategy it uses can be flexibly adjusted according to the actual state of the service object. Therefore, compared with the prior art, the embodiment of the present invention can effectively improve the service effectiveness of the intelligent robot 400 when providing a navigation service.
Preferably, an embodiment of the present invention further provides an intelligent robot, including a processor 401, a memory 403, and a computer program stored on the memory 403 and executable on the processor 401. When executed by the processor 401, the computer program implements each process of the above navigation processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above navigation processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Under the inspiration of the present invention, those skilled in the art can make many further forms without departing from the purpose of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.
Claims (16)
1. A navigation processing method, characterized in that the method is applied to an intelligent robot and comprises:
Under navigation mode, the first image is acquired;
According to the first image, the status information of service object is identified;
According to the state information, corresponding navigation processing operation is executed.
2. The method according to claim 1, characterized in that executing the corresponding navigation processing operation according to the status information comprises:
if the status information characterizes that the service object is in a detached-following state, switching from a guide service state to an idle state; otherwise, keeping the guide service state.
3. The method according to claim 2, characterized in that the intelligent robot comprises a front camera and a rear camera, and the first image is acquired by calling the rear camera;
before the first image is acquired in navigation mode, the method further comprises:
before navigation mode is entered, calling the front camera to acquire a second image of the service object, and obtaining first object feature information of the service object according to the second image;
identifying the status information of the service object according to the first image comprises:
identifying, according to the first image and the first object feature information, whether the service object is in the detached-following state.
4. The method according to claim 3, characterized in that:
acquiring the first image comprises:
calling the rear camera to acquire the first image according to a set first time interval;
identifying, according to the first image and the first object feature information, whether the service object is in the detached-following state comprises:
if no object having second object feature information is present in N consecutive frames of the first image, outputting prompt information for prompting the service object to stand behind the intelligent robot;
if, within a preset duration after the prompt information is output, no object having the second object feature information is present in any of the first images, determining that the service object is in the detached-following state;
wherein N is an integer greater than or equal to 2, and the second object feature information matches the first object feature information.
5. The method according to claim 1, characterized in that executing the corresponding navigation processing operation according to the status information comprises:
if the status information characterizes that the service object is in a deceleration state or a stopped state, switching from a guide service state to a waiting state, and outputting waiting prompt information; otherwise, keeping the guide service state.
6. The method according to claim 5, characterized in that the intelligent robot comprises a rear depth camera;
acquiring the first image comprises:
calling the rear depth camera to acquire the first image according to a set second time interval;
identifying the status information of the service object according to the first image comprises:
determining, according to each first image, change information of the distance between the service object and the intelligent robot;
determining, according to the change information, whether the service object is in the deceleration state or the stopped state.
7. The method according to claim 1, characterized in that the intelligent robot comprises a front camera and a rear camera, and the first image is acquired by calling the rear camera;
before the first image is acquired in navigation mode, the method further comprises:
before navigation mode is entered, calling the front camera to acquire a third image of the service object, and obtaining third object feature information of the service object according to the third image;
if a navigation function of the intelligent robot is triggered, calling the rear camera to acquire a fourth image;
if an object having fourth object feature information is present in the fourth image, entering navigation mode; otherwise, outputting prompt information for prompting the service object to stand behind the intelligent robot;
wherein the fourth object feature information matches the third object feature information.
8. A navigation processing unit, characterized in that the unit is applied to an intelligent robot and comprises:
an acquisition module, configured to acquire a first image in navigation mode;
an identification module, configured to identify status information of a service object according to the first image;
an execution module, configured to execute a corresponding navigation processing operation according to the status information.
9. The unit according to claim 8, characterized in that the execution module is specifically configured to:
switch from a guide service state to an idle state if the status information characterizes that the service object is in a detached-following state; otherwise, keep the guide service state.
10. The unit according to claim 9, characterized in that the intelligent robot comprises a front camera and a rear camera, and the first image is acquired by calling the rear camera;
the unit further comprises:
a first processing module, configured to, before the first image is acquired in navigation mode, that is, before navigation mode is entered, call the front camera to acquire a second image of the service object and obtain first object feature information of the service object according to the second image;
the identification module is specifically configured to:
identify, according to the first image and the first object feature information, whether the service object is in the detached-following state.
11. The unit according to claim 10, characterized in that:
the acquisition module is specifically configured to:
call the rear camera to acquire the first image according to a set first time interval;
the identification module comprises:
an output unit, configured to output, if no object having second object feature information is present in N consecutive frames of the first image, prompt information for prompting the service object to stand behind the intelligent robot;
a first determination unit, configured to determine that the service object is in the detached-following state if, within a preset duration after the prompt information is output, no object having the second object feature information is present in any of the first images;
wherein N is an integer greater than or equal to 2, and the second object feature information matches the first object feature information.
12. The unit according to claim 8, characterized in that the execution module is specifically configured to:
switch from a guide service state to a waiting state and output waiting prompt information if the status information characterizes that the service object is in a deceleration state or a stopped state; otherwise, keep the guide service state.
13. The unit according to claim 12, characterized in that the intelligent robot comprises a rear depth camera;
the acquisition module is specifically configured to:
call the rear depth camera to acquire the first image according to a set second time interval;
the identification module comprises:
a second determination unit, configured to determine, according to each first image, change information of the distance between the service object and the intelligent robot;
a third determination unit, configured to determine, according to the change information, whether the service object is in the deceleration state or the stopped state.
14. The unit according to claim 8, characterized in that the intelligent robot comprises a front camera and a rear camera, and the first image is acquired by calling the rear camera;
the unit further comprises:
a second processing module, configured to, before the first image is acquired in navigation mode, that is, before navigation mode is entered, call the front camera to acquire a third image of the service object and obtain third object feature information of the service object according to the third image;
a calling module, configured to call the rear camera to acquire a fourth image if a navigation function of the intelligent robot is triggered;
a third processing module, configured to enter navigation mode if an object having fourth object feature information is present in the fourth image, and otherwise output prompt information for prompting the service object to stand behind the intelligent robot;
wherein the fourth object feature information matches the third object feature information.
15. An intelligent robot, characterized in that it comprises a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the navigation processing method according to any one of claims 1 to 7.
16. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the navigation processing method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910332895.1A CN110109479A (en) | 2019-04-24 | 2019-04-24 | Navigation processing method, device, intelligent robot and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110109479A true CN110109479A (en) | 2019-08-09 |
Family
ID=67486519
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910332895.1A Pending CN110109479A (en) | 2019-04-24 | 2019-04-24 | Navigation processing method, device, intelligent robot and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110109479A (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008058283A (en) * | 2006-09-04 | 2008-03-13 | Ixs Research:Kk | Robot navigation system |
CN101947182A (en) * | 2010-09-26 | 2011-01-19 | 东南大学 | Intelligent guide man-machine interaction device |
CN103294054A (en) * | 2012-02-24 | 2013-09-11 | 联想(北京)有限公司 | Robot navigation method and system |
CN105796289A (en) * | 2016-06-03 | 2016-07-27 | 京东方科技集团股份有限公司 | Blind guide robot |
CN106779857A (en) * | 2016-12-23 | 2017-05-31 | 湖南晖龙股份有限公司 | A kind of purchase method of remote control robot |
US20170368691A1 (en) * | 2016-06-27 | 2017-12-28 | Dilili Labs, Inc. | Mobile Robot Navigation |
CN207273234U (en) * | 2017-10-12 | 2018-04-27 | 天津科技大学 | A kind of guidance robot |
CN108170166A (en) * | 2017-11-20 | 2018-06-15 | 北京理工华汇智能科技有限公司 | The follow-up control method and its intelligent apparatus of robot |
CN108242007A (en) * | 2016-12-26 | 2018-07-03 | 纳恩博(北京)科技有限公司 | Service providing method and device |
CN108363393A (en) * | 2018-02-05 | 2018-08-03 | 腾讯科技(深圳)有限公司 | A kind of smart motion equipment and its air navigation aid and storage medium |
CN108734262A (en) * | 2018-03-21 | 2018-11-02 | 北京猎户星空科技有限公司 | Smart machine control method, device, smart machine and medium |
CN108748071A (en) * | 2018-04-25 | 2018-11-06 | 苏州米机器人有限公司 | A kind of intelligent hotel service robot |
CN109416541A (en) * | 2016-06-14 | 2019-03-01 | Groove X 株式会社 | Seek the autonomous humanoid robot of nice and cool behavior |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113031588A (en) * | 2021-02-02 | 2021-06-25 | 广东柔乐电器有限公司 | Robot navigation system for shopping mall |
CN113031588B (en) * | 2021-02-02 | 2023-11-07 | 广东柔乐电器有限公司 | Mall robot navigation system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105868827B (en) | A kind of multi-modal exchange method of intelligent robot and intelligent robot | |
CN110085225A (en) | Voice interactive method, device, intelligent robot and computer readable storage medium | |
US11580454B2 (en) | Dynamic learning method and system for robot, robot and cloud server | |
CN106057205B (en) | Automatic voice interaction method for intelligent robot | |
KR101632819B1 (en) | Method and apparatus for unattended image capture | |
CN109120985A (en) | Image display method, apparatus and storage medium in live streaming | |
CN107831903A (en) | The man-machine interaction method and device that more people participate in | |
CN109176535A (en) | Exchange method and system based on intelligent robot | |
CN112487964B (en) | Gesture detection and recognition method, gesture detection and recognition equipment and computer-readable storage medium | |
CN110299152A (en) | Interactive output control method, device, electronic equipment and storage medium | |
CN108734083A (en) | Control method, device, equipment and the storage medium of smart machine | |
WO2021008339A1 (en) | Robot, robot-based cleaning method, and computer readable storage medium | |
CN109858384A (en) | Method for catching, computer readable storage medium and the terminal device of facial image | |
CN110010125A (en) | A kind of control method of intelligent robot, device, terminal device and medium | |
CN204972147U (en) | Blind person navigation based on kinect | |
CN109955257A (en) | A kind of awakening method of robot, device, terminal device and storage medium | |
CN104883505A (en) | Electronic equipment and photographing control method therefor | |
CN107492377A (en) | Method and apparatus for controlling self-timer aircraft | |
CN109558788A (en) | Silent voice inputs discrimination method, computing device and computer-readable medium | |
CN106502382A (en) | Active exchange method and system for intelligent robot | |
CN110109479A (en) | Navigation processing method, device, intelligent robot and computer readable storage medium | |
CN109656940A (en) | A kind of intelligence learning auxiliary system and method based on AR glasses | |
CN108595012A (en) | Visual interactive method and system based on visual human | |
CN109086725A (en) | Hand tracking and machine readable storage medium | |
CN107277557A (en) | A kind of methods of video segmentation and system |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | |

Application publication date: 20190809