CN106949887A - Spatial position tracking method, spatial position tracking apparatus, and navigation system - Google Patents
Spatial position tracking method, spatial position tracking apparatus, and navigation system
- Publication number
- CN106949887A CN106949887A CN201710186234.3A CN201710186234A CN106949887A CN 106949887 A CN106949887 A CN 106949887A CN 201710186234 A CN201710186234 A CN 201710186234A CN 106949887 A CN106949887 A CN 106949887A
- Authority
- CN
- China
- Prior art keywords
- data
- sensor
- target device
- threshold
- environment information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
Abstract
This application discloses a spatial position tracking method, a spatial position tracking apparatus, and a navigation system. The method includes: obtaining first data related to a target device, collected by at least one first sensor at a first moment; starting at least one second sensor based at least on the first data and first reference data; and obtaining three-dimensional environment information related to the target device based at least on second data related to the target device collected by the at least one second sensor. By starting the higher-power, higher-data-volume sensors only intermittently, the method and apparatus provided by the embodiments of this application can effectively reduce the power consumption of spatial position tracking and navigation.
Description
Technical field
This application belongs to the technical field of visual navigation, and in particular relates to a spatial position tracking method, a spatial position tracking apparatus, and a navigation system.
Background technology
Autonomous navigation is becoming a popular field of research and development, and a growing number of smart devices (for example, aircraft, unmanned/self-driving vehicles, robots, etc.) are expected to support autonomous navigation. Autonomous navigation is a technology in which a device obtains information about its position, orientation, and surrounding environment in space through a variety of sensing devices (also referred to as sensors), and analyzes and processes the obtained information with related techniques to build an environment model, perform recognition, and plan paths. The sensing devices used in autonomous navigation include, but are not limited to: cameras, Global Positioning System (GPS) modules, accelerometers, gyroscopes, infrared sensors, depth sensors, position and attitude sensors, and so on. Depending on the role each plays in the navigation process, different sensing devices collect and process different amounts of data, and correspondingly consume different amounts of power.
Summary of the invention
The embodiments of this application provide a navigation scheme with relatively low power consumption.
In one possible embodiment, a spatial position tracking method is provided. The method includes:
obtaining first data related to a target device, collected by at least one first sensor at a first moment;
starting at least one second sensor based at least on the first data and first reference data; and
obtaining three-dimensional environment information related to the target device based at least on second data related to the target device collected by the at least one second sensor.
In another possible embodiment, a spatial position tracking apparatus is provided. The apparatus includes:
a first acquisition module, configured to obtain first data related to a target device, collected by at least one first sensor at a first moment;
a control module, configured to start at least one second sensor based at least on the first data and first reference data; and
a second acquisition module, configured to obtain three-dimensional environment information related to the target device based at least on second data related to the target device collected by the at least one second sensor.
In yet another possible embodiment, a navigation system is provided. The system includes:
the above spatial position tracking apparatus;
at least one first sensor;
at least one second sensor; and
a navigation module, configured to navigate the target device based on the three-dimensional environment information from the apparatus.
By starting the higher-power, higher-data-volume sensors only intermittently, the method and apparatus provided by the embodiments of this application can effectively reduce the power consumption of spatial position tracking and navigation.
Brief description of the drawings
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of this application.
Fig. 1 is a flowchart of the spatial position tracking method provided by the first embodiment of this application;
Fig. 2(a) to Fig. 2(c) are schematic diagrams illustrating principles of the method of the first embodiment of this application;
Fig. 3(a) to Fig. 3(d) are structural block diagrams of several examples of the spatial position tracking apparatus provided by the second embodiment of this application;
Fig. 4 is a structural block diagram of an example of the navigation system provided by the third embodiment of this application.
Detailed description of the embodiments
To make the objectives, features, and advantages of this application clearer and easier to understand, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by those skilled in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
Those skilled in the art will understand that terms such as "first" and "second" in this application are used only to distinguish different devices, modules, or parameters; they neither carry any particular technical meaning nor imply any necessary logical order between them.
In each embodiment of this application, the target device refers to an aircraft, a vehicle, a robot, or any other device that can move autonomously or be moved by other movable equipment. Such a device includes or carries the sensors described in the embodiments of this application, and the object of the spatial position tracking and navigation mentioned in the embodiments is this target device. In addition, the at least one first sensor is a sensor group that remains in a started state for relatively long periods (always on, started periodically, or started as needed); this sensor group may include one or more sensors, and when it includes multiple sensors, they may all be identical or may differ. The at least one second sensor is a sensor group that is started or disabled based on the data collected by the at least one first sensor; this sensor group may also include one or more sensors, which likewise may all be identical or may differ. The at least one second sensor collects data when started and does not collect data when disabled. Compared with the at least one first sensor, the at least one second sensor has higher power consumption when started and/or produces a larger data volume.
The sensors described in the embodiments of this application may be, for example, the position and attitude sensors used in a spatial visual navigation system and the three-dimensional perception sensors used to perceive three-dimensional environment information (for example, scene depth information). In such a system, during continuous acquisition, the three-dimensional perception sensor moves with the motion of the whole system, and the region it senses changes accordingly. In a continuous stream of three-dimensional perception data, the data obtained at adjacent moments is likely to overlap substantially, so at least the corresponding part of the three-dimensional data is probably redundant. Storing all of it would consume considerable system resources, and computing over this three-dimensional data would likewise consume a large amount of computing resources. Therefore, the power consumption of the whole system can be reduced significantly by disabling the three-dimensional perception sensor at reasonable times. On this basis, this application provides a scheme that can effectively reduce the power consumption of a navigation system.
Referring to Fig. 1, Fig. 1 is a flowchart of the spatial position tracking method provided by the first embodiment of this application. The method may be implemented by any apparatus; such an apparatus may also be the target device carrying the first sensor and the second sensor described below. As shown in Fig. 1, the method includes the following steps:
S120. Obtain first data related to a target device, collected by at least one first sensor at a first moment.
In the method for the present embodiment, according to different sensor types, an at least first sensor collect first
Data can be the position related to the target device and/or attitude data.Such data may include:The target device
Position, attitude, the variable quantity data of the variable quantity of position, and/or attitude.In a kind of possible implementation, the positional number
According to can be specific coordinate, and the three-dimensional coordinate under the coordinate system that can pre-establish for space residing for target device of the coordinate,
That is, the coordinate system set up with some point in the space for default origin;The coordinate can also be three under terrestrial coordinate system
Dimension coordinate, etc..Attitude data can refer to angle of rotation Φ (roll), yaw angle ψ (yaw) and pitching angle theta (pitch).The position
Put/attitudes vibration amount data can refer to the position/attitude of the position/attitude relative to any instant before at currently (first) moment
Change.The sensor that such first data can be gathered includes but is not limited to:It is GPS module, accelerometer, gyroscope, red
Outer sensor, etc..When target device is the robot with transmission motor, an at least first sensor is alternatively its own
Transmission motor, according to transmission motor phase output can also obtain the position of robot and the change of attitude.
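The first data described above (a position plus a roll/yaw/pitch attitude, and changes thereof) could be represented, for instance, as a small record type. This is only an illustrative sketch; the field names and the simple per-axis rotation measure are assumptions, not part of the application:

```python
from dataclasses import dataclass


@dataclass
class FirstData:
    """One position/attitude sample from the always-on first sensor(s)."""
    x: float       # position in a pre-established spatial frame
    y: float
    z: float
    roll: float    # roll angle Φ, radians
    yaw: float     # yaw angle ψ, radians
    pitch: float   # pitch angle θ, radians

    def position_delta(self, ref: "FirstData") -> float:
        """Magnitude of the position change ΔT relative to a reference sample."""
        return ((self.x - ref.x) ** 2
                + (self.y - ref.y) ** 2
                + (self.z - ref.z) ** 2) ** 0.5

    def rotation_delta(self, ref: "FirstData") -> float:
        """Crude rotation change ΔQ: the largest per-axis angle difference."""
        return max(abs(self.roll - ref.roll),
                   abs(self.yaw - ref.yaw),
                   abs(self.pitch - ref.pitch))
```

Deltas computed this way against a stored reference sample are what the trigger logic of step S140 would compare against its thresholds.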
S140. Start at least one second sensor based at least on the first data and first reference data.
In the method for the present embodiment, the first reference data can be pre-set, or in target device moving process
In any time data that at least a first sensor is collected.For example, the first reference data is at least first sensing
The data that device Startup time is collected.In the method for the present embodiment, reference data be provided for triggering at least one second
The startup of sensor, is specifically set with meeting at least functional requirement of a second sensor.
For example, suppose the at least one second sensor is a three-dimensional perception sensor whose main function is to obtain three-dimensional information about the environment in which the target device is located (such sensors include, but are not limited to: depth cameras based on structured light, depth cameras based on time of flight, and depth cameras that obtain depth through binocular triangulation). In continuous three-dimensional environment perception data, adjacent data can overlap substantially. Fig. 2(a) shows a continuous-time record of two-dimensional image information of the target device's environment, recovered from the data collected by the at least one first sensor; here, the three-dimensional data corresponding to adjacent moments can overlap substantially. The data corresponding to a two-dimensional image at some previous moment can therefore serve as the reference data, and a reasonable threshold can be set to trigger whether the at least one second sensor is started. In the example shown in Fig. 2(a), suppose a previous frame of image data is taken as the reference data and the threshold is set to a 50% difference between two frames of image data. The data of the fourth frame differs from its reference data (namely, the data of the first frame) by more than 50%, so the at least one second sensor may be started in response. Similarly, the seventh frame has 50% new data compared with the data of the fourth frame, and the at least one second sensor may also be started in response. It should be noted that the reference data corresponding to the first data at each moment may differ: for example, the first data collected at every third moment may serve as the reference data, or each item of first data whose difference from its reference data is not less than the corresponding first threshold may become the new first reference data.
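The frame-difference trigger described above can be sketched as follows. This is a minimal illustration, not the application's implementation: the frames are assumed to be grayscale arrays, and the per-pixel tolerance and 50% default are assumed values.

```python
import numpy as np


def frame_difference(frame: np.ndarray, reference: np.ndarray) -> float:
    """Fraction of pixels that changed noticeably between two grayscale frames."""
    changed = np.abs(frame.astype(float) - reference.astype(float)) > 10
    return float(changed.mean())


def should_start_second_sensor(frame: np.ndarray, reference: np.ndarray,
                               threshold: float = 0.5) -> bool:
    """Start the 3-D perception sensor when the new-content ratio
    reaches the threshold (50% in the Fig. 2(a) example)."""
    return frame_difference(frame, reference) >= threshold


# A frame identical to the reference does not trigger the sensor;
# a frame whose right 5 of 8 columns are new content (62.5% changed) does.
ref = np.zeros((4, 8))
same = ref.copy()
moved = ref.copy()
moved[:, 3:] = 255
```

Each time the trigger fires, the triggering frame would typically become the new reference, matching the 1st → 4th → 7th frame progression in the example.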
S160. Obtain three-dimensional environment information related to the target device based at least on second data related to the target device collected by the at least one second sensor.
In the method for the present embodiment, the scene information of target device can be recovered according at least to the second data, it is preferable that
For the depth information of scene of target device.For example, the second data are scene three dimensional depth image, the current depth of statistics can be passed through
The average value of the depth data for all pixels spent on image, is used as the depth data of the image.
In summary, the method of this embodiment starts the higher-power, higher-data-volume sensors only intermittently, and can thereby effectively reduce the power consumption of spatial position tracking.
As described above, the starting of the at least one second sensor is related to the first reference data. Specifically, step S140 may further include:
S142. In response to the difference between the first data and the first reference data being not less than a first threshold, start the at least one second sensor.
The first threshold can be set as required. For example, as described with reference to Fig. 2(a), the difference between the current image frame data and its reference image frame data that triggers the starting of the three-dimensional perception sensor, namely the first threshold, is 50%. If the field of view of the three-dimensional perception sensor is A, and the first data includes the position change ΔT and rotation change ΔQ of the target device, then, as shown in Fig. 2(b) and Fig. 2(c), the first threshold corresponding to the position change ΔT may be set to 2·D·S·tan(A/2), and the first threshold corresponding to the rotation change ΔQ may be set to A·S, where S is the set difference ratio (50% in the example above) and D is the depth data of the reference image; this depth data may be, for example, the average of the depth data of all pixels in that image.
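Under the reading that S is the set difference ratio and A is the field of view in radians (the text does not spell out S, so this is an assumption), the two trigger thresholds could be computed as below. An illustrative sketch, not a reference implementation:

```python
import math

import numpy as np


def translation_threshold(depth_image: np.ndarray, fov: float,
                          ratio: float = 0.5) -> float:
    """Position-change threshold 2*D*S*tan(A/2): the translation that
    replaces a fraction `ratio` of the view at mean scene depth D."""
    d = float(depth_image.mean())  # D: mean depth over the reference image
    return 2.0 * d * ratio * math.tan(fov / 2.0)


def rotation_threshold(fov: float, ratio: float = 0.5) -> float:
    """Rotation-change threshold A*S: the rotation that replaces a
    fraction `ratio` of the angular field of view."""
    return fov * ratio


def second_sensor_triggered(delta_t: float, delta_q: float,
                            depth_image: np.ndarray, fov: float,
                            ratio: float = 0.5) -> bool:
    """Start the 3-D perception sensor when either change reaches its threshold."""
    return (delta_t >= translation_threshold(depth_image, fov, ratio)
            or delta_q >= rotation_threshold(fov, ratio))
```

With a 90° field of view and a uniform 2 m reference depth image, the translation threshold works out to 2·2·0.5·tan(45°) = 2 m, so only a move of at least that size (or a rotation of at least 45°) wakes the depth sensor.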
S146. Obtain the second data collected by the at least one second sensor at a second moment.
As described above, when the condition for starting the at least one second sensor is met, the scene is one that needs to be tracked. At that moment, the at least one first sensor should be held still so that the at least one second sensor can collect data for the corresponding scene, namely the current scene of the target device corresponding to the first data. Therefore, before step S146, the method may also include the step:
S144. In response to starting the at least one second sensor, hold the at least one first sensor still.
After the second data is obtained and the corresponding three-dimensional environment information related to the target device is obtained, the at least one first sensor continues to work, and the at least one second sensor may remain in the started state; alternatively, whether to disable the started at least one second sensor may be decided automatically, or according to the data collected by the at least one first sensor at a subsequent moment, so as to save further power. Specifically, the method of this embodiment may also include:
S130. Obtain third data related to the target device, collected by the at least one first sensor at a third moment.
S150. In response to the difference between the third data and second reference data being not less than a second threshold, disable the at least one second sensor.
The second reference data may be the same as or different from the first reference data. The second threshold may be the same as or different from the first threshold, and may be adjusted relative to the first threshold according to the three-dimensional environment information obtained in step S160. For example, when the three-dimensional environment information obtained from the second data is not sufficient, the first threshold may be reduced to obtain the second threshold. In such an implementation, the method of this embodiment may also include the step:
S162. Adjust the first threshold according at least to the three-dimensional environment information, to obtain the second threshold.
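One way to read the adjustment in S162 — an assumption, since the application does not fix a formula — is to shrink the threshold when the recovered three-dimensional environment information covers too little of the scene, so the second sensor fires more often:

```python
def adjust_threshold(first_threshold: float, coverage: float,
                     target_coverage: float = 0.9, floor: float = 0.05) -> float:
    """Derive the second threshold from the first.

    `coverage` is an assumed sufficiency measure: the fraction of the scene
    with valid 3-D data. When it falls short of the target, the threshold is
    lowered proportionally (never below `floor`); otherwise it is kept.
    """
    if coverage >= target_coverage:
        return first_threshold
    return max(floor, first_threshold * coverage / target_coverage)
```

For instance, with a 50% first threshold, a frame whose depth data covers only 45% of the target coverage level would halve the threshold to 25%, making the next trigger come sooner.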
In addition, depending on the function and role of the apparatus performing the method of this embodiment, in one possible implementation the method may also include the step:
S182. Send the second data and/or the three-dimensional environment information, for the needs of subsequent navigation.
In another possible implementation, the method of this embodiment may also include the step:
S184. Store the second data and/or the three-dimensional environment information.
When the second data and/or the three-dimensional environment information is stored or output as above, the data can be integrated in advance, making the input or stored data easier to use. For example, the position and attitude information can be used as the coordinate annotation of each frame of the three-dimensional environment information, so that each frame of the integrated (third) data contains the position information associated with the second data. As another example, the RGB information in the second data can be merged with the corresponding depth information, so that the third data is three-dimensional environment information that includes color.
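The RGB-with-depth merging mentioned above could be sketched as producing a list of colored 3-D points per frame. The pinhole back-projection, intrinsics (fx, fy, cx, cy), and output layout here are illustrative assumptions, not details from the application:

```python
import numpy as np


def fuse_rgbd(rgb: np.ndarray, depth: np.ndarray, fx: float, fy: float,
              cx: float, cy: float) -> np.ndarray:
    """Merge an RGB image with its aligned depth image into an (N, 6) array
    of colored 3-D points: columns x, y, z, r, g, b (pinhole back-projection)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.reshape(-1)
    x = (u.reshape(-1) - cx) * z / fx
    y = (v.reshape(-1) - cy) * z / fy
    colors = rgb.reshape(-1, 3).astype(float)
    return np.column_stack([x, y, z, colors])


# One 2x2 frame: every pixel at 1 m depth, solid red.
rgb = np.full((2, 2, 3), [255, 0, 0])
depth = np.ones((2, 2))
cloud = fuse_rgbd(rgb, depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Tagging each such fused frame with the position/attitude annotation described above would then yield the integrated data that is stored or sent on for navigation.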
To sum up, the method for the present embodiment helps to realize the navigation that power consumption is relatively low.
Those skilled in the art will understand that, in the above method of the embodiments of this application, the numbering of the steps does not imply an order of execution; the execution order of the steps should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of this application.
In addition, an embodiment of this application also provides a computer-readable medium including computer-readable instructions that, when executed, perform the following operations: performing the operations of the steps of the method shown in Fig. 1 of the above embodiment.
Referring to Fig. 3(a), Fig. 3(a) is a structural block diagram of the spatial position tracking apparatus 300 provided by the second embodiment of this application. The apparatus 300 may be part of a navigation system or may itself be a navigation system. As shown in Fig. 3(a), the apparatus 300 includes: a first acquisition module 320, a control module 340, and a second acquisition module 360. Among these:
The first acquisition module 320 is configured to obtain first data related to a target device, collected by at least one first sensor at a first moment.
In the apparatus of this embodiment, depending on the sensor type, the first data collected by the at least one first sensor may be position and/or attitude data related to the target device. Such data may include: the position, the attitude, the change in position, and/or the change in attitude of the target device. In one possible implementation, the position data may be specific coordinates, for example three-dimensional coordinates in a coordinate system pre-established for the space in which the target device is located, that is, a coordinate system whose default origin is some point in that space; the coordinates may also be three-dimensional coordinates in an Earth coordinate system, and so on. The attitude data may refer to the roll angle Φ (roll), the yaw angle ψ (yaw), and the pitch angle θ (pitch). The position/attitude change data may refer to the change of the position/attitude at the current (first) moment relative to the position/attitude at any previous moment. Sensors that can collect such first data include, but are not limited to: GPS modules, accelerometers, gyroscopes, infrared sensors, etc. When the target device is a robot with drive motors, the at least one first sensor may also be its own drive motors; the change of the robot's position and attitude can also be obtained from the phase output of the drive motors.
The control module 340 is configured to start at least one second sensor based at least on the first data and first reference data.
In the apparatus 300 of this embodiment, the first reference data may be pre-set, or it may be data collected by the at least one first sensor at any moment during the movement of the target device. For example, the first reference data may be the data collected at the moment the at least one first sensor was started. In the apparatus of this embodiment, the reference data is provided to trigger the starting of the at least one second sensor, and is set specifically to meet the functional requirements of the at least one second sensor.
For example, suppose the at least one second sensor is a three-dimensional perception sensor whose main function is to obtain three-dimensional information about the environment in which the target device is located (such sensors include, but are not limited to: depth cameras based on structured light, depth cameras based on time of flight, and depth cameras that obtain depth through binocular triangulation). In continuous three-dimensional perception data, adjacent data can overlap substantially. Fig. 2(a) shows a continuous-time record of two-dimensional image information of the target device's environment, recovered from the data collected by the at least one first sensor; here, the three-dimensional data corresponding to adjacent moments can overlap substantially. The data corresponding to a two-dimensional image at some previous moment can therefore serve as the reference data, and a reasonable threshold can be set to trigger whether the at least one second sensor is started. In the example shown in Fig. 2(a), suppose a previous frame of image data is taken as the reference data and the threshold is set to a 50% difference between two frames of image data. The data of the fourth frame differs from its reference data (namely, the data of the first frame) by more than 50%, so the at least one second sensor may be started in response. Similarly, the seventh frame has 50% new data compared with the data of the fourth frame, and the at least one second sensor may also be started in response. It should be noted that the reference data corresponding to the first data at each moment may differ: for example, the first data collected at every third moment may serve as the reference data, or each item of first data whose difference from its reference data is not less than the corresponding first threshold may become the new first reference data.
The second acquisition module 360 is configured to obtain three-dimensional environment information related to the target device based at least on second data related to the target device collected by the at least one second sensor.
In the apparatus 300 of this embodiment, the second acquisition module 360 can recover three-dimensional environment information about the target device according at least to the second data; preferably, this is scene depth information for the target device. For example, if the second data is a three-dimensional scene depth image, the second acquisition module 360 can compute the average of the depth data of all pixels in the current depth image to obtain the depth data of that image.
In summary, the apparatus of this embodiment starts the higher-power, higher-data-volume sensors only intermittently, and can thereby effectively reduce the power consumption of spatial position tracking.
As described above, the starting of the at least one second sensor is related to the first reference data. Specifically, as shown in Fig. 3(b), the control module 340 may further include a control unit 342 and an acquiring unit 344, where:
The control unit 342 is configured to start the at least one second sensor in response to the difference between the first data and the first reference data being not less than the first threshold.
The first threshold can be set as required. For example, as described with reference to Fig. 2(a), the difference between the current image frame data and its reference image frame data that triggers the starting of the three-dimensional perception sensor, namely the first threshold, is 50%. If the field of view of the three-dimensional perception sensor is A, and the first data includes the position change ΔT and rotation change ΔQ of the target device, then, as shown in Fig. 2(b) and Fig. 2(c), the first threshold corresponding to the position change ΔT may be set to 2·D·S·tan(A/2), and the first threshold corresponding to the rotation change ΔQ may be set to A·S, where S is the set difference ratio (50% in the example above) and D is the depth data of the reference image; this depth data may be, for example, the average of the depth data of all pixels in that image.
The acquiring unit 344 is configured to obtain the second data collected by the at least one second sensor at a second moment.
As described above, when the condition for starting the at least one second sensor is met, the scene is one that needs to be tracked. At that moment, the at least one first sensor should be held still so that the at least one second sensor can collect data for the corresponding scene, namely the current scene of the target device corresponding to the first data. Therefore, the control unit 342 is also configured to hold the at least one first sensor still in response to starting the at least one second sensor.
After the acquiring unit 344 obtains the second data and the corresponding three-dimensional environment information related to the target device is obtained, the at least one first sensor continues to work, and the control unit 342 may keep the at least one second sensor in the started state; alternatively, it may decide, according to the data collected by the at least one first sensor at a subsequent moment, whether to disable the started at least one second sensor, so as to save further power. Specifically:
The first acquisition module 320 is also configured to obtain third data related to the target device, collected by the at least one first sensor at a third moment.
The control module 340 is also configured to disable the at least one second sensor in response to the difference between the third data and second reference data being not less than the second threshold.
The second reference data may be the same as or different from the first reference data. The second threshold may be the same as or different from the first threshold, and may be adjusted relative to the first threshold according to the three-dimensional environment information obtained by the second acquisition module 360. For example, when the three-dimensional environment information obtained from the second data is not sufficient, the first threshold may be reduced to a third threshold, or the second threshold may be reduced to a fourth threshold. In such an implementation, the control module 340 may also be configured to adjust the first and second thresholds according at least to the three-dimensional environment information, to obtain the third and fourth thresholds.
In addition, depending on the function and role of the apparatus of this embodiment, in one possible implementation, as shown in Fig. 3(c), the apparatus 300 of this embodiment may also include:
a sending module 382, configured to send the second data and/or the three-dimensional environment information for the needs of an external system in subsequent navigation. For example, the sending module 382 may send over a high-speed interface such as a USB interface or HDMI, or over a network interface; the sending module 382 may itself be such an interface.
In another possible implementation, as shown in Fig. 3(d), the apparatus 300 of this embodiment may also include:
a memory module 384, configured to store the second data and/or the three-dimensional environment information.
When the second data and/or the three-dimensional environment information is stored or output as above, the data can be integrated in advance, making the input or stored data easier to use. For example, the position and attitude information can be used as the coordinate annotation of each frame of the three-dimensional environment information, so that each frame of the integrated (third) data contains the position information associated with the second data. As another example, the RGB information in the second data can be merged with the corresponding depth information, so that the third data is three-dimensional environment information that includes color.
In summary, the apparatus of this embodiment helps to achieve navigation with relatively low power consumption.
Referring to Fig. 4, Fig. 4 shows a schematic structural diagram of a navigation system provided by the third embodiment of this application. As shown in Fig. 4, the system 400 includes: the spatial position tracking apparatus 300 shown in any of Fig. 3(a) to Fig. 3(d), and a navigation module 460. The system 400 also includes at least one first sensor 420 and at least one second sensor 440. The system 400 may include, but is not limited to: a radio-frequency-based spatial positioning system, an image-based visual spatial positioning system, an infrared positioning system, etc.
The at least one first sensor 420 is configured to collect the first data related to the target device. As described above with reference to the method of Fig. 1, the first data may include data related to the position and/or attitude of the target device. Such at least one first sensor 420 includes, but is not limited to: a GPS module, an accelerometer, a gyroscope, the binocular/monocular camera of an image-based visual spatial positioning system, an infrared sensor, a drive motor, etc.
The at least one second sensor 440 is configured to collect the second data related to the target device. As described above with reference to the method of Fig. 1, the second data may include three-dimensional scene data of the environment in which the target device is located, collected for example by a depth camera based on structured light, a depth camera based on time of flight, or a depth camera that obtains depth through binocular triangulation.
For further embodiments of the functions of the space position tracking device 300 in the system 400 and the corresponding space position tracking steps, reference may be made to the corresponding descriptions in the above method and device embodiments, which are not repeated here one by one. Navigating based on the three-dimensional environment information obtained by the space position tracking device 300 is a mature technology in the art, and the navigation module 460 is therefore not described in detail here.
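The power-saving policy underlying the method — run the low-power first sensor continuously and gate the high-power second sensor with two thresholds, starting it when the first data deviates from the reference data by at least a first threshold and disabling it once the deviation falls back to no more than a second threshold — can be sketched as follows. The class, method names, and scalar-difference model are illustrative assumptions; in practice the first data may be multi-dimensional position/attitude data with a vector-valued difference.

```python
class DutyCycledTracker:
    """Sketch of the claimed start/stop policy: a cheap motion sensor
    gates an expensive depth sensor via two thresholds (hysteresis)."""

    def __init__(self, reference, first_threshold, second_threshold):
        self.reference = reference            # first reference data
        self.first_threshold = first_threshold
        self.second_threshold = second_threshold
        self.second_sensor_on = False

    def update(self, first_data):
        """Feed one reading from the first sensor; return whether the
        second (high-power) sensor should currently be running."""
        diff = abs(first_data - self.reference)
        if not self.second_sensor_on and diff >= self.first_threshold:
            self.second_sensor_on = True      # start on large deviation
        elif self.second_sensor_on and diff <= self.second_threshold:
            self.second_sensor_on = False     # disable when quiescent again
        return self.second_sensor_on
```

Using a second threshold lower than the first gives hysteresis, so the high-power sensor is not toggled rapidly when the deviation hovers near a single threshold — consistent with the method's goal of reducing tracking power consumption.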
In the several embodiments provided in the present application, it should be understood that the disclosed system, terminal and method may be implemented in other ways. For example, the space position tracking method and device embodiments described above are merely illustrative. For example, the division of the modules is merely a division by logical function, and there may be other division manners in actual implementation; for example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces or modules, and may be electrical, mechanical or in other forms.
The modules described as separate components may or may not be physically separate, and the components displayed as modules may or may not be physical modules; that is, they may be located in one place, or may be distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The above integrated module may be implemented in the form of hardware, or may be implemented in the form of a software functional module.
If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations; however, those skilled in the art should know that the present application is not limited by the described sequence of actions, because according to the present application, some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily all required by the present application.
In the above embodiments, the description of each embodiment has its own emphasis. For a part that is not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
The above is a description of the space position tracking method and device provided by the present application. For those skilled in the art, according to the idea of the embodiments of the present application, there will be changes in the specific implementation and application scope. In summary, the content of this specification should not be construed as limiting the present application.
Claims (20)
1. A space position tracking method, characterized in that the method comprises:
obtaining first data related to a target device collected by at least one first sensor at a first moment;
starting at least one second sensor based at least on the first data and first reference data; and
obtaining three-dimensional environment information related to the target device based at least on second data related to the target device collected by the at least one second sensor.
2. The method according to claim 1, characterized in that starting the at least one second sensor further comprises:
starting the at least one second sensor in response to a difference between the first data and the first reference data being not less than a first threshold; and
obtaining the second data collected by the at least one second sensor at a second moment.
3. The method according to claim 2, characterized in that before obtaining the second data collected by the at least one second sensor at the second moment, the method further comprises:
in response to starting the at least one second sensor, making the at least one first sensor inactive.
4. The method according to claim 1, characterized in that the method further comprises:
sending the second data and/or the three-dimensional environment information.
5. The method according to claim 1, characterized in that the method further comprises:
storing the second data and/or the three-dimensional environment information.
6. The method according to claim 1, characterized in that the first data comprises: data of a position and/or an attitude related to the target device.
7. The method according to claim 6, characterized in that the first data comprises: a variation of the position and/or attitude of the target device.
8. The method according to claim 1, characterized in that the second data comprises: scene depth data related to the target device.
9. The method according to claim 1, characterized in that the first reference data is data collected by the at least one first sensor at its startup moment.
10. The method according to any one of claims 1 to 9, characterized in that the method further comprises:
obtaining third data related to the target device collected by the at least one first sensor at a third moment; and
disabling the at least one second sensor in response to a difference between the third data and second reference data being not greater than a second threshold.
11. The method according to claim 10, characterized in that the second reference data is the same as the first reference data.
12. The method according to claim 10, characterized in that the second threshold is the same as the first threshold.
13. The method according to claim 10, characterized in that the method further comprises:
adjusting the first threshold and the second threshold at least according to the three-dimensional environment information, to obtain a third threshold and a fourth threshold.
14. A space position tracking device, characterized in that the device comprises:
a first obtaining module, configured to obtain first data related to a target device collected by at least one first sensor at a first moment;
a control module, configured to start at least one second sensor based at least on the first data and first reference data; and
a second obtaining module, configured to obtain three-dimensional environment information related to the target device based at least on second data related to the target device collected by the at least one second sensor.
15. The device according to claim 14, characterized in that the control module further comprises:
a control unit, configured to start the at least one second sensor in response to a difference between the first data and the first reference data being not less than a first threshold; and
an obtaining unit, configured to obtain the second data collected by the at least one second sensor at a second moment.
16. The device according to claim 15, characterized in that the control unit is further configured to, in response to starting the at least one second sensor, make the at least one first sensor inactive.
17. The device according to claim 14, characterized in that the device further comprises:
a sending module, configured to send the second data and/or the three-dimensional environment information.
18. The device according to claim 14, characterized in that the device further comprises:
a storage module, configured to store the second data and/or the three-dimensional environment information.
19. The device according to any one of claims 14 to 18, characterized in that the first obtaining module is further configured to obtain third data related to the target device collected by the at least one first sensor at a third moment; and
the control module is further configured to disable the at least one second sensor in response to a difference between the third data and second reference data being not greater than a second threshold.
20. A navigation system, characterized in that the system comprises:
the space position tracking device according to any one of claims 14 to 19;
at least one first sensor;
at least one second sensor; and
a navigation module, configured to navigate for the target device based on the three-dimensional environment information obtained by the device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710186234.3A CN106949887B (en) | 2017-03-27 | 2017-03-27 | Space position tracking method, space position tracking device and navigation system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106949887A true CN106949887A (en) | 2017-07-14 |
CN106949887B CN106949887B (en) | 2021-02-09 |
Family
ID=59473643
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710186234.3A Active CN106949887B (en) | 2017-03-27 | 2017-03-27 | Space position tracking method, space position tracking device and navigation system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106949887B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5483455A (en) * | 1992-09-08 | 1996-01-09 | Caterpillar Inc. | Method and apparatus for determining the location of a vehicle |
JP2009109290A (en) * | 2007-10-29 | 2009-05-21 | Tokyo Institute Of Technology | Target position measuring system |
CN103677225A (en) * | 2012-09-03 | 2014-03-26 | 联想(北京)有限公司 | Data processing method and first terminal device |
CN104574386A (en) * | 2014-12-26 | 2015-04-29 | 速感科技(北京)有限公司 | Indoor positioning method based on three-dimensional environment model matching |
CN104866261A (en) * | 2014-02-24 | 2015-08-26 | 联想(北京)有限公司 | Information processing method and device |
CN105492985A (en) * | 2014-09-05 | 2016-04-13 | 深圳市大疆创新科技有限公司 | Multi-sensor environment map building |
CN105866810A (en) * | 2016-03-23 | 2016-08-17 | 福州瑞芯微电子股份有限公司 | GPS low-power-consumption positioning method and device for electronic equipment |
CN106056664A (en) * | 2016-05-23 | 2016-10-26 | 武汉盈力科技有限公司 | Real-time three-dimensional scene reconstruction system and method based on inertia and depth vision |
CN106101997A (en) * | 2016-05-26 | 2016-11-09 | 深圳市万语网络科技有限公司 | A kind of localization method and alignment system with automatically adjusting location frequency |
CN106441408A (en) * | 2016-07-25 | 2017-02-22 | 肇庆市小凡人科技有限公司 | Low-power measuring system |
CN106441320A (en) * | 2015-08-06 | 2017-02-22 | 平安科技(深圳)有限公司 | Positioning operation control method, vehicle and electronic device |
CN206146450U (en) * | 2016-07-25 | 2017-05-03 | 肇庆市小凡人科技有限公司 | Measurement system of low -power consumption |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3234806B1 (en) | Scalable 3d mapping system | |
US10671068B1 (en) | Shared sensor data across sensor processing pipelines | |
US20200209880A1 (en) | Obstacle detection method and apparatus and robot using the same | |
CN110494360A (en) | For providing the autonomous system and method photographed and image | |
US20190387209A1 (en) | Deep Virtual Stereo Odometry | |
US11475591B2 (en) | Hybrid metric-topological camera-based localization | |
Krückel et al. | Intuitive visual teleoperation for UGVs using free-look augmented reality displays | |
US20220269943A1 (en) | Systems and methods for training neural networks on a cloud server using sensory data collected by robots | |
CN107450573B (en) | Flight shooting control system and method, intelligent mobile communication terminal and aircraft | |
WO2019082301A1 (en) | Unmanned aircraft control system, unmanned aircraft control method, and program | |
CN109858309A (en) | A kind of method and apparatus identifying Road | |
EP4090000A1 (en) | Method and device for image processing, electronic device, and storage medium | |
WO2023056789A1 (en) | Obstacle identification method and system for automatic driving of agricultural machine, device, and storage medium | |
US11308324B2 (en) | Object detecting system for detecting object by using hierarchical pyramid and object detecting method thereof | |
CN106570482A (en) | Method and device for identifying body motion | |
CN116012445A (en) | Method and system for guiding robot to perceive three-dimensional space information of pedestrians based on depth camera | |
CN105578035A (en) | Image processing method and electronic device | |
KR101799351B1 (en) | Automatic photographing method of aerial video for rendering arbitrary viewpoint, recording medium and device for performing the method | |
US20240077882A1 (en) | Systems and methods for configuring a robot to scan for features within an environment | |
Aguilar et al. | Convolutional neuronal networks based monocular object detection and depth perception for micro UAVs | |
CN113378605B (en) | Multi-source information fusion method and device, electronic equipment and storage medium | |
WO2022027015A1 (en) | Systems and methods for preserving data and human confidentiality during feature identification by robotic devices | |
CN106949887A (en) | Locus method for tracing, locus follow-up mechanism and navigation system | |
Chen et al. | Image stitching on the unmanned air vehicle in the indoor environment | |
CN110187781A (en) | Method, system, equipment and the storage medium of picture are shown in a manner of waterfall stream |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||