CN106949887B - Space position tracking method, space position tracking device and navigation system - Google Patents
- Publication number
- CN106949887B (application CN201710186234.3A)
- Authority
- CN
- China
- Prior art keywords
- data
- sensor
- target device
- threshold value
- threshold
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
Abstract
The application discloses a spatial position tracking method, a spatial position tracking device and a navigation system. The method comprises: acquiring first data related to a target device, collected by at least one first sensor at a first time; activating at least one second sensor based on at least the first data and first reference data; and acquiring three-dimensional environment information related to the target device based on at least second data related to the target device collected by the at least one second sensor. By intermittently activating the sensor with the higher power consumption/data volume, the method and device of the embodiments of the application can effectively reduce the power consumption of spatial position tracking and navigation.
Description
Technical Field
The present application belongs to the field of visual navigation technologies, and in particular, to a spatial position tracking method, a spatial position tracking apparatus, and a navigation system.
Background
Autonomous navigation is becoming a popular area of research and development, and more and more smart devices (e.g., aircraft, unmanned/autonomous vehicles, and robots) will be equipped with autonomous navigation capability.
Autonomous navigation is a technology in which a device acquires information about its position, orientation, and surrounding environment through various sensing devices (also referred to as sensors), analyzes and processes the acquired information to build an environment model, and recognizes and plans a path. Sensing devices used in autonomous navigation include, but are not limited to: cameras, global positioning system (GPS) modules, accelerometers, gyroscopes, infrared sensors, depth sensors, position and attitude sensors, and the like. Depending on the functions they perform during navigation, different sensing devices acquire and process different amounts of data and therefore consume different amounts of power.
Disclosure of Invention
The embodiment of the application provides a navigation scheme with low power consumption.
In one possible embodiment, a spatial location tracking method is provided, the method comprising:
acquiring first data related to a target device, collected by at least one first sensor at a first time;
activating at least one second sensor based on at least the first data and first reference data;
and acquiring three-dimensional environment information related to the target device based on at least second data related to the target device collected by the at least one second sensor.
In another possible embodiment, there is provided a spatial position tracking apparatus, the apparatus comprising:
a first acquisition module, configured to acquire first data related to a target device, collected by at least one first sensor at a first time;
a control module, configured to activate at least one second sensor based on at least the first data and first reference data;
and a second acquisition module, configured to acquire three-dimensional environment information related to the target device based on at least second data related to the target device collected by the at least one second sensor.
In another possible embodiment, a navigation system is provided, which includes the above-mentioned spatial position tracking device;
the at least one first sensor;
the at least one second sensor; and
and a navigation module, configured to navigate the target device based on the three-dimensional environment information obtained by the device.
By intermittently activating the sensor with the higher power consumption/data volume, the method and device provided by the embodiments of the application can effectively reduce the power consumption of spatial position tracking and navigation.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; the drawings in the following description illustrate only some embodiments of the present application.
FIG. 1 is a flow chart of a spatial location tracking method according to a first embodiment of the present application;
fig. 2(a) to 2(c) are schematic diagrams of the relevant principle of the method of the first embodiment of the present application;
fig. 3(a) to 3(d) are block diagrams illustrating several examples of a spatial position tracking device according to a second embodiment of the present application;
fig. 4 is a block diagram illustrating an example of a navigation system according to a third embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the present application more apparent and understandable, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It will be understood by those skilled in the art that the terms "first", "second", etc. in this application are used only to distinguish one device, module, parameter, etc. from another, and do not denote any particular technical meaning or necessary order between them.
In the embodiments of the present application, the target device refers to an aircraft, a vehicle, a robot, or any other device that can move autonomously or be driven by another movable device; such a device includes or carries the sensors described in the embodiments of the present application, and it is the object of the position tracking and navigation discussed herein. The at least one first sensor is a sensor group that remains activated over long periods (always activated, periodically activated, or activated as needed); this group may comprise one or more sensors, and when it comprises multiple sensors they may be all of the same type or of different types. The at least one second sensor is a sensor group that is enabled or disabled based on the data collected by the at least one first sensor; this group may likewise comprise one or more sensors of the same or different types. The at least one second sensor collects data when enabled and does not collect data when disabled. When activated, the at least one second sensor may consume more power and/or produce more data than the at least one first sensor.
The sensors described in the embodiments of the present application may be, for example, position and attitude sensors used in a spatial visual navigation system and three-dimensional sensing sensors for perceiving three-dimensional environment information (e.g., scene depth information). In such a system, the three-dimensional sensing sensor moves with the system during continuous acquisition and its sensing area changes accordingly, so in the resulting stream of three-dimensional sensing data the data acquired at adjacent times may overlap very heavily and at least the corresponding part of the three-dimensional data is redundant. Storing these data occupies substantial system resources, and computing over them consumes substantial computational resources. A significant reduction in overall system power consumption can therefore be achieved by reasonably disabling the three-dimensional sensing sensor. The present application provides a scheme that can effectively reduce the power consumption of a navigation system.
Referring to fig. 1, fig. 1 is a flowchart illustrating a spatial position tracking method according to a first embodiment of the present application. The method may be implemented by any suitable apparatus; the first sensor, the second sensor, and the target device involved are described below. As shown in fig. 1, the method comprises the following steps:
s120, first data which are collected by at least one first sensor at a first moment and are related to target equipment are obtained.
In the method of this embodiment, depending on the sensor type, the first data collected by the at least one first sensor may be position and/or attitude data related to the target device, including the position, the attitude, a change in position, and/or a change in attitude of the target device. In a possible implementation, the position data may be specific coordinates: three-dimensional coordinates in a coordinate system pre-established in the space where the target device is located (i.e., a coordinate system whose origin is a preset point in that space), three-dimensional coordinates in the terrestrial coordinate system, and so on. The attitude data may refer to a roll angle Φ (roll), a yaw angle ψ (yaw), and a pitch angle θ (pitch). Position/attitude change data refer to the change of the position/attitude at the current (first) time with respect to the position/attitude at any previous time. Sensors capable of collecting such first data include, but are not limited to: GPS modules, accelerometers, gyroscopes, infrared sensors, and the like. When the target device is a robot with a transmission motor, the at least one first sensor may also be the transmission motor itself, since the change in the robot's position and attitude can be derived from the phase output of the motor.
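As a concrete illustration of what such first data might look like in software, the following is a minimal sketch (not from the patent; names such as PoseSample and pose_change are illustrative) of a position/attitude sample and of the change of the current sample relative to a reference sample:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class PoseSample:
    t: float                                 # acquisition time of this first-sensor reading (s)
    position: Tuple[float, float, float]     # x, y, z in a preset coordinate system (or terrestrial coordinates)
    attitude: Tuple[float, float, float]     # roll phi, yaw psi, pitch theta, in radians

def pose_change(current: PoseSample, reference: PoseSample):
    """Change in position and attitude of the current sample relative to a reference sample."""
    d_pos = tuple(c - r for c, r in zip(current.position, reference.position))
    d_att = tuple(c - r for c, r in zip(current.attitude, reference.attitude))
    return d_pos, d_att
```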
S140, activating at least one second sensor based on at least the first data and first reference data.
In the method of this embodiment, the first reference data may be preset, or may be data collected by the at least one first sensor at any time during the movement of the target device; for example, the first reference data may be the data collected when the at least one first sensor is activated. The reference data is set so that the activation of the at least one second sensor is triggered in a way that meets the functional requirements placed on the at least one second sensor.
For example, if the at least one second sensor is a three-dimensional sensing sensor, its primary function is to obtain three-dimensional information about the environment in which the target device is located (such sensors include, but are not limited to, structured-light depth cameras, time-of-flight depth cameras, and depth cameras based on binocular triangulation). In continuous three-dimensional environment sensing data, adjacent data may overlap heavily. As shown in fig. 2(a), two-dimensional image information of the target device's environment can be recovered over continuous time from the data acquired by the at least one first sensor, and the three-dimensional data corresponding to adjacent times may likewise overlap heavily. Whether the at least one second sensor is activated can therefore be triggered by taking the data corresponding to the two-dimensional image at some earlier time as reference data and setting a reasonable threshold. In the example shown in fig. 2(a), the image data three frames earlier is used as the reference data and the threshold is set to a 50% difference between two frames of image data: the data of the fourth frame differs from its reference data (i.e., the data of the first frame) by more than 50%, and the at least one second sensor can be activated in response. Similarly, the seventh frame has 50% new data compared with the data of the fourth frame, and the at least one second sensor is also activated in response. Note that the reference data may differ for the first data at each time; for example, the first data acquired at every third time may be used as the reference data, or each piece of first data whose difference from the current reference data is not less than the corresponding first threshold may itself become the new first reference data.
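A minimal sketch of this triggering rule, assuming the first-sensor data has been turned into a sequence of two-dimensional image frames as in fig. 2(a); the threshold of 0.5 corresponds to the 50% difference above, and the frame sequence, the activate_second_sensor callback and the per-pixel tolerance are illustrative assumptions, not the patent's API:

```python
import numpy as np

def frame_difference(frame: np.ndarray, reference: np.ndarray) -> float:
    """Fraction of pixels that changed noticeably between a frame and the reference frame."""
    return float(np.mean(np.abs(frame.astype(float) - reference.astype(float)) > 10.0))  # 10.0: assumed pixel tolerance

def track(frames, activate_second_sensor, threshold: float = 0.5):
    reference = None
    for frame in frames:                                   # frames recovered from the first sensor over time
        if reference is None:
            reference = frame                              # e.g. the first frame serves as initial reference data
            continue
        if frame_difference(frame, reference) >= threshold:
            activate_second_sensor()                       # difference not less than the first threshold: activate
            reference = frame                              # the triggering frame becomes the new reference data
```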
S160, acquiring three-dimensional environment information related to the target device based on at least second data related to the target device collected by the at least one second sensor.
In the method of this embodiment, scene information of the target device, preferably scene depth information, may be recovered at least from the second data. For example, if the second data is a three-dimensional depth image of the scene, the average of the depth values of all pixels in the current depth image can be taken as the depth data of that image.
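For instance, the per-image depth statistic described above could be computed as follows (a sketch; the convention that zero marks an invalid depth pixel is an assumption):

```python
import numpy as np

def scene_depth(depth_image: np.ndarray) -> float:
    """Average depth over all valid pixels of the current depth image."""
    valid = depth_image > 0          # assumed convention: zero marks missing/invalid depth
    return float(depth_image[valid].mean()) if valid.any() else 0.0
```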
In summary, the method of this embodiment can effectively reduce the power consumption of spatial position tracking by intermittently activating the sensor with the higher power consumption/data volume.
As mentioned above, the activation of the at least one second sensor is related to the first reference data, and specifically, the step S140 may further include:
s142, responding to the fact that the difference between the first data and the first reference data is not smaller than a first threshold value, and starting the at least one second sensor.
The first threshold may be set as needed. For example, as described in connection with fig. 2(a), the difference between the current image frame data that triggers activation of the three-dimensional sensing sensor and its reference image frame data, i.e. the first threshold, is 50%. If the field of view of the three-dimensional sensing sensor is a and the first data includes a position change ΔT and a rotation change ΔQ of the target device, as shown in fig. 2(b) and 2(c), the first threshold corresponding to the position change ΔT may be set to 2 × D × S × tan(a/2), and the first threshold corresponding to the rotation change ΔQ may be set to a × S, where D is the depth data of the reference image, which may be, for example, the average of the depth values of all pixels in that image.
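Evaluated numerically, these thresholds behave as in the sketch below (the field of view, depth and factor S are assumed example values, with S read as the proportion factor appearing in the expressions above, e.g. a 50% change of the sensed area):

```python
import math

a = math.radians(60)   # assumed field of view of the three-dimensional sensing sensor
D = 2.0                # assumed average depth of the reference image, in metres
S = 0.5                # assumed proportion factor (e.g. a 50% change)

threshold_dT = 2 * D * S * math.tan(a / 2)   # first threshold for the position change ΔT
threshold_dQ = a * S                         # first threshold for the rotation change ΔQ
print(round(threshold_dT, 2), round(threshold_dQ, 2))   # ≈ 1.15 m and ≈ 0.52 rad for these values
```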
S146, acquiring the second data collected by the at least one second sensor at a second time.
As mentioned above, the scene that meets the condition for activating the at least one second sensor is the scene to be tracked; at this moment the at least one first sensor should be made to rest, so that the at least one second sensor can collect the data corresponding to that scene, i.e. the current scene of the target device corresponding to the first data. Therefore, before step S146, the method may further include:
s144, in response to the at least one second sensor being activated, the at least one first sensor is made to be stationary.
After the second data is obtained and the three-dimensional environment information related to the target device is obtained from it, the at least one first sensor continues to work. The at least one second sensor may remain activated, or whether to disable the activated at least one second sensor may be determined automatically or according to the data collected by the at least one first sensor at the next time, further saving power. Specifically, the method of this embodiment may further include:
s130, third data which are collected by at least one first sensor at a third moment and are related to the target device are obtained.
S150, in response to the difference between the third data and the second reference data not exceeding a second threshold, disabling the at least one second sensor.
The second reference data may be the same as or different from the first reference data. The second threshold may be the same as or different from the first threshold, and may be adjusted relative to the first threshold according to the three-dimensional environment information obtained in step S160; for example, when the three-dimensional environment information obtained from the second data is insufficient, the first threshold may be lowered to obtain the second threshold. In such an implementation, the method of this embodiment may further include:
s162, adjusting the first threshold value at least according to the three-dimensional environment information to obtain a second threshold value.
In addition, depending on the functions and roles of the apparatus executing the method of this embodiment, in one possible implementation the method may further include:
and S182, sending the second data and/or the three-dimensional environment information for subsequent navigation.
In another possible implementation manner, the method of this embodiment may further include the steps of:
and S184, storing the second data and/or the three-dimensional environment information.
When the second data and/or the three-dimensional environment information are stored or output, the data may first be integrated so that the output or stored data are more convenient to use. For example, the position and attitude information may be used as a coordinate marker for the content of each frame of the three-dimensional environment information, so that each frame of the integrated data contains a piece of position information; or the RGB information may be fused with the corresponding depth information, so that the integrated data is three-dimensional environment information containing color information.
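A sketch of these two integration options (the pose tag and the RGB/depth pairing follow the examples above; the dictionary layout and field names are assumptions):

```python
from typing import Optional
import numpy as np

def integrate_frame(pose, depth_image: np.ndarray, rgb_image: Optional[np.ndarray] = None) -> dict:
    """Tag a depth frame with the pose it was captured at and, optionally, pair it with color."""
    frame = {"pose": pose, "depth": depth_image}        # pose acts as the coordinate marker of this frame
    if rgb_image is not None:
        # fuse RGB with the corresponding depth values: one grid of (depth, R, G, B) per pixel
        frame["rgbd"] = np.dstack([depth_image[..., None].astype(float), rgb_image.astype(float)])
    return frame
```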
In conclusion, the method of this embodiment helps realize navigation with lower power consumption.
It is understood by those skilled in the art that, in the method according to the embodiments of the present application, the sequence numbers of the steps do not mean the execution sequence, and the execution sequence of the steps should be determined by their functions and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Further, embodiments of the present application also provide a computer-readable medium comprising computer-readable instructions that, when executed, perform the operations of the steps of the method in the embodiment shown in fig. 1 above.
Referring to fig. 3(a), fig. 3(a) is a block diagram of a spatial position tracking device 300 according to a second embodiment of the present application. The device 300 may be part of a navigation system or be a navigation system itself. As shown in fig. 3(a), the apparatus 300 includes: a first acquisition module 320, a control module 340, and a second acquisition module 360. Wherein,
the first acquiring module 320 is configured to acquire first data related to a target device, which is acquired by at least one first sensor at a first time.
In the apparatus of this embodiment, depending on the sensor type, the first data collected by the at least one first sensor may be position and/or attitude data related to the target device, including the position, the attitude, a change in position, and/or a change in attitude of the target device. In a possible implementation, the position data may be specific coordinates: three-dimensional coordinates in a coordinate system pre-established in the space where the target device is located (i.e., a coordinate system whose origin is a preset point in that space), three-dimensional coordinates in the terrestrial coordinate system, and so on. The attitude data may refer to a roll angle Φ (roll), a yaw angle ψ (yaw), and a pitch angle θ (pitch). Position/attitude change data refer to the change of the position/attitude at the current (first) time with respect to the position/attitude at any previous time. Sensors capable of collecting such first data include, but are not limited to: GPS modules, accelerometers, gyroscopes, infrared sensors, and the like. When the target device is a robot with a transmission motor, the at least one first sensor may also be the transmission motor itself, since the change in the robot's position and attitude can be derived from the phase output of the motor.
The control module 340 is configured to activate at least one second sensor based on at least the first data and the first reference data.
In the apparatus 300 of this embodiment, the first reference data may be preset, or may be data collected by the at least one first sensor at any time during the movement of the target device; for example, the first reference data may be the data collected when the at least one first sensor is activated. The reference data is set so that the activation of the at least one second sensor is triggered in a way that meets the functional requirements placed on the at least one second sensor.
For example, if the at least one second sensor is a three-dimensional sensing sensor, its primary function is to obtain three-dimensional information about the environment in which the target device is located (such sensors include, but are not limited to, structured-light depth cameras, time-of-flight depth cameras, and depth cameras based on binocular triangulation). In continuous three-dimensional sensing data, adjacent data may overlap heavily. As shown in fig. 2(a), two-dimensional image information of the target device's environment can be recovered over continuous time from the data acquired by the at least one first sensor, and the three-dimensional data corresponding to adjacent times may likewise overlap heavily. Whether the at least one second sensor is activated can therefore be triggered by taking the data corresponding to the two-dimensional image at some earlier time as reference data and setting a reasonable threshold. In the example shown in fig. 2(a), the image data three frames earlier is used as the reference data and the threshold is set to a 50% difference between two frames of image data: the data of the fourth frame differs from its reference data (i.e., the data of the first frame) by more than 50%, and the at least one second sensor can be activated in response. Similarly, the seventh frame has 50% new data compared with the data of the fourth frame, and the at least one second sensor is also activated in response. Note that the reference data may differ for the first data at each time; for example, the first data acquired at every third time may be used as the reference data, or each piece of first data whose difference from the current reference data is not less than the corresponding first threshold may itself become the new first reference data.
A second obtaining module 360, configured to obtain three-dimensional environment information related to the target device based on at least second data related to the target device collected by the at least one second sensor.
In the apparatus 300 of this embodiment, the second obtaining module 360 may recover the three-dimensional environment information of the target device, preferably the scene depth information of the target device, at least from the second data. For example, if the second data is a three-dimensional depth image of the scene, the second obtaining module 360 may take the average of the depth values of all pixels in the current depth image as the depth data of that image.
In summary, the device of this embodiment can effectively reduce the power consumption of spatial position tracking by intermittently activating the sensor with the higher power consumption/data volume.
As mentioned above, the activation of the at least one second sensor is related to the first reference data, and specifically, as shown in fig. 3(b), the control module 340 may further comprise a control unit 342 and an acquisition unit 344, wherein:
the control unit 342 is configured to activate the at least one second sensor in response to the first data not differing from the first reference data by less than a first threshold.
The first threshold may be set as needed. For example, as described in connection with fig. 2(a), the difference between the current image frame data that triggers activation of the three-dimensional sensing sensor and its reference image frame data, i.e. the first threshold, is 50%. If the field of view of the three-dimensional sensing sensor is a and the first data includes a position change ΔT and a rotation change ΔQ of the target device, as shown in fig. 2(b) and 2(c), the first threshold corresponding to the position change ΔT may be set to 2 × D × S × tan(a/2), and the first threshold corresponding to the rotation change ΔQ may be set to a × S, where D is the depth data of the reference image, which may be, for example, the average of the depth values of all pixels in that image.
The obtaining unit 344 is configured to obtain the second data acquired by the at least one second sensor at a second time.
As mentioned above, the scene that meets the condition for activating the at least one second sensor is the scene to be tracked; at this moment the at least one first sensor should be made to rest, so that the at least one second sensor can collect the data corresponding to that scene, i.e. the current scene of the target device corresponding to the first data. Thus, the control unit 342 is further configured to, in response to activating the at least one second sensor, bring the at least one first sensor to rest.
After the obtaining unit 344 obtains the second data and the three-dimensional environment information related to the target device is obtained from it, the at least one first sensor continues to operate. The control unit 342 may keep the at least one second sensor activated, or may determine whether to disable the activated at least one second sensor according to the data collected by the at least one first sensor at the next time, thereby further saving power. Specifically:
the first obtaining module 320 is further configured to obtain third data, which is collected by the at least one first sensor at a third time and related to the target device.
The control module 340 is further configured to disable the at least one second sensor in response to the third data not differing from the second reference data by more than a second threshold.
The second reference data may be the same as or different from the first reference data. The second threshold may be the same as or different from the first threshold, and may be adjusted relative to the first threshold according to the three-dimensional environment information obtained by the second obtaining module 360. For example, when the three-dimensional environment information obtained from the second data is insufficient, the first threshold may be reduced to a third threshold and the second threshold to a fourth threshold. In such implementations, the control module 340 may be further configured to adjust the first and second thresholds according to at least the three-dimensional environment information, obtaining the third and fourth thresholds.
In addition, depending on the functions and roles of the apparatus of this embodiment, in one possible implementation, as shown in fig. 3(c), the apparatus 300 may further include:
a sending module 382, configured to send the second data and/or the three-dimensional environment information for the needs of an external system in subsequent navigation. For example, the sending module 382 may transmit through a high-speed interface such as a USB interface, an HDMI interface, or a network interface, or the sending module 382 may itself be such an interface.
In another possible implementation manner, as shown in fig. 3(d), the apparatus 300 of this embodiment may further include:
a storage module 384 for storing the second data and/or the three-dimensional environment information.
When the second data and/or the three-dimensional environment information are stored or output, the data may first be integrated so that the output or stored data are more convenient to use. For example, the position and attitude information may be used as a coordinate marker for the content of each frame of the three-dimensional environment information, so that each frame of the integrated data contains a piece of position information; or the RGB information may be fused with the corresponding depth information, so that the integrated data is three-dimensional environment information containing color information.
In conclusion, the device of this embodiment helps realize navigation with lower power consumption.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a navigation system according to a third embodiment of the present application. As shown in fig. 4, the system 400 includes the spatial position tracking device 300 shown in any one of fig. 3(a) to 3(d) and a navigation module 460. The system 400 further includes at least one first sensor 420 and at least one second sensor 440. The system 400 may be, but is not limited to: a radio-frequency-based spatial positioning system, an image-based visual spatial positioning system, an infrared positioning system, and the like.
The at least one first sensor 420 is configured to collect the first data related to the target device. As described above in connection with the method of fig. 1, the first data may include data relating to the position and/or attitude of the target device. Such first sensors 420 include, but are not limited to: GPS modules, accelerometers, gyroscopes, binocular or monocular cameras of an image-based visual spatial positioning system, infrared sensors, drive motors, and the like.
The at least one second sensor 440 is configured to collect the second data related to the target device. As described above in connection with the method of fig. 1, the second data may include three-dimensional scene data of the environment in which the target device is located; such sensors include, for example, structured-light depth cameras, time-of-flight depth cameras, and depth cameras that derive depth from binocular triangulation.
For further implementation of the functions of the spatial position tracking device 300 and the corresponding spatial position tracking steps in the system 400, reference may be made to the corresponding descriptions in the above method and apparatus embodiments, which are not repeated here. The navigation module 460, which navigates based on the three-dimensional environment information provided by the spatial position tracking device 300, uses techniques that are mature in the art and is likewise not described further here.
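As a rough illustration of how the parts of system 400 could be wired together (all class and method names below are illustrative, not the patent's API): the first sensor group stays on, the tracking device gates the second sensor group, and the navigation module consumes the resulting three-dimensional environment information.

```python
class NavigationSystem:
    """Sketch of system 400: sensors 420/440, tracking device 300, navigation module 460."""

    def __init__(self, first_sensors, second_sensors, tracker, navigator):
        self.first_sensors = first_sensors     # long-term active group, e.g. GPS/IMU (420)
        self.second_sensors = second_sensors   # gated group, e.g. depth cameras (440)
        self.tracker = tracker                 # spatial position tracking device (300)
        self.navigator = navigator             # navigation module (460)

    def step(self):
        first_data = [s.read() for s in self.first_sensors]              # always collected
        env_info = self.tracker.update(first_data, self.second_sensors)  # may enable/disable the depth cameras
        if env_info is not None:
            self.navigator.navigate(env_info)  # plan/update the path from the 3D environment information
```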
In the embodiments provided in the present application, it should be understood that the disclosed system, terminal, and method can be implemented in other ways. For example, the above-described embodiments of the spatial position tracking method and apparatus are merely illustrative: the division into modules is only a logical division, and other divisions are possible in actual implementation; for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication link shown or discussed may be realized through interfaces, and the indirect coupling or communication link between modules may be electrical, mechanical, or in another form.
The modules described as separate parts may or may not be physically separate, and parts shown as modules may or may not be physical modules; they may be located in one place or distributed over a plurality of network nodes. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
If the integrated module is implemented in the form of a software functional module and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing over the prior art, or all or part of the technical solution, may be embodied in a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In view of the above description of the spatial position tracking method and apparatus provided in the present application, those skilled in the art will recognize that changes may be made in the embodiments and applications of the method and apparatus according to the concepts of the embodiments of the present application.
Claims (12)
1. A method for spatial position tracking, the method comprising:
acquiring first data related to a target device, acquired by at least one first sensor at a first time, wherein the first data comprises: position and/or attitude data relating to the target device;
taking data collected by the at least one first sensor at any time during the movement of the target device as first reference data, and activating at least one second sensor when the difference between the first data and the first reference data is not less than a first threshold, wherein the second sensor is a three-dimensional sensing sensor;
acquiring three-dimensional environment information related to the target device based on at least second data related to the target device collected by the at least one second sensor, wherein the second data comprises: an image related to the target device carrying scene depth data.
2. The method of claim 1, wherein said obtaining said second data collected by said at least one second sensor at a second time further comprises:
the at least one first sensor is made stationary in response to activating the at least one second sensor.
3. The method of claim 1, wherein the first data further comprises: a change in a position and/or a pose of the target device.
4. The method of claim 1, wherein the first reference data is data collected at a time of activation of the at least one first sensor.
5. The method of any of claims 1 to 4, further comprising:
acquiring third data which is acquired by at least one first sensor at a third moment and is related to the target equipment;
disabling the at least one second sensor in response to the third data not differing from second reference data by more than a second threshold.
6. The method of claim 5, wherein the second reference data is the same as the first reference data.
7. The method of claim 5, wherein the second threshold is the same as the first threshold.
8. The method of claim 5, wherein the method further comprises:
when the three-dimensional environment information obtained from the second data is insufficient, adjusting the first threshold and the second threshold at least according to the three-dimensional environment information to obtain a third threshold and a fourth threshold, wherein the first threshold is reduced to the third threshold and the second threshold is reduced to the fourth threshold.
9. A spatial position tracking apparatus, the apparatus comprising:
a first obtaining module, configured to obtain first data related to a target device, collected by at least one first sensor at a first time, where the first data includes: position and/or attitude data relating to the target device;
a control module, configured to take data collected by the at least one first sensor at any time during the movement of the target device as first reference data, and to activate at least one second sensor when the difference between the first data and the first reference data is not less than a first threshold, wherein the second sensor is a three-dimensional sensing sensor;
a second obtaining module, configured to obtain three-dimensional environment information related to the target device based on at least second data related to the target device collected by the at least one second sensor, where the second data includes: an image associated with the target device with scene depth data.
10. The apparatus of claim 9, wherein the control module is further configured to deactivate the at least one first sensor in response to activating the at least one second sensor.
11. The apparatus according to any one of claims 9 to 10, wherein the first acquiring module is further configured to acquire third data related to the target device, acquired by at least one first sensor at a third time;
the control module is further configured to disable the at least one second sensor in response to the third data not differing from second reference data by more than a second threshold.
12. A navigation system, the system comprising:
the spatial position tracking device of any one of claims 9 to 11;
the at least one first sensor;
the at least one second sensor; and
and a navigation module, configured to navigate the target device based on the three-dimensional environment information obtained by the device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201710186234.3A (CN106949887B) | 2017-03-27 | 2017-03-27 | Space position tracking method, space position tracking device and navigation system
Publications (2)
Publication Number | Publication Date
---|---
CN106949887A | 2017-07-14
CN106949887B | 2021-02-09
Family
ID=59473643
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201710186234.3A (CN106949887B, Active) | 2017-03-27 | 2017-03-27 | Space position tracking method, space position tracking device and navigation system
Country Status (1)
Country | Link
---|---
CN | CN106949887B (en)
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104574386A (en) * | 2014-12-26 | 2015-04-29 | 速感科技(北京)有限公司 | Indoor positioning method based on three-dimensional environment model matching |
CN104866261A (en) * | 2014-02-24 | 2015-08-26 | 联想(北京)有限公司 | Information processing method and device |
CN106101997A (en) * | 2016-05-26 | 2016-11-09 | 深圳市万语网络科技有限公司 | A kind of localization method and alignment system with automatically adjusting location frequency |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3485336B2 (en) * | 1992-09-08 | 2004-01-13 | キャタピラー インコーポレイテッド | Method and apparatus for determining the position of a vehicle |
JP5051839B2 (en) * | 2007-10-29 | 2012-10-17 | 国立大学法人東京工業大学 | Target position measuring device |
CN103677225A (en) * | 2012-09-03 | 2014-03-26 | 联想(北京)有限公司 | Data processing method and first terminal device |
EP3428766B1 (en) * | 2014-09-05 | 2021-04-07 | SZ DJI Technology Co., Ltd. | Multi-sensor environmental mapping |
CN106441320A (en) * | 2015-08-06 | 2017-02-22 | 平安科技(深圳)有限公司 | Positioning operation control method, vehicle and electronic device |
CN105866810B (en) * | 2016-03-23 | 2019-04-30 | 福州瑞芯微电子股份有限公司 | The GPS low power targeting methods and device of a kind of electronic equipment |
CN106056664B (en) * | 2016-05-23 | 2018-09-21 | 武汉盈力科技有限公司 | A kind of real-time three-dimensional scene reconstruction system and method based on inertia and deep vision |
CN106441408A (en) * | 2016-07-25 | 2017-02-22 | 肇庆市小凡人科技有限公司 | Low-power measuring system |
CN206146450U (en) * | 2016-07-25 | 2017-05-03 | 肇庆市小凡人科技有限公司 | Measurement system of low -power consumption |
2017-03-27: Application CN201710186234.3A filed (CN); granted as CN106949887B, status Active.
Also Published As
Publication number | Publication date |
---|---|
CN106949887A (en) | 2017-07-14 |
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant