CN111753938B - Position acquisition method and device and electronic equipment - Google Patents


Publication number
CN111753938B
Authority
CN
China
Prior art keywords
data
position information
electronic equipment
image
electronic
Legal status
Active
Application number
CN202010582277.5A
Other languages
Chinese (zh)
Other versions
CN111753938A (en)
Inventor
杨东清
陆柳慧
刘万凯
罗圣谚
颜长建
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Application filed by Lenovo Beijing Ltd
Priority to CN202010582277.5A
Publication of CN111753938A
Application granted
Publication of CN111753938B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 17/00: Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K 17/0022: Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations, arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06K 17/0029: Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations, arrangements or provisions for transferring data to distant stations, e.g. from a sensing device, the arrangement being specially adapted for wireless interrogation of grouped or bundled articles tagged with wireless record carriers
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00, by using measurements of speed or acceleration
    • G01C 21/12: Navigation by using measurements of speed or acceleration, executed aboard the object being navigated; Dead reckoning
    • G01C 21/16: Navigation by dead reckoning, by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165: Inertial navigation combined with non-inertial navigation instruments

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Position Fixing By Use Of Radio Waves (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a position acquisition method, a position acquisition apparatus and an electronic device. The position acquisition method comprises the following steps: obtaining scanning data, motion data and image data, wherein the scanning data are obtained by an RFID reader scanning an electronic tag arranged in the surrounding environment, the electronic tag corresponds to position information, the motion data are collected by an IMU, and the image data are collected by an image acquisition device; performing image recognition on the image data to obtain a recognition result, the recognition result at least comprising a light parameter of the environment in which the electronic device is located; when the light parameter of the environment in which the electronic device is located is greater than or equal to a parameter threshold, processing the motion data in combination with the scanning data and the image data to obtain state data of the electronic device; and when the light parameter is smaller than the parameter threshold, processing the motion data in combination with the scanning data alone to obtain the state data, wherein the state data at least comprise position data of the electronic device.

Description

Position acquisition method and device and electronic equipment
Technical Field
The present application relates to the field of positioning technologies, and in particular to a position acquisition method and apparatus, and an electronic device.
Background
In special scenarios such as underground operations, the movements of a user are highly unpredictable, so the user's current position is difficult to acquire accurately. As a result, when the user strays into a dangerous area, a warning may not be issued in time and accidents can easily occur.
Therefore, to avoid user safety accidents caused by inaccurate positioning, a technical solution that can accurately acquire the user's current position is needed.
Disclosure of Invention
In view of this, the present application provides a position acquisition method, a position acquisition apparatus, and an electronic device, including:
A position acquisition method, comprising:
obtaining scanning data, motion data and image data, wherein the scanning data are obtained by a radio frequency identification (RFID) reader scanning an electronic tag arranged in the surrounding environment, the electronic tag corresponds to position information, the motion data are collected by an inertial measurement unit (IMU), the image data are collected by an image acquisition device, and the RFID reader, the IMU and the image acquisition device are components arranged on the same electronic device;
performing image recognition on the image data to obtain a recognition result, the recognition result at least comprising a light parameter of the environment in which the electronic device is located;
when the light parameter of the environment in which the electronic device is located is greater than or equal to a parameter threshold, processing the motion data in combination with the scanning data and the image data to obtain state data of the electronic device;
and when the light parameter of the environment in which the electronic device is located is smaller than the parameter threshold, processing the motion data in combination with the scanning data to obtain the state data of the electronic device, wherein the state data at least comprise position data of the electronic device.
Preferably, in the above method, processing the motion data in combination with the scanning data and the image data to obtain the state data of the electronic device comprises:
when the image data contain an image area of the electronic tag: obtaining first position information from the scanning data; extracting image feature points from multiple frames of the image data that contain the same object and processing the image feature points of adjacent frames to obtain second position information; integrating the acceleration in the motion data to obtain third position information; and obtaining the position data of the electronic device according to the first position information, the second position information and the third position information;
when the image data do not contain an image area of the electronic tag: extracting image feature points from multiple frames of the image data that contain the same object and processing the image feature points of adjacent frames to obtain fourth position information; integrating the acceleration in the motion data to obtain fifth position information; and obtaining the position data of the electronic device according to the fourth position information and the fifth position information.
Preferably, in the above method, obtaining the position data of the electronic device according to the first position information, the second position information and the third position information comprises:
performing weighted averaging of the X-axis coordinate values of the first, second and third position information to obtain a first coordinate value of the electronic device on the X axis;
performing weighted averaging of the Y-axis coordinate values of the first, second and third position information to obtain a second coordinate value of the electronic device on the Y axis;
and performing weighted averaging of the Z-axis coordinate values of the first, second and third position information to obtain a third coordinate value of the electronic device on the Z axis, wherein the first, second and third coordinate values constitute the position data of the electronic device.
Preferably, obtaining the position data of the electronic device according to the fourth position information and the fifth position information comprises:
performing weighted averaging of the X-axis coordinate values of the fourth and fifth position information to obtain a fourth coordinate value of the electronic device on the X axis;
performing weighted averaging of the Y-axis coordinate values of the fourth and fifth position information to obtain a fifth coordinate value of the electronic device on the Y axis;
and performing weighted averaging of the Z-axis coordinate values of the fourth and fifth position information to obtain a sixth coordinate value of the electronic device on the Z axis, wherein the fourth, fifth and sixth coordinate values constitute the position data of the electronic device.
Preferably, when the image data contain an image area of the electronic tag, the above method further comprises:
calibrating component parameters of the IMU at least according to the position data of the electronic device and the third position information.
Preferably, in the above method, calibrating the component parameters of the IMU at least according to the position data of the electronic device and the third position information comprises:
obtaining error data at least according to the third position information and the position data of the electronic device corresponding to the third position information;
obtaining a parameter matrix according to the error data, the parameter matrix comprising a plurality of matrix elements;
and setting the element values of the matrix elements as the component parameters in the IMU corresponding to the matrix elements.
Preferably, in the above method, processing the motion data in combination with the scanning data to obtain the state data of the electronic device comprises:
integrating the motion data to obtain position data of the electronic device;
when the scanning data contain position information, updating the position data of the electronic device to the position information in the scanning data;
and when the scanning data do not contain position information, adjusting the position data of the electronic device at least according to the position error amount of the electronic device,
wherein the position error amount of the electronic device is obtained either from the speed data of the electronic device or from the historical error amount of the electronic device, the speed data being obtained by integrating the motion data.
A position acquisition apparatus, comprising:
an RFID reader, configured to scan an electronic tag arranged in the surrounding environment to obtain scanning data, the electronic tag corresponding to position information;
an IMU, configured to collect motion data;
an image acquisition device, configured to collect image data, wherein the RFID reader, the IMU and the image acquisition device are components arranged on the same electronic device;
and a positioning processing means, configured to obtain the scanning data, the motion data and the image data; perform image recognition on the image data to obtain a recognition result at least comprising a light parameter of the environment in which the electronic device is located; when the light parameter is greater than or equal to a parameter threshold, process the motion data in combination with the scanning data and the image data to obtain state data of the electronic device; and when the light parameter is smaller than the parameter threshold, process the motion data in combination with the scanning data to obtain the state data, the state data at least comprising position data of the electronic device.
An electronic device, comprising:
a memory, configured to store an application program and the data generated when the application program runs;
and a processor, configured to execute the application program to implement: obtaining scanning data, motion data and image data, wherein the scanning data are obtained by a radio frequency identification (RFID) reader scanning an electronic tag arranged in the surrounding environment, the electronic tag corresponds to position information, the motion data are collected by an inertial measurement unit (IMU), the image data are collected by an image acquisition device, and the RFID reader, the IMU and the image acquisition device are components arranged on the same electronic device;
performing image recognition on the image data to obtain a recognition result, the recognition result at least comprising a light parameter of the environment in which the electronic device is located;
when the light parameter of the environment in which the electronic device is located is greater than or equal to a parameter threshold, processing the motion data in combination with the scanning data and the image data to obtain state data of the electronic device;
and when the light parameter of the environment in which the electronic device is located is smaller than the parameter threshold, processing the motion data in combination with the scanning data to obtain the state data of the electronic device, wherein the state data at least comprise position data of the electronic device.
According to the above scheme, the position acquisition method, apparatus and electronic device provided by the application obtain scanning data, motion data and image data through the RFID reader, the IMU and the image acquisition device arranged on the electronic device. The image data are then recognized to judge whether the light parameter of the environment in which the electronic device is located is greater than or equal to a parameter threshold, thereby determining the illumination condition of that environment, and the motion data of the IMU are processed in combination with different data under different illumination conditions: when illumination is good, the motion data are processed in combination with the scanning data and the image data to position the electronic device; when illumination is poor, the motion data are processed in combination with the scanning data alone. Thus, by configuring an image acquisition device and an RFID reader on the electronic device to assist the IMU, the electronic device can be positioned without relying on complex positioning equipment such as a global positioning instrument, and the positioning accuracy is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart of a position acquisition method according to an embodiment of the present application;
FIG. 2 is a diagram illustrating an example of an application of an embodiment of the present application;
fig. 3-6 are partial flow charts of a position acquisition method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a position acquisition device according to a second embodiment of the present application;
fig. 8 is a schematic structural diagram of a position acquisition device according to a third embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to a third embodiment of the present application;
fig. 10 and 11 are diagrams illustrating flows applicable to the positioning of a miner according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Referring to fig. 1, an implementation flowchart of a position acquisition method provided in an embodiment of the present application is shown. The method may be applied to an electronic device that is capable of data processing and is provided with a radio frequency identification (RFID) reader, an image acquisition device and an inertial measurement unit (IMU). The technical solution in this embodiment is mainly used to position the electronic device on the basis of the RFID reader and the IMU with visual assistance, so as to improve the positioning accuracy.
Specifically, the method in this embodiment may include the following steps:
step 101: scan data, motion data, and image data are obtained.
The scanning data are obtained by the RFID reader scanning an electronic tag arranged in the environment in which it is located, and the electronic tag corresponds to position information. As shown in fig. 2, electronic tags may be arranged at different positions in a specific environment such as a mine, for example as two-dimensional codes or barcodes, and the tag arranged at each position corresponds to the position information of that position; the position information corresponding to each electronic tag may include longitude and latitude coordinates and may also include altitude coordinates. It should be noted that in this embodiment the RFID reader on the electronic device is continuously in a working state and scans for electronic tags in the environment in which the electronic device is located, so as to obtain scanning data that may include an electronic tag. In addition, the RFID reader can also record time information, such as the scan time, so that the scanning data carry time information that can be used in the subsequent position acquisition scheme.
The motion data are collected by the IMU: as the electronic device moves, the IMU on it collects motion data of the electronic device, such as acceleration and angular velocity.
The image data are collected by an image acquisition device such as a camera. The image acquisition device on the electronic device captures image data of the scene directly in front of it in the environment at regular intervals, so that multiple frames of image data are collected continuously over time. Each frame corresponds to the time information recorded by the RFID reader; that is, whenever the image acquisition device captures a frame, an image acquisition time is stamped on that frame.
It should be noted that the RFID reader, the IMU and the image acquisition device, which collect the scanning data, the motion data and the image data respectively, are components arranged on the same electronic device, for example on a head-mounted device for mine workers. The RFID reader and the image acquisition device may be arranged on the housing of the head-mounted device, where they are not shielded, and oriented in the same direction, so that the RFID reader can obtain scanning data that may include an electronic tag arranged in the environment while the image acquisition device captures image data of the same environment. The IMU may be arranged inside the head-mounted device and fixed to the head-mounted device body, so as to collect motion data of the head-mounted device, such as acceleration and angular velocity.
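For illustration only, the three input streams described above can be sketched as the following data structures; the field names (timestamp, tag_id, position and so on) are assumptions introduced here for clarity, not names taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

import numpy as np

@dataclass
class ScanData:
    """One RFID scan; tag_id and position are None when no tag is in range."""
    timestamp: float
    tag_id: Optional[str]
    position: Optional[Tuple[float, float, float]]  # (x, y, z) stored with the tag

@dataclass
class MotionData:
    """One IMU sample."""
    timestamp: float
    acceleration: Tuple[float, float, float]      # m/s^2
    angular_velocity: Tuple[float, float, float]  # rad/s

@dataclass
class ImageFrame:
    """One camera frame; the capture time is stamped against the RFID clock."""
    timestamp: float
    pixels: np.ndarray  # H x W x 3 array
```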
Step 102: performing image recognition on the image data to obtain a recognition result.
The recognition result at least includes the light parameter of the environment in which the electronic device is located.
Specifically, in this embodiment, the pixel value of each pixel in the image data may be converted to gray to obtain a gray value for each pixel, and the gray values then represent the light parameter of the environment in which the electronic device is located. When the gray value of a pixel is greater than or equal to a gray threshold, the light parameter represented by that pixel is greater than or equal to the parameter threshold, e.g. the ambient light brightness is greater than a brightness threshold; when the gray value of a pixel is smaller than the gray threshold, the light parameter represented by that pixel is smaller than the parameter threshold, e.g. the ambient light brightness is smaller than the brightness threshold.
Alternatively, in this embodiment, feature recognition may be performed on the image data, and the light parameter of the environment in which the electronic device is located is obtained from the definition of the recognized image features. If the image features recognized in the image data meet the definition condition, i.e. their definition is greater than a definition threshold (high-definition image features), the recognition result indicates that the light parameter is greater than or equal to the parameter threshold; if the recognized image features do not meet the definition condition, i.e. their definition is smaller than the definition threshold (blurred image features), the recognition result indicates that the light parameter of the environment in which the electronic device is located is smaller than the parameter threshold.
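As an illustration of the gray-value approach, the following sketch uses the mean gray value of a frame as the light parameter; the luma weights and the threshold value are assumptions of this example, since the patent only requires a comparison against some parameter threshold.

```python
import numpy as np

def light_parameter(frame: np.ndarray) -> float:
    """Mean gray value of an RGB frame, used as a simple ambient-light proxy."""
    # Standard luma weights for converting RGB to gray.
    gray = 0.299 * frame[..., 0] + 0.587 * frame[..., 1] + 0.114 * frame[..., 2]
    return float(gray.mean())

def lighting_is_good(frame: np.ndarray, threshold: float = 80.0) -> bool:
    # 80.0 is an illustrative threshold, not a value from the patent.
    return light_parameter(frame) >= threshold
```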
Step 103: judging whether the light parameter of the environment in which the electronic device is located in the recognition result is greater than or equal to the parameter threshold; executing step 104 when the light parameter is greater than or equal to the parameter threshold, and executing step 105 when it is smaller than the parameter threshold.
Step 104: processing the motion data in combination with the scanning data and the image data to obtain the state data of the electronic device.
The state data at least include position data of the electronic device, such as coordinate data at the acquisition time corresponding to the image data. Of course, the state data may also include speed data and attitude data of the electronic device, where the speed data may include movement speed data in the translation directions as well as rotation speed data in the 6 degrees of freedom, and the attitude data are the angle data of the electronic device in the 6 degrees of freedom.
That is to say, in this embodiment, when recognition of the image data determines that the light parameter of the environment in which the electronic device is located is greater than or equal to the threshold, i.e. the ambient lighting is good, the state data of the electronic device are not obtained solely from the acceleration and angular velocity in the motion data; instead, the motion data are processed in combination with the scanning data that may include an electronic tag and the image data from which image features can be recognized, so that the obtained state data of the electronic device are more accurate.
Step 105: processing the motion data in combination with the scanning data to obtain the state data of the electronic device.
That is to say, in this embodiment, when the light parameter of the environment in which the electronic device is located is determined to be smaller than the threshold, i.e. the ambient lighting is poor, the image data, from which no clear image features can be recognized, are discarded, and the motion data are processed in combination with the scanning data that may include an electronic tag. Thus, even under poor lighting, the accuracy of the state data of the electronic device can still be improved.
According to the above scheme, the position acquisition method provided in the embodiment of the present application obtains scanning data, motion data and image data through the RFID reader, the IMU and the image acquisition device arranged on the electronic device. The image data are then recognized to judge whether the light parameter of the environment in which the electronic device is located is greater than or equal to the parameter threshold, thereby determining the illumination condition of that environment, and the motion data of the IMU are processed in combination with different data under different illumination conditions: when illumination is good, the motion data are processed in combination with the scanning data and the image data to position the electronic device; when illumination is poor, the motion data are processed in combination with the scanning data alone. Therefore, in this embodiment, the image acquisition device and the RFID reader configured on the electronic device assist the IMU in positioning the electronic device without relying on complex positioning equipment such as a global positioning instrument, and the positioning accuracy is improved.
In one implementation, processing the motion data in combination with the scanning data and the image data in step 104 to obtain the state data of the electronic device may be implemented as follows, as shown in fig. 3:
Step 141: identifying whether the image data contain an image area of the electronic tag; if so, executing steps 142 to 145, and if not, executing steps 146 to 148.
In this embodiment, the image features in the image data may be recognized to determine whether any image area contains features matching the electronic tag, thereby determining whether the image data include an image area of the electronic tag. On this basis, when the image data include an image area of the electronic tag, step 142 and the subsequent steps are executed; otherwise, step 146 and the subsequent steps are executed.
Step 142: first position information in the scan data is obtained.
Here, since the image data include an image area of the electronic tag, the scanning data also include the electronic tag. At this time, the first position information corresponding to the electronic tag in the scanning data, such as a coordinate A, can be obtained, which may include coordinate values on the X, Y and Z axes respectively.
Step 143: extracting image feature points from the multiple frames of the image data that contain the same object and processing the image feature points of adjacent frames to obtain second position information.
Specifically, in this embodiment, after image feature recognition is performed on each of the multiple frames in the image data, the image feature points belonging to the same object are extracted, and an image tracking model is established from the image feature points of adjacent frames, so that probable second position information of the electronic device, such as a coordinate B, can be predicted based on the image tracking model; the coordinate B may include coordinate values on the X, Y and Z axes respectively and may be the same as or close to the coordinate A.
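As a concrete illustration of step 143, the following sketch pairs feature points of the same object across two adjacent frames. The patent does not name a feature detector; ORB and brute-force Hamming matching from OpenCV are assumptions chosen for this example.

```python
import cv2

def match_adjacent_frames(img_a, img_b, max_matches=200):
    """Extract feature points in two adjacent frames and pair those that
    belong to the same object, feeding the image tracking model."""
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return []  # not enough texture in one of the frames
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    # The matched point pairs are the input to the tracking model that
    # predicts the device position (coordinate B in the text).
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt)
            for m in matches[:max_matches]]
```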
Step 144: integrating the acceleration in the motion data to obtain third position information.
In this embodiment, the acceleration may be integrated twice to obtain a displacement vector relative to the previous group of motion data; then, on the basis of the position information corresponding to the previous group of motion data, the third position information corresponding to the current motion data, such as a coordinate C, is obtained, which may include coordinate values on the X, Y and Z axes respectively. The coordinate C may be the same as or close to the coordinates A and B.
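A minimal sketch of the double integration in step 144, under the assumption that the acceleration is constant over one sampling interval dt:

```python
def integrate_position(p_prev, v_prev, accel, dt):
    """Integrate acceleration twice over one sample interval dt: the first
    integration updates velocity, the second yields the displacement that
    is added to the previous position (giving coordinate C in the text)."""
    v = tuple(vi + ai * dt for vi, ai in zip(v_prev, accel))
    p = tuple(pi + vi * dt + 0.5 * ai * dt * dt
              for pi, vi, ai in zip(p_prev, v_prev, accel))
    return p, v
```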
It should be noted that the execution order of steps 142, 143 and 144 is not limited: they may be executed in the order shown in the figure, in any other order, or simultaneously. The different technical solutions formed by the different execution orders of steps 142, 143 and 144 all belong to the same inventive concept and fall within the scope of the present application.
Step 145: obtaining the position data of the electronic device according to the first position information, the second position information and the third position information.
Since each kind of position information may contain an error, confidence parameters are preset in this embodiment for the position information obtained in the three different ways, and the three pieces of position information are processed according to the confidence parameters to obtain more accurate position data of the electronic device.
Specifically, in this embodiment, a corresponding weight may be generated from each confidence parameter, for example 0.5 for the first position information (i.e., the RFID reader), 0.2 for the second position information (the image acquisition device) and 0.3 for the third position information (the IMU). On this basis, the position data of the electronic device are obtained as follows, with a sketch of the computation given after this list:
performing weighted averaging of the X-axis coordinate values of the first, second and third position information according to their respective weights to obtain the first coordinate value of the electronic device on the X axis, e.g. the weighted average of the X-axis values of the coordinates A, B and C;
performing weighted averaging of the Y-axis coordinate values of the first, second and third position information according to their respective weights to obtain the second coordinate value of the electronic device on the Y axis, e.g. the weighted average of the Y-axis values of the coordinates A, B and C;
and performing weighted averaging of the Z-axis coordinate values of the first, second and third position information according to their respective weights to obtain the third coordinate value of the electronic device on the Z axis, e.g. the weighted average of the Z-axis values of the coordinates A, B and C. The first, second and third coordinate values constitute the position data of the electronic device.
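The axis-by-axis weighted averaging described above can be sketched as follows; the coordinates A, B, C and the weights 0.5, 0.2 and 0.3 are the examples from the text:

```python
def fuse_positions(coords, weights):
    """Axis-by-axis weighted average of several (x, y, z) position estimates."""
    total = sum(weights)  # normalise so any positive weights are accepted
    return tuple(sum(w * c[axis] for c, w in zip(coords, weights)) / total
                 for axis in range(3))

# With the example weights from the text:
# device_position = fuse_positions([A, B, C], [0.5, 0.2, 0.3])
```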
Of course, the weights in the above scheme may be set to other values as required. For example, when the image data include an image area of the electronic tag and the scanning data include the electronic tag, the weight of the RFID reader may be 0.9 (or even 1, in which case the first position information is used directly as the position of the electronic device), with 0.05 for the image acquisition device and 0.05 for the IMU. For another example, when the image data include an image area of the electronic tag but the scanning data do not include the electronic tag, the weight of the RFID reader may be 0.1 (or even 0, in which case the position of the electronic device is obtained from the second and third position information), with 0.4 for the image acquisition device and 0.5 for the IMU.
Step 146: extracting image feature points from the multiple frames of the image data that contain the same object and processing the image feature points of adjacent frames to obtain fourth position information.
Specifically, in this embodiment, after image feature recognition is performed on each of the multiple frames in the image data, the image feature points belonging to the same object are extracted, and an image tracking model is established from the image feature points of adjacent frames, so that probable fourth position information of the electronic device, such as a coordinate D, can be predicted based on the image tracking model; the coordinate D may include coordinate values on the X, Y and Z axes respectively.
Step 147: integrating the acceleration in the motion data to obtain fifth position information.
In this embodiment, the acceleration may be integrated twice to obtain a displacement vector relative to the previous group of motion data; then, on the basis of the position information corresponding to the previous group of motion data, the fifth position information corresponding to the current motion data, such as a coordinate E, is obtained, which may include coordinate values on the X, Y and Z axes respectively. The coordinate E may be the same as or close to the coordinate D.
It should be noted that the execution order of steps 146 and 147 is likewise not limited: they may be executed in the order shown in the figure, in the reverse order, or simultaneously, and the different technical solutions formed by the different execution orders all belong to the same inventive concept and fall within the scope of the present application.
Step 148: obtaining the position data of the electronic device according to the fourth position information and the fifth position information.
When the image data do not include an image area of the electronic tag, the scanning data do not include the electronic tag either, so no position information of a corresponding tag can be obtained; the position information of the electronic device is therefore obtained in the two ways described above, i.e. the fourth and fifth position information. Since each kind of position information may contain an error, confidence parameters are preset in this embodiment for the position information obtained in the two ways, and the two pieces of position information are processed according to the confidence parameters to obtain more accurate position data of the electronic device.
Specifically, in this embodiment, a corresponding weight may be generated from each confidence parameter, for example 0.5 for the fourth position information (the image acquisition device) and 0.5 for the fifth position information (the IMU). On this basis, the position data of the electronic device are obtained as follows:
performing weighted averaging of the X-axis coordinate values of the fourth and fifth position information according to their respective weights to obtain the fourth coordinate value of the electronic device on the X axis, e.g. the weighted average of the X-axis values of the coordinates D and E;
performing weighted averaging of the Y-axis coordinate values of the fourth and fifth position information according to their respective weights to obtain the fifth coordinate value of the electronic device on the Y axis, e.g. the weighted average of the Y-axis values of the coordinates D and E;
and performing weighted averaging of the Z-axis coordinate values of the fourth and fifth position information according to their respective weights to obtain the sixth coordinate value of the electronic device on the Z axis, e.g. the weighted average of the Z-axis values of the coordinates D and E. The fourth, fifth and sixth coordinate values constitute the position data of the electronic device.
Of course, the weights in the above scheme may be set to other values as required; for example, when the IMU in the electronic device is highly accurate, the weight of the image acquisition device may be 0.3 and that of the IMU 0.7, so as to improve the accuracy of the finally obtained position data of the electronic device.
In one implementation, when the image data include the image area of the electronic tag in step 141, the method in this embodiment may further include the following step, as shown in fig. 4:
Step 149: calibrating the component parameters of the IMU at least according to the position data of the electronic device and the third position information.
On this basis, in this embodiment, the component parameters of the IMU, such as the scale factors, non-orthogonality angles and zero-bias parameters of the accelerometer and of the gyroscope in the IMU, may be recalibrated through error analysis between the two position estimates, so that the IMU subsequently collects motion data with the recalibrated component parameters, further improving the accuracy of the collected motion data.
It should be noted that step 149 may be executed before or after any step following step 141, and the different technical solutions so formed all fall within the scope of the present application.
Specifically, in step 149, when calibrating the component parameters of the IMU, the following steps may be implemented, as shown in fig. 5:
step 501: and obtaining error data at least according to the third position information and the position data of the corresponding electronic equipment.
It should be noted that the error data in this embodiment may only include the measurement residual data of the IMU, or the error data may include the reprojection residual data of the image acquisition apparatus in addition to the measurement residual data of the IMU. The measurement residual data of the IMU refers to error data caused by component parameters of the IMU, and the re-projection residual data of the image acquisition apparatus refers to error data caused by image processing deviation of the image acquisition apparatus.
The measurement residual data of the IMU may include, among other things, a residual data item regarding position, a residual data item regarding velocity, and a residual data item regarding attitude.
Specifically, in this embodiment, the third position information and the position data of the electronic device may be processed with a preset data-distance calculation method, such as the Mahalanobis distance algorithm, to obtain the residual data item regarding position, i.e. the error data.
Alternatively, before the error data are obtained, measured velocity data and measured attitude data may be derived from the motion data of the IMU, while the velocity data and attitude data of the electronic device are obtained by combining the scanning data, the image data and the motion data (in the same way as the position data); the residual data items regarding position, velocity and attitude then together constitute the error data.
For example, the acceleration in the motion data of the IMU is integrated once and the current measured velocity data are obtained on the basis of the initial velocity data; the angular velocity in the motion data of the IMU is integrated and the current measured attitude data are obtained on the basis of the initial attitude data; and the motion data are processed in combination with the scanning data and the image data to obtain the velocity data and attitude data of the electronic device. The third position information and the position data of the electronic device are then processed with a preset data-distance calculation method, such as the Mahalanobis distance algorithm, to obtain the residual data item regarding position; the measured velocity data and the velocity data of the electronic device are processed in the same way to obtain the residual data item regarding velocity; and the measured attitude data and the attitude data of the electronic device are processed in the same way to obtain the residual data item regarding attitude.
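A minimal sketch of the residual data item regarding position, computed as the Mahalanobis distance mentioned above; the covariance matrix cov is an assumption supplied by the caller:

```python
import numpy as np

def position_residual(p_third, p_device, cov):
    """Mahalanobis distance between the IMU-derived third position
    information and the fused device position, used here as the residual
    data item regarding position."""
    d = np.asarray(p_third, dtype=float) - np.asarray(p_device, dtype=float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```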
Specifically, when obtaining the re-projection residual data of the image acquisition device, in this embodiment the second position information may be obtained from the image data together with the corresponding velocity data and attitude data, and the re-projection residual data of the image acquisition device are then obtained based on the second position information and the corresponding velocity and attitude data.
Step 502: obtaining a parameter matrix according to the error data.
The parameter matrix comprises a plurality of matrix elements, each matrix element corresponds to one component parameter of the IMU, and the element value of each matrix element is the parameter value of the corresponding component parameter in the IMU.
Specifically, in this embodiment, the error data is processed to obtain a parameter matrix that can minimize the error data.
For example, in this embodiment an objective function may be constructed from the error data, as shown in formula (1). The objective function includes the measurement residual data of the IMU and the re-projection residual data of the image acquisition device, and both residual terms contain unknown independent variables, namely the variables corresponding to the component parameters of the IMU.

$\min \left\{ \|R_p\|^2 + \sum \|R_b\|^2 + \sum \|R_c\|^2 \right\}$   (1)

where Rp is the marginalized prior information, a known data value; Rb is the measurement residual data of the IMU; and Rc is the re-projection residual data of the image acquisition device. Rb and Rc contain the unknown variables corresponding to the component parameters of the IMU, and the values of the component parameters obtained by minimizing the function in formula (1) form the parameter matrix.
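For illustration, formula (1) can be minimized numerically as sketched below; the residual functions residual_b and residual_c, the optimizer choice (SciPy's L-BFGS-B) and the flattening of the parameter matrix into a vector theta are all assumptions of this sketch, not details fixed by the patent:

```python
import numpy as np
from scipy.optimize import minimize

def calibrate(theta0, r_p, residual_b, residual_c):
    """Minimise ||Rp||^2 + sum ||Rb(theta)||^2 + sum ||Rc(theta)||^2 over
    theta, the vector of IMU component parameters; the minimiser is then
    reshaped into the parameter matrix of step 502."""
    def cost(theta):
        return (np.sum(np.square(r_p))                    # marginalized prior
                + np.sum(np.square(residual_b(theta)))    # IMU measurement residuals
                + np.sum(np.square(residual_c(theta))))   # re-projection residuals
    return minimize(cost, theta0, method="L-BFGS-B").x
```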
Step 503: the element values of the matrix elements are set to the component parameters in the IMU corresponding to the matrix elements.
In this embodiment, after the parameter matrix is obtained, the component parameters of the IMU are calibrated according to the matrix elements in the parameter matrix, specifically: the element value of each matrix element in the parameter matrix is set as the value of the component parameter in the IMU corresponding to that matrix element.
For example, the element values of the matrix elements of the accelerometer scale factor in the parameter matrix are set to the values of the accelerometer scale factor of the IMU; setting element values of matrix elements of the non-orthogonality angle of the accelerometer in the parameter matrix as values of the non-orthogonality angle of the accelerometer of the IMU; setting element values of matrix elements of a zero offset parameter of an accelerometer in the parameter matrix as values of the zero offset parameter of the accelerometer of the IMU; setting element values of matrix elements of the gyroscope scale factors in the parameter matrix as values of the gyroscope scale factors of the IMU; setting element values of matrix elements of the non-orthogonality angle of the gyroscope in the parameter matrix as values of the non-orthogonality angle of the gyroscope of the IMU; and setting the element values of the matrix elements of the zero offset parameters of the gyroscope in the parameter matrix as the values of the zero offset parameters of the gyroscope of the IMU.
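A minimal sketch of step 503, under the assumption that the parameter matrix has been flattened into a vector theta whose order matches the enumeration above; the attribute names are hypothetical:

```python
# Hypothetical ordering of the calibrated values, mirroring the list above.
IMU_PARAMETER_ORDER = [
    "accel_scale_factor", "accel_nonorthogonality_angle", "accel_zero_bias",
    "gyro_scale_factor", "gyro_nonorthogonality_angle", "gyro_zero_bias",
]

def apply_parameter_matrix(imu, theta):
    """Write each matrix element back into the IMU component parameter it
    corresponds to."""
    for name, value in zip(IMU_PARAMETER_ORDER, theta):
        setattr(imu, name, value)
```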
In one implementation, processing the motion data in combination with the scanning data in step 105 to obtain the state data of the electronic device may be implemented as follows, as shown in fig. 6:
Step 601: integrating the motion data to obtain position data of the electronic device.
In this embodiment, the acceleration in the motion data may be integrated twice to obtain a displacement increment, and the current position data of the electronic device are obtained on the basis of the initial position of the electronic device. Of course, the acceleration and angular velocity in the motion data may also be integrated to obtain the velocity data and attitude data of the electronic device.
Step 602: determining whether the scanning data contain position information; if so, executing step 603, and otherwise executing step 604.
Whether the scanning data contain position information can be determined by judging whether the speed value in the speed data of the electronic device is zero and whether the RFID reader is at a correction point (i.e. an electronic tag has been scanned in the scanning data of the RFID reader). Step 603 is executed when the speed of the electronic device is 0 and an electronic tag has been scanned; otherwise step 604 is executed.
Step 603: taking the position information in the scanning data as the position data of the electronic device.
When the speed of the electronic device is 0 and the scanning data of the RFID reader include an electronic tag, the position information corresponding to that tag is accurate position information; the position data of the electronic device are therefore updated to the position information in the scanning data, which improves the positioning accuracy.
Step 604: adjusting the position data of the electronic device at least according to the position error amount of the electronic device.
The position error amount of the electronic device can be obtained either from the speed data of the electronic device or from the historical error amount of the electronic device, the speed data being obtained by integrating the motion data.
Specifically, the position error amount in this embodiment may be estimated by a Kalman filter from the current speed data of the electronic device, or estimated by the Kalman filter from the position error amount estimated at the previous time.
On this basis, the adjustment of the position data of the electronic device in this embodiment is divided into the following cases:
in this embodiment, when it is determined that the speed of the electronic device is 0 (zero speed state) and the RFID is at the correction point, in addition to setting the position information and the attitude information stored at the correction point of the RFID as the current position data and attitude data (speed data is 0) of the electronic device, the position information and the attitude information stored at the correction point of the RFID and the calculated position data and attitude data of the electronic device may be subtracted, and then the obtained difference is input to the kalman filter to estimate the current state quantity (position error quantity), so as to perform feedback correction on the acceleration and angular velocity acquired by the IMU at the next time (next time) by using the current state quantity;
in the embodiment, when the speed of the electronic equipment is judged to be 0 but the RFID is not at the correction point, the current speed data of the electronic equipment is input into a Kalman filter to estimate the current state quantity, the current position data, the speed data and the attitude data of the electronic equipment are compensated by using the state quantity, and meanwhile, the acceleration and the angular speed acquired by the IMU next time are subjected to feedback correction;
in this embodiment, when it is determined that the speed of the electronic device is not 0, the current estimated state quantity of the electronic device is predicted in the kalman filter according to the state quantity estimated at the previous time (previous time), and then the current position data, speed data, and attitude data of the electronic device are compensated by using the estimated state quantity, and meanwhile, the acceleration and angular velocity acquired by the next IMU can be feedback-corrected.
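The three correction cases can be summarized in the following sketch. The zero-speed judgement, the gain matrix kf_gain standing in for the Kalman-filter update, and the sign conventions are all assumptions of this sketch; the patent leaves the exact filter equations open:

```python
import numpy as np

def correct_position(pos_imu, vel_integrated, is_stationary, tag_pos,
                     kf_gain, last_error):
    """Return the corrected position and the error amount kept for the next
    step. is_stationary is the zero-speed judgement; vel_integrated is the
    (drifting) velocity obtained by integrating the IMU."""
    pos_imu = np.asarray(pos_imu, dtype=float)
    if is_stationary and tag_pos is not None:
        # Case 1: zero velocity at an RFID correction point. Adopt the stored
        # tag position; the difference is fed back as the new error estimate.
        error = np.asarray(tag_pos, dtype=float) - pos_imu
        return np.asarray(tag_pos, dtype=float), error
    if is_stationary:
        # Case 2: zero velocity away from a correction point. The true
        # velocity is zero, so the integrated velocity is the innovation.
        error = kf_gain @ np.asarray(vel_integrated, dtype=float)
        return pos_imu - error, error
    # Case 3: moving. Propagate the error amount estimated previously.
    error = kf_gain @ np.asarray(last_error, dtype=float)
    return pos_imu - error, error
```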
Referring to fig. 7, a schematic structural diagram of a position acquisition apparatus provided in a second embodiment of the present application is shown. The apparatus may be configured in an electronic device that has an RFID reader, an image acquisition device and an IMU and is capable of data processing. The technical solution in this embodiment is mainly used to position the electronic device on the basis of the RFID reader and the IMU with visual assistance, so as to improve the positioning accuracy.
Specifically, the apparatus in this embodiment may include the following structure:
the data acquisition unit 701 is used for acquiring scanning data, motion data and image data, wherein the scanning data is obtained by scanning an electronic tag arranged in the environment by a radio frequency identification reader RFID, the electronic tag corresponds to position information, the motion data is data acquired by an inertial measurement unit IMU, the image data is data acquired by an image acquisition device, and the RFID, the IMU and the image acquisition device are components arranged on the same electronic equipment;
an image recognition unit 702, configured to perform image recognition on the image data to obtain a recognition result, where the recognition result at least includes a light parameter of an environment where the electronic device is located;
the first processing unit 703 is configured to, when the light parameter of the environment where the electronic device is located is greater than or equal to the parameter threshold, process the motion data by combining the scan data and the image data to obtain state data of the electronic device;
a second processing unit 704, configured to, when the light parameter of the environment where the electronic device is located is smaller than the parameter threshold, process the motion data in combination with the scan data to obtain state data of the electronic device, where the state data at least includes position data of the electronic device; the top-level branching between the two processing units is sketched below.
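In this sketch, the threshold value, the definition of the light parameter as the mean pixel intensity, and the function names are all illustrative assumptions:

```python
import numpy as np

LIGHT_THRESHOLD = 60.0   # assumed parameter threshold (mean gray level, 0-255)

def light_parameter(image: np.ndarray) -> float:
    # one plausible light parameter: the mean pixel intensity of the frame
    return float(image.mean())

def select_estimates(image, pos_rfid, pos_imu, pos_vision=None):
    """Mirror the two processing units' branching: keep the vision-based
    estimate only when the environment is bright enough to image."""
    if light_parameter(image) >= LIGHT_THRESHOLD and pos_vision is not None:
        return [pos_rfid, pos_imu, pos_vision]   # first processing unit path
    return [pos_rfid, pos_imu]                   # second processing unit path
```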
According to the above scheme, the position acquisition apparatus provided in the second embodiment of the present application uses the RFID, the IMU and the image acquisition device disposed on the electronic device to obtain the scan data, the motion data and the image data. The image data is then recognized to determine whether the light parameter of the environment where the electronic device is located is greater than or equal to the parameter threshold, i.e. to determine the illumination condition of that environment, and the motion data of the IMU is processed in combination with different data under different illumination conditions: when illumination is good, the motion data is processed in combination with the scan data and the image data to position the electronic device; when illumination is poor, the motion data is processed in combination with the scan data alone. Therefore, in this embodiment, the image acquisition device and the RFID configured on the electronic device assist the IMU in positioning the electronic device without depending on complex positioning equipment such as a global positioning instrument, thereby improving positioning accuracy.
In an implementation manner, the first processing unit 703 is specifically configured to:
under the condition that the image data includes an image area of the electronic tag, obtaining first position information from the scanning data; extracting image feature points from multiple frames of images containing the same object in the image data and processing the feature points of adjacent images to obtain second position information; integrating the acceleration in the motion data to obtain third position information; and obtaining the position data of the electronic equipment from the first position information, the second position information and the third position information;
under the condition that the image data does not include an image area of the electronic tag, extracting image feature points from multiple frames of images containing the same object in the image data and processing the feature points of adjacent images to obtain fourth position information; integrating the acceleration in the motion data to obtain fifth position information; and obtaining the position data of the electronic equipment from the fourth position information and the fifth position information. The acceleration integration common to both branches is sketched below.
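A minimal sketch of that integration, assuming the acceleration samples are already rotated into the navigation frame and gravity-compensated:

```python
import numpy as np

def integrate_acceleration(accel, dt, v0=np.zeros(3), p0=np.zeros(3)):
    """Double-integrate (n, 3) acceleration samples with the rectangular
    rule, returning per-sample positions and velocities."""
    v = v0 + np.cumsum(accel * dt, axis=0)   # speed: first integral
    p = p0 + np.cumsum(v * dt, axis=0)       # position: second integral
    return p, v
```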
Specifically, when obtaining the position data of the electronic equipment from the first, second and third position information, the first processing unit 703 is configured to: perform weighted average processing on the coordinate values of the three pieces of position information on the X axis to obtain a first coordinate value of the electronic equipment on the X axis; perform weighted average processing on their coordinate values on the Y axis to obtain a second coordinate value on the Y axis; and perform weighted average processing on their coordinate values on the Z axis to obtain a third coordinate value on the Z axis, the first, second and third coordinate values forming the position data of the electronic equipment.
Similarly, when obtaining the position data of the electronic equipment from the fourth and fifth position information, the first processing unit 703 is configured to perform the same per-axis weighted average processing on the two pieces of position information to obtain fourth, fifth and sixth coordinate values on the X, Y and Z axes respectively, the fourth, fifth and sixth coordinate values forming the position data of the electronic equipment. A sketch of this per-axis fusion is given below.
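The patent does not state how the weights are chosen, so they are left as an input; the helper name is assumed:

```python
import numpy as np

def fuse_positions(positions, weights):
    """Weighted average of several (X, Y, Z) position estimates, one
    weighted mean per axis."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                  # normalize the weights
    return w @ np.asarray(positions, dtype=float)    # (3,) fused position

# e.g. fusing first/second/third position information:
# fuse_positions([p_rfid, p_vision, p_imu], [0.4, 0.4, 0.2])
```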
Optionally, in a case that the image data includes an image area of the electronic tag, the first processing unit 703 is further configured to:
calibrating the component parameters of the IMU at least according to the position data of the electronic equipment and the third position information. For example: obtaining error data at least according to the third position information and the position data of the electronic equipment corresponding to the third position information; obtaining a parameter matrix according to the error data, where the parameter matrix includes a plurality of matrix elements; and setting the element values of the matrix elements as the component parameters in the IMU corresponding to those matrix elements. One concrete reading of this calibration is sketched below.
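The sketch treats the error data as the drift produced by a constant accelerometer bias; this bias model and the function name are assumptions, since the patent leaves the exact form of the parameter matrix open:

```python
import numpy as np

def estimate_accel_bias(p_third, p_device, t_elapsed):
    """A constant accelerometer bias b accumulates a position error of
    0.5 * b * t^2 under double integration, so b = 2 * error / t^2."""
    error = np.asarray(p_third, dtype=float) - np.asarray(p_device, dtype=float)
    return 2.0 * error / (t_elapsed ** 2)
```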
Specifically, the second processing unit 704 is configured to:
integrate the motion data to obtain position data of the electronic equipment; when the scanning data contains position information, update the position data of the electronic equipment to that position information; and when the scanning data does not contain position information, adjust the position data of the electronic equipment at least according to the position error amount of the electronic equipment, where the position error amount is obtained either from the speed data of the electronic equipment or from the historical error amount of the electronic equipment, the speed data being obtained by integrating the motion data. A sketch of this degraded-mode update follows.
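Here the stationarity test and the error bookkeeping are illustrative assumptions:

```python
import numpy as np

def degraded_update(p_integrated, v_integrated, dt, scan_position,
                    stationary, last_error):
    """Return the adjusted position and the new position error amount."""
    if scan_position is not None:
        # the tag's stored position overrides the integrated position
        return np.asarray(scan_position, dtype=float), np.zeros(3)
    if stationary:
        # device judged stationary: the residual integrated speed is pure
        # error, so fold it into the position error amount (speed data case)
        error = last_error + v_integrated * dt
    else:
        # otherwise reuse the historical error amount
        error = last_error
    return p_integrated - error, error
```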
It should be noted that, for the specific implementation of each unit in the present embodiment, reference may be made to the corresponding description in the foregoing, and details are not described here.
Referring to fig. 8, a schematic structural diagram of a position acquisition apparatus provided in a third embodiment of the present application is shown. The apparatus can be configured on a portable electronic device, for example mounted on a head-mounted electronic device worn by a miner, such as a helmet or glasses. Through the configured RFID, IMU and image acquisition device, the apparatus in this embodiment can assist, on the basis of the RFID and the IMU, in positioning the electronic device on which it is arranged, thereby improving positioning accuracy.
Specifically, the position acquiring device in this embodiment may include the following structure:
the RFID 801, for scanning electronic tags arranged in the environment to obtain scanning data, the electronic tags corresponding to position information;
the IMU 802, for collecting motion data;
the image acquisition device 803, for acquiring image data, wherein the RFID 801, the IMU 802 and the image acquisition device 803 are components disposed on the same electronic device;
a positioning processing device 804 for obtaining the scan data, the motion data, and the image data; carrying out image recognition on the image data to obtain a recognition result, wherein the recognition result at least comprises the light parameters of the environment where the electronic equipment is located; under the condition that the light parameter of the environment where the electronic equipment is located is larger than or equal to a parameter threshold value, combining the scanning data and the image data to process the motion data to obtain state data of the electronic equipment; and under the condition that the light parameter of the environment where the electronic equipment is located is smaller than the parameter threshold, processing the motion data by combining the scanning data to obtain state data of the electronic equipment, wherein the state data at least comprises position data of the electronic equipment.
According to the above scheme, the position acquisition apparatus provided in the third embodiment of the present application uses the RFID, the IMU and the image acquisition device disposed on the apparatus to obtain the scan data, the motion data and the image data. The image data is then recognized to determine whether the light parameter of the environment where the apparatus is located is greater than or equal to the parameter threshold, i.e. to determine the illumination condition of that environment, and the motion data of the IMU is processed in combination with different data under different illumination conditions: when illumination is good, the motion data is processed in combination with the scan data and the image data to position the apparatus; when illumination is poor, the motion data is processed in combination with the scan data alone. Therefore, in this embodiment, positioning can be realized by configuring the image acquisition device and the RFID on the apparatus to assist the IMU, without depending on complex positioning equipment such as a global positioning instrument, thereby improving positioning accuracy.
In a specific implementation, the positioning processing device 804 may be a central processing unit (CPU) or another component capable of performing data processing to achieve the above functions. It should be noted that the specific implementation of the positioning processing device 804 in this embodiment may refer to the corresponding content above and will not be described in detail here.
Referring to fig. 9, a schematic structural diagram of an electronic device according to a fourth embodiment of the present application is shown, where the electronic device may be one that has an RFID, an image acquisition device and an IMU and is capable of performing data processing. The technical scheme in this embodiment is mainly used for positioning the electronic device with visual assistance on the basis of the RFID and the IMU, so as to improve positioning accuracy. The electronic device includes the following structure:
a memory 901 for storing applications and data generated by the applications;
a processor 902 for executing an application to implement: the method comprises the steps of obtaining scanning data, motion data and image data, wherein the scanning data are data obtained by scanning an electronic tag arranged in the environment by a radio frequency identification reader RFID, the electronic tag corresponds to position information, the motion data are data collected by an inertial measurement unit IMU, the image data are data collected by an image collection device, and the RFID, the IMU and the image collection device are components arranged on the same electronic equipment; carrying out image recognition on the image data to obtain a recognition result, wherein the recognition result at least comprises light parameters of the environment where the electronic equipment is located; under the condition that the light parameter of the environment where the electronic equipment is located is greater than or equal to the parameter threshold, processing the motion data by combining the scanning data and the image data to obtain state data of the electronic equipment; and under the condition that the light parameter of the environment where the electronic equipment is located is smaller than the parameter threshold, processing the motion data by combining the scanning data to obtain state data of the electronic equipment, wherein the state data at least comprises position data of the electronic equipment.
According to the above technical scheme, the electronic device provided in the fourth embodiment of the present application uses the RFID, the IMU and the image acquisition device disposed on it to obtain the scan data, the motion data and the image data. The image data is then recognized to determine whether the light parameter of the environment where the electronic device is located is greater than or equal to the parameter threshold, i.e. to determine the illumination condition of that environment, and the motion data of the IMU is processed in combination with different data under different illumination conditions: when illumination is good, the motion data is processed in combination with the scan data and the image data to position the electronic device; when illumination is poor, the motion data is processed in combination with the scan data alone. Therefore, in this embodiment, the image acquisition device and the RFID configured on the electronic device assist the IMU in positioning the electronic device without depending on complex positioning equipment such as a global positioning instrument, thereby improving positioning accuracy.
Based on the above technical solutions, the following illustrates the technical scheme by taking as an example the positioning of a miner who wears the electronic device and goes down a mine shaft:
First, an IMU, a camera (i.e., the image acquisition device) and an RFID reading module are integrated in the electronic device worn by the miner, so that when the electronic device faces and coincides with an electronic tag, the position information of the miner can be determined through the RFID reading module; the RFID reading module also measures time, which is used to comprehensively determine the position information of the miner. This is specifically realized as follows:
When illumination conditions are good, the miner wears the electronic device, visual feature tracking is calculated and superimposed on the IMU integration, and the RFID reading module measures time, from which the distance between the worn electronic device and the electronic tag is calculated; the positioning information obtained by the IMU, the camera and the RFID is thus fused, and the miner's position is determined in real time by a graph optimization method. In addition, electronic tags with special marks are captured in the visual images; such a tag becomes an image feature point with known coordinates in the camera image, and parameters such as the zero offset of the IMU are estimated in real time to compensate calculation errors, thereby suppressing the long-term position drift of the IMU and camera tracking and maintaining long-term positioning accuracy. A sketch of the time-based ranging step is given below.
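The patent does not spell out the ranging model behind the measured time; assuming a round-trip time-of-flight model:

```python
C = 299_792_458.0   # radio propagation speed, m/s

def rfid_distance(round_trip_time_s: float) -> float:
    """Reader-to-tag distance from a measured round-trip time."""
    return C * round_trip_time_s / 2.0
```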
When visual conditions are poor, the electronic device can degrade into a device that realizes positioning by combining the RFID and the IMU: for example, a Kalman filtering mathematical model is established by combining the position information built into the electronic tag, the time information measured by the RFID reading module, and the speed, position and attitude calculated by the IMU, and the real-time position of the miner is then calculated.
Based on this technical scheme, the method relies on simple components such as a camera, an IMU and RFID and is easy to implement; the corresponding electronic tags only need to be placed according to position information when the miners survey and set out, so neither equipment cost nor extra labor cost is increased. In addition, the technical scheme fuses the IMU, the RFID and the image-related information on the electronic device, avoiding long-term accumulated errors and suppressing position drift, so the positioning information is robust and the positioning accuracy is high.
Referring to fig. 10, a specific implementation flow is as follows:
First, the electronic tag information at a fixed position is read in advance, and the positioning system realized by the technical scheme of the present application is initialized.
Then, as the worker (or miner) starts to move irregularly, if the image data can be imaged, i.e. the illumination condition is good, the RFID reader collects time data and electronic tag data, the IMU collects acceleration and gyro data such as angular velocity, and the camera collects image data. The RFID data is used to build a spatial resection model, the IMU data to build a motion-integration system model, and the camera image data to build a feature tracking model; the position information obtained by the RFID, the IMU and the camera is fused, and the actual position of the worker is solved through graph optimization;
in this process, if an electronic tag with a mark is captured by the camera, the position information in the electronic tag is used as known prior information to obtain accurate positioning data, and the IMU error parameters are then corrected and calibrated in combination with the motion data acquired by the IMU;
if the image data cannot be imaged, the image data is abandoned, so that the positioning system degrades into a fusion system combining the RFID and the IMU, and positioning can be calculated by establishing a Kalman filtering model.
The specific calculation process is shown in fig. 11:
First, for time T+1, where T is the previous time and T+1 is the current time, the initial position, speed and attitude at time T are combined with the acceleration and angular velocity measured by the IMU at time T+1, and the position, speed and attitude at time T+1 are obtained by the strapdown inertial navigation algorithm;
then, when the speed is judged to be 0 and the device is at an RFID correction point, the position information and attitude information stored at the RFID correction point are subtracted from the calculated position data and attitude data at time T+1, and the resulting difference is input into a Kalman filter to estimate the state quantity at time T+1; this state quantity is used to feedback-correct the acceleration and angular velocity acquired by the IMU at time T+2, and the position information and attitude information stored at the RFID correction point are set as the position data and attitude data at time T+1, with the speed data set to 0;
when the speed is judged to be 0 but the device is not at an RFID correction point, the speed data at time T+1 is input into the Kalman filter to estimate the state quantity at time T+1; the estimated state quantity is used to compensate the position data, speed data and attitude data at time T+1, and the acceleration and angular velocity acquired by the IMU at time T+2 are also feedback-corrected;
when the speed is judged to be not 0, the state quantity at time T+1 is predicted in the Kalman filter from the state quantity estimated at time T; the predicted state quantity is then used to compensate the position data, speed data and attitude data at time T+1, and the acceleration and angular velocity acquired by the IMU at time T+2 are also feedback-corrected;
finally, the position, speed and attitude at time T+1 are output; T is then updated to T+1, and position acquisition at the next time begins. The strapdown propagation that opens each iteration is sketched below.
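In this sketch the first-order attitude update and the z-up navigation frame are simplifying assumptions:

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])   # z-up navigation frame assumed

def skew(w):
    """Skew-symmetric (cross-product) matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def strapdown_step(R, v, p, accel_body, gyro_body, dt):
    """One T -> T+1 propagation from the IMU sample at time T+1."""
    R_next = R @ (np.eye(3) + skew(gyro_body) * dt)   # attitude: integrate angular rate
    a_nav = R @ accel_body + GRAVITY                  # rotate specific force, add gravity
    v_next = v + a_nav * dt                           # speed update
    p_next = p + v * dt + 0.5 * a_nav * dt ** 2       # position update
    return R_next, v_next, p_next
```

The corrected outputs of this step are then dispatched through the three Kalman cases described above.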
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A location acquisition method, comprising:
the method comprises the steps of obtaining scanning data, motion data and image data, wherein the scanning data are data obtained by scanning an electronic tag arranged in the environment by a radio frequency identification reader RFID, the electronic tag corresponds to position information, the motion data are data collected by an inertial measurement unit IMU, the image data are data collected by an image collection device, and the RFID, the IMU and the image collection device are components arranged on the same electronic equipment;
carrying out image recognition on the image data to obtain a recognition result, wherein the recognition result at least comprises the light parameters of the environment where the electronic equipment is located;
under the condition that the light parameter of the environment where the electronic equipment is located is larger than or equal to a parameter threshold value, combining the scanning data and the image data to process the motion data to obtain state data of the electronic equipment;
under the condition that the light parameter of the environment where the electronic equipment is located is smaller than the parameter threshold, processing the motion data by combining the scanning data to obtain state data of the electronic equipment, wherein the state data at least comprises position data of the electronic equipment;
wherein, combining the scan data and the image data to process the motion data to obtain the state data of the electronic device, comprises:
under the condition that the image data comprises an image area of the electronic tag, obtaining first position information in the scanning data; respectively extracting image characteristic points of multiple frames of images containing the same object in the image data and processing the image characteristic points of adjacent images to obtain second position information; performing integral processing on the acceleration in the motion data to obtain third position information; obtaining position data of the electronic equipment according to the first position information, the second position information and the third position information;
under the condition that the image data does not contain the image area of the electronic tag, respectively extracting image feature points of multi-frame images containing the same object in the image data and processing the image feature points of adjacent images to obtain fourth position information; performing integral processing on the acceleration in the motion data to obtain fifth position information; and obtaining the position data of the electronic equipment according to the fourth position information and the fifth position information.
2. The method of claim 1, obtaining location data of the electronic device from the first location information, the second location information, and the third location information, comprising:
carrying out weighted average processing on the coordinate values of the first position information, the second position information and the third position information on an X axis respectively to obtain a first coordinate value of the electronic equipment on the X axis;
carrying out weighted average processing on the coordinate values of the first position information, the second position information and the third position information on a Y axis respectively to obtain a second coordinate value of the electronic equipment on the Y axis;
and performing weighted average processing on coordinate values of the first position information, the second position information and the third position information on a Z axis respectively to obtain a third coordinate value of the electronic equipment on the Z axis, wherein the first coordinate value, the second coordinate value and the third coordinate value form position data of the electronic equipment.
3. The method of claim 1, obtaining location data of the electronic device according to the fourth location information and the fifth location information, comprising:
carrying out weighted average processing on coordinate values of the fourth position information and the fifth position information on an X axis respectively to obtain a fourth coordinate value of the electronic equipment on the X axis;
carrying out weighted average processing on coordinate values of the fourth position information and the fifth position information on a Y axis respectively to obtain a fifth coordinate value of the electronic equipment on the Y axis;
and performing weighted average processing on coordinate values of the fourth position information and the fifth position information on a Z axis respectively to obtain a sixth coordinate value of the electronic equipment on the Z axis, wherein the fourth coordinate value, the fifth coordinate value and the sixth coordinate value form position data of the electronic equipment.
4. The method according to claim 1, in a case where an image area of the electronic tag is included in the image data, the method further comprising:
and calibrating the component parameters of the IMU at least according to the position data of the electronic equipment and the third position information.
5. The method of claim 4, said calibrating component parameters of the IMU based at least on the location data of the electronic device and the third location information, comprising:
obtaining error data at least according to the third position information and the position data of the electronic equipment corresponding to the third position information;
obtaining a parameter matrix according to the error data, wherein the parameter matrix comprises a plurality of matrix elements;
setting element values of the matrix elements as component parameters in the IMU corresponding to the matrix elements.
6. The method of claim 1, processing the motion data in conjunction with the scan data to derive status data for the electronic device, comprising:
performing integral processing on the motion data to obtain position data of the electronic equipment;
updating the position information in the scanning data to the position data of the electronic equipment under the condition that the scanning data contains the position information;
when the scanning data does not contain position information, adjusting the position data of the electronic equipment at least according to the position error amount of the electronic equipment;
the position error amount of the electronic equipment is obtained according to speed data of the electronic equipment or the position error amount of the electronic equipment is obtained according to historical error amount of the electronic equipment, and the speed data of the electronic equipment is obtained by integrating the motion data.
7. A position acquisition device comprising:
the RFID reader is used for scanning an electronic tag arranged in the environment to obtain scanning data, and the electronic tag corresponds to position information;
the IMU is used for acquiring motion data;
the image acquisition device is used for acquiring image data, and the RFID, the IMU and the image acquisition device are components arranged on the same electronic equipment;
positioning processing means for obtaining the scan data, the motion data and the image data; carrying out image recognition on the image data to obtain a recognition result, wherein the recognition result at least comprises the light parameters of the environment where the electronic equipment is located; under the condition that the light parameter of the environment where the electronic equipment is located is larger than or equal to a parameter threshold value, combining the scanning data and the image data to process the motion data to obtain state data of the electronic equipment; under the condition that the light parameter of the environment where the electronic equipment is located is smaller than the parameter threshold, processing the motion data by combining the scanning data to obtain state data of the electronic equipment, wherein the state data at least comprises position data of the electronic equipment;
wherein, combining the scan data and the image data to process the motion data to obtain the state data of the electronic device, comprises:
under the condition that the image data comprises an image area of the electronic tag, obtaining first position information in the scanning data; respectively extracting image characteristic points of multiple frames of images containing the same object in the image data and processing the image characteristic points of adjacent images to obtain second position information; performing integral processing on the acceleration in the motion data to obtain third position information; obtaining position data of the electronic equipment according to the first position information, the second position information and the third position information;
under the condition that the image data does not contain the image area of the electronic tag, respectively extracting image feature points of multi-frame images containing the same object in the image data and processing the image feature points of adjacent images to obtain fourth position information; performing integral processing on the acceleration in the motion data to obtain fifth position information; and obtaining the position data of the electronic equipment according to the fourth position information and the fifth position information.
8. An electronic device, comprising:
the memory is used for storing an application program and data generated by the running of the application program;
a processor for executing the application to implement: the method comprises the steps of obtaining scanning data, motion data and image data, wherein the scanning data are data obtained by scanning an electronic tag arranged in the environment by a radio frequency identification reader RFID, the electronic tag corresponds to position information, the motion data are data collected by an inertial measurement unit IMU, the image data are data collected by an image collection device, and the RFID, the IMU and the image collection device are components arranged on the same electronic equipment;
carrying out image recognition on the image data to obtain a recognition result, wherein the recognition result at least comprises the light parameters of the environment where the electronic equipment is located;
under the condition that the light parameter of the environment where the electronic equipment is located is larger than or equal to a parameter threshold value, combining the scanning data and the image data to process the motion data to obtain state data of the electronic equipment;
under the condition that the light parameter of the environment where the electronic equipment is located is smaller than the parameter threshold, processing the motion data by combining the scanning data to obtain state data of the electronic equipment, wherein the state data at least comprises position data of the electronic equipment;
wherein, combining the scan data and the image data to process the motion data to obtain the state data of the electronic device, comprises:
under the condition that the image data comprises an image area of the electronic tag, obtaining first position information in the scanning data; respectively extracting image characteristic points of multiple frames of images containing the same object in the image data and processing the image characteristic points of adjacent images to obtain second position information; performing integral processing on the acceleration in the motion data to obtain third position information; obtaining position data of the electronic equipment according to the first position information, the second position information and the third position information;
under the condition that the image data does not contain the image area of the electronic tag, respectively extracting image feature points of multi-frame images containing the same object in the image data and processing the image feature points of adjacent images to obtain fourth position information; performing integral processing on the acceleration in the motion data to obtain fifth position information; and obtaining the position data of the electronic equipment according to the fourth position information and the fifth position information.