CN114396943A - Fusion positioning method and terminal - Google Patents

Fusion positioning method and terminal

Info

Publication number
CN114396943A
CN114396943A (Application CN202210039694.4A)
Authority
CN
China
Prior art keywords
positioning data
error
equation
information
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210039694.4A
Other languages
Chinese (zh)
Inventor
刘宝升
李强
李洪斌
刘超
赵峰
毛舒乐
赵建伟
王佩光
赵光
周航帆
傅宁
马胜
马名阳
胡鑫
张永欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Siji Shenwang Position Service Beijing Co ltd
State Grid Corp of China SGCC
State Grid Information and Telecommunication Co Ltd
Original Assignee
State Grid Siji Shenwang Position Service Beijing Co ltd
State Grid Corp of China SGCC
State Grid Information and Telecommunication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Siji Shenwang Position Service Beijing Co ltd, State Grid Corp of China SGCC, State Grid Information and Telecommunication Co Ltd filed Critical State Grid Siji Shenwang Position Service Beijing Co ltd
Priority to CN202210039694.4A priority Critical patent/CN114396943A/en
Publication of CN114396943A publication Critical patent/CN114396943A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation by using measurements of speed or acceleration
    • G01C21/12 Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656 Inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/393 Trajectory determination or predictive tracking, e.g. Kalman filtering
    • G01S19/42 Determining position
    • G01S19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement, the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial

Abstract

The invention discloses a fusion positioning method and a terminal: receiving environment information from a monocular camera, and estimating the position and posture of the monocular camera from the environment information; receiving first positioning data from an inertial measurement unit, and optimizing the first positioning data according to the position and posture of the monocular camera; receiving second positioning data from an inertial navigation system, and assisting a global navigation satellite system with the second positioning data to perform multi-frequency multi-mode real-time differential positioning, obtaining third positioning data; establishing a measurement equation and a state equation from the first and third positioning data; estimating the state quantity error with an extended Kalman filtering algorithm from the first positioning data, the measurement equation, and the state equation, and compensating the first positioning data with this error to obtain the final positioning data. The method combines the long-term absolute positioning of the global navigation satellite system with the short-term accurate position calculation of the IMU and monocular vision, improving the positioning accuracy.

Description

Fusion positioning method and terminal
Technical Field
The invention relates to the technical field of positioning, in particular to a fusion positioning method and a terminal.
Background
The continuity and reliability of GNSS positioning face great challenges in the confined, complex environments of operation sites: occlusion and electromagnetic interference frequently interrupt the satellite signals, and the frequent re-initialization this forces limits the precision of the floating ambiguity and produces large positioning deviations. To improve positioning performance in complex environments, integrated satellite navigation and inertial navigation systems are widely used to provide continuous positioning, velocity measurement, and attitude measurement. However, the main disadvantage of a MEMS-IMU is that, in the absence of effective control, its navigation errors diverge quickly over a short time, so the positioning is not sufficiently accurate.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: a fusion positioning method and a terminal are provided to improve positioning accuracy.
In order to solve the technical problems, the invention adopts the technical scheme that:
a fusion localization method, comprising the steps of:
s1, acquiring environment information from the monocular camera, and estimating the position and the posture of the monocular camera according to the environment information;
s2, acquiring first positioning data from an inertia measurement unit, and optimizing the first positioning data according to the position and the posture of the monocular camera;
s3, second positioning data are obtained from an inertial navigation system, multi-frequency multi-mode real-time differential positioning is carried out on the global navigation satellite system assisted by the second positioning data, and third positioning data are obtained;
s4, establishing a measurement equation and a state equation according to the first positioning data and the third positioning data;
s5, estimating a state quantity error by adopting an extended Kalman filtering algorithm according to the first positioning data, the measurement equation and the state equation, and compensating the first positioning data based on the state quantity error to obtain final positioning data.
In order to solve the technical problem, the invention adopts another technical scheme as follows:
a converged positioning terminal comprising a processor, a memory and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above approach when executing the computer program.
The invention has the beneficial effects that: the technical scheme of the invention fuses the data acquired by a multi-frequency multi-mode real-time differential positioning unit, an inertial measurement unit (IMU), and a monocular camera with an extended Kalman filtering algorithm, combining the long-term absolute positioning of the global navigation satellite system with the short-term accurate position estimation of the IMU and monocular vision to obtain an optimal estimate of the position state. The IMU recovers the metric scale of the monocular vision and improves the motion tracking performance, while the visual measurements limit the error drift of the IMU, yielding a position and attitude of higher precision and credibility and thereby improving the positioning accuracy.
Drawings
Fig. 1 is a flowchart of a fusion positioning method according to an embodiment of the present invention;
fig. 2 is a structural diagram of a fusion positioning terminal according to an embodiment of the present invention;
fig. 3 is a schematic data processing diagram of a fusion positioning method according to an embodiment of the present invention;
description of reference numerals:
1. a fusion positioning terminal; 2. a processor; 3. a memory.
Detailed Description
In order to explain technical contents, achieved objects, and effects of the present invention in detail, the following description is made with reference to the accompanying drawings in combination with the embodiments.
Referring to fig. 1 and fig. 2, a fusion positioning method includes the steps of:
s1, acquiring environment information from the monocular camera, and estimating the position and the posture of the monocular camera according to the environment information;
s2, acquiring first positioning data from an inertia measurement unit, and optimizing the first positioning data according to the position and the posture of the monocular camera;
s3, second positioning data are obtained from an inertial navigation system, multi-frequency multi-mode real-time differential positioning is carried out on the global navigation satellite system assisted by the second positioning data, and third positioning data are obtained;
s4, establishing a measurement equation and a state equation according to the first positioning data and the third positioning data;
s5, estimating a state quantity error by adopting an extended Kalman filtering algorithm according to the first positioning data, the measurement equation and the state equation, and compensating the first positioning data based on the state quantity error to obtain final positioning data.
From the above description, the beneficial effects of the present invention are: the technical scheme fuses the data acquired by a multi-frequency multi-mode real-time differential positioning unit, an inertial measurement unit (IMU), and a monocular camera with an extended Kalman filtering algorithm, combining the long-term absolute positioning of the global navigation satellite system with the short-term accurate position estimation of the IMU and monocular vision to obtain an optimal estimate of the position state. The IMU recovers the metric scale of the monocular vision and improves the motion tracking performance, while the visual measurements limit the error drift of the IMU, yielding a position and attitude of higher precision and credibility and thereby improving the positioning accuracy.
Further, the step S1 specifically includes the steps of:
s11, receiving environment information of the monocular camera, and preprocessing the environment information;
s12, extracting characteristic points of Harris angular point characteristics and SIFT scale invariant characteristics from a continuous image of the environmental information, and tracking the characteristic points in adjacent images by adopting an LK sparse optical flow algorithm;
s13, estimating the relative motion of the landmark points and the monocular camera according to the feature points in the adjacent images, and estimating the position and the posture of the monocular camera by combining the relative motion of the landmark points and the monocular camera.
From the above description, after the environment information acquired by the monocular camera is obtained, the position and the posture of the monocular camera can be accurately obtained by tracking the feature points in the adjacent image frames.
Further, the step between the step S12 and the step S13 further includes the steps of:
and S121, checking the correctness of the feature points by an affine checking method, judging whether the number of the feature points is sufficient, removing the feature points with wrong correctness verification, and adding new feature points when the number of the feature points is insufficient.
As can be seen from the above description, after the feature points are extracted, the method of the invention checks the correctness of the feature points by an affine check method, and removes the wrong feature points, so that the position and the posture of the monocular camera determined subsequently are more accurate.
Further, the first positioning data includes first position information, first speed information, and first attitude information;
the step S2 specifically includes the steps of:
s21, receiving the angular velocity, the acceleration and the gravity vector of the three axes of the inertial measurement unit during the static state, integrating the angular velocity, the acceleration and the gravity vector within a preset time to obtain first position information and first velocity information, and obtaining first attitude information according to the update of quaternion;
s22, establishing a position error equation and an attitude error equation according to the first position information, the first speed information, the first attitude information and the position and attitude estimated by the monocular camera within the preset time;
and S23, performing optimization solution on the position error equation and the attitude error equation, and performing optimization compensation on the first position information and the first attitude information according to a calculation result.
From the above description, it can be known that the first position information and the first attitude information calculated by the data collected by the inertial measurement unit are optimally compensated according to the position and attitude information estimated by the monocular camera, that is, the error drift of the inertial measurement unit is limited by using the visual measurement, so as to obtain the position and attitude data with higher precision and credibility.
Further, the calculating of the real-time differential positioning by the global navigation satellite system further comprises the following steps: obtaining a first pseudo-range double-difference model and a first phase double-difference model between a receiver and a satellite according to the collected pseudo-range, carrier and Doppler observed values, and performing whole-cycle ambiguity resolution;
the step S3 specifically includes the steps of:
s31, receiving second positioning data of an inertial navigation system, judging whether the pseudo range, the carrier wave and the Doppler observed value acquired by the global navigation satellite system in the integer ambiguity solution are missing due to signal interruption, if so, carrying out linearization processing on the first pseudo range double difference model and the first phase double difference model at the interruption position according to the second positioning data, estimating an optimal floating ambiguity solution by a least square method in combination with historical ambiguity and covariance, and calculating by an LAMBDA (label-based ambiguity resolution) method to obtain the integer ambiguity solution;
and S32, continuing resolving the multi-frequency multi-mode real-time differential positioning according to the integer ambiguity solution, and finally obtaining the third positioning data.
According to the above description, when the GNSS signal is interrupted and observations are lost, the positioning data of the inertial navigation system are used to linearize the models, and the integer ambiguity solution is calculated by the least square method and the LAMBDA method; this overcomes the tendency of the global navigation satellite system to lose lock when its signal is affected by the environment, and improves the positioning accuracy.
Further, the steps between S31 and S32 further include the steps of:
s311, carrying out correctness verification on the integer ambiguity solution, carrying out statistical analysis on hypothesis test statistics according to historical integer ambiguity solutions stored when global navigation satellite system signals are normal, and calculating to obtain an ambiguity threshold value under a preset confidence level;
and S312, judging whether the integer ambiguity solution falls into the range of the ambiguity threshold, if so, checking the integer ambiguity solution correctly, otherwise, checking the integer ambiguity solution incorrectly, and calculating the integer ambiguity solution by adopting a pseudo-range measurement method.
From the above description, the present invention also adopts a hypothesis testing method to determine whether ambiguity solution is available, so as to improve the reliability of ambiguity solution.
Further, the first positioning data includes first position information, first speed information, and first attitude information;
the step S4 specifically includes the steps of:
s41, constructing a second pseudorange double-difference model according to the first position information, respectively calculating a first double-difference pseudorange, a second double-difference pseudorange, a first double-difference pseudorange rate and a second double-difference pseudorange rate according to the first pseudorange double-difference model and the second pseudorange double-difference model, and constructing a measurement equation according to an error between the first double-difference pseudorange and the second double-difference pseudorange and an error between the first double-difference pseudorange rate and the second double-difference pseudorange rate as observed quantities of data fusion;
s42, establishing a state equation according to the first position information, the first speed information and the first posture information, wherein the state equation comprises a position equation, a speed equation and a posture equation.
Further, the state quantity errors include a speed error, an attitude error, and a position error;
the step S5 specifically includes the steps of:
s51, estimating the speed error, the attitude error and the position error by adopting an extended Kalman filter algorithm according to the first position information, the first speed information, the first attitude information, the measurement equation, a position equation, a speed equation and an attitude equation;
and S52, respectively performing compensation optimization on the first position information, the first speed information and the first attitude information according to the position error, the speed error and the attitude error to obtain final positioning data.
According to the technical scheme, the positioning data of the inertial measurement unit, the positioning data of the global navigation satellite system and the measurement equation and the state equation of the inertial measurement unit are optimized based on monocular vision, each error of the state quantity is estimated by adopting an extended Kalman filtering algorithm, and the position, speed and attitude information obtained by the inertial measurement unit is compensated, so that high-precision information is obtained, and the positioning accuracy is improved.
Further, the step S5 is followed by the step of:
and S6, updating the final positioning data to the first positioning data of the inertial measurement unit.
As can be seen from the above description, the position, velocity, and attitude information with higher accuracy obtained finally is updated to the initial values estimated by the inertial measurement unit, so that the influence of the drift of the inertial measurement unit is reduced.
Referring to fig. 2, a fusion positioning terminal includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor; when the processor executes the computer program, the steps of the above scheme are implemented.
The fusion positioning method and the terminal are applicable to scenarios requiring positioning; however, because operation sites are complex, the global navigation satellite system is often occluded or subjected to electromagnetic interference that interrupts the satellite signals, so its positioning data alone are not accurate enough.
Referring to fig. 1 and fig. 3, a first embodiment of the present invention is:
a fusion localization method, comprising the steps of:
s1, acquiring environment information from the monocular camera, and estimating the position and the posture of the monocular camera according to the environment information;
the step S1 specifically includes the steps of:
s11, receiving environment information of the monocular camera, and preprocessing the environment information;
s12, extracting characteristic points of Harris angular point characteristics and SIFT scale invariant characteristics from a continuous image of the environmental information, and tracking the characteristic points in adjacent images by adopting an LK sparse optical flow algorithm;
the method between the step S12 and the step S13 further comprises the steps of:
s121, checking the correctness of the feature points by an affine checking method, judging whether the number of the feature points is sufficient, removing the feature points with wrong correctness checking, and adding new feature points when the number of the feature points is insufficient;
s13, estimating the relative motion of the landmark points and the monocular camera according to the feature points in the adjacent images, and estimating the position and the posture of the monocular camera by combining the relative motion of the landmark points and the monocular camera.
In this embodiment, the environment is perceived by the monocular camera, and the information it collects is read and preprocessed, including filtering, correction, and grayscale conversion. Harris corner features and SIFT scale-invariant features are extracted from a continuous run of adjacent frames, the features are tracked across adjacent frames with the LK sparse optical flow algorithm, the correctness of the feature points is checked by the affine test, mistracked feature points are removed, and new feature points are added when their number is insufficient. The landmark points and the relative motion of the monocular camera are then estimated from the feature points in adjacent images, and the position and posture of the camera are estimated by combining the two, yielding the visual odometry. The visual odometry comprises position, attitude, landmark point coordinates, and relative motion.
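As an illustration of this front end, a minimal Python/OpenCV sketch follows. The corner count, quality thresholds, and the forward-backward consistency test (standing in for the affine check, whose exact form the patent does not give) are assumptions rather than values from the patent.

```python
# Illustrative front-end sketch: Harris corners tracked by LK sparse optical
# flow, validated by forward-backward consistency, replenished when sparse.
import cv2
import numpy as np

MIN_FEATURES = 100  # assumed replenishment threshold

def track_features(prev_gray, cur_gray, prev_pts):
    """Track feature points between adjacent frames and keep valid ones."""
    cur_pts, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, prev_pts, None)
    back_pts, st_b, _ = cv2.calcOpticalFlowPyrLK(cur_gray, prev_gray, cur_pts, None)
    fb_err = np.linalg.norm(prev_pts - back_pts, axis=2).ravel()
    good = (st.ravel() == 1) & (st_b.ravel() == 1) & (fb_err < 1.0)
    cur_pts = cur_pts[good]
    # add new corners when too few tracks survive, avoiding existing ones
    if len(cur_pts) < MIN_FEATURES:
        mask = np.full(cur_gray.shape, 255, dtype=np.uint8)
        for p in cur_pts.reshape(-1, 2):
            cv2.circle(mask, (int(p[0]), int(p[1])), 10, 0, -1)
        new = cv2.goodFeaturesToTrack(cur_gray, MIN_FEATURES - len(cur_pts),
                                      qualityLevel=0.01, minDistance=10,
                                      mask=mask, useHarrisDetector=True)
        if new is not None:
            cur_pts = np.vstack([cur_pts, new.astype(np.float32)])
    return cur_pts
```

The initial corners would come from cv2.goodFeaturesToTrack on the first preprocessed grayscale frame; SIFT extraction (cv2.SIFT_create in recent OpenCV builds) could be layered on in the same way.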
S2, acquiring first positioning data from an inertia measurement unit, and optimizing the first positioning data according to the position and the posture of the monocular camera;
the first positioning data comprises first position information, first speed information and first posture information;
the step S2 specifically includes the steps of:
s21, receiving the angular velocity, the acceleration and the gravity vector of the three axes of the inertial measurement unit during the static state, integrating the angular velocity, the acceleration and the gravity vector within a preset time to obtain first position information and first velocity information, and obtaining first attitude information according to the update of quaternion;
s22, establishing a position error equation and an attitude error equation according to the first position information, the first speed information, the first attitude information and the position and attitude estimated by the monocular camera within the preset time;
and S23, performing optimization solution on the position error equation and the attitude error equation, and performing optimization compensation on the first position information and the first attitude information according to a calculation result.
In this embodiment, the MEMS-IMU (inertial measurement unit, IMU) measures the three-axis angular velocity and acceleration, and the gravity vector at rest; integrating these over a short time yields the position and velocity information, and the attitude information follows from the quaternion update. Between consecutive frame images, the IMU data provide the position, velocity, and rotation relative to inertial space; an error equation is established by combining these with the position and attitude estimated by the monocular camera, and solving this equation optimally compensates the integral terms of the IMU, giving the optimized first position information and first attitude information.
Wherein, the quaternion update equation is:

$$\dot q^{W}_{B_t} = \frac{1}{2}\,\Omega\!\left(\omega_t\right) q^{W}_{B_t}$$

where $\Omega(\omega_t)$ is the right-multiplication matrix form of the gyro output angular velocity $\omega_t$, and $q^{W}_{B_t}$ is the IMU rotation quaternion at time $t$ in the world coordinate system (superscript $W$); its subscript $B$ denotes the IMU coordinate system.
and estimating the position, the speed and the rotation between two corresponding adjacent images according to the data acquired by the monocular camera and the data acquired by the IMU.
Figure BDA0003466937400000084
Wherein the content of the first and second substances,
Figure BDA0003466937400000085
representing the position of a world coordinate system k frame image IMU;
Figure BDA0003466937400000086
representing the speed of a world coordinate system k frame image IMU; Δ tkThe time step of k frame images;
Figure BDA0003466937400000087
represents the acceleration of the IMU measurement;
Figure BDA0003466937400000088
representing an acceleration deviation; gwRepresents the acceleration of gravity;
Figure BDA0003466937400000089
representing the angular velocity of the IMU measurement;
Figure BDA00034669374000000810
indicating an angular velocity deviation;
Figure BDA00034669374000000811
representing the quaternion of the IMU under k frame images.
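As a concrete illustration of this propagation, here is a minimal numpy sketch performing one Euler integration step per IMU sample; the bias handling, step size, and (w, x, y, z) quaternion ordering are assumptions made for the example.

```python
# Single-step IMU propagation sketch matching the equations above.
import numpy as np

def quat_mul(q, r):
    """Hamilton product q (x) r with components ordered (w, x, y, z)."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([w0*w1 - x0*x1 - y0*y1 - z0*z1,
                     w0*x1 + x0*w1 + y0*z1 - z0*y1,
                     w0*y1 - x0*z1 + y0*w1 + z0*x1,
                     w0*z1 + x0*y1 - y0*x1 + z0*w1])

def quat_to_rot(q):
    """Body-to-world rotation matrix of a unit quaternion."""
    w, x, y, z = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                     [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

def propagate(p, v, q, acc, gyro, b_a, b_w, g_w, dt):
    """One Euler step of position p, velocity v, quaternion q."""
    a_w = quat_to_rot(q) @ (acc - b_a) - g_w      # bias- and gravity-corrected
    p = p + v * dt + 0.5 * a_w * dt * dt
    v = v + a_w * dt
    dq = np.concatenate(([1.0], 0.5 * (gyro - b_w) * dt))  # small-angle quat
    q = quat_mul(q, dq)
    return p, v, q / np.linalg.norm(q)
```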
That is, the IMU data acquired between two adjacent images are pre-integrated:

$$
\begin{aligned}
R^{B_k}_{W}\, p^{W}_{B_{k+1}} &= R^{B_k}_{W}\left(p^{W}_{B_k} + v^{W}_{B_k}\Delta t_k - \tfrac{1}{2} g^{W}\Delta t_k^2\right) + \hat\alpha^{B_k}_{B_{k+1}} \\
R^{B_k}_{W}\, v^{W}_{B_{k+1}} &= R^{B_k}_{W}\left(v^{W}_{B_k} - g^{W}\Delta t_k\right) + \hat\beta^{B_k}_{B_{k+1}}
\end{aligned}
$$

where $\hat\alpha^{B_k}_{B_{k+1}}$ and $\hat\beta^{B_k}_{B_{k+1}}$ denote the relative displacement and relative velocity estimated between the frame-$k$ image and the frame-$(k+1)$ image, and $\alpha^{B_k}_{B_{k+1}}$, $\beta^{B_k}_{B_{k+1}}$, $\gamma^{B_k}_{B_{k+1}}$ are the pre-integration terms, reflecting the relative displacement, velocity, and rotation between the two image frames:

$$
\begin{aligned}
\alpha^{B_k}_{B_{k+1}} &= \iint_{t\in[t_k,\,t_{k+1}]} R^{B_k}_{B_t}\left(\hat a_t - b_{a_t}\right)\mathrm{d}t^2 \\
\beta^{B_k}_{B_{k+1}} &= \int_{t\in[t_k,\,t_{k+1}]} R^{B_k}_{B_t}\left(\hat a_t - b_{a_t}\right)\mathrm{d}t \\
\gamma^{B_k}_{B_{k+1}} &= \int_{t\in[t_k,\,t_{k+1}]} \frac{1}{2}\,\Omega\!\left(\hat\omega_t - b_{\omega_t}\right)\gamma^{B_k}_{B_t}\,\mathrm{d}t
\end{aligned}
$$

where $\Omega(\cdot)$ here denotes the left-multiplication matrix form of the quaternion multiplication operation.
An equation is established from the pre-integration terms. Writing the stacked pre-integration measurements as

$$\hat z^{B_k}_{B_{k+1}} = \begin{bmatrix} \hat\alpha^{B_k}_{B_{k+1}} \\ \hat\beta^{B_k}_{B_{k+1}} \end{bmatrix} = H^{B_k}_{B_{k+1}}\,\mathcal{X}$$

the observation matrix $H^{B_k}_{B_{k+1}}$ is assembled (as in the original equation image) from identity blocks $I$, the rotation matrix $R^{B}_{C}$ from the camera system to the IMU system, and the displacement calculated by the camera for the frame-$(k+1)$ image. Over multiple frame images, this can be expressed as solving a least-squares problem:

$$\min_{\mathcal{X}} \sum_{k\in\Gamma} \left\| \hat z^{B_k}_{B_{k+1}} - H^{B_k}_{B_{k+1}}\,\mathcal{X} \right\|^2$$

where argmin denotes taking the minimizer and $\Gamma$ denotes all image frames in the sliding window. In this way the motion velocity $v^{B_k}$ of each frame image, the gravity vector $g^{C_0}$, and a visual scale factor $S$ are obtained, the unknown vector being $\mathcal{X} = [v^{B_0}, v^{B_1}, \dots, v^{B_n}, g^{C_0}, S]^T$. The attitude angle is then calculated from the angle between the gravity vector and each axis of the sensor.
The estimation of the IMU is as follows:

$$x_{k+1} = A\,x_k$$

where $x_k$ is the position-velocity-acceleration state at time $k$:

$$x_k = [L,\ B,\ H,\ v_E,\ v_N,\ v_U,\ a_e,\ a_n,\ a_u]^T$$

the subscripts denoting the E, N, and U directions of the ENU coordinate system, and the components being, in order, the longitude, latitude, height, E-direction velocity, N-direction velocity, U-direction velocity, E-direction acceleration, N-direction acceleration, and U-direction acceleration.

The transition matrix $A$ (given in the original as an equation image) is a function of $R_M$, the radius of curvature in the prime vertical, $R_N$, the radius of curvature in the meridian, and $\Delta t$, the interval between the previous time and the current time.
The attitude calculation adopts the quaternion update method, and the attitude angles are then calculated from the quaternion, in the standard form:

$$
\begin{aligned}
\gamma &= \arcsin\!\left(2\left(q_0 q_2 - q_1 q_3\right)\right) \\
\theta &= \arctan\!\frac{2\left(q_0 q_1 + q_2 q_3\right)}{1 - 2\left(q_1^2 + q_2^2\right)} \\
\psi &= \arctan\!\frac{2\left(q_0 q_3 + q_1 q_2\right)}{1 - 2\left(q_2^2 + q_3^2\right)}
\end{aligned}
$$

where $\gamma$, $\theta$, and $\psi$ are respectively the pitch, roll, and course (heading) attitude angles, and the quaternion is $q = (q_0,\ q_1,\ q_2,\ q_3)$.
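A small Python transcription of this extraction follows, assuming the formulas above (ZYX rotation order, components ordered (q0, q1, q2, q3) = (w, x, y, z)); the original drawing with the exact formula is not reproduced on this page.

```python
# Attitude-angle extraction sketch: pitch, roll, heading from a unit quaternion.
import numpy as np

def quat_to_attitude(q0, q1, q2, q3):
    pitch = np.arcsin(np.clip(2.0 * (q0*q2 - q1*q3), -1.0, 1.0))
    roll = np.arctan2(2.0 * (q0*q1 + q2*q3), 1.0 - 2.0 * (q1*q1 + q2*q2))
    heading = np.arctan2(2.0 * (q0*q3 + q1*q2), 1.0 - 2.0 * (q2*q2 + q3*q3))
    return pitch, roll, heading
```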
S3, acquiring second positioning data from an inertial navigation system, and assisting the global navigation satellite system with the second positioning data to perform multi-frequency multi-mode real-time differential positioning, obtaining third positioning data;
the global navigation satellite system further comprises the following steps when resolving the real-time differential positioning: obtaining a first pseudo-range double-difference model and a first phase double-difference model between a receiver and a satellite according to the collected pseudo-range, carrier and Doppler observed values, and performing whole-cycle ambiguity resolution;
the step S3 specifically includes the steps of:
s31, receiving second positioning data of an inertial navigation system, judging whether the pseudo range, the carrier wave and the Doppler observed value acquired by the global navigation satellite system in the integer ambiguity solution are missing due to signal interruption, if so, carrying out linearization processing on the first pseudo range double difference model and the first phase double difference model at the interruption position according to the second positioning data, estimating an optimal floating ambiguity solution by a least square method in combination with historical ambiguity and covariance, and calculating by an LAMBDA (label-based ambiguity resolution) method to obtain the integer ambiguity solution;
the steps between S31 and S32 further comprise the steps of:
s311, carrying out correctness verification on the integer ambiguity solution, carrying out statistical analysis on hypothesis test statistics according to historical integer ambiguity solutions stored when global navigation satellite system signals are normal, and calculating to obtain an ambiguity threshold value under a preset confidence level;
and S312, judging whether the integer ambiguity solution falls into the range of the ambiguity threshold, if so, checking the integer ambiguity solution correctly, otherwise, checking the integer ambiguity solution incorrectly, and calculating the integer ambiguity solution by adopting a pseudo-range measurement method.
And S32, continuing resolving the multi-frequency multi-mode real-time differential positioning according to the integer ambiguity solution, and finally obtaining the third positioning data.
In this embodiment, in the GNSS (global navigation satellite system) solution, the pseudo-range, carrier, and Doppler observations serve as raw data from which the pseudo-range and phase double-difference models between receiver and satellites are obtained. In the integer ambiguity solution, exploiting the fact that GNSS signals are easily disturbed from outside while the INS (inertial navigation system) is not, the position output by the INS assists the GNSS in fixing the integer ambiguity: when the GNSS information is interrupted, the pseudo-range and phase double-difference models are linearized at the position computed by the INS, the optimal floating ambiguity solution is estimated by least squares in combination with the historical ambiguity and covariance, the integer ambiguity vector is then computed with the LAMBDA method, and its correctness is verified.
Specifically, after the system starts, during GNSS signal interruptions the INS provides high-rate navigation output comprising position, velocity, and attitude (PVA). Since the base station of the multi-frequency GNSS is fixed at a known geographic position, once the base station and the rover both receive the multi-frequency GNSS signals and the rover receives the base-station information, the rover position can be determined by the RTK (real-time kinematic differential) positioning method. Following the RTK principle, double-difference pseudo-range and carrier-phase observations are formed, provided the carrier-phase observations received by the rover are available. Least-squares adjustment then yields the double-difference floating ambiguity solution and the corresponding optimal covariance, giving a position constraint. The ambiguity is resolved with the LAMBDA method, and a hypothesis test verifies the correctness of the ambiguity solution, i.e. the correctness check: the ambiguities resolved by LAMBDA while the GNSS signal is normal are stored; a reasonable threshold at a given confidence level is computed from statistical analysis of the usual hypothesis-test statistic; and whether the searched ambiguity should be accepted is decided by comparison with this threshold. If the correctness check passes, the current data are used in the subsequent steps; otherwise the pseudo-range measurements are used.
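A minimal sketch of this acceptance test follows; the patent does not specify the test statistic or its distribution, so a quantile threshold over statistics logged during healthy GNSS periods is assumed here.

```python
# Ambiguity acceptance sketch: accept the LAMBDA-fixed solution only if its
# test statistic falls within a threshold learned from healthy-signal history;
# otherwise the caller falls back to pseudo-range measurements.
import numpy as np

def ambiguity_threshold(healthy_stats, confidence=0.99):
    """Empirical threshold at the given confidence level."""
    return np.quantile(np.asarray(healthy_stats), confidence)

def accept_ambiguity(test_stat, healthy_stats, confidence=0.99):
    return test_stat <= ambiguity_threshold(healthy_stats, confidence)
```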
S4, establishing a measurement equation and a state equation according to the first positioning data and the third positioning data;
the step S4 specifically includes the steps of:
s41, constructing a second pseudorange double-difference model according to the first position information, respectively calculating a first double-difference pseudorange, a second double-difference pseudorange, a first double-difference pseudorange rate and a second double-difference pseudorange rate according to the first pseudorange double-difference model and the second pseudorange double-difference model, and constructing a measurement equation according to an error between the first double-difference pseudorange and the second double-difference pseudorange and an error between the first double-difference pseudorange rate and the second double-difference pseudorange rate as observed quantities of data fusion;
s42, establishing a state equation according to the first position information, the first speed information and the first posture information, wherein the state equation comprises a position equation, a speed equation and a posture equation.
In this embodiment, the GNSS and the IMU are tightly coupled to establish the measurement equation and the state equation. A pseudo-range double-difference model of the inertial navigation is constructed from the position computed by the IMU; the double-difference pseudo-range and double-difference pseudo-range rate are calculated from the GNSS data and from the IMU data respectively, and the errors between them serve as the observed quantities of the data fusion, from which the measurement equation is constructed.
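The observation construction can be sketched as follows; the inputs are assumed to be between-receiver single-differenced pseudoranges over a common satellite set, with the reference-satellite index a free parameter.

```python
# Tightly-coupled observation sketch: difference of GNSS-measured and
# IMU-predicted double-difference pseudoranges (same pattern applies to rates).
import numpy as np

def double_difference(sd, ref=0):
    """Between-satellite difference of single-differenced observables."""
    return np.delete(sd - sd[ref], ref)

def fusion_observation(sd_gnss, sd_imu, ref=0):
    return double_difference(sd_gnss, ref) - double_difference(sd_imu, ref)
```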
S5, estimating a state quantity error by adopting an extended Kalman filtering algorithm according to the first positioning data, the measurement equation and the state equation, and compensating the first positioning data based on the state quantity error to obtain final positioning data.
The state quantity errors comprise speed errors, attitude errors and position errors;
the step S5 specifically includes the steps of:
s51, estimating the speed error, the attitude error and the position error by adopting an extended Kalman filter algorithm according to the first position information, the first speed information, the first attitude information, the measurement equation, a position equation, a speed equation and an attitude equation;
and S52, respectively performing compensation optimization on the first position information, the first speed information and the first attitude information according to the position error, the speed error and the attitude error to obtain final positioning data.
The step S5 is followed by the step of:
and S6, updating the final positioning data to the first positioning data of the inertial measurement unit.
In this embodiment, the state quantity errors are the attitude error, velocity error, position error, gyroscope error, accelerometer error, clock-error equivalent distance error, and clock-drift equivalent pseudo-range-rate error. Based on the IMU data optimized by monocular vision and on the measurement and state equations of the GNSS and the IMU, each error of the state quantity is estimated with the extended Kalman filtering algorithm, and the position, velocity, and attitude are finally compensated to obtain higher-precision information. Meanwhile, the resulting high-precision position, velocity, and attitude update the initial values of the IMU solution, reducing the influence of IMU drift.
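The closed-loop correction of S5 and S6 can be sketched as below; the first nine components of the error estimate are assumed to follow the ordering of the state X_I given in the state equation that follows.

```python
# Error compensation and feedback sketch: subtract the EKF error estimate from
# the INS solution, then reseed the IMU mechanization with the result.
import numpy as np

def compensate_and_feedback(pos, vel, att, dx):
    pos_c = pos - dx[0:3]   # longitude, latitude, height errors
    vel_c = vel - dx[3:6]   # E-, N-, U-direction velocity errors
    att_c = att - dx[6:9]   # misalignment (attitude) angle errors
    return pos_c, vel_c, att_c  # also written back as new INS initial values
```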
The state equation comprises an IMU error state equation and a GNSS error state equation, and the state equation formed by attitude error, speed error, position error, gyroscope error and accelerometer error is taken as an example, and is as follows:
$$\dot X_I = F_I X_I + G_I W_I$$

where $W_I$ is the random noise and $X_I$ is the state:

$$X_I = [\delta L,\ \delta B,\ \delta H,\ \delta v_E,\ \delta v_N,\ \delta v_U,\ \phi_E,\ \phi_N,\ \phi_U,\ \delta a_x,\ \delta a_y,\ \delta a_z,\ \delta\omega_x,\ \delta\omega_y,\ \delta\omega_z]^T$$

whose components are, in order, the longitude error, latitude error, height error, E-, N-, and U-direction velocity errors, E-, N-, and U-axis misalignment angle errors, x-, y-, and z-axis acceleration errors, and x-, y-, and z-axis angular velocity errors.

$G_I$ is the noise transformation matrix, assembled (as in the original equation image) from $3\times 3$ zero blocks, written zero(3, 3), and the rotation matrix $C^{W}_{B}$ from the carrier coordinate system to the world coordinate system.

$F_I$ is the state transition matrix; its non-zero sub-blocks (given in the original as equation images) are functions of $R_M$, the radius of curvature in the prime vertical, $R_N$, the radius of curvature in the meridian, $\omega_{ie}$, the Earth rotation angular velocity in the ECEF frame, and $f_e$, $f_n$, $f_u$, the sums of acceleration and error in the e, n, and u directions, respectively.
The equations of the clock-error equivalent distance error and the clock-drift equivalent pseudo-range-rate error are:

$$\dot X_G = F_G X_G + W_G$$

where

$$X_G = [\delta t_u,\ \delta t_{ru}]^T,\qquad W_G = [w_{tu},\ w_{tru}]^T,\qquad F_G = \begin{bmatrix} 0 & 1 \\ 0 & -\beta_{tru} \end{bmatrix}$$

with $\delta t_u$ the clock error equivalent distance, $\delta t_{ru}$ the clock drift equivalent pseudo-range rate, $\beta_{tru}$ the error correlation time, and $w_{tu}$, $w_{tru}$ random white noise.
The measurement equation, i.e. the observation equation, comprises an IMU error observation equation and a pseudo-range/pseudo-range-rate error observation equation. The IMU observation equation is:

$$Z_I = H_I X_I + V_I$$

where $Z_I$ is the observation vector, $H_I$ the observation matrix, and $V_I$ the observation error; $H_I$ is assembled (as in the original equation images) from $3\times 3$ identity blocks, written eye(3, 3), and $3\times 3$ zero blocks, written zero(3, 3).
The pseudo-range and pseudo-range-rate error observation equation is:

$$Z_G = H_G X_G + U_G + V_G$$

where (as given in the original equation images) $\delta_x$ is the difference between the moving-carrier position estimated by the IMU and the satellite positions derived from the satellite ephemeris, $\delta_{\dot x}$ is the difference between the moving-carrier velocity estimated by the IMU and the satellite velocities derived from the satellite ephemeris, and $V_G$ is the pseudo-range and pseudo-range-rate observation error.
The extended Kalman filter estimates the state as follows:
(1) Linearize the system model:

$$x_k = f_{k-1}\left(x_{k-1},\ w_{k-1}\right)$$

where $f_{k-1}$ is a nonlinear function; $w_{k-1}$ is the process noise; $x_k$ is the true value at time $k$; $\hat x_{k-1|k-1}$ is the state estimate at time $k-1$; and $\tilde x_{k-1|k-1}$ is the prediction error at time $k-1$.

First-order Taylor expansion:

$$x_k \approx f_{k-1}\left(\hat x_{k-1|k-1},\ 0\right) + F_{k-1}\,\tilde x_{k-1|k-1} + G_{k-1}\, w_{k-1}$$

where $F_{k-1} = \partial f_{k-1}/\partial x$ is the partial derivative of the nonlinear function with respect to the state $x$, and $G_{k-1} = \partial f_{k-1}/\partial w$ is the partial derivative of the nonlinear function with respect to the error $w$.

(2) Perform one-step prediction of time $k$ from the state value at time $k-1$:

$$\hat x_{k|k-1} = f_{k-1}\left(\hat x_{k-1|k-1},\ 0\right)$$

The error generated by the one-step state prediction is:

$$\tilde x_{k|k-1} = x_k - \hat x_{k|k-1}$$

The covariance matrix of the state prediction error is:

$$P_{k|k-1} = F_{k-1}\, P_{k-1|k-1}\, F_{k-1}^{T} + G_{k-1}\, Q_{k-1}\, G_{k-1}^{T}$$

where $P_{k-1|k-1}$ is the state covariance matrix at time $k-1$, $Q_{k-1}$ is the error (process noise) covariance at time $k-1$, and the superscript $T$ denotes the transpose.

(3) The linearized equation for the measurement at time $k$ is:

$$z_k \approx h_k\left(\hat x_{k|k-1},\ 0\right) + H_k\,\tilde x_{k|k-1} + M_k\, v_k$$

where $z_k$ is the observed quantity, $h_k$ is the observation function, $v_k$ is the observation noise, $H_k = \partial h_k/\partial x$ is the partial derivative of the observation function with respect to the state $x$, and $M_k = \partial h_k/\partial v$ is the partial derivative of the observation function with respect to the noise $v$.

(4) One-step prediction of the measurement at time $k$:

$$\hat z_{k|k-1} = h_k\left(\hat x_{k|k-1},\ 0\right)$$

The measurement prediction error is:

$$\tilde z_{k|k-1} = z_k - \hat z_{k|k-1}$$

The covariance matrices of the measurement prediction error and of the state/measurement prediction errors are:

$$P_{zz,k} = H_k\, P_{k|k-1}\, H_k^{T} + M_k\, R_k\, M_k^{T},\qquad P_{xz,k} = P_{k|k-1}\, H_k^{T}$$

with $R_k$ the observation noise covariance.

(5) The update formulas of the state filtering are as follows.

State update:

$$\hat x_{k|k} = \hat x_{k|k-1} + K_k\,\tilde z_{k|k-1}$$

Covariance update:

$$P_{k|k} = \left(I - K_k H_k\right) P_{k|k-1}$$

where $I$ denotes the identity matrix. The Kalman gain $K_k$ at time $k$ is:

$$K_k = P_{xz,k}\, P_{zz,k}^{-1} = P_{k|k-1} H_k^{T}\left(H_k P_{k|k-1} H_k^{T} + M_k R_k M_k^{T}\right)^{-1}$$
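For reference, a compact Python transcription of steps (2) through (5) is sketched below; the model functions f, h and the Jacobians F, G, H, M are assumed to be supplied by the caller, with Q and R the process and observation noise covariances.

```python
# One extended-Kalman-filter cycle implementing the equations above.
import numpy as np

def ekf_step(x, P, z, f, h, F, G, H, M, Q, R):
    # (2) one-step state prediction and its covariance
    x_pred = f(x)
    P_pred = F @ P @ F.T + G @ Q @ G.T
    # (4) measurement prediction and its covariances
    z_pred = h(x_pred)
    P_zz = H @ P_pred @ H.T + M @ R @ M.T
    P_xz = P_pred @ H.T
    # Kalman gain
    K = P_xz @ np.linalg.inv(P_zz)
    # (5) state and covariance update
    x_new = x_pred + K @ (z - z_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```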
referring to fig. 2, the second embodiment of the present invention is:
a fusion positioning terminal 1, comprising a processor 2, a memory 3, and a computer program stored in the memory 3 and operable on the processor 2, wherein the processor 2 implements the steps of the first embodiment when executing the computer program.
The principle of the method is to fuse the data acquired by a multi-frequency multi-mode real-time differential positioning unit, an inertial measurement unit, and a monocular camera with an extended Kalman filtering algorithm, thereby improving the positioning accuracy.
In summary, the fusion positioning method and terminal provided by the invention fuse the data acquired by multi-frequency multi-mode real-time differential positioning, an inertial measurement unit, and a monocular camera with an extended Kalman filtering algorithm, combining the long-term absolute positioning of the global navigation satellite system with the short-term accurate position estimation of the IMU and monocular vision to obtain an optimal estimate of the position state. The IMU recovers the metric scale of the monocular vision and improves the motion tracking performance, while the visual measurements limit the error drift of the IMU, yielding a position and attitude of higher precision and credibility and thereby improving the positioning accuracy.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to the related technical fields, are included in the scope of the present invention.

Claims (10)

1. A fusion positioning method is characterized by comprising the following steps:
s1, receiving environment information of the monocular camera, and estimating the position and the posture of the monocular camera according to the environment information;
s2, receiving first positioning data of an inertia measurement unit, and optimizing the first positioning data according to the position and the posture of the monocular camera;
s3, receiving second positioning data of the inertial navigation system, and assisting the global navigation satellite system to perform multi-frequency multi-mode real-time differential positioning according to the second positioning data to obtain third positioning data;
s4, establishing a measurement equation and a state equation according to the first positioning data and the third positioning data;
s5, estimating a state quantity error by adopting an extended Kalman filtering algorithm according to the first positioning data, the measurement equation and the state equation, and compensating the first positioning data based on the state quantity error to obtain final positioning data.
2. The fusion positioning method according to claim 1, wherein the step S1 specifically includes the steps of:
s11, receiving environment information of the monocular camera, and preprocessing the environment information;
s12, extracting characteristic points of Harris angular point characteristics and SIFT scale invariant characteristics from a continuous image of the environmental information, and tracking the characteristic points in adjacent images by adopting an LK sparse optical flow algorithm;
s13, estimating the relative motion of the landmark points and the monocular camera according to the feature points in the adjacent images, and estimating the position and the posture of the monocular camera by combining the relative motion of the landmark points and the monocular camera.
3. The fusion positioning method as claimed in claim 2, further comprising, between the step S12 and the step S13:
and S121, checking the correctness of the feature points by an affine checking method, judging whether the number of the feature points is sufficient, removing the feature points with wrong correctness verification, and adding new feature points when the number of the feature points is insufficient.
4. The fused positioning method according to claim 1, wherein the first positioning data comprises first position information, first velocity information and first attitude information;
the step S2 specifically includes the steps of:
s21, receiving the angular velocity, the acceleration and the gravity vector of the three axes of the inertial measurement unit during the static state, integrating the angular velocity, the acceleration and the gravity vector within a preset time to obtain first position information and first velocity information, and obtaining first attitude information according to the update of quaternion;
s22, establishing a position error equation and an attitude error equation according to the first position information, the first speed information, the first attitude information and the position and attitude estimated by the monocular camera within the preset time;
and S23, performing optimization solution on the position error equation and the attitude error equation, and performing optimization compensation on the first position information and the first attitude information according to a calculation result.
5. The fused positioning method according to claim 1, wherein the global navigation satellite system performs the solution of real-time differential positioning, and further comprising the steps of: obtaining a first pseudo-range double-difference model and a first phase double-difference model between a receiver and a satellite according to the collected pseudo-range, carrier and Doppler observed values, and performing whole-cycle ambiguity resolution;
the step S3 specifically includes the steps of:
s31, receiving second positioning data of an inertial navigation system, judging whether the pseudo range, the carrier wave and the Doppler observed value acquired by the global navigation satellite system in the integer ambiguity solution are missing due to signal interruption, if so, carrying out linearization processing on the first pseudo range double difference model and the first phase double difference model at the interruption position according to the second positioning data, estimating an optimal floating ambiguity solution by a least square method in combination with historical ambiguity and covariance, and calculating by an LAMBDA (label-based ambiguity resolution) method to obtain the integer ambiguity solution;
and S32, continuing resolving the multi-frequency multi-mode real-time differential positioning according to the integer ambiguity solution, and finally obtaining the third positioning data.
6. The fusion positioning method as claimed in claim 5, further comprising, between the steps S31 and S32:
s311, carrying out correctness verification on the integer ambiguity solution, carrying out statistical analysis on hypothesis test statistics according to historical integer ambiguity solutions stored when global navigation satellite system signals are normal, and calculating to obtain an ambiguity threshold value under a preset confidence level;
and S312, judging whether the integer ambiguity solution falls into the range of the ambiguity threshold, if so, checking the integer ambiguity solution correctly, otherwise, checking the integer ambiguity solution incorrectly, and calculating the integer ambiguity solution by adopting a pseudo-range measurement method.
7. The fused positioning method according to claim 5, wherein the first positioning data comprises first position information, first velocity information and first attitude information;
the step S4 specifically includes the steps of:
s41, constructing a second pseudorange double-difference model according to the first position information, respectively calculating a first double-difference pseudorange, a second double-difference pseudorange, a first double-difference pseudorange rate and a second double-difference pseudorange rate according to the first pseudorange double-difference model and the second pseudorange double-difference model, and constructing a measurement equation according to an error between the first double-difference pseudorange and the second double-difference pseudorange and an error between the first double-difference pseudorange rate and the second double-difference pseudorange rate as observed quantities of data fusion;
s42, establishing a state equation according to the first position information, the first speed information and the first posture information, wherein the state equation comprises a position equation, a speed equation and a posture equation.
8. The fusion positioning method according to claim 7, wherein the state quantity errors comprise a velocity error, an attitude error and a position error;
the step S5 specifically includes the steps of:
s51, estimating the speed error, the attitude error and the position error by adopting an extended Kalman filter algorithm according to the first position information, the first speed information, the first attitude information, the measurement equation, a position equation, a speed equation and an attitude equation;
and S52, respectively performing compensation optimization on the first position information, the first speed information and the first attitude information according to the position error, the speed error and the attitude error to obtain final positioning data.
9. The fusion positioning method according to claim 1, further comprising, after the step S5, the steps of:
and S6, updating the final positioning data to the first positioning data of the inertial measurement unit.
10. A fusion positioning terminal, comprising a processor, a memory and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of a fusion positioning method according to any one of claims 1 to 9.
CN202210039694.4A 2022-01-12 2022-01-12 Fusion positioning method and terminal Pending CN114396943A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210039694.4A CN114396943A (en) 2022-01-12 2022-01-12 Fusion positioning method and terminal


Publications (1)

Publication Number Publication Date
CN114396943A (en) 2022-04-26

Family

ID=81230523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210039694.4A Pending CN114396943A (en) 2022-01-12 2022-01-12 Fusion positioning method and terminal

Country Status (1)

Country Link
CN (1) CN114396943A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106597480A (en) * 2016-12-08 2017-04-26 深圳大学 Anti-interference positioning method and system for satellite navigation RTK transmitting station
CN107478221A (en) * 2017-08-11 2017-12-15 黄润芳 A kind of high-precision locating method for mobile terminal
CN109099912A (en) * 2017-08-11 2018-12-28 黄润芳 Outdoor accurate positioning air navigation aid, device, electronic equipment and storage medium
CN109212573A (en) * 2018-10-15 2019-01-15 东南大学 For surveying and drawing the positioning system and method for vehicle under a kind of urban canyon environment
CN110412635A (en) * 2019-07-22 2019-11-05 武汉大学 A kind of environment beacon support under GNSS/SINS/ vision tight integration method
CN111077556A (en) * 2020-01-02 2020-04-28 东南大学 Airport luggage tractor positioning device and method integrating Beidou and multiple sensors
CN111413720A (en) * 2020-03-21 2020-07-14 哈尔滨工程大学 Multi-frequency Beidou carrier phase difference/INS combined positioning method
CN112987065A (en) * 2021-02-04 2021-06-18 东南大学 Handheld SLAM device integrating multiple sensors and control method thereof

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116681759A (en) * 2023-04-19 2023-09-01 中国科学院上海微系统与信息技术研究所 Camera pose estimation method based on self-supervision visual inertial odometer
CN116681759B (en) * 2023-04-19 2024-02-23 中国科学院上海微系统与信息技术研究所 Camera pose estimation method based on self-supervision visual inertial odometer
CN116182873A (en) * 2023-05-04 2023-05-30 长沙驰芯半导体科技有限公司 Indoor positioning method, system and computer readable medium
CN117310756A (en) * 2023-11-30 2023-12-29 宁波路特斯机器人有限公司 Multi-sensor fusion positioning method and system and machine-readable storage medium
CN117310756B (en) * 2023-11-30 2024-03-29 宁波路特斯机器人有限公司 Multi-sensor fusion positioning method and system and machine-readable storage medium

Similar Documents

Publication Publication Date Title
CN109931926B (en) Unmanned aerial vehicle seamless autonomous navigation method based on station-core coordinate system
CN111578935B (en) Method for assisting GNSS ambiguity fixing by inertial navigation position increment
EP2434256B1 (en) Camera and inertial measurement unit integration with navigation data feedback for feature tracking
US20200124421A1 (en) Method and apparatus for estimating position
CN114396943A (en) Fusion positioning method and terminal
US6496778B1 (en) Real-time integrated vehicle positioning method and system with differential GPS
EP2133662B1 (en) Methods and system of navigation using terrain features
CN112230242B (en) Pose estimation system and method
CN113406682B (en) Positioning method, positioning device, electronic equipment and storage medium
Georgy et al. Vehicle navigator using a mixture particle filter for inertial sensors/odometer/map data/GPS integration
CN104729506A (en) Unmanned aerial vehicle autonomous navigation positioning method with assistance of visual information
Liao et al. Enhancing navigation performance through visual-inertial odometry in GNSS-degraded environment
CN113203418B (en) GNSSINS visual fusion positioning method and system based on sequential Kalman filtering
CN114002725A (en) Lane line auxiliary positioning method and device, electronic equipment and storage medium
Li et al. Multi-GNSS PPP/INS/Vision/LiDAR tightly integrated system for precise navigation in urban environments
Zou et al. A nonlinear transfer alignment of distributed POS based on adaptive second-order divided difference filter
Kanhere et al. Integrity for GPS/LiDAR fusion utilizing a RAIM framework
JP3900365B2 (en) Positioning device and positioning method
CN115435779A (en) Intelligent body pose estimation method based on GNSS/IMU/optical flow information fusion
CN107270904B (en) Unmanned aerial vehicle auxiliary guide control system and method based on image registration
Dabove et al. Monocular visual odometry with unmanned underwater vehicle using low cost sensors
CN116184430B (en) Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit
Ćwian et al. GNSS-augmented lidar slam for accurate vehicle localization in large scale urban environments
CN115900732A (en) Combined navigation method and system based on roadside camera and vehicle-mounted unit
CN112924999B (en) Unmanned aerial vehicle positioning method, system, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination