CN116068585B - High-definition map acquisition data time synchronization method and system based on signal closed loop

High-definition map acquisition data time synchronization method and system based on signal closed loop

Info

Publication number: CN116068585B
Application number: CN202310212693.XA
Authority: CN (China)
Prior art keywords: signal, feature vector, waveform, PPS, neural network
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN116068585A
Inventors: 章洪亮, 刘力, 苏林峰, 龚利恒
Current Assignee: Shenzhen Zhangrui Electronic Co ltd
Original Assignee: Shenzhen Zhangrui Electronic Co ltd
Application filed by Shenzhen Zhangrui Electronic Co ltd
Priority to CN202310212693.XA
Publication of CN116068585A; application granted; publication of CN116068585B


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10: ... by using measurements of speed or acceleration
    • G01C 21/12: ... executed aboard the object being navigated; Dead reckoning
    • G01C 21/16: ... by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165: ... combined with non-inertial navigation instruments
    • G01C 21/1656: ... with passive imaging devices, e.g. cameras
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00: Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/01: Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/03: Cooperating elements; Interaction or communication between different cooperating elements or between cooperating elements and receivers
    • G01S 19/38: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39: ... the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS, GLONASS or GALILEO
    • G01S 19/42: Determining position
    • G01S 19/45: Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S 19/47: ... the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00: Programme-control systems
    • G05B 19/02: Programme-control systems electric
    • G05B 19/04: Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B 19/042: ... using digital processors

Abstract

A signal-closed-loop method and system for time-synchronizing high-definition map acquisition data are disclosed. The method comprises the following steps: the positioning module simultaneously transmits a PPS signal to the micro control unit, the inertial navigation module and the system-on-chip, and the system-on-chip starts the camera module after receiving the PPS signal; the camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module; the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard; after determining that the synchronization accuracy meets the predetermined standard, the micro control unit transmits the positioning information acquired by the positioning module and the pose signal acquired by the inertial navigation module to the system-on-chip; and the system-on-chip transmits the pose signal, the positioning information and the image data acquired by the camera module to a background server. In this way, map positioning accuracy can be improved.

Description

High-definition map acquisition data time synchronization method and system based on signal closed loop
Technical Field
The application relates to the field of signal synchronization, and in particular relates to a high-definition map acquisition data time synchronization method and system based on signal closed loop.
Background
A high-definition map acquisition product generally comprises a positioning module (RTK GNSS), an inertial navigation module (IMU), a camera module (CAMERA), an MCU, an SOC and the like. The acquired values, such as the camera photos, the positioning poses and the IMU measurement sequences, must be synchronously aligned to the GPS timestamp of the positioning module, with a time error within 10 ms (the smaller, the better). Existing high-definition map acquisition products suffer from large time delay and from map data that is out of sync with the positioning information, so the map positioning accuracy is insufficient.
Therefore, an optimized high definition map acquisition data time synchronization scheme is desired.
Disclosure of Invention
The present application has been made in order to solve the above technical problems. The embodiments of the application provide a signal-closed-loop method and system for time-synchronizing high-definition map acquisition data. The method comprises the following steps: the positioning module simultaneously transmits a PPS signal to the micro control unit, the inertial navigation module and the system-on-chip, and the system-on-chip starts the camera module after receiving the PPS signal; the camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module; the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard; after determining that the synchronization accuracy meets the predetermined standard, the micro control unit transmits the positioning information acquired by the positioning module and the pose signal acquired by the inertial navigation module to the system-on-chip; and the system-on-chip transmits the pose signal, the positioning information and the image data acquired by the camera module to a background server. In this way, map positioning accuracy can be improved.
According to one aspect of the application, there is provided a signal-closed-loop method for time-synchronizing high-definition map acquisition data, comprising: the positioning module simultaneously transmits a PPS signal to the micro control unit, the inertial navigation module and the system-on-chip, and the system-on-chip starts the camera module after receiving the PPS signal; the camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module; the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard; after determining that the synchronization accuracy meets the predetermined standard, the micro control unit transmits the positioning information acquired by the positioning module and the pose signal acquired by the inertial navigation module to the system-on-chip; and the system-on-chip transmits the pose signal, the positioning information and the image data acquired by the camera module to a background server.
In the above method, the micro control unit analyzing the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard comprises: passing the PPS signal through a first convolutional neural network model serving as a filter to obtain a PPS signal waveform feature vector; passing the frame synchronization signal through a second convolutional neural network model serving as a filter to obtain a frame synchronization signal waveform feature vector; calculating a waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector; performing feature distribution constraint on the waveform difference feature vector to obtain a constrained waveform difference feature vector; and passing the constrained waveform difference feature vector through a classifier to obtain a classification result indicating whether the synchronization accuracy meets the predetermined standard.
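The five analysis steps above can be sketched end to end. In the toy sketch below the extractor and classifier callables stand in for the two CNN filter models and the Softmax classifier described in the text; every name here is illustrative, not the patent's implementation, and the distribution constraint is simplified to plain standardization.

```python
import numpy as np

def analyze_sync(pps_signal, fsync_signal, extract_pps, extract_fsync, classify):
    """Sketch of the MCU's five analysis steps: filter both signals,
    take the position-wise difference, constrain its distribution,
    then classify whether the synchronization precision meets the standard."""
    v1 = extract_pps(np.asarray(pps_signal, dtype=float))      # PPS waveform feature vector
    v2 = extract_fsync(np.asarray(fsync_signal, dtype=float))  # frame-sync waveform feature vector
    diff = v1 - v2                                # position-wise waveform difference
    mu, sigma = diff.mean(), diff.std()
    constrained = (diff - mu) / (sigma + 1e-8)    # simplified distribution constraint
    return classify(constrained)                  # True if precision meets the standard
```

In practice the `extract_*` callables would be the trained first and second convolutional neural network models, and `classify` the trained Softmax classifier.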
In the above method, passing the PPS signal through the first convolutional neural network model serving as a filter to obtain the PPS signal waveform feature vector comprises: each layer of the first convolutional neural network model performs, in its forward pass, two-dimensional convolution, feature-matrix-based mean pooling and nonlinear activation on its input data, where the input of the first layer is the PPS signal and the last layer outputs the PPS signal waveform feature vector.
In the above method, passing the frame synchronization signal through the second convolutional neural network model serving as a filter to obtain the frame synchronization signal waveform feature vector comprises: each layer of the second convolutional neural network model performs, in its forward pass, two-dimensional convolution, feature-matrix-based mean pooling and nonlinear activation on its input data, where the input of the first layer is the frame synchronization signal and the last layer outputs the frame synchronization signal waveform feature vector.
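A single such layer (2-D convolution, then mean pooling on the resulting feature matrix, then a nonlinear activation) can be illustrated in plain numpy. This is a minimal one-channel sketch under the assumption of valid-mode convolution, non-overlapping pooling windows and ReLU as the activation; the patent does not fix these choices.

```python
import numpy as np

def conv_layer(x, kernel, pool=2):
    """One layer of the filter CNN: valid 2-D convolution, mean pooling
    over non-overlapping pool x pool windows, then ReLU activation."""
    kh, kw = kernel.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):           # slide the kernel over the input
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    ph, pw = out.shape[0] // pool, out.shape[1] // pool
    pooled = out[:ph * pool, :pw * pool].reshape(ph, pool, pw, pool).mean(axis=(1, 3))
    return np.maximum(pooled, 0.0)          # nonlinear activation
```

Stacking several such layers, with the PPS (or frame-sync) waveform image as the first input, yields the waveform feature map that is flattened into the feature vector.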
In the above method, the first convolutional neural network model serving as a filter and/or the second convolutional neural network model serving as a filter is a deep residual network model.
In the above method, calculating the waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector comprises calculating it according to the following formula:

$$V_c = V_1 \ominus V_2$$

wherein $V_1$ is the PPS signal waveform feature vector, $V_2$ is the frame synchronization signal waveform feature vector, $V_c$ is the waveform difference feature vector, and $\ominus$ denotes position-wise subtraction.
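The position-wise difference is simply element-wise subtraction of the two feature vectors; a toy numpy example (the feature values below are illustrative stand-ins for CNN outputs):

```python
import numpy as np

v1 = np.array([0.9, 0.2, 0.5, 0.7])  # PPS signal waveform feature vector (toy values)
v2 = np.array([0.8, 0.3, 0.5, 0.4])  # frame-sync signal waveform feature vector (toy values)
vc = v1 - v2                         # waveform difference feature vector, position by position
```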
In the above method, performing feature distribution constraint on the waveform difference feature vector to obtain the constrained waveform difference feature vector comprises performing the constraint according to the following formula:

$$v_i' = \frac{v_i - \mu}{\sigma}\,\exp\!\left(-\frac{\lVert V_c \rVert_2^2}{\lVert V_c^\top V_c \rVert_F}\right)$$

wherein $V_c$ denotes the waveform difference feature vector, in the form of a row vector; $V_c^\top$ denotes the transpose of the waveform difference feature vector; $\lVert \cdot \rVert_2^2$ denotes the square of the two-norm of a vector; $\lVert \cdot \rVert_F$ denotes the Frobenius norm of a matrix; $v_i$ and $v_i'$ are respectively the feature value of the $i$-th position of the waveform difference feature vector before and after correction; $\mu$ and $\sigma$ are the mean and variance of the feature set $\{v_i\}$; and $\exp(\cdot)$ denotes raising the natural exponent to the power of the value.
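One possible reading of the distribution constraint, assuming the standardize-and-rescale closed form suggested by the symbol list (mean, variance, vector two-norm, Frobenius norm of the outer-product matrix, natural exponential), is sketched below; the exact equation in the original patent was rendered as an image, so this reconstruction is an assumption.

```python
import numpy as np

def constrain(v):
    """Assumed form of the feature distribution constraint: standardize
    each position of the row vector v, then scale by
    exp(-||v||_2^2 / ||v^T v||_F). Reconstruction, not the patent's exact formula."""
    mu, sigma = v.mean(), v.std()
    outer = np.outer(v, v)                               # the matrix v^T v for a row vector v
    scale = np.exp(-np.dot(v, v) / np.linalg.norm(outer, "fro"))
    return (v - mu) / (sigma + 1e-8) * scale
```

Note that for a row vector the Frobenius norm of the outer product equals the squared two-norm, so the exponential factor is constant; the standardization is what reshapes the discretized feature distribution toward the classifier's comfort zone.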
In the above method, passing the constrained waveform difference feature vector through the classifier to obtain the classification result indicating whether the synchronization precision meets the predetermined standard comprises: performing full-connection coding on the constrained waveform difference feature vector using a plurality of fully connected layers of the classifier to obtain a coded classification feature vector; and passing the coded classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
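The classifier head described above (fully connected encoding followed by a Softmax over the two labels) can be sketched as follows; the layer sizes, ReLU activation and label convention (index 0 = precision meets the standard) are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())               # shift for numerical stability
    return e / e.sum()

def classify(v, W1, b1, W2, b2):
    """Sketch of the classifier: full-connection coding of the constrained
    waveform difference feature vector, then Softmax over two class labels."""
    h = np.maximum(W1 @ v + b1, 0.0)      # fully connected layer + ReLU
    probs = softmax(W2 @ h + b2)          # class probability vector
    return probs, bool(probs.argmax() == 0)  # label 0 = precision meets the standard
```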
According to another aspect of the present application, there is provided a signal-closed-loop system for time-synchronizing high-definition map acquisition data, comprising: a positioning module, a micro control unit, an inertial navigation module, a system-on-chip and a camera module. The positioning module simultaneously transmits a PPS signal to the micro control unit, the inertial navigation module and the system-on-chip, and the system-on-chip starts the camera module after receiving the PPS signal; the camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module; the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard; after determining that the synchronization accuracy meets the predetermined standard, the micro control unit transmits the positioning information acquired by the positioning module and the pose signal acquired by the inertial navigation module to the system-on-chip; and the system-on-chip transmits the pose signal, the positioning information and the image data acquired by the camera module to a background server.
Compared with the prior art, the signal-closed-loop method and system for time-synchronizing high-definition map acquisition data provided by the application comprise the following steps: the positioning module simultaneously transmits a PPS signal to the micro control unit, the inertial navigation module and the system-on-chip, and the system-on-chip starts the camera module after receiving the PPS signal; the camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module; the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard; after determining that the synchronization accuracy meets the predetermined standard, the micro control unit transmits the positioning information acquired by the positioning module and the pose signal acquired by the inertial navigation module to the system-on-chip; and the system-on-chip transmits the pose signal, the positioning information and the image data acquired by the camera module to a background server. In this way, map positioning accuracy can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. The following drawings are not intended to be drawn to scale, with emphasis instead being placed upon illustrating the principles of the present application.
Fig. 1 is a block diagram of a high definition map acquisition data time synchronization system based on signal closed loop according to an embodiment of the present application.
Fig. 2 is a flowchart of a method for time synchronization of high definition map acquisition data based on signal closed loop according to an embodiment of the present application.
Fig. 3 is a functional block diagram of a high-definition map acquisition data time synchronization system based on signal closed loop according to an embodiment of the present application.
Fig. 4 is a schematic diagram of an operation process of a high-definition map acquisition data time synchronization system based on signal closed loop according to an embodiment of the application.
Fig. 5 is a schematic view of a scenario of a sub-step S300 in a high-definition map acquisition data time synchronization method based on signal closed loop according to an embodiment of the present application.
Fig. 6 is a flowchart of sub-step S300 in the method for time synchronization of high-definition map acquisition data based on signal closed loop according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of sub-step S300 in the method for time synchronization of high-definition map acquisition data based on signal closed loop according to an embodiment of the present application.
Fig. 8 is a flowchart of sub-step S350 in the method for time synchronization of high-definition map acquisition data based on signal closed loop according to an embodiment of the present application.
Fig. 9 is a block diagram of the signal synchronization control system according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some, but not all embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the present application without making any inventive effort, are also within the scope of the present application.
As used in this application and in the claims, the terms "a," "an," "the," and/or "said" are not specific to the singular, but may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
Although the present application makes various references to certain modules in a system according to embodiments of the present application, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
Flowcharts are used in this application to describe the operations performed by systems according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed in order precisely. Rather, the various steps may be processed in reverse order or simultaneously, as desired. Also, other operations may be added to or removed from these processes.
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Accordingly, in the technical solution of the present application, the key to improving the synchronicity between the map data and the positioning information is to monitor the synchronization signals in real time, so as to ensure that the synchronization between them accurately meets the predetermined standard.
Specifically, as shown in fig. 1, in the technical solution of the present application, the signal-closed-loop high-definition map acquisition data time synchronization system 100 includes: the positioning module 110, the micro control unit 120, the inertial navigation module 130, the system-on-chip 140 and the camera module 150, and it operates in the following manner:
1) The positioning module simultaneously transmits a PPS signal to the micro control unit, the inertial navigation module and the system-on-chip, and the system-on-chip starts the camera module after receiving the PPS signal.
2) The camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module.
3) The micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard.
4) After determining that the synchronization accuracy meets the predetermined standard, the micro control unit transmits the positioning information acquired by the positioning module and the pose signal acquired by the inertial navigation module to the system-on-chip.
5) The system-on-chip transmits the pose signal, the positioning information and the image data acquired by the camera module to a background server.
Correspondingly, as shown in fig. 2, the technical scheme of the present application also provides a corresponding signal-closed-loop method for time-synchronizing high-definition map acquisition data, comprising: S100, the positioning module simultaneously transmits a PPS signal to the micro control unit, the inertial navigation module and the system-on-chip, and the system-on-chip starts the camera module after receiving the PPS signal; S200, the camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module; S300, the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard; S400, after determining that the synchronization accuracy meets the predetermined standard, the micro control unit transmits the positioning information acquired by the positioning module and the pose signal acquired by the inertial navigation module to the system-on-chip; and S500, the system-on-chip transmits the pose signal, the positioning information and the image data acquired by the camera module to a background server.
Referring to fig. 3 and fig. 4, in the technical scheme of the application, the PPS signal of the GPS is used as the main synchronization signal, and the frame synchronization signal of the camera module is fed back to the MCU and the IMU, so as to achieve accurate synchronization of the GPS data, the IMU data and the photo data. Specifically: the PPS signal is simultaneously output to the MCU, the IMU and the SOC; after the SOC receives the synchronization signal, it starts the camera to capture images, and at the same time the frame synchronization signal F_SYNC of the camera is fed back to the MCU and the IMU module; the MCU accurately compares the PPS synchronization signal with the F_SYNC synchronization signal and controls the time synchronization. While within the synchronization precision, the MCU transmits the positioning + pose information to the SOC, and the SOC repacks the image data based on the positioning + pose information and transmits them together to the background server. Compared with the prior art, the technical scheme of the application closes the loop using the PPS and F_SYNC signals and compares them accurately, thereby reducing time delay and improving data synchronization precision.
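The closed-loop comparison the MCU performs can be illustrated with a toy timestamp check. The pairing rule and tolerance below are an illustration only: each F_SYNC edge is matched to the nearest PPS edge and the offset must stay within the 10 ms standard cited in the background section, whereas the real MCU works on hardware capture timestamps and the learned classifier described above.

```python
def sync_within_standard(pps_edges, fsync_edges, tolerance_s=0.010):
    """Toy model of the MCU comparison step: pair every F_SYNC edge with the
    nearest PPS edge (timestamps in seconds) and require every offset to stay
    within the 10 ms synchronization standard."""
    return all(min(abs(f - p) for p in pps_edges) <= tolerance_s
               for f in fsync_edges)
```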
Specifically, in the technical solution of the present application, the micro control unit analyzing the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard includes the following steps. First, the PPS signal is passed through a first convolutional neural network model serving as a filter to obtain a PPS signal waveform feature vector; at the same time, the frame synchronization signal is passed through a second convolutional neural network model serving as a filter to obtain a frame synchronization signal waveform feature vector. That is, in the technical solution of the present application, a convolutional neural network model, which performs excellently in the field of image feature extraction, is used as a feature filter to capture the local waveform features of the PPS signal and the frame synchronization signal in the image domain.
Further, a waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector is calculated. That is, the synchronization difference between the PPS signal and the frame synchronization signal is given a high-dimensional waveform feature representation by the waveform difference feature vector between the two waveform feature vectors. Finally, the waveform difference feature vector is passed through a classifier to obtain a classification result indicating whether the synchronization precision meets the predetermined standard. That is, the classifier determines the class probability label to which the waveform difference feature vector belongs, so as to determine whether the synchronization between the synchronization signals is accurate within the predetermined standard range.
Here, the waveform image semantic difference between the PPS signal and the frame synchronization signal in the image source domain may be amplified in the feature domain by the feature extraction of the first and second convolutional neural network models serving as filters. Although this is advantageous for the waveform difference feature vector to express the difference features between the PPS signal and the frame synchronization signal, the position-wise difference computation between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector may also cause discretization of the overall feature distribution of the waveform difference feature vector. As a result, the waveform difference feature vector converges poorly to the predetermined class labels when classified by the classifier, which affects the training speed of the classifier and the accuracy of the classification result.
Therefore, in the technical scheme of the application, geometric constraint re-parameterization of the positive-definite normed space is performed on the waveform difference feature vector V, which is specifically expressed as:

v_i′ = ( ‖V‖₂² / ‖V·Vᵀ‖_F ) · v_i · exp( −(v_i − μ)² / σ )

wherein μ and σ are the mean and variance of the feature set {v_i ∈ V}, ‖·‖₂² represents the square of the two-norm of a vector, ‖·‖_F represents the Frobenius norm of a matrix, v_i and v_i′ are respectively the feature values of the i-th position of the waveform difference feature vector before and after correction, and V is in the form of a row vector.
Here, the geometric constraint re-parameterization of the positive-definite normed space of the waveform difference feature vector V may be based on the projection modulo-length relation of the Bessel inequality: by projecting the square of the vector norm, expressed in the form of an inner product, within the associated set space of the vector itself, the distribution set of the vector obtains a modulo-length constraint within the geometric metric subspace of the positive-definite normed space, so that the distribution space is re-parameterized to a bounded positive-definite normed space having a closed subspace, based on the geometric constraints of the feature distribution. In this way, the convergence of the overall feature distribution of the waveform difference feature vector under the predetermined class label is improved, which improves the training speed of the classifier and the accuracy of the classification result.
Fig. 5 is an application scenario diagram of sub-step S300 in the method for time synchronization of high-definition map acquisition data based on signal closed loop according to an embodiment of the present application. As shown in fig. 5, in this application scenario, first, a PPS signal (for example, D1 illustrated in fig. 5) received by a system-in-chip and a frame synchronization signal (for example, D2 illustrated in fig. 5) transmitted by a camera module are input to a server (for example, S illustrated in fig. 5) in which a signal synchronization control algorithm is deployed, wherein the server can process the PPS signal and the frame synchronization signal using the signal synchronization control algorithm to obtain a classification result for indicating whether synchronization accuracy meets a predetermined standard.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Fig. 6 is a flowchart of sub-step S300 in the method for time synchronization of high-definition map acquisition data based on signal closed loop according to an embodiment of the present application. As shown in fig. 6, in a step S300 in the method for time synchronization of high-definition map acquisition data based on signal closed loop according to an embodiment of the present application, the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard, and includes the steps of: s310, passing the PPS signal through a first convolution neural network model serving as a filter to obtain a waveform characteristic vector of the PPS signal; s320, the frame synchronization signal passes through a second convolution neural network model serving as a filter to obtain a waveform characteristic vector of the frame synchronization signal; s330, calculating waveform difference characteristic vectors between the waveform characteristic vectors of the PPS signals and the waveform characteristic vectors of the frame synchronous signals; s340, performing feature distribution constraint on the waveform difference feature vector to obtain a constrained waveform difference feature vector; and S350, passing the constrained waveform difference feature vector through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the synchronization precision meets a preset standard.
Fig. 7 is a schematic structural diagram of sub-step S300 in the method for time synchronization of high-definition map acquisition data based on signal closed loop according to an embodiment of the present application. As shown in fig. 7, in the network architecture, first, the PPS signal is passed through a first convolutional neural network model as a filter to obtain a PPS signal waveform feature vector; then, the frame synchronizing signal passes through a second convolution neural network model serving as a filter to obtain a waveform characteristic vector of the frame synchronizing signal; then, calculating waveform difference characteristic vectors between the PPS signal waveform characteristic vectors and the frame synchronization signal waveform characteristic vectors; then, carrying out feature distribution constraint on the waveform difference feature vector to obtain a constrained waveform difference feature vector; and finally, the constrained waveform difference feature vector is passed through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the synchronization precision meets a preset standard.
The convolutional neural network (Convolutional Neural Network, CNN) is an artificial neural network and has wide application in the fields of image recognition and the like. The convolutional neural network may include an input layer, a hidden layer, and an output layer, where the hidden layer may include a convolutional layer, a pooling layer, an activation layer, a full connection layer, etc., where the previous layer performs a corresponding operation according to input data, outputs an operation result to the next layer, and obtains a final result after the input initial data is subjected to a multi-layer operation. The convolutional neural network model has excellent performance in the aspect of image local feature extraction by taking a convolutional kernel as a feature filtering factor, and has stronger feature extraction generalization capability and fitting capability compared with the traditional image feature extraction algorithm based on statistics or feature engineering.
In the technical scheme of the application, a convolutional neural network model with excellent performance in the field of image feature extraction is firstly used as a feature filter to capture local waveform feature vectors of the PPS signal and the frame synchronization signal in the image field.
More specifically, in step S310, the PPS signal is passed through a first convolutional neural network model as a filter to obtain a PPS signal waveform feature vector.
Accordingly, in one specific example, passing the PPS signal through a first convolutional neural network model serving as a filter to obtain a PPS signal waveform feature vector includes: using each layer of the first convolutional neural network model serving as the filter to perform, in the forward pass of layers, two-dimensional convolution processing, feature-matrix-based mean pooling processing and nonlinear activation processing on the input data, so that the last layer of the first convolutional neural network model outputs the PPS signal waveform feature vector, wherein the input of the first layer of the first convolutional neural network model is the PPS signal.
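The per-layer operations described above (two-dimensional convolution, mean pooling, nonlinear activation) can be sketched for a single layer as follows; the kernel values and pooling size are illustrative assumptions, and a real model would stack many such layers with learned kernels.

```python
import numpy as np

def conv_pool_activate(x, kernel, pool=2):
    """One filter layer as a sketch: valid 2-D convolution, then
    non-overlapping mean pooling, then ReLU nonlinear activation."""
    kh, kw = kernel.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):          # valid 2-D convolution
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    # mean pooling over non-overlapping pool x pool windows
    H2, W2 = out.shape[0] // pool, out.shape[1] // pool
    pooled = out[:H2 * pool, :W2 * pool].reshape(H2, pool, W2, pool).mean(axis=(1, 3))
    return np.maximum(pooled, 0.0)         # nonlinear activation (ReLU)
```

Applying such a layer repeatedly and flattening the final feature map would yield the waveform feature vector referred to in the text.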
More specifically, in step S320, the frame synchronization signal is passed through a second convolutional neural network model as a filter to obtain a frame synchronization signal waveform feature vector.
Accordingly, in one specific example, passing the frame synchronization signal through a second convolutional neural network model serving as a filter to obtain a frame synchronization signal waveform feature vector includes: using each layer of the second convolutional neural network model serving as the filter to perform, in the forward pass of layers, two-dimensional convolution processing, feature-matrix-based mean pooling processing and nonlinear activation processing on the input data, so that the last layer of the second convolutional neural network model outputs the frame synchronization signal waveform feature vector, wherein the input of the first layer of the second convolutional neural network model is the frame synchronization signal.
Accordingly, in a specific example, the first convolutional neural network model serving as a filter and/or the second convolutional neural network model serving as a filter is a deep residual network model.
More specifically, in step S330, a waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector is calculated. That is, the synchronization difference between the PPS signal and the frame synchronization signal is represented in high-dimensional waveform feature space by the waveform difference feature vector between the two waveform feature vectors. For example, the waveform difference feature vector may be obtained by calculating the position-wise difference between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector.
Accordingly, in one specific example, calculating a waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector includes: calculating the waveform difference feature vector according to the following formula:

V = V₁ ⊖ V₂

wherein V₁ is the PPS signal waveform feature vector, V₂ is the frame synchronization signal waveform feature vector, V is the waveform difference feature vector, and ⊖ represents position-wise subtraction.
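The position-wise subtraction itself is elementary; a toy example with assumed feature values:

```python
import numpy as np

v1 = np.array([0.9, 0.2, 0.7, 0.4])  # PPS waveform feature vector (toy values)
v2 = np.array([0.8, 0.5, 0.7, 0.1])  # frame-sync waveform feature vector (toy values)
v = v1 - v2                          # position-wise difference feature vector
```

Each entry of v measures the feature-space discrepancy between the two synchronization signals at one position.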
More specifically, in step S340, feature distribution constraint is performed on the waveform difference feature vector to obtain a constrained waveform difference feature vector. Here, the waveform image semantic difference between the PPS signal and the frame synchronization signal in the image source domain may be amplified in the feature domain by the feature extraction of the first and second convolutional neural network models serving as filters. Although this is advantageous for the waveform difference feature vector to express the difference features between the PPS signal and the frame synchronization signal, the position-wise difference computation may also cause discretization of the overall feature distribution of the waveform difference feature vector, so that the waveform difference feature vector converges poorly to the predetermined class labels when classified by the classifier, which affects the training speed of the classifier and the accuracy of the classification result. Therefore, in the technical solution of the present application, geometric constraint re-parameterization of the positive-definite normed space is performed on the waveform difference feature vector V.
Accordingly, in one specific example, performing feature distribution constraint on the waveform difference feature vector to obtain a constrained waveform difference feature vector includes: performing feature distribution constraint on the waveform difference feature vector according to the following formula to obtain the constrained waveform difference feature vector:

v_i′ = ( ‖V‖₂² / ‖V·Vᵀ‖_F ) · v_i · exp( −(v_i − μ)² / σ )

wherein V represents the waveform difference feature vector and is in the form of a row vector, Vᵀ represents the transpose of the waveform difference feature vector, ‖·‖₂² represents the square of the two-norm of a vector, ‖·‖_F represents the Frobenius norm of a matrix, v_i and v_i′ are respectively the feature values of the i-th position of the waveform difference feature vector before and after correction, μ and σ are the mean and variance of the feature set {v_i ∈ V}, and exp(·) represents calculation of the natural exponential function value raised to the power of the value.
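A minimal NumPy sketch of this position-wise correction, assuming the formula scales each feature value by a Gaussian-style factor built from the vector's mean and variance together with the norm ratio; the function name is illustrative and the input is assumed non-constant so that the variance is nonzero.

```python
import numpy as np

def reparameterize(v):
    """Geometric-constraint re-parameterization of a row feature vector v
    (illustrative sketch of the correction formula described above;
    assumes v is non-constant so v.var() > 0)."""
    mu, sigma = v.mean(), v.var()
    norm_sq = np.dot(v, v)                 # squared two-norm of V
    frob = np.linalg.norm(np.outer(v, v))  # Frobenius norm of V.T @ V
    scale = norm_sq / frob
    return scale * v * np.exp(-((v - mu) ** 2) / sigma)
```

Features near the mean are kept almost unchanged while outlying positions are damped, which is one way the "discretization" of the difference vector's distribution described above could be counteracted.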
Here, the geometric constraint re-parameterization of the positive-definite normed space of the waveform difference feature vector V may be based on the projection modulo-length relation of the Bessel inequality: by projecting the square of the vector norm, expressed in the form of an inner product, within the associated set space of the vector itself, the distribution set of the vector obtains a modulo-length constraint within the geometric metric subspace of the positive-definite normed space, so that the distribution space is re-parameterized to a bounded positive-definite normed space having a closed subspace, based on the geometric constraints of the feature distribution. Therefore, the convergence of the overall feature distribution of the waveform difference feature vector V under the predetermined class label is improved, and the training speed of the classifier and the accuracy of the classification result are improved.
More specifically, in step S350, the constrained waveform difference feature vector is passed through a classifier to obtain a classification result indicating whether or not the synchronization accuracy meets a predetermined criterion. That is, the class probability labels to which the waveform difference feature vectors belong are determined by the classifier to determine whether synchronization between the synchronization signals is accurate within a predetermined standard range.
The role of the classifier is to learn classification rules from given training data with known classes, and then to classify (or predict) unknown data. Logistic regression, SVM and the like are commonly used to solve two-class classification problems. For multi-class classification problems, logistic regression or SVM can also be used, but multiple binary classifiers must then be composed into a multi-class classifier, which is error-prone and inefficient; the commonly used multi-class classification method is the Softmax classification function.
Accordingly, in one specific example, as shown in fig. 8, the constrained waveform difference feature vector is passed through a classifier to obtain a classification result, where the classification result is used to indicate whether the synchronization accuracy meets a predetermined criterion, and the method includes: s351, performing full-connection coding on the constrained waveform difference feature vector by using a plurality of full-connection layers of the classifier to obtain a coded classification feature vector; and S352, passing the coding classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
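Steps S351 and S352 — fully connected encoding followed by a Softmax head — can be sketched as follows; the toy weights in the usage are assumptions, since a real classifier's parameters are learned during training.

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(v, weights, biases):
    """Fully connected encoding (S351) followed by a Softmax head (S352).

    weights/biases: one (W, b) pair per fully connected layer; the last
    layer has 2 outputs: [meets standard, does not meet standard].
    """
    h = v
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(W @ h + b, 0.0)     # hidden FC layer + ReLU
    logits = weights[-1] @ h + biases[-1]  # final FC layer
    probs = softmax(logits)
    return int(np.argmax(probs)), probs
```

For example, with identity weight matrices and zero biases, an input of [1.0, −1.0] yields class 0 ("meets standard") with probability e/(e+1).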
In summary, the signal-closed-loop-based high-definition map acquisition data time synchronization method according to the embodiment of the application has been described, in which the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard: first, the PPS signal and the frame synchronization signal are respectively passed through a first convolutional neural network model serving as a filter and a second convolutional neural network model serving as a filter to obtain a PPS signal waveform feature vector and a frame synchronization signal waveform feature vector; then, the waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector is calculated; next, feature distribution constraint is performed on the waveform difference feature vector to obtain a constrained waveform difference feature vector; and finally, the constrained waveform difference feature vector is passed through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the synchronization precision meets a predetermined standard.
Fig. 9 is a block diagram of a signal synchronization control system 300 according to an embodiment of the present application. As shown in fig. 9, a signal synchronization control system 300 according to an embodiment of the present application includes: a first convolutional encoding module 310, configured to pass the PPS signal through a first convolutional neural network model serving as a filter to obtain a PPS signal waveform feature vector; a second convolutional encoding module 320, configured to pass the frame synchronization signal through a second convolutional neural network model serving as a filter to obtain a waveform feature vector of the frame synchronization signal; a waveform difference calculation module 330, configured to calculate a waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector; the feature distribution constraint module 340 is configured to perform feature distribution constraint on the waveform difference feature vector to obtain a constrained waveform difference feature vector; and a classification module 350, configured to pass the constrained waveform difference feature vector through a classifier to obtain a classification result, where the classification result is used to indicate whether the synchronization accuracy meets a predetermined criterion.
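As an illustration only, the five modules 310–350 can be wired end-to-end as follows; the random projections stand in for the trained convolutional feature extractors and classifier weights, and are not the patent's actual models.

```python
import numpy as np

class SignalSyncControlSystem:
    """Toy end-to-end wiring of the five modules (310-350); the feature
    extractors are untrained stand-ins (random projections), assumed
    purely for illustration."""

    def __init__(self, feat_dim=8, sig_len=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w_pps = rng.standard_normal((feat_dim, sig_len))    # module 310 stand-in
        self.w_fsync = rng.standard_normal((feat_dim, sig_len))  # module 320 stand-in
        self.w_cls = rng.standard_normal((2, feat_dim))          # module 350 head

    def run(self, pps_wave, fsync_wave):
        f1 = np.tanh(self.w_pps @ pps_wave)       # PPS waveform feature vector
        f2 = np.tanh(self.w_fsync @ fsync_wave)   # frame-sync waveform feature vector
        v = f1 - f2                               # module 330: position-wise difference
        mu, sigma = v.mean(), v.var()             # module 340: distribution constraint
        v = v * np.exp(-((v - mu) ** 2) / (sigma + 1e-9))
        logits = self.w_cls @ v                   # module 350: classifier head
        e = np.exp(logits - logits.max())
        probs = e / e.sum()
        return int(np.argmax(probs)), probs
```

With identical (here, all-zero) input waveforms the difference vector vanishes and the head outputs a uniform distribution, which is the expected degenerate behavior of an untrained stand-in rather than a statement about the patented system.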
In one example, in the signal synchronization control system 300, the first convolutional encoding module 310 is configured to: use each layer of the first convolutional neural network model serving as the filter to perform, in the forward pass of layers, two-dimensional convolution processing, feature-matrix-based mean pooling processing and nonlinear activation processing on the input data, so that the last layer of the first convolutional neural network model outputs the PPS signal waveform feature vector, wherein the input of the first layer of the first convolutional neural network model is the PPS signal.
In one example, in the signal synchronization control system 300, the second convolutional encoding module 320 is configured to: use each layer of the second convolutional neural network model serving as the filter to perform, in the forward pass of layers, two-dimensional convolution processing, feature-matrix-based mean pooling processing and nonlinear activation processing on the input data, so that the last layer of the second convolutional neural network model outputs the frame synchronization signal waveform feature vector, wherein the input of the first layer of the second convolutional neural network model is the frame synchronization signal.
In one example, in the signal synchronization control system 300, the first convolutional neural network model serving as a filter and/or the second convolutional neural network model serving as a filter is a deep residual network model.
In one example, in the signal synchronization control system 300, the waveform difference calculation module 330 is configured to: calculating the waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector in the following formula; wherein, the formula is:
V = V₁ ⊖ V₂

wherein V₁ is the PPS signal waveform feature vector, V₂ is the frame synchronization signal waveform feature vector, V is the waveform difference feature vector, and ⊖ represents position-wise subtraction.
In one example, in the signal synchronization control system 300, the feature distribution constraint module 340 is configured to: performing feature distribution constraint on the waveform difference feature vector by using the following formula to obtain the constrained waveform difference feature vector; wherein, the formula is:
v_i′ = ( ‖V‖₂² / ‖V·Vᵀ‖_F ) · v_i · exp( −(v_i − μ)² / σ )

wherein V represents the waveform difference feature vector and is in the form of a row vector, Vᵀ represents the transpose of the waveform difference feature vector, ‖·‖₂² represents the square of the two-norm of a vector, ‖·‖_F represents the Frobenius norm of a matrix, v_i and v_i′ are respectively the feature values of the i-th position of the waveform difference feature vector before and after correction, μ and σ are the mean and variance of the feature set {v_i ∈ V}, and exp(·) represents calculation of the natural exponential function value raised to the power of the value.
In one example, in the signal synchronization control system 300, the classification module 350 is configured to: performing full-connection coding on the constrained waveform difference feature vector by using a plurality of full-connection layers of the classifier to obtain a coded classification feature vector; and passing the coding classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the above-described signal synchronization control system 300 have been described in detail in the above description of the sub-step S300 in the signal closed-loop-based high-definition map acquisition data time synchronization method of fig. 6 to 8, and thus, repetitive descriptions thereof will be omitted.
As described above, the signal synchronization control system 300 according to the embodiment of the present application may be implemented in various wireless terminals, for example, a server or the like having a signal synchronization control algorithm. In one example, the signal synchronization control system 300 according to embodiments of the present application may be integrated into a wireless terminal as a software module and/or hardware module. For example, the signal synchronization control system 300 may be a software module in the operating system of the wireless terminal or may be an application developed for the wireless terminal; of course, the signal synchronization control system 300 may also be one of a plurality of hardware modules of the wireless terminal.
Alternatively, in another example, the signal synchronization control system 300 and the wireless terminal may be separate devices, and the signal synchronization control system 300 may be connected to the wireless terminal through a wired and/or wireless network and transmit interactive information in an agreed data format.
According to another aspect of the present application, there is also provided a non-volatile computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a computer, can perform a method as described above.
Program portions of the technology may be considered to be "products" or "articles of manufacture" in the form of executable code and/or associated data, embodied or carried out by a computer readable medium. A tangible, persistent storage medium may include any memory or storage used by a computer, processor, or similar device or related module. Such as various semiconductor memories, tape drives, disk drives, or the like, capable of providing storage functionality for software.
All or a portion of the software may sometimes communicate over a network, such as the internet or other communication network. Such communication may load software from one computer device or processor to another. For example: a hardware platform loaded from a server or host computer of the video object detection device to a computer environment, or other computer environment implementing the system, or similar functioning system related to providing information needed for object detection. Thus, another medium capable of carrying software elements may also be used as a physical connection between local devices, such as optical, electrical, electromagnetic, etc., propagating through cable, optical cable, air, etc. Physical media used for carrier waves, such as electrical, wireless, or optical, may also be considered to be software-bearing media. Unless limited to a tangible "storage" medium, other terms used herein to refer to a computer or machine "readable medium" mean any medium that participates in the execution of any instructions by a processor.
This application uses specific words to describe embodiments of the application. Reference to "a first/second embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is associated with at least one embodiment of the present application. Thus, it should be emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various positions in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the present application may be combined as suitable.
Furthermore, those skilled in the art will appreciate that the various aspects of the invention are illustrated and described in the context of a number of patentable categories or circumstances, including any novel and useful procedures, machines, products, or materials, or any novel and useful modifications thereof. Accordingly, aspects of the present application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.) or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," module, "" engine, "" unit, "" component, "or" system. Furthermore, aspects of the present application may take the form of a computer product, comprising computer-readable program code, embodied in one or more computer-readable media.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present invention and is not to be construed as limiting thereof. Although a few exemplary embodiments of this invention have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this invention. Accordingly, all such modifications are intended to be included within the scope of this invention as defined in the following claims. It is to be understood that the foregoing is illustrative of the present invention and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The invention is defined by the claims and their equivalents.

Claims (9)

1. A high-definition map acquisition data time synchronization method based on signal closed loop is characterized by comprising the following steps:
the positioning module transmits PPS signals to the micro control unit, the inertial navigation module and the system-level chip at the same time, wherein after the system-level chip receives the PPS signals, the camera module is started;
the camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module;
the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard;
after determining that the synchronization precision meets a preset standard, the micro control unit transmits positioning information acquired by the positioning module and pose signals acquired by the inertial navigation module to the system-in-chip; and
and the system-level chip transmits the pose signal, the positioning information and the image data acquired by the camera module to a background server.
2. The method according to claim 1, wherein the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether synchronization accuracy meets a predetermined criterion, comprising:
Passing the PPS signal through a first convolutional neural network model serving as a filter to obtain a PPS signal waveform characteristic vector;
the frame synchronization signal passes through a second convolution neural network model serving as a filter to obtain a waveform characteristic vector of the frame synchronization signal;
calculating a waveform difference characteristic vector between the PPS signal waveform characteristic vector and the frame synchronization signal waveform characteristic vector;
performing feature distribution constraint on the waveform difference feature vector to obtain a constrained waveform difference feature vector; and
and the constrained waveform difference feature vector is passed through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the synchronization precision meets a preset standard.
3. The signal-closed-loop-based high-definition map acquisition data time synchronization method according to claim 2, wherein passing the PPS signal through the first convolutional neural network model serving as a filter to obtain the PPS signal waveform feature vector comprises:
using each layer of the first convolutional neural network model to perform, in the forward pass of the layers, two-dimensional convolution processing, mean pooling over the feature matrix, and nonlinear activation on the input data, so that the last layer of the first convolutional neural network model outputs the PPS signal waveform feature vector, wherein the input to the first layer of the first convolutional neural network model is the PPS signal.
4. The signal-closed-loop-based high-definition map acquisition data time synchronization method according to claim 3, wherein passing the frame synchronization signal through the second convolutional neural network model serving as a filter to obtain the frame synchronization signal waveform feature vector comprises:
using each layer of the second convolutional neural network model to perform, in the forward pass of the layers, two-dimensional convolution processing, mean pooling over the feature matrix, and nonlinear activation on the input data, so that the last layer of the second convolutional neural network model outputs the frame synchronization signal waveform feature vector, wherein the input to the first layer of the second convolutional neural network model is the frame synchronization signal.
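Claims 3 and 4 recite the same per-layer forward pass: two-dimensional convolution, mean pooling over the feature matrix, then nonlinear activation. A minimal single-layer sketch of that sequence, with an invented 16x16 input map and a 3x3 kernel (the patent does not specify sizes):

```python
import numpy as np

def conv2d_valid(x, k):
    # Naive "valid" 2-D convolution: no padding, stride 1.
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def layer_forward(x, kernel, pool=2):
    # One layer as recited: 2-D convolution -> mean pooling over the
    # feature matrix -> nonlinear activation (ReLU).
    fm = conv2d_valid(x, kernel)
    H, W = fm.shape
    H, W = H - H % pool, W - W % pool         # trim so pooling divides evenly
    fm = fm[:H, :W].reshape(H // pool, pool, W // pool, pool).mean(axis=(1, 3))
    return np.maximum(fm, 0.0)

rng = np.random.default_rng(1)
x = rng.normal(size=(16, 16))   # hypothetical signal arranged as a 2-D map
k = rng.normal(size=(3, 3))
out = layer_forward(x, k)
print(out.shape)  # (7, 7)
```

Stacking several such layers and flattening the final feature map would yield the waveform feature vector the claims describe.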
5. The signal-closed-loop-based high-definition map acquisition data time synchronization method according to claim 4, wherein the first convolutional neural network model serving as a filter and/or the second convolutional neural network model serving as a filter is a deep residual network (ResNet) model.
6. The method according to claim 5, wherein calculating the waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector comprises:
calculating the waveform difference feature vector according to the following formula:

$V_c = V_1 \ominus V_2$

wherein $V_1$ is the PPS signal waveform feature vector, $V_2$ is the frame synchronization signal waveform feature vector, $V_c$ is the waveform difference feature vector, and $\ominus$ denotes position-wise subtraction.
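The position-wise difference of claim 6 is simply element-wise subtraction of the two feature vectors; the values below are made up for illustration:

```python
import numpy as np

v_pps = np.array([0.8, 0.1, 0.4, 0.6])  # hypothetical PPS waveform features
v_frm = np.array([0.7, 0.1, 0.5, 0.6])  # hypothetical frame-sync features

v_diff = v_pps - v_frm                   # position-wise difference, claim 6
```

When the two signals are well aligned, the feature vectors are close and the difference vector is near zero in every position.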
7. The signal-closed-loop-based high-definition map acquisition data time synchronization method according to claim 6, wherein performing feature distribution constraint on the waveform difference feature vector to obtain the constrained waveform difference feature vector comprises:
performing feature distribution constraint on the waveform difference feature vector according to the following formula to obtain the constrained waveform difference feature vector;

(formula rendered as an image in the source and not reproduced here)

wherein $V$ denotes the waveform difference feature vector in the form of a row vector, $V^T$ denotes the transpose of the waveform difference feature vector, $\|\cdot\|_2^2$ denotes the squared two-norm of a vector, $\|\cdot\|_F$ denotes the Frobenius norm of a matrix, $v_i$ and $v_i'$ are respectively the feature values at the $i$-th position of the waveform difference feature vector before and after correction, $\mu$ and $\sigma$ are the mean and variance of the feature set $\{v_i\}$, and $\exp(\cdot)$ denotes raising the natural exponential function to the power of the given value.
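The exact constraint formula of claim 7 is only available as an image in the source, but the quantities it names can all be computed directly. A short sketch, with an invented feature vector, computing the mean, variance, squared two-norm, and Frobenius norm that the claim's glossary defines:

```python
import numpy as np

v = np.array([0.12, -0.05, 0.30, 0.08])  # hypothetical waveform difference features

mu = v.mean()                        # mean of the feature set {v_i}
sigma2 = v.var()                     # variance of the feature set {v_i}
two_norm_sq = np.sum(v ** 2)         # squared two-norm  ||V||_2^2
gram = np.outer(v, v)                # V^T V as a matrix (row-vector convention)
frob = np.linalg.norm(gram, "fro")   # Frobenius norm  ||V^T V||_F
```

One useful observation: for any vector, the Frobenius norm of its outer product with itself equals its squared two-norm, so the last two quantities always coincide numerically.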
8. The signal-closed-loop-based high-definition map acquisition data time synchronization method according to claim 7, wherein passing the constrained waveform difference feature vector through a classifier to obtain a classification result, the classification result indicating whether the synchronization accuracy meets the predetermined standard, comprises:
performing full-connection encoding on the constrained waveform difference feature vector using a plurality of fully connected layers of the classifier to obtain an encoded classification feature vector; and
passing the encoded classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
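The classifier of claim 8 is a standard fully-connected head with a Softmax output. A minimal sketch with random (untrained) weights, where all layer sizes are assumptions of the illustration:

```python
import numpy as np

def fully_connected(x, W, b):
    return W @ x + b

def softmax(z):
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(2)
v = rng.normal(size=6)                      # constrained waveform difference vector
W1, b1 = rng.normal(size=(4, 6)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

h = np.tanh(fully_connected(v, W1, b1))     # full-connection encoding
probs = softmax(fully_connected(h, W2, b2)) # two classes: meets / fails standard
label = int(np.argmax(probs))
```

In the patented method the weights would be learned from labeled synchronized/unsynchronized examples; here they only demonstrate the data flow.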
9. A high-definition map acquisition data time synchronization system based on a signal closed loop, characterized by comprising: a positioning module, a micro control unit, an inertial navigation module, a system-on-chip, and a camera module;
wherein the positioning module transmits a PPS signal to the micro control unit, the inertial navigation module, and the system-on-chip simultaneously, and the camera module is started after the system-on-chip receives the PPS signal;
the camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module;
the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard;
after determining that the synchronization accuracy meets the predetermined standard, the micro control unit transmits the positioning information acquired by the positioning module and the pose signal acquired by the inertial navigation module to the system-on-chip; and
the system-on-chip transmits the pose signal, the positioning information, and the image data acquired by the camera module to a background server.
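At the system level, the check the micro control unit performs amounts to comparing the arrival times of PPS pulses and frame-sync edges. A simplified timestamp-based sketch (the microsecond values and the 100 µs tolerance are invented; the patent instead uses the neural-network pipeline of claims 2-8):

```python
# Hypothetical MCU-side check: compare PPS rising-edge timestamps with
# frame-sync timestamps and flag drift beyond a tolerance.
TOLERANCE_US = 100  # assumed tolerance, not specified by the patent

def sync_ok(pps_ts_us, frame_ts_us, tol_us=TOLERANCE_US):
    # Pair each PPS pulse with its frame-sync edge and measure the offset.
    offsets = [abs(f - p) for p, f in zip(pps_ts_us, frame_ts_us)]
    return max(offsets) <= tol_us, offsets

pps = [0, 1_000_000, 2_000_000]        # one pulse per second, in microseconds
frames = [30, 1_000_040, 2_000_025]    # frame-sync edges shortly after each pulse

ok, offsets = sync_ok(pps, frames)
```

Only after such a check passes would the pose, positioning, and image data be forwarded to the background server, closing the signal loop.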
CN202310212693.XA 2023-03-08 2023-03-08 High-definition map acquisition data time synchronization method and system based on signal closed loop Active CN116068585B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310212693.XA CN116068585B (en) 2023-03-08 2023-03-08 High-definition map acquisition data time synchronization method and system based on signal closed loop

Publications (2)

Publication Number Publication Date
CN116068585A CN116068585A (en) 2023-05-05
CN116068585B true CN116068585B (en) 2023-06-09

Family

ID=86178626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310212693.XA Active CN116068585B (en) 2023-03-08 2023-03-08 High-definition map acquisition data time synchronization method and system based on signal closed loop

Country Status (1)

Country Link
CN (1) CN116068585B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103957344A (en) * 2014-04-28 2014-07-30 广州杰赛科技股份有限公司 Video synchronization method and system for multiple camera devices
CN111182226A (en) * 2019-07-16 2020-05-19 北京欧比邻科技有限公司 Method, device, medium and electronic equipment for synchronous working of multiple cameras
CN111860604A (en) * 2020-06-24 2020-10-30 国汽(北京)智能网联汽车研究院有限公司 Data fusion method, system and computer storage medium
CN115529096A (en) * 2021-06-24 2022-12-27 高德软件有限公司 Timestamp synchronization method, data acquisition platform and chip

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5846165B2 (en) * 2013-07-11 2016-01-20 カシオ計算機株式会社 Feature amount extraction apparatus, method, and program
CN113110160B (en) * 2021-04-09 2023-03-21 黑芝麻智能科技(上海)有限公司 Time synchronization method and device of domain controller, domain controller and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Timestamp Synchronization Methods for UAV Aerial Survey Video; Jiang Zhidong et al.; Instrument Technique, No. 9, pp. 12-14, 27 *

Also Published As

Publication number Publication date
CN116068585A (en) 2023-05-05

Similar Documents

Publication Publication Date Title
CN108229442B (en) Method for rapidly and stably detecting human face in image sequence based on MS-KCF
CN109564618B (en) Method and system for facial image analysis
CN111160297A (en) Pedestrian re-identification method and device based on residual attention mechanism space-time combined model
US9070041B2 (en) Image processing apparatus and image processing method with calculation of variance for composited partial features
US20200125947A1 (en) Method and apparatus for quantizing parameters of neural network
US11715190B2 (en) Inspection system, image discrimination system, discrimination system, discriminator generation system, and learning data generation device
CN112184508A (en) Student model training method and device for image processing
CN111902826A (en) Positioning, mapping and network training
CN112784778B (en) Method, apparatus, device and medium for generating model and identifying age and sex
CN113313053B (en) Image processing method, device, apparatus, medium, and program product
JP2022554302A (en) Systems, methods and media for manufacturing processes
JP2022078310A (en) Image classification model generation method, device, electronic apparatus, storage medium, computer program, roadside device and cloud control platform
CN116992226A (en) Water pump motor fault detection method and system
WO2022169681A1 (en) Learning orthogonal factorization in gan latent space
CN117237359B (en) Conveyor belt tearing detection method and device, storage medium and electronic equipment
CN116068585B (en) High-definition map acquisition data time synchronization method and system based on signal closed loop
CN112669452B (en) Object positioning method based on convolutional neural network multi-branch structure
CN115994558A (en) Pre-training method, device, equipment and storage medium of medical image coding network
CN116563291B (en) SMT intelligent error-proofing feeding detector
CN111460909A (en) Vision-based goods location management method and device
Gupta et al. VehiPose: a multi-scale framework for vehicle pose estimation
CN116048682A (en) Terminal system interface layout comparison method and electronic equipment
CN111931767B (en) Multi-model target detection method, device and system based on picture informativeness and storage medium
CN116593890B (en) Permanent magnet synchronous motor rotor and forming detection method thereof
US20230401427A1 (en) Training neural network with budding ensemble architecture based on diversity loss

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant