CN116068585B - High-definition map acquisition data time synchronization method and system based on signal closed loop - Google Patents
High-definition map acquisition data time synchronization method and system based on signal closed loop
- Publication number: CN116068585B (application CN202310212693.XA)
- Authority
- CN
- China
- Prior art keywords
- signal
- feature vector
- waveform
- pps
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G01C21/165—Navigation by dead reckoning, integrating acceleration or speed (inertial navigation) combined with non-inertial navigation instruments
- G01C21/1656—Inertial navigation combined with non-inertial navigation instruments, with passive imaging devices, e.g. cameras
- G01S19/03—Satellite radio beacon positioning systems (e.g. GPS, GLONASS, GALILEO): cooperating elements; interaction or communication between different cooperating elements or between cooperating elements and receivers
- G01S19/47—Determining position by combining measurements from a satellite radio beacon positioning system with a supplementary inertial measurement, e.g. tightly coupled inertial
- G05B19/042—Programme control other than numerical control, i.e. in sequence controllers or logic controllers, using digital processors
Abstract
A signal-closed-loop-based method and system for time synchronization of high-definition map acquisition data are disclosed. The method comprises the following steps: the positioning module transmits a PPS signal simultaneously to the micro control unit, the inertial navigation module, and the system-on-chip, and the camera module is started after the system-on-chip receives the PPS signal; the camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module; the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard; after determining that the synchronization accuracy meets the predetermined standard, the micro control unit transmits the positioning information acquired by the positioning module and the pose signal acquired by the inertial navigation module to the system-on-chip; and the system-on-chip transmits the pose signal, the positioning information, and the image data acquired by the camera module to a background server. In this way, map positioning accuracy can be improved.
Description
Technical Field
The application relates to the field of signal synchronization, and in particular relates to a high-definition map acquisition data time synchronization method and system based on signal closed loop.
Background
A high-definition map acquisition product generally comprises a positioning module (RTK GNSS), an inertial navigation module (IMU), a camera module, an MCU, an SoC, and the like. Camera photos, positioning poses, IMU measurement sequences, and other acquired values must be synchronously aligned to the positioning module's GPS timestamp, with a time error within 10 ms (the smaller, the better). Existing high-definition map acquisition products suffer from large time delays and from map data being out of sync with positioning information, so that map positioning accuracy is insufficient.
Therefore, an optimized high definition map acquisition data time synchronization scheme is desired.
Disclosure of Invention
The present application has been made in order to solve the above technical problems. The embodiments of the present application provide a signal-closed-loop-based method and system for time synchronization of high-definition map acquisition data. The method comprises the following steps: the positioning module transmits a PPS signal simultaneously to the micro control unit, the inertial navigation module, and the system-on-chip, and the camera module is started after the system-on-chip receives the PPS signal; the camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module; the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard; after determining that the synchronization accuracy meets the predetermined standard, the micro control unit transmits the positioning information acquired by the positioning module and the pose signal acquired by the inertial navigation module to the system-on-chip; and the system-on-chip transmits the pose signal, the positioning information, and the image data acquired by the camera module to a background server. In this way, map positioning accuracy can be improved.
According to one aspect of the present application, there is provided a signal-closed-loop-based method for time synchronization of high-definition map acquisition data, comprising: the positioning module transmits a PPS signal simultaneously to the micro control unit, the inertial navigation module, and the system-on-chip, and the camera module is started after the system-on-chip receives the PPS signal; the camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module; the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard; after determining that the synchronization accuracy meets the predetermined standard, the micro control unit transmits the positioning information acquired by the positioning module and the pose signal acquired by the inertial navigation module to the system-on-chip; and the system-on-chip transmits the pose signal, the positioning information, and the image data acquired by the camera module to a background server.
In the above signal-closed-loop-based method for time synchronization of high-definition map acquisition data, the micro control unit analyzing the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard comprises: passing the PPS signal through a first convolutional neural network model serving as a filter to obtain a PPS signal waveform feature vector; passing the frame synchronization signal through a second convolutional neural network model serving as a filter to obtain a frame synchronization signal waveform feature vector; calculating a waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector; performing feature distribution constraint on the waveform difference feature vector to obtain a constrained waveform difference feature vector; and passing the constrained waveform difference feature vector through a classifier to obtain a classification result indicating whether the synchronization accuracy meets the predetermined standard.
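As a hedged illustration of these five analysis stages, the NumPy sketch below chains toy stand-ins for each one: untrained random 1-D "filters" in place of the trained convolutional networks, simple standardization in place of the patent's exact distribution constraint, and a single linear layer with softmax as the classifier. All weights, shapes, and function names are assumptions for illustration, not the patent's trained models.

```python
import numpy as np

def conv1d_features(signal, kernels):
    """Toy stand-in for the 'convolutional neural network as filter':
    valid-mode 1-D convolutions, each followed by ReLU and global mean pooling."""
    feats = []
    for k in kernels:
        resp = np.convolve(signal, k, mode="valid")
        feats.append(np.maximum(resp, 0.0).mean())   # ReLU + mean pooling
    return np.array(feats)

def synchronization_check(pps, fsync, kernels, w, b):
    """End-to-end sketch of the five-step analysis on two 1-D signals."""
    v_pps = conv1d_features(pps, kernels)      # PPS signal waveform feature vector
    v_fs = conv1d_features(fsync, kernels)     # frame-sync waveform feature vector
    v_diff = v_pps - v_fs                      # position-wise difference
    # stand-in "feature distribution constraint": zero mean / unit variance
    v_c = (v_diff - v_diff.mean()) / (v_diff.std() + 1e-8)
    logits = w @ v_c + b                       # single fully connected layer
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                 # softmax over {meets, fails}
```

A two-element probability vector comes out; thresholding its first component would play the role of the "meets the predetermined standard" decision.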
In the above signal-closed-loop-based method for time synchronization of high-definition map acquisition data, passing the PPS signal through the first convolutional neural network model serving as a filter to obtain the PPS signal waveform feature vector comprises: in the forward pass, each layer of the first convolutional neural network model performs two-dimensional convolution, feature-matrix-based mean pooling, and nonlinear activation on its input data, with the first layer taking the PPS signal as input and the last layer outputting the PPS signal waveform feature vector.
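One such layer (two-dimensional convolution, then mean pooling, then a nonlinear activation) can be sketched as follows; the kernel, pooling window, and choice of tanh as the nonlinearity are illustrative assumptions, since the patent does not fix them.

```python
import numpy as np

def conv_layer_forward(x, kernel, pool=2):
    """One layer of the filter network: 2-D valid convolution over x,
    non-overlapping mean pooling with a pool x pool window, then tanh."""
    kh, kw = kernel.shape
    H, W = x.shape
    conv = np.array([[np.sum(x[i:i + kh, j:j + kw] * kernel)
                      for j in range(W - kw + 1)]
                     for i in range(H - kh + 1)])        # 2-D convolution
    ph, pw = conv.shape[0] // pool, conv.shape[1] // pool
    pooled = conv[:ph * pool, :pw * pool].reshape(ph, pool, pw, pool).mean(axis=(1, 3))
    return np.tanh(pooled)                               # nonlinear activation
```

Stacking several such calls, each feeding the next, mirrors the "forward transfer of layers" described above.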
In the above signal-closed-loop-based method for time synchronization of high-definition map acquisition data, passing the frame synchronization signal through the second convolutional neural network model serving as a filter to obtain the frame synchronization signal waveform feature vector comprises: in the forward pass, each layer of the second convolutional neural network model performs two-dimensional convolution, feature-matrix-based mean pooling, and nonlinear activation on its input data, with the first layer taking the frame synchronization signal as input and the last layer outputting the frame synchronization signal waveform feature vector.
In the above signal-closed-loop-based method for time synchronization of high-definition map acquisition data, the first convolutional neural network model serving as a filter and/or the second convolutional neural network model serving as a filter is a deep residual network model.
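The defining feature of a deep residual network is the identity skip connection; a minimal sketch of one residual unit (with linear transforms standing in for convolutions, purely for illustration) is:

```python
import numpy as np

def residual_block(x, w1, w2):
    """One basic residual unit: output = ReLU(x + W2 . ReLU(W1 . x)).
    A deep residual network stacks many such blocks; the identity skip
    lets gradients flow past the transform layers, easing the training
    of very deep filters."""
    h = np.maximum(w1 @ x, 0.0)          # inner transform + ReLU
    return np.maximum(x + w2 @ h, 0.0)   # skip connection: F(x) + x
```

With all transform weights at zero the block reduces to the identity on non-negative inputs, which is exactly why residual stacks are easy to deepen.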
In the above signal-closed-loop-based method for time synchronization of high-definition map acquisition data, calculating the waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector comprises calculating it according to the following formula:

$V_c = V_1 \ominus V_2$

where $V_1$ is the PPS signal waveform feature vector, $V_2$ is the frame synchronization signal waveform feature vector, $V_c$ is the waveform difference feature vector, and $\ominus$ denotes position-wise subtraction.
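The position-wise difference operation can be sketched directly; the sample values below are illustrative only.

```python
import numpy as np

def waveform_difference(v_pps, v_fsync):
    """Position-wise difference between the two waveform feature vectors:
    element i of the result is v_pps[i] - v_fsync[i]."""
    v_pps = np.asarray(v_pps, dtype=float)
    v_fsync = np.asarray(v_fsync, dtype=float)
    assert v_pps.shape == v_fsync.shape, "feature vectors must share a shape"
    return v_pps - v_fsync
```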
In the above signal-closed-loop-based method for time synchronization of high-definition map acquisition data, performing feature distribution constraint on the waveform difference feature vector to obtain the constrained waveform difference feature vector comprises correcting each position of the waveform difference feature vector according to the following formula:

$v_i' = \dfrac{\|V\|_2^2}{\|V^\top V\|_F}\,\exp\!\left(-\dfrac{(v_i - \mu)^2}{2\sigma^2}\right) v_i$

where $V$ denotes the waveform difference feature vector in row-vector form, $V^\top$ the transpose vector of the waveform difference feature vector, $\|\cdot\|_2^2$ the square of the two-norm of a vector, $\|\cdot\|_F$ the Frobenius norm of a matrix, $v_i$ and $v_i'$ the feature values of the $i$-th position of the waveform difference feature vector before and after correction, $\mu$ and $\sigma^2$ the mean and variance of the feature set $\{v_i\}$, and $\exp(\cdot)$ the natural exponential function.
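A hedged sketch of such a constraint follows. The patent's exact formula did not survive extraction, so this only combines the ingredients its variable list names (mean, variance, exponential, vector two-norm, Frobenius norm); note that for a row vector $V$, $\|V^\top V\|_F$ equals $\|V\|_2^2$, so the norm ratio is close to 1 and the correction is dominated by the Gaussian weight.

```python
import numpy as np

def distribution_constraint(v, eps=1e-8):
    """Hedged sketch of a feature-distribution constraint: re-weight each
    position by a Gaussian density built from the feature set's own mean
    and variance, scaled by ||V||_2^2 / ||V^T V||_F (approximately 1)."""
    v = np.asarray(v, dtype=float)
    mu, var = v.mean(), v.var() + eps
    gauss = np.exp(-((v - mu) ** 2) / (2.0 * var))   # per-position weight in (0, 1]
    scale = np.linalg.norm(v) ** 2 / (np.linalg.norm(np.outer(v, v)) + eps)
    return scale * gauss * v
```

The effect is to pull outlying feature values toward the bulk of the distribution, which matches the stated purpose of tightening the feature distribution before classification.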
In the above signal-closed-loop-based method for time synchronization of high-definition map acquisition data, passing the constrained waveform difference feature vector through the classifier to obtain the classification result indicating whether the synchronization accuracy meets the predetermined standard comprises: performing full-connection coding on the constrained waveform difference feature vector using a plurality of fully connected layers of the classifier to obtain a coded classification feature vector; and passing the coded classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
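The fully-connected-plus-softmax classifier stage can be sketched as below; layer sizes, the ReLU activation, and the label ordering are illustrative assumptions.

```python
import numpy as np

def classify(v, hidden_layers, w_out, b_out):
    """Full-connection coding followed by softmax: each hidden layer applies
    an affine transform and ReLU, then the output layer produces two logits
    that softmax turns into class probabilities (assumed order:
    index 0 = meets standard, index 1 = does not)."""
    h = np.asarray(v, dtype=float)
    for w, b in hidden_layers:
        h = np.maximum(w @ h + b, 0.0)        # fully connected coding layer
    logits = w_out @ h + b_out
    e = np.exp(logits - logits.max())         # max-shift for numerical stability
    return e / e.sum()                        # softmax probabilities
```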
According to another aspect of the present application, there is provided a signal-closed-loop-based high-definition map acquisition data time synchronization system, comprising a positioning module, a micro control unit, an inertial navigation module, a system-on-chip, and a camera module. The positioning module transmits a PPS signal simultaneously to the micro control unit, the inertial navigation module, and the system-on-chip, and the camera module is started after the system-on-chip receives the PPS signal; the camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module; the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard; after determining that the synchronization accuracy meets the predetermined standard, the micro control unit transmits the positioning information acquired by the positioning module and the pose signal acquired by the inertial navigation module to the system-on-chip; and the system-on-chip transmits the pose signal, the positioning information, and the image data acquired by the camera module to a background server.
Compared with the prior art, the signal-closed-loop-based method and system for time synchronization of high-definition map acquisition data provided by the present application comprise the following steps: the positioning module transmits a PPS signal simultaneously to the micro control unit, the inertial navigation module, and the system-on-chip, and the camera module is started after the system-on-chip receives the PPS signal; the camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module; the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard; after determining that the synchronization accuracy meets the predetermined standard, the micro control unit transmits the positioning information acquired by the positioning module and the pose signal acquired by the inertial navigation module to the system-on-chip; and the system-on-chip transmits the pose signal, the positioning information, and the image data acquired by the camera module to a background server. In this way, map positioning accuracy can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. The following drawings are not intended to be drawn to scale, with emphasis instead being placed upon illustrating the principles of the present application.
Fig. 1 is a block diagram of a high definition map acquisition data time synchronization system based on signal closed loop according to an embodiment of the present application.
Fig. 2 is a flowchart of a method for time synchronization of high definition map acquisition data based on signal closed loop according to an embodiment of the present application.
Fig. 3 is a functional block diagram of a high-definition map acquisition data time synchronization system based on signal closed loop according to an embodiment of the present application.
Fig. 4 is a schematic diagram of an operation process of a high-definition map acquisition data time synchronization system based on signal closed loop according to an embodiment of the application.
Fig. 5 is a schematic view of a scenario of a sub-step S300 in a high-definition map acquisition data time synchronization method based on signal closed loop according to an embodiment of the present application.
Fig. 6 is a flowchart of sub-step S300 in the method for time synchronization of high-definition map acquisition data based on signal closed loop according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of sub-step S300 in the method for time synchronization of high-definition map acquisition data based on signal closed loop according to an embodiment of the present application.
Fig. 8 is a flowchart of sub-step S350 in the method for time synchronization of high-definition map acquisition data based on signal closed loop according to an embodiment of the present application.
Fig. 9 is a block diagram of the signal synchronization control system according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some, but not all embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the present application without making any inventive effort, are also within the scope of the present application.
As used in this application and in the claims, the singular forms "a," "an," and "the" may include plural referents unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
Although the present application makes various references to certain modules in a system according to embodiments of the present application, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
Flowcharts are used in this application to describe the operations performed by systems according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed in order precisely. Rather, the various steps may be processed in reverse order or simultaneously, as desired. Also, other operations may be added to or removed from these processes.
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Accordingly, in the technical solution of the present application, the key to improving the synchronicity between the map data and the positioning information is to monitor the synchronization signals in real time, so as to ensure that the synchronization between them accurately meets the predetermined standard.
Specifically, as shown in fig. 1, in the technical solution of the present application, the signal-closed-loop-based high-definition map acquisition data time synchronization system 100 includes: the positioning module 110, the micro control unit 120, the inertial navigation module 130, the system-on-chip 140, and the camera module 150, and operates in the following manner:
1) The positioning module transmits a PPS signal simultaneously to the micro control unit, the inertial navigation module, and the system-on-chip; after the system-on-chip receives the PPS signal, the camera module is started.
2) The camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module.
3) The micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard.
4) After determining that the synchronization accuracy meets the predetermined standard, the micro control unit transmits the positioning information acquired by the positioning module and the pose signal acquired by the inertial navigation module to the system-on-chip.
5) The system-on-chip transmits the pose signal, the positioning information, and the image data acquired by the camera module to a background server.
Correspondingly, as shown in fig. 2, the technical scheme of the present application also provides a corresponding signal-closed-loop-based method for time synchronization of high-definition map acquisition data, comprising: S100, the positioning module transmits a PPS signal simultaneously to the micro control unit, the inertial navigation module, and the system-on-chip, and the camera module is started after the system-on-chip receives the PPS signal; S200, the camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module; S300, the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard; S400, after determining that the synchronization accuracy meets the predetermined standard, the micro control unit transmits the positioning information acquired by the positioning module and the pose signal acquired by the inertial navigation module to the system-on-chip; and S500, the system-on-chip transmits the pose signal, the positioning information, and the image data acquired by the camera module to a background server.
Referring to fig. 3 and fig. 4, in the technical scheme of the present application, the PPS signal of the GPS is used as the main synchronization signal, and the frame synchronization signal of the camera module is fed back to the MCU and the IMU, so as to achieve accurate synchronization of the GPS data, the IMU data, and the photo data. Specifically: the PPS signal is simultaneously output to the MCU, the IMU, and the SOC; after the SOC receives the synchronization signal, it starts the camera to capture images, and at the same time the camera's frame synchronization signal F_SYNC is fed back to the MCU and the IMU module; the MCU accurately compares the PPS synchronization signal with the F_SYNC synchronization signal and controls the time synchronization. When the offset is within the synchronization accuracy, the MCU transmits the positioning and pose information to the SOC, and the SOC repackages the image data together with the positioning and pose information and transmits them to the background server. Compared with the prior art, the technical scheme of the present application closes the loop using the PPS and F_SYNC signals and compares them accurately, thereby reducing time delay and improving data synchronization accuracy.
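The MCU-side gating described above reduces to a timestamp comparison; a minimal sketch follows, where the 10 ms bound comes from the background section and the use of seconds as the timestamp unit is an assumption.

```python
def mcu_time_sync_ok(pps_ts, fsync_ts, tolerance_s=0.010):
    """Sketch of the MCU's PPS vs F_SYNC comparison: positioning and pose
    data are forwarded to the SoC only when the measured offset between the
    two synchronization edges is within tolerance (here 10 ms)."""
    return abs(fsync_ts - pps_ts) <= tolerance_s
```

In practice the MCU would apply this check on every PPS edge before releasing the positioning and pose packet for that frame.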
Specifically, in the technical solution of the present application, the micro control unit analyzing the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard includes the following steps. First, the PPS signal is passed through a first convolutional neural network model serving as a filter to obtain a PPS signal waveform feature vector; at the same time, the frame synchronization signal is passed through a second convolutional neural network model serving as a filter to obtain a frame synchronization signal waveform feature vector. That is, in the technical solution of the present application, convolutional neural network models, which have excellent performance in image feature extraction, are used as feature filters to capture the local waveform features of the PPS signal and the frame synchronization signal.
Further, a waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector is calculated. That is, the synchronization difference between the PPS signal and the frame synchronization signal is represented in a high-dimensional waveform feature space by the waveform difference feature vector between the two waveform feature vectors. Finally, the waveform difference feature vector is passed through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the synchronization precision meets a predetermined standard. That is, the classifier determines the class probability label to which the waveform difference feature vector belongs, so as to determine whether the synchronization between the synchronization signals is accurate within the predetermined standard range.
Here, the waveform image semantic difference between the PPS signal and the frame synchronization signal in the image source domain may be amplified in the feature domain by the feature extraction of the first and second convolutional neural network models used as filters. Although this helps the waveform difference feature vector express the difference features between the PPS signal and the frame synchronization signal, the position-wise difference calculation between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector may also discretize the overall feature distribution of the waveform difference feature vector, so that the waveform difference feature vector converges poorly to the predetermined class labels of the classifier, which affects the training speed of the classifier and the accuracy of the classification result.
Therefore, in the technical scheme of the application, a geometric constraint re-parameterization of the positive-definite normed space is performed on the waveform difference feature vector $V$, specifically:

$$v_i' = \frac{v_i - \mu}{\sigma} \cdot \exp\left(-\frac{\|V\|_2^2}{\|V^\top V\|_F}\right)$$

where $\mu$ and $\sigma$ are the mean and variance of the feature set $\{v_i\}$, $\|V\|_2^2$ denotes the square of the two-norm of the vector, $\|\cdot\|_F$ denotes the Frobenius norm of the matrix, $v_i$ and $v_i'$ are the feature values of the $i$-th position of the waveform difference feature vector before and after correction respectively, and $V$ is in the form of a row vector.
Here, the geometric constraint re-parameterization of the positive-definite normed space of the waveform difference feature vector $V$ may be based on the projection modulo-length relation of the Bessel inequality: by projecting the square of the vector norm, expressed as an inner product, within the associated set space of the vector itself, the distribution set of the vector acquires a modulo-length constraint within the geometric metric subspace of the positive-definite normed space, so that the distribution space is re-parameterized to a bounded positive-definite normed space having a closed subspace based on the geometric constraint of the feature distribution. In this way, the convergence of the overall feature distribution of the waveform difference feature vector $V$ under the predetermined class labels is improved, which improves the training speed of the classifier and the accuracy of the classification result.
Fig. 5 is an application scenario diagram of sub-step S300 in the method for time synchronization of high-definition map acquisition data based on signal closed loop according to an embodiment of the present application. As shown in fig. 5, in this application scenario, first, a PPS signal (for example, D1 illustrated in fig. 5) received by a system-in-chip and a frame synchronization signal (for example, D2 illustrated in fig. 5) transmitted by a camera module are input to a server (for example, S illustrated in fig. 5) in which a signal synchronization control algorithm is deployed, wherein the server can process the PPS signal and the frame synchronization signal using the signal synchronization control algorithm to obtain a classification result for indicating whether synchronization accuracy meets a predetermined standard.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Fig. 6 is a flowchart of sub-step S300 in the method for time synchronization of high-definition map acquisition data based on signal closed loop according to an embodiment of the present application. As shown in fig. 6, in a step S300 in the method for time synchronization of high-definition map acquisition data based on signal closed loop according to an embodiment of the present application, the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard, and includes the steps of: s310, passing the PPS signal through a first convolution neural network model serving as a filter to obtain a waveform characteristic vector of the PPS signal; s320, the frame synchronization signal passes through a second convolution neural network model serving as a filter to obtain a waveform characteristic vector of the frame synchronization signal; s330, calculating waveform difference characteristic vectors between the waveform characteristic vectors of the PPS signals and the waveform characteristic vectors of the frame synchronous signals; s340, performing feature distribution constraint on the waveform difference feature vector to obtain a constrained waveform difference feature vector; and S350, passing the constrained waveform difference feature vector through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the synchronization precision meets a preset standard.
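The five sub-steps S310 to S350 can be sketched as the following pipeline; all callables are hypothetical placeholders standing in for the models described in the text:

```python
import numpy as np

def analyze_sync(pps_waveform, fsync_waveform,
                 extract_pps, extract_fsync, constrain, classify):
    """Hedged skeleton of sub-steps S310-S350 (all callables are placeholders)."""
    v1 = np.asarray(extract_pps(pps_waveform))      # S310: PPS waveform feature vector
    v2 = np.asarray(extract_fsync(fsync_waveform))  # S320: frame-sync waveform feature vector
    v = v1 - v2                                     # S330: position-wise difference
    vc = constrain(v)                               # S340: feature distribution constraint
    return classify(vc)                             # S350: classification result
```

Each placeholder corresponds to one module of the system shown later in fig. 9.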
Fig. 7 is a schematic structural diagram of sub-step S300 in the method for time synchronization of high-definition map acquisition data based on signal closed loop according to an embodiment of the present application. As shown in fig. 7, in the network architecture, first, the PPS signal is passed through a first convolutional neural network model as a filter to obtain a PPS signal waveform feature vector; then, the frame synchronizing signal passes through a second convolution neural network model serving as a filter to obtain a waveform characteristic vector of the frame synchronizing signal; then, calculating waveform difference characteristic vectors between the PPS signal waveform characteristic vectors and the frame synchronization signal waveform characteristic vectors; then, carrying out feature distribution constraint on the waveform difference feature vector to obtain a constrained waveform difference feature vector; and finally, the constrained waveform difference feature vector is passed through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the synchronization precision meets a preset standard.
A convolutional neural network (Convolutional Neural Network, CNN) is an artificial neural network widely applied in fields such as image recognition. A convolutional neural network may include an input layer, hidden layers, and an output layer, where the hidden layers may include convolutional layers, pooling layers, activation layers, fully-connected layers, and the like; each layer performs its operation on the data output by the previous layer and passes the result to the next layer, so that the initial input data yields the final result after multi-layer operations. By using convolution kernels as feature filtering factors, the convolutional neural network model has excellent performance in local image feature extraction, and has stronger generalization and fitting capability than traditional image feature extraction algorithms based on statistics or feature engineering.
In the technical scheme of the application, a convolutional neural network model with excellent performance in the field of image feature extraction is firstly used as a feature filter to capture local waveform feature vectors of the PPS signal and the frame synchronization signal in the image field.
More specifically, in step S310, the PPS signal is passed through a first convolutional neural network model as a filter to obtain a PPS signal waveform feature vector.
Accordingly, in one specific example, passing the PPS signal through the first convolutional neural network model as a filter to obtain the PPS signal waveform feature vector includes: using each layer of the first convolutional neural network model as a filter to respectively perform, in the forward transfer of layers, two-dimensional convolution processing, feature-matrix-based mean pooling processing, and nonlinear activation processing on the input data, so that the last layer of the first convolutional neural network model as a filter outputs the PPS signal waveform feature vector, wherein the input of the first layer of the first convolutional neural network model as a filter is the PPS signal.
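A single filter layer of the kind just described (two-dimensional convolution, feature-matrix mean pooling, nonlinear activation) might look as follows in NumPy; the kernel size, pooling window, and the choice of ReLU as the nonlinearity are illustrative assumptions:

```python
import numpy as np

def conv_layer(x: np.ndarray, kernel: np.ndarray, pool: int = 2) -> np.ndarray:
    """One filter layer: valid 2-D convolution -> mean pooling -> ReLU activation."""
    kh, kw = kernel.shape
    h, w = x.shape
    # valid two-dimensional convolution (no padding)
    conv = np.array([[np.sum(x[i:i + kh, j:j + kw] * kernel)
                      for j in range(w - kw + 1)]
                     for i in range(h - kh + 1)])
    # mean pooling over non-overlapping pool x pool windows
    ph, pw = conv.shape[0] // pool, conv.shape[1] // pool
    pooled = conv[:ph * pool, :pw * pool].reshape(ph, pool, pw, pool).mean(axis=(1, 3))
    # nonlinear activation
    return np.maximum(pooled, 0.0)
```

Stacking several such layers and flattening the final feature map would yield the waveform feature vector referred to in the text.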
More specifically, in step S320, the frame synchronization signal is passed through a second convolutional neural network model as a filter to obtain a frame synchronization signal waveform feature vector.
Accordingly, in one specific example, passing the frame synchronization signal through the second convolutional neural network model as a filter to obtain the frame synchronization signal waveform feature vector includes: using each layer of the second convolutional neural network model as a filter to respectively perform, in the forward transfer of layers, two-dimensional convolution processing, feature-matrix-based mean pooling processing, and nonlinear activation processing on the input data, so that the last layer of the second convolutional neural network model as a filter outputs the frame synchronization signal waveform feature vector, wherein the input of the first layer of the second convolutional neural network model as a filter is the frame synchronization signal.
Accordingly, in a specific example, the first convolutional neural network model as a filter and/or the second convolutional neural network model as a filter is a depth residual network model.
More specifically, in step S330, a waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector is calculated. That is, the synchronization difference between the PPS signal and the frame synchronization signal is represented in a high-dimensional waveform feature space by the waveform difference feature vector between the two waveform feature vectors. For example, the waveform difference feature vector may be obtained by calculating the position-wise difference between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector.
Accordingly, in one specific example, calculating the waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector includes: calculating the waveform difference feature vector according to the following formula:

$$V = V_1 \ominus V_2$$

where $V_1$ is the PPS signal waveform feature vector, $V_2$ is the frame synchronization signal waveform feature vector, $V$ is the waveform difference feature vector, and $\ominus$ denotes position-wise difference.
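Assuming the two waveform feature vectors are NumPy arrays of equal length, the position-wise difference is a single subtraction (the numeric values below are illustrative):

```python
import numpy as np

# V1: PPS signal waveform feature vector, V2: frame synchronization signal
# waveform feature vector (illustrative values); V: waveform difference vector.
V1 = np.array([0.8, 0.5, 0.3, 0.9])
V2 = np.array([0.7, 0.5, 0.1, 0.4])
V = V1 - V2  # position-wise difference
```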
More specifically, in step S340, feature distribution constraint is performed on the waveform difference feature vector to obtain a constrained waveform difference feature vector. Here, the waveform image semantic difference between the PPS signal and the frame synchronization signal in the image source domain may be amplified in the feature domain by the feature extraction of the first and second convolutional neural network models used as filters. Although this helps the waveform difference feature vector express the difference features between the PPS signal and the frame synchronization signal, the position-wise difference calculation between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector may also discretize the overall feature distribution of the waveform difference feature vector, so that the waveform difference feature vector converges poorly to the predetermined class labels of the classifier, which affects the training speed of the classifier and the accuracy of the classification result. Therefore, in the technical solution of the present application, the geometric constraint re-parameterization of the positive-definite normed space is performed on the waveform difference feature vector $V$.
Accordingly, in one specific example, performing feature distribution constraint on the waveform difference feature vector to obtain the constrained waveform difference feature vector includes: constraining the waveform difference feature vector according to the following formula:

$$v_i' = \frac{v_i - \mu}{\sigma} \cdot \exp\left(-\frac{\|V\|_2^2}{\|V^\top V\|_F}\right)$$

where $V$ denotes the waveform difference feature vector in the form of a row vector, $V^\top$ denotes its transpose, $\|V\|_2^2$ denotes the square of the two-norm of the vector, $\|\cdot\|_F$ denotes the Frobenius norm of the matrix, $v_i$ and $v_i'$ are the feature values of the $i$-th position of the waveform difference feature vector before and after correction respectively, $\mu$ and $\sigma$ are the mean and variance of the feature set $\{v_i\}$, and $\exp(\cdot)$ denotes the natural exponential function.
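One literal reading of the constraint, under the assumption that the matrix inside the Frobenius norm is the outer product of the row vector with its transpose (the original rendering of the patent's formula image is not fully recoverable, so this grouping is an interpretation), can be sketched as:

```python
import numpy as np

def constrain(v: np.ndarray) -> np.ndarray:
    """Geometric constraint re-parameterization sketch: standardize each
    position by the mean/deviation of the feature set, then scale by an
    exponential of the norm ratio. The exact grouping is assumed."""
    mu, var = v.mean(), v.var()
    norm_sq = float(v @ v)                # ||V||_2^2 expressed as inner product
    fro = np.linalg.norm(np.outer(v, v))  # Frobenius norm of the outer-product matrix
    return (v - mu) / np.sqrt(var) * np.exp(-norm_sq / fro)
```

Whatever the exact grouping, the effect is to re-center and bound the feature distribution before classification, which is the stated purpose of step S340.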
Here, the geometric constraint re-parameterization of the positive-definite normed space of the waveform difference feature vector $V$ may be based on the projection modulo-length relation of the Bessel inequality: by projecting the square of the vector norm, expressed as an inner product, within the associated set space of the vector itself, the distribution set of the vector acquires a modulo-length constraint within the geometric metric subspace of the positive-definite normed space, so that the distribution space is re-parameterized to a bounded positive-definite normed space having a closed subspace based on the geometric constraint of the feature distribution. Therefore, the convergence of the overall feature distribution of the waveform difference feature vector $V$ under the predetermined class labels is improved, which improves the training speed of the classifier and the accuracy of the classification result.
More specifically, in step S350, the constrained waveform difference feature vector is passed through a classifier to obtain a classification result indicating whether or not the synchronization accuracy meets a predetermined criterion. That is, the class probability labels to which the waveform difference feature vectors belong are determined by the classifier to determine whether synchronization between the synchronization signals is accurate within a predetermined standard range.
The role of the classifier is to learn classification rules from given training data of known classes and then classify (or predict) unknown data. Logistic regression, SVM, and the like are commonly used to solve binary classification problems. For multi-class classification, logistic regression or SVM can also be used by composing multiple binary classifiers, but this is error-prone and inefficient; the commonly used multi-class method is the Softmax classification function.
Accordingly, in one specific example, as shown in fig. 8, the constrained waveform difference feature vector is passed through a classifier to obtain a classification result, where the classification result is used to indicate whether the synchronization accuracy meets a predetermined criterion, and the method includes: s351, performing full-connection coding on the constrained waveform difference feature vector by using a plurality of full-connection layers of the classifier to obtain a coded classification feature vector; and S352, passing the coding classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
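The fully-connected encoding (S351) followed by the Softmax function (S352) can be sketched as follows; the layer sizes and weights are illustrative assumptions, and a real deployment would use trained parameters:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def classify(v: np.ndarray, w1, b1, w2, b2):
    """S351: fully-connected encoding with ReLU; S352: Softmax class probabilities."""
    h = np.maximum(w1 @ v + b1, 0.0)  # fully-connected layer + activation
    probs = softmax(w2 @ h + b2)      # probability per class label
    return int(np.argmax(probs)), probs
```

The returned label would indicate whether the synchronization precision meets the predetermined standard, with the label-to-meaning mapping fixed at training time.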
In summary, sub-step S300 of the signal-closed-loop-based high-definition map acquisition data time synchronization method according to the embodiment of the application has been described: the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization precision meets a predetermined standard. First, the PPS signal and the frame synchronization signal are respectively passed through a first convolutional neural network model as a filter and a second convolutional neural network model as a filter to obtain a PPS signal waveform feature vector and a frame synchronization signal waveform feature vector; then, the waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector is calculated; then, feature distribution constraint is performed on the waveform difference feature vector to obtain a constrained waveform difference feature vector; and finally, the constrained waveform difference feature vector is passed through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the synchronization precision meets a predetermined standard.
Fig. 9 is a block diagram of a signal synchronization control system 300 according to an embodiment of the present application. As shown in fig. 9, a signal synchronization control system 300 according to an embodiment of the present application includes: a first convolutional encoding module 310, configured to pass the PPS signal through a first convolutional neural network model serving as a filter to obtain a PPS signal waveform feature vector; a second convolutional encoding module 320, configured to pass the frame synchronization signal through a second convolutional neural network model serving as a filter to obtain a waveform feature vector of the frame synchronization signal; a waveform difference calculation module 330, configured to calculate a waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector; the feature distribution constraint module 340 is configured to perform feature distribution constraint on the waveform difference feature vector to obtain a constrained waveform difference feature vector; and a classification module 350, configured to pass the constrained waveform difference feature vector through a classifier to obtain a classification result, where the classification result is used to indicate whether the synchronization accuracy meets a predetermined criterion.
In one example, in the signal synchronization control system 300, the first convolutional encoding module 310 is configured to: use each layer of the first convolutional neural network model as a filter to respectively perform, in the forward transfer of layers, two-dimensional convolution processing, feature-matrix-based mean pooling processing, and nonlinear activation processing on the input data, so that the last layer of the first convolutional neural network model as a filter outputs the PPS signal waveform feature vector, wherein the input of the first layer of the first convolutional neural network model as a filter is the PPS signal.
In one example, in the signal synchronization control system 300, the second convolutional encoding module 320 is configured to: use each layer of the second convolutional neural network model as a filter to respectively perform, in the forward transfer of layers, two-dimensional convolution processing, feature-matrix-based mean pooling processing, and nonlinear activation processing on the input data, so that the last layer of the second convolutional neural network model as a filter outputs the frame synchronization signal waveform feature vector, wherein the input of the first layer of the second convolutional neural network model as a filter is the frame synchronization signal.
In one example, in the signal synchronization control system 300, the first convolutional neural network model as a filter and/or the second convolutional neural network model as a filter is a depth residual network model.
In one example, in the signal synchronization control system 300, the waveform difference calculation module 330 is configured to: calculate the waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector according to the following formula:

$$V = V_1 \ominus V_2$$

where $V_1$ is the PPS signal waveform feature vector, $V_2$ is the frame synchronization signal waveform feature vector, $V$ is the waveform difference feature vector, and $\ominus$ denotes position-wise difference.
In one example, in the signal synchronization control system 300, the feature distribution constraint module 340 is configured to: constrain the waveform difference feature vector according to the following formula to obtain the constrained waveform difference feature vector:

$$v_i' = \frac{v_i - \mu}{\sigma} \cdot \exp\left(-\frac{\|V\|_2^2}{\|V^\top V\|_F}\right)$$

where $V$ denotes the waveform difference feature vector in the form of a row vector, $V^\top$ denotes its transpose, $\|V\|_2^2$ denotes the square of the two-norm of the vector, $\|\cdot\|_F$ denotes the Frobenius norm of the matrix, $v_i$ and $v_i'$ are the feature values of the $i$-th position of the waveform difference feature vector before and after correction respectively, $\mu$ and $\sigma$ are the mean and variance of the feature set $\{v_i\}$, and $\exp(\cdot)$ denotes the natural exponential function.
In one example, in the signal synchronization control system 300, the classification module 350 is configured to: performing full-connection coding on the constrained waveform difference feature vector by using a plurality of full-connection layers of the classifier to obtain a coded classification feature vector; and passing the coding classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the above-described signal synchronization control system 300 have been described in detail in the above description of the sub-step S300 in the signal closed-loop-based high-definition map acquisition data time synchronization method of fig. 6 to 8, and thus, repetitive descriptions thereof will be omitted.
As described above, the signal synchronization control system 300 according to the embodiment of the present application may be implemented in various wireless terminals, for example, a server or the like having a signal synchronization control algorithm. In one example, the signal synchronization control system 300 according to embodiments of the present application may be integrated into a wireless terminal as a software module and/or hardware module. For example, the signal synchronization control system 300 may be a software module in the operating system of the wireless terminal or may be an application developed for the wireless terminal; of course, the signal synchronization control system 300 may also be one of a plurality of hardware modules of the wireless terminal.
Alternatively, in another example, the signal synchronization control system 300 and the wireless terminal may be separate devices, and the signal synchronization control system 300 may be connected to the wireless terminal through a wired and/or wireless network and transmit the interactive information in an agreed data format.
According to another aspect of the present application, there is also provided a non-volatile computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a computer, can perform a method as described above.
Program portions of the technology may be considered to be "products" or "articles of manufacture" in the form of executable code and/or associated data, embodied or carried out by a computer readable medium. A tangible, persistent storage medium may include any memory or storage used by a computer, processor, or similar device or related module. Such as various semiconductor memories, tape drives, disk drives, or the like, capable of providing storage functionality for software.
All or a portion of the software may sometimes communicate over a network, such as the Internet or another communication network. Such communication may load software from one computer device or processor to another, for example from a server or host computer onto the hardware platform of the computing environment implementing the system, or onto another computing environment performing similar functions related to signal synchronization control. Thus, another medium capable of carrying software elements, such as optical, electrical, or electromagnetic waves propagating through cables, optical fibers, or air, may also serve as a physical connection between local devices. The physical media used for the carrier waves, such as electrical, wireless, or optical links, may likewise be considered software-bearing media. Unless usage is limited to a tangible "storage" medium, other terms used herein for a computer or machine "readable medium" mean any medium that participates in the execution of instructions by a processor.
This application uses specific words to describe embodiments of the application. Reference to "a first/second embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is associated with at least one embodiment of the present application. Thus, it should be emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various positions in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the present application may be combined as suitable.
Furthermore, those skilled in the art will appreciate that the various aspects of the invention are illustrated and described in the context of a number of patentable categories or circumstances, including any novel and useful procedures, machines, products, or materials, or any novel and useful modifications thereof. Accordingly, aspects of the present application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.) or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," module, "" engine, "" unit, "" component, "or" system. Furthermore, aspects of the present application may take the form of a computer product, comprising computer-readable program code, embodied in one or more computer-readable media.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present invention and is not to be construed as limiting thereof. Although a few exemplary embodiments of this invention have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this invention. Accordingly, all such modifications are intended to be included within the scope of this invention as defined in the following claims. It is to be understood that the foregoing is illustrative of the present invention and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The invention is defined by the claims and their equivalents.
Claims (9)
1. A high-definition map acquisition data time synchronization method based on signal closed loop is characterized by comprising the following steps:
the positioning module transmits PPS signals to the micro control unit, the inertial navigation module and the system-level chip at the same time, wherein after the system-level chip receives the PPS signals, the camera module is started;
the camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module;
the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard;
after determining that the synchronization precision meets a preset standard, the micro control unit transmits positioning information acquired by the positioning module and pose signals acquired by the inertial navigation module to the system-in-chip; and
and the system-level chip transmits the pose signal, the positioning information and the image data acquired by the camera module to a background server.
2. The method according to claim 1, wherein the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether synchronization accuracy meets a predetermined criterion, comprising:
Passing the PPS signal through a first convolutional neural network model serving as a filter to obtain a PPS signal waveform characteristic vector;
the frame synchronization signal passes through a second convolution neural network model serving as a filter to obtain a waveform characteristic vector of the frame synchronization signal;
calculating a waveform difference characteristic vector between the PPS signal waveform characteristic vector and the frame synchronization signal waveform characteristic vector;
performing feature distribution constraint on the waveform difference feature vector to obtain a constrained waveform difference feature vector; and
and the constrained waveform difference feature vector is passed through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the synchronization precision meets a preset standard.
3. The method for time synchronization of high definition map acquisition data based on signal closed loop according to claim 2, wherein passing the PPS signal through a first convolutional neural network model as a filter to obtain a PPS signal waveform feature vector comprises:
using each layer of the first convolutional neural network model serving as a filter to respectively perform, in forward passes of the layers, two-dimensional convolution processing, feature-matrix-based mean pooling, and nonlinear activation processing on the input data, so that the last layer of the first convolutional neural network model outputs the PPS signal waveform feature vector, wherein the input to the first layer of the first convolutional neural network model is the PPS signal.
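The three per-layer operations of claim 3 (two-dimensional convolution, feature-matrix mean pooling, nonlinear activation) can be shown on a toy input. Kernel values, input size, and the choice of ReLU as the nonlinearity are assumptions for illustration; the patent does not specify them here.

```python
import numpy as np

def conv2d(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Valid 2-D cross-correlation of input x with kernel k (no padding)."""
    h, w = k.shape
    out = np.zeros((x.shape[0] - h + 1, x.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

def mean_pool(x: np.ndarray, size: int = 2) -> np.ndarray:
    """Non-overlapping mean pooling over size×size windows of the feature matrix."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).mean(axis=(1, 3))

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0)  # assumed nonlinear activation

x = np.arange(36, dtype=float).reshape(6, 6)  # toy "signal image"
k = np.full((3, 3), 1 / 9.0)                  # assumed averaging kernel
y = relu(mean_pool(conv2d(x, k)))             # one layer's forward pass
print(y.shape)  # (2, 2)
```

A real layer would stack many such kernels and feed the pooled, activated output to the next layer, as the claim's "forward passes of the layers" describes.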
4. The method for time synchronization of high definition map acquisition data based on signal closed loop according to claim 3, wherein the step of passing the frame synchronization signal through a second convolutional neural network model as a filter to obtain a waveform feature vector of the frame synchronization signal comprises the steps of:
using each layer of the second convolutional neural network model serving as a filter to respectively perform, in forward passes of the layers, two-dimensional convolution processing, feature-matrix-based mean pooling, and nonlinear activation processing on the input data, so that the last layer of the second convolutional neural network model outputs the frame synchronization signal waveform feature vector, wherein the input to the first layer of the second convolutional neural network model is the frame synchronization signal.
5. The method for time synchronization of high-definition map acquisition data based on signal closed loop according to claim 4, wherein the first convolutional neural network model serving as a filter and/or the second convolutional neural network model serving as a filter is a deep residual network model.
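The deep residual network of claim 5 is built from blocks whose output adds the block's input back to its transformed features (an identity shortcut), which keeps gradients flowing through many layers. A minimal dense-layer sketch, with all weights and sizes as assumptions:

```python
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0)

def residual_block(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Identity-shortcut residual block: out = relu(x + W2 · relu(W1 · x))."""
    return relu(x + w2 @ relu(w1 @ x))

rng = np.random.default_rng(1)
x = rng.normal(size=16)                 # toy input feature vector
w1 = rng.normal(size=(16, 16)) * 0.1    # small assumed weights
w2 = rng.normal(size=(16, 16)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (16,)
```

In the claimed models the same pattern would use convolutional layers instead of dense ones, stacked many blocks deep.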
6. The method according to claim 5, wherein calculating the waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector comprises:
calculating the waveform difference feature vector between the PPS signal waveform feature vector and the frame synchronization signal waveform feature vector according to the following formula;
wherein, the formula is:
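The claim's formula itself is not reproduced in this text extract (it was an image in the original). As an illustrative assumption only, a common way to form such a difference vector is a position-wise subtraction of the two feature vectors:

```python
import numpy as np

# Assumed position-wise difference; the patent's actual formula is not
# reproduced in this text and may differ.
pps_feat = np.array([0.9, 0.2, 0.5, 0.7])    # toy PPS waveform feature vector
fsync_feat = np.array([0.8, 0.3, 0.5, 0.6])  # toy frame-sync feature vector
diff = pps_feat - fsync_feat                 # waveform difference feature vector
print(diff)
```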
7. The method for time synchronization of high definition map acquisition data based on signal closed loop of claim 6, wherein performing feature distribution constraint on the waveform difference feature vector to obtain a constrained waveform difference feature vector, comprises:
performing feature distribution constraint on the waveform difference feature vector by using the following formula to obtain the constrained waveform difference feature vector;
wherein, the formula is:
wherein V represents the waveform difference feature vector in the form of a row vector, V^T represents the transpose vector of the waveform difference feature vector, ||·||_2^2 represents the squared two-norm of a vector, ||·||_F represents the Frobenius norm of a matrix, v_i and v_i' represent the feature value at the i-th position of the waveform difference feature vector before and after correction, respectively, μ and σ are the mean and variance of the feature set, and exp(·) represents calculating the natural exponential of a value.
8. The method for time synchronization of high-definition map acquisition data based on signal closed loop according to claim 7, wherein passing the constrained waveform difference feature vector through a classifier to obtain a classification result, the classification result being used for indicating whether the synchronization accuracy meets the predetermined standard, comprises:
performing full-connection encoding on the constrained waveform difference feature vector using a plurality of fully connected layers of the classifier to obtain an encoded classification feature vector; and
passing the encoded classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
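The claim-8 classifier (stacked fully connected layers followed by Softmax over two classes) can be sketched directly. Layer count, widths, ReLU between layers, and the random weights are assumptions; a real classifier would be trained.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(v: np.ndarray, layers, w_out: np.ndarray) -> np.ndarray:
    """Full-connection encoding, then Softmax over two classes
    (e.g. class 0: accuracy meets the standard, class 1: it does not)."""
    h = v
    for w in layers:              # fully connected encoding layers
        h = np.maximum(w @ h, 0)  # ReLU between layers (assumed)
    return softmax(w_out @ h)

rng = np.random.default_rng(2)
v = rng.normal(size=8)            # toy constrained waveform difference vector
layers = [rng.normal(size=(8, 8)) * 0.3, rng.normal(size=(8, 8)) * 0.3]
w_out = rng.normal(size=(2, 8)) * 0.3
probs = classify(v, layers, w_out)
print(probs)  # two class probabilities
```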
9. A high-definition map acquisition data time synchronization system based on signal closed loop, characterized in that the system comprises: a positioning module, a micro control unit, an inertial navigation module, a system-on-chip, and a camera module;
the positioning module simultaneously transmits a PPS signal to the micro control unit, the inertial navigation module, and the system-on-chip, wherein the system-on-chip starts the camera module after receiving the PPS signal;
the camera module sends a frame synchronization signal to the micro control unit and the inertial navigation module;
the micro control unit analyzes the PPS signal and the frame synchronization signal based on a predetermined algorithm model to determine whether the synchronization accuracy meets a predetermined standard;
after determining that the synchronization accuracy meets the predetermined standard, the micro control unit transmits the positioning information acquired by the positioning module and the pose signal acquired by the inertial navigation module to the system-on-chip; and
the system-on-chip transmits the pose signal, the positioning information, and the image data acquired by the camera module to a background server.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310212693.XA CN116068585B (en) | 2023-03-08 | 2023-03-08 | High-definition map acquisition data time synchronization method and system based on signal closed loop |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310212693.XA CN116068585B (en) | 2023-03-08 | 2023-03-08 | High-definition map acquisition data time synchronization method and system based on signal closed loop |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116068585A CN116068585A (en) | 2023-05-05 |
CN116068585B true CN116068585B (en) | 2023-06-09 |
Family
ID=86178626
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310212693.XA Active CN116068585B (en) | 2023-03-08 | 2023-03-08 | High-definition map acquisition data time synchronization method and system based on signal closed loop |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116068585B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103957344A (en) * | 2014-04-28 | 2014-07-30 | 广州杰赛科技股份有限公司 | Video synchronization method and system for multiple camera devices |
CN111182226A (en) * | 2019-07-16 | 2020-05-19 | 北京欧比邻科技有限公司 | Method, device, medium and electronic equipment for synchronous working of multiple cameras |
CN111860604A (en) * | 2020-06-24 | 2020-10-30 | 国汽(北京)智能网联汽车研究院有限公司 | Data fusion method, system and computer storage medium |
CN115529096A (en) * | 2021-06-24 | 2022-12-27 | 高德软件有限公司 | Timestamp synchronization method, data acquisition platform and chip |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5846165B2 (en) * | 2013-07-11 | 2016-01-20 | カシオ計算機株式会社 | Feature amount extraction apparatus, method, and program |
CN113110160B (en) * | 2021-04-09 | 2023-03-21 | 黑芝麻智能科技(上海)有限公司 | Time synchronization method and device of domain controller, domain controller and storage medium |
- 2023-03-08 CN CN202310212693.XA patent/CN116068585B/en active Active
Non-Patent Citations (1)
Title |
---|
Research on timestamp synchronization method for UAV aerial survey video; Jiang Zhidong et al.; Instrument Technique (仪表技术), No. 9, pp. 12-14, 27 *
Also Published As
Publication number | Publication date |
---|---|
CN116068585A (en) | 2023-05-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108229442B (en) | Method for rapidly and stably detecting human face in image sequence based on MS-KCF | |
CN109564618B (en) | Method and system for facial image analysis | |
CN111160297A (en) | Pedestrian re-identification method and device based on residual attention mechanism space-time combined model | |
US9070041B2 (en) | Image processing apparatus and image processing method with calculation of variance for composited partial features | |
US20200125947A1 (en) | Method and apparatus for quantizing parameters of neural network | |
US11715190B2 (en) | Inspection system, image discrimination system, discrimination system, discriminator generation system, and learning data generation device | |
CN112184508A (en) | Student model training method and device for image processing | |
CN111902826A (en) | Positioning, mapping and network training | |
CN112784778B (en) | Method, apparatus, device and medium for generating model and identifying age and sex | |
CN113313053B (en) | Image processing method, device, apparatus, medium, and program product | |
JP2022554302A (en) | Systems, methods and media for manufacturing processes | |
JP2022078310A (en) | Image classification model generation method, device, electronic apparatus, storage medium, computer program, roadside device and cloud control platform | |
CN116992226A (en) | Water pump motor fault detection method and system | |
WO2022169681A1 (en) | Learning orthogonal factorization in gan latent space | |
CN117237359B (en) | Conveyor belt tearing detection method and device, storage medium and electronic equipment | |
CN116068585B (en) | High-definition map acquisition data time synchronization method and system based on signal closed loop | |
CN112669452B (en) | Object positioning method based on convolutional neural network multi-branch structure | |
CN115994558A (en) | Pre-training method, device, equipment and storage medium of medical image coding network | |
CN116563291B (en) | SMT intelligent error-proofing feeding detector | |
CN111460909A (en) | Vision-based goods location management method and device | |
Gupta et al. | VehiPose: a multi-scale framework for vehicle pose estimation | |
CN116048682A (en) | Terminal system interface layout comparison method and electronic equipment | |
CN111931767B (en) | Multi-model target detection method, device and system based on picture informativeness and storage medium | |
CN116593890B (en) | Permanent magnet synchronous motor rotor and forming detection method thereof | |
US20230401427A1 (en) | Training neural network with budding ensemble architecture based on diversity loss |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||