WO2019080879A1 - Data processing method, computer device and storage medium

Data processing method, computer device and storage medium

Info

Publication number
WO2019080879A1
Authority
WO
WIPO (PCT)
Prior art keywords
signal frame
input
original signal
frame
target
Prior art date
Application number
PCT/CN2018/111691
Other languages
English (en)
French (fr)
Inventor
肖泽东
郑远力
陈宗豪
顾照鹏
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to EP18871662.5A (EP3614256B1)
Publication of WO2019080879A1
Priority to US16/599,004 (US11245763B2)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 - Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 - Pointing devices displaced or positioned by the user, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 - Head tracking input arrangements
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/017 - Head mounted

Definitions

  • the present application relates to the field of Internet technologies, and in particular, to a data processing method, a computer device, and a storage medium.
  • Sensing devices such as accelerometers, gyroscopes, visual image sensors, inertial sensors and radars can be integrated into a variety of mobile devices (eg, intelligent robots, virtual reality and augmented reality devices) to provide location navigation.
  • The position information of the robot can be estimated by fusing the sensing signals collected by sensors such as the gyroscope and the radar.
  • For example, the frame rate at which an image sensor outputs sensing signals is 30 Hz, while the frame rate at which an inertial sensor outputs sensing signals is usually greater than 100 Hz, for example 500 Hz. These sensing signals (ie, the inertial sensor signal and the image sensor signal) can be input to a signal processor for sensor signal fusion.
  • In order to output estimates at a higher frame rate, it is often considered to output at the frame rate of the high-frame-rate sensor. However, because the output frame rate of the low-frame-rate image sensor is smaller than the output frame rate of the high-frame-rate inertial sensor, a plurality of inertial sensor signals may be present within the input interval between any two image sensor signals. Therefore, during signal fusion within that input interval, the low-frame-rate sensing signal needed to correct the high-frame-rate sensing signal input is lacking, so an accurate estimate cannot be obtained, and the estimated position information deviates from the true value.
  • a data processing method, computer device, and storage medium are provided.
  • a data processing method comprising:
  • When the first frame rate of a first data collector in the terminal is smaller than the second frame rate of a second data collector, the terminal generates a supplementary signal frame according to the first original signal frame of the first data collector, and determines an input timestamp of the supplementary signal frame according to the second frame rate;
  • when the current time reaches the input timestamp of the supplementary signal frame, the terminal inputs the supplementary signal frame to the first input queue of the first data collector, and acquires the second original signal frame currently input by the second data collector to the second input queue;
  • the terminal performs signal fusion on the supplementary signal frame in the first input queue and the second original signal frame currently input to the second input queue.
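  • For illustration only (this sketch is not part of the original disclosure), the three steps above can be pictured in a few lines of Python; the class names, the example frame rates (30 Hz and 250 Hz) and the placeholder fuse() function are assumptions made for the sketch, and the supplementary frame is generated by the simplest variant (copying the latest first original signal frame).

```python
from collections import deque

class Collector:
    """A data collector with a frame rate and an input queue of (timestamp_ms, frame) pairs."""
    def __init__(self, frame_rate_hz):
        self.frame_rate_hz = frame_rate_hz
        self.interval_ms = 1000.0 / frame_rate_hz  # input interval duration
        self.queue = deque()                       # input queue

    def latest(self):
        # Target original signal frame: the entry with the largest input timestamp.
        return self.queue[-1]

def fuse(frame_a, frame_b):
    # Placeholder fusion step; the patent does not prescribe a particular algorithm.
    return 0.5 * frame_a + 0.5 * frame_b

first = Collector(30)    # first data collector (low frame rate), e.g. a visual image sensor
second = Collector(250)  # second data collector (high frame rate), e.g. an inertial sensor

def on_second_original_frame(now_ms, frame):
    """Called each time the second data collector inputs a new original signal frame."""
    second.queue.append((now_ms, frame))
    if first.frame_rate_hz < second.frame_rate_hz and first.queue:
        # Generate a supplementary signal frame from the first collector's latest frame
        # (simplest variant: copy its value) and give it the input timestamp determined
        # from the second frame rate, i.e. the timestamp of the incoming second frame.
        _, latest_first = first.latest()
        first.queue.append((now_ms, latest_first))   # frame-complementing step
        return fuse(latest_first, frame)             # fused, higher-frame-rate estimate
    return None
```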
  • The terminal generating a supplementary signal frame according to the first original signal frame of the first data collector includes:
  • the terminal extracts, in the first input queue of the first data collector, a target first original signal frame corresponding to the first data collector, and generates a supplementary signal frame corresponding to the first input queue according to the target first original signal frame; when the target first original signal frame is the first original signal frame having the largest input timestamp in the first input queue, the value of the supplementary signal frame is the same as the value of the target first original signal frame.
  • The terminal generating a supplementary signal frame according to the first original signal frame of the first data collector includes:
  • the terminal extracts, in the first input queue of the first data collector, the input timestamps of the historical first original signal frame and the target first original signal frame, and extracts, in the second input queue of the second data collector, the input timestamp of the target second original signal frame; when the target first original signal frame is the first original signal frame having the largest input timestamp in the first input queue, the target second original signal frame is the second original signal frame having the largest input timestamp in the second input queue;
  • the terminal estimates a first complement frame parameter according to the historical first original signal frame; the terminal determines the input interval duration of the second data collector according to the second frame rate, and calculates the complement frame interval duration corresponding to the first complement frame parameter according to the input timestamp of the target first original signal frame, the input timestamp of the target second original signal frame, and the input interval duration;
  • the terminal generates the supplementary signal frame according to the target first original signal frame, the first complement frame parameter, and the complement frame interval duration.
  • The terminal generating a supplementary signal frame according to the first original signal frame of the first data collector includes:
  • the terminal extracts, in the first input queue of the first data collector, the input timestamp of the target first original signal frame, and extracts, in the second input queue of the second data collector, the input timestamp of the target second original signal frame; when the target first original signal frame is the first original signal frame having the largest input timestamp in the first input queue, the target second original signal frame is the second original signal frame having the largest input timestamp in the second input queue;
  • the terminal acquires a second complement frame parameter; the terminal determines the input interval duration of the second data collector according to the second frame rate, and calculates the complement frame interval duration corresponding to the second complement frame parameter according to the input timestamp of the target first original signal frame, the input timestamp of the target second original signal frame, and the input interval duration;
  • the terminal generates the supplementary signal frame according to the target first original signal frame, the second complement frame parameter, and the complement frame interval duration.
  • The terminal determining an input timestamp of the supplementary signal frame according to the second frame rate includes: the terminal calculates, according to the second frame rate, the input timestamp of the second original signal frame to be input to the second input queue, and uses it as the input timestamp of the supplementary signal frame.
  • Optionally, the terminal may also determine the input timestamp of the supplementary signal frame according to the input timestamp of the target first original signal frame and the complement frame interval duration.
  • Before the terminal determines the input timestamp of the supplementary signal frame according to the second frame rate, the method further includes:
  • the terminal acquires the target first original signal frame in the first input queue and the target second original signal frame in the second input queue; when the target first original signal frame is the first original signal frame having the largest input timestamp in the first input queue, the target second original signal frame is the second original signal frame having the largest input timestamp in the second input queue;
  • when the input timestamp of the target second original signal frame is smaller than the input timestamp of the target first original signal frame, and the current time reaches the input timestamp of the target first original signal frame, the terminal performs signal fusion on the target first original signal frame in the first data collector and the target second original signal frame in the second data collector.
  • the method further includes:
  • the terminal uses the supplementary signal frame in the first input queue as the target first original signal frame of the first data collector, and generates a target supplementary signal frame according to the target first original signal frame; when the current time reaches the input timestamp of the target supplementary signal frame, the terminal performs signal fusion on the target supplementary signal frame in the first input queue and the target second original signal frame in the second input queue.
  • A computer device comprising a memory and a processor, the memory storing computer readable instructions that, when executed by the processor, cause the processor to perform the following steps:
  • when the first frame rate of the first data collector is smaller than the second frame rate of the second data collector, generating a supplementary signal frame according to the first original signal frame of the first data collector, and determining an input timestamp of the supplementary signal frame according to the second frame rate;
  • when the current time reaches the input timestamp of the supplementary signal frame, inputting the supplementary signal frame to the first input queue of the first data collector, and acquiring the second original signal frame currently input by the second data collector to the second input queue;
  • performing signal fusion on the supplementary signal frame in the first input queue and the second original signal frame currently input to the second input queue.
  • One or more non-volatile storage media storing computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
  • when the first frame rate of the first data collector is smaller than the second frame rate of the second data collector, generating a supplementary signal frame according to the first original signal frame of the first data collector, and determining an input timestamp of the supplementary signal frame according to the second frame rate;
  • when the current time reaches the input timestamp of the supplementary signal frame, inputting the supplementary signal frame to the first input queue of the first data collector, and acquiring the second original signal frame currently input by the second data collector to the second input queue;
  • performing signal fusion on the supplementary signal frame in the first input queue and the second original signal frame currently input to the second input queue.
  • FIG. 1 is a schematic structural diagram of a network architecture provided by an embodiment of the present application.
  • FIG. 1a is an internal structural diagram of an intelligent terminal according to an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a data processing method according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a fused signal frame provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of another data processing method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a fused signal frame provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of generating a supplementary signal frame according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of calculating an input timestamp of a target supplementary signal frame according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of another data processing apparatus according to an embodiment of the present application.
  • FIG. 1 is a schematic structural diagram of a network architecture according to an embodiment of the present application.
  • the network architecture may include a smart terminal 2000 and a data collector cluster;
  • the data collector cluster may include multiple sensors, as shown in FIG. 1, specifically including a sensor 3000a, a sensor 3000b, ..., and a sensor 3000n;
  • the sensor 3000a, the sensor 3000b, ..., and the sensor 3000n may each be connected to the smart terminal 2000 by a network. Some of these sensors (eg, sensor 3000a, sensor 3000d, sensor 3000e, ..., sensor 3000n) may be integrated into the smart terminal; alternatively, other sensors (eg, sensor 3000b, sensor 3000c) may be attached to the smart terminal as separate acquisition devices.
  • the smart terminal 2000 can be configured to receive sensing signals collected by each sensor, and parse the sensing signals to obtain input timestamps corresponding to original signal frames of the respective sensors.
  • Each of the sensing signals is a discrete time signal, and each sensor outputs its discrete time signals at a certain frame rate. Therefore, the smart terminal can receive the sensing signals input by each sensor at different input timestamps.
  • The smart terminal can generate a supplementary signal frame based on the input signal of the low-frame-rate sensor (ie, the first original signal frame) to supplement the input of the low-frame-rate sensor, and, when the current time reaches the input timestamp of the supplementary signal frame, perform signal fusion on the supplementary signal frame and the input signal of the high-frame-rate sensor (ie, the target second original signal frame, which is the second original signal frame having the largest input timestamp in the second input queue), so as to ensure that a high-frame-rate estimate is output while the measurement accuracy is improved, thereby obtaining accurate position information.
  • The VR helmet can be simultaneously connected to multiple sensors over a network (for example, the sensor 3000a, the sensor 3000b, and the sensor 3000c shown in FIG. 1). The output frame rate of the sensor 3000a is the first frame rate, the output frame rate of the sensor 3000b is the second frame rate, and the output frame rate of the sensor 3000c is the third frame rate.
  • the first frame rate is smaller than the second frame rate
  • the second frame rate is smaller than the third frame rate
  • the sensor 3000c is the highest frame rate sensor
  • the sensor 3000b is the lower frame rate sensor.
  • Sensor 3000a is the lowest frame rate sensor.
  • In order to align the input signals (supplementary signal frames) of the other two sensors (3000a and 3000b) with the input signal of the highest-frame-rate sensor (3000c) (ie, the target third original signal frame, which is the third original signal frame having the largest input timestamp in the third input queue), the input signals of the sensor 3000a and the sensor 3000b need to be separately complemented, so as to obtain a supplementary signal frame A to be input into the first input queue of the sensor 3000a and a supplementary signal frame B to be input into the second input queue of the sensor 3000b.
  • The VR helmet can then synchronously fuse the supplementary signal frame A, the supplementary signal frame B and the target third original signal frame when the current time reaches the input timestamp of the target third original signal frame, so as to ensure a high output frame rate. In this way, the measurement accuracy of the VR helmet can be improved to further obtain accurate position information.
  • an internal structure diagram of an intelligent terminal includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus.
  • the memory comprises a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium of the computer device stores an operating system and can also store computer readable instructions that, when executed by the processor, cause the processor to implement a data processing method.
  • the internal memory can also store computer readable instructions that, when executed by the processor, cause the processor to perform a data processing method.
  • the display screen of the computer device may be a liquid crystal display or an electronic ink display screen
  • The input device of the computer device may be a touch layer covering the display screen, a button, a trackball or a touchpad provided on the computer device casing, or an external keyboard, trackpad or mouse.
  • FIG. 1a is only a block diagram of part of the structure related to the solution of the present application, and does not constitute a limitation on the smart terminal to which the solution is applied; a specific intelligent terminal may include more or fewer components than those shown in the figure, or combine some components, or have a different component arrangement.
  • FIG. 2 is a schematic flowchart of a data processing method according to an embodiment of the present application. As shown in FIG. 2, the method may include:
  • Step S101 if the first frame rate of the first data collector is smaller than the second frame rate of the second data collector, generate a supplementary signal frame according to the first original signal frame of the first data collector, and determine an input timestamp of the supplementary signal frame according to the second frame rate;
  • The data processing apparatus may generate a supplementary signal frame according to the target first original signal frame of the first data collector; the value of the supplementary signal frame may be the same as the value of the target first original signal frame, where the target first original signal frame is the first original signal frame having the largest input timestamp in the first input queue, that is, the first original signal frame most recently input into the data processing device. Optionally, the data processing device may also generate a supplementary signal frame according to the historical first original signal frame and the target first original signal frame of the first data collector. Optionally, the data processing device may further generate a supplementary signal frame according to an externally input signal and the target first original signal frame.
  • the data processing apparatus may further determine an input timestamp of the supplemental signal frame based on the second frame rate.
  • the first original signal frame may include a historical first original signal frame and a target first original signal frame.
  • The target first original signal frame may be the first original signal frame having the largest input timestamp in the first input queue, that is, the first original signal frame most recently input into the first input queue.
  • the historical first original signal frame may be a sensing signal collected by the first data collector (for example, the sensor 3000a in the embodiment corresponding to FIG. 1 above) that has been input into the first input queue.
  • the historical first original signal frame may also be a supplementary signal frame that has been input into the first input queue for supplemental input of the input signal of the first data collector.
  • the data processing device may be integrated and applied to any one of the smart terminals (for example, the smart terminal 2000 in the embodiment corresponding to FIG. 1).
  • The smart terminal may include: a smart phone, a tablet computer, a desktop computer, a smart TV, VR glasses, VR gloves, a VR helmet, an Augmented Reality (AR) device (such as AR glasses) or an artificial intelligence robot.
  • The smart terminal may include a user operation interface such as a keyboard, a mouse, a joystick, a touch screen or a display, so that a user can interact with the smart terminal through any suitable input, for example, controlling the smart terminal by manually inputting commands, by voice control, by gesture control or by orientation information.
  • The smart terminal can be connected to multiple data collectors (for example, the multiple sensors in the embodiment corresponding to FIG. 1 above); these sensors can be integrated into the smart terminal or can be used as separate acquisition devices.
  • these sensing devices can be permanently or removably attached to the smart terminal.
  • These sensing devices may include, but are not limited to, a GPS (Global Positioning System) sensor, an inertial sensor, a pose sensor, a proximity sensor, or a visual image sensor.
  • The smart terminal can receive the sensing signals collected by the sensors (eg, the GPS sensor and the pose sensor), ie, receive the first original signal frame collected by the first data collector and the second original signal frame collected by the second data collector, and parse these sensing signals to obtain the input timestamps corresponding to the original signal frames of the respective sensors.
  • the input timestamp refers to a relative timestamp that is counted from the time when the system is powered on.
  • Each of the sensing signals is a discrete time signal, and each sensor outputs the discrete time signals at a certain frame rate. Therefore, the smart terminal can receive the input of different discrete time signals at different input timestamps.
  • the first original signal frame input by the first data collector may be received when the input time stamp is 50 ms, and the second original signal frame input by the second data collector is received when the input time stamp is 48 ms.
  • the first original signal frame and the second original signal frame may be discrete time signals describing one or more of an orientation, a direction, a speed, or an acceleration of the smart terminal.
  • The data processing device applied to the smart terminal may perform a frame-complementing process on the first data collector, that is, the data processing device may supplement the input of the sensing signal from the first data collector. Specifically, based on the sensing signal most recently input by the first data collector (ie, the sensing signal having the largest input timestamp), the data processing device may generate a supplementary signal frame of the first data collector; the supplementary signal frame can be used for signal fusion with the next sensing signal of the second data collector.
  • In other words, based on the first original signal frame most recently input by the first data collector (ie, the low-frame-rate sensor), that is, the target first original signal frame in the first input queue of the first data collector (the first original signal frame having the largest input timestamp in the input queue), a supplementary signal frame is generated, so that the input of the low-frame-rate sensor can be supplemented in a timely and effective manner and the output time of the low-frame-rate sensor can be aligned with the output time of the high-frame-rate sensor.
  • In this embodiment of the present application, the smart terminal is described as being connected to only two data collectors, that is, the data processing device in the smart terminal can be used to receive the first original signal frame input by the first data collector and the second original signal frame input by the second data collector.
  • The first data collector may be a low-frame-rate sensor (for example, a visual image sensor with an output frame rate of 30 Hz), and the second data collector may be a high-frame-rate sensor (for example, a sensor with an output frame rate of 250 Hz).
  • The data processing apparatus may further perform steps S101 to S103 to align the output time of the low-frame-rate sensor with the output time of the high-frame-rate sensor, so as to obtain a more accurate fusion estimate and thereby obtain the current accurate orientation and/or motion information of the smart terminal.
  • If the smart terminal is connected to more than two data collectors, it can still reuse the original signal frame most recently input by the low-frame-rate sensor (ie, reproduce the first original signal frame having the largest input timestamp in the first input queue) to generate a supplementary signal frame corresponding to the highest-frame-rate sensor, so that the output times of the sensors can be synchronized to facilitate signal fusion with the highest-frame-rate sensor.
  • Step S102 if the current time reaches the input timestamp of the supplementary signal frame, input the supplementary signal frame to the first input queue of the first data collector, and acquire the second original signal frame currently input by the second data collector to the second input queue;
  • The first input queue is configured to store each discrete time signal collected by the first data collector (ie, the historical first original signal frames) and the supplementary signal frame to be input; when the data processing device detects that the current time reaches the input timestamp of the supplementary signal frame, the supplementary signal frame to be input is input to the first input queue of the first data collector.
  • Each signal frame corresponds to a different input timestamp.
  • The second input queue is configured to store the historical second original signal frames that have been input into the data processing device and the second original signal frame to be input into the data processing device. If the current time reaches the input timestamp of the supplementary signal frame (ie, the input timestamp of the second original signal frame), the second data collector may input the second original signal frame to be input into the second input queue.
  • The data processing device can acquire the second original signal frame currently input by the second data collector while acquiring the supplementary signal frame currently input by the first data collector, so as to ensure that the two signal frames acquired when the current time is reached have the same input timestamp. The data processing apparatus can then further perform step S103.
  • The data processing apparatus may generate the supplementary signal frame through any one of the three cases described in step S101, and input the supplementary signal frame into the first input queue; this input process is referred to as the frame-complementing process. Through the frame-complementing process, the data processing apparatus can ensure that the output time of the low-frame-rate sensor is aligned with the output time of the high-frame-rate sensor, and then further perform step S103 to accurately output a higher-frame-rate signal fusion estimate.
  • Step S103 the supplementary signal frame in the first input queue is signal-fused with the second original signal frame currently input to the second input queue.
  • The data processing apparatus may perform signal fusion on the supplementary signal frame and the second original signal frame having the same input timestamp, so that the frame rate of the fused output reaches the output frame rate of the high-frame-rate sensor, and a higher-frame-rate signal fusion estimate (ie, a fused estimate) can be accurately output; this estimate can be used to estimate the current location information of the smart terminal (eg, an artificial intelligence robot). That is, the data processing apparatus may perform signal fusion on the newly received second original signal frame and the newly received supplementary signal frame to obtain an estimate describing the orientation and/or motion information of the smart terminal.
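  • The patent does not fix a particular fusion algorithm; as an illustration only, the following sketch blends two 6-degree-of-freedom frames (x, y, z, roll, pitch, yaw) that share the same input timestamp with a fixed weight. The function name, the weight and the example values are assumptions; a real system would more likely use a Kalman-type filter.

```python
def fuse_6dof(supplementary_frame, second_original_frame, weight=0.5):
    """Element-wise weighted fusion of two 6-DoF frames with equal input timestamps."""
    return [weight * a + (1.0 - weight) * b
            for a, b in zip(supplementary_frame, second_original_frame)]

# Example: supplementary frame from the first input queue and the second original frame
# currently input to the second input queue, both stamped with the same input timestamp.
supplementary = [0.10, 0.00, 1.50, 0.01, 0.02, 0.30]   # hypothetical 6-DoF values
second_original = [0.12, 0.01, 1.48, 0.01, 0.03, 0.31]
fused_estimate = fuse_6dof(supplementary, second_original)
```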
  • FIG. 3 is a schematic diagram of a fused signal frame provided by an embodiment of the present application.
  • the smart terminal is a robot A
  • the two data collectors connected to the robot A are a first data collector and a second data collector, respectively.
  • the first data collector may be a visual image sensor
  • the second data collector may be an inertial sensor, and the inertial sensor may include: a line accelerometer and/or an angular rate gyroscope, mainly for detecting and measuring the six-degree-of-freedom posture of the robot A.
  • The inertial sensor can input the collected six-degree-of-freedom attitude signals into the data processing device of the robot A once every input interval duration (Tb); for example, the inertial sensor can input an acquired second original signal frame into the data processing device every 4 ms (ie, the input interval duration Tb is 4 ms).
  • The original signal frame b1 is the target second original signal frame in the second input queue. According to the input timestamp of the original signal frame b1 (48 ms) and the input interval duration (4 ms), the input timestamp (52 ms) of the original signal frame b2 to be input into the second input queue can be obtained.
  • Therefore, the input signal of the low-frame-rate visual image sensor needs to be frame-complemented: the original signal frame B1 with the input timestamp of 50 ms can be directly reused, that is, the original signal frame B1 is directly copied to obtain the supplementary signal frame corresponding to the visual image sensor (ie, the supplementary signal frame C1 as shown in FIG. 3).
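  • As a worked version of this example (illustrative only; the 6-DoF value of B1 is a made-up placeholder), the next inertial-sensor timestamp and the copied supplementary frame C1 can be computed as follows.

```python
# Worked example following FIG. 3 (the frame value of B1 is a hypothetical placeholder).
Tb_ms = 4                     # input interval duration of the inertial sensor
t_b1_ms = 48                  # input timestamp of the target second original signal frame b1
t_b2_ms = t_b1_ms + Tb_ms     # input timestamp of the frame b2 to be input next: 52 ms

t_B1_ms = 50                  # input timestamp of the visual image sensor's frame B1
B1 = [0.10, 0.00, 1.50, 0.01, 0.02, 0.30]   # hypothetical 6-DoF value of B1

# Frame-complementing: reuse (copy) B1 as the supplementary signal frame C1 and give
# C1 the input timestamp of the second original signal frame to be input (52 ms).
C1 = list(B1)
t_C1_ms = t_b2_ms
print(t_C1_ms)                # 52
```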
  • the low frame rate sensor may be determined as the first data collector, and the high frame rate sensor is determined as the second data collector to perform the foregoing steps S101-S103.
  • Optionally, the highest-frame-rate sensor among all the sensors may be determined as the second data collector, and any one or more of the other sensors may be determined as first data collectors. Each determined first data collector may generate a supplementary signal frame corresponding to the second data collector based on the foregoing steps S101-S102, and step S103 may then be understood as: the supplementary signal frames in the first input queues corresponding to the respective first data collectors are collectively fused with the second original signal frame currently input to the second input queue.
  • the embodiments of the present application do not limit the number of sensors that perform signal fusion.
  • Optionally, a sensor that satisfies the fused-signal output frame rate requirement but is not the highest-frame-rate sensor among all the sensors may also be determined as the second data collector.
  • the fused signal output frame rate is required to be greater than or equal to 90 Hz, and the frame rates of the three sensors are 30 Hz, 100 Hz, and 300 Hz, respectively, a 100 Hz sensor may be selected as the second data collector.
  • any one or more sensors may be selected as the first data collector in other sensors whose frame rate is smaller than the frame rate of the second data collector.
  • Each determined first data collector may generate a supplementary signal frame corresponding to the second data collector based on the foregoing steps S101-S102, and step S103 may be understood as: the supplementary signal frames in the first input queues corresponding to the respective first data collectors are fused together with the second original signal frame most recently output by the second data collector and with the original signal frames most recently output by the other sensors whose frame rates are greater than the frame rate of the second data collector.
  • For example, if the fused-signal output frame rate is required to be greater than or equal to 150 Hz, and the frame rates of four sensors are 30 Hz, 100 Hz, 300 Hz, and 500 Hz, respectively, the 300 Hz sensor can be selected as the second data collector, and the 30 Hz and 100 Hz sensors can be determined as first data collectors. If the supplementary signal frames corresponding to these two first data collectors are A and B, the latest second original signal frame of the second data collector is C, and the latest original signal frame of the 500 Hz sensor is D, then A, B, C and D can be fused together to output a more accurate estimate from the four signals.
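  • The selection rule in the two examples above can be sketched as follows (illustration only; the function name and the tie-breaking choice of taking the lowest frame rate that still meets the requirement are assumptions, since the patent also allows simply choosing the highest-frame-rate sensor).

```python
def choose_collectors(frame_rates_hz, required_output_hz):
    """Pick the second data collector as the lowest frame rate meeting the required fused
    output frame rate; lower-rate sensors become first data collectors (to be
    frame-complemented), higher-rate sensors contribute their latest original frames."""
    eligible = [r for r in frame_rates_hz if r >= required_output_hz]
    second = min(eligible)                              # e.g. 300 Hz when >= 150 Hz is required
    first = [r for r in frame_rates_hz if r < second]   # e.g. 30 Hz and 100 Hz
    higher = [r for r in frame_rates_hz if r > second]  # e.g. 500 Hz
    return second, first, higher

second_hz, first_hz, higher_hz = choose_collectors([30, 100, 300, 500], 150)
# second_hz == 300, first_hz == [30, 100], higher_hz == [500]
```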
  • In the embodiment of the present application, when the first frame rate of the first data collector is smaller than the second frame rate of the second data collector, the supplementary signal frame is generated according to the first original signal frame of the first data collector, and the input timestamp of the supplementary signal frame is determined according to the second frame rate; if the current time reaches the input timestamp of the supplementary signal frame, the supplementary signal frame is input to the first input queue of the first data collector, and the second original signal frame currently input by the second data collector to the second input queue is acquired; signal fusion is then performed on the supplementary signal frame in the first input queue and the second original signal frame currently input to the second input queue.
  • In other words, the supplementary signal frame generated based on the first original signal frame can be used as an input signal of the first data collector (ie, after the frame-complementing process, the supplementary signal frame is input to the first input queue), so that the supplementary signal frame is signal-fused with the target input signal of the second data collector (ie, the second original signal frame currently input into the second input queue). Since the input timestamp of the supplementary signal frame is determined based on the second frame rate of the second data collector, it can be ensured that each target second original signal frame of the second data collector has a corresponding signal frame (for example, a supplementary signal frame) with which it can be corrected, thereby further improving the measurement accuracy while outputting an estimate at a higher frame rate.
  • FIG. 4 is a schematic flowchart diagram of another data processing method provided by an embodiment of the present application. As shown in FIG. 4, the method may include:
  • Step S201 acquiring a target first original signal frame in the first input queue and a target second original signal frame in the second input queue;
  • During the execution of step S201, the first input queue is configured to store each discrete time signal collected by the first data collector (ie, the historical first original signal frames) and the target first original signal frame (for example, the original signal frame B1 in the embodiment corresponding to FIG. 3 above). The target first original signal frame may be the first original signal frame having the largest input timestamp in the first input queue, that is, the first original signal frame most recently input into the first input queue.
  • The second input queue is configured to store the historical second original signal frames that have been input into the data processing device and the second original signal frame most recently input into the data processing device (for example, the original signal frame b1 in the embodiment corresponding to FIG. 3 above); the target second original signal frame may be the second original signal frame most recently input into the second input queue.
  • Step S202 if the input timestamp of the target second original signal frame is smaller than the input timestamp of the target first original signal frame, when the current time reaches the input timestamp of the target first original signal frame, perform signal fusion on the target first original signal frame in the first data collector and the target second original signal frame in the second data collector.
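  • Steps S201-S202 and the frame-complementing branch (step S203 onward) can be summarized as a small decision sketch; it is illustrative only, and it assumes that each input queue holds (timestamp_ms, frame) pairs as in the earlier sketch.

```python
def decide(first_queue, second_queue, now_ms):
    """Step S201/S202: pick direct fusion or fall through to frame-complementing (S203+)."""
    t_first, target_first = first_queue[-1]      # target first original signal frame
    t_second, target_second = second_queue[-1]   # target second original signal frame

    if t_second < t_first and now_ms >= t_first:
        # Step S202: the low-rate sensor's latest frame is the newer one, so fuse the two
        # target frames directly once the current time reaches t_first.
        return ("fuse", target_first, target_second)
    # Otherwise a supplementary signal frame must be generated for the first data
    # collector (steps S203-S206) before the next fusion can take place.
    return ("complement_frame", None, None)
```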
  • FIG. 5 is a schematic diagram of another fused signal frame provided by an embodiment of the present application.
  • the smart terminal is a VR helmet
  • the data collectors connected to the VR helmet are a first data collector and a second data collector, respectively.
  • the first data collector is a pose sensor
  • The first frame rate (ie, output frame rate) of the pose sensor is aHz. The pose sensor can be used to collect the first six-degree-of-freedom attitude signal (ie, the first original signal frame) in real time and input each collected first original signal frame to the first input queue, so that the VR helmet connected to the pose sensor can obtain, from the first input queue, the target first original signal frame whose input timestamp is 50 ms as shown in FIG. 5, and use the remaining signal frames that have been input to the first input queue before 50 ms (ie, the first original signal frames collected by the pose sensor itself and the supplementary signal frames that have been supplementally input) as the historical first original signal frames.
  • the second data collector is an inertial sensor, the second frame rate of the inertial sensor is bHz, and the second frame rate of the inertial sensor is greater than the first frame rate of the pose sensor.
  • The inertial sensor can be used for detecting and measuring the second six-degree-of-freedom attitude signal of the human head (ie, the second original signal frame), and inputting each second original signal frame to the second input queue. Thus, the VR helmet connected to the second data collector can obtain, from the second input queue, the target second original signal frame with the input timestamp of 48 ms as shown in FIG. 5, and use the remaining signal frames that have been input to the second input queue before 48 ms (ie, the second original signal frames acquired by the inertial sensor itself) as the historical second original signal frames.
  • The input timestamp (48 ms) of the target second original signal frame acquired by the VR helmet is smaller than the input timestamp (50 ms) of the target first original signal frame, so when the VR helmet detects that the current time reaches the input timestamp of the target first original signal frame, the target first original signal frame in the first input queue and the target second original signal frame in the second input queue are signal-fused to accurately estimate the attitude information of the human head at the current time.
  • The VR helmet performs frame-by-frame fusion on the input signals of the two sensors with different output frame rates, that is, each time the inertial sensor inputs a frame of the second six-degree-of-freedom attitude signal, the pose sensor needs a corresponding first six-degree-of-freedom attitude signal (ie, a target first original signal frame) to be fused with that target second original signal frame.
  • However, because the two sensors have different output frame rates, the input interval durations of the two kinds of original signal frames input into the VR helmet are different (for example, the inertial sensor may input the second six-degree-of-freedom attitude signal every 4 ms, while the pose sensor may input the first six-degree-of-freedom attitude signal every 33 ms). Therefore, when the input timestamps of the two original signal frames are not the same, the input timestamps of the next two original signal frames input into the VR helmet are also bound to differ. Thus, in order to enable each second original signal frame input into the VR helmet to have a corresponding first original signal frame with which it can be signal-fused, the VR helmet may further perform a frame-complementing process on the input signal of the low-frame-rate sensor, that is, the VR helmet may further perform the subsequent steps S203-S206, so that the input time of the low-frame-rate sensor can be aligned with the input time of the high-frame-rate sensor (specifically, as shown in FIG. 5, the supplementary signal frame to be input and the second original signal frame to be input have the same input timestamp), thereby realizing the synchronization of the input signal frames of the two sensors and allowing a more accurate estimate, so as to obtain more accurate head position information.
  • Step S203 if the first frame rate of the first data collector is smaller than the second frame rate of the second data collector, generating a supplementary signal frame according to the first original signal frame of the first data collector;
  • FIG. 6 is a schematic diagram of calculating and generating a supplementary signal frame according to an embodiment of the present application.
  • As shown in FIG. 6, the input timestamp of the target first original signal frame in the first input queue of the first data collector is 50 ms, the output frame rate of the first data collector is aHz, and the input interval duration corresponding to this output frame rate is Ta. The input timestamp of the target second original signal frame in the second input queue of the second data collector is 48 ms, the output frame rate of the second data collector is bHz, and the input interval duration corresponding to this output frame rate is Tb.
  • Therefore, the input signal of the first data collector needs to be complemented to generate a supplementary signal frame corresponding to the first data collector.
  • the specific process of generating the supplementary signal frame may be as follows.
  • Optionally, the target first original signal frame corresponding to the first data collector is extracted from the first input queue of the first data collector, and the supplementary signal frame corresponding to the first input queue is generated according to the target first original signal frame; the target first original signal frame is the first original signal frame having the largest input timestamp in the first input queue. In this case, the value of the supplementary signal frame is the same as the value of the target first original signal frame, that is, the data processing apparatus may copy the first original signal frame most recently acquired by the first data collector to obtain a supplementary signal frame having the same value as the target first original signal frame.
  • The target first original signal frame may be the first original signal frame with an input timestamp of 50 ms as shown in FIG. 6 (for example, the first six-degree-of-freedom attitude signal collected by the first data collector in the embodiment corresponding to FIG. 5 above).
  • The VR helmet can reuse the target first original signal frame in the first input queue, that is, the VR helmet can directly copy the value of the first six-degree-of-freedom attitude signal with the input timestamp of 50 ms in the first input queue to obtain the supplementary signal frame corresponding to the first data collector. The data processing apparatus may then further perform step S204 to obtain the input timestamp of the supplementary signal frame.
  • Optionally, the data processing apparatus first extracts, in the first input queue of the first data collector, the input timestamps of the historical first original signal frame and the target first original signal frame, and extracts, in the second input queue of the second data collector, the input timestamp of the target second original signal frame; secondly, the data processing apparatus may estimate the first complement frame parameter according to the historical first original signal frame; subsequently, the data processing apparatus may determine the input interval duration of the second data collector according to the second frame rate, and calculate the complement frame interval duration corresponding to the first complement frame parameter according to the input timestamp of the target first original signal frame, the input timestamp of the target second original signal frame, and the input interval duration; finally, the data processing apparatus may generate the supplementary signal frame according to the target first original signal frame, the first complement frame parameter, and the complement frame interval duration.
  • the target first original signal frame is a first original signal frame having the largest input timestamp in the first input queue; the target second original signal frame is the largest in the second input queue. Enter the second original signal frame of the timestamp.
  • The historical first original signal frame in the first input queue may be a historical sensing signal collected by the first data collector itself (that is, a historical first six-degree-of-freedom attitude signal collected before the input timestamp of 50 ms in the embodiment corresponding to FIG. 5 above).
  • The VR helmet can estimate the velocity value V1 of the head motion from the historical first original signal frames (one or more historical first six-degree-of-freedom attitude signals) as shown in FIG. 6; that is, a first complement frame parameter can be estimated from the historical first original signal frames, and this first complement frame parameter is the velocity value V1 describing the head motion.
  • The input timestamp of the target second original signal frame is 48 ms. The VR helmet may calculate the complement frame interval duration according to the input timestamp (50 ms) of the target first original signal frame, the input timestamp (48 ms) of the target second original signal frame, and the input interval duration (4 ms); here the input time difference is the difference between the input timestamp of the first original signal frame and the input timestamp of the second original signal frame. In this example, the complement frame interval duration is Δt' = (48 ms + 4 ms) - 50 ms = 2 ms, ie, the time from the target first original signal frame to the input timestamp of the supplementary signal frame to be input.
  • The VR helmet can then reuse the target first original signal frame in the first input queue together with the velocity value V1 of the head motion and the complement frame interval duration, and generate the supplementary signal frame to be input as shown in FIG. 6. The supplementary signal frame can be expressed as: P = P⁻ + V1 · Δt', where P is the generated supplementary signal frame, P⁻ is the target first original signal frame in the first input queue, V1 is the first complement frame parameter (the estimated velocity of the head motion), and Δt' is the complement frame interval duration.
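  • A minimal sketch of this extrapolation variant follows (illustration only; it assumes the frames are 6-DoF vectors and that V1 is obtained by finite differences over the two most recent historical frames, neither of which is mandated by the patent text).

```python
def estimate_velocity(history):
    """First complement frame parameter V1: finite-difference velocity from the two most
    recent historical first original signal frames, stored as (timestamp_ms, frame)."""
    (t0, f0), (t1, f1) = history[-2], history[-1]
    dt = (t1 - t0) / 1000.0                          # seconds
    return [(b - a) / dt for a, b in zip(f0, f1)]

def supplementary_frame(target_first, v1, dt_complement_ms):
    """P = P^- + V1 * dt', extrapolating the target first original signal frame P^-."""
    dt = dt_complement_ms / 1000.0
    return [p + v * dt for p, v in zip(target_first, v1)]

# Example with the timestamps of FIG. 6: dt' = (48 + 4) - 50 = 2 ms.
dt_prime_ms = (48 + 4) - 50
```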
  • Optionally, the data processing device first extracts, in the first input queue of the first data collector, the input timestamp of the target first original signal frame, and extracts, in the second input queue of the second data collector, the input timestamp of the target second original signal frame; secondly, the data processing apparatus may acquire a second complement frame parameter (the second complement frame parameter may be a value describing the head motion that is input by a third data collector, and the third data collector may be one of the sensing devices in the embodiment corresponding to FIG. 1); subsequently, the data processing apparatus may determine the input interval duration of the second data collector according to the second frame rate, and calculate the complement frame interval duration corresponding to the second complement frame parameter according to the input timestamp of the target first original signal frame, the input timestamp of the target second original signal frame, and the input interval duration; finally, the data processing device may generate the supplementary signal frame according to the target first original signal frame, the second complement frame parameter, and the complement frame interval duration.
  • The target first original signal frame may be the first original signal frame with an input timestamp of 50 ms as shown in FIG. 6, and the target second original signal frame may be the second original signal frame with an input timestamp of 48 ms as shown in FIG. 6. The VR helmet can further acquire the velocity value V2 of the head motion collected by the third data collector (for example, a motion sensor); V2 is the second complement frame parameter, and the second complement frame parameter can likewise be used to describe the movement of the head.
  • Likewise, the VR helmet may calculate the complement frame interval duration according to the input timestamp (50 ms) of the target first original signal frame, the input timestamp (48 ms) of the target second original signal frame, and the input interval duration (4 ms); the input time difference is the difference between the input timestamp of the first original signal frame and the input timestamp of the second original signal frame.
  • The VR helmet can then reuse the target first original signal frame in the first input queue together with the velocity value V2 of the head motion and the complement frame interval duration, and generate the supplementary signal frame to be input as shown in FIG. 6. The supplementary signal frame can be expressed as: P = P⁻ + V2 · Δt', where P is the generated supplementary signal frame, P⁻ is the target first original signal frame in the first input queue, V2 is the second complement frame parameter, and Δt' is the complement frame interval duration.
  • Step S204 if the first frame rate of the first data collector is smaller than the second frame rate of the second data collector, determining an input timestamp of the supplementary signal frame according to the second frame rate;
  • Specifically, the data processing apparatus may calculate, according to the second frame rate, the input timestamp of the second original signal frame to be input to the second input queue, and use it as the input timestamp of the supplementary signal frame. Optionally, the data processing apparatus may also determine the input timestamp of the supplementary signal frame according to the input timestamp of the target first original signal frame and the complement frame interval duration. In the example of FIG. 6, the input timestamp of the supplementary signal frame is 52 ms (ie, the input timestamp of the target second original signal frame, 48 ms, plus the input interval duration, 4 ms).
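  • Both ways of determining this timestamp amount to a one-line computation; the second variant uses the complement frame interval duration Δt' = 2 ms reconstructed above, which is an interpretation rather than an explicit formula from the patent.

```python
# Way 1: from the second frame rate, i.e. the timestamp of the second original signal
# frame to be input next into the second input queue.
t_supplementary_ms = 48 + 4        # 52 ms

# Way 2: from the target first original signal frame plus the complement frame interval.
t_supplementary_ms_alt = 50 + 2    # 52 ms as well
assert t_supplementary_ms == t_supplementary_ms_alt
```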
  • Step S205 if the current time reaches the input timestamp of the supplementary signal frame, input the supplementary signal frame to the first input queue of the first data collector, and acquire the second original signal frame currently input by the second data collector to the second input queue;
  • If the current time reaches the input timestamp of the supplementary signal frame, the supplementary signal frame generated in step S203 may be input to the first input queue, and the second original signal frame currently input to the second input queue may be acquired, so as to further perform step S206.
  • Step S206 performing signal fusion on the supplementary signal frame in the first input queue and the second original signal frame currently input to the second input queue;
  • For the specific implementation of step S205 to step S206, reference may be made to the description of step S102 to step S103 in the embodiment corresponding to FIG. 2; for the specific process of fusing the supplementary signal frame with the second original signal frame, reference may be made to the description of signal fusion in the embodiment corresponding to FIG. 3. Details are not repeated here.
  • Step S207 the supplementary signal frame in the first input queue is used as a target first original signal frame of the first data collector, and a target supplementary signal frame is generated according to the target first original signal frame;
  • For the specific implementation of step S207, reference may be made to step S101 in the embodiment corresponding to FIG. 2 above and to the specific description of generating the supplementary signal frame in the embodiment corresponding to FIG. 6; details are not repeated here.
  • FIG. 7 is a schematic diagram of calculating an input timestamp of a target supplemental signal frame according to an embodiment of the present application.
  • In the embodiment corresponding to FIG. 7, the historical first original signal frame is the first original signal frame collected by the first data collector in the embodiment corresponding to FIG. 6, and the target first original signal frame is the supplementary signal frame in the embodiment corresponding to FIG. 6 (ie, the supplementary signal frame with the input timestamp of 52 ms). The historical second original signal frame is the second original signal frame collected by the second data collector in the embodiment corresponding to FIG. 6, and the target second original signal frame is the second original signal frame to be input in the embodiment corresponding to FIG. 6 (ie, the second original signal frame with the input timestamp of 52 ms).
  • The data processing apparatus can determine, according to the input timestamp (52 ms) and the input interval duration (Tb), the input timestamp (ie, 56 ms) of the second original signal frame to be input into the second input queue, and take this input timestamp as the input timestamp of the target supplementary signal frame to be input into the first input queue.
  • Optionally, the input timestamp of the target supplementary signal frame may also be calculated according to the sum of the input timestamp (52 ms) of the target first original signal frame and the complement frame interval duration (Δt'), which likewise determines the input timestamp (ie, 56 ms) of the target supplementary signal frame to be input into the first input queue.
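  • Step S207 thus chains the process: each supplementary signal frame becomes the next target first original signal frame, and the next target supplementary signal frame is scheduled one input interval (Tb) later. A small loop illustrates this scheduling (illustration only, with Tb = 4 ms as in the example).

```python
Tb_ms = 4                 # input interval duration of the second data collector
t_target_first_ms = 52    # the supplementary frame just input, now the target first frame

# Schedule the next few target supplementary signal frames at 56 ms, 60 ms, 64 ms, ...
schedule = []
for _ in range(3):
    t_target_first_ms += Tb_ms        # 52 -> 56 -> 60 -> 64
    schedule.append(t_target_first_ms)
print(schedule)           # [56, 60, 64]
```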
  • Step S208 when the current time reaches the input timestamp of the target supplementary signal frame, the target supplemental signal frame in the first input queue and the target second original signal frame in the second input queue are performed. Signal fusion.
  • For the specific implementation of step S208, reference may be made to the description of signal fusion in step S103 in the embodiment corresponding to FIG. 2; details are not described herein again.
  • In the embodiment of the present application, when the first frame rate of the first data collector is smaller than the second frame rate of the second data collector, the supplementary signal frame is generated according to the first original signal frame of the first data collector, and the input timestamp of the supplementary signal frame is determined according to the second frame rate; if the current time reaches the input timestamp of the supplementary signal frame, the supplementary signal frame is input to the first input queue of the first data collector, and the second original signal frame currently input by the second data collector to the second input queue is acquired; signal fusion is then performed on the supplementary signal frame in the first input queue and the second original signal frame currently input to the second input queue.
  • In other words, the supplementary signal frame generated based on the first original signal frame can be used as an input signal of the first data collector (ie, after the frame-complementing process, the supplementary signal frame is input to the first input queue), so that the supplementary signal frame is signal-fused with the target input signal of the second data collector (ie, the second original signal frame currently input into the second input queue). Since the input timestamp of the supplementary signal frame is determined based on the second frame rate of the second data collector, it can be ensured that each target second original signal frame of the second data collector has a corresponding signal frame (for example, a supplementary signal frame) with which it can be corrected, thereby further improving the measurement accuracy while outputting an estimate at a higher frame rate.
  • an intelligent terminal is further provided.
  • the internal structure of the smart terminal can be as shown in FIG. 1a.
  • The smart terminal includes a data processing device, the data processing device includes various modules, and each module may be implemented in whole or in part by software, hardware, or a combination thereof.
  • FIG. 8 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
  • the data processing apparatus 1 can be applied to a smart terminal, and the smart terminal may include: a smartphone, a tablet computer, a desktop computer, a smart TV, VR glasses, a VR glove, a VR helmet, an augmented reality device (such as AR glasses), or an artificial intelligence robot.
  • the data processing apparatus 1 may include: a supplementary frame generation module 10, a timestamp determination module 20, a signal frame input module 30, a fusion module 40, and a signal frame acquisition module 50;
  • the supplementary frame generating module 10 is configured to generate a supplementary signal frame according to the first original signal frame of the first data collector if the first frame rate of the first data collector is smaller than the second frame rate of the second data collector;
  • specifically, the supplementary frame generating module 10 is configured to extract, from the first input queue of the first data collector, the target first original signal frame corresponding to the first data collector, and to generate, according to the target first original signal frame, the supplementary signal frame corresponding to the first input queue; the target first original signal frame is the first original signal frame having the largest input timestamp in the first input queue, and the value of the supplemental signal frame is the same as the value of the target first original signal frame.
  • optionally, the supplementary frame generating module 10 includes: a first extracting unit 101, a parameter estimating unit 102, a first calculating unit 103, and a first generating unit 104; further, the supplementary frame generating module 10 may also include: a second extracting unit 105, a parameter obtaining unit 106, a second calculating unit 107, and a second generating unit 108;
  • the first extracting unit 101 is configured to extract, from the first input queue of the first data collector, the historical first original signal frame and the input timestamp of the target first original signal frame, and to extract, from the second input queue of the second data collector, the input timestamp of the target second original signal frame;
  • the target first original signal frame is the first original signal frame having the largest input timestamp in the first input queue;
  • the target second original signal frame is the second original signal frame having the largest input timestamp in the second input queue;
  • the parameter estimating unit 102 is configured to estimate a first complement frame parameter according to the historical first original signal frame;
  • the first calculating unit 103 is configured to determine the input interval duration of the second data collector according to the second frame rate, and to calculate, according to the input timestamp of the target first original signal frame, the input timestamp of the target second original signal frame, and the input interval duration, the complement frame interval duration corresponding to the first complement frame parameter;
  • the first generating unit 104 is configured to generate a supplementary signal frame according to the target first original signal frame, the first complement frame parameter, and the complement frame interval duration; a sketch of this computation follows.
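A hedged sketch of what units 102–104 compute, assuming a uniform-velocity motion model and treating the first complement frame parameter as a velocity estimate V1 obtained by finite differencing (names such as `estimate_first_complement_parameter` are illustrative, not from the patent):

```python
def estimate_first_complement_parameter(history):
    """Estimate V1 from historical first original signal frames.
    `history` is a list of (timestamp_ms, value) pairs; a simple finite difference is used."""
    (t_prev, p_prev), (t_last, p_last) = history[-2], history[-1]
    return (p_last - p_prev) / (t_last - t_prev)


def complement_frame_interval(target_first_ts, target_second_ts, second_input_interval):
    """Delta t = input interval duration of the second collector minus the input time difference."""
    return second_input_interval - (target_first_ts - target_second_ts)


def generate_supplemental_frame(target_first_frame, v1, delta_t):
    """P = P_prev + V1 * delta_t, i.e. extrapolation under the uniform-velocity assumption."""
    return target_first_frame + v1 * delta_t


# FIG. 6 example: target first frame at 50 ms, target second frame at 48 ms, Tb = 4 ms.
delta_t = complement_frame_interval(50, 48, 4)   # -> 2 ms
```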
  • when the first extracting unit 101, the parameter estimating unit 102, the first calculating unit 103, and the first generating unit 104 in the supplementary frame generating module 10 are used to generate the supplementary signal frame, the second extracting unit 105, the parameter obtaining unit 106, the second calculating unit 107, and the second generating unit 108 are not used to generate the supplemental signal frame.
  • conversely, when the second extracting unit 105, the parameter obtaining unit 106, the second calculating unit 107, and the second generating unit 108 in the supplementary frame generating module 10 are used to generate the supplementary signal frame, the first extracting unit 101, the parameter estimating unit 102, the first calculating unit 103, and the first generating unit 104 are not used to generate the supplemental signal frame.
  • the second extracting unit 105 is configured to extract the input timestamp of the target first original signal frame from the first input queue of the first data collector, and to extract the input timestamp of the target second original signal frame from the second input queue of the second data collector;
  • the target first original signal frame is the first original signal frame having the largest input timestamp in the first input queue;
  • the target second original signal frame is the second original signal frame having the largest input timestamp in the second input queue;
  • the parameter obtaining unit 106 is configured to acquire a second complement frame parameter;
  • the second calculating unit 107 is configured to determine the input interval duration of the second data collector according to the second frame rate, and to calculate, according to the input timestamp of the target first original signal frame, the input timestamp of the target second original signal frame, and the input interval duration, the complement frame interval duration corresponding to the second complement frame parameter;
  • the second generating unit 108 is configured to generate a supplementary signal frame according to the target first original signal frame, the second complement frame parameter, and the complement frame interval duration.
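The only difference from units 101–104 is the source of the complement parameter: here it is supplied externally (for example, by a third data collector) rather than estimated from history. Continuing the sketch above, with the same caveat that the names are assumptions:

```python
def generate_supplemental_frame_external(target_first_frame, v2_external, delta_t):
    """Same extrapolation as before, but V2 comes from an external source (e.g. a motion sensor)
    instead of being estimated from the historical first original signal frames."""
    return target_first_frame + v2_external * delta_t
```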
  • the timestamp determining module 20 is configured to determine the input timestamp of the supplementary signal frame according to the second frame rate if the first frame rate of the first data collector is smaller than the second frame rate of the second data collector;
  • specifically, the timestamp determining module 20 is configured to calculate, according to the second frame rate, the input timestamp of the second original signal frame to be input into the second input queue, and to use it as the input timestamp of the supplementary signal frame;
  • optionally, the timestamp determining module 20 is configured to determine the input timestamp of the supplementary signal frame according to the input timestamp of the target first original signal frame and the complement frame interval duration.
  • the signal frame input module 30 is configured to: if the current time reaches the input timestamp of the supplementary signal frame, input the supplementary signal frame into the first input queue of the first data collector, and obtain the second original signal frame currently input by the second data collector into the second input queue;
  • the merging module 40 is configured to perform signal fusion between the supplementary signal frame in the first input queue and the second original signal frame in the second input queue.
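The patent does not fix a particular fusion algorithm for the merging module 40; purely as an illustration of the call shape used in the earlier sketches, the `fuse` callback could be a fixed-weight blend (a stand-in, not the estimator an actual positioning system would use):

```python
def fuse(supplemental_frame, second_frame, weight=0.5):
    """Placeholder fusion: fixed-weight blend of the two time-aligned frames.
    A real system would typically use a Kalman-style filter; this only shows the interface."""
    return weight * supplemental_frame + (1.0 - weight) * second_frame
```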
  • the signal frame obtaining module 50 is configured to acquire the target first original signal frame in the first input queue and the target second original signal frame in the second input queue;
  • the target first original signal frame is the first original signal frame having the largest input timestamp in the first input queue;
  • the target second original signal frame is the second original signal frame having the largest input timestamp in the second input queue;
  • for the specific implementations of the supplementary frame generating module 10, the timestamp determining module 20, the signal frame input module 30, the merging module 40, and the signal frame acquiring module 50, refer to the description of steps S201 to S205 in the embodiment corresponding to FIG. 4; details are not repeated here.
  • the merging module 40 is further configured to: if the input timestamp of the target second original signal frame is smaller than the input timestamp of the target first original signal frame, perform signal fusion between the target first original signal frame of the first data collector and the target second original signal frame of the second data collector when the current time reaches the input timestamp of the target first original signal frame; this condition is expressed compactly below.
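A small illustrative check of that scheduling condition (hypothetical helper name and millisecond timestamps assumed):

```python
def should_fuse_originals(target_first_ts: int, target_second_ts: int, now_ms: int) -> bool:
    """Fuse the two original frames directly when the second (high-rate) frame is older
    and the current time has reached the input timestamp of the first (low-rate) frame."""
    return target_second_ts < target_first_ts and now_ms >= target_first_ts


# FIG. 5 example: target second frame at 48 ms, target first frame at 50 ms.
assert should_fuse_originals(50, 48, 50) is True   # fuse once the clock reaches 50 ms
assert should_fuse_originals(50, 48, 49) is False  # too early
```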
  • the supplementary frame generating module 10 is further configured to take the supplementary signal frame in the first input queue as the target first original signal frame of the first data collector, and to generate a target supplemental signal frame according to that target first original signal frame;
  • the merging module 40 is further configured to: when the current time reaches the input timestamp of the target supplementary signal frame, perform signal fusion between the target supplemental signal frame in the first input queue and the target second original signal frame in the second input queue.
  • in the embodiments of this application, when the first frame rate of the first data collector is smaller than the second frame rate of the second data collector, a supplementary signal frame is generated according to the first original signal frame of the first data collector, and the input timestamp of the supplementary signal frame is determined according to the second frame rate; if the current time reaches the input timestamp of the supplementary signal frame, the supplementary signal frame is input into the first input queue of the first data collector and the second original signal frame currently input by the second data collector into the second input queue is obtained; the supplementary signal frame in the first input queue is then signal-fused with that currently input second original signal frame.
  • in this way, the supplementary signal frame generated from the first original signal frame can be used as an input signal of the first data collector (i.e., the supplemental signal frame obtained through frame supplementation is input into the first input queue), so that the supplemental signal frame is signal-fused with the target input signal of the second data collector (i.e., the second original signal frame currently input into the second input queue).
  • moreover, based on the second frame rate of the second data collector, it is ensured that for every target second original signal frame input by the second data collector there is a corresponding signal frame (for example, a supplementary signal frame) to correct it, so that measurement accuracy is further improved while an estimate at a higher frame rate is output.
  • FIG. 9 is a schematic structural diagram of another data processing apparatus according to an embodiment of the present application.
  • the data processing apparatus 1000 can be applied to the smart terminal 2000 in the embodiment corresponding to FIG. 1; the data processing apparatus 1000 can include: a processor 1001, a network interface 1004, a memory 1005, a first data collector 1006, and a second data collector 1007, where the first data collector 1006 and the second data collector 1007 can be permanently or removably attached to the smart terminal.
  • the data processing apparatus 1000 can further include: a user interface 1003 and at least one communication bus 1002, where the communication bus 1002 is used to implement connection and communication between these components.
  • the user interface 1003 may include a display.
  • the user interface 1003 may further include a standard wired interface and a wireless interface.
  • the network interface 1004 can optionally include a standard wired interface or a wireless interface (such as a WI-FI interface).
  • the memory 1005 may be a high-speed RAM memory, or a non-volatile memory such as at least one disk memory.
  • the memory 1005 can also optionally be at least one storage device located remotely from the aforementioned processor 1001; as shown in FIG. 9, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
  • the memory includes a nonvolatile storage medium and an internal memory.
  • the non-volatile storage medium of the computer device can store operating system and computer readable instructions. When the computer readable instructions are executed, the processor can be caused to perform a data processing method suitable for the smart terminal.
  • in the data processing apparatus 1000 shown in FIG. 9, the network interface 1004 can provide a network communication function, the user interface 1003 is mainly used to provide an input interface for the user, and the processor 1001 can be used to invoke the device control application program stored in the memory 1005, so as to implement the following:
  • if the first frame rate of the first data collector is smaller than the second frame rate of the second data collector, generating a supplementary signal frame according to the first original signal frame of the first data collector, and determining the input timestamp of the supplementary signal frame according to the second frame rate;
  • if the current time reaches the input timestamp of the supplementary signal frame, inputting the supplementary signal frame into the first input queue of the first data collector, and obtaining the second original signal frame currently input by the second data collector into the second input queue;
  • performing signal fusion between the supplementary signal frame in the first input queue and the second original signal frame currently input into the second input queue.
  • it should be understood that the data processing apparatus 1000 described in this embodiment of the present application may perform the data processing method described in the embodiment corresponding to FIG. 2 or FIG. 4, and may also perform the description of the data processing apparatus 1 in the embodiment corresponding to FIG. 8; details are not repeated here.
  • the description of the beneficial effects of using the same method is likewise not repeated.
  • the embodiment of the present application further provides a computer storage medium, and the computer storage medium stores the computer program executed by the data processing device 1 mentioned above; the computer program includes program instructions, and when the processor executes the program instructions, it can perform the data processing method described in the embodiment corresponding to FIG. 2 or FIG. 4, so no further description is given here.
  • the description of the beneficial effects of using the same method is likewise not repeated.
  • for technical details not disclosed in the computer storage medium embodiment involved in the present application, refer to the description of the method embodiments of the present application.
  • it should be understood that the steps in the embodiments of the present application are not necessarily performed in the order indicated by the step numbers. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and the steps may be performed in other orders. Moreover, at least some of the steps in the embodiments may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; the execution order of these sub-steps or stages is also not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
  • Non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Television Signal Processing For Recording (AREA)
  • Arrangements For Transmission Of Measured Signals (AREA)
  • Image Analysis (AREA)
  • Synchronisation In Digital Transmission Systems (AREA)

Abstract

A data processing method, a computer device, and a storage medium. The method includes: when the first frame rate of a first data collector in a terminal is smaller than the second frame rate of a second data collector, generating a supplementary signal frame according to the first original signal frame of the first data collector, and determining the input timestamp of the supplementary signal frame according to the second frame rate; when the current time reaches the input timestamp of the supplementary signal frame, inputting the supplementary signal frame into the first input queue of the first data collector, and obtaining the second original signal frame currently input by the second data collector into the second input queue; and performing signal fusion between the supplementary signal frame in the first input queue and the second original signal frame in the second input queue.

Description

数据处理方法、计算机设备和存储介质
本申请要求于2017年10月25日提交中国专利局,申请号为2017110087700,申请名称为“一种数据处理方法和装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及互联网技术领域,尤其涉及一种数据处理方法、计算机设备和存储介质。
背景技术
随着科学技术的快速发展,衍生出了各式各样的传感设备(如:加速度计、陀螺仪、视觉图像传感器、惯性传感器和雷达等)。这些传感器可融合应用于各类可移动设备(比如,智能机器人,虚拟现实和增强现实设备)中以提供定位导航功能。比如,在机器人定位导航过程中,可以通过陀螺仪和雷达等传感器所采集到的传感信号之间的信号融合,来估计机器人的位置信息。
目前,由于各类传感器均按照一定的帧率输出这些离散的传感信号(比如,图像传感器输出传感信号的帧率为30Hz,惯性传感器输出传感信号的帧率通常大于100Hz,例如,500Hz),因此,可将这些传感信号(即惯性传感器信号和图像传感器信号)输入到信号处理器中进行传感器信号融合。
为了输出较高的帧率,往往考虑以高帧率的传感器信号帧率输出,但是由于低帧率的图像传感器的输出帧率小于高帧率的惯性传感器的输出帧率,因此,在低帧率的图像传感器的任意两个图像传感器信号的输入间隔时长内,可以有多个高帧率的惯性传感器的惯性传感器信号。于是,在该输入间隔时长内进行信号融合的过程中,缺少用于校正该高帧率传感信号输入的低帧率传感信号,进而无法得到准确的估计量,从而导致估计出的位置信息严重偏离真实值。
发明内容
根据本申请提供的各种实施例,提供一种数据处理方法、计算机设备和存 储介质。
一种数据处理方法,包括:
当终端中的第一数据采集器的第一帧率小于第二数据采集器的第二帧率,则终端根据所述第一数据采集器的第一原始信号帧生成补充信号帧,终端并根据所述第二帧率确定所述补充信号帧的输入时间戳;
当当前时间达到所述补充信号帧的输入时间戳时,终端将所述补充信号帧输入到所述第一数据采集器的第一输入队列,终端并获取所述第二数据采集器当前输入到第二输入队列的第二原始信号帧;
终端将所述第一输入队列中的所述补充信号帧与当前输入到所述第二输入队列的第二原始信号帧进行信号融合。
其中,所述终端根据所述第一数据采集器的第一原始信号帧生成补充信号帧,包括:
终端在所述第一数据采集器的所述第一输入队列中提取所述第一数据采集器对应的目标第一原始信号帧,并根据所述目标第一原始信号帧生成所述第一输入队列对应的补充信号帧;当所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧时,所述补充信号帧的值与所述目标第一原始信号帧的值相同。
其中,所述终端根据所述第一数据采集器的第一原始信号帧生成补充信号帧,包括:
终端在所述第一数据采集器的第一输入队列中提取历史第一原始信号帧以及目标第一原始信号帧的输入时间戳,并在所述第二数据采集器的第二输入队列中提取目标第二原始信号帧的输入时间戳;当所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧时,所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧;
终端根据所述历史第一原始信号帧估算第一补帧参量;
终端根据所述第二帧率确定第二数据采集器的输入间隔时长,并根据所述目标第一原始信号帧的输入时间戳、所述目标第二原始信号帧的输入时间戳以及所述输入间隔时长,计算与所述第一补帧参量对应的补帧间隔时长;
终端根据所述目标第一原始信号帧、所述第一补帧参量和所述补帧间隔时长,生成补充信号帧。
其中,所述终端根据所述第一数据采集器的第一原始信号帧生成补充信号帧,包括:
终端在所述第一数据采集器的第一输入队列中提取目标第一原始信号帧的输入时间戳,并在所述第二数据采集器的第二输入队列中提取目标第二原始信号帧的输入时间戳;当所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧时,所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧;
终端获取第二补帧参量;
终端根据所述第二帧率确定第二数据采集器的输入间隔时长,并根据所述目标第一原始信号帧的输入时间戳、所述目标第二原始信号帧的输入时间戳以及所述输入间隔时长,计算与所述第二补帧参量对应的补帧间隔时长;
终端根据所述目标第一原始信号帧、所述第二补帧参量和所述补帧间隔时长,生成补充信号帧。
其中,所述终端根据所述第二帧率确定所述补充信号帧的输入时间戳,包括:
终端根据所述第二帧率计算待输入到所述第二输入队列的第二原始信号帧的输入时间戳,作为所述补充信号帧的输入时间戳。
其中,所述终端根据所述第二帧率确定所述补充信号帧的输入时间戳,包括:
终端根据所述目标第一原始信号帧的输入时间戳和所述补帧间隔时长确定所述补充信号帧的输入时间戳。
其中,终端在所述若第一数据采集器的第一帧率小于第二数据采集器的第二帧率,则根据所述第一数据采集器的第一原始信号帧生成补充信号帧,并根据所述第二帧率确定所述补充信号帧的输入时间戳之前,还包括:
终端获取第一输入队列中的目标第一原始信号帧以及第二输入队列中的目标第二原始信号帧;当所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧时,所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧;
当所述目标第二原始信号帧的输入时间戳小于所述目标第一原始信号帧的输入时间戳时,在当当前时间达到所述目标第一原始信号帧的输入时间戳时, 终端将所述第一数据采集器中的所述目标第一原始信号帧和所述第二数据采集器中的所述目标第二原始信号帧进行信号融合。
其中,所述方法还包括:
终端将所述第一输入队列中的所述补充信号帧作为所述第一数据采集器的目标第一原始信号帧,并根据所述目标第一原始信号帧生成目标补充信号帧;
终端在当当前时间达到所述目标补充信号帧的输入时间戳时,将所述第一输入队列中的所述目标补充信号帧与所述第二输入队列中的目标第二原始信号帧进行信号融合。
一种计算机设备,包括存储器和处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行如下步骤:
若第一数据采集器的第一帧率小于第二数据采集器的第二帧率,则根据所述第一数据采集器的第一原始信号帧生成补充信号帧,并根据所述第二帧率确定所述补充信号帧的输入时间戳;
若当前时间达到所述补充信号帧的输入时间戳,则将所述补充信号帧输入到所述第一数据采集器的第一输入队列,并获取所述第二数据采集器当前输入到第二输入队列的第二原始信号帧;
将所述第一输入队列中的所述补充信号帧与当前输入到所述第二输入队列的第二原始信号帧进行信号融合。
一个或多个存储有计算机可读指令的非易失性存储介质,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行如下步骤:
若第一数据采集器的第一帧率小于第二数据采集器的第二帧率,则根据所述第一数据采集器的第一原始信号帧生成补充信号帧,并根据所述第二帧率确定所述补充信号帧的输入时间戳;
若当前时间达到所述补充信号帧的输入时间戳,则将所述补充信号帧输入到所述第一数据采集器的第一输入队列,并获取所述第二数据采集器当前输入到第二输入队列的第二原始信号帧;
将所述第一输入队列中的所述补充信号帧与当前输入到所述第二输入队列 的第二原始信号帧进行信号融合。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的一种网络架构的结构示意图;
图1a是本申请实施例提供的一种智能终端的内部结构图;
图2是本申请实施例提供的一种数据处理方法的流程示意图;
图3是本申请实施例提供的一种融合信号帧的示意图;
图4是本申请实施例提供的另一种数据处理方法的流程示意图;
图5是本申请实施例提供的融合信号帧的示意图;
图6是本申请实施例提供的一种生成补充信号帧的示意图;
图7是本申请实施例提供的一种计算目标补充信号帧的输入时间戳的示意图;
图8是本申请实施例提供的一种数据处理装置的结构示意图;
图9是本申请实施例提供的另一种数据处理装置的结构示意图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
请参见图1,是本申请实施例提供的一种网络架构的结构示意图。如图1所示,所述网络架构可以包括智能终端2000以及数据采集器集群;所述数据采集器集群可以包括多个传感器,如图1所示,具体包括传感器3000a、传感器 3000b、…、传感器3000n;
所述传感器3000a、传感器3000b、…、传感器3000n可以分别与所述智能终端2000进行网络连接。这些传感器(比如,传感器3000a、传感器3000d、传感器3000e…、传感器3000n)可以集成于该智能终端中,可选的,这些传感器(比如,传感器3000b、传感器3000c)也可以作为独立的采集设备而附在该智能终端上。
如图1所示,所述智能终端2000可用于接收各传感器所采集到的传感信号,并对这些传感信号进行解析,以得到与各传感器的原始信号帧分别对应的输入时间戳。其中,各传感信号均为离散时间信号,且各传感器均按一定的帧率分别输出这些离散时间信号,因此,该智能终端可以在不同的输入时间戳时接收到各传感器所输入的传感信号,并基于低帧率传感器的输入信号(即第一原始信号帧)生成补充信号帧,以对低帧率传感器的输入信号进行补充输入,并在当前时间达到该补充信号帧的输入时间戳时,将该补充信号帧与高帧率传感器的输入信号(即目标第二原始信号帧,所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧)进行信号融合,以确保在输出高帧率估计量的同时,还可提高测量精度,以进一步得到准确的位置信息。
比如,以该智能终端2000为虚拟现实(VR,virtual reality)头盔为例,该VR头盔可以同时与多个传感器(例如,可以与图1所示的传感器3000a,传感器3000b和传感器3000c)进行网络连接。其中,传感器3000a的输出帧率(即第一帧率)为aHz,传感器3000b的输出帧率(即第二帧率)为bHz,传感器3000c的输出帧率(即第三帧率)为cHz,且这三个输出帧率的大小关系满足:第一帧率小于第二帧率,第二帧率小于第三帧率,即传感器3000c为最高帧率传感器,传感器3000b为较低帧率传感器,传感器3000a为最低帧率传感器。因此,其它两个传感器(3000a和3000b)的输入信号(补充信号帧)在与最高帧率传感器(3000c)的输入信号(目标第三原始信号帧,所述目标第三原始信号帧为第三输入队列中具有最大的输入时间戳的第三原始信号帧)进行信号融合之前,需要预先对传感器3000a和传感器3000b的输入信号分别进行补帧处理,以得到待输入到该传感器3000a的第一输入队列中的补充信号帧A,并得到待输入到该传感器3000b的第二输入队列中的补充信号帧B。随后,该VR头盔可进一步在当前时间到达该目标第三原始信号帧的输入时间戳时,将补充信号 帧A,补充信号帧B以及目标第三原始信号帧进行同步融合,以确保在输出高帧率估计量的同时,还可提高该VR头盔的测量精度,以进一步得到准确的位置信息。
其中,所述智能终端2000生成所述补充信号帧以及信号融合的具体过程可以参见如下图2至图7对应的实施例。
在一个实施例中,如图1a所示,提供了一种智能终端的内部结构图。该智能终端包括通过系统总线连接的处理器、存储器、网络接口、输入装置和显示屏。其中,存储器包括非易失性存储介质和内存储器。该计算机设备的非易失性存储介质存储有操作系统,还可存储有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理器实现数据处理方法。该内存储器中也可储存有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理器执行数据处理方法。计算机设备的显示屏可以是液晶显示屏或者电子墨水显示屏,计算机设备的输入装置可以是显示屏上覆盖的触摸层,也可以是计算机设备外壳上设置的按键、轨迹球或触控板,还可以是外接的键盘、触控板或鼠标等。本领域技术人员可以理解,图1a中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的智能终端的限定,具体的智能终端可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
进一步的,请参见图2,是本申请实施例提供的一种数据处理方法的流程示意图。如图2所示,所述方法可以包括:
步骤S101,若第一数据采集器的第一帧率小于第二数据采集器的第二帧率,则根据所述第一数据采集器的第一原始信号帧生成补充信号帧,并根据所述第二帧率确定所述补充信号帧的输入时间戳;
具体地,若数据处理装置检测到第一数据采集器的第一帧率小于第二数据采集器的第二帧率,则可进一步根据所述第一数据采集器的目标第一原始信号帧生成补充信号帧,该补充信号帧的值可与目标第一原始信号帧的值相同,即所述目标第一原始信号帧可以为第一输入队列中具有最大的输入时间戳的第一原始信号帧,换言之,该目标第一原始信号帧可以为最新输入到该数据处理装置中的第一原始信号帧;可选的,所述数据处理装置还可根据所述第一数据采 集器的历史第一原始信号帧和目标第一原始信号帧生成补充信号帧;可选的,所述数据处理装置还可根据外界额外输入的信号和目标第一原始信号帧生成补充信号帧。随后,所述数据处理装置可进一步根据第二帧率确定所述补充信号帧的输入时间戳。
应当理解,在生成所述补充信号帧的过程中,所述第一原始信号帧可以包括历史第一原始信号帧和目标第一原始信号帧。其中,所述目标第一原始信号帧可以为第一输入队列中具有最大的输入时间戳的第一原始信号帧,即该第一输入队列中最新输入的第一原始信号帧。所述历史第一原始信号帧可以为已输入到第一输入队列中的第一数据采集器(比如:上述图1所对应实施例中的传感器3000a)自身所采集到的传感信号。可选的,所述历史第一原始信号帧还可以为已输入到第一输入队列中用于对该第一数据采集器的输入信号进行补充输入的补充信号帧。
其中,所述数据处理装置可以集成应用于任何一种智能终端(例如,上述图1所对应实施例中的智能终端2000),此外,所述智能终端可以包括:智能手机、平板电脑、桌上型电脑、智能电视、VR眼镜、VR手套、VR头盔、增强现实(AR,Augmented Reality)设备(比如,AR眼镜)或者人工智能机器人。此外,所述智能终端可以包括用户操作界面,如键盘、鼠标、操纵杆、触摸屏或者显示器,以便于任何适合的用户输入可以与该智能终端进行交互,比如,可通过手动输入命令、声音控制、手势控制或者方位信息等控制该智能终端。
此外,该智能终端可以与多个数据采集器(例如,可以同时与上述图1所对应实施例中的多个传感器)相连,这些传感器可以集成于该智能终端中,也可以作为单独的显示设备而独立存在,即这些传感设备可以永久的或者可移除的附在该智能终端上。这些传感设备可以包括但不限于:GPS(Global Positioning System,全球定位系统)传感器、惯性传感器、位姿传感器、近距离传感器或者视觉图像传感器。因此,该智能终端可以接收这些传感器(比如,GPS传感器和位姿传感器)所采集到的传感信号(即可接收第一数据采集器采集到的第一原始信号帧以及第二数据采集器所采集到的第二原始信号帧),并对这些传感信号进行分析,以得到与各传感器的原始信号帧分别对应的输入时间戳。其中,所述输入时间戳是指从系统开机时计时所统计到的相对时间戳。其中,各传感信号均为离散时间信号,且各传感器均按一定的帧率分别输出这些离散时间信 号,因此,该智能终端可以在不同的输入时间戳时接收不同离散时间信号的输入。比如,可在输入时间戳为50ms时接收第一数据采集器所输入的第一原始信号帧,并在输入时间戳为48ms时接收第二数据采集器所输入的第二原始信号帧。其中,所述第一原始信号帧和第二原始信号帧可以为描述该智能终端的方位、方向、速度、或者加速度中的一种或者多种的离散时间信号。
其中,当第一数据采集器的输出帧率(即第一帧率)小于第二数据采集器的输出帧率(即第二帧率)时,应用于该智能终端中的所述该数据处理装置可对该第一数据采集器进行补帧处理,即该数据处理装置可对该第一数据采集器所输入的传感信号进行补充输入,即该数据处理装置可基于该第一数据传感器最新输入的传感信号(即具有最大输入时间戳的传感信号),生成该第一数据传感器的补充信号帧(即该补充信号帧可以用于与第二数据采集器的下一个传感信号进行信号融合)。因此,在生成该补充信号帧(新的传感信号)的过程中,可重复利用该第一数据采集器(即低帧率传感器)最新输入的第一原始信号帧(即该第一数据采集器的输入队列中的目标第一原始信号帧,所述目标第一原始信号帧为该输入队列中具有最大的输入时间戳的第一原始信号帧)生成补充信号帧,从而可及时有效地对该低帧率传感器的输入信号进行补充输入,以确保低帧率传感器的输出时间能对准高帧率传感器的输出时间。
为便于更好的理解本方案,本申请实施例仅以该智能终端与两个数据采集器相连为例,即该智能终端中的数据处理装置可用于接收第一数据采集器所输入的第一原始信号帧和第二数据采集器所输入的第二原始信号帧。其中,所述第一数据采集器可以为低帧率传感器(例如,输出帧率为30Hz的视觉图像传感器),所述第二数据采集器可以为高帧率传感器(例如,输出帧率为250Hz的惯性传感器),进而该数据处理装置可进一步执行步骤S101-步骤S103,以使低帧率传感器的输出时间对准高帧率传感器的输出时间,以得到更准确的融合估计量,从而可以得到该智能终端当前准确的方位和/或运动信息。当然,当存在三个或三个以上具备不同帧率的数据采集器同步将传感信号输入到该智能终端中时,该智能终端仍可重复利用低帧率传感器最新输入的原始信号帧(即可重复利用第一输入队列中具有最大输入时间戳的第一原始信号帧),生成与该最高帧率传感器对应的补充信号帧,从而可同步各传感器的输出时间,以便于与该最高帧率传感器进行信号融合。
步骤S102,若当前时间达到所述补充信号帧的输入时间戳,则将所述补充信号帧输入到所述第一数据采集器的第一输入队列,并获取所述第二数据采集器当前输入到第二输入队列的第二原始信号帧;
其中,所述第一输入队列,用于存储所述第一数据采集器所采集到的各离散时间信号(即历史第一原始信号帧)以及待输入的补充信号帧,若所述数据处理装置检测到当前时间达到所述补充信号帧的输入时间戳,则将该待输入的补充信号帧输入到所述第一数据采集器的第一输入队列。在该第一输入队列中,每个信号帧彼此对应着互不相同的输入时间戳。
其中,所述第二输入队列,用于存储已输入到所述数据处理装置中的历史第二原始信号帧以及待输入到所述数据处理装置中的第二原始信号帧。若当前时间达到所述补充信号帧的输入时间戳(即第二原始信号帧的输入时间戳),则所述第二数据采集器可将待输入的第二原始信号帧输入到该第二输入队列中。
因此,该数据处理装置在获取到所述第一数据采集器当前输入的补充信号帧的同时,还可同步获取所述第二数据采集器当前输入的第二原始信号帧,以确保当前时间达到该第二原始信号帧的输入时间戳时,存在一个与该第二原始信号帧匹配的补充信号帧,以便于该数据处理装置可进一步执行步骤S103。
其中,该数据处理装置可以将步骤S101中所描述的三种用于生成补充信号帧的情况中的任意一种情况下的具体生成过程,以及将该补充信号帧输入至该第一输入队列中的具体输入过程作为补帧处理过程。因此,通过该补帧处理过程,该数据处理装置可以确保在低帧率传感器的输出时间对准高帧率传感器的输出时间时,进一步执行步骤S103,以准确输出较高帧率的信号融合估计值。
步骤S103,将所述第一输入队列中的所述补充信号帧与当前输入到所述第二输入队列的第二原始信号帧进行信号融合。
具体的,所述数据处理装置在执行完上述步骤S102之后,可进一步将具备相同输入时间戳的补充信号帧和第二原始信号帧进行信号融合,以使融合输出的帧率达到高帧率传感器的输出帧率,从而可准确输出较高帧率的信号融合估计值(即融合后的估计量),该估计量可用于估计该智能终端(例如,人工智能机器人)当前的位置信息。即所述数据处理装置可将最新接收到的第二原始信号帧与最新接收到的补充信号帧进行信号融合,以得到用于描述该智能终端的方位和/或运动信息的估计量。
进一步的,请参见图3,是本申请实施例提供的一种融合信号帧的示意图。如图3所示,该智能终端为机器人A,且与该机器人A相连的两个数据采集器分别为第一数据采集器和第二数据采集器。其中,所述第一数据采集器可以为视觉图像传感器,该视觉图像传感器的第一帧率(即输出帧率)为FaHz(Fa=1/Ta,其中,Ta为该视觉图像传感器自身所采集的任意两个图像信号之间的输入间隔时长),该视觉图像传感器可用于实时采集图像信号,并以输入间隔时长Ta将采集到的各图像信号依次输入到第一输出队列。因此,该第一输入队列中可以存在如图3所示的输入时间戳为50ms的原始信号帧B1,该原始信号帧B1即为该第一输入队列中的目标第一原始信号帧,并可在一个输入间隔时长(比如,Ta=33ms)之后接收该视觉图像传感器自身所采集到的下一个第一原始信号帧。
如图3所示,所述第二数据采集器可以为惯性传感器,该惯性传感器可以包括:线加速度计和/或角速率陀螺仪,主要是用于检测和测量该机器人A的六自由度姿态信号,且该惯性传感器的输出帧率(即第二帧率)为FbHz(Fb=1/Tb,其中,Tb为该惯性传感器自身所采集的任意两个六自由度姿态信号之间的输入间隔时长),因此,该惯性传感器可以每隔一个输入间隔时长(Tb)将采集到的各六自由度姿态信号分别输入到该机器人A的数据处理装置中(比如,该惯性传感器可以以每隔4ms的输入间隔时长将采集到的第二原始信号帧输入到数据处理装置中)。其中,在该惯性传感器的第二输入队列中存在如图3所示的输入时间戳为48ms的原始信号帧b1,该原始信号帧b1为该第二输入队列中的目标第二原始信号帧,且根据该原始信号帧b1的输入时间戳以及输入间隔时长(4ms)可以得到待输入到该第二输入队列中的原始信号帧b2的输入时间戳52ms。
因此,为了使这两个传感器的传感信号在信号融合之后能得到更高的输出帧率,即得到更为准确的估计量。需要对上述低帧率的视觉图像传感器的输入信号进行补帧处理,即可直接重复利用输入时间戳为50ms的原始信号帧B1,即直接对该原始信号帧B1进行复制,以得到与该视觉图像传感器对应的补充信号帧(即如图3所示的补充信号帧C1),并可在当前时间达到原始信号帧b2的输入时间戳时,将当前输入到第二输入队列中的原始信号帧b2与该第一输入队列中的补充信号帧C1进行信号融合,以使这两个传感器的传感信号在进行信号融合后,得到融合输出的帧率达到高帧率传感器的输出帧率,以确保在输出较高帧率的估计量的同时,进一步提高测量精度,从而可以估计出更为准确的位 置信息。
可选的,若只有两个传感器,则可以将低帧率的传感器确定为第一数据采集器,将高帧率的传感器确定为第二数据采集器,以执行上述步骤S101-S103中对第一数据采集器和第二数据采集器的信号融合操作。
可选的,若存在2个或2个以上的传感器,则可以将所有传感器中的最高帧率的传感器确定为第二数据采集器,并将其他任意一个或一个以上的传感器确定为第一数据采集器。其中,所确定出的每个第一数据采集器都可以基于上述S101-S102步骤生成与第二数据采集器对应的补充信号帧,进而上述S103步骤可以理解为:将每个第一数据采集器分别对应的第一输入队列中的补充信号帧共同与当前输入到第二输入队列的第二原始信号帧进行信号融合。例如,若存在3个第一数据采集器,且3个第一数据采集器分别对应的补充信号帧为A、B、C,且当前输入到第二输入队列的第二原始信号帧为D,则可以将A、B、C与D进行4个信号的融合处理,以输出更为精确的估量值。因此,本申请实施例并不对进行信号融合的传感器的数量进行限制。
可选的,若存在3个或3个以上的传感器,则也可以将所有传感器中满足融合信号输出帧率要求且非最高帧率的传感器确定为第二数据采集器。例如,若融合信号输出帧率要求为大于或等于90Hz,且有3个传感器的帧率分别为30Hz、100Hz、300Hz,则可以选择100Hz的传感器作为第二数据采集器。进一步的,还可以在其他帧率小于第二数据采集器的帧率的传感器中,选择任意一个或一个以上的传感器作为第一数据采集器。其中,所确定出的每个第一数据采集器都可以基于上述S101-S102步骤生成与第二数据采集器对应的补充信号帧,进而上述S103步骤可以理解为:将每个第一数据采集器分别对应的第一输入队列中的补充信号帧共同与第二数据采集器最新输出的第二原始信号帧、其他帧率大于第二数据采集器的帧率的传感器最新输出的原始信号帧进行信号融合。例如,若融合信号输出帧率要求为大于或等于150Hz,且有4个传感器的帧率分别为30Hz、100Hz、300Hz、500Hz,则可以选择300Hz的传感器作为第二数据采集器,并将30Hz和100Hz对应的传感器均确定为第一数据采集器,且两个第一数据采集器分别对应的补充信号帧为A、B,第二数据采集器的最新的第二原始信号帧为C,500Hz对应的传感器的最新的原始信号帧为D,则可以将A、B、C与D进行4个信号的融合处理,以输出更为精确的估量值。
本申请实施例通过在第一数据采集器的第一帧率小于第二数据采集器的第二帧率时,根据所述第一数据采集器的第一原始信号帧生成补充信号帧,并根据所述第二帧率确定所述补充信号帧的输入时间戳;若当前时间达到所述补充信号帧的输入时间戳,则将所述补充信号帧输入到所述第一数据采集器的第一输入队列,并获取所述第二数据采集器当前输入到第二输入队列的第二原始信号帧;将所述第一输入队列中的所述补充信号帧与当前输入到所述第二输入队列的第二原始信号帧进行信号融合。由此可见,在当前时间达到所述补充信号帧的输入时间戳时,可将基于第一原始信号帧所生成的补充信号帧作为该第一数据采集器的输入信号(即将补帧处理后所得的补充信号帧输入到第一输入队列),从而可使补充信号帧与该第二数据采集器的目标输入信号(即当前输入到第二队列中的第二原始信号帧)进行信号融合。此外,基于该第二数据采集器的第二帧率,可确保该第二数据采集器每输入一个目标第二原始信号帧,都存在一个与其对应的信号帧(比如,补充信号帧)对其进行校正,以确保在输出较高帧率的估计量的同时,进一步提高测量精度。
进一步的,请参见图4,是本申请实施例提供的另一种数据处理方法的流程示意图。如图4所示,所述方法可以包括:
步骤S201,获取第一输入队列中的目标第一原始信号帧以及第二输入队列中的目标第二原始信号帧;
其中,在该步骤S201的执行过程中,所述第一输入队列,用于存储所述第一数据采集器所采集到的各离散时间信号(即历史第一原始信号帧)以及目标第一原始补信号帧(例如,上述图3所对应实施例中的原始信号帧B1)。可见,该目标第一原始信号帧可以为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧,即该目标第一原始信号帧可以为最新输入至该第一输入队列中的第一原始信号帧。
其中,所述第二输入队列,用于存储已输入到所述数据处理装置中的历史第二原始信号帧以及最新输入到所述数据处理装置中的第二原始信号帧(例如,上述图3所对应实施例中的原始信号帧b1),其中,所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧,即该目标第二原始信号帧可以为最新输入至该第二输入队列中的第二原始信号帧。
步骤S202,若所述目标第二原始信号帧的输入时间戳小于所述目标第一原 始信号帧的输入时间戳,则在当前时间达到所述目标第一原始信号帧的输入时间戳时,将所述第一数据采集器中的所述目标第一原始信号帧和所述第二数据采集器中的所述目标第二原始信号帧进行信号融合。
进一步的,请参见图5,是本申请实施例提供的另一种融合信号帧的示意图。如图4所示,该智能终端为VR头盔,与VR头盔相连的数据采集器分别为第一数据采集器和第二数据采集器。其中,所述第一数据采集器为位姿传感器,该位姿传感器的第一帧率(即输出帧率)为aHz,该位姿传感器可用于实时采集第一六自由度姿态信号(即第一原始信号帧),并将采集到的各第一原始信号帧输入到第一输入队列,因此,与该位姿传感器相连的VR头盔可从该第一输入队列中获取到如图5所示的输入时间戳为50ms的目标第一原始信号帧,并将在50ms之前已输入至该第一输入队列中的其余信号帧(即该位姿传感器自身所采集到的第一原始信号帧以及补充输入的补充信号帧)作为历史第一原始信号帧。所述第二数据采集器为惯性传感器,该惯性传感器的第二帧率为bHz,且该惯性传感器的第二帧率大于位姿传感器的第一帧率。该惯性传感器可用于检测和测量人体头部的第二六自由度姿态信号(即第二原始信号帧),并将采集到各第二原始信号帧输入到第二输入队列,因此,与该第二数据采集器相连的VR头盔可从该第二输入队列中获取到如图5所示的输入时间戳为48ms的目标第二原始信号帧,且可将在48ms之前已输入至该第二输入队列中的其余信号帧(即该惯性传感器自身所采集到的第二原始信号帧)作为历史第二原始信号帧。此时,该VR头盔所获取到的目标第二原始信号帧的输入时间戳(48ms)小于所述目标第一原始信号帧的输入时间戳(50ms),因此,该VR头盔可在检测到当前时间达到50ms时,将所述第一输入队列中目标第一原始信号帧和所述第二输入队列中目标第二原始信号帧进行信号融合,以准确预估当前时间时人体头部的头部姿态信息。
可见,该VR头盔可对这两个具有不同输出帧率的传感器的输入信号进行帧对帧的融合,即惯性传感器每输入一帧第二六自由度姿态信号,则位姿传感器可对应的将第一输入队列中具有最大的输入时间戳的第一六自由度姿态信号(即目标第一原始信号帧)输入到VR头盔中,以使输入到VR头盔中的目标第一原始信号帧能有效地与目标第二原始信号帧进行融合。由于与该VR头盔相连的两个传感器具有不同的输出帧率,因此,输入到该VR头盔中的两路原始信号 帧的输入间隔时长将存在不同(比如,惯性传感器可以每隔4ms输入一次第二六自由度姿态信号,位姿传感器可以每隔33ms输入一次第一六自由度姿态信号)。因此,在两路原始信号帧的输入时间戳不相同的情况下,下一个输入到该VR头盔中的两路原始信号帧的输入时间戳也势必存在不同。因此,为了能使每个输入到该VR头盔中第二原始信号帧能有一对应的第一原始信号帧能与其进行信号融合。该VR头盔可在执行完上述步骤S202之后,进一步对低帧率的传感器的输入信号进行补帧处理,即该VR头盔可进一步执行后续步骤S203-步骤S206,以进一步确保低帧率传感器的输出时间可以对准高帧率传感器的输入时间(具体的,如图5所示,待输入的补充信号帧和待输入的第二原始信号帧具备相同的输入时间戳),从而实现这两个传感器的输入信号帧的同步,进而可得到更为准确的估计量,以得到更为准确的头部位姿信息。
步骤S203,若第一数据采集器的第一帧率小于第二数据采集器的第二帧率,则根据所述第一数据采集器的第一原始信号帧生成补充信号帧;
进一步的,请参见图6,是本申请实施例提供的一种计算生成补充信号帧的示意图。如图6所示,所述第一数据采集器的第一输入队列中目标第一原始信号帧的输入时间戳为50ms,该第一数据采集器的输出帧率为aHz,且与该输出帧率对应的输入间隔时长为Ta。如图6所示,所述第二数据采集器的第二输入队列中目标第二原始信号帧的输入时间戳为48ms,该第二数据采集器的输出帧率为bHz,且与该输出帧率对应的输入间隔时长为Tb。此时,由于第一数据采集器的第一帧率小于第二数据采集器的第二帧率(即aHz小于bHz),故而需要对第一数据采集器的输入信号进行补帧处理,以生成与第一数据采集器对应的补充信号帧。
其中,生成所述补充信号帧的具体过程可以有如下三种情况。
第一种情况,在所述第一数据采集器的所述第一输入队列中提取所述第一数据采集器对应的目标第一原始信号帧,并根据所述目标第一原始信号帧生成所述第一输入队列对应的补充信号帧。其中,所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧;此时,所述补充信号帧的值与所述目标第一原始信号帧的值相同,即该数据处理装置可对该第一数据采集器最新采集到的第一原始信号帧进行复制,以得到与该目标第一原始信号帧具有相同值的补帧信号帧。
其中,所述目标第一原始信号帧可以为图6所示的输入时间戳为50ms的第一原始信号帧(比如,上述图5所对应实施例中第一数据采集器自身所采集到的第一六自由度姿态信号)。此时,该VR头盔可重复利用该第一输入队列中的目标第一原始信号帧,即该VR头盔可直接对该第一输入队列中输入时间戳为50ms的第一六自由度姿态信号的值进行复制,以得到与该第一数据采集器对应的补充信号帧。因此,该数据处理装置可进一步执行步骤S204,从而得到该补充信号帧的输入时间戳。
第二种情况,所述数据处理装置首先在所述第一数据采集器的第一输入队列中提取历史第一原始信号帧以及目标第一原始信号帧的输入时间戳,并在所述第二数据采集器的第二输入队列中提取目标第二原始信号帧的输入时间戳;其次,所述数据处理装置可进一步根据所述历史第一原始信号帧估算第一补帧参量;随后,所述数据处理装置可根据所述第二帧率确定第二数据采集器的输入间隔时长,并根据所述目标第一原始信号帧的输入时间戳、所述目标第二原始信号帧的输入时间戳以及所述输入间隔时长,计算与所述第一补帧参量对应的补帧间隔时长;最后,所述数据处理装置可根据所述目标第一原始信号帧、所述第一补帧参量和所述补帧间隔时长,生成补充信号帧。
其中,所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧;所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧。
其中,所述第一输入队列中的历史第一原始信号帧可以为该第一数据采集器自身所采集到的历史传感信号(即上述图5所对应实施例中输入时间戳50ms之前所涉及的历史第一六自由度姿态信号)。此时,该VR头盔可通过如图6所示的历史第一原始信号帧(一个或多个历史第一六自由度姿态信号)预估头部运动的速度值V1,即通过历史第一原始信号帧可估算出第一补帧参量,该第一补帧参量为用于描述头部运动情况的速度值V1。
如图6所示,与该第二数据采集器的输出帧率(bHz)对应的输入间隔时长为Tb(例如,Tb=4ms),且在第二输入队列中目标第二原始信号帧的输入时间戳为48ms。此时,该VR头盔可根据目标第一原始信号帧的输入时间戳(50ms)、所述目标第二原始信号帧的输入时间戳(48ms)以及所述输入间隔时长(4ms),计算与头部运动的速度值V1对应的补帧间隔时长,此时,该补帧间隔时长(Δ t)=所述输入间隔时长-输入时间差=4ms-2ms=2ms。其中,该输入时间差为所述第一原始信号帧的输入时间戳与所述第二原始信号帧的输入时间戳之间的差值。随后,该VR头盔可重复利用该第一输入队列中的目标第一原始信号帧,头部运动的速度值V1和补帧间隔时长,生成如图6所示的待输入的补充信号帧。此时,(假设所述第一数据采集器的输入间隔时长内的运动模型为匀速模型)该补充信号帧可以表示为:
P=P -+V1*Δt;
其中,P为生成的补充信号帧,P -为第一输入队列中目标第一原始信号帧。
第三种情况,所述数据处理装置首先在所述第一数据采集器的第一输入队列中提取目标第一原始信号帧的输入时间戳,并在所述第二数据采集器的第二输入队列中提取目标第二原始信号帧的输入时间戳;其次,所述数据处理装置可进一步获取第二补帧参量(该第二补帧参量可以为第三数据采集器所输入的用于描述头部运动情况的值,该第三数据采集器可以为上述图1所对应实施例中的传感设备,还可以为其他具有数据处理能力的外部通信设备,例如,其他VR设备或移动终端);随后,所述数据处理装置可根据所述第二帧率确定第二数据采集器的输入间隔时长,并根据所述目标第一原始信号帧的输入时间戳、所述目标第二原始信号帧的输入时间戳以及所述输入间隔时长,计算与所述第二补帧参量对应的补帧间隔时长;最后,所述数据处理装置可进一步根据所述目标第一原始信号帧、所述第二补帧参量和所述补帧间隔时长,生成补充信号帧。
其中,所述目标第一原始信号帧可以为图6所示的输入时间戳为50ms的第一原始信号帧,所述目标第二原始信号帧可以为图6所示的输入时间戳为48ms的第二原始信号帧,且与该第二数据采集器的输出帧率(bHz)对应的输入间隔时长为Tb(例如,Tb=4s)。与此同时,该VR头盔还可进一步获取到第三数据采集器(例如,运动传感器)所采集到的头部运动的速度值V2(即第二补帧参量,该第二补帧参量也可用于描述头部运动情况)。此时,该VR头盔可根据目标第一原始信号帧的输入时间戳(50ms)、所述目标第二原始信号帧的输入时间戳(48ms)以及所述输入间隔时长(4ms),计算与头部运动的速度值V2对应的补帧间隔时长(Δt),此时,该补帧间隔时长(Δt)=所述输入间隔时长-输入时间差=4ms-2ms=2ms。其中,该输入时间差为所述第一原始信号帧的输入时间 戳与所述第二原始信号帧的输入时间戳之间的差值。随后,该VR头盔可重复利用该第一输入队列中的目标第一原始信号帧,头部运动的速度值V2和补帧间隔时长,生成如图6所示的待输入的补充信号帧。此时,该补充信号帧可以表示为:
P=P -+V2*Δt;
其中,P为生成的补充信号帧,P -为第一输入队列中目标第一原始信号帧。
步骤S204,若第一数据采集器的第一帧率小于第二数据采集器的第二帧率,则根据所述第二帧率确定所述补充信号帧的输入时间戳;
具体的,所述数据处理装置可根据所述第二帧率计算待输入到所述第二输入队列的第二原始信号帧的输入时间戳,作为所述补充信号帧的输入时间戳。
比如:以第二帧率为bHz为例,此时,可以得到与该第二帧率对应的输入间隔时长(比如,Tb=1/b)。因此,只要该VR头盔获取到第二输入队列中目标第二原始信号帧的输入时间戳,则可根据该输入时间戳和Tb计算出待输入到该第二输入队列的第二原始信号帧的输入时间戳。比如,如图6所示的目标第二原始信号帧的输入时间戳为48ms,输入间隔时长Tb=4ms,则待输入到该第二输入队列的第二原始信号帧的输入时间戳为52ms,此时,该VR头盔可直接将该输入时间戳作为补充信号帧的输入时间戳。
可选的,所述数据处理装置还可根据所述目标第一原始信号帧的输入时间戳和所述补帧间隔时长确定所述补充信号帧的输入时间戳。
比如,当VR头盔获取到第一输入队列中目标第一原始信号帧的输入时间戳为50ms,且补帧间隔时长可以为上述第二种情况和第三种情况所描述的补帧间隔时长(Δt=2ms)时,该VR头盔可进一步确定待输入到该第一输入队列的第一补充信号帧的输入时间戳(即52ms)。
步骤S205,若当前时间达到所述补充信号帧的输入时间戳,则将所述补充信号帧输入到所述第一数据采集器的第一输入队列,并获取所述第二数据采集器当前输入到第二输入队列的第二原始信号帧;
如图6所示,若当前时间到达所述补充信号帧的输入时间戳(即52ms),可将步骤S204中所生成的补充信号帧输入到第一输入队列,并获取当前输入到第二输入队列中的第二原始信号帧,以便于进一步执行步骤S206。
步骤S206,将所述第一输入队列中的所述补充信号帧与当前输入到所述第 二输入队列的第二原始信号帧进行信号融合;
其中,步骤S205-步骤S206的具体实现过程可参加上述图2所对应实施例中对步骤S102-步骤S103的描述,且融合该补充信号帧第二原始信号帧的具体过程可参见上述图3所对应实施例对信号融合的描述,这里将不再继续进行赘述。
步骤S207,将所述第一输入队列中的所述补充信号帧作为所述第一数据采集器的目标第一原始信号帧,并根据所述目标第一原始信号帧生成目标补充信号帧;
具体的,所述步骤S207的具体实现过程可参加上述图2所对应实施例中的步骤S101,也可参加上述图6所对应实施例中对所述补充信号帧的具体描述,这里将不再继续进行赘述。
进一步的,请参见图7,是本申请实施例提供的一种计算目标补充信号帧的输入时间戳的示意图。如图7所示,在所述第一数据采集器的第一输入队列中,存在输入时间戳为50ms的历史第一原始信号帧,以及输入时间戳为52ms的目标第一原始信号帧。此时,所述历史第一原始信号帧为上述图6所对应实施例中的该第一数据采集器自身所采集到的第一原始信号帧,所述目标第一原始信号帧为上述图6所对应实施例中的补充信号帧。如图7所示,在所述第二数据采集器的第二输入队列中,存在输入时间戳为48ms的历史第二原始信号帧,以及输入时间戳为52ms的目标第二原始信号帧。此时,所述历史第二原始信号帧为上述图6所对应实施例中的该第二数据采集器自身所采集到的第二原始信号帧,所述目标第二原始信号帧为上述图6所对应实施例中与补充信号帧具有相同输入时间戳的第二原始信号帧。
由于第二数据采集器的输入间隔时长(Tb=4ms),且此时所述目标第二原始信号帧的输入时间戳为52ms,因此,该数据处理装置可根据该输入时间戳(52ms)以及输入间隔时长(Tb)确定待输入到该第二输入队列中的第二原始信号帧的输入时间戳(即56ms),并将该第二原始信号帧的输入时间戳作为待输入至第一输入队列中的目标补充信号帧的输入时间戳。
可选的,计算该目标补充信号帧的输入时间戳的具体过程,还可根据所述目标第一原始信号帧的输入时间戳(52ms)和补帧间隔时长(Δt’)之和,以进一步确定待输入至第一输入队列中的目标补充信号帧的输入时间戳(即56ms)。 此时,应当理解,所述补帧间隔时长为如图7所示的第二原始信号帧的输入间隔时长(即Δt’=Tb)。
步骤S208,在当前时间达到所述目标补充信号帧的输入时间戳时,将所述第一输入队列中的所述目标补充信号帧与所述第二输入队列中的目标第二原始信号帧进行信号融合。
具体的,所述步骤S208的具体实现过程可参加上述图2所对应实施例中对所述步骤S103中的信号融合的描述,这里将不再继续进行赘述。
本申请实施例通过在第一数据采集器的第一帧率小于第二数据采集器的第二帧率时,根据所述第一数据采集器的第一原始信号帧生成补充信号帧,并根据所述第二帧率确定所述补充信号帧的输入时间戳;若当前时间达到所述补充信号帧的输入时间戳,则将所述补充信号帧输入到所述第一数据采集器的第一输入队列,并获取所述第二数据采集器当前输入到第二输入队列的第二原始信号帧;将所述第一输入队列中的所述补充信号帧与当前输入到所述第二输入队列的第二原始信号帧进行信号融合。由此可见,在当前时间达到所述补充信号帧的输入时间戳时,可将基于第一原始信号帧所生成的补充信号帧作为该第一数据采集器的输入信号(即将补帧处理后所得的补充信号帧输入到第一输入队列),从而可使补充信号帧与该第二数据采集器的目标输入信号(即当前输入到第二队列中的第二原始信号帧)进行信号融合。此外,基于该第二数据采集器的第二帧率,可确保该第二数据采集器每输入一个目标第二原始信号帧,都存在一个与其对应的信号帧(比如,补充信号帧)对其进行校正,以确保在输出较高帧率的估计量的同时,进一步提高测量精度。
在一个实施例中,还提供了一种智能终端,该智能终端的内部结构可如图1a所示,该智能终端包括数据处理装置,数据处理装置中包括各个模块,每个模块可全部或部分通过软件、硬件或其组合来实现。
进一步的,请参见图8,是本申请实施例提供的一种数据处理装置的结构示意图。如图8所示,所述数据处理装置1可应用于智能终端,且所述智能终端可以包括:智能手机、平板电脑、桌上型电脑、智能电视、VR眼镜、VR手套、VR头盔、增强现实设备(比如,AR眼镜)或者人工智能机器人。所述数据处理装置1可以包括:补充帧生成模块10,时间戳确定模块20,信号帧输入模块30,融合模块40和信号帧获取模块50;
所述补充帧生成模块10,用于若第一数据采集器的第一帧率小于第二数据采集器的第二帧率,则根据所述第一数据采集器的第一原始信号帧生成补充信号帧;
其中,所述补充帧生成模块10,具体用于在所述第一数据采集器的所述第一输入队列中提取所述第一数据采集器对应的目标第一原始信号帧,并根据所述目标第一原始信号帧生成所述第一输入队列对应的补充信号帧;所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧;所述补充信号帧的值与所述目标第一原始信号帧的值相同。
可选的,所述补充帧生成模块10包括:第一提取单元101,参量估算单元102,第一计算单元103和第一生成单元104;进一步的,所述补充帧生成模块10还包括:第二提取单元105,参量获取单元106,第二计算单元107和第二生成单元108;
所述第一提取单元101,用于在所述第一数据采集器的第一输入队列中提取历史第一原始信号帧以及目标第一原始信号帧的输入时间戳,并在所述第二数据采集器的第二输入队列中提取目标第二原始信号帧的输入时间戳;所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧;所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧;
所述参量估算单元102,用于根据所述历史第一原始信号帧估算第一补帧参量;
所述第一计算单元103,用于根据所述第二帧率确定第二数据采集器的输入间隔时长,并根据所述目标第一原始信号帧的输入时间戳、所述目标第二原始信号帧的输入时间戳以及所述输入间隔时长,计算与所述第一补帧参量对应的补帧间隔时长;
所述第一生成单元104,用于根据所述目标第一原始信号帧、所述第一补帧参量和所述补帧间隔时长,生成补充信号帧。
其中,所述第一提取单元101,参量估算单元102,第一计算单元103和第一生成单元104的具体实现方式可参见上述图4所对应实施例中的对第二种情况中生成所述补充信号帧的具体过程的描述,这里将不再继续进行赘述。
应当理解,所述补充帧生成模块10中的所述第一提取单元101,参量估算 单元102,第一计算单元103和第一生成单元104在用于生成所述补充信号帧时,所述第二提取单元105,参量获取单元106,第二计算单元107和第二生成单元108将不被用于生成所述补充信号帧。反之也可成立,即所述补充帧生成模块10中的所述第二提取单元105,参量获取单元106,第二计算单元107和第二生成单元108在用于生成所述补充信号帧时,所述第一提取单元101,参量估算单元102,第一计算单元103和第一生成单元104将不被用于生成所述补充信号帧。
所述第二提取单元105,用于在所述第一数据采集器的第一输入队列中提取目标第一原始信号帧的输入时间戳,并在所述第二数据采集器的第二输入队列中提取目标第二原始信号帧的输入时间戳;所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧;所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧;
所述参量获取单元106,用于获取第二补帧参量;
所述第二计算单元107,用于根据所述第二帧率确定第二数据采集器的输入间隔时长,并根据所述目标第一原始信号帧的输入时间戳、所述目标第二原始信号帧的输入时间戳以及所述输入间隔时长,计算与所述第二补帧参量对应的补帧间隔时长;
所述第二生成单元108,用于根据所述目标第一原始信号帧、所述第二补帧参量和所述补帧间隔时长,生成补充信号帧。
其中,所述第二提取单元105,参量获取单元106,第二计算单元107和第二生成单元108的具体实现方式可参见上述图4所对应实施例中的对第三种情况中生成所述补充信号帧的具体过程的描述,这里将不再继续进行赘述。
所述时间戳确定模块20,用于若第一数据采集器的第一帧率小于第二数据采集器的第二帧率,则根据所述第二帧率确定所述补充信号帧的输入时间戳;
其中,所述时间戳确定模块20,具体用于根据所述第二帧率计算待输入到所述第二输入队列的第二原始信号帧的输入时间戳,作为所述补充信号帧的输入时间戳;
可选的,所述时间戳确定模块20,具体用于根据所述目标第一原始信号帧的输入时间戳和所述补帧间隔时长确定所述补充信号帧的输入时间戳。
所述信号帧输入模块30,用于若当前时间达到所述补充信号帧的输入时间戳,则将所述补充信号帧输入到所述第一数据采集器的第一输入队列,并获取 所述第二数据采集器当前输入到第二输入队列的第二原始信号帧;
所述融合模块40,用于将所述第一输入队列中的所述补充信号帧与所述第二输入队列中的所述第二原始信号帧进行信号融合。
可选的,所述信号帧获取模块50,用于获取第一输入队列中的目标第一原始信号帧以及第二输入队列中的目标第二原始信号帧;所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧;所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧;
其中,所述补充帧生成模块10,时间戳确定模块20,信号帧输入模块30,融合模块40和信号帧获取模块50的具体实现方式可参见上述图4所对应实施例中对步骤S201-步骤S205的描述,这里将不再继续进行赘述。
所述融合模块40,还用于若所述目标第二原始信号帧的输入时间戳小于所述目标第一原始信号帧的输入时间戳,则在当前时间达到所述目标第一原始信号帧的输入时间戳时,将所述第一数据采集器中的所述目标第一原始信号帧和所述第二数据采集器中的所述目标第二原始信号帧进行信号融合。
可选的,所述补充帧生成模块10,还用于将所述第一输入队列中的所述补充信号帧作为所述第一数据采集器的目标第一原始信号帧,并根据所述目标第一原始信号帧生成目标补充信号帧;
所述融合模块40,还用于在当前时间达到所述目标补充信号帧的输入时间戳时,将所述第一输入队列中的所述目标补充信号帧与所述第二输入队列中的目标第二原始信号帧进行信号融合。
本申请实施例通过在第一数据采集器的第一帧率小于第二数据采集器的第二帧率时,根据所述第一数据采集器的第一原始信号帧生成补充信号帧,并根据所述第二帧率确定所述补充信号帧的输入时间戳;若当前时间达到所述补充信号帧的输入时间戳,则将所述补充信号帧输入到所述第一数据采集器的第一输入队列,并获取所述第二数据采集器当前输入到第二输入队列的第二原始信号帧;将所述第一输入队列中的所述补充信号帧与当前输入到所述第二输入队列的第二原始信号帧进行信号融合。由此可见,在当前时间达到所述补充信号帧的输入时间戳时,可将基于第一原始信号帧所生成的补充信号帧作为该第一数据采集器的输入信号(即将补帧处理后所得的补充信号帧输入到第一输入队 列),从而可使补充信号帧与该第二数据采集器的目标输入信号(即当前输入到第二队列中的第二原始信号帧)进行信号融合。此外,基于该第二数据采集器的第二帧率,可确保该第二数据采集器每输入一个目标第二原始信号帧,都存在一个与其对应的信号帧(比如,补充信号帧)对其进行校正,以确保在输出较高帧率的估计量的同时,进一步提高测量精度。
进一步地,请参见图9,是本申请实施例提供的另一种数据处理装置的结构示意图。如图9所示,所述数据处理装置1000可以应用于上述图1对应实施例中的智能终端2000,所述数据处理装置1000可以包括:处理器1001,网络接口1004和存储器1005以及第一数据采集器1006和第二数据采集器1007,其中,所述第一数据采集器1006和所述第二数据采集器1007可以永久的或者可移除的附在智能终端上。此外,所述数据处理装置1000还可以包括:用户接口1003,和至少一个通信总线1002。其中,通信总线1002用于实现这些组件之间的连接通信。其中,用户接口1003可以包括显示屏(Display),可选的,用户接口1003还可以包括标准的有线接口、无线接口。网络接口1004可选的可以包括标准的有线接口、无线接口(如WI-FI接口)。存储器1004可以是高速RAM存储器,也可以是非不稳定的存储器(non-volatile memory),例如至少一个磁盘存储器。存储器1005可选的还可以是至少一个位于远离前述处理器1001的存储装置。如图9所示,作为一种计算机存储介质的存储器1005中可以包括操作系统、网络通信模块、用户接口模块以及设备控制应用程序。存储器包括非易失性存储介质和内存储器。该计算机设备的非易失性存储介质可存储操作系统和计算机可读指令。该计算机可读指令被执行时,可使得处理器执行一种适用于智能终端的数据处理方法。
在图9所示的数据处理装置1000中,网络接口1004可提供网络通讯功能;而用户接口1003主要用于为用户提供输入的接口;而处理器1001可以用于调用存储器1005中存储的设备控制应用程序,以实现:
若第一数据采集器的第一帧率小于第二数据采集器的第二帧率,则根据所述第一数据采集器的第一原始信号帧生成补充信号帧,并根据所述第二帧率确定所述补充信号帧的输入时间戳;
若当前时间达到所述补充信号帧的输入时间戳,则将所述补充信号帧输入到所述第一数据采集器的第一输入队列,并获取所述第二数据采集器当前输入 到第二输入队列的第二原始信号帧;
将所述第一输入队列中的所述补充信号帧与当前输入到所述第二输入队列的第二原始信号帧进行信号融合。
应当理解,本申请实施例中所描述的数据处理装置1000可执行前文图2或图4所对应实施例中对所述数据处理方法的描述,也可执行前文图8所对应实施例中对所述数据处理装置1的描述,在此不再赘述。另外,对采用相同方法的有益效果描述,也不再进行赘述。此外,这里需要指出的是:本申请实施例还提供了一种计算机存储介质,且所述计算机存储介质中存储有前文提及的数据处理装置1所执行的计算机程序,且所述计算机程序包括程序指令,当所述处理器执行所述程序指令时,能够执行前文图2或图4所对应实施例中对所述数据处理方法的描述,因此,这里将不再进行赘述。另外,对采用相同方法的有益效果描述,也不再进行赘述。对于本申请所涉及的计算机存储介质实施例中未披露的技术细节,请参照本申请方法实施例的描述。
应该理解的是,虽然本申请各实施例中的各个步骤并不是必然按照步骤标号指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,各实施例中至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,所述的程序可存储于一非易失性计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM (SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
以上所揭露的仅为本申请较佳实施例而已,当然不能以此来限定本申请之权利范围,因此依本申请权利要求所作的等同变化,仍属本申请所涵盖的范围。

Claims (24)

  1. 一种数据处理方法,其特征在于,包括:
    当终端中的第一数据采集器的第一帧率小于第二数据采集器的第二帧率时,所述终端根据所述第一数据采集器的第一原始信号帧生成补充信号帧,所述终端并根据所述第二帧率确定所述补充信号帧的输入时间戳;
    当当前时间达到所述补充信号帧的输入时间戳时,所述终端将所述补充信号帧输入到所述第一数据采集器的第一输入队列,所述终端并获取所述第二数据采集器当前输入到第二输入队列的第二原始信号帧;
    所述终端将所述第一输入队列中的所述补充信号帧与当前输入到所述第二输入队列的第二原始信号帧进行信号融合。
  2. 根据权利要求1所述的方法,其特征在于,所述终端根据所述第一数据采集器的第一原始信号帧生成补充信号帧,包括:
    所述终端在所述第一数据采集器的所述第一输入队列中提取所述第一数据采集器对应的目标第一原始信号帧,并根据所述目标第一原始信号帧生成所述第一输入队列对应的补充信号帧;当所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧时,所述补充信号帧的值与所述目标第一原始信号帧的值相同。
  3. 根据权利要求1所述的方法,其特征在于,所述终端根据所述第一数据采集器的第一原始信号帧生成补充信号帧,包括:
    所述终端在所述第一数据采集器的第一输入队列中提取历史第一原始信号帧以及目标第一原始信号帧的输入时间戳,并在所述第二数据采集器的第二输入队列中提取目标第二原始信号帧的输入时间戳;当所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧时,所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧;
    所述终端根据所述历史第一原始信号帧估算第一补帧参量;
    所述终端根据所述第二帧率确定第二数据采集器的输入间隔时长,并根据所述目标第一原始信号帧的输入时间戳、所述目标第二原始信号帧的输入时间 戳以及所述输入间隔时长,计算与所述第一补帧参量对应的补帧间隔时长;
    所述终端根据所述目标第一原始信号帧、所述第一补帧参量和所述补帧间隔时长,生成补充信号帧。
  4. 根据权利要求1所述的方法,其特征在于,所述终端根据所述第一数据采集器的第一原始信号帧生成补充信号帧,包括:
    所述终端在所述第一数据采集器的第一输入队列中提取目标第一原始信号帧的输入时间戳,并在所述第二数据采集器的第二输入队列中提取目标第二原始信号帧的输入时间戳;当所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧时,所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧;
    所述终端获取第二补帧参量;
    所述终端根据所述第二帧率确定第二数据采集器的输入间隔时长,并根据所述目标第一原始信号帧的输入时间戳、所述目标第二原始信号帧的输入时间戳以及所述输入间隔时长,计算与所述第二补帧参量对应的补帧间隔时长;
    所述终端根据所述目标第一原始信号帧、所述第二补帧参量和所述补帧间隔时长,生成补充信号帧。
  5. 根据权利要求1所述的方法,其特征在于,所述终端根据所述第二帧率确定所述补充信号帧的输入时间戳,包括:
    所述终端根据所述第二帧率计算待输入到所述第二输入队列的第二原始信号帧的输入时间戳,作为所述补充信号帧的输入时间戳。
  6. 根据权利要求3或4所述的方法,其特征在于,所述终端根据所述第二帧率确定所述补充信号帧的输入时间戳,包括:
    所述终端根据所述目标第一原始信号帧的输入时间戳和所述补帧间隔时长确定所述补充信号帧的输入时间戳。
  7. 根据权利要求1所述的方法,其特征在于,所述终端在所述若第一数据采集器的第一帧率小于第二数据采集器的第二帧率,则根据所述第一数据采集 器的第一原始信号帧生成补充信号帧,并根据所述第二帧率确定所述补充信号帧的输入时间戳之前,还包括:
    所述终端获取第一输入队列中的目标第一原始信号帧以及第二输入队列中的目标第二原始信号帧;当所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧时,所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧;
    当所述目标第二原始信号帧的输入时间戳小于所述目标第一原始信号帧的输入时间戳时,在当当前时间达到所述目标第一原始信号帧的输入时间戳时,所述终端将所述第一数据采集器中的所述目标第一原始信号帧和所述第二数据采集器中的所述目标第二原始信号帧进行信号融合。
  8. 根据权利要求7所述的方法,其特征在于,还包括:
    所述终端将所述第一输入队列中的所述补充信号帧作为所述第一数据采集器的目标第一原始信号帧,并根据所述目标第一原始信号帧生成目标补充信号帧;
    所述终端在当当前时间达到所述目标补充信号帧的输入时间戳时,将所述第一输入队列中的所述目标补充信号帧与所述第二输入队列中的目标第二原始信号帧进行信号融合。
  9. 一种计算机设备,包括存储器和处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行如下步骤:
    若第一数据采集器的第一帧率小于第二数据采集器的第二帧率,则根据所述第一数据采集器的第一原始信号帧生成补充信号帧,并根据所述第二帧率确定所述补充信号帧的输入时间戳;
    若当前时间达到所述补充信号帧的输入时间戳,则将所述补充信号帧输入到所述第一数据采集器的第一输入队列,并获取所述第二数据采集器当前输入到第二输入队列的第二原始信号帧;
    将所述第一输入队列中的所述补充信号帧与当前输入到所述第二输入队列的第二原始信号帧进行信号融合。
  10. 根据权利要求9所述的计算机设备,其特征在于,所述处理器执行的根据所述第一数据采集器的第一原始信号帧生成补充信号帧,包括:
    在所述第一数据采集器的所述第一输入队列中提取所述第一数据采集器对应的目标第一原始信号帧,并根据所述目标第一原始信号帧生成所述第一输入队列对应的补充信号帧;所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧;所述补充信号帧的值与所述目标第一原始信号帧的值相同。
  11. 根据权利要求9所述的计算机设备,其特征在于,所述处理器执行的根据所述第一数据采集器的第一原始信号帧生成补充信号帧,包括:
    在所述第一数据采集器的第一输入队列中提取历史第一原始信号帧以及目标第一原始信号帧的输入时间戳,并在所述第二数据采集器的第二输入队列中提取目标第二原始信号帧的输入时间戳;所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧;所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧;
    根据所述历史第一原始信号帧估算第一补帧参量;
    根据所述第二帧率确定第二数据采集器的输入间隔时长,并根据所述目标第一原始信号帧的输入时间戳、所述目标第二原始信号帧的输入时间戳以及所述输入间隔时长,计算与所述第一补帧参量对应的补帧间隔时长;
    根据所述目标第一原始信号帧、所述第一补帧参量和所述补帧间隔时长,生成补充信号帧。
  12. 根据权利要求9所述的计算机设备,其特征在于,所述处理器执行的根据所述第一数据采集器的第一原始信号帧生成补充信号帧,包括:
    在所述第一数据采集器的第一输入队列中提取目标第一原始信号帧的输入时间戳,并在所述第二数据采集器的第二输入队列中提取目标第二原始信号帧的输入时间戳;所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧;所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧;
    获取第二补帧参量;
    根据所述第二帧率确定第二数据采集器的输入间隔时长,并根据所述目标第一原始信号帧的输入时间戳、所述目标第二原始信号帧的输入时间戳以及所述输入间隔时长,计算与所述第二补帧参量对应的补帧间隔时长;
    根据所述目标第一原始信号帧、所述第二补帧参量和所述补帧间隔时长,生成补充信号帧。
  13. 根据权利要求9所述的计算机设备,其特征在于,所述处理器执行的根据所述第二帧率确定所述补充信号帧的输入时间戳,包括:
    根据所述第二帧率计算待输入到所述第二输入队列的第二原始信号帧的输入时间戳,作为所述补充信号帧的输入时间戳。
  14. 根据权利要求11或12所述的计算机设备,其特征在于,所述处理器执行的根据所述第二帧率确定所述补充信号帧的输入时间戳,包括:
    根据所述目标第一原始信号帧的输入时间戳和所述补帧间隔时长确定所述补充信号帧的输入时间戳。
  15. 根据权利要求9所述的计算机设备,其特征在于,在所述若第一数据采集器的第一帧率小于第二数据采集器的第二帧率,则根据所述第一数据采集器的第一原始信号帧生成补充信号帧,并根据所述第二帧率确定所述补充信号帧的输入时间戳之前,所述计算机可读指令还使得所述处理器执行如下步骤:
    获取第一输入队列中的目标第一原始信号帧以及第二输入队列中的目标第二原始信号帧;所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧;所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧;
    若所述目标第二原始信号帧的输入时间戳小于所述目标第一原始信号帧的输入时间戳,则在当前时间达到所述目标第一原始信号帧的输入时间戳时,将所述第一数据采集器中的所述目标第一原始信号帧和所述第二数据采集器中的所述目标第二原始信号帧进行信号融合。
  16. 根据权利要求15所述的计算机设备,其特征在于,所述计算机可读指令还使得所述处理器执行如下步骤:
    将所述第一输入队列中的所述补充信号帧作为所述第一数据采集器的目标第一原始信号帧,并根据所述目标第一原始信号帧生成目标补充信号帧;
    在当前时间达到所述目标补充信号帧的输入时间戳时,将所述第一输入队列中的所述目标补充信号帧与所述第二输入队列中的目标第二原始信号帧进行信号融合。
  17. 一个或多个存储有计算机可读指令的非易失性存储介质,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行如下步骤:
    若第一数据采集器的第一帧率小于第二数据采集器的第二帧率,则根据所述第一数据采集器的第一原始信号帧生成补充信号帧,并根据所述第二帧率确定所述补充信号帧的输入时间戳;
    若当前时间达到所述补充信号帧的输入时间戳,则将所述补充信号帧输入到所述第一数据采集器的第一输入队列,并获取所述第二数据采集器当前输入到第二输入队列的第二原始信号帧;
    将所述第一输入队列中的所述补充信号帧与当前输入到所述第二输入队列的第二原始信号帧进行信号融合。
  18. 根据权利要求17所述的存储介质,其特征在于,所述处理器执行的根据所述第一数据采集器的第一原始信号帧生成补充信号帧,包括:
    在所述第一数据采集器的所述第一输入队列中提取所述第一数据采集器对应的目标第一原始信号帧,并根据所述目标第一原始信号帧生成所述第一输入队列对应的补充信号帧;所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧;所述补充信号帧的值与所述目标第一原始信号帧的值相同。
  19. 根据权利要求17所述的存储介质,其特征在于,所述处理器执行的根据所述第一数据采集器的第一原始信号帧生成补充信号帧,包括:
    在所述第一数据采集器的第一输入队列中提取历史第一原始信号帧以及目 标第一原始信号帧的输入时间戳,并在所述第二数据采集器的第二输入队列中提取目标第二原始信号帧的输入时间戳;所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧;所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧;
    根据所述历史第一原始信号帧估算第一补帧参量;
    根据所述第二帧率确定第二数据采集器的输入间隔时长,并根据所述目标第一原始信号帧的输入时间戳、所述目标第二原始信号帧的输入时间戳以及所述输入间隔时长,计算与所述第一补帧参量对应的补帧间隔时长;
    根据所述目标第一原始信号帧、所述第一补帧参量和所述补帧间隔时长,生成补充信号帧。
  20. 根据权利要求17所述的存储介质,其特征在于,所述处理器执行的根据所述第一数据采集器的第一原始信号帧生成补充信号帧,包括:
    在所述第一数据采集器的第一输入队列中提取目标第一原始信号帧的输入时间戳,并在所述第二数据采集器的第二输入队列中提取目标第二原始信号帧的输入时间戳;所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧;所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧;
    获取第二补帧参量;
    根据所述第二帧率确定第二数据采集器的输入间隔时长,并根据所述目标第一原始信号帧的输入时间戳、所述目标第二原始信号帧的输入时间戳以及所述输入间隔时长,计算与所述第二补帧参量对应的补帧间隔时长;
    根据所述目标第一原始信号帧、所述第二补帧参量和所述补帧间隔时长,生成补充信号帧。
  21. 根据权利要求17所述的存储介质,其特征在于,所述处理器执行的根据所述第二帧率确定所述补充信号帧的输入时间戳,包括:
    根据所述第二帧率计算待输入到所述第二输入队列的第二原始信号帧的输入时间戳,作为所述补充信号帧的输入时间戳。
  22. 根据权利要求19或20所述的存储介质,其特征在于,所述处理器执行的根据所述第二帧率确定所述补充信号帧的输入时间戳,包括:
    根据所述目标第一原始信号帧的输入时间戳和所述补帧间隔时长确定所述补充信号帧的输入时间戳。
  23. 根据权利要求17所述的存储介质,其特征在于,在所述若第一数据采集器的第一帧率小于第二数据采集器的第二帧率,则根据所述第一数据采集器的第一原始信号帧生成补充信号帧,并根据所述第二帧率确定所述补充信号帧的输入时间戳之前,所述计算机可读指令还使得所述处理器执行如下步骤:
    获取第一输入队列中的目标第一原始信号帧以及第二输入队列中的目标第二原始信号帧;所述目标第一原始信号帧为所述第一输入队列中具有最大的输入时间戳的第一原始信号帧;所述目标第二原始信号帧为所述第二输入队列中具有最大的输入时间戳的第二原始信号帧;
    若所述目标第二原始信号帧的输入时间戳小于所述目标第一原始信号帧的输入时间戳,则在当前时间达到所述目标第一原始信号帧的输入时间戳时,将所述第一数据采集器中的所述目标第一原始信号帧和所述第二数据采集器中的所述目标第二原始信号帧进行信号融合。
  24. 根据权利要求23所述的存储介质,其特征在于,所述计算机可读指令还使得所述处理器执行如下步骤:
    将所述第一输入队列中的所述补充信号帧作为所述第一数据采集器的目标第一原始信号帧,并根据所述目标第一原始信号帧生成目标补充信号帧;
    在当前时间达到所述目标补充信号帧的输入时间戳时,将所述第一输入队列中的所述目标补充信号帧与所述第二输入队列中的目标第二原始信号帧进行信号融合。
PCT/CN2018/111691 2017-10-25 2018-10-24 数据处理方法、计算机设备和存储介质 WO2019080879A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18871662.5A EP3614256B1 (en) 2017-10-25 2018-10-24 Data processing method, computer device, and storage medium
US16/599,004 US11245763B2 (en) 2017-10-25 2019-10-10 Data processing method, computer device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711008770.0A CN109711421A (zh) 2017-10-25 2017-10-25 一种数据处理方法和装置
CN201711008770.0 2017-10-25

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/599,004 Continuation US11245763B2 (en) 2017-10-25 2019-10-10 Data processing method, computer device and storage medium

Publications (1)

Publication Number Publication Date
WO2019080879A1 true WO2019080879A1 (zh) 2019-05-02

Family

ID=66247174

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/111691 WO2019080879A1 (zh) 2017-10-25 2018-10-24 数据处理方法、计算机设备和存储介质

Country Status (4)

Country Link
US (1) US11245763B2 (zh)
EP (1) EP3614256B1 (zh)
CN (1) CN109711421A (zh)
WO (1) WO2019080879A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110596654B (zh) * 2019-10-18 2023-06-30 立晟智能科技(成都)有限公司 一种基于毫米波雷达的数据同步采集系统
CN112354171B (zh) * 2020-10-20 2023-08-25 上海恒润文化科技有限公司 一种轨道车及其执行机构的执行控制方法和装置
CN114301942A (zh) * 2021-12-29 2022-04-08 杭州涂鸦信息技术有限公司 数据上报方法、数据上报装置以及计算机可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102957869A (zh) * 2011-09-26 2013-03-06 斯凯普公司 视频稳定
CN103139446A (zh) * 2011-10-14 2013-06-05 斯凯普公司 接收的视频稳定化
CN103686042A (zh) * 2012-09-25 2014-03-26 三星电子株式会社 图像数据处理的方法和装置以及包括该装置的电子设备
US20160323565A1 (en) * 2015-04-30 2016-11-03 Seiko Epson Corporation Real Time Sensor and Method for Synchronizing Real Time Sensor Data Streams

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6222848B1 (en) * 1997-12-22 2001-04-24 Nortel Networks Limited Gigabit ethernet interface to synchronous optical network (SONET) ring
US7522629B2 (en) * 2003-01-16 2009-04-21 Alcatel-Lucent Usa Inc. Sending signaling messages to CDMA cellular mobile stations
US20070153731A1 (en) * 2006-01-05 2007-07-05 Nadav Fine Varying size coefficients in a wireless local area network return channel
US7974278B1 (en) * 2007-12-12 2011-07-05 Integrated Device Technology, Inc. Packet switch with configurable virtual channels
US8879464B2 (en) * 2009-01-29 2014-11-04 Avaya Inc. System and method for providing a replacement packet
TW201039580A (en) * 2009-04-20 2010-11-01 Ralink Technology Corp Method for determining a modulation and coding scheme for packets with variable lengths
US9116001B2 (en) * 2012-06-14 2015-08-25 Qualcomm Incorporated Adaptive estimation of frame time stamp latency
US9554113B2 (en) * 2013-03-21 2017-01-24 Mediatek Inc. Video frame processing method
US20150185054A1 (en) * 2013-12-30 2015-07-02 Motorola Mobility Llc Methods and Systems for Synchronizing Data Received from Multiple Sensors of a Device
US9098753B1 (en) * 2014-04-25 2015-08-04 Google Inc. Methods and systems for object detection using multiple sensors
CN104112363B (zh) * 2014-07-04 2016-05-25 西安交通大学 多传感数据时空同步方法及道路多传感数据车载采集系统
US10180340B2 (en) * 2014-10-09 2019-01-15 Invensense, Inc. System and method for MEMS sensor system synchronization
CN106612452B (zh) * 2015-10-22 2019-12-13 深圳市中兴微电子技术有限公司 机顶盒音视频同步的方法及装置
CN105898384B (zh) * 2016-04-26 2019-03-22 广州盈可视电子科技有限公司 一种流媒体视频混合帧率控制的方法和装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102957869A (zh) * 2011-09-26 2013-03-06 斯凯普公司 视频稳定
CN103139446A (zh) * 2011-10-14 2013-06-05 斯凯普公司 接收的视频稳定化
CN103686042A (zh) * 2012-09-25 2014-03-26 三星电子株式会社 图像数据处理的方法和装置以及包括该装置的电子设备
US20160323565A1 (en) * 2015-04-30 2016-11-03 Seiko Epson Corporation Real Time Sensor and Method for Synchronizing Real Time Sensor Data Streams

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3614256A4 *

Also Published As

Publication number Publication date
US11245763B2 (en) 2022-02-08
US20200045113A1 (en) 2020-02-06
EP3614256A1 (en) 2020-02-26
EP3614256A4 (en) 2020-09-09
CN109711421A (zh) 2019-05-03
EP3614256B1 (en) 2023-02-01

Similar Documents

Publication Publication Date Title
CN107888828B (zh) 空间定位方法及装置、电子设备、以及存储介质
EP3014476B1 (en) Using movement patterns to anticipate user expectations
CN107645701B (zh) 一种生成运动轨迹的方法及装置
WO2019119289A1 (zh) 一种定位方法、装置及电子设备、计算机程序产品
CN108871311B (zh) 位姿确定方法和装置
US11181379B2 (en) System and method for enhancing non-inertial tracking system with inertial constraints
CN110879400A (zh) 激光雷达与imu融合定位的方法、设备及存储介质
US11245763B2 (en) Data processing method, computer device and storage medium
CN108988974B (zh) 时间延时的测量方法、装置和对电子设备时间同步的系统
CN104501814A (zh) 一种基于视觉和惯性信息的姿态与位置估计方法
JP7182020B2 (ja) 情報処理方法、装置、電子機器、記憶媒体およびプログラム
US11127156B2 (en) Method of device tracking, terminal device, and storage medium
JP2017073753A (ja) 補正方法、プログラム及び電子機器
CN109040525B (zh) 图像处理方法、装置、计算机可读介质及电子设备
CN112819860A (zh) 视觉惯性系统初始化方法及装置、介质和电子设备
CN111121755B (zh) 一种多传感器的融合定位方法、装置、设备及存储介质
CN113610702B (zh) 一种建图方法、装置、电子设备及存储介质
CN108804161B (zh) 应用的初始化方法、装置、终端和存储介质
US11694409B1 (en) Augmented reality using a split architecture
CN111382701A (zh) 动作捕捉方法、装置、电子设备及计算机可读存储介质
CN113628284A (zh) 位姿标定数据集生成方法、装置、系统、电子设备及介质
Artemciukas et al. Kalman filter for hybrid tracking technique in augmented reality
JP2021148709A (ja) 計測装置、計測方法およびプログラム
CN117294832B (zh) 数据处理方法、装置、电子设备和计算机可读存储介质
US20210295557A1 (en) Device and method for position determination in a 3d model of an environment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18871662

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018871662

Country of ref document: EP

Effective date: 20191119

NENP Non-entry into the national phase

Ref country code: DE