CN113850929B - Display method, device, equipment and medium for processing annotation data stream - Google Patents

Display method, device, equipment and medium for processing annotation data stream

Info

Publication number
CN113850929B
CN113850929B (granted publication of application CN202111113433.4A; earlier publication CN113850929A)
Authority
CN
China
Prior art keywords
data
annotation
target
prediction
initial
Prior art date
Legal status
Active
Application number
CN202111113433.4A
Other languages
Chinese (zh)
Other versions
CN113850929A (en)
Inventor
聂鑫
杨逸飞
陈飞
霍达
韩旭
Current Assignee
Guangzhou Yuji Technology Co ltd
Original Assignee
Guangzhou Weride Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Weride Technology Co Ltd
Priority to CN202111113433.4A
Publication of CN113850929A
Application granted
Publication of CN113850929B
Legal status: Active

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 5/00 Registering or indicating the working of vehicles
    • G07C 5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C 5/0841 Registering performance data
    • G07C 5/085 Registering performance data using electronic data carriers
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01D MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D 21/00 Measuring or testing not otherwise provided for
    • G01D 21/02 Measuring two or more variables by means not covered by a single other subclass
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 5/00 Registering or indicating the working of vehicles
    • G07C 5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C 5/0841 Registering performance data
    • G07C 5/085 Registering performance data using electronic data carriers
    • G07C 5/0866 Registering performance data using electronic data carriers, the electronic data carrier being a digital video recorder in combination with a video camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q 9/00 Arrangements in telecontrol or telemetry systems for selectively calling a substation from a main station, in which substation desired apparatus is selected for applying a control signal thereto or for obtaining measured values therefrom


Abstract

The invention discloses a display method, device, equipment and medium for annotation data stream processing, relating to unmanned vehicles. The method comprises: acquiring a data stream to be annotated collected by an unmanned vehicle; inputting the data stream to be annotated into a preset annotation prediction model, and sequentially generating annotation prediction data corresponding to each frame of data to be annotated in the data stream; when the annotation prediction data meets a preset construction condition, constructing an initial input table from the generated annotation prediction data and creating a slicing window; performing an aggregation operation and identifier adjustment on each row of annotation prediction data in the slicing window to generate a dynamic result table; and performing a join operation on the initial input table and the dynamic result table to generate and display a target output table. In this way, past, present and future data coexist in the same slicing window, so that jumps and similar anomalies in the annotation prediction data before and after the current moment can be analyzed more efficiently while preserving the real-time performance of the data stream processing.

Description

Display method, device, equipment and medium for processing annotation data stream
Technical Field
The present invention relates to the field of data stream processing technologies, and in particular to a display method, apparatus, device, and medium for annotation data stream processing.
Background
With the rapid development of information processing, internet, and communication technology, self-driving cars are set to become part of everyday life. However, a vehicle generates massive amounts of traffic data in real time while driving autonomously, and analyzing these data promptly is a problem that must be solved to guarantee driving reliability and safety.
In the prior art, the Flink computing framework combines batch processing and stream processing. Its core is a streaming data processing engine that provides data distribution and parallelized computation, and it analyzes the data within a certain time range by providing slicing windows over the data.
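The windowed aggregation such engines provide can be illustrated with a minimal, framework-free sketch (plain Python, not the actual Flink API; all names are illustrative):

```python
from collections import deque

def sliding_window_averages(stream, size):
    """Emit the average of each full sliding window over a stream,
    a minimal stand-in for an engine's slicing-window aggregation."""
    window = deque(maxlen=size)   # holds the current window's rows
    averages = []
    for value in stream:
        window.append(value)
        if len(window) == size:   # only full windows produce a result
            averages.append(sum(window) / size)
    return averages
```

For the stream `[1, 2, 3, 4]` with window size 2 this yields `[1.5, 2.5, 3.5]`: one aggregated value per window as the window slides forward.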
However, the data stream processing in the prior art only supports creating a slicing window over the current data row and a number of past data rows; data arriving after the current moment cannot sit in the same window, so analyzing the data on both sides of the current moment falls back to batch processing.
Disclosure of Invention
The invention provides a display method, apparatus, device, and medium for annotation data stream processing, which solve the technical problem that the prior art can analyze the data before and after the current moment only by batch processing, so the real-time performance of the data stream processing cannot be guaranteed.
The display method for annotation data stream processing provided by the first aspect of the invention comprises the following steps:
acquiring a data stream to be annotated;
inputting the data stream to be annotated into a preset annotation prediction model, and sequentially generating annotation prediction data corresponding to each frame of data to be annotated in the data stream;
when the annotation prediction data meets a preset construction condition, constructing an initial input table from the generated annotation prediction data and creating a slicing window;
performing an aggregation operation and identifier adjustment on each row of annotation prediction data in the slicing window to generate a dynamic result table;
and performing a join operation on the initial input table and the dynamic result table to generate and display a target output table.
Optionally, the method is applied to a processor of the unmanned vehicle, the processor being communicatively connected to the various sensors fitted on the unmanned vehicle, and the step of acquiring the data stream to be annotated comprises:
acquiring, in real time through the various sensors, environmental data of the environment in which the unmanned vehicle is located;
and receiving the environmental data in the order in which they were collected, and sorting them to obtain the data stream to be annotated.
Optionally, the step of inputting the data stream to be annotated into a preset annotation prediction model and sequentially generating annotation prediction data corresponding to each frame of data to be annotated comprises:
inputting the data stream to be annotated into the preset annotation prediction model;
performing object recognition, through the annotation prediction model, on each frame of data to be annotated in turn, and determining the predicted objects corresponding to each frame;
generating, through the annotation prediction model, feature annotation information corresponding to each predicted object according to its object type;
and sorting the feature annotation information to construct the annotation prediction data corresponding to each frame of data to be annotated.
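As a concrete, hypothetical illustration of these sub-steps, one row of annotation prediction data could be assembled from per-object feature annotations as below; the field names are assumptions, since the patent does not fix a schema:

```python
def build_annotation_row(frame_objects):
    """Build one row of annotation prediction data for a single frame:
    derive feature annotation info per predicted object, then sort it
    (here by object type) to form the row."""
    features = [
        {"type": obj["type"], "position": obj["position"]}
        for obj in frame_objects
    ]
    features.sort(key=lambda f: f["type"])  # 'sorting the feature annotation information'
    return {"features": features}
```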
Optionally, when the annotation prediction data meets a preset construction condition, the step of constructing an initial input table and creating a slicing window from the generated annotation prediction data comprises:
when the number of generated rows of annotation prediction data reaches a preset count threshold, constructing an initial input table from the generated annotation prediction data;
or, when the generation time of the annotation prediction data reaches a preset time threshold, constructing an initial input table from the generated annotation prediction data;
and creating a slicing window from the multiple rows of annotation prediction data in the initial input table.
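The two alternative construction conditions (row count or elapsed time) can be sketched as a small buffer that flushes into an initial input table; the class and parameter names are invented for illustration:

```python
import time

class InputTableBuilder:
    """Buffer annotation-prediction rows; flush them as an 'initial
    input table' once a count threshold or a time threshold is met."""
    def __init__(self, max_rows=4, max_seconds=5.0, clock=time.monotonic):
        self.max_rows = max_rows
        self.max_seconds = max_seconds
        self.clock = clock          # injectable for testing
        self.rows = []
        self.started = None

    def add(self, row):
        if self.started is None:
            self.started = self.clock()
        self.rows.append(row)
        count_hit = len(self.rows) >= self.max_rows
        time_hit = self.clock() - self.started >= self.max_seconds
        if count_hit or time_hit:
            table, self.rows, self.started = self.rows, [], None
            return table            # the constructed initial input table
        return None                 # keep buffering
```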
Optionally, the annotation prediction data carries an initial data identifier; the step of performing an aggregation operation and identifier adjustment on each row of annotation prediction data in the slicing window to generate a dynamic result table comprises:
performing an aggregation operation on each row of annotation prediction data in the slicing window, and generating the operation result corresponding to each row;
adjusting each initial data identifier in the slicing window to obtain the target data identifier corresponding to each row of annotation prediction data;
and associating the target data identifiers with the operation results to generate a dynamic result table.
Optionally, the step of adjusting each initial data identifier in the slicing window to obtain the target data identifier corresponding to each row of annotation prediction data comprises:
selecting a target identifier from the initial data identifiers in the slicing window;
updating the last initial data identifier in the slicing window with the target identifier;
increasing the target identifier by a preset value;
selecting, from the initial data identifiers that have not yet been updated, an initial data identifier to be updated;
updating the initial data identifier to be updated with the increased target identifier;
jumping back to the step of increasing the target identifier by the preset value until all the initial data identifiers have been updated;
and taking all the identifiers at the current moment as the target data identifiers corresponding to each row of annotation prediction data.
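These steps leave some room for interpretation in the translated text. Under one plausible reading (the target identifier is taken as the first identifier in the window, and rows are renumbered from the last row backwards), they can be sketched as:

```python
def adjust_identifiers(initial_ids, step=1):
    """Renumber the window's identifiers: the chosen target identifier
    goes to the LAST row, then it is repeatedly increased by `step` and
    assigned to the remaining rows, working backwards, until every row
    has a new (target) identifier.  One possible interpretation only."""
    target = initial_ids[0]                 # chosen target identifier
    adjusted = [None] * len(initial_ids)
    for offset, idx in enumerate(range(len(initial_ids) - 1, -1, -1)):
        adjusted[idx] = target + offset * step
    return adjusted
```

With `initial_ids = [10, 11, 12, 13]` this yields `[13, 12, 11, 10]`: the newest row now carries the earliest identifier, which is how rows collected after the shifted current moment can appear in the same window as past rows.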
Optionally, the step of performing a join operation on the initial input table and the dynamic result table to generate and display a target output table comprises:
traversing the initial input table and the dynamic result table;
sequentially updating the initial data identifiers with the target data identifiers to obtain intermediate annotation prediction data;
sequentially associating the intermediate annotation prediction data with the operation results to obtain target annotation prediction data;
and constructing and displaying a target output table from all the target annotation prediction data.
Optionally, the method further comprises:
comparing the rows of target annotation prediction data in the target output table, and determining the change amplitude between rows;
if the change amplitude is greater than a preset change threshold, judging that the annotation prediction model is in an unstable state;
and if the change amplitude is less than or equal to the change threshold, judging that the annotation prediction model is in a stable state.
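A minimal sketch of this stability check, assuming the change amplitude is measured as the absolute difference between scalar values in consecutive rows (the patent does not fix the metric):

```python
def model_stability(row_values, change_threshold):
    """Judge the annotation prediction model stable if no change
    amplitude between consecutive rows exceeds the threshold."""
    amplitudes = [abs(b - a) for a, b in zip(row_values, row_values[1:])]
    if all(amp <= change_threshold for amp in amplitudes):
        return "stable"
    return "unstable"
```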
Optionally, the method further comprises:
if the annotation prediction model is judged to be in an unstable state, dividing all the target annotation prediction data into a training set and a test set according to a preset split ratio;
training the annotation prediction model on the training set to obtain an updated annotation prediction model;
sequentially inputting the target annotation prediction data of the test set into the updated annotation prediction model to obtain a number of updated output results;
comparing the updated output results and determining the updated change amplitude between them;
if the updated change amplitude is greater than the change threshold, taking the updated annotation prediction model as the current annotation prediction model and jumping back to the step of training it on the training set to obtain an updated annotation prediction model;
and if the updated change amplitude is less than or equal to the change threshold, judging that training of the updated annotation prediction model is complete and taking it as the new annotation prediction model.
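The retraining loop above can be sketched as follows; `fit` and `predict` stand in for the caller's actual training and inference routines, and the whole structure is an illustrative assumption rather than the patent's implementation:

```python
def retrain_until_stable(model, train_set, test_set, threshold,
                         fit, predict, max_rounds=10):
    """Repeatedly retrain the model; stop once the updated outputs on
    the test set no longer jump by more than the change threshold."""
    for _ in range(max_rounds):
        model = fit(model, train_set)              # updated model
        outputs = [predict(model, x) for x in test_set]
        amps = [abs(b - a) for a, b in zip(outputs, outputs[1:])]
        if all(a <= threshold for a in amps):
            return model                           # training complete
    return model                                   # give up after max_rounds
```

A toy run with a mock "model" (a scale factor that `fit` halves each round) stops as soon as the output jumps drop below the threshold.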
The second aspect of the present invention provides a display device for annotation data stream processing, comprising:
a data stream acquisition module, configured to acquire a data stream to be annotated;
an annotation prediction module, configured to input the data stream to be annotated into a preset annotation prediction model and sequentially generate annotation prediction data corresponding to each frame of data to be annotated;
an initial input table construction module, configured to construct an initial input table and create a slicing window from the generated annotation prediction data when the annotation prediction data meets a preset construction condition;
a window data processing module, configured to perform an aggregation operation and identifier adjustment on each row of annotation prediction data in the slicing window to generate a dynamic result table;
and an association display module, configured to perform a join operation on the initial input table and the dynamic result table to generate and display a target output table.
A third aspect of the present invention provides an electronic device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the display method for annotation data stream processing according to the first aspect of the invention.
A fourth aspect of the invention provides a computer-readable storage medium storing a computer program which, when executed, implements the display method for annotation data stream processing according to the first aspect of the invention.
From the above technical scheme, the invention has the following advantages:
A display device for annotation data stream processing acquires the data stream to be annotated collected by the unmanned vehicle during actual driving, feeds it frame by frame into a preset annotation prediction model, and sequentially generates the annotation prediction data corresponding to each frame. When the generated annotation prediction data meets a preset construction condition, an initial input table is constructed from it and a slicing window is created; the required aggregation operation is performed on each row of annotation prediction data in the window to obtain operation results, the initial data identifiers originally carried by the rows are adjusted to obtain target data identifiers, and the operation results and target data identifiers, kept in the original row order, form a dynamic result table. A join operation is then performed on the initial input table and the dynamic result table according to a preset join condition, generating a target output table for display.
This solves the technical problem that the prior art can analyze the data before and after the current moment only by batch processing, which cannot guarantee the real-time performance of stream processing. By adjusting the initial data identifiers of the annotation prediction data, the position of the current moment is shifted so that past, present, and future data exist in the same slicing window at the same time, and jumps and similar anomalies in the annotation prediction data around the current moment can be analyzed more efficiently while preserving real-time stream processing.
Drawings
To illustrate the embodiments of the invention and the prior-art technical solutions more clearly, the drawings used in their description are briefly introduced below. The drawings described below are obviously only some embodiments of the invention, and a person skilled in the art could derive other drawings from them without inventive effort.
FIG. 1 is a flowchart illustrating a method for displaying annotation data stream processing according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for displaying annotation data stream processing according to a second embodiment of the present invention;
FIG. 3 is a block diagram of a display device for annotation data stream processing according to a third embodiment of the present invention.
Detailed Description
The embodiments of the invention provide a display method, apparatus, device, and medium for annotation data stream processing, which solve the technical problem that the prior art can analyze the data before and after the current moment only by batch processing, so the real-time performance of the data stream processing cannot be guaranteed.
To make the objects, features, and advantages of the invention clearer, the technical solutions in the embodiments are described in detail below with reference to the accompanying drawings. The embodiments described below are clearly only some, not all, embodiments of the invention; all other embodiments obtained by a person skilled in the art without inventive effort fall within the scope of the invention.
Referring to FIG. 1, FIG. 1 is a flowchart illustrating the steps of a display method for annotation data stream processing according to a first embodiment of the invention. The method may be implemented by a display device for annotation data stream processing, which in turn may be realized in software and/or hardware; the specific implementation is described below.
The invention provides a display method for annotation data stream processing, comprising the following steps:
Step 101, acquiring a data stream to be annotated;
The data stream to be annotated in the embodiment of the invention is a data stream collected in time order by a camera device or sensing device about the environment in which the device is located. It comprises multiple frames of data to be annotated, each containing at least one object, and thus provides the data basis for object recognition and object annotation of that environment.
It should be noted that the data stream to be annotated may be a video stream, an image stream, a speed stream, or other streaming data, and each frame of data to be annotated may be, for example, an image.
In the embodiment of the invention, the data stream to be annotated for the current environment is acquired in real time by a camera device, sensing device, or the like, and uploaded in real time over a wired or wireless network to the display device for annotation data stream processing, thereby completing data acquisition.
Step 102, inputting the data stream to be annotated into a preset annotation prediction model, and sequentially generating annotation prediction data corresponding to each frame of data to be annotated;
The annotation prediction model in the embodiment of the invention is a model that determines prediction information for each target object by recognizing the target objects in each frame of data to be annotated. The prediction information includes, but is not limited to, object position, object class, the detectable (unoccluded) polygon range, the predicted actual polygon range, object tracking ID, object movement speed, direction, and so on. The model may be a region-based convolutional neural network (R-CNN), a lightweight network such as MobileNet, or another object detection model.
After the data stream to be annotated uploaded by the unmanned vehicle is obtained, it is input into an annotation prediction model deployed locally or in the cloud. The model performs object detection on each frame in the order the stream arrives, performs the corresponding annotation prediction, and sequentially generates the annotation prediction data corresponding to each frame of data to be annotated.
It should be noted that the annotation prediction data may be represented as row data, each row indicating the annotation information for all objects in one frame of data to be annotated.
Step 103, when the annotation prediction data meets a preset construction condition, constructing an initial input table from the generated annotation prediction data and creating a slicing window;
The preset construction condition is a count threshold or time threshold that the number of generated rows, or the generation time, of the annotation prediction data must reach while the data is being produced.
In the embodiment of the invention, because the annotation prediction data is generated in a streaming fashion, once part of it has been produced and its count or generation time meets the preset construction condition, the generated annotation prediction data is used to construct an initial input table, providing the data basis for subsequent windowed processing.
Step 104, performing an aggregation operation and identifier adjustment on each row of annotation prediction data in the slicing window to generate a dynamic result table;
A slicing window is the technique, in a data processing engine that handles unbounded data sets, of cutting an ever-growing unbounded data set into finite blocks so that the engine can run aggregation operations over them. An aggregation operation may include, but is not limited to, a minimum, an average, a sum of squared differences, a percentile, or any formula that takes several values as input and outputs a single value. Slicing windows include, but are not limited to, the following types: fixed (tumbling) time windows, sliding time windows, session windows, count windows, and so on.
After the initial input table is obtained, a slicing window is created over it, and the aggregation operation is run on each row of annotation prediction data in the window to determine the operation result for the window. At the same time, the original identifier of each row can be adjusted so that future data and past data are present in the same slicing window simultaneously; once the operation results and the target data identifiers are obtained, they are stored as the dynamic result table corresponding to the initial input table.
Step 105, performing a join operation on the initial input table and the dynamic result table to generate and display a target output table.
After the dynamic result table is obtained, the join operation is performed on the initial input table and the dynamic result table, and the target output table is generated and displayed.
It should be noted that the association operation is a join operation: records of two (or more) tables are combined using an attribute they share. Joins may include, but are not limited to, the following categories: cross join, natural join, inner join, outer join, self join, and so on; outer joins include the left outer join, right outer join, and full outer join. The dynamic result table stores the target data identifiers, while each row of the initial input table carries, besides the annotation prediction data, an initial data identifier; the two identifiers belong to the same attribute, so the initial input table and the dynamic result table can be joined by setting the join condition to [initial data identifier = target data identifier].
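The described join can be sketched with plain dictionaries; the field names (`initial_id`, `target_id`, `agg`) are assumptions, since the patent fixes only the join condition itself:

```python
def join_tables(initial_input, dynamic_result):
    """Inner join of the initial input table with the dynamic result
    table on [initial data identifier == target data identifier]."""
    by_target_id = {row["target_id"]: row for row in dynamic_result}
    output = []
    for row in initial_input:
        match = by_target_id.get(row["initial_id"])
        if match is not None:   # join condition satisfied
            output.append({**row, "agg": match["agg"]})
    return output
```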
In the embodiment of the invention, a display device for annotation data stream processing acquires the data stream to be annotated collected by the unmanned vehicle during actual driving, feeds it frame by frame into a preset annotation prediction model, and sequentially generates the annotation prediction data corresponding to each frame. When the generated annotation prediction data meets a preset construction condition, an initial input table is constructed from it and a slicing window is created; the required aggregation operation is performed on each row of annotation prediction data in the window to obtain operation results, the initial data identifiers originally carried by the rows are adjusted to obtain target data identifiers, and the operation results and target data identifiers, kept in the original row order, form a dynamic result table. A join operation is then performed on the initial input table and the dynamic result table according to a preset join condition, generating a target output table for display.
This solves the technical problem that the prior art can analyze the data before and after the current moment only by batch processing, which cannot guarantee the real-time performance of stream processing. By adjusting the initial data identifiers of the annotation prediction data, the position of the current moment is shifted so that past, present, and future data exist in the same slicing window at the same time, and jumps and similar anomalies in the annotation prediction data around the current moment can be analyzed more efficiently while preserving real-time stream processing.
Referring to FIG. 2, FIG. 2 is a flowchart illustrating a display method for annotation data stream processing according to a second embodiment of the present invention.
The invention provides a display method for annotation data stream processing, comprising the following steps:
Step 201, acquiring a data stream to be annotated;
The data stream to be annotated in the embodiment of the invention is a data stream collected in time order by a camera device or sensing device about the environment in which the device is located. It comprises multiple frames of data to be annotated, each containing at least one object, and thus provides the data basis for object recognition and object annotation of that environment.
Optionally, the method is applied to a processor of an unmanned vehicle, said processor being communicatively connected to various sensors provided by said unmanned vehicle, step 201 may comprise the sub-steps of:
acquiring environmental data of the environment where the unmanned vehicle is located in real time through the various sensors;
and receiving the environmental data according to the acquisition time sequence of the environmental data, and sequencing to obtain a data stream to be annotated.
An unmanned vehicle refers to an autonomous vehicle (self-driving car), also called a driverless car, computer-driven car, or wheeled mobile robot: an intelligent vehicle that achieves unmanned driving through a computer system. Through the cooperation of artificial intelligence, visual computing, radar, monitoring devices, and a global positioning system, a computer can control the motor vehicle without any active human operation.
In the embodiment of the invention, the environmental data of the current environment of the unmanned vehicle is acquired in real time through various sensors arranged on the unmanned vehicle, such as an imaging device, an environment sensor, a temperature sensor, and a speed sensor. The environmental data is uploaded in real time, via a wired or wireless network and in its acquisition time sequence, to the display device for annotation data stream processing in the processor of the unmanned vehicle; the device then sorts the environmental data by acquisition time, thereby obtaining the data stream to be annotated.
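As an illustration of the collection-and-ordering step above, the following minimal sketch sorts sensor readings by acquisition timestamp to form the stream (the function name `build_stream` and the `(timestamp, frame)` tuple layout are illustrative assumptions, not part of the patent):

```python
def build_stream(readings):
    """Sort sensor readings by acquisition timestamp to form the
    data stream to be annotated (earliest frame first)."""
    return [frame for ts, frame in sorted(readings, key=lambda r: r[0])]

# Readings may arrive out of order over the network.
readings = [(3, "frame_c"), (1, "frame_a"), (2, "frame_b")]
stream = build_stream(readings)
print(stream)  # ['frame_a', 'frame_b', 'frame_c']
```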
Step 202, inputting a data stream to be marked into a preset marking prediction model, and sequentially generating marking prediction data respectively corresponding to each frame of data to be marked in the data stream to be marked;
optionally, step 202 may comprise the sub-steps of:
inputting the data stream to be marked into a preset marking prediction model;
carrying out object identification on each frame of data to be marked in the data stream to be marked through the marking prediction model in sequence, and determining predicted objects corresponding to each frame of data to be marked respectively;
generating feature labeling information corresponding to each predicted object according to the object type of each predicted object through a labeling prediction model;
And sequencing the feature labeling information to construct labeling prediction data corresponding to each frame of data to be labeled.
The annotation prediction model in the embodiment of the invention refers to a model that determines prediction information for each target object by identifying each target object in each frame of data to be annotated in the data stream. The prediction information includes, but is not limited to, object position, object class, detectable (unoccluded) polygon range, predicted actual polygon range, object tracking ID, object movement speed, direction, and the like. The model can be a region-based convolutional neural network (R-CNN), a lightweight neural network such as MobileNet, or another object detection model.
In the embodiment of the invention, after the data stream to be annotated is obtained, the data stream to be annotated is input into a preset annotation prediction model according to frames, object identification is carried out on each frame of data to be annotated through the annotation prediction model, so that a predicted object in each frame of data to be annotated is determined, and when the predicted object is determined, feature annotation information corresponding to the predicted object can be determined through the annotation prediction model according to the object type of the predicted object.
For example, predicted objects such as cars, pedestrians, bicycles, and trucks are identified in the data to be annotated; different feature annotation information can then be extracted for each predicted object based on its object type. Since cars and trucks are motor vehicles, their position, moving speed, direction, size, and other feature annotation information can be determined; since a bicycle is a non-motor vehicle, its detectable range, predicted direction, predicted speed, and other feature annotation information can be predicted; since a pedestrian is an object of particular attention, the pedestrian's position, travel intention, and other feature annotation information can be predicted.
After the feature annotation information is obtained, it can be recorded as line data, ordered with the predicted object as the unit; each line records the feature annotation information of all predicted objects in one frame of data to be annotated, thereby constructing the annotation prediction data corresponding to each frame.
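The line-data construction described above can be sketched as follows; a hedged illustration in which one dictionary per frame stands in for a table row, and ordering by tracking ID (an assumed choice, not specified in the patent) keeps rows comparable across frames:

```python
def build_row(frame_id, objects):
    """One line of annotation prediction data: the feature annotation
    information of every predicted object in the frame, ordered by
    tracking ID so that rows are comparable across frames."""
    ordered = sorted(objects, key=lambda o: o["track_id"])
    return {"frame": frame_id, "objects": ordered}

objects = [
    {"track_id": 7, "type": "car", "speed": 12.5},
    {"track_id": 3, "type": "pedestrian", "speed": 1.2},
]
row = build_row(18, objects)
print([o["track_id"] for o in row["objects"]])  # [3, 7]
```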
Step 203, when the labeling prediction data meets the preset construction conditions, constructing an initial input table by adopting the generated labeling prediction data and creating a fragment window; the labeling prediction data is provided with an initial data identifier;
optionally, step 203 may comprise the sub-steps of:
when the number of generated lines of annotation prediction data reaches a preset number threshold, constructing an initial input table by using the generated annotation prediction data;
or when the generation time of the annotation prediction data reaches a preset time threshold, constructing an initial input table by using the generated annotation prediction data;
a slice window is created using the multiple lines of annotated prediction data within the initial input table.
A slice window refers to a technical means, in a data processing engine that processes unbounded data sets, of cutting an ever-growing unbounded data set into finite data blocks so that the engine can perform an aggregation operation on those blocks. The aggregation operation may include, but is not limited to, a minimum, a maximum, an average, a sum of squares, a variance, a percentile value, or any operation based on a mathematical formula that takes a plurality of values as input and outputs a single value. The slice window includes, but is not limited to, the following types: fixed time window, sliding time window, session window, count window, and the like.
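As a count-based stand-in for the time windows listed above, the following sketch shows fixed (non-overlapping) and sliding (overlapping) windows over a finite list (the function names and the element-count formulation are illustrative assumptions):

```python
def fixed_windows(stream, size):
    """Non-overlapping blocks of `size` elements (a count window)."""
    return [stream[i:i + size] for i in range(0, len(stream), size)]

def sliding_windows(stream, size, step):
    """Overlapping blocks of `size` elements advancing by `step`."""
    return [stream[i:i + size] for i in range(0, len(stream) - size + 1, step)]

data = list(range(6))
print(fixed_windows(data, 3))       # [[0, 1, 2], [3, 4, 5]]
print(sliding_windows(data, 3, 1))  # [[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5]]
```

Once a block is cut, the aggregation operation is applied to the block as a whole, which is what makes aggregation over an unbounded stream feasible.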
In one example of the present invention, when the number of lines of annotation prediction data generated by the annotation prediction model reaches a preset number threshold, the generated annotation prediction data can be ordered by frame number to construct an initial input table; alternatively, when the generation time of the annotation prediction data reaches a preset time threshold, the generated annotation prediction data can likewise be ordered by frame number to construct the initial input table.
After the initial input table is constructed, some or all lines of annotation prediction data can be taken from it to construct a slice window.
It should be noted that the initial data identifier may be the frame number within the data stream to be annotated; for example, if the annotation prediction data is extracted from the 18th frame of the data stream, the initial data identifier may be 18.
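A minimal sketch of the construction condition and the frame-number identifier described above (the thresholds, field names, and helper names are assumptions for illustration):

```python
def should_build_table(rows, elapsed_s, row_threshold=100, time_threshold_s=5.0):
    """Construction condition: enough lines of annotation prediction
    data generated, or enough time elapsed since generation began."""
    return len(rows) >= row_threshold or elapsed_s >= time_threshold_s

def attach_initial_ids(rows, first_frame):
    """Initial data identifier = frame number of the source frame."""
    return [{"id": first_frame + i, **row} for i, row in enumerate(rows)]

rows = attach_initial_ids([{"speed": 10.0}, {"speed": 11.0}], first_frame=18)
print([r["id"] for r in rows])  # [18, 19]
```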
Step 204, performing aggregation operation by adopting each row of marking prediction data in the slicing window, and generating an operation result corresponding to each row of marking prediction data;
in the embodiment of the invention, after multiple lines or all of the annotation prediction data in the initial input table are selected to construct a slice window, the type of aggregation operation (such as average, maximum, minimum, sum of squares, variance, or percentile value) can be chosen according to user requirements. The aggregation operation is performed on each line of annotation prediction data in the slice window, computing over the same type of data in that line and in the other lines, to generate an operation result corresponding to each line of annotation prediction data.
In a specific implementation, taking the slice window type as a fixed time window t(x-10) to t(x) and the calculation of an average object speed as an example, an average operation can be performed on the object speed in each line of annotation prediction data with initial data identifiers t(x-10) to t(x); the average over these lines is taken as the operation result of each line of annotation prediction data and recorded in the output of each line.
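The per-window average computation above can be sketched as follows, with each line recording the shared result in its output field (the field names and the `statistics.mean` choice are illustrative):

```python
from statistics import mean

def window_aggregate(window_rows, key="speed", op=mean):
    """Apply one aggregation operation (here: average object speed)
    over all rows in the slice window; every row records the same
    operation result in its output field."""
    result = op(row[key] for row in window_rows)
    return [{**row, "output": result} for row in window_rows]

window = [{"id": i, "speed": float(s)} for i, s in enumerate([10, 12, 14])]
for row in window_aggregate(window):
    print(row["id"], row["output"])  # every row records 12.0
```

Other aggregation types (maximum, minimum, percentile, and so on) can be substituted by passing a different `op` callable.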
Step 205, respectively adjusting each initial data identifier in the segmentation window to obtain a target data identifier corresponding to each line of marking prediction data;
optionally, step 205 may comprise the sub-steps of:
selecting a target identifier from the initial data identifiers in the slicing window;
updating the last initial data identifier in the slicing window by adopting the target identifier;
increasing a target mark according to a preset numerical value;
selecting an initial data identifier to be updated from the initial data identifiers which are not updated;
updating the initial data identifier to be updated by adopting the increased target identifier;
jumping back to the step of increasing the target identifier by the preset value, until all initial data identifiers are updated;
and determining all initial data identifiers at the current moment as target data identifiers corresponding to each row of marking prediction data.
In the embodiment of the invention, any identifier can be selected from the initial data identifiers in the slice window as the target identifier. The target identifier is then used to replace and update the last initial data identifier in the slice window. After that replacement, the target identifier is increased by a preset value; among the initial data identifiers that have not yet been updated, the last one is selected as the identifier to be updated and is replaced with the increased target identifier. The target identifier is increased by the preset value again, and the process repeats until all initial data identifiers are updated. At this point, all identifiers at the current moment can be determined as the target data identifiers corresponding to each line of annotation prediction data.
For example, suppose the slice window stores annotation prediction data with initial data identifiers t(x-10) to t(x), and the target identifier selected is t(x-5). The last initial data identifier in the slice window is t(x-10), so t(x-10) is replaced and updated to t(x-5); the target identifier is then increased by the preset value 1 to t(x-4). The initial data identifiers not yet updated are t(x-9) to t(x); the last of these, t(x-9), is selected as the identifier to be updated and is replaced with the increased target identifier t(x-4). This continues until all initial data identifiers are updated, yielding target data identifiers t(x-5) to t(x+5).
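Because each remaining identifier receives the previous target identifier plus the preset value, the whole procedure is equivalent to relabeling the window with consecutive values starting from the target identifier; a minimal sketch (the names are illustrative):

```python
def adjust_identifiers(initial_ids, target, step=1):
    """Replace the earliest identifier with the target identifier,
    then walk forward, increasing the target by `step` for each
    remaining identifier, until all identifiers are updated."""
    adjusted = []
    for _ in initial_ids:
        adjusted.append(target)
        target += step
    return adjusted

x = 100
initial = list(range(x - 10, x + 1))       # t(x-10) .. t(x)
print(adjust_identifiers(initial, x - 5))  # t(x-5) .. t(x+5): [95, ..., 105]
```

Choosing the window midpoint as the target identifier, as in the example, shifts the notional current moment to the middle of the window, so half the rows appear as past data and half as future data.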
Step 206, establishing an association between the target data identifiers and the operation results to generate a dynamic result table;
in the embodiment of the invention, after the target data identification and the operation result are acquired, the target data identification and the operation result can be used for establishing association to generate a dynamic result table.
And 207, performing association operation by adopting the initial input table and the dynamic result table, generating a target output table and displaying the target output table.
Further, step 207 may comprise the sub-steps of:
traversing an initial input table and a dynamic result table;
sequentially updating the initial data identification by adopting the target data identification to obtain middle annotation prediction data;
sequentially associating the intermediate annotation prediction data with the operation result to obtain target annotation prediction data;
and constructing a target output table by adopting all target labeling prediction data and displaying the target output table.
In a specific implementation, the stream processing system can support a plurality of dynamic tables simultaneously. After the dynamic result table is obtained, the initial input table and the dynamic result table can be traversed, and the target data identifiers are used to update all the initial data identifiers in the initial input table in sequence, yielding the intermediate annotation prediction data. The intermediate annotation prediction data is then associated with the operation results in sequence to obtain the target annotation prediction data. Specifically, after the operation result corresponding to the slice window is obtained, it may be recorded against each line of intermediate annotation prediction data; for example, if the operation result is an average value a, then a is recorded as the operation result of each line.
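The traversal-and-association step above can be sketched as follows; rows are dictionaries, and the field names `id` and `output` are assumptions for illustration:

```python
def associate(input_rows, target_ids, result):
    """Traverse the initial input table: replace each initial data
    identifier with its target data identifier (yielding intermediate
    annotation prediction data), then attach the aggregation result
    to form the target annotation prediction data."""
    output_table = []
    for row, tid in zip(input_rows, target_ids):
        intermediate = {**row, "id": tid}          # identifier updated
        output_table.append({**intermediate, "output": result})
    return output_table

rows = [{"id": 90, "speed": 10.0}, {"id": 91, "speed": 12.0}]
table = associate(rows, [95, 96], result=11.0)
print([(r["id"], r["output"]) for r in table])  # [(95, 11.0), (96, 11.0)]
```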
After the target annotation prediction data is obtained, the target output table is constructed from all of it and displayed. Through UI display, the target output table can be further analyzed by engineers, achieving the effect of obtaining window feature information for each object over a past-and-future range within a stream processing system.
Optionally, the method further comprises:
comparing each row of target annotation prediction data in the target output table, and determining the change amplitude among each row of target annotation prediction data;
if the change amplitude is larger than a preset change threshold value, judging that the labeling prediction model is in an unstable state;
and if the variation amplitude is smaller than or equal to the variation threshold value, judging that the labeling prediction model is in a stable state.
In the embodiment of the invention, after the target output table is obtained, each line of target annotation prediction data in the table represents the object annotation prediction result in one of several frames before and after the current moment. To determine whether unexpected changes occur within the same time window, the lines of target annotation prediction data in the target output table can be compared, and the change amplitude between them determined by comparing the object annotation prediction results at the same position in each line. If the change amplitude is larger than a preset change threshold, the annotation prediction model has made recognition errors in the frames around the current moment; it is judged to be in an unstable state and awaits further optimization and improvement of the model. If the change amplitude is smaller than or equal to the change threshold, the prediction results of the annotation prediction model contain no impermissible errors in the frames around the current moment, and the model is judged to be in a stable state.
It should be noted that the change threshold may take various forms, including but not limited to the maximum allowable number of jumps of the same object type, the allowable difference in size judgments of the same object, the maximum number of changes of the tracking ID of the same object, a change amplitude, and the like.
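A minimal sketch of the stability judgment above, using the largest jump between consecutive lines of a single scalar field as the change amplitude (one of the several possible amplitude definitions named in the text):

```python
def change_amplitude(values):
    """Largest jump between consecutive lines of target annotation
    prediction data (here reduced to one scalar field per line)."""
    return max(abs(b - a) for a, b in zip(values, values[1:]))

def model_state(values, threshold):
    """Unstable if the amplitude exceeds the preset change threshold."""
    return "unstable" if change_amplitude(values) > threshold else "stable"

print(model_state([10.0, 10.5, 11.0], threshold=2.0))  # stable
print(model_state([10.0, 18.0, 11.0], threshold=2.0))  # unstable
```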
Further, the method further comprises:
if the labeling prediction model is in an unstable state, dividing all target labeling prediction data into a training set and a testing set according to a preset dividing proportion;
training the labeling prediction model by adopting a training set to obtain an updated labeling prediction model;
inputting target annotation prediction data in the test set into the updated annotation prediction model in sequence to obtain a plurality of updated output results;
comparing the plurality of updating output results, and determining the updating change amplitude among the updating output results;
if the update change amplitude is greater than the change threshold, determining the updated annotation prediction model as the new annotation prediction model, and jumping back to the step of training the annotation prediction model with the training set to obtain an updated annotation prediction model;
if the update change amplitude is smaller than or equal to the change threshold value, judging that the update annotation prediction model training is completed, and determining the update annotation prediction model as a new annotation prediction model.
In another example of the present invention, if the annotation prediction model is judged to be in an unstable state, the target annotation prediction data at this time contains a large amount of valuable data. To further optimize the model, all target annotation prediction data may be divided into a training set and a test set according to a preset division ratio. The ratio may be, for example, training set : test set = 9:1 or 8:2; embodiments of the invention are not limited in this regard.
After the training set is obtained, the annotation prediction model is trained with it: the target annotation prediction data in the training set is input into the model one by one, and the model parameters are adjusted after each annotation prediction result is generated. Further target annotation prediction data is processed until the accuracy of the model's annotation prediction results reaches a preset threshold, at which point the updated annotation prediction model is obtained. The model parameters may be adjusted by gradient descent or a similar method.
After the test set and the updated annotation prediction model are obtained, the test set can be used to further check the performance of the updated model: the target annotation prediction data in the test set is input into the updated model in sequence to obtain a plurality of updated output results, and the updated output results are compared to determine the update change amplitude between them. If the update change amplitude is larger than the change threshold, the updated annotation prediction model is trained again with the training set; if it is smaller than or equal to the change threshold, training of the updated annotation prediction model is judged complete, and the updated model is determined as the new annotation prediction model.
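The division into training and test sets can be sketched as follows; the shuffle, seed, and default ratio are illustrative assumptions:

```python
import random

def split_dataset(rows, train_ratio=0.9, seed=0):
    """Shuffle and divide the target annotation prediction data into
    training and test sets according to a preset division ratio."""
    rows = rows[:]                        # leave the caller's list intact
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * train_ratio)
    return rows[:cut], rows[cut:]

train, test = split_dataset(list(range(10)), train_ratio=0.8)
print(len(train), len(test))  # 8 2
```

The training set then drives parameter adjustment, while the held-out test set supplies the updated output results whose change amplitude decides whether another training round is needed.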
In the embodiment of the invention, a display device for annotation data stream processing acquires the data stream to be annotated collected by the unmanned vehicle during actual driving, inputs it frame by frame into a preset annotation prediction model, and sequentially generates the annotation prediction data corresponding to each frame. When the generated annotation prediction data meets the preset construction condition, an initial input table is constructed from it and a slice window is created; the required aggregation operation is performed on each line of annotation prediction data in the slice window to obtain operation results, and the initial data identifiers originally carried by each line are adjusted to obtain target data identifiers; the operation results and target data identifiers are then ordered according to the original order of the annotation prediction data to generate a dynamic result table. Finally, an association operation is performed on the initial input table and the dynamic result table according to preset association conditions, thereby generating and displaying the target output table.
This solves the technical problem in the prior art that data before and after the current moment can only be analyzed by batch processing, so that the real-time performance of the data stream processing process cannot be guaranteed. By adjusting the initial data identifiers of the annotation prediction data, the position of the current moment is shifted so that current and future data coexist in the same slice window, and jumps and other anomalies in the annotation prediction data before and after the current moment can be analyzed more efficiently while the real-time performance of stream processing is preserved.
Referring to fig. 3, fig. 3 is a block diagram illustrating a display device for labeling data stream processing according to a third embodiment of the present invention.
The embodiment of the invention provides a display device for processing a marked data stream, which comprises:
a data stream obtaining module 301, configured to obtain a data stream to be annotated;
the annotation prediction module 302 is configured to input the data stream to be annotated into a preset annotation prediction model, and sequentially generate annotation prediction data corresponding to each frame of data to be annotated in the data stream to be annotated;
an initial input table construction module 303, configured to construct an initial input table and create a fragment window using the generated annotation prediction data when the annotation prediction data meets a preset construction condition;
the window data processing module 304 is configured to perform an aggregation operation and an identifier adjustment on each line of the label prediction data in the slice window, and generate a dynamic result table;
and the association display module 305 is used for performing association operation by adopting the initial input table and the dynamic result table, generating and displaying a target output table.
Optionally, the present apparatus is applied to a processor of an unmanned vehicle, where the processor is communicatively connected to various sensors disposed on the unmanned vehicle, and the data stream acquiring module 301 includes:
The environment data acquisition sub-module is used for acquiring environment data of the environment where the unmanned vehicle is located in real time through the various sensors;
and the data stream generation sub-module is used for receiving the environmental data according to the acquisition time sequence of the environmental data and sequencing the environmental data to obtain the data stream to be marked.
Optionally, the annotation prediction module 302 includes:
the data stream input sub-module is used for inputting the data stream to be marked into a preset marking prediction model;
the object identification sub-module is used for sequentially carrying out object identification on each frame of data to be marked in the data stream to be marked through the marking prediction model, and determining predicted objects corresponding to each frame of data to be marked respectively;
the feature annotation information generation sub-module is used for generating feature annotation information corresponding to each predicted object according to the object type of each predicted object through the annotation prediction model;
and the information sequencing sub-module is used for sequencing the feature labeling information and constructing labeling prediction data corresponding to the data to be labeled of each frame.
Optionally, the initial input table construction module 303 includes:
the initial input table construction submodule is used for constructing an initial input table by adopting the generated annotation prediction data when the number of the generated lines of the standard prediction data reaches a preset number threshold; or when the generation time of the standard prediction data reaches a preset time threshold value, constructing an initial input table by using the generated annotation prediction data;
And the segment window creation sub-module is used for creating a segment window by adopting the multi-row annotation prediction data in the initial input table.
Optionally, the labeling prediction data is provided with an initial data identifier; the window data processing module 304 includes:
the aggregation operation sub-module is used for executing aggregation operation by adopting each row of marking prediction data in the slicing window to generate operation results corresponding to each row of marking prediction data;
the mark adjustment sub-module is used for respectively adjusting each initial data mark in the slicing window to obtain target data marks corresponding to the marking prediction data of each row;
and the result table association sub-module is used for establishing association with the operation result by adopting the target data identifier to generate a dynamic result table.
Optionally, the identification adjustment submodule is specifically configured to:
selecting a target identifier from the initial data identifiers in the slicing window;
updating the last initial data identifier in the slicing window by adopting the target identifier;
increasing the target mark according to a preset value;
selecting an initial data identifier to be updated from the initial data identifiers which are not updated;
updating the initial data identifier to be updated by adopting the increased target identifier;
jumping back to the step of increasing the target identifier by the preset value, until all the initial data identifiers are updated;
and determining all initial data identifiers at the current moment as target data identifiers corresponding to the labeling prediction data of each row.
Optionally, the association presentation module 305 includes:
a traversal submodule for traversing the initial input table and the dynamic result table;
the identification updating sub-module is used for sequentially updating the initial data identification by adopting the target data identification to obtain middle annotation prediction data;
the data and result association sub-module is used for sequentially associating the intermediate annotation prediction data with the operation result to obtain target annotation prediction data;
and the target output table construction and display sub-module is used for constructing and displaying a target output table by adopting all the target annotation prediction data.
Optionally, the apparatus further comprises:
the change amplitude determining module is used for comparing the target annotation prediction data of each row in the target output table and determining the change amplitude among the target annotation prediction data of each row;
the unstable state judging module is used for judging that the labeling prediction model is in an unstable state if the change amplitude is larger than a preset change threshold value;
And the stable state judging module is used for judging that the labeling prediction model is in a stable state if the change amplitude is smaller than or equal to the change threshold value.
Optionally, the apparatus further comprises:
the data set dividing module is used for dividing all the target annotation prediction data into a training set and a testing set according to a preset dividing proportion if the annotation prediction model is judged to be in an unstable state;
the model training module is used for training the labeling prediction model by adopting the training set to obtain an updated labeling prediction model;
the model test module is used for sequentially inputting target annotation prediction data in the test set into the updating annotation prediction model to obtain a plurality of updating output results;
the update change amplitude calculation module is used for comparing a plurality of update output results and determining update change amplitude among the update output results;
the training circulation module is used for determining the updated annotation prediction model as a new annotation prediction model if the updated variation amplitude is larger than the variation threshold value, and performing the step of training the annotation prediction model by adopting the training set in a jumping manner to obtain the updated annotation prediction model;
And the training completion judging module is used for judging that the training of the updated annotation prediction model is completed if the updated variation amplitude is smaller than or equal to the variation threshold value, and determining the updated annotation prediction model as a new annotation prediction model.
The embodiment of the invention also provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the computer program when executed by the processor causes the processor to execute the steps of the method for displaying the annotation data stream processing according to any embodiment of the invention.
An embodiment of the present invention provides a computer readable storage medium, on which a computer program is stored, where the computer program is executed to implement a presentation method for processing a labeling data stream according to any embodiment of the present invention.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus and modules described above may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for displaying a marked data stream process, comprising:
acquiring a data stream to be marked;
inputting the data stream to be annotated into a preset annotation prediction model, and sequentially generating annotation prediction data respectively corresponding to each frame of data to be annotated in the data stream to be annotated;
when the annotation prediction data meets preset construction conditions, constructing an initial input table by adopting the generated annotation prediction data and creating a slicing window;
performing an aggregation operation and identifier adjustment on each row of annotation prediction data in the slicing window to generate a dynamic result table;
performing association operation by adopting the initial input table and the dynamic result table, generating a target output table and displaying the target output table;
The annotation prediction data is provided with an initial data identifier; the step of performing an aggregation operation and identifier adjustment on each row of annotation prediction data in the slicing window to generate a dynamic result table comprises the following steps:
performing an aggregation operation by adopting each row of annotation prediction data in the slicing window, and generating an operation result corresponding to each row of annotation prediction data;
respectively adjusting each initial data identifier in the slicing window to obtain target data identifiers corresponding to each row of annotation prediction data;
establishing an association between the target data identifiers and the operation results to generate a dynamic result table;
the step of respectively adjusting each initial data identifier in the slicing window to obtain target data identifiers corresponding to each row of annotation prediction data comprises the following steps:
selecting a target identifier from the initial data identifiers in the slicing window;
updating the last initial data identifier in the slicing window by adopting the target identifier;
increasing the target identifier by a preset value;
selecting an initial data identifier to be updated from the initial data identifiers that have not been updated;
updating the initial data identifier to be updated by adopting the increased target identifier;
jumping to perform the step of increasing the target identifier by the preset value, until all the initial data identifiers are updated;
and determining all the initial data identifiers at the current moment as the target data identifiers corresponding to each row of annotation prediction data.
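As a hedged illustration only, the identifier-adjustment loop recited above (select a target identifier, write it to the last row, increase it by a preset value, and repeat for each not-yet-updated row) might be sketched as follows. The function name, the choice of the first window identifier as the initial target, and the strict back-to-front update order are assumptions not fixed by the claim:

```python
def adjust_identifiers(initial_ids, step=1):
    """Reassign the identifiers of a slicing window.

    A target identifier is selected from the initial identifiers
    (assumed here: the first one), the last row is updated with it,
    the target is increased by a preset value, and the process
    repeats for each not-yet-updated row until all rows are updated.
    """
    target = initial_ids[0]                    # assumed selection rule
    updated = list(initial_ids)
    for i in range(len(updated) - 1, -1, -1):  # last row first
        updated[i] = target
        target += step                         # preset increment value
    return updated
```

For example, `adjust_identifiers([7, 8, 9, 10])` reverses the run into `[10, 9, 8, 7]`, which makes every target identifier distinct from the initial identifier it replaces.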
2. The method of claim 1, applied to a processor of an unmanned vehicle, the processor being communicatively coupled to a plurality of sensors disposed on the unmanned vehicle, wherein the step of acquiring the data stream to be annotated comprises:
acquiring, in real time through the plurality of sensors, environmental data of the environment in which the unmanned vehicle is located;
and receiving the environmental data in the order of its acquisition time and sorting it to obtain the data stream to be annotated.
3. The method according to claim 1 or 2, wherein the step of inputting the data stream to be annotated into a preset annotation prediction model, and sequentially generating annotation prediction data corresponding to each frame of data to be annotated in the data stream to be annotated, includes:
inputting the data stream to be annotated into a preset annotation prediction model;
sequentially performing object recognition, through the annotation prediction model, on each frame of data to be annotated in the data stream, and determining the predicted objects corresponding to each frame of data to be annotated;
generating, through the annotation prediction model, feature annotation information corresponding to each predicted object according to the object type of each predicted object;
and sorting the feature annotation information to construct the annotation prediction data corresponding to each frame of data to be annotated.
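A minimal sketch of the two-stage pipeline in this claim, with the annotation prediction model's recognition and feature-annotation stages stubbed out as caller-supplied functions (`recognize`, `features_for`); sorting by a `"type"` key is an assumption, since the claim only requires the feature annotation information to be ordered:

```python
def annotate_stream(frames, recognize, features_for):
    """Per-frame object recognition followed by type-dependent
    feature annotation, yielding one row of annotation prediction
    data per frame of data to be annotated."""
    for frame in frames:
        predicted = recognize(frame)                  # predicted objects in this frame
        infos = [features_for(obj) for obj in predicted]
        # order the feature annotation information (assumed sort key)
        yield sorted(infos, key=lambda f: f["type"])
```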
4. The method according to claim 1 or 2, wherein the step of constructing an initial input table and creating a tile window using the generated annotation prediction data when the annotation prediction data satisfies a preset construction condition, comprises:
when the number of generated rows of annotation prediction data reaches a preset number threshold, constructing an initial input table by adopting the generated annotation prediction data;
or, when the generation time of the annotation prediction data reaches a preset time threshold, constructing an initial input table by adopting the generated annotation prediction data;
and creating a slicing window by using the multiple rows of annotation prediction data in the initial input table.
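The dual trigger in this claim (row-count threshold OR time threshold) can be sketched as a small buffer; the class name, default thresholds, and the choice to model the slicing window as a copy of the table rows are all illustrative assumptions:

```python
import time

class InitialTableBuilder:
    """Buffer annotation prediction rows and build the initial input
    table once either a row-count threshold or a time threshold is
    reached (whichever comes first)."""

    def __init__(self, row_threshold=4, time_threshold=2.0):
        self.row_threshold = row_threshold
        self.time_threshold = time_threshold
        self.rows = []
        self.started = None

    def add(self, row):
        if self.started is None:
            self.started = time.monotonic()   # start of generation time
        self.rows.append(row)
        if (len(self.rows) >= self.row_threshold
                or time.monotonic() - self.started >= self.time_threshold):
            table = self.rows                 # the initial input table
            window = list(table)              # slicing window over its rows
            self.rows, self.started = [], None
            return table, window
        return None                           # construction condition not met
```

`add` returns `None` until a construction condition is met, then returns the table together with its slicing window and resets the buffer for the next batch.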
5. The method according to claim 1 or 2, wherein the step of performing an association operation using the initial input table and the dynamic result table to generate and present a target output table comprises:
traversing the initial input table and the dynamic result table;
sequentially updating the initial data identifiers by adopting the target data identifiers to obtain intermediate annotation prediction data;
sequentially associating the intermediate annotation prediction data with the operation results to obtain target annotation prediction data;
and constructing a target output table by adopting all the target annotation prediction data and displaying the target output table.
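The association operation of this claim might look like the following row-wise join; representing the dynamic result table as `(target_id, result)` pairs aligned with the initial input table's rows is an assumption made for the sketch:

```python
def associate_tables(initial_table, dynamic_result_table):
    """Traverse both tables; replace each row's initial identifier
    with its target identifier (the intermediate annotation prediction
    data), then attach the aggregation result to form one row of the
    target output table."""
    target_output = []
    for row, (target_id, result) in zip(initial_table, dynamic_result_table):
        intermediate = {**row, "id": target_id}   # identifier updated
        intermediate["result"] = result           # operation result joined
        target_output.append(intermediate)
    return target_output
```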
6. The method of claim 5, wherein the method further comprises:
comparing each row of target annotation prediction data in the target output table, and determining the change amplitude between the rows of target annotation prediction data;
if the change amplitude is larger than a preset change threshold, determining that the annotation prediction model is in an unstable state;
and if the change amplitude is smaller than or equal to the change threshold, determining that the annotation prediction model is in a stable state.
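A sketch of the stability check, assuming scalar rows and taking the absolute difference between consecutive rows as the change amplitude (the claim does not fix the amplitude measure):

```python
def model_is_stable(target_rows, change_threshold):
    """Compare consecutive rows of the target output table and judge
    the annotation prediction model stable only if every change
    amplitude stays within the preset change threshold."""
    amplitudes = [abs(b - a) for a, b in zip(target_rows, target_rows[1:])]
    return all(amp <= change_threshold for amp in amplitudes)
```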
7. The method of claim 6, wherein the method further comprises:
if the annotation prediction model is determined to be in an unstable state, dividing all the target annotation prediction data into a training set and a test set according to a preset division ratio;
training the annotation prediction model by adopting the training set to obtain an updated annotation prediction model;
sequentially inputting the target annotation prediction data in the test set into the updated annotation prediction model to obtain a plurality of updated output results;
comparing the plurality of updated output results, and determining the updated change amplitude among the updated output results;
if the updated change amplitude is greater than the change threshold, determining the updated annotation prediction model as a new annotation prediction model, and jumping to perform the step of training the annotation prediction model by adopting the training set to obtain an updated annotation prediction model;
and if the updated change amplitude is smaller than or equal to the change threshold, determining that training of the updated annotation prediction model is completed, and determining the updated annotation prediction model as a new annotation prediction model.
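The retraining loop of this claim can be sketched as follows; `train` and `evaluate` are stand-in callables for the model update and the updated-change-amplitude measurement, and the `max_rounds` guard is an addition not present in the claim (the claim loops unconditionally until the amplitude falls within the threshold):

```python
def retrain_until_stable(data, split_ratio, train, evaluate, threshold,
                         max_rounds=10):
    """Split the target annotation prediction data by a preset ratio,
    retrain, and repeat while the updated change amplitude on the
    test set still exceeds the change threshold."""
    k = int(len(data) * split_ratio)
    train_set, test_set = data[:k], data[k:]
    model = None
    for _ in range(max_rounds):        # guard against non-convergence
        model = train(train_set, model)
        amplitude = evaluate(model, test_set)
        if amplitude <= threshold:
            break                      # updated model deemed trained
    return model
```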
8. A presentation device for annotation data stream processing, comprising:
the data stream acquisition module is used for acquiring a data stream to be marked;
the annotation prediction module is used for inputting the data stream to be annotated into a preset annotation prediction model and sequentially generating annotation prediction data respectively corresponding to each frame of data to be annotated in the data stream to be annotated;
the initial input table construction module is used for constructing an initial input table and creating a slicing window by adopting the generated annotation prediction data when the annotation prediction data meets preset construction conditions;
the window data processing module is used for performing an aggregation operation and identifier adjustment on each row of annotation prediction data in the slicing window to generate a dynamic result table;
the association display module is used for performing an association operation by adopting the initial input table and the dynamic result table, generating a target output table and displaying the target output table;
the annotation prediction data is provided with an initial data identifier; the window data processing module comprises:
the aggregation operation sub-module is used for performing an aggregation operation by adopting each row of annotation prediction data in the slicing window to generate operation results corresponding to each row of annotation prediction data;
the identifier adjustment sub-module is used for respectively adjusting each initial data identifier in the slicing window to obtain target data identifiers corresponding to each row of annotation prediction data;
the result table association sub-module is used for establishing an association between the target data identifiers and the operation results to generate a dynamic result table;
the identifier adjustment sub-module is specifically used for:
selecting a target identifier from the initial data identifiers in the slicing window;
updating the last initial data identifier in the slicing window by adopting the target identifier;
increasing the target identifier by a preset value;
selecting an initial data identifier to be updated from the initial data identifiers that have not been updated;
updating the initial data identifier to be updated by adopting the increased target identifier;
jumping to perform the step of increasing the target identifier by the preset value, until all the initial data identifiers are updated;
and determining all the initial data identifiers at the current moment as the target data identifiers corresponding to each row of annotation prediction data.
9. An electronic device comprising a memory and a processor, wherein the memory stores a computer program that, when executed by the processor, causes the processor to perform the steps of the presentation method of annotation data stream processing according to any of claims 1-7.
10. A computer readable storage medium having stored thereon a computer program, wherein the computer program when executed implements the presentation method of annotation data stream processing according to any of claims 1-7.
CN202111113433.4A 2021-09-18 2021-09-18 Display method, device, equipment and medium for processing annotation data stream Active CN113850929B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111113433.4A CN113850929B (en) 2021-09-18 2021-09-18 Display method, device, equipment and medium for processing annotation data stream

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111113433.4A CN113850929B (en) 2021-09-18 2021-09-18 Display method, device, equipment and medium for processing annotation data stream

Publications (2)

Publication Number Publication Date
CN113850929A CN113850929A (en) 2021-12-28
CN113850929B true CN113850929B (en) 2023-05-26

Family

ID=78979336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111113433.4A Active CN113850929B (en) 2021-09-18 2021-09-18 Display method, device, equipment and medium for processing annotation data stream

Country Status (1)

Country Link
CN (1) CN113850929B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107230349A (en) * 2017-05-23 2017-10-03 长安大学 A kind of online real-time short time traffic flow forecasting method
WO2017185576A1 (en) * 2016-04-25 2017-11-02 百度在线网络技术(北京)有限公司 Multi-streaming data processing method, system, storage medium, and device
CN109635793A (en) * 2019-01-31 2019-04-16 南京邮电大学 A kind of unmanned pedestrian track prediction technique based on convolutional neural networks
CN110569704A (en) * 2019-05-11 2019-12-13 北京工业大学 Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN110807123A (en) * 2019-10-29 2020-02-18 中国科学院上海微系统与信息技术研究所 Vehicle length calculation method, device and system, computer equipment and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102456069A (en) * 2011-08-03 2012-05-16 中国人民解放军国防科学技术大学 Incremental aggregate counting and query methods and query system for data stream
US10318533B2 (en) * 2013-02-15 2019-06-11 Telefonaktiebolaget Lm Ericsson (Publ) Optimized query execution in a distributed data stream processing environment
CN105872432B (en) * 2016-04-21 2019-04-23 天津大学 The apparatus and method of quick self-adapted frame rate conversion
CN108462605B (en) * 2018-02-06 2022-03-15 国家电网公司 Data prediction method and device
US20210034586A1 (en) * 2019-08-02 2021-02-04 Timescale, Inc. Compressing data in database systems using hybrid row/column storage representations
CN111090688B (en) * 2019-12-23 2023-07-28 北京奇艺世纪科技有限公司 Smoothing processing method and device for time sequence data
CN111970584A (en) * 2020-07-08 2020-11-20 国网宁夏电力有限公司电力科学研究院 Method, device and equipment for processing data and storage medium
CN113191905A (en) * 2021-04-23 2021-07-30 北京金堤征信服务有限公司 Shareholder data processing method and device, electronic equipment and readable storage medium
CN113138960A (en) * 2021-05-17 2021-07-20 毕晓柏 Data storage method and system based on cloud storage space adjustment
CN113408671B (en) * 2021-08-18 2021-11-16 成都时识科技有限公司 Object identification method and device, chip and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application research on big data in railway track maintenance management systems; Liu Zhenjun; Zhang Lei; Enterprise Technology Development (Issue 08); full text *

Also Published As

Publication number Publication date
CN113850929A (en) 2021-12-28

Similar Documents

Publication Publication Date Title
CN113642633B (en) Method, device, equipment and medium for classifying driving scene data
CN108256431B (en) Hand position identification method and device
US9053433B2 (en) Assisting vehicle guidance over terrain
JP2022505759A (en) Methods and equipment for testing driver assistance systems
CN116108717B (en) Traffic transportation equipment operation prediction method and device based on digital twin
WO2020007589A1 (en) Training a deep convolutional neural network for individual routes
CN113343461A (en) Simulation method and device for automatic driving vehicle, electronic equipment and storage medium
Spannaus et al. AUTOMATUM DATA: Drone-based highway dataset for the development and validation of automated driving software for research and commercial applications
JP2023540613A (en) Method and system for testing driver assistance systems
CN111338232B (en) Automatic driving simulation method and device
EP2405383A1 (en) Assisting with guiding a vehicle over terrain
CN110986994B (en) Automatic lane change intention marking method based on high-noise vehicle track data
CN115830399A (en) Classification model training method, apparatus, device, storage medium, and program product
CN112447060A (en) Method and device for recognizing lane and computing equipment
del Egio et al. Self-driving a car in simulation through a CNN
CN113850929B (en) Display method, device, equipment and medium for processing annotation data stream
Kim et al. OPEMI: Online Performance Evaluation Metrics Index for Deep Learning-Based Autonomous Vehicles
CN113085861A (en) Control method and device for automatic driving vehicle and automatic driving vehicle
CN106097751A (en) Vehicle travel control method and device
CN115985124B (en) Vehicle running control method and device, storage medium and electronic device
US20230025579A1 (en) High-definition mapping
WO2023017652A1 (en) Information processing device, information processing method, server device, vehicle device, and information processing program
EP4338059A1 (en) Tools for performance testing autonomous vehicle planners
CN117095338A (en) Wireless parking method based on road video identification and related device
EP4338054A1 (en) Tools for performance testing autonomous vehicle planners

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231031

Address after: Room 908, Building A2, 23 Spectral Middle Road, Huangpu District, Guangzhou City, Guangdong Province, 510000

Patentee after: Guangzhou Yuji Technology Co.,Ltd.

Address before: Room 687, No. 333, jiufo Jianshe Road, Zhongxin Guangzhou Knowledge City, Guangzhou, Guangdong 510555

Patentee before: GUANGZHOU WENYUAN ZHIXING TECHNOLOGY Co.,Ltd.