CN112182289B - Data deduplication method and device based on Flink framework


Info

Publication number: CN112182289B
Application number: CN202011077804.3A
Authority: CN (China)
Other versions: CN112182289A (Chinese-language publication)
Inventor: 吕泽
Assignee (original and current): Wuhan Zhongke Tongda High New Technology Co Ltd
Prior art keywords: vehicle, data, video, video data, information
Legal status: Active (granted)

Classifications

    • G06F16/71 — Information retrieval of video data: indexing; data structures therefor; storage structures
    • G06F16/783 — Retrieval of video data using metadata automatically derived from the content
    • G08G1/0175 — Traffic control systems: identifying vehicles by photographing them, e.g. when violating traffic rules
    • Y02T10/40 — Climate change mitigation in road transport: engine management systems


Abstract

The application provides a data deduplication method and device based on the Flink framework. After obtaining a data execution request carrying a data execution identifier, the method obtains, according to that identifier, the corresponding data processing model in the Flink framework. The model then judges whether the same vehicle video clip appears in the pieces of video data carried by the request; when it does, the duplicate clips are removed to obtain deduplicated video data. Because duplicate data is removed before the video data collected by front-end devices is processed further, the amount of data to be processed is reduced and data processing efficiency is improved.

Description

Data deduplication method and device based on the Flink framework
Technical Field
The application relates to the technical field of intelligent traffic, and in particular to a data deduplication method and device based on the Flink framework.
Background
With the progress of technology, snapshot devices have been deployed in many areas to deter illegal events: front-end cameras are installed on roads, in residential communities, at intersections, and so on. These devices collect information about each area, and violation data is derived from that information. Because the number of snapshot devices is large, the collected data can run into hundreds of millions of records. When this information is processed to extract violation data, every record must be handled; since the collected information contains identical or similar data, processing takes a long time and data processing efficiency is low.
Therefore, the existing data processing process suffers from low processing efficiency caused by duplicate data.
Disclosure of Invention
The embodiment of the application provides a data deduplication method based on the Flink framework, which solves the technical problem that duplicate data makes data processing inefficient in the existing data processing process.
The embodiment of the application provides a data deduplication method based on the Flink framework, comprising the following steps:
acquiring a data execution request carrying a data execution identifier;
acquiring, according to the data execution identifier, the data processing model corresponding to that identifier in the Flink framework;
and judging, according to the data processing model, whether the same vehicle video clip exists in the pieces of video data carried by the data execution request, and when it does, deduplicating the video data to obtain deduplicated video data.
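The three claimed steps can be sketched in plain Python. The request shape, the model registry, and the stand-in model below are hypothetical illustrations, not the patent's implementation or any Flink API:

```python
# Illustrative sketch of the claimed flow; all names are assumptions.
def handle_request(request, model_registry):
    """Look up a model by the data execution identifier, then deduplicate."""
    model = model_registry[request["execution_id"]]   # step 2: model lookup
    return model(request["videos"])                   # step 3: judge + dedupe

def drop_exact_duplicates(videos):
    """A trivial stand-in 'data processing model': drop exact duplicate
    segments while preserving first-seen order."""
    seen, kept = set(), []
    for segment in videos:
        if segment not in seen:
            seen.add(segment)
            kept.append(segment)
    return kept

registry = {"dedup-v1": drop_exact_duplicates}
request = {"execution_id": "dedup-v1", "videos": ["a", "b", "a", "c"]}
print(handle_request(request, registry))  # ['a', 'b', 'c']
```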
The embodiment of the application also provides a data deduplication device based on the Flink framework, comprising:
a first acquisition module, configured to acquire a data execution request carrying a data execution identifier;
a second acquisition module, configured to acquire, according to the data execution identifier, the data processing model corresponding to that identifier in the Flink framework;
and a deduplication module, configured to judge, according to the data processing model, whether the same vehicle video clip exists in the pieces of video data carried by the data execution request, and when it does, to deduplicate the video data and obtain deduplicated video data.
The embodiment of the application also provides a server comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor; when executing the program, the processor implements the steps of the above data deduplication method based on the Flink framework.
The embodiment of the application also provides a computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the above data deduplication method based on the Flink framework.
The beneficial effects are as follows. The embodiment of the application provides a data deduplication method and device based on the Flink framework. After obtaining a data execution request carrying a data execution identifier, the method obtains the corresponding data processing model in the Flink framework according to that identifier; it then judges whether the same vehicle video clip exists in the pieces of video data carried by the request, and when it does, removes the duplicate clips to obtain deduplicated video data. In this way, when the video data collected by front-end devices is processed, the duplicate data can be removed first, which reduces the amount of data to be processed and improves data processing efficiency.
Drawings
Technical solutions and other advantageous effects of the present application will be made apparent from the following detailed description of specific embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a schematic view of a scenario of an intelligent traffic system according to an embodiment of the present application.
Fig. 2 is a flow chart of a data deduplication method based on the Flink framework according to an embodiment of the present application.
Fig. 3 is a first schematic diagram of a vehicle driving process in the data deduplication method based on the Flink framework according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a processing manner of video data in the data deduplication method based on the Flink framework according to an embodiment of the present application.
Fig. 5 is a second schematic diagram of a vehicle driving process in the data deduplication method based on the Flink framework according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a data deduplication device based on the Flink framework according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Please refer to fig. 1, which is a schematic view of a scenario of an intelligent traffic system provided in an embodiment of the present application. The system may include devices and servers connected to each other through an internet formed of various gateways and the like (not described here). The devices include a front-end device 11 and the like; the servers include a storage server 12, a processing server 13, and a communication server (the communication server is not shown in fig. 1), where:
the front-end device 11 includes, but is not limited to, an embedded high-definition camera, an industrial personal computer, a high-definition camera, etc. for collecting data of vehicles and pedestrians passing through the front-end device, where the data collection includes, but is not limited to, license plate numbers (numbers may be fake license plates or fake license plates), license plate types (blue bottom license plates of private cars, yellow bottom license plates of trucks, etc.), and illegal behaviors of pedestrians.
The servers include local servers and/or remote servers. The storage server 12, the processing server 13, and the communication server may all be deployed on a local server, or may be partially or fully deployed on remote servers.
The processing server 13 may acquire a data execution request carrying a data execution identifier; acquire, according to the identifier, the corresponding data processing model in the Flink framework; and judge, according to the model, whether the same vehicle video clip exists in the pieces of video data carried by the request, deduplicating the video data when it does, to obtain deduplicated video data.
It should be noted that the system scenario shown in fig. 1 is only an example. The servers and scenario described in the embodiments of the present application are intended to describe the technical solutions more clearly and do not limit them; those skilled in the art will appreciate that, as the system evolves and new service scenarios appear, the technical solutions provided herein are equally applicable to similar technical problems. The embodiments are described in detail below; the order of description is not intended as a limitation on preferred embodiments.
Fig. 2 is a flow chart of a data deduplication method based on the Flink framework according to an embodiment of the present application. Referring to fig. 2, the method includes the following steps:
201: acquiring a data execution request carrying a data execution identifier.
In one embodiment, when a data execution request carrying a data execution identifier is acquired, the video data carried by the request also needs to be acquired. This video data is obtained by the server processing a real-time monitoring video stream collected from a front-end device. To keep the data analysis as close to real time as possible, the acquired video data may be the data from the previous day; alternatively, the storage server may first accumulate video data over several days and then deduplicate those several days of data together. When video data is acquired, the storage server may first receive the real-time monitoring video stream sent by the front-end device, and then either store the stream directly or process it into video data and store that; the processing server then receives the data execution request carrying the data execution identifier and performs deduplication. In this process, the storage server may also be configured so that, whenever it stores new video data, it automatically sends a data execution request carrying the data execution identifier to the processing server.
In one embodiment, when the real-time monitoring video stream is processed into video data, the stream may be used directly as the video data, or it may be decoded and re-encoded to obtain video data suitable for deduplication.
In one embodiment, the real-time monitoring video stream includes video captured by a front-end device such as a high-definition camera, covering both vehicles and pedestrians passing the device. The stream can also record the corresponding time and the address of the front-end device, so that when an illegal behavior appears in the stream, its time and address are recorded and can be used when the behavior is handled later.
In one embodiment, the data execution identifier is the identifier of a data execution request. When data is processed, the identifier must be determined first, so that the required processing flow (such as deduplication) can be selected and the video data carried by the request can be located; after the identifier is obtained, the corresponding processing is carried out according to it.
202: and acquiring a data processing model corresponding to the data execution identifier in the Flink frame according to the data execution identifier.
In one embodiment, the Flink framework is a distributed processing engine that can process both bounded streams (offline data) and unbounded streams (real-time data). It offers low latency, high throughput, strong real-time behavior, and accurate analysis, so using it for deduplication improves deduplication efficiency and thus data processing efficiency.
In one embodiment, a number of models exist in the Flink framework, including various types of data processing models and data analysis models. When video data is deduplicated, the corresponding data processing model must be determined from the data execution identifier: the identifier can carry a data execution type, from which the matching type of data processing model is determined, and the concrete model is then selected from that type to deduplicate the video data.
In one embodiment, when determining the corresponding data processing model in the Flink framework from the data execution identifier, a Flink identifier corresponding to the Flink framework may first be determined from the data execution identifier, and the model is then acquired from the framework according to that Flink identifier. The steps are: acquiring, according to the data execution identifier, the Flink identifier corresponding to the Flink framework and the data execution identifier; and acquiring the data processing model from the Flink framework according to the Flink identifier.
In one embodiment, a Flink identifier is set for each model or each type of model, so that when a model is called, the corresponding Flink identifier can be found from the data execution identifier, and the corresponding data processing model can then be found from the Flink identifier.
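The two-level lookup described above (data execution identifier → Flink identifier → model) might look like the following plain-Python sketch; the identifier values and the two registries are invented for illustration and do not correspond to any real Flink API:

```python
# Hypothetical registries: execution identifiers map to Flink identifiers,
# which map to registered processing models.
EXEC_TO_FLINK = {"dedup-video": "flink-job-17"}
FLINK_TO_MODEL = {"flink-job-17": lambda videos: sorted(set(videos))}

def get_model(execution_id):
    flink_id = EXEC_TO_FLINK[execution_id]   # step 1: find the Flink identifier
    return FLINK_TO_MODEL[flink_id]          # step 2: model from the framework

model = get_model("dedup-video")
print(model(["b", "a", "b"]))  # ['a', 'b']
```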
In one embodiment, before the data processing model is used, it must be trained so that its processing results are accurate. An initial data processing model may be set first and then trained with video data to obtain the trained data processing model, so that it deduplicates video data more accurately. The steps are: setting an initial data processing model; acquiring test video data, the test video segments corresponding to the test video data, and the test deduplicated video segments corresponding to the test video data; processing the test video segments with the initial data processing model to obtain model deduplicated video segments; and correcting the initial data processing model according to the test deduplicated video segments and the model deduplicated video segments to obtain the data processing model. Specifically, the test video data may be historical video data that has been processed by other means or manually to obtain the corresponding test video segments and test deduplicated video segments. After the test video data is obtained, the initial data processing model processes it to produce model deduplicated video segments; if these differ from the test deduplicated video segments obtained manually or by other means, the initial model is corrected and trained again with the test video data, until the model deduplicated video segments it produces are identical to the test deduplicated video segments. Training is then complete and the data processing model is obtained.
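A minimal sketch of the correct-until-it-matches training loop described above, with video segments reduced to numbers and the "model" reduced to a single tunable similarity threshold; the encoding and every name here are assumptions made purely for illustration:

```python
def dedupe_with_threshold(segments, threshold):
    """Stand-in model: keep a segment only if it is farther than `threshold`
    from every segment already kept (near-duplicates are dropped)."""
    kept = []
    for seg in segments:
        if all(abs(seg - k) > threshold for k in kept):
            kept.append(seg)
    return kept

def train(test_segments, reference_output, thresholds):
    """Try candidate corrections until the model's deduplicated output
    matches the reference (the 'test deduplicated video segments')."""
    for t in thresholds:
        if dedupe_with_threshold(test_segments, t) == reference_output:
            return t
    return None

# 10 and 11 represent "the same vehicle clip" seen by two cameras.
best = train([10, 11, 50], [10, 50], thresholds=[0, 1, 2])
print(best)  # 1
```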
In one embodiment, considering that the trained model may never reach perfect deduplication accuracy, a preset deduplication accuracy may be set during training: once the initial data processing model reaches it, the model is accepted as the data processing model. For example, with a preset deduplication accuracy of 95%, suppose 100 groups of duplicate video segments exist across the video data but the initial model finds only 98 groups; its deduplication accuracy is still higher than 95%, so it can be used as the data processing model.
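The acceptance check with a preset deduplication accuracy can be expressed directly; the 95% threshold and the 98-of-100 figures come from the example above:

```python
def accept_model(found_groups, total_groups, preset_accuracy=0.95):
    """Accept the candidate model if the fraction of known duplicate groups
    it found reaches the preset deduplication accuracy."""
    return found_groups / total_groups >= preset_accuracy

print(accept_model(98, 100))  # True: 98% >= 95%
```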
In one embodiment, after the data processing model is trained, it is used to deduplicate video data. Because the model may still miss some duplicate video segments, if duplicates are later found in the deduplicated video data, those cases can be used to continue training the model, so that its deduplication accuracy keeps improving.
203: judging, according to the data processing model, whether the same vehicle video clip exists at the same time in the pieces of video data carried by the data execution request, and deduplicating the video data when it does, to obtain deduplicated video data.
In one embodiment, only identical vehicle video clips that appear at the same time are deduplicated; identical vehicle video clips appearing at different times form the track of a vehicle. Therefore, when judging whether video data contains duplicates, it must first be determined that the vehicle video clips belong to the same time period; clips from different times are not duplicates. In the following embodiments, judgments about identical vehicle video clips are understood to apply to clips of the same time period.
In one embodiment, "the same time" may mean a single moment or a period, for example the moment 20:00:00 or the period from 20:00:00 to 20:01:00. If, for instance, only one vehicle is driving during a certain period, the video data need not be split into individual moments and deduplicated moment by moment, which reduces the amount of data processing.
In one embodiment, when deduplicating video data, the video data is first divided into a plurality of vehicle video clips, which can then be grouped by time so that the clips within each group are deduplicated and duplicate data is removed. The steps are: processing each piece of video data in time order to obtain the vehicle video clips of each piece of video data in each time interval; judging, from these clips, whether the same vehicle video clip exists in the pieces of video data of the same time interval; and when it does, deleting the repeated clips among the identical vehicle video clips to obtain deduplicated video data.
Specifically, as shown in fig. 3, assume a vehicle 32 drives on a two-way six-lane urban road (fig. 3 shows only the three lanes of one direction) and passes an intersection in the middle lane. A first front-end camera 311, a second front-end camera 312, and a third front-end camera 313 are arranged over the three lanes. When the vehicle 32 passes, all three cameras film its driving process, but at the same moment the first front-end camera 311 on the left captures the license plate information, vehicle color information, and left-side information of the vehicle 32; the second front-end camera 312 captures the license plate information, vehicle color information, and front information; and the third front-end camera 313 captures the license plate information, vehicle color information, and right-side information. Since the sides of an ordinary vehicle carry no key information, the vehicle information obtainable from the three cameras' video data can be considered the same, and the duplicate data should be removed.
As shown in fig. 4, assume video data 411, 412, and 413 are captured by the first front-end camera 311, the second front-end camera 312, and the third front-end camera 313 respectively. Video data 411 contains first imaging information 3111 at 20:00:00 through first imaging information 3112 at 21:00:00; video data 412 contains second imaging information 3121 at 20:00:00 through second imaging information 3122 at 21:00:00; and video data 413 contains third imaging information 3131 at 20:00:00 through third imaging information 3132 at 21:00:00. After each piece of video data is acquired, it is divided into a plurality of clips in time order. In fig. 4 the division is by time point, but it may also be by time interval. Clips of the same time interval or time point are then placed in the same group: the first imaging information 3111, second imaging information 3121, and third imaging information 3131 of 20:00:00 are placed in a first group 414, and the first imaging information 3112, second imaging information 3122, and third imaging information 3132 of 21:00:00 are placed in a second group 415. It can then be judged whether the same vehicle video clip exists within the video data of each time interval. For example, in the first group 414 it is judged whether the three clips 3111, 3121, and 3131 of 20:00:00 are identical, and identical clips are deleted to obtain deduplicated data: if the first imaging information 3111 and the second imaging information 3121 of 20:00:00 are the same, one of them is deleted, thereby removing the duplicate vehicle video clip and obtaining the deduplicated video data.
It should be noted that in fig. 4 the video data is divided at time points, so the resulting vehicle video clips are still frames; the video data may also be divided at time intervals, in which case the clips are continuous footage. The individual pictures in fig. 4 do not mean the video data must be divided into still frames; it may equally be divided into a number of continuous vehicle video clips.
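The grouping-and-deduplication walk-through of fig. 4 can be sketched as follows. The tuple encoding of a clip as (interval, camera, content) is an assumption made for illustration, and "identical content" stands in for whatever clip comparison the data processing model actually performs:

```python
from collections import defaultdict

def deduplicate_by_interval(clips):
    """clips: list of (interval, camera_id, content) tuples."""
    groups = defaultdict(list)
    for interval, camera_id, content in clips:      # bucket clips by time
        groups[interval].append((camera_id, content))
    kept = []
    for interval in sorted(groups):                 # dedupe within each bucket
        seen = set()
        for camera_id, content in groups[interval]:
            if content not in seen:                 # same vehicle clip already kept?
                seen.add(content)
                kept.append((interval, camera_id, content))
    return kept

clips = [
    ("20:00:00", "camera-311", "plate-A, red, left side"),
    ("20:00:00", "camera-312", "plate-A, red, left side"),   # duplicate of 311's clip
    ("20:00:00", "camera-313", "plate-A, red, right side"),
    ("21:00:00", "camera-311", "plate-A, red, left side"),   # later interval: kept as track
]
print(deduplicate_by_interval(clips))  # 3 clips survive; 312's duplicate is dropped
```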
In one embodiment, when judging whether the same vehicle video clip exists in the pieces of video data of the same time interval, every clip is retained if no identical clip is found.
In one embodiment, judging whether the same vehicle video clip exists in the video data within the same time interval further comprises: first extracting the information of a vehicle from one video data, then searching the other video data for information of that vehicle, and judging whether the similarity of the two pieces of vehicle information is greater than a preset similarity; if so, the two vehicle video clips carrying the two pieces of vehicle information are judged to be the same vehicle video clip. The steps include: acquiring first vehicle information in a vehicle video clip of one video data according to the vehicle video clips in each video data; searching for second vehicle information in the other vehicle video clips according to the first vehicle information; judging, according to the first vehicle information and the second vehicle information, whether their similarity is greater than the preset similarity; and, when it is, determining that the vehicle video clip containing the first vehicle information and the vehicle video clip containing the second vehicle information are the same vehicle video clip.

Specifically, as shown in fig. 5, assume a two-way four-lane urban road on which vehicles in lane one and lane two travel from top to bottom and vehicles in lane three and lane four travel from bottom to top. A group of front-end cameras is arranged on each lane, each camera shooting in one direction only: lane one carries front-end camera one 521 and front-end camera five 525, lane two carries front-end camera two 522 and front-end camera six 526, lane three carries front-end camera three 523 and front-end camera seven 527, and lane four carries front-end camera four 524 and front-end camera eight 528. Front-end cameras one 521 to four 524 shoot from top to bottom, and front-end cameras five 525 to eight 528 shoot from bottom to top. When vehicle one 511, vehicle two 512, vehicle three 513 and vehicle four 514 travel on lanes one to four respectively, every camera records the driving process of vehicles one 511 to four 514. For example, before vehicle one 511 reaches the intersection, front-end cameras five 525 to eight 528 can capture its head or side information; after vehicle one 511 passes the intersection, front-end cameras one 521 to four 524 can capture its tail or side information. Accordingly, the information of vehicles one 511 to four 514 is acquired according to the driving direction of each vehicle and the shooting direction of each front-end camera, and each front-end camera produces different video data. After the video data is split into vehicle video clips, the first vehicle information in the vehicle video clips of each video data can be acquired first.

For example, if a vehicle video clip in the video data of front-end camera one 521 contains the vehicle information of each of vehicles one 511 to four 514, the information of vehicle one 511 in front-end camera one 521 may be taken as the first vehicle information, and the second vehicle information of vehicle one 511 may then be sought in the other vehicle video clips; for instance, information of vehicle one 511 also exists in front-end camera two 522. It is then judged whether the similarity of the first and second vehicle information exceeds the preset similarity. Suppose the first vehicle information comprises the license plate number of vehicle one 511, its running speed, its color, and the result of whether it pressed a lane line, while the second vehicle information comprises the license plate number of vehicle one 511, its running speed, its color, and a side image of vehicle one 511. With the preset similarity set to, say, 90%, the similarity between the first and second vehicle information is greater than 90%, and the vehicle video clip containing the first vehicle information and the vehicle video clip containing the second vehicle information are determined to be the same vehicle video clip.
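The field-by-field comparison above can be sketched as follows. This is a minimal illustration under stated assumptions: the similarity measure is taken as a simple matching-fields ratio and the field names are hypothetical, whereas the patent leaves the actual similarity computation to the trained data processing model.

```python
def info_similarity(info_a, info_b):
    """Fraction of the fields present in both records whose values agree
    (hypothetical stand-in for the model's similarity judgement)."""
    shared = set(info_a) & set(info_b)
    if not shared:
        return 0.0
    matching = sum(1 for field in shared if info_a[field] == info_b[field])
    return matching / len(shared)

def is_same_clip(info_a, info_b, preset_similarity=0.9):
    # Two clips are judged the same when the similarity of their vehicle
    # information exceeds the preset similarity (90% in the example)
    return info_similarity(info_a, info_b) > preset_similarity

# Illustrative records for vehicle one 511 as seen by cameras 521 and 522
first_info = {"plate": "A12345", "speed_kmh": 42, "color": "red", "line_pressing": False}
second_info = {"plate": "A12345", "speed_kmh": 42, "color": "red", "side_image_id": 7}
```

Here the three fields the two records share (plate, speed, color) all agree, so the similarity over shared fields is 100% and the clips are judged the same.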
It should be noted that before the data processing model is used for deduplication, it is trained to judge the similarity of vehicle information: a plurality of vehicle video clips are input, and the model learns to judge the similarity of the vehicle information they contain and hence whether the clips are the same, so that the similarity can be judged accurately during deduplication. The preset similarity can be set according to requirements.
In one embodiment, when judging whether the similarity between the first vehicle information and the second vehicle information is greater than the preset similarity, it is considered that one vehicle video clip may contain the information of several vehicles. If, for one vehicle, the similarity between its information in two clips is greater than the preset similarity, while for another vehicle it is not, the differing vehicle information can be taken from one of the clips and the vehicle information of the two clips merged, yielding a single vehicle video clip without repeated vehicle information. For example, the clips of front-end camera one 521 and front-end camera two 522 in fig. 5 both contain the information of vehicle one 511 and vehicle two 512; the similarity of the information of vehicle one 511 between the two clips is greater than the preset similarity, but that of vehicle two 512 is not. The information of vehicle two 512 can then be taken from the clip of front-end camera two 522 and combined with the clip of front-end camera one 521, with the repeated information of vehicle one 511 removed. Alternatively, each vehicle may be divided into a separate video clip, so that every clip can be judged individually and repeated clips removed.
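Merging two clips with partially overlapping vehicle information might look like the following sketch, where `same_vehicle` is a hypothetical predicate (here plain plate equality) standing in for the similarity judgement, and the record names are illustrative:

```python
def merge_clips(clip_a, clip_b, same_vehicle):
    """Combine the per-vehicle records of two clips, keeping a single
    copy of vehicles judged the same and both copies otherwise."""
    merged = dict(clip_a)
    for vid, info in clip_b.items():
        # Add a record from clip B only if it matches no vehicle already kept
        if not any(same_vehicle(info, kept) for kept in merged.values()):
            merged[vid] = info
    return merged

same_plate = lambda a, b: a["plate"] == b["plate"]

# Camera 521 and camera 522 both saw vehicle 511; their records of
# vehicle 512 differ enough to be judged distinct
cam521 = {"v511": {"plate": "P511"}, "v512": {"plate": "P512a"}}
cam522 = {"v511b": {"plate": "P511"}, "v512b": {"plate": "P512b"}}
```

Applying `merge_clips(cam521, cam522, same_plate)` drops the repeated record of vehicle 511 while keeping the differing information from both clips.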
In one embodiment, when judging whether the similarity between the first vehicle information and the second vehicle information is greater than the preset similarity, it is also considered that, to avoid deleting vehicle key information, two vehicle video clips cannot be treated as the same clip merely because the similarity exceeds the preset value: only when both the first vehicle information and the second vehicle information contain vehicle key information are the clips treated as the same vehicle video clip. The steps include: judging whether vehicle key information exists in the first vehicle information and the second vehicle information when their similarity is greater than the preset similarity; determining that the vehicle video clip containing the first vehicle information and the vehicle video clip containing the second vehicle information are different vehicle video clips when no vehicle key information exists in them; and determining that they are the same vehicle video clip when vehicle key information exists in both.

Specifically, suppose neither the first vehicle information nor the second vehicle information includes vehicle key information; for example, each comprises the color of a vehicle, a scratch on a vehicle, a line-pressing behavior and an overspeed behavior. Their similarity is then greater than the preset similarity, but since it cannot be determined that they describe the same vehicle, the two clips cannot be treated as the same vehicle video clip. Considering, however, the case where only that one vehicle appears in the driving record, the vehicle key information of that vehicle may be added to the first and second vehicle information, after which it is judged again whether the two clips are the same vehicle video clip. Suppose instead that the first vehicle information includes vehicle key information while the second does not: the two clips cannot then be treated as the same vehicle video clip, unless only that vehicle appears during the time period in question or all the other vehicle information is identical, in which case, in view of actual demands, the two clips may be determined to be the same vehicle video clip.
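The key-information rule can be sketched as below. The field name `plate` and the precomputed `similarity` score are illustrative assumptions; in the described method the similarity would come from the data processing model:

```python
def judged_same(info_a, info_b, similarity, preset=0.9, key_fields=("plate",)):
    """Apply the key-information rule: a high similarity alone is not
    enough, both records must carry a piece of vehicle key information
    (e.g. a license plate) before the clips are treated as the same."""
    if similarity <= preset:
        return False

    def has_key(info):
        return any(info.get(field) for field in key_fields)

    return has_key(info_a) and has_key(info_b)
```

Two records with matching colors but no plates are thus kept as different clips even at high similarity, which is exactly the safeguard against deleting a clip of a different vehicle.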
The vehicle key information refers to vehicle information that identifies a unique vehicle, for example a license plate. The case of a fake license plate is also considered: although a fake plate cannot be traced to a unique registered vehicle, within the video data for a given time period the fake plate still corresponds to the unique vehicle that used it, so during deduplication a fake plate can likewise serve as vehicle key information.
In one embodiment, when the same vehicle video clips are determined to exist in the same time interval, the duplicate vehicle video clips to delete must be chosen from among them. They can be chosen according to the effective data amount of each clip: the clip with the largest effective data amount is retained and the others are deleted. The steps include: acquiring the effective data amount of each of the same vehicle video clips; determining, according to these amounts, the vehicle video clip with the largest effective data amount; and retaining that clip while deleting the other vehicle video clips, to obtain the deduplicated video data. Specifically, suppose one of the same vehicle video clips contains the license plate number of the vehicle, a dent on its front end, a scratch on its side, the vehicle type, the state of the persons in the vehicle (for example, whether a seat belt is fastened or a phone call is being made) and the vehicle's speed, while another contains the license plate number, the front-end dent, the state of the persons in the vehicle, the speed and the vehicle's color. Since the side scratch and the vehicle type are more important than the color, the first clip is taken as the one with the largest effective data amount, and the other vehicle video clips are deleted, yielding the deduplicated video data.
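Selecting the clip with the largest effective data amount might be sketched as follows. The field weights are hypothetical, chosen only to mirror the example's ordering (side scratch and vehicle type weighted above color); the patent does not fix a weighting scheme:

```python
FIELD_WEIGHT = {
    # Hypothetical importance weights for a clip's recoverable fields
    "plate": 3, "side_scratch": 3, "vehicle_type": 3,
    "front_dent": 2, "occupant_state": 2,
    "speed": 1, "color": 1,
}

def effective_amount(fields):
    # Unknown fields default to weight 1
    return sum(FIELD_WEIGHT.get(field, 1) for field in fields)

def keep_richest(clips):
    """Retain the clip with the largest effective data amount; the rest
    are the duplicates to delete."""
    best = max(clips, key=lambda clip: effective_amount(clip["fields"]))
    return best, [clip for clip in clips if clip is not best]

clip_a = {"name": "cam521", "fields": ["plate", "front_dent", "side_scratch",
                                       "vehicle_type", "occupant_state", "speed"]}
clip_b = {"name": "cam522", "fields": ["plate", "front_dent", "occupant_state",
                                       "speed", "color"]}
```

With these weights `clip_a` scores 14 against 9 for `clip_b`, so it is retained, matching the worked example.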
In one embodiment, considering that a vehicle video clip needs sufficient definition for the vehicle information to be determined and conveniently viewed, the clips to delete can also be chosen according to the definition of each clip. The steps include: acquiring the definition of each of the same vehicle video clips; determining, according to these values, the vehicle video clip with the highest definition; and retaining that clip while deleting the other vehicle video clips, to obtain the deduplicated video data. Specifically, suppose two identical vehicle video clips each contain the vehicle's license plate number, a front-end dent, the vehicle's speed and a side image of the vehicle, but the side image in one clip is blurred and of lower definition while the side image in the other is clearer. The clip of lower definition is deleted and the clip with the highest definition is retained, yielding the deduplicated video data.
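A toy stand-in for the definition comparison is sketched below. The mean absolute horizontal pixel difference used here is only an illustrative proxy (real systems often use metrics such as the variance of the Laplacian), and the tiny frames are fabricated:

```python
def sharpness(frame):
    """Mean absolute horizontal pixel difference: a blurred frame has
    smaller local intensity changes, so it scores lower."""
    diffs = [abs(row[i + 1] - row[i]) for row in frame for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def keep_sharpest(clips):
    # Retain the clip whose frame scores highest; the rest are duplicates
    best = max(clips, key=lambda clip: sharpness(clip["frame"]))
    return best, [clip for clip in clips if clip is not best]

sharp_clip = {"name": "cam521", "frame": [[0, 255, 0, 255]]}      # strong edges
blurred_clip = {"name": "cam522", "frame": [[100, 110, 105, 100]]}  # soft gradients
```

The high-contrast frame scores far above the soft one, so `keep_sharpest` retains it and marks the blurred clip for deletion.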
In one embodiment, after the duplicate video clips are removed, the deduplicated video data and the removed duplicate video data may both be stored, so that each can be viewed later; in particular, the duplicate video data may need to be checked afterwards to confirm that no vehicle key information was deleted during deduplication. The steps include: acquiring the deduplicated video data and the duplicate video data corresponding to it; assigning the same data correspondence identifier to the deduplicated video data and the duplicate video data; and storing the deduplicated video data and the duplicate video data in an HBase system according to the data correspondence identifier.
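Tying the kept clip to its removed duplicates under one shared identifier can be sketched as below; a plain dictionary stands in for the HBase table, and the identifier scheme is an assumption:

```python
import uuid

def store_with_shared_id(store, dedup_clip, duplicate_clips):
    """Assign one data correspondence identifier to the retained clip
    and the duplicates removed from it, so either side can be looked up
    later to check the accuracy of the deduplication."""
    pair_id = uuid.uuid4().hex
    store[pair_id] = {"dedup": dedup_clip, "duplicates": duplicate_clips}
    return pair_id
```

In a real deployment the `store` mapping would be replaced by writes to the HBase table keyed by the shared identifier.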
It should be noted that HBase is a column-oriented storage system: when data is read, only the required columns need to be fetched rather than whole rows, which reduces the storage space required and lets the same storage space hold more data. For example, suppose a row-style scheme stores 20 fields per record: even when some of the 20 fields are empty or unreadable, those fields are still written (padded with 0), so the record occupies unnecessary space even though it does not actually have 20 fields. When the same record is stored in HBase and, say, only 14 of the 20 fields are normal, HBase stores only those 14 fields, so the required storage space is reduced and a larger amount of data can be stored in the same space.
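The space saving described above can be illustrated by contrasting a fixed 20-slot row with sparse, column-style cells; the schema and record below are hypothetical:

```python
SCHEMA = [f"field_{i:02d}" for i in range(20)]  # hypothetical 20-field schema

def fixed_row(record):
    """Row-style storage: all 20 slots are written, with empty or
    unreadable fields padded with 0."""
    return [record.get(field, 0) for field in SCHEMA]

def sparse_cells(record):
    """Column-style (HBase-like) storage: only populated cells kept."""
    return {field: value for field, value in record.items() if value}

# Only 14 of the 20 fields are readable for this record
record = {f"field_{i:02d}": f"v{i}" for i in range(14)}
```

The fixed row always occupies 20 slots (6 of them padding), while the sparse representation stores only the 14 populated cells.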
The embodiment of the application thus provides a data deduplication method based on the Flink framework, which determines whether vehicle video clips are identical by judging the vehicle information in each clip, deletes duplicate clips according to their effective data amount or definition when identical clips exist, and stores both the deduplicated data and the duplicate video data. Removing the repeated data reduces the amount of data to be processed and improves data processing efficiency, while the stored deduplicated and duplicate video data remain available for later checks on the accuracy of the deduplication.
Correspondingly, fig. 6 is a schematic structural diagram of a data deduplication device based on the Flink framework according to an embodiment of the present application. Referring to fig. 6, the data deduplication device based on the Flink framework includes the following modules:
a first obtaining module 601, configured to obtain a data execution request carrying a data execution identifier;
a second obtaining module 602, configured to obtain, according to the data execution identifier, a data processing model corresponding to the data execution identifier in the Flink framework;
and the deduplication module 603 is configured to determine, according to the data processing model, whether the same vehicle video segment exists in each video data carried by the data execution request at the same time, and perform deduplication on the video data when the same vehicle video segment exists in each video data at the same time, so as to obtain the deduplication video data.
In one embodiment, the deduplication module 603 is configured to process each video data in time sequence, so as to obtain a vehicle video clip in each video data in each time interval; judging whether the same vehicle video clips exist in each video data in the same time interval according to the vehicle video clips in each video data; and deleting repeated vehicle video clips in the same vehicle video clips when the same vehicle video clips exist in all video data in the same time interval, so as to obtain duplicate removal video data.
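The first step of the deduplication module, processing each video data in time sequence into per-interval clips, amounts to bucketing clips by time interval so that only clips in the same interval are compared. A sketch, with hypothetical timestamps and interval length:

```python
from collections import defaultdict

def group_by_interval(clips, interval_s=10):
    """Bucket clips from all cameras by the time interval they fall in,
    so duplicate detection only compares clips within one interval."""
    buckets = defaultdict(list)
    for clip in clips:
        buckets[clip["t"] // interval_s].append(clip)
    return dict(buckets)

clips = [{"cam": 521, "t": 3}, {"cam": 522, "t": 7}, {"cam": 521, "t": 12}]
```

The clips at t=3 and t=7 land in the same bucket and are candidates for deduplication; the clip at t=12 falls in the next interval and is never compared against them.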
In one embodiment, the deduplication module 603 is configured to obtain, according to the vehicle video clips in each video data, first vehicle information in the vehicle video clip in one video data; searching second vehicle information in other vehicle video clips according to the first vehicle information; judging whether the similarity between the first vehicle information and the second vehicle information is larger than a preset similarity or not according to the first vehicle information and the second vehicle information; and when the similarity between the first vehicle information and the second vehicle information is greater than a preset similarity, determining that the vehicle video clip containing the first vehicle information and the vehicle video clip containing the second vehicle information are the same vehicle video clip.
In one embodiment, the deduplication module 603 is configured to determine whether vehicle key information exists in the first vehicle information and the second vehicle information when the similarity between the first vehicle information and the second vehicle information is greater than a preset similarity; determining that a vehicle video clip containing the first vehicle information and a vehicle video clip containing the second vehicle information are different vehicle video clips when no vehicle key information exists in the first vehicle information and the second vehicle information; and when the vehicle key information exists in the first vehicle information and the second vehicle information, determining that the vehicle video clip containing the first vehicle information and the vehicle video clip containing the second vehicle information are the same vehicle video clip.
In one embodiment, the deduplication module 603 is configured to obtain the effective data amount of each of the same vehicle video clips; determine, according to these amounts, the vehicle video clip with the largest effective data amount; and retain that clip while deleting the other vehicle video clips, to obtain the deduplicated video data.
In one embodiment, the deduplication module 603 is configured to obtain sharpness of each of the same vehicle video segments; determining a vehicle video clip with the maximum definition according to the definition of each vehicle video clip; and reserving the vehicle video clips with the maximum definition, and deleting other vehicle video clips to obtain the duplicate removal video data.
In one embodiment, the deduplication module 603 is configured to obtain the deduplicated video data and the duplicate video data corresponding to it; assign the same data correspondence identifier to the deduplicated video data and the duplicate video data; and store the deduplicated video data and the duplicate video data in an HBase system according to the data correspondence identifier.
In one embodiment, the second obtaining module 602 is configured to obtain, through the Flink framework and according to the data execution identifier, a Flink identifier corresponding to the data execution identifier; and acquire the data processing model from the Flink framework according to the Flink identifier.
In one embodiment, the second obtaining module 602 is configured to set an initial data processing model; acquire test video data, the test video clips corresponding to the test video data, and the test deduplicated video clips corresponding to the test video data; process the test video clips with the initial data processing model to obtain model deduplicated video clips; and correct the initial data processing model according to the test deduplicated video clips and the model deduplicated video clips, to obtain the data processing model.
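The correction step might be sketched as a threshold sweep. This toy version assumes the model reduces to a preset similarity applied to precomputed pair scores, which is a deliberate simplification of the actual model training:

```python
def calibrate_preset(test_scores, expected_same, start=0.5, step=0.05):
    """Sweep the preset similarity upward until the model's judgement
    on every test pair matches the expected deduplication labels
    (a toy stand-in for the correction of the initial model)."""
    preset = start
    while preset < 1.0:
        if all((score > preset) == want
               for score, want in zip(test_scores, expected_same)):
            return preset
        preset += step
    return None  # no preset in [start, 1.0) reproduces the labels
```

With one pair that should be merged (score 0.95) and one that should not (score 0.6), the sweep settles on a preset between them.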
Accordingly, embodiments of the present application also provide a server, as shown in fig. 7, which may include Radio Frequency (RF) circuitry 701, memory 702 including one or more computer-readable storage media, input unit 703, display unit 704, sensor 705, audio circuitry 706, wireless fidelity (WiFi, wireless Fidelity) module 707, processor 708 including one or more processing cores, and power supply 709. Those skilled in the art will appreciate that the server architecture shown in fig. 7 is not limiting of the server and may include more or fewer components than shown, or may combine certain portions, or a different arrangement of components. Wherein:
The RF circuit 701 may be used for receiving and transmitting signals during a message or a call, and in particular, after receiving downlink information of a base station, the downlink information is processed by one or more processors 708; in addition, data relating to uplink is transmitted to the base station. The memory 702 may be used to store software programs and modules, and the processor 708 may perform various functional applications and data processing by executing the software programs and modules stored in the memory 702. The input unit 703 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
The display unit 704 may be used to display information input by a user or information provided to the user and various graphical user interfaces of the server, which may be composed of graphics, text, icons, video, and any combination thereof.
The server may also include at least one sensor 705, such as a light sensor, a motion sensor, and other sensors. The audio circuitry 706 includes speakers that may provide an audio interface between the user and the server.
WiFi is a short-range wireless transmission technology; through the WiFi module 707 the server can help the user send and receive e-mail, browse web pages, access streaming media and the like, providing wireless broadband internet access. Although fig. 7 shows the WiFi module 707, it is not an essential part of the server and may be omitted as needed without changing the essence of the application.
The processor 708 is the control center of the server: it connects the various parts of the server through various interfaces and lines, and performs the server's functions and processes data by running or executing the software programs and/or modules stored in the memory 702 and invoking the data stored in the memory 702, thereby monitoring the server as a whole.
The server also includes a power supply 709 (e.g., a battery) for powering the various components, which may be logically connected to the processor 708 by a power management system, such as to perform charge, discharge, and power management functions via the power management system.
Although not shown, the server may further include a camera, a bluetooth module, etc., which will not be described herein. Specifically, in this embodiment, the processor 708 in the server loads executable files corresponding to the processes of one or more application programs into the memory 702 according to the following instructions, and the processor 708 executes the application programs stored in the memory 702, so as to implement the following functions:
acquiring a data execution request carrying a data execution identifier; acquiring, according to the data execution identifier, a data processing model corresponding to the data execution identifier in the Flink framework; and judging, according to the data processing model, whether the same vehicle video clip exists in each video data carried by the data execution request at the same time, and deduplicating the video data when the same vehicle video clip exists in each video data at the same time, to obtain the deduplicated video data.
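The three functions above can be sketched end to end; the registry dict and all identifiers are hypothetical stand-ins for the Flink-side lookup of the data processing model:

```python
def handle_request(request, model_registry):
    """Look up the processing model registered under the request's data
    execution identifier and run it over the carried video data
    (a plain dict stands in for the Flink framework's model lookup)."""
    model = model_registry[request["execution_id"]]
    return model(request["video_data"])

# A trivial deduplication model: drop repeated clip identifiers
model_registry = {"dedup-v1": lambda clips: sorted(set(clips))}
request = {"execution_id": "dedup-v1",
           "video_data": ["clip_a", "clip_b", "clip_a"]}
```

Dispatching by identifier keeps the request path generic: new processing models can be registered against new identifiers without changing the request handling.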
In the foregoing embodiments, each embodiment is described with its own emphasis; for the parts of an embodiment not described in detail, reference may be made to the detailed description above, which is not repeated here.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer readable storage medium having stored therein a plurality of instructions capable of being loaded by a processor to perform the following functions:
acquiring a data execution request carrying a data execution identifier; acquiring, according to the data execution identifier, a data processing model corresponding to the data execution identifier in the Flink framework; and judging, according to the data processing model, whether the same vehicle video clip exists in each video data carried by the data execution request at the same time, and deduplicating the video data when the same vehicle video clip exists in each video data at the same time, to obtain the deduplicated video data.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Wherein the storage medium may include: Read-Only Memory (ROM), Random Access Memory (RAM), magnetic disk, optical disc, and the like.
Because the instructions stored in the storage medium can perform the steps of any method provided in the embodiments of the present application, they can achieve the beneficial effects of any such method; see the previous embodiments for details, which are not repeated here.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
The foregoing describes in detail a data deduplication method and device based on the Flink framework, a server, and a computer-readable storage medium. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand its technical solution and core ideas. Those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features replaced by equivalents, and such modifications and substitutions do not depart the corresponding technical solutions from the scope of the technical solutions of the embodiments of the present application.

Claims (7)

1. A data deduplication method based on a Flink framework, comprising:
acquiring a data execution request carrying a data execution identifier;
acquiring, according to the data execution identifier, a data processing model corresponding to the data execution identifier in the Flink framework;
judging, according to the data processing model, whether the same vehicle video clip exists in each video data carried by the data execution request at the same time, and deduplicating the video data when the same vehicle video clip exists in each video data at the same time, to obtain deduplicated video data; which specifically comprises: processing each video data in time sequence to obtain the vehicle video clips in each video data in each time interval; acquiring first vehicle information in a vehicle video clip of one video data according to the vehicle video clips in each video data; searching for second vehicle information in the other vehicle video clips according to the first vehicle information; judging, according to the first vehicle information and the second vehicle information, whether the similarity between the first vehicle information and the second vehicle information is greater than a preset similarity; judging whether vehicle key information exists in the first vehicle information and the second vehicle information when the similarity between the first vehicle information and the second vehicle information is greater than the preset similarity; determining that the vehicle video clip containing the first vehicle information and the vehicle video clip containing the second vehicle information are different vehicle video clips when no vehicle key information exists in the first vehicle information and the second vehicle information; determining that the vehicle video clip containing the first vehicle information and the vehicle video clip containing the second vehicle information are the same vehicle video clip when vehicle key information exists in the first vehicle information and the second vehicle information; and deleting duplicate vehicle video clips from the same vehicle video clips when the same vehicle video clips exist in each video data in the same time interval, to obtain the deduplicated video data.
2. The data deduplication method based on a Flink framework according to claim 1, wherein the step of deleting duplicate vehicle video clips from the same vehicle video clips when the same vehicle video clips exist in each video data in the same time interval, to obtain the deduplicated video data, comprises:
acquiring effective data volume of each vehicle video clip in the same vehicle video clips;
according to the effective data volume of each vehicle video segment, determining the vehicle video segment with the largest effective data volume;
and (5) keeping the vehicle video clips with the largest effective data amount, and deleting other vehicle video clips to obtain the deduplication video data.
3. The data deduplication method based on a Flink framework according to claim 1, wherein the step of deleting duplicate vehicle video clips from the same vehicle video clips when the same vehicle video clips exist in each video data in the same time interval, to obtain the deduplicated video data, comprises:
acquiring the definition of each vehicle video clip in the same vehicle video clips;
determining a vehicle video clip with the maximum definition according to the definition of each vehicle video clip;
retaining the vehicle video clip with the highest definition, and deleting the other vehicle video clips to obtain the deduplicated video data.
4. The data deduplication method based on a Flink framework according to claim 1, wherein after the step of judging, according to the data processing model, whether the same vehicle video segment exists at the same time in each video data carried by the data execution request, and deduplicating the video data when the same vehicle video segment exists in each video data at the same time to obtain the deduplicated video data, the method further comprises:
acquiring the deduplicated video data and the duplicate video data corresponding to the deduplicated video data;
assigning, according to the deduplicated video data and the duplicate video data, the same data correspondence identifier to both; and
storing the deduplicated video data and the duplicate video data in an HBase system according to the data correspondence identifier.
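The identifier-and-store step of claim 4 can be sketched as building HBase-style rows keyed by a shared correspondence identifier. The row-key layout and the `meta:role` column are assumptions; an actual deployment would write the rows through an HBase client (e.g. happybase) rather than returning a dict.

```python
import uuid

def build_hbase_rows(dedup_id: str, duplicate_ids: list) -> dict:
    """Assign one shared correspondence identifier to the deduplicated
    segment and its duplicates, and lay them out as HBase-style rows."""
    corr_id = uuid.uuid4().hex  # the shared data correspondence identifier
    rows = {f"{corr_id}:{dedup_id}": {"meta:role": "deduplicated"}}
    for dup in duplicate_ids:
        rows[f"{corr_id}:{dup}"] = {"meta:role": "duplicate"}
    return rows
```

Keying every row with the same prefix keeps the deduplicated segment and its duplicates adjacent in HBase, so one prefix scan retrieves the whole group.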
5. The data deduplication method based on a Flink framework according to claim 1, wherein the step of acquiring, according to the data execution identifier, the data processing model corresponding to the data execution identifier in the Flink framework comprises:
acquiring, according to the data execution identifier, the Flink identifier corresponding to the Flink framework and the data execution identifier; and
acquiring the data processing model from the Flink framework according to the Flink identifier.
6. The data deduplication method based on a Flink framework according to claim 5, wherein the step of acquiring the data processing model from the Flink framework according to the Flink identifier comprises:
setting an initial data processing model;
acquiring test video data, the test video segments corresponding to the test video data, and the test deduplicated video segments corresponding to the test video data;
processing the test video segments corresponding to the test video data with the initial data processing model to obtain model deduplicated video segments; and
correcting the initial data processing model according to the test deduplicated video segments and the model deduplicated video segments, to obtain the data processing model.
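The correction step of claim 6 can be sketched as parameter tuning: run the initial model over the test video segments, compare its deduplicated output against the test deduplicated segments, and adjust until they agree. Everything here is a hypothetical stand-in; the claim does not say what the tunable parameter is, so this sketch treats the preset similarity threshold as that parameter and scores candidates by set agreement.

```python
def correct_model(dedupe, test_segments, expected_ids, thresholds):
    """Return the threshold whose deduplication output best matches the
    expected (test) deduplicated segment ids."""
    def score(t):
        got = set(dedupe(test_segments, t))
        # reward agreement with the test set, penalize any disagreement
        return len(got & expected_ids) - len(got ^ expected_ids)
    return max(thresholds, key=score)
```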
7. A data deduplication apparatus based on a Flink framework, comprising:
a first acquisition module, configured to acquire a data execution request carrying a data execution identifier;
a second acquisition module, configured to acquire, according to the data execution identifier, a data processing model corresponding to the data execution identifier in the Flink framework; and
a deduplication module, configured to judge, according to the data processing model, whether the same vehicle video segment exists at the same time in each video data carried by the data execution request, and to deduplicate the video data when the same vehicle video segment exists in each video data at the same time, to obtain deduplicated video data; the deduplication module being specifically configured to: process each video data in time order to obtain the vehicle video segments of each video data within each time interval; acquire first vehicle information from a vehicle video segment in one video data; search for second vehicle information in the other vehicle video segments according to the first vehicle information; judge whether the similarity between the first vehicle information and the second vehicle information is greater than a preset similarity; when the similarity is greater than the preset similarity, judge whether vehicle key information exists in the first vehicle information and the second vehicle information; when no vehicle key information exists in the first vehicle information and the second vehicle information, determine that the vehicle video segment containing the first vehicle information and the vehicle video segment containing the second vehicle information are different vehicle video segments; when vehicle key information exists in the first vehicle information and the second vehicle information, determine that they are the same vehicle video segment; and when the same vehicle video segments exist in all the video data within the same time interval, delete the repeated vehicle video segments among them to obtain the deduplicated video data.
CN202011077804.3A 2020-10-10 2020-10-10 Data deduplication method and device based on Flink frame Active CN112182289B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011077804.3A CN112182289B (en) 2020-10-10 2020-10-10 Data deduplication method and device based on Flink frame


Publications (2)

Publication Number Publication Date
CN112182289A CN112182289A (en) 2021-01-05
CN112182289B true CN112182289B (en) 2023-04-28

Family

ID=73948737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011077804.3A Active CN112182289B (en) 2020-10-10 2020-10-10 Data deduplication method and device based on Flink frame

Country Status (1)

Country Link
CN (1) CN112182289B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112235599B (en) * 2020-10-14 2022-05-27 广州欢网科技有限责任公司 Video processing method and system
CN113191305A (en) * 2021-05-17 2021-07-30 武汉中科通达高新技术股份有限公司 Data management method and device based on Flink framework

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851287A (en) * 2015-04-15 2015-08-19 浙江大学 Method for urban road link travel time detection based on video detector
CN110111565A (en) * 2019-04-18 2019-08-09 中国电子科技网络信息安全有限公司 A kind of people's vehicle flowrate System and method for flowed down based on real-time video
CN110166650A (en) * 2019-04-29 2019-08-23 北京百度网讯科技有限公司 Generation method and device, the computer equipment and readable medium of video set
CN110533925A (en) * 2019-09-04 2019-12-03 上海眼控科技股份有限公司 Processing method, device, computer equipment and the storage medium of vehicle illegal video




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant