CN111652904A - Data processing method and device, electronic equipment and storage medium - Google Patents

Data processing method and device, electronic equipment and storage medium

Info

Publication number
CN111652904A
Authority
CN
China
Prior art keywords
video
video analysis
motion state
sample
deviation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010567795.XA
Other languages
Chinese (zh)
Inventor
侯琛 (Hou Chen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010567795.XA
Publication of CN111652904A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/207: Analysis of motion for motion estimation over a hierarchy of resolutions
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a data processing method, an apparatus, an electronic device, and a storage medium. The method includes: evaluating a first motion state of a first object in the current video analysis based on the video amount obtained by the current video analysis; acquiring a second motion state of a second object in historical video analysis; obtaining a first sample parameter corresponding to the first motion state and a second sample parameter corresponding to the second motion state based on sample estimation performed on the first motion state and the second motion state; and determining the video amount required to be acquired in the next video analysis based on the parameter deviation between the first sample parameter and the second sample parameter and the video amount acquired in the current video analysis. Embodiments of the disclosure can improve the efficiency of video analysis.

Description

Data processing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of internet of things, and in particular relates to a data processing method and device, electronic equipment and a storage medium.
Background
With the rapid development of information technology, evaluating the motion state of an object in a video through video analysis has important application value. For example, the speed of vehicles traveling on a road is evaluated by analyzing road surveillance video, and risk evaluation and early warning for the vehicles are carried out on that basis. Each video analysis requires a certain amount of video to be acquired. In the prior art, the amount of video acquired for each video analysis is fixed, which may make the acquired video amount redundant or insufficient, reducing the efficiency of video analysis.
Disclosure of Invention
An object of the present disclosure is to provide a data processing method, apparatus, electronic device and storage medium, which can improve the efficiency of video analysis.
According to an aspect of the disclosed embodiments, a data processing method is disclosed, the method comprising:
evaluating a first motion state of a first object in the current video analysis based on the video amount obtained by the current video analysis;
acquiring a second motion state of a second object in historical video analysis;
obtaining a first sample parameter corresponding to the first motion state and a second sample parameter corresponding to the second motion state based on sample estimation performed on the first motion state and the second motion state;
and determining the video amount required to be acquired in the next video analysis based on the parameter deviation between the first sample parameter and the second sample parameter and the video amount acquired in the current video analysis.
According to an aspect of the disclosed embodiments, a data processing apparatus is disclosed, the apparatus comprising:
the evaluation module is configured to evaluate a first motion state of a first object in the current video analysis based on the video amount obtained by the current video analysis;
a first obtaining module configured to obtain a second motion state of a second object in the historical video analysis;
a second obtaining module, configured to obtain a first sample parameter corresponding to the first motion state and a second sample parameter corresponding to the second motion state based on sample estimation performed on the first motion state and the second motion state;
and the determining module is configured to determine the video amount required to be acquired in the next video analysis based on the parameter deviation between the first sample parameter and the second sample parameter and the video amount acquired in the current video analysis.
In an exemplary embodiment of the disclosure, the apparatus is configured to:
obtaining the first sample parameter based on sample estimation performed on the first motion state and the second motion state corresponding to at least one time of the historical video analysis;
and acquiring the second sample parameter based on sample estimation of the second motion state respectively corresponding to at least two times of historical video analysis.
In an exemplary embodiment of the disclosure, the apparatus is configured to:
obtaining the first sample parameter based on sample estimation of the first motion states respectively corresponding to at least two first objects;
and acquiring the second sample parameter based on sample estimation of the second motion states respectively corresponding to at least two second objects.
In an exemplary embodiment of the disclosure, the apparatus is configured to: if the parameter deviation is less than or equal to a preset deviation threshold, determine the video amount acquired by the current video analysis as the video amount required to be acquired by the next video analysis.
In an exemplary embodiment of the disclosure, the apparatus is configured to:
if the parameter deviation is larger than a preset deviation threshold value, acquiring a first time length spent by the video analysis and a second time length spent by the historical video analysis;
determining a time length deviation between the first time length and the second time length;
and determining the video volume required to be acquired in the next video analysis based on the comparison between the first time length and a preset time length threshold, the comparison between the time length deviation and the deviation threshold and the video volume acquired in the current video analysis.
In an exemplary embodiment of the disclosure, the apparatus is configured to: if the first time length is less than or equal to the time length threshold and the time length deviation is less than or equal to the deviation threshold, determine the video amount acquired by the current video analysis, increased by a preset video amount, as the video amount required to be acquired by the next video analysis.
In an exemplary embodiment of the disclosure, the apparatus is configured to: if the first time length is greater than the time length threshold or the time length deviation is greater than the deviation threshold, determine the video amount acquired by the current video analysis, reduced by the preset video amount, as the video amount required to be acquired by the next video analysis.
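The adjustment rules in the embodiments above can be sketched as a small decision helper. This is an illustrative sketch only: the function name, the fixed step size, and the relative form of the time length deviation are assumptions, not specified by the disclosure.

```python
def next_video_amount(n_cu, param_dev, first_dur, hist_dur,
                      dev_threshold, dur_threshold, step):
    """Decide the video amount (frames) for the next analysis.

    n_cu: frames acquired by the current analysis
    param_dev: deviation between first and second sample parameters
    first_dur / hist_dur: time spent by the current / historical analysis
    dev_threshold, dur_threshold: preset thresholds
    step: preset amount by which the video amount is raised or lowered
    """
    # Parameter deviation within tolerance: keep the current amount.
    if param_dev <= dev_threshold:
        return n_cu
    # Relative time length deviation (assumed form, not given in the text).
    dur_dev = abs(first_dur - hist_dur) / hist_dur
    if first_dur <= dur_threshold and dur_dev <= dev_threshold:
        return n_cu + step          # analysis is fast and stable: acquire more
    return max(1, n_cu - step)      # too slow or unstable: acquire less
```

For instance, with a deviation threshold of 0.05 and a step of 2 frames, a stable fast analysis of 8 frames with a large parameter deviation would move to 10 frames, while a slow one would fall back to 6.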
According to an aspect of an embodiment of the present disclosure, there is disclosed a data processing electronic device including: a memory storing computer readable instructions; a processor reading computer readable instructions stored by the memory to perform the method of any of the preceding claims.
According to an aspect of an embodiment of the present disclosure, a computer program medium is disclosed, having computer readable instructions stored thereon, which, when executed by a processor of a computer, cause the computer to perform the method of any of the preceding claims.
In the embodiments of the present disclosure, in the process of determining the motion state of an object through video analysis, the video amount required to be acquired in the next video analysis is corrected, on the basis of the video amount acquired in the current video analysis, according to the parameter deviation between the sample parameters corresponding to the motion states. The video amount is thereby adjusted dynamically and its acquisition is associated with the motion state, which improves the efficiency of video analysis.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
FIG. 1 illustrates an architectural diagram according to one embodiment of the present disclosure.
FIG. 2 illustrates a terminal interface diagram in a particular application, according to one embodiment of the present disclosure.
FIG. 3 illustrates a terminal interface diagram in a particular application, according to one embodiment of the present disclosure.
FIG. 4 shows a flow diagram of a data processing method according to one embodiment of the present disclosure.
FIG. 5 shows a block diagram of a data processing apparatus according to one embodiment of the present disclosure.
FIG. 6 shows a hardware diagram of data processing electronics, according to one embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more example embodiments. In the following description, numerous specific details are provided to give a thorough understanding of example embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, steps, and so forth. In other instances, well-known structures, methods, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The embodiments of the disclosure provide a data processing method and apparatus, an electronic device and a storage medium, relate to the field of the Internet of Things, and aim to improve the efficiency of video analysis. The Internet of Things (IoT) refers to realizing ubiquitous connection between objects, and between objects and people, through information sensors, radio-frequency identification, global positioning systems, infrared sensors, laser scanners and other devices and technologies, so as to achieve intelligent sensing, identification and management of objects and processes. The Internet of Things is an information carrier based on the Internet, traditional telecommunication networks and the like, in which all ordinary physical objects that can be independently addressed form an interconnected network.
FIG. 1 illustrates the architecture of one embodiment of the present disclosure.
This embodiment is applied to the Internet of Vehicles, in which the communication network used for communication is mainly divided into two parts: a dedicated highway communication network where the vehicle 20 traveling on the highway is located, and a public network where the center cloud 10a and the edge computing node 10b are located. Through the fusion of the dedicated highway network and the public network, communication among the center cloud 10a, the edge computing node 10b and the vehicle 20 is realized. The center cloud 10a or the edge computing node 10b may be a server, which may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing cloud computing services. The public network where the center cloud 10a and the edge computing node 10b are located may be a 4G (fourth-generation mobile communication technology) network or a 5G (fifth-generation mobile communication technology) network.
In this embodiment, the center cloud 10a and the edge computing node 10b may together form an analysis system that evaluates the motion state of the vehicle 20. Specifically, a camera transmits the collected video of the vehicle 20 to the edge computing node 10b, which mainly identifies and localizes the vehicle 20 based on the collected video; the edge computing node 10b then transmits the video and the identification and localization results to the center cloud 10a in the form of Internet of Vehicles messages, and the center cloud 10a analyzes a certain amount of video on the basis of those results to evaluate the motion state of the vehicle 20.
After the analysis system estimates the motion state of the vehicle 20 based on the video amount obtained by the current video analysis, the data processing method provided by the present disclosure may be applied to determine the video amount required to be obtained by the next video analysis. Specifically, the analysis system may acquire the motion state of the vehicle 20 in the last video analysis; respectively acquiring corresponding sample parameters based on sample estimation performed on the motion state of the vehicle 20 in the current video analysis and the motion state of the vehicle 20 in the last video analysis; and then determining the video amount required to be acquired in the next video analysis based on the parameter deviation among the sample parameters and the video amount acquired in the current video analysis.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
Fig. 2 shows a terminal interface diagram of an embodiment of the present disclosure in a specific application.
In this embodiment, an analysis system that evaluates the motion state of an object based on video analysis is applied to intelligent traffic control. By selecting a specific place in the map on the left side of the interface (the map may be obtained through interaction with an independent map system), the video collected by the camera at that place is retrieved and played in the center of the interface. During video playback, the analysis system performs video analysis, evaluates the speed of each vehicle in the video, and marks the speeds for subsequent traffic control (such as scene early warning).
A type-distribution diagram is also displayed on the left side of the interface, indicating that the objects on the current road are all motor vehicles; a statistical graph of real-time vehicle speed is displayed in the middle of the interface; and a curve of real-time traffic flow and a display column for scene early-warning information are shown on the right side of the interface.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
Fig. 3 shows a terminal interface diagram of an embodiment of the present disclosure in a specific application.
In this embodiment, an analysis system that evaluates the motion state of a subject based on video analysis is applied to intelligent traffic control. And calling the video collected by the camera at the specific place by selecting the specific place in the map on the left side of the interface, and playing the video in the center of the interface. In the video playing process, the analysis system performs video analysis, evaluates the speed of each vehicle and the speed of pedestrians in the video, and performs identification for subsequent traffic control.
The left side of the interface is also displayed with a type distribution schematic diagram to indicate that the object in the current road has both motor vehicles and pedestrians; a statistical graph of the real-time vehicle speed is also displayed in the middle of the interface; and a curve graph of real-time traffic flow and a display column of scene early warning information are displayed on the right side of the interface.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
Fig. 4 shows a data processing method according to an embodiment of the present disclosure, described by taking as the execution subject an analysis system that evaluates the motion state of an object based on video analysis. The method includes:
step S310, evaluating a first motion state of a first object in the current video analysis based on the video quantity obtained by the current video analysis;
step S320, acquiring a second motion state of a second object in historical video analysis;
step S330, obtaining a first sample parameter corresponding to the first motion state and a second sample parameter corresponding to the second motion state based on sample estimation performed on the first motion state and the second motion state;
step S340, determining the video amount required to be acquired in the next video analysis based on the parameter deviation between the first sample parameter and the second sample parameter and the video amount acquired in the current video analysis.
In the embodiments of the present disclosure, in the process of determining the motion state of an object through video analysis, the video amount required to be acquired in the next video analysis is corrected, on the basis of the video amount acquired in the current video analysis, according to the parameter deviation between the sample parameters corresponding to the motion states. The video amount is thereby adjusted dynamically and its acquisition is associated with the motion state, which improves the efficiency of video analysis.
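Steps S310 to S340 can be illustrated as one update cycle. This is a minimal sketch under assumptions: the variance is used as the sample parameter (as in the worked examples later in the description), the function name is hypothetical, and the branch for a deviation above the threshold is simplified to a fixed increase.

```python
import statistics

def analysis_cycle(current_speeds, historical_speeds, n_cu, dev_threshold, step):
    """One cycle: estimate motion states (S310), fetch history (S320),
    estimate sample parameters (S330), correct the video amount (S340)."""
    # S330: sample estimation; the variance is taken as the sample parameter here
    s2_cu = statistics.pvariance(current_speeds)     # first sample parameter
    s2_la = statistics.pvariance(historical_speeds)  # second sample parameter
    # S340: the parameter deviation drives the correction of the video amount
    e_s = abs(s2_cu - s2_la) / s2_la
    if e_s <= dev_threshold:
        return n_cu        # deviation within tolerance: keep the current amount
    return n_cu + step     # simplified; the disclosure also compares durations
```

The inputs would come from the current analysis (estimated speeds and frame count) and from the history record of earlier analyses.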
It should be noted that the objects in the embodiments of the present disclosure include vehicles traveling on roads, pedestrians traveling on roads, falling objects, or other moving entities. For the purpose of brief explanation, in the explanation of the following embodiments, a vehicle traveling on a road is exemplarily taken as an object in the embodiment of the present disclosure.
It should be further noted that the motion state in the embodiments of the present disclosure includes a speed, an acceleration, a motion direction, or other motion states. For the purpose of brief explanation, in the explanation of the following embodiments, the speed is exemplarily taken as the motion state in the embodiments of the present disclosure.
In the embodiments of the present disclosure, the analysis system evaluates the first motion state of the first object in the video by analyzing the first object in the video amount obtained by the current video analysis. The analysis system analyzes the first object mainly through a motion-prediction algorithm, model or empirical formula, which is not elaborated here.
Specifically, if a video frame is used as the minimum unit of video analysis, in the video analysis process, the analysis system acquires a certain number of consecutive video frames, and evaluates the first motion state of the first object according to analysis of the first object in the consecutive video frames. For example: in the video analysis process, 8 continuous video frames are obtained, and the speed of the first vehicle is evaluated according to the analysis of the first vehicle in the 8 continuous video frames.
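As a concrete illustration of estimating a speed from a run of consecutive frames, the sketch below assumes the analysis has already tracked the object's position in each frame (in metres, in road coordinates); the function name and inputs are hypothetical, not part of the disclosure.

```python
def estimate_speed(positions_m, fps):
    """Average speed (m/s) of a tracked object, given its (x, y) position in
    each of a run of consecutive video frames captured at `fps` frames/second."""
    total = 0.0  # path length travelled across the frame run
    for (x0, y0), (x1, y1) in zip(positions_m, positions_m[1:]):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    elapsed = (len(positions_m) - 1) / fps  # time spanned by the frames
    return total / elapsed
```

With 8 frames at 25 fps and the object advancing 1 m per frame, this yields 25 m/s.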
In an embodiment of the present disclosure, the analysis system obtains a second motion state of a second object in the historical video analysis. Specifically, the second motion state of the second object in the historical video analysis may be stored in a history record, and the analysis system may retrieve the history record to obtain the second motion state of the second object. For example: and after the historical video analysis obtains the speed of the second vehicle, storing the speed of the second vehicle into the historical record. The analysis system thus retrieves the speed of the second vehicle by retrieving the history.
It should be noted that the first object in the current video analysis and the second object in the historical video analysis may be the same entity or different entities.
It should be noted that the historical video analysis may be the most recent video analysis before the current one, or an earlier video analysis. As the interval from the current video analysis grows, the degree of correction or compensation required in the subsequent data processing also grows. In one embodiment, the most recent video analysis before the current one is therefore preferably selected as the historical video analysis.
In the embodiments of the disclosure, the analysis system takes the first motion state of the first object as a sample and performs sample estimation on it to obtain the corresponding first sample parameter; it takes the second motion state of the second object as a sample and performs sample estimation on it to obtain the corresponding second sample parameter.
In one embodiment, the first object and the second object are the same object. Obtaining a first sample parameter corresponding to the first motion state and a second sample parameter corresponding to the second motion state based on sample estimation performed on the first motion state and the second motion state, including:
obtaining the first sample parameter based on sample estimation performed on the first motion state and the second motion state corresponding to at least one time of historical video analysis;
and acquiring the second sample parameter based on sample estimation of the second motion state respectively corresponding to at least two times of historical video analysis.
In this embodiment, a specific first object in the current video analysis and a specific second object in the historical video analysis are the same object. Taking a first motion state of a first object in the video analysis and a second motion state of a second object in at least one historical video analysis as first samples, and obtaining first sample parameters according to sample estimation performed on the first samples; and taking second motion states respectively corresponding to second objects in at least two times of historical video analysis as second samples, and obtaining second sample parameters according to sample estimation performed on the second samples.
For example: the speed V1 of vehicle A estimated in the current video analysis and the speed V2 of vehicle A estimated in the last video analysis are taken as the first sample, and the mean r_cu and variance S²_cu of the first sample are obtained; the speed V2 of vehicle A estimated in the last video analysis and the speed V3 of vehicle A estimated in the video analysis before that are taken as the second sample, and the variance S²_la of the second sample is determined.
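This example can be reproduced with the standard library. The speed values below are invented for illustration, and the population variance (`statistics.pvariance`) is assumed, since the disclosure does not say whether the sample or population variance is meant.

```python
import statistics

# Invented speeds of vehicle A (m/s): current, last, and the analysis before last
v1, v2, v3 = 22.0, 20.0, 21.0

first_sample = [v1, v2]                      # current + last analysis
r_cu = statistics.mean(first_sample)         # first sample mean
s2_cu = statistics.pvariance(first_sample)   # first sample variance

second_sample = [v2, v3]                     # two historical analyses
s2_la = statistics.pvariance(second_sample)  # second sample variance
```
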
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In an embodiment, obtaining a first sample parameter corresponding to the first motion state and a second sample parameter corresponding to the second motion state based on sample estimation performed on the first motion state and the second motion state includes:
obtaining the first sample parameter based on sample estimation of the first motion states respectively corresponding to at least two first objects;
the second sample parameter is obtained based on sample estimation performed on the second operating states respectively corresponding to at least two second objects.
In this embodiment, first motion states respectively corresponding to a plurality of first objects are used as first samples, and first sample parameters are obtained according to sample estimation performed on the first samples; and taking the second motion states respectively corresponding to the second objects as second samples, and obtaining second sample parameters according to sample estimation performed on the second samples.
For example: the speed V1 of vehicle A and the speed V2 of vehicle B estimated in the current video analysis are taken as the first sample, and sample estimation is performed on it to obtain the corresponding mean r_cu and variance S²_cu; the speed V3 of vehicle C and the speed V4 of vehicle D estimated in the historical video analysis are taken as the second sample, and sample estimation is performed on it to obtain the corresponding variance S²_la.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In the embodiment of the present disclosure, the analysis system determines a parameter deviation between the first sample parameter and the second sample parameter, and further determines a video amount required to be acquired in the next video analysis based on the parameter deviation and the video amount acquired in the current video analysis.
For example: the analysis system takes the speed V1 of vehicle A estimated in the current video analysis (the video amount acquired by the current video analysis being n_cu frames) and the speed V2 of vehicle A estimated in the last video analysis as the first sample, and obtains the mean r_cu and variance S²_cu of the first sample; it takes the speed V2 estimated in the last video analysis and the speed V3 estimated in the video analysis before that as the second sample, and determines the variance S²_la of the second sample. The analysis system then determines the parameter deviation E_s between S²_cu and S²_la, E_s = |S²_cu - S²_la| / S²_la, and determines the deviation E_v between the speed V1 of vehicle A estimated in the current video analysis and the mean r_cu of the first sample, E_v = |V1 - r_cu| / r_cu. Based on E_s and n_cu, in combination with E_v, the video amount n_ne frames required to be acquired in the next video analysis is determined.
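The two deviations are direct transcriptions of the formulas in this example (only the function name is hypothetical):

```python
def parameter_deviations(v1, r_cu, s2_cu, s2_la):
    """E_s = |S²_cu - S²_la| / S²_la  (deviation between sample variances)
    E_v = |V1 - r_cu| / r_cu          (deviation of V1 from the first-sample mean)"""
    e_s = abs(s2_cu - s2_la) / s2_la
    e_v = abs(v1 - r_cu) / r_cu
    return e_s, e_v
```
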
In an embodiment, determining the amount of video required to be acquired for the next video analysis based on the parameter deviation between the first sample parameter and the second sample parameter and the amount of video acquired by the current video analysis includes:
and if the parameter deviation is less than or equal to a preset deviation threshold value, determining the video volume acquired by the video analysis at this time as the video volume required to be acquired by the video analysis at the next time.
In this embodiment, a deviation threshold is preset. And if the parameter deviation is less than or equal to the deviation threshold value, determining the video quantity required to be acquired in the next video analysis as the video quantity acquired in the current video analysis.
For example: the preset deviation threshold is P, the video amount acquired by the current video analysis is n_cu frames, and the video amount required to be acquired by the next video analysis is n_ne frames. The analysis system determines that the parameter deviation between the variance S²_cu of the first sample and the variance S²_la of the second sample is E_s, and obtains the deviation E_v between the speed V1 of vehicle A estimated in the current video analysis and the mean r_cu of the first sample. If E_s ≤ P and E_v ≤ P, then n_ne = n_cu is determined.
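The condition under which n_ne = n_cu can be expressed as a one-line check (the helper name is hypothetical):

```python
def within_threshold(e_s, e_v, p):
    """True when both deviations are within the preset threshold P, in which
    case the next analysis reuses the current video amount n_cu."""
    return e_s <= p and e_v <= p
```
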
It can be understood that when the deviation between the first motion state of the first object and the mean of the first sample is less than or equal to the deviation threshold, and the parameter deviation between the variance of the first sample and the variance of the second sample is less than or equal to the deviation threshold, the deviation between the first motion state estimated for the first object by the current video analysis and the true motion state of the first object lies within the range defined by the deviation threshold. Therefore, when the deviation range represented by the deviation threshold satisfies the application requirement, the first motion state estimated by the current video analysis also satisfies the application requirement, which in turn means that the video amount acquired by the current video analysis satisfies it. In this case, the next video analysis can continue to use the video amount acquired by the current video analysis.
In an embodiment, the method further comprises:
acquiring the accident rate of the object;
determining the deviation threshold based on the accident rate.
In this embodiment, the deviation threshold is determined based on the accident rate of the object. Specifically, the accident rate may be used directly as the deviation threshold, or the deviation threshold may be obtained by correcting the accident rate according to the application requirement.
For example: the objects targeted by the video analysis are vehicles traveling on a road. The traffic accident occurrence rate P_accident of vehicles on the road is acquired, and the deviation threshold P for measuring the parameter deviation is then determined as P = P_accident.
This embodiment has the advantage that the deviation threshold is determined according to the accident rate, such that the deviation range represented by the deviation threshold is associated with the accident rate, thereby enabling the amount of video required to be acquired for the next video analysis to meet the application requirements related to the accident rate.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure. It will be appreciated that the deviation threshold may also be set according to other application requirements, or may be set according to empirical data.
In an embodiment, determining the amount of video required to be acquired for the next video analysis based on the parameter deviation between the first sample parameter and the second sample parameter and the amount of video acquired by the current video analysis includes:
if the parameter deviation is larger than a preset deviation threshold value, acquiring a first time length spent by the video analysis and a second time length spent by the historical video analysis;
determining a time length deviation between the first time length and the second time length;
and determining the video volume required to be acquired by the next video analysis based on the comparison between the first time length and a preset time length threshold, the comparison between the time length deviation and the deviation threshold and the video volume acquired by the video analysis.
In this embodiment, a deviation threshold is preset. If the parameter deviation is greater than the deviation threshold, the amount of the video required to be acquired in the next video analysis is determined based on the consideration of the time length spent in the video analysis.
Specifically, in addition to the deviation threshold P, a duration threshold t_0 is also preset. If the parameter deviation is greater than the deviation threshold, the analysis system acquires the first duration t_cu spent by the current video analysis and the second duration t_la spent by the historical video analysis; determines the duration deviation E_t between the first duration t_cu and the second duration t_la, E_t = |t_cu - t_la| / t_la; and determines the video amount n_ne (in frames) required for the next video analysis based on the comparison of the first duration t_cu with the duration threshold t_0, the comparison of the duration deviation E_t with the deviation threshold P, and the video amount n_cu (in frames) acquired by the current video analysis.
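The duration deviation defined above can be sketched as follows; the function name is hypothetical and chosen only for illustration.

```python
def duration_deviation(t_cu, t_la):
    """Relative duration deviation E_t = |t_cu - t_la| / t_la between the
    first duration t_cu (current analysis) and the second duration t_la
    (historical analysis)."""
    return abs(t_cu - t_la) / t_la
```

With t_cu = 0.27 s and t_la = 0.25 s, E_t = 0.02 / 0.25 = 0.08, matching the worked example below.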
In an embodiment, determining the video volume required to be acquired for the next video analysis based on the comparison between the first duration and the preset duration threshold, the comparison between the duration deviation and the deviation threshold, and the video volume acquired by the current video analysis includes:
if the first time length is less than or equal to the time length threshold value and the time length deviation is less than or equal to the deviation threshold value, determining the video volume obtained by the current video analysis with the increased preset video volume as the video volume required to be obtained by the next video analysis.
In this embodiment, if the first duration t_cu is less than or equal to the duration threshold t_0 and the duration deviation E_t is less than or equal to the deviation threshold P, the video amount required for the next video analysis is determined as n_ne = n_cu + n_0, where n_cu is the video amount acquired by the current video analysis and n_0 is a preset adjustment video amount.
For example: the preset deviation threshold is 0.12, the preset duration threshold is 0.3 s, and the preset adjustment video amount is 1 frame. If the video amount acquired by the current video analysis is 8 frames, the first duration spent by the current video analysis is 0.27 s (less than 0.3 s), and the duration deviation between the first duration and the second duration spent by the historical video analysis is 0.08 (less than 0.12), the video amount required for the next video analysis is determined to be 8 + 1 = 9 frames.
It will be appreciated that in a particular application scenario (for example, a vehicle early-warning scenario), the time spent on video analysis constitutes a processing delay. The first duration being less than or equal to the duration threshold indicates that the processing delay is within the controlled range; the duration deviation being less than or equal to the deviation threshold indicates that the relative deviation of the processing delay is also within the controlled range. If, with both under control, the parameter deviation is still outside the controlled range, the video amount acquired by the current video analysis is relatively small, and therefore the video amount acquired should be increased for the next video analysis.
In an embodiment, determining the video volume required to be acquired for the next video analysis based on the comparison between the first duration and the preset duration threshold, the comparison between the duration deviation and the deviation threshold, and the video volume acquired by the current video analysis includes:
if the first duration is greater than the duration threshold or the duration deviation is greater than the deviation threshold, determining the video volume obtained by the current video analysis with the reduced preset video volume as the video volume required to be obtained by the next video analysis.
In this embodiment, if the first duration t_cu is greater than the duration threshold t_0, or the duration deviation E_t is greater than the deviation threshold P, the video amount required for the next video analysis is determined as n_ne = n_cu - n_0, where n_cu is the video amount acquired by the current video analysis and n_0 is a preset adjustment video amount.
For example: the preset deviation threshold is 0.12, the preset duration threshold is 0.3 s, and the preset adjustment video amount is 1 frame. If the video amount acquired by the current video analysis is 8 frames and the first duration spent by the current video analysis is 0.33 s (greater than 0.3 s), the video amount required for the next video analysis is determined to be 8 - 1 = 7 frames.
Alternatively, if the video amount acquired by the current video analysis is 8 frames and the duration deviation between the first duration spent by the current video analysis and the second duration spent by the historical video analysis is 0.20 (greater than 0.12), the video amount required for the next video analysis is determined to be 8 - 1 = 7 frames.
It can be understood that a first duration greater than the duration threshold indicates a large processing delay; likewise, a duration deviation greater than the deviation threshold indicates a large processing delay. Since the processing delay is positively correlated with the acquired video amount, a large delay means that the video amount acquired by the current video analysis is relatively large, and therefore the video amount acquired should be reduced for the next video analysis.
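The two duration-based branches described in the preceding embodiments (increase when the delay is under control, decrease otherwise) can be combined into one sketch. The function name and signature are hypothetical; this is an illustration of the rule, not an implementation from the disclosure.

```python
def adjust_video_amount(n_cu, t_cu, t_la, t_0, p, n_0=1):
    """Duration-based adjustment applied when the parameter deviation
    exceeds the deviation threshold P.

    n_cu: video amount (frames) acquired by the current analysis
    t_cu: first duration (s) spent by the current analysis
    t_la: second duration (s) spent by the historical analysis
    t_0:  preset duration threshold (s)
    p:    preset deviation threshold
    n_0:  preset adjustment video amount (frames)
    """
    e_t = abs(t_cu - t_la) / t_la  # duration deviation E_t
    if t_cu <= t_0 and e_t <= p:
        # Processing delay under control but amount insufficient: increase.
        return n_cu + n_0
    # Delay too large (absolutely or relatively): decrease the amount.
    return n_cu - n_0
```

Using the document's numbers (t_0 = 0.3 s, P = 0.12, n_0 = 1): with t_cu = 0.27 s and E_t = 0.08 the amount grows from 8 to 9 frames; with t_cu = 0.33 s it shrinks to 7 frames.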
In an embodiment, the method further comprises:
acquiring the minimum safe driving distance and the maximum safe speed of a road where a first object is located;
and determining the ratio of the minimum safe driving distance to the maximum safe vehicle speed as the time length threshold value.
In this embodiment, the objects targeted by the video analysis are vehicles. The analysis system acquires the minimum safe driving distance L_min and the maximum safe vehicle speed V_max of the road where the first object is located, and on that basis determines the duration threshold t_0 for measuring the first duration: t_0 = L_min / V_max.
This embodiment has the advantage that the ratio of the minimum safe driving distance of the road to the maximum safe vehicle speed approximates the fastest reaction time required during vehicle travel. Using this fastest reaction time as the duration threshold ensures that the video amount determined for subsequent video analysis meets the application requirement of safe vehicle travel.
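The threshold derivation t_0 = L_min / V_max can be sketched as below. The numeric values in the usage note are hypothetical and chosen only so that the result matches the 0.3 s threshold used in the earlier examples.

```python
def duration_threshold(l_min, v_max):
    """Duration threshold t_0 = L_min / V_max: the minimum safe driving
    distance (m) divided by the maximum safe vehicle speed (m/s),
    approximating the fastest required reaction time (s)."""
    return l_min / v_max
```

For example, a minimum safe distance of 15 m and a maximum safe speed of 50 m/s would yield t_0 = 0.3 s.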
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
The following table shows the effect of the embodiments of the present disclosure compared to the prior art in the application of the embodiments of the present disclosure to speed assessment of a running vehicle based on video analysis.
(Table comparing the embodiments of the present disclosure with the prior art; the table images BDA0002548162370000131 and BDA0002548162370000141 are not reproduced in this text extraction.)
As the tabulated data show, compared with the prior art, the method provided in this embodiment of the present disclosure significantly improves the efficiency of video analysis, where efficiency is reflected in both accuracy and speed. The method neither makes the acquired video amount redundant, which preserves the speed of video analysis, nor makes it insufficient, which preserves the accuracy of video analysis, thereby improving the overall efficiency of video analysis.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
Fig. 5 shows a data processing apparatus according to an embodiment of the present disclosure, the apparatus comprising:
the evaluation module 410 is configured to evaluate a first motion state of a first object in a current video analysis based on the video amount obtained by the current video analysis;
a first obtaining module 420 configured to obtain a second motion state of a second object in the historical video analysis;
a second obtaining module 430, configured to obtain a first sample parameter corresponding to the first motion state and a second sample parameter corresponding to the second motion state based on sample estimation performed on the first motion state and the second motion state;
the determining module 440 is configured to determine the amount of video required to be acquired in the next video analysis based on the parameter deviation between the first sample parameter and the second sample parameter and the amount of video acquired in the current video analysis.
In an exemplary embodiment of the disclosure, the apparatus is configured to:
obtaining the first sample parameter based on sample estimation performed on the first motion state and the second motion state corresponding to at least one time of the historical video analysis;
and acquiring the second sample parameter based on sample estimation of the second motion state respectively corresponding to at least two times of historical video analysis.
In an exemplary embodiment of the disclosure, the apparatus is configured to:
obtaining the first sample parameter based on sample estimation of the first motion states respectively corresponding to at least two first objects;
and acquiring the second sample parameters based on sample estimation of the second motion states respectively corresponding to at least two second objects.
In an exemplary embodiment of the disclosure, the apparatus is configured to: and if the parameter deviation is less than or equal to a preset deviation threshold value, determining the video quantity obtained by the video analysis of this time as the video quantity required to be obtained by the video analysis of the next time.
In an exemplary embodiment of the disclosure, the apparatus is configured to:
if the parameter deviation is larger than a preset deviation threshold value, acquiring a first time length spent by the video analysis and a second time length spent by the historical video analysis;
determining a time length deviation between the first time length and the second time length;
and determining the video volume required to be acquired in the next video analysis based on the comparison between the first time length and a preset time length threshold, the comparison between the time length deviation and the deviation threshold and the video volume acquired in the current video analysis.
In an exemplary embodiment of the disclosure, the apparatus is configured to: and if the first time length is less than or equal to the time length threshold value and the time length deviation is less than or equal to the deviation threshold value, determining the video quantity obtained by the current video analysis, which is increased by a preset video quantity, as the video quantity required to be obtained by the next video analysis.
In an exemplary embodiment of the disclosure, the apparatus is configured to: and if the first time length is larger than the time length threshold value or the time length deviation is larger than the deviation threshold value, determining the video quantity obtained by the current video analysis with the reduced preset video quantity as the video quantity required to be obtained by the next video analysis.
Data processing electronics 50 according to an embodiment of the present disclosure are described below with reference to fig. 6. The data processing electronics 50 shown in fig. 6 is only an example and should not impose any limitations on the functionality or scope of use of embodiments of the disclosure.
As shown in fig. 6, the data processing electronics 50 is embodied in the form of a general purpose computing device. The components of the data processing electronics 50 may include, but are not limited to: the at least one processing unit 510, the at least one memory unit 520, and a bus 530 that couples various system components including the memory unit 520 and the processing unit 510.
Wherein the storage unit stores program code that is executable by the processing unit 510 to cause the processing unit 510 to perform steps according to various exemplary embodiments of the present invention as described in the description part of the above exemplary methods of the present specification. For example, the processing unit 510 may perform the various steps as shown in fig. 3.
The memory unit 520 may include a readable medium in the form of a volatile memory unit, such as a random access memory unit (RAM)5201 and/or a cache memory unit 5202, and may further include a read only memory unit (ROM) 5203.
Storage unit 520 may also include a program/utility 5204 having a set (at least one) of program modules 5205, such program modules 5205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 530 may be one or more of any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The data processing electronic device 50 may also communicate with one or more external devices 600 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the data processing electronic device 50, and/or with any devices (e.g., router, modem, etc.) that enable the data processing electronic device 50 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 550. An input/output (I/O) interface 550 is connected to the display unit 540. Also, the data processing electronics 50 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) via the network adapter 560. As shown, network adapter 560 communicates with the other modules of data processing electronics 50 via bus 530. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the data processing electronics 50, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to perform the method described in the above method embodiment section.
According to an embodiment of the present disclosure, there is also provided a program product for implementing the method in the above method embodiment, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. A method of data processing, the method comprising:
based on the video quantity obtained by the video analysis, evaluating a first motion state of a first object in the video analysis;
acquiring a second motion state of a second object in historical video analysis;
obtaining a first sample parameter corresponding to the first motion state and a second sample parameter corresponding to the second motion state based on sample estimation performed on the first motion state and the second motion state;
and determining the video amount required to be acquired in the next video analysis based on the parameter deviation between the first sample parameter and the second sample parameter and the video amount acquired in the current video analysis.
2. The method of claim 1, wherein the first object and the second object are the same object,
obtaining a first sample parameter corresponding to the first motion state and a second sample parameter corresponding to the second motion state based on sample estimation performed on the first motion state and the second motion state, including:
obtaining the first sample parameter based on sample estimation performed on the first motion state and the second motion state corresponding to at least one time of the historical video analysis;
and acquiring the second sample parameter based on sample estimation of the second motion state respectively corresponding to at least two times of historical video analysis.
3. The method of claim 1, wherein obtaining a first sample parameter corresponding to the first motion state and a second sample parameter corresponding to the second motion state based on sample estimation performed on the first motion state and the second motion state comprises:
obtaining the first sample parameter based on sample estimation of the first motion states respectively corresponding to at least two first objects;
and acquiring the second sample parameters based on sample estimation of the second motion states respectively corresponding to at least two second objects.
4. The method of claim 1, wherein determining the amount of video to be acquired for the next video analysis based on the parameter deviation between the first sample parameter and the second sample parameter and the amount of video acquired by the current video analysis comprises:
and if the parameter deviation is less than or equal to a preset deviation threshold value, determining the video quantity obtained by the video analysis of this time as the video quantity required to be obtained by the video analysis of the next time.
5. The method of claim 1, wherein determining the amount of video to be acquired for the next video analysis based on the parameter deviation between the first sample parameter and the second sample parameter and the amount of video acquired by the current video analysis comprises:
if the parameter deviation is larger than a preset deviation threshold value, acquiring a first time length spent by the video analysis and a second time length spent by the historical video analysis;
determining a time length deviation between the first time length and the second time length;
and determining the video volume required to be acquired in the next video analysis based on the comparison between the first time length and a preset time length threshold, the comparison between the time length deviation and the deviation threshold and the video volume acquired in the current video analysis.
6. The method according to claim 5, wherein determining the amount of video required to be acquired for the next video analysis based on the comparison between the first duration and a preset duration threshold, the comparison between the duration deviation and the deviation threshold, and the amount of video acquired by the current video analysis comprises:
and if the first time length is less than or equal to the time length threshold value and the time length deviation is less than or equal to the deviation threshold value, determining the video quantity obtained by the current video analysis, which is increased by a preset video quantity, as the video quantity required to be obtained by the next video analysis.
7. The method according to claim 5, wherein determining the amount of video required to be acquired for the next video analysis based on the comparison between the first duration and a preset duration threshold, the comparison between the duration deviation and the deviation threshold, and the amount of video acquired by the current video analysis comprises:
and if the first time length is larger than the time length threshold value or the time length deviation is larger than the deviation threshold value, determining the video quantity obtained by the current video analysis with the reduced preset video quantity as the video quantity required to be obtained by the next video analysis.
8. A data processing apparatus, characterized in that the apparatus comprises:
the evaluation module is configured to evaluate a first motion state of a first object in the current video analysis based on the video amount obtained by the current video analysis;
a first obtaining module configured to obtain a second motion state of a second object in the historical video analysis;
a second obtaining module, configured to obtain a first sample parameter corresponding to the first motion state and a second sample parameter corresponding to the second motion state based on sample estimation performed on the first motion state and the second motion state;
and the determining module is configured to determine the video amount required to be acquired in the next video analysis based on the parameter deviation between the first sample parameter and the second sample parameter and the video amount acquired in the current video analysis.
9. An electronic device for data processing, comprising:
a memory storing computer readable instructions;
a processor reading computer readable instructions stored by the memory to perform the method of any of claims 1-7.
10. A computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to perform the method of any one of claims 1-7.
CN202010567795.XA 2020-06-19 2020-06-19 Data processing method and device, electronic equipment and storage medium Pending CN111652904A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010567795.XA CN111652904A (en) 2020-06-19 2020-06-19 Data processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010567795.XA CN111652904A (en) 2020-06-19 2020-06-19 Data processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111652904A true CN111652904A (en) 2020-09-11

Family

ID=72351559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010567795.XA Pending CN111652904A (en) 2020-06-19 2020-06-19 Data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111652904A (en)

Similar Documents

Publication Publication Date Title
CN109598066B (en) Effect evaluation method, apparatus, device and storage medium for prediction module
KR20200115063A (en) Method of determining quality of map trajectory matching data, device, server and medium
CN113240909B (en) Vehicle monitoring method, equipment, cloud control platform and vehicle road cooperative system
CN112435469B (en) Vehicle early warning control method and device, computer readable medium and electronic equipment
US11328518B2 (en) Method and apparatus for outputting information
CN112966599B (en) Training method of key point recognition model, key point recognition method and device
CN112863187B (en) Detection method of perception model, electronic equipment, road side equipment and cloud control platform
CN113537362A (en) Perception fusion method, device, equipment and medium based on vehicle-road cooperation
CN112129304A (en) Electronic navigation method, electronic navigation device, electronic equipment and storage medium
CN111540191B (en) Driving warning method, system, equipment and storage medium based on Internet of vehicles
CN110493521B (en) Automatic driving camera control method and device, electronic equipment and storage medium
CN112622923B (en) Method and device for controlling a vehicle
CN111739290A (en) Vehicle early warning method and device
CN113418531B (en) Navigation route determining method, device, electronic equipment and computer storage medium
US11417114B2 (en) Method and apparatus for processing information
EP3620969A1 (en) Method and device for detecting obstacle speed, computer device, and storage medium
CN109934496B (en) Method, device, equipment and medium for determining inter-area traffic influence
CN111652904A (en) Data processing method and device, electronic equipment and storage medium
CN115392730A (en) Rental vehicle warning method and device, electronic equipment and storage medium
CN114677848A (en) Perception early warning system, method, device and computer program product
CN112069899A (en) Road shoulder detection method and device and storage medium
CN112766746A (en) Traffic accident recognition method and device, electronic equipment and storage medium
CN115131958B (en) Method and device for pushing congestion road conditions, electronic equipment and storage medium
CN115410386B (en) Short-time speed prediction method and device, computer storage medium and electronic equipment
CN117496571B (en) Human data storage method, device and medium based on face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination