CN112601050A - Smart city video monitoring method and device - Google Patents


Info

Publication number
CN112601050A
Authority
CN
China
Prior art keywords
monitoring, video, city, monitored, videos
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011428869.8A
Other languages
Chinese (zh)
Inventor
Zhang Xingli (张兴莉), Feng Liqin (冯丽琴), Zhang Tao (张涛)
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by: Individual
Priority to: CN202011428869.8A
Publication of: CN112601050A
Legal status: Withdrawn

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/01: Detecting movement of traffic to be counted or controlled
    • G08G 1/0104: Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0125: Traffic data processing

Abstract

The invention discloses a smart city video monitoring method and device. The method first determines the association degree among a plurality of city monitoring videos according to key video information among them. It then divides the plurality of city monitoring videos into a plurality of monitoring area images and determines the proportion of each city monitoring video within the monitoring area images. It further calculates, for a target city monitoring video, the sum of the association degrees with the sample city monitoring videos that are operating the video to be monitored, together with the sum of their proportions, and then determines the viewing probability of the video to be monitored. On this basis, the video to be monitored in the target city monitoring video is monitored. As a result, staff no longer need to watch the monitoring video in real time, and accidents in the area monitored by a city monitoring video are not missed.

Description

Smart city video monitoring method and device
Technical Field
The disclosure relates to the technical field of smart city video monitoring, in particular to a smart city video monitoring method and device.
Background
With the continuous development of smart city technology and the intelligent industry, video monitoring can observe many locations visually and accurately. In the prior art, however, city monitoring videos are mostly watched by staff; if an accident occurs in an area covered by a city monitoring video after the staff member has left, that accident can be missed.
Disclosure of Invention
In order to solve the technical problems in the related art, the disclosure provides a smart city video monitoring method and device.
The invention provides a smart city video monitoring method, which comprises the following steps:
determining the association degree among a plurality of city monitoring videos according to key video information among the plurality of city monitoring videos;
dividing the plurality of city monitoring videos into a plurality of monitoring area images according to the association degree among the plurality of city monitoring videos, and determining the occupation ratio of each city monitoring video in the plurality of city monitoring videos in the monitoring area images;
when a video monitoring instruction sent by the monitoring device corresponding to a target city monitoring video among the plurality of city monitoring videos is detected, calculating the sum of the association degrees between the target city monitoring video and the sample city monitoring videos operating a video to be monitored among a plurality of sample city monitoring videos, and the sum of the proportions, in the monitoring area image to which the target city monitoring video belongs, of the city monitoring videos operating the video to be monitored, wherein: if a plurality of city monitoring videos in the monitoring area image to which the target city monitoring video belongs are operating the video to be monitored, the proportions of those city monitoring videos in that monitoring area image are acquired and the sum of the proportions is calculated; and obtaining the viewing probability of the target city monitoring video for the video to be monitored, wherein a sample city monitoring video is a city monitoring video that has a direct connection relationship with the target city monitoring video among the plurality of city monitoring videos;
monitoring the videos to be monitored in the target city monitoring videos according to the sum of the correlation degrees between the target city monitoring videos and the sample city monitoring videos operating the videos to be monitored in the plurality of sample city monitoring videos, the sum of the proportions of the city monitoring videos operating the videos to be monitored in the monitoring area images to which the target city monitoring videos belong, and the viewing probability of the target city monitoring videos to the videos to be monitored.
In an alternative embodiment, each of the plurality of city monitoring videos belongs to N monitoring area images of the plurality of monitoring area images, where N is a positive integer greater than or equal to 1; the determining the proportion of each city monitoring video in the plurality of city monitoring videos in the monitoring area image comprises: acquiring the image repeated occurrence frequency of each city monitoring video in each monitoring area image in the N monitoring area images; and determining the occupation ratio of each city monitoring video in each monitoring area image in the N monitoring area images according to the image repeated occurrence frequency of each city monitoring video in each monitoring area image in the N monitoring area images.
In an alternative embodiment, the obtaining of the viewing probability of the target city monitoring video for the video to be monitored includes: acquiring historical viewing records of the target city monitoring video on videos of different time nodes; and determining the viewing probability of the target city monitoring video to the video to be monitored according to the historical viewing records of the target city monitoring video to the videos of different time nodes.
In an alternative embodiment, the monitoring the video to be monitored in the target city monitoring video according to a sum of the correlation degrees between the target city monitoring video and the sample city monitoring video operating the video to be monitored in the plurality of sample city monitoring videos, a sum of proportions of the city monitoring video operating the video to be monitored in the monitoring area image to which the target city monitoring video belongs, and a viewing probability of the target city monitoring video to the video to be monitored, includes: calculating the sum of the correlation degrees between the target city monitoring video and the sample city monitoring video operating the video to be monitored in the plurality of sample city monitoring videos, the sum of the proportions of the city monitoring video operating the video to be monitored in the monitoring area image to which the target city monitoring video belongs, and the weighted average value of the viewing probability of the target city monitoring video to the video to be monitored; and taking the weighted average value as the comprehensive viewing probability of the videos to be monitored, and monitoring the videos to be monitored in the target city monitoring videos according to the comprehensive viewing probability.
In an alternative embodiment, the monitoring, according to the comprehensive viewing probability, of the video to be monitored in the target city monitoring video includes: monitoring the videos to be monitored in the target city monitoring videos whose comprehensive viewing probabilities rank in the top K positions, where K is a positive integer greater than or equal to 1.
In an alternative embodiment, the method further comprises: and monitoring the video to be monitored in the target city monitoring video, and judging whether the target monitoring block corresponding to the monitored video to be monitored has traffic safety risk.
The invention also provides a smart city video monitoring device, which comprises:
the system comprises a relevancy determining module, wherein the relevancy determining module is configured to determine the relevancy among a plurality of city monitoring videos according to key video information among the city monitoring videos;
the regional image dividing module is used for dividing the urban monitoring videos into a plurality of monitoring region images according to the association degrees among the urban monitoring videos and determining the occupation ratio of each urban monitoring video in the urban monitoring videos in the monitoring region images;
an occupation ratio sum calculation module, configured to calculate, when a video monitoring instruction sent by the monitoring device corresponding to a target city monitoring video among the plurality of city monitoring videos is detected, the sum of the relevance degrees between the target city monitoring video and the sample city monitoring videos operating a video to be monitored among the plurality of sample city monitoring videos, and the sum of the occupation ratios, in the monitoring area image to which the target city monitoring video belongs, of the city monitoring videos operating the video to be monitored, including: if a plurality of city monitoring videos in the monitoring area image to which the target city monitoring video belongs are operating the video to be monitored, acquiring the occupation ratios of those city monitoring videos in that monitoring area image and calculating the sum of the occupation ratios; and obtaining the viewing probability of the target city monitoring video for the video to be monitored, where a sample city monitoring video is a city monitoring video that has a direct connection relationship with the target city monitoring video among the plurality of city monitoring videos;
and the video monitoring module is used for monitoring the video to be monitored in the target city monitoring video according to the sum of the correlation degrees between the target city monitoring video and the sample city monitoring video operating the video to be monitored in the plurality of sample city monitoring videos, the sum of the occupation ratios of the city monitoring video operating the video to be monitored in the monitoring area image to which the target city monitoring video belongs and the viewing probability of the target city monitoring video to the video to be monitored.
In an alternative embodiment, the apparatus further comprises: and the monitoring block safety judgment module is used for monitoring the video to be monitored in the target city monitoring video and judging whether the traffic safety risk exists in the target monitoring block corresponding to the monitored video to be monitored.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects.
The present disclosure provides a smart city video monitoring method and device. First, the relevancy among a plurality of city monitoring videos is determined according to key video information among them. Second, the plurality of city monitoring videos are divided into a plurality of monitoring area images, and the occupation ratio of each city monitoring video in the monitoring area images is determined. The sum of the relevancy degrees and the sum of the occupation ratios between the target city monitoring video and the sample city monitoring videos operating the video to be monitored are then calculated, and the viewing probability of the video to be monitored is determined. On this basis, the video to be monitored in the target city monitoring video is monitored. As a result, staff do not need to watch the monitoring video in real time, and accidents in the area monitored by a city monitoring video are not missed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart of a smart city video monitoring method according to an embodiment of the present invention.
Fig. 2 is a block diagram of a smart city video monitoring apparatus according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a hardware structure of a video monitoring terminal according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Referring to fig. 1, a flowchart of a smart city video monitoring method is provided; in specific implementation, the method executes the following steps S11 to S14.
Step S11, determining the association degree of the multiple city monitoring videos according to the key video information among the multiple city monitoring videos.
Step S12, according to the relevance between the multiple city monitoring videos, dividing the multiple city monitoring videos into multiple monitoring area images, and determining the proportion of each city monitoring video in the multiple city monitoring videos in the monitoring area images.
Step S13, when detecting a video monitoring instruction sent by a monitoring device corresponding to a target city monitoring video in the multiple city monitoring videos, calculating a sum of relevance degrees between the target city monitoring video and a sample city monitoring video operating a video to be monitored in the multiple sample city monitoring videos, and a sum of proportions of the city monitoring video operating the video to be monitored in a monitoring area image to which the target city monitoring video belongs in the monitoring area image to which the target city monitoring video belongs, including: if a plurality of city monitoring videos operate the video to be monitored in the monitoring area image to which the target city monitoring video belongs, acquiring the occupation ratio of the plurality of city monitoring videos in the monitoring area image to which the target city monitoring video belongs, and calculating the sum of the occupation ratios of the plurality of city monitoring videos in the monitoring area image; and obtaining the viewing probability of the target city monitoring video to the video to be monitored, wherein the sample city monitoring video is a city monitoring video which is in a direct connection relation with the target city monitoring video in the plurality of city monitoring videos.
Step S14, monitoring the video to be monitored in the target city monitoring video according to the sum of the correlation degrees between the target city monitoring video and the sample city monitoring video operating the video to be monitored in the plurality of sample city monitoring videos, the sum of the proportions of the city monitoring video operating the video to be monitored in the monitoring area image to which the target city monitoring video belongs, and the viewing probability of the target city monitoring video to the video to be monitored.
Performing the method described in steps S11 to S14 above can achieve the following advantageous effects. The association degree among a plurality of city monitoring videos is first determined according to key video information among them. The city monitoring videos are then divided into a plurality of monitoring area images, and the proportion of each city monitoring video in the monitoring area images is determined. The sum of the association degrees and the sum of the proportions between the target city monitoring video and the sample city monitoring videos operating the video to be monitored are further calculated, the viewing probability of the video to be monitored is determined, and on this basis the video to be monitored in the target city monitoring video is monitored. As a result, staff do not need to watch the video to be monitored in real time, and accidents in the area monitored by a city monitoring video are not missed.
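The patent gives no formulas for these quantities, but the flow of steps S11 to S14 can be sketched roughly as follows. The function name, the dictionary representation of association degrees and proportions, and the simple additive combination are all illustrative assumptions (a weighted-average variant is described later in this disclosure):

```python
def monitor_target_video(target_id, sample_ids, relevance, ratio, view_prob,
                         operates_video):
    """Sketch of steps S11-S14 under assumed data shapes.

    relevance[(a, b)] -- association degree between videos a and b (step S11)
    ratio[v]          -- proportion of video v in its monitoring area image (S12)
    view_prob         -- viewing probability of the video to be monitored (S13)
    operates_video    -- set of sample videos operating the video to be monitored
    """
    # Step S13: sum of association degrees with the sample videos
    # that are operating the video to be monitored.
    relevance_sum = sum(relevance[(target_id, s)]
                        for s in sample_ids if s in operates_video)
    # Step S13: sum of those videos' proportions in the target's area image.
    ratio_sum = sum(ratio[s] for s in sample_ids if s in operates_video)
    # Step S14: monitor according to the three quantities
    # (a plain sum here; the weighting scheme is not fixed by the patent).
    return relevance_sum + ratio_sum + view_prob

score = monitor_target_video(
    "cam_T", ["cam_A", "cam_B"],
    relevance={("cam_T", "cam_A"): 0.6, ("cam_T", "cam_B"): 0.3},
    ratio={"cam_A": 0.4, "cam_B": 0.2},
    view_prob=0.5,
    operates_video={"cam_A", "cam_B"},
)
# score = 0.6 + 0.3 + 0.4 + 0.2 + 0.5 = 2.0
```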
Further, each of the plurality of city monitoring videos belongs to N monitoring area images of the plurality of monitoring area images, where N is a positive integer greater than or equal to 1. In a specific implementation, in order to avoid determining the proportion of each city monitoring video in each monitoring area image of the N monitoring area images by mistake, the determining the proportion of each city monitoring video in the monitoring area images in the plurality of city monitoring videos described in step S12 includes: acquiring the image repeated occurrence frequency of each city monitoring video in each monitoring area image in the N monitoring area images; and determining the occupation ratio of each city monitoring video in each monitoring area image in the N monitoring area images according to the image repeated occurrence frequency of each city monitoring video in each monitoring area image in the N monitoring area images.
In this way, the proportion of each city monitoring video in each of the N monitoring area images is determined through the image repeated-occurrence frequency, which prevents those proportions from being determined erroneously.
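As a minimal sketch of this ratio determination (the patent does not give a normalization rule, so dividing each video's repeated-occurrence frequency by the image's total frequency is an assumption):

```python
def ratios_in_image(frequencies):
    """frequencies: map from city monitoring video id to the repeated-
    occurrence frequency of its images within one monitoring area image.
    Returns each video's occupation ratio in that image, normalized so
    the ratios sum to 1 (assumed normalization)."""
    total = sum(frequencies.values())
    if total == 0:
        return {v: 0.0 for v in frequencies}
    return {v: f / total for v, f in frequencies.items()}

ratios = ratios_in_image({"cam_A": 6, "cam_B": 3, "cam_C": 1})
# cam_A occupies 6/10 = 0.6 of the image, cam_B 0.3, cam_C 0.1
```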
In specific implementation, in order to accurately determine the viewing probability of the video to be monitored, the obtaining of the viewing probability of the target city monitoring video to the video to be monitored, which is described in step S13, includes: acquiring historical viewing records of the target city monitoring video on videos of different time nodes; and determining the viewing probability of the target city monitoring video to the video to be monitored according to the historical viewing records of the target city monitoring video to the videos of different time nodes.
By executing the above content, the viewing probability of the video to be monitored can be judged accurately on the basis of the historical viewing records of the videos at different time nodes.
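One simple way to turn the historical viewing records into a viewing probability is the empirical share of past views; this estimator is an assumption, since the patent does not fix one:

```python
from collections import Counter

def viewing_probability(history, video_id):
    """history: ids of the videos viewed by the target city monitoring
    video at past time nodes. The viewing probability of `video_id` is
    taken as its empirical share of all past views (assumed estimator)."""
    if not history:
        return 0.0
    return Counter(history)[video_id] / len(history)

p = viewing_probability(["v1", "v2", "v1", "v3", "v1"], "v1")  # 3 of 5 views
```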
On this basis, in order to monitor the video to be monitored comprehensively, the monitoring described in step S14 may specifically include the following: calculating the weighted average of the sum of the association degrees between the target city monitoring video and the sample city monitoring videos operating the video to be monitored among the plurality of sample city monitoring videos, the sum of the proportions of the city monitoring videos operating the video to be monitored in the monitoring area image to which the target city monitoring video belongs, and the viewing probability of the target city monitoring video for the video to be monitored; and taking the weighted average as the comprehensive viewing probability of the video to be monitored and monitoring the video to be monitored in the target city monitoring video according to the comprehensive viewing probability.
Therefore, on the basis of calculating the weighted average of the sum of the association degrees, the sum of the occupation ratios, and the viewing probability of the video to be monitored, the weighted average is taken as the comprehensive viewing probability, and the video to be monitored in the target city monitoring video is monitored accordingly, so the video to be monitored can be monitored comprehensively without staff watching it in real time.
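A minimal sketch of this weighted-average combination, with illustrative weights (the patent does not specify the weight values):

```python
def comprehensive_viewing_probability(relevance_sum, ratio_sum, view_prob,
                                      weights=(0.4, 0.3, 0.3)):
    """Weighted average of the association-degree sum, the occupation-
    ratio sum, and the viewing probability. The weight values are
    illustrative assumptions only."""
    w1, w2, w3 = weights
    return (w1 * relevance_sum + w2 * ratio_sum + w3 * view_prob) / (w1 + w2 + w3)

cvp = comprehensive_viewing_probability(0.9, 0.6, 0.5)
# 0.4*0.9 + 0.3*0.6 + 0.3*0.5 = 0.69
```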
It can be understood that the monitoring of the video to be monitored in the target city monitoring video according to the comprehensive viewing probability includes: monitoring the videos to be monitored in the target city monitoring videos whose comprehensive viewing probabilities rank in the top K positions, where K is a positive integer greater than or equal to 1.
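The top-K selection can be sketched as follows; the dictionary representation of the probabilities is an assumption:

```python
def top_k_to_monitor(comprehensive_probs, k):
    """comprehensive_probs: map from video id to comprehensive viewing
    probability. Returns the ids ranked in the top K positions."""
    ranked = sorted(comprehensive_probs, key=comprehensive_probs.get, reverse=True)
    return ranked[:k]

selected = top_k_to_monitor({"v1": 0.2, "v2": 0.9, "v3": 0.7}, k=2)
# the two highest-probability videos, v2 and v3, are monitored
```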
On the basis of the above, the present invention may further include step S15: and monitoring the video to be monitored in the target city monitoring video, and judging whether the target monitoring block corresponding to the monitored video to be monitored has traffic safety risk.
It can be understood that the step S15 of monitoring the video to be monitored in the target city monitoring video and determining whether a traffic safety risk exists in a target monitoring block corresponding to the monitored video to be monitored may specifically include the following.
Step S151, acquiring first block traffic road information and second block traffic road information for the target monitoring block.
For example, the traffic congestion weight of the second block traffic road information is smaller than the traffic congestion weight of the first block traffic road information.
Step S152, determining target traffic flow information of the target monitoring block according to the traffic time interval sequence of the second block traffic road information, and acquiring real-time monitoring information of the target monitoring block from the first block traffic road information according to the target traffic flow information; and determining the difference value between the target monitoring information identification degree of the real-time monitoring information and each candidate monitoring information identification degree in a preset information identification degree queue.
For example, the preset information identification degree queue includes a plurality of candidate monitoring information identification degrees, each candidate monitoring information identification degree is correspondingly provided with a traffic safety tag, and the traffic safety tags indicate that traffic safety risks exist or do not exist in the target monitoring block.
Step S153, selecting m candidate monitoring information identification degrees from the preset information identification degree queue based on the difference value between the target monitoring information identification degree and each candidate monitoring information identification degree; and judging whether a traffic safety risk exists in the target monitoring block based on the traffic safety tags of the m candidate monitoring information identification degrees.
Here, the traffic safety tags are used to determine the safety status of the target monitoring block, and m is a positive integer greater than or equal to 1.
It can be understood that, by executing the above steps S151 to S153, first obtaining first block traffic road information and second block traffic road information, then determining target traffic flow information of a target monitoring block according to a traffic time period sequence of the second block traffic road information, further obtaining real-time monitoring information of the target monitoring block from the first block traffic road information, then determining a difference value between a target monitoring information identification degree of the real-time monitoring information and each candidate monitoring information identification degree in a preset information identification degree queue, and finally determining whether there is a traffic safety risk in the target monitoring block based on m traffic safety tags of the candidate monitoring information identification degrees selected from the preset information identification degree queue.
In this way, block traffic road information with different traffic congestion weights can be analyzed, so that the traffic time period sequence and the real-time monitoring information are determined relatively independently from different block traffic road information. This keeps the mutual influence deviation between the traffic time period sequence and the real-time monitoring information from becoming too large, improves the reliability of the real-time monitoring information, and thereby ensures the accuracy of the difference values between the target monitoring information identification degree and each candidate monitoring information identification degree in the preset information identification degree queue. Consequently, when the candidate monitoring information identification degrees are selected, those whose traffic safety tags are relevant to the target monitoring block can be chosen as far as possible, so that when the traffic safety risk of the target monitoring block is judged on the basis of the traffic safety tags, the different safety features identified for the target monitoring block are considered comprehensively. This improves the reliability of traffic safety risk identification, safeguards the traffic safety of the target monitoring block, and avoids misjudging the block as safe because of inaccurate identification.
In some examples, the selecting of m candidate monitoring information identification degrees from the preset information identification degree queue based on the difference value between the target monitoring information identification degree and each candidate monitoring information identification degree, described in step S153, may include: selecting, from the preset information identification degree queue, the m candidate monitoring information identification degrees whose difference values from the target monitoring information identification degree are the largest.
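A minimal sketch of this largest-difference selection, assuming the identification degrees are plain numbers:

```python
def select_m_candidates(target_degree, candidate_degrees, m):
    """Select the m candidate identification degrees whose difference
    values from the target identification degree are the largest, as
    described for step S153."""
    return sorted(candidate_degrees,
                  key=lambda c: abs(c - target_degree),
                  reverse=True)[:m]

picked = select_m_candidates(0.5, [0.48, 0.9, 0.1, 0.52], m=2)
# the two farthest degrees, 0.9 and 0.1, are selected
```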
In practical application, in order to comprehensively consider the different safety features identified for the target monitoring block and so improve the reliability of traffic safety risk identification, the safety feature similarity rates corresponding to different monitoring time nodes need to be considered, so that the instantaneous variability of the safety features is taken into account. To achieve this, the judging in step S153 of whether a traffic safety risk exists in the target monitoring block, based on the traffic safety tags of the m candidate monitoring information identification degrees, may include the following steps S1531 to S1536.
Step S1531 determines a current state information set used for calculating the comprehensive information identification degrees corresponding to the m candidate monitoring information identification degrees based on the tag similarity between every two adjacent traffic safety tags in the traffic safety tags of the m candidate monitoring information identification degrees.
Step S1532, based on the current state information set, obtaining a to-be-monitored block state information set corresponding to each block monitoring time node in a first set monitoring block time period of the target monitoring block, where the first set monitoring block time period includes at least two block monitoring time nodes, and the to-be-monitored block state information set corresponding to each block monitoring time node includes monitoring safety parameters of the monitoring block collected or calculated by a safety state verification unit in the target monitoring block in the corresponding block monitoring time node.
Step S1533, determining a security feature similarity rate between the to-be-monitored block status information sets corresponding to each block monitoring time node in the first set monitoring block time period.
Step S1534, determining a block picture record set of the target monitoring block in the first set monitoring block time period according to the security feature similarity between the to-be-monitored block state information sets corresponding to each block monitoring time node in the first set monitoring block time period.
Step S1535, determining the security level index of the target monitoring block in the first set monitoring block time period according to the block picture record set.
Step S1536, calculating the comprehensive information identification degree corresponding to the m candidate monitoring information identification degrees according to the safety level index; judging whether the comprehensive information identification degree is greater than or equal to the set information identification degree; when the comprehensive information identification degree is greater than or equal to the set information identification degree, determining that the target monitoring block has no traffic safety risk; and when the comprehensive information identification degree is smaller than the set information identification degree, determining that a traffic safety risk exists in the target monitoring block, and locking the safety accident event information of the target monitoring block.
Thus, by applying the contents described in the above steps S1531 to S1536, the block picture record set of the target monitoring block in the first set monitoring block time period is determined according to the security feature similarity between the to-be-monitored block status information sets corresponding to the respective block monitoring time nodes, and the security level index of the target monitoring block in that time period is determined from the block picture record set, so that the comprehensive information identification degree is calculated on the basis of the security level index. In this way, the security feature similarities corresponding to different monitoring time nodes, and hence the instantaneous variability of the security features, are taken into account, and the different security features monitored in the target monitoring block are considered comprehensively. It can be understood that monitoring whether a traffic safety risk exists in the target monitoring block through the comprehensive information identification degree improves the reliability of traffic safety risk identification.
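The decision logic of step S1536 can be sketched as follows. The patent does not give a formula for combining the m candidate monitoring information identification degrees with the security level index, so a level-index-weighted mean is an assumption, and all function and parameter names are illustrative:

```python
def assess_traffic_risk(level_index, candidate_degrees, set_degree):
    """Sketch of step S1536: combine the candidate monitoring information
    identification degrees under the security level index and compare the
    result against the set information identification degree.

    The weighting scheme (level-index-weighted mean) is an assumption.
    Returns True when no traffic safety risk is determined.
    """
    # Comprehensive information identification degree (assumed formula).
    combined = level_index * sum(candidate_degrees) / len(candidate_degrees)
    # At or above the set degree -> no traffic safety risk in the block.
    return combined >= set_degree

print(assess_traffic_risk(0.9, [0.8, 0.7, 0.9], 0.5))
print(assess_traffic_risk(0.2, [0.3, 0.4], 0.5))
```

In the first call the combined degree (0.72) reaches the set degree, so no risk is flagged; in the second it falls short, so the block would be treated as carrying a traffic safety risk and its safety accident event information locked.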
Further, the obtaining of the to-be-monitored neighborhood state information set corresponding to each neighborhood monitoring time node of the target monitoring neighborhood within the first set monitoring neighborhood time period described in step S1532 may be implemented by the following contents described in steps S15321 to S15324.
Step S15321 of acquiring monitoring security parameters of the monitored neighborhood collected by the security status verification unit in the target monitored neighborhood within the set time interval after the first neighborhood monitoring time node starts, and determining a set of to-be-monitored neighborhood status information corresponding to the first neighborhood monitoring time node according to the monitoring security parameters of the monitored neighborhood collected by the security status verification unit in the target monitored neighborhood within the set time interval after the first neighborhood monitoring time node starts, where the first neighborhood monitoring time node is any one of the neighborhood monitoring time nodes within the first set monitored neighborhood time period.
Step S15322, when the security status verification unit in the target monitored neighborhood does not acquire the monitoring security parameter of the monitored neighborhood within a set time duration after the start of the second neighborhood monitoring time node, determining a set of to-be-monitored neighborhood status information corresponding to the second neighborhood monitoring time node according to the monitoring security parameter of the monitored neighborhood calculated by the security status verification unit in the target monitored neighborhood, where the second neighborhood monitoring time node is any one of the neighborhood monitoring time nodes other than the first neighborhood monitoring time node within the first set monitored neighborhood time period.
Step S15323, when the security status verification unit in the target monitoring block does not collect the monitoring security parameters of the monitoring block within the set time interval after the start of the third block monitoring time node, and the to-be-monitored block status information sets corresponding to a continuous first set number of block monitoring time nodes before the third block monitoring time node were all determined according to the monitoring security parameters calculated by the security status verification unit, sending a monitoring block collection instruction to the security status verification unit, so that the security status verification unit collects the monitoring security parameters of the monitoring block in response to the monitoring block collection instruction, where the third block monitoring time node is any block monitoring time node other than the first block monitoring time node and the second block monitoring time node within the first set monitoring block time period.
Step S15324, acquiring the monitoring security parameters of the monitoring block acquired by the security status verifying unit in response to the monitoring block acquisition instruction, and determining a to-be-monitored block status information set corresponding to the third block monitoring time node according to the monitoring security parameters of the monitoring block acquired by the security status verifying unit in response to the monitoring block acquisition instruction.
It can be understood that by executing the steps S15321 to S15324, the to-be-monitored block status information sets corresponding to different block monitoring time nodes can be completely determined, so as to provide sufficient data basis for the subsequent calculation of the comprehensive information identification degree, and ensure the reliability of the subsequent calculation of the comprehensive information identification degree.
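The fallback logic of steps S15321 to S15324 can be sketched as follows. Tracking the number of consecutive calculated fallbacks with a counter is an assumed reading of the "continuous first set number" condition, and all names are illustrative:

```python
def state_info_for_node(collected, calculated, missed_streak,
                        streak_limit, trigger_collection):
    """Sketch of steps S15321-S15324 (names assumed).

    collected          -- parameters collected within the set interval, or None
    calculated         -- parameters calculated by the verification unit
    missed_streak      -- consecutive nodes served from calculated data so far
    streak_limit       -- the 'first set number' of allowed fallbacks
    trigger_collection -- callable standing in for the collection instruction
    Returns (status information set, updated missed_streak).
    """
    if collected is not None:
        return collected, 0                    # S15321: use collected data
    if missed_streak < streak_limit:
        return calculated, missed_streak + 1   # S15322: calculated fallback
    # S15323/S15324: too many consecutive fallbacks -> force fresh collection
    return trigger_collection(), 0
```

A usage example: with three prior calculated fallbacks and a limit of three, the next node would trigger a collection instruction rather than falling back again.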
Further, the determining of the security feature similarity between the to-be-monitored block status information sets corresponding to the block monitoring time nodes in the first set block monitoring time period described in step S1533 may be implemented by the following two implementation manners.
In the first implementation mode, a dynamic monitoring security parameter set is determined from a to-be-monitored block state information set corresponding to each block monitoring time node in a first set monitoring block time period; and respectively determining each to-be-monitored block state information set except the dynamic monitoring safety parameter set in the to-be-monitored block state information set corresponding to each block monitoring time node in the first set monitoring block time period, and the safety feature similarity between the to-be-monitored block state information set and the dynamic monitoring safety parameter set.
In a second implementation manner, security feature similarity rates between to-be-monitored block status information sets corresponding to every two adjacent block monitoring time nodes in the first set monitoring block time period are respectively determined.
It will be appreciated that either of the above implementation manners for determining the security feature similarity rate may be used, allowing the security feature similarity rate to be calculated flexibly and quickly.
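Both implementation manners of step S1533 can be sketched as follows. Jaccard similarity over parameter sets is an assumed stand-in for the patent's unspecified security feature similarity rate, and all names are illustrative:

```python
def similarity(a, b):
    """Jaccard similarity, used here as a stand-in for the patent's
    'security feature similarity rate' (the actual metric is unspecified)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def similarities_to_reference(node_sets, reference):
    # First implementation: every other set vs. a chosen dynamic
    # monitoring security parameter set.
    return [similarity(s, reference) for s in node_sets if s is not reference]

def pairwise_similarities(node_sets):
    # Second implementation: every two adjacent block monitoring time nodes.
    return [similarity(node_sets[i], node_sets[i + 1])
            for i in range(len(node_sets) - 1)]
```

Either list of similarity rates then feeds the block picture record set of step S1534.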
On the basis of the above steps S1531 to S1536, the to-be-monitored block status information set corresponding to each block monitoring time node in the first set monitoring block time period includes an updatable status data set and a non-updatable status data set, and the block picture record set includes a first block picture record set determined according to the security feature similarity rate corresponding to the updatable status data set of each block monitoring time node specified in the first set monitoring block time period, and a second block picture record set determined according to the security feature similarity rate corresponding to the non-updatable status data set of each block monitoring time node specified in the first set monitoring block time period. Based on this, the determining the security level index of the target monitoring block within the first set monitoring block time period according to the block picture record set in step S1535 includes step S15350: and determining the safety level index of the target monitoring block in the first set monitoring block time period according to the first block picture record set and the second block picture record set.
Further, the determining the security level index of the target monitoring block within the first set monitoring block time period according to the first block image record set and the second block image record set in step S15350 may further include the following steps S15351 to S15353.
Step S15351, when the street picture change coefficient corresponding to the first street picture record set is not smaller than a preset first change coefficient threshold and the street picture change coefficient corresponding to the second street picture record set is not smaller than a preset second change coefficient threshold, determining that the security level index of the target monitoring street within the first set monitoring street time period is the first target level index.
Step S15352, when the street picture change coefficient corresponding to the first street picture record set is not smaller than the first change coefficient threshold and the street picture change coefficient corresponding to the second street picture record set is smaller than the second change coefficient threshold, determining that the security level index of the target monitored street within the first set monitored street time period is the second target level index.
Step S15353, when the street picture change coefficient corresponding to the first street picture record set is smaller than the first change coefficient threshold and the street picture change coefficient corresponding to the second street picture record set is smaller than the second change coefficient threshold, determining that the security level index of the target monitored street within the first set monitored street time period is a third target level index.
Therefore, different target level indexes can be determined according to different street picture change coefficients, thereby ensuring that the determined target level index matches the picture records actually monitored in the target monitoring street.
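The threshold decision of steps S15351 to S15353 can be sketched as follows. The numeric level indexes are placeholders, and note that the combination in which only the second coefficient reaches its threshold is not covered by the described steps:

```python
def security_level_index(coef1, coef2, thr1, thr2):
    """Sketch of steps S15351-S15353 (names assumed).

    coef1/coef2 -- street picture change coefficients of the first and
                   second street picture record sets
    thr1/thr2   -- the preset first and second change coefficient thresholds
    """
    if coef1 >= thr1 and coef2 >= thr2:
        return 1   # S15351: first target level index
    if coef1 >= thr1 and coef2 < thr2:
        return 2   # S15352: second target level index
    if coef1 < thr1 and coef2 < thr2:
        return 3   # S15353: third target level index
    return None    # coef1 < thr1, coef2 >= thr2: not covered by the patent
```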
Further, the step S1534 determines, according to the security feature similarity between the to-be-monitored neighborhood state information sets corresponding to the respective neighborhood monitoring time nodes in the first set monitoring neighborhood time period, a neighborhood picture record set of the target monitoring neighborhood in the first set monitoring neighborhood time period, including the contents described in the following steps S15341 and S15342.
Step S15341 determines, from the to-be-monitored neighborhood state information sets corresponding to the respective neighborhood monitoring time nodes in the first set monitored neighborhood time period, at least one target updatable state data set whose monitored neighborhood confidence weight is higher than the first set confidence weight threshold, and at least one target non-updatable state data set whose monitored neighborhood confidence weight is higher than the second set confidence weight threshold.
Step S15342 determines the first street view record set according to the security feature similarity corresponding to the at least one target updatable status data set, and determines the second street view record set according to the security feature similarity corresponding to the at least one target non-updatable status data set.
In addition, the determining, according to the security feature similarity between the to-be-monitored block state information sets corresponding to each block monitoring time node in the first set monitoring block time period and described in step S1534, a block picture record set of the target monitoring block in the first set monitoring block time period may also be implemented by the following implementation manners: determining relevance parameters of the safety feature similarity rates according to the quantity of the to-be-monitored block state information contained in the to-be-monitored block state information set corresponding to each block monitoring time node in the first set monitoring block time period; and determining a block picture record set of the target monitoring block in the first set monitoring block time period according to the safety feature similarity between the block state information sets to be monitored corresponding to the block monitoring time nodes in the first set monitoring block time period and the relevance parameters of the safety feature similarity.
It can be understood that the two further implementation manners of step S1534 are implemented according to the confidence weight of the monitored neighborhood and the relevance parameter, respectively, so that whichever implementation manner is easier to apply can be flexibly selected according to the target monitored neighborhood.
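The confidence-weight filtering of step S15341 can be sketched as follows. Representing each state data set as a record with an updatable flag, and passing the monitored-neighborhood confidence weight in as a function, are assumptions, and all names are illustrative:

```python
def select_target_sets(datasets, weight, thr_updatable, thr_non_updatable):
    """Sketch of step S15341 (names assumed): keep updatable state data
    sets whose confidence weight exceeds the first set threshold, and
    non-updatable ones whose weight exceeds the second set threshold."""
    updatable = [d for d in datasets
                 if d["updatable"] and weight(d) > thr_updatable]
    non_updatable = [d for d in datasets
                     if not d["updatable"] and weight(d) > thr_non_updatable]
    return updatable, non_updatable
```

The two returned selections then yield, per step S15342, the first and second street picture record sets from their respective security feature similarities.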
It is to be understood that the determination of the difference between the target monitoring information identification degree of the real-time monitoring information and each candidate monitoring information identification degree in the preset information identification degree queue described in step S152 may be implemented by any one of the following three embodiments.
In the first embodiment, the difference between the target monitoring information identification degree and the candidate monitoring information identification degree is determined based on the monitoring timing sequence identification coefficient of the target monitoring information identification degree and the candidate monitoring information identification degree.
In a second embodiment, the difference between the target monitoring information identification degree and the candidate monitoring information identification degree is determined based on the monitoring event identification coefficient between the target monitoring information identification degree and the candidate monitoring information identification degree.
In a third embodiment, a difference between the target monitoring information identification degree and the candidate monitoring information identification degree is determined based on a monitoring risk identification coefficient between the target monitoring information identification degree and the candidate monitoring information identification degree.
In one possible embodiment, in order to ensure that the target traffic flow information of the target monitoring block can cover the target traffic flow information identified by the target monitoring block, the determining of the target traffic flow information of the target monitoring block according to the traffic time interval sequence of the second block traffic road information described in step S152 may further include the following implementation of steps S1521-S1526.
Step S1521, multiple traffic restriction information combinations corresponding to the traffic time interval sequence of the second block traffic road information and a traffic mode information set corresponding to each traffic restriction information combination are obtained, and each traffic restriction information combination comprises multiple different traffic information labels.
Step S1522, determining a first traffic restriction identifier sequence corresponding to the traffic restriction information combination in the traffic manner information set corresponding to the traffic restriction information combination.
Step S1523, the first traffic restriction mark sequence corresponding to the traffic restriction information combination is adopted to carry out speed restriction mark information correction, and the speed restriction mark information correction result of each traffic information label in the traffic restriction information combination is obtained.
Step S1524, based on the speed limit sign information correction result of each traffic information label in the multiple traffic restriction information combinations, performing traffic rate update on the first traffic restriction identification sequence corresponding to the traffic restriction information combination to obtain a first updated traffic rate corresponding to the traffic restriction information combination.
Step S1525, adding the first updated traffic rate corresponding to the traffic restriction information combination to the traffic mode information set corresponding to the traffic restriction information combination.
Step S1526, returning to the step of determining a first traffic restriction identification sequence corresponding to the traffic restriction information combination in the traffic mode information set corresponding to the traffic restriction information combination, until the safety traffic coefficient corresponding to the multiple traffic restriction information combinations reaches the set coefficient; and when the safety traffic coefficient corresponding to the multiple traffic restriction information combinations reaches the set coefficient, determining the target traffic flow information of the target monitoring block based on the safety traffic coefficient and the multiple traffic restriction information combinations.
In this way, by applying the steps S1521 to S1526, the first traffic restriction identifier sequence can be determined iteratively, so as to ensure that the safe traffic coefficient corresponding to the combination of the multiple types of traffic restriction information reaches the set coefficient, and thus, the target traffic flow information of the target monitoring block can be determined based on the safe traffic coefficient and the combination of the multiple types of traffic restriction information. Since the safe traffic coefficient reaches the set coefficient, and the set coefficient is configured based on the target traffic flow information identified by the target monitoring block, the method can ensure that the target traffic flow information of the target monitoring block can cover the target traffic flow information identified by the target monitoring block.
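The iteration of steps S1521 to S1526 can be sketched as follows. The safety traffic coefficient and the per-combination rate update are passed in as callables, since the patent does not define them, and all names are illustrative:

```python
def iterate_traffic_restrictions(combinations, safety_coef, set_coef,
                                 update_rate):
    """Sketch of steps S1521-S1526 (names assumed): repeatedly derive an
    updated traffic rate for each traffic restriction information
    combination (S1522-S1524), add it to the combination's traffic mode
    information set (S1525), and stop once the safety traffic coefficient
    reaches the set coefficient (S1526)."""
    while safety_coef(combinations) < set_coef:
        for combo in combinations:
            combo["rates"].append(update_rate(combo))  # first updated rate
    return combinations  # basis for the target traffic flow information
```

With a toy coefficient that simply counts accumulated rates, two iterations suffice to reach a set coefficient of 4 for two combinations.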
Further, the determining, described in step S1522, of the first traffic restriction identification sequence corresponding to the traffic restriction information combination in the traffic mode information set corresponding to the traffic restriction information combination may be implemented, by way of example, through the following steps S15221 to S15224.
Step S15221, determining a second traffic restriction identifier sequence and a first static traffic rate corresponding to the traffic restriction information combination, and a first static traffic rate corresponding to the target traffic restriction information combination.
Step S15222, obtaining a first comparison result of the first static traffic rate corresponding to the traffic restriction information combination by performing a bit-by-bit comparison on the first static traffic rate corresponding to the traffic restriction information combination and the first static traffic rate corresponding to the target traffic restriction information combination, where the target traffic restriction information combination is all traffic restriction information combinations including the traffic restriction information combination in the multiple traffic restriction information combinations.
Step S15223, by comparing the first static traffic rate corresponding to the traffic restriction information combination and the second traffic restriction identifier sequence corresponding to the traffic restriction information combination bit by bit, a second comparison result of the first static traffic rate of the traffic restriction information combination is obtained.
Step S15224, based on the second comparison result and the first comparison result, determining the second traffic restriction identifier sequence corresponding to the traffic restriction information combination or the first static traffic rate corresponding to the traffic restriction information combination as the first traffic restriction identifier sequence corresponding to the traffic restriction information combination.
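Steps S15221 to S15224 can be sketched as follows. The patent does not define the bit-by-bit comparison or the selection rule, so an agreeing-bit count over 8-bit values and a keep-the-better-agreeing-candidate rule are assumptions, and all names are illustrative:

```python
def bit_agreement(a, b, width=8):
    """Number of agreeing bits in a width-bit bit-by-bit comparison
    (assumed metric; the patent only says 'bit-by-bit comparison')."""
    return width - bin((a ^ b) & ((1 << width) - 1)).count("1")

def first_restriction_sequence(combo_rate, target_rate, second_seq):
    """Sketch of steps S15222-S15224 (names assumed): compare the
    combination's first static traffic rate against the target
    combination's rate (first comparison result) and against the second
    restriction identifier sequence (second comparison result), then keep
    the better-agreeing candidate as the first restriction identifier
    sequence."""
    first_cmp = bit_agreement(combo_rate, target_rate)
    second_cmp = bit_agreement(combo_rate, second_seq)
    return second_seq if second_cmp >= first_cmp else combo_rate
```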
Further, the determining, in the above step S15221, of the first static traffic rate corresponding to the target traffic restriction information combination includes the following contents: step S152211, acquiring a restriction schedule set of the target traffic restriction information combination, and determining a traffic restriction operation record corresponding to the target traffic restriction information combination; step S152212, determining, according to the restriction schedule set of the target traffic restriction information combination, the first static traffic rate corresponding to the target traffic restriction information combination in the traffic restriction operation record corresponding to the target traffic restriction information combination.
In a further embodiment, the determining, described in step S152211, of the traffic restriction operation record corresponding to the target traffic restriction information combination can be implemented by the following steps a to d.
Step a, determining a second comparison result and a first comparison result of each traffic mode information set in the traffic mode information sets corresponding to the target traffic restriction information combination.
And b, calculating the queue continuity weight of each correction safety factor queue in the traffic mode information set corresponding to the target traffic limitation information combination based on the second comparison result and the first comparison result.
Step c, sorting the correction safety factor queues in the traffic mode information set corresponding to the target traffic restriction information combination according to their queue continuity weights, determining the first-ranked correction safety factor queue as the primary correction safety factor queue, and merging the correction safety factor queues ranked within a set value interval into a secondary correction safety factor queue; the interval difference between the set value interval and the ranking number of the primary correction safety factor queue is determined according to the average value of the queue continuity weights of the correction safety factor queues.
Step d, determining the traffic restriction operation record corresponding to the target traffic restriction information combination according to the secondary correction safety factor queue.
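Steps a to d can be sketched as follows. The queue continuity weight function and the width of the set value interval are passed in as parameters, since the patent derives them from comparison results left unspecified here, and all names are illustrative:

```python
def restriction_operation_record(queues, continuity_weight, band_width):
    """Sketch of steps a-d (names assumed): rank the correction safety
    factor queues by continuity weight; the top-ranked queue is the
    primary queue, and the next `band_width` queues (the set value
    interval) are merged into the secondary queue, from which the traffic
    restriction operation record is derived."""
    ranked = sorted(queues, key=continuity_weight, reverse=True)
    primary = ranked[0]
    # Step c: merge the queues ranked within the set value interval.
    secondary = [x for q in ranked[1:1 + band_width] for x in q]
    return primary, secondary
```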
In an alternative embodiment, the step S152 of obtaining the real-time monitoring information of the target monitoring neighborhood from the first neighborhood traffic road information according to the target traffic flow information may further include the following steps (1) to (4).
(1) acquiring safety feature change data from the first block traffic road information according to the target traffic flow information;
(2) performing feature clustering on the safety feature change data to obtain a safety feature data set, where the feature evaluation of each piece of feature data in the safety feature data set is either a first feature evaluation or a second feature evaluation, and the feature data corresponding to all the first feature evaluations constitute the marked feature data of the safety feature data set;
(3) determining a real-time information sequence matched with the marked feature data from the first block traffic road information;
(4) determining the real-time monitoring information of the target monitoring block according to the real-time information sequence.
In step (1), the acquiring safety feature change data from the first block traffic road information according to the target traffic flow information includes: determining safety feature description information according to the feature variable division record of the second block traffic road information and the feature variable division record of the first block traffic road information; and acquiring safety feature change data from the first block traffic road information according to the safety feature description information and the target traffic flow information.
By the design, based on the content described in the steps (1) to (4), the real-time information sequence can be determined in real time based on the safety feature change data, so that the determined real-time monitoring information of the target monitoring block has better timeliness.
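The clustering and marking of steps (1) and (2) can be sketched as follows. The evaluation function assigning the first or second feature evaluation is an assumption, as is representing each cluster by its evaluation label, and all names are illustrative:

```python
def marked_feature_data(change_data, evaluate):
    """Sketch of step (2) (names assumed): cluster the safety feature
    change data by feature evaluation, then keep only the feature data
    carrying the first feature evaluation as the marked feature data."""
    clusters = {}
    for item in change_data:
        clusters.setdefault(evaluate(item), []).append(item)
    return clusters.get("first", [])
```

The marked feature data is then matched against the first block traffic road information to produce the real-time information sequence of step (3).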
In another alternative embodiment, the step S151 of obtaining the first block traffic road information and the second block traffic road information for the target monitoring block may include the following steps S1511 to S1514.
Step S1511, determining the current thread state information of the event monitoring thread corresponding to the target monitoring block, and determining a security state feature from the current thread state information.
Step S1512 determines whether the operable state in the current thread state information changes relative to the operable state in the previous thread state information of the current thread state information.
Step S1513, if yes, determining the security status feature determined from the current thread status information as the effective security status feature of the current thread status information; otherwise, fusing the safety state features determined from the current thread state information with the effective safety state features at the corresponding positions in the previous thread state information to obtain a fusion result, and determining the fusion result as the effective safety state features of the current thread state information.
Step S1514, obtaining the first block traffic road information and the second block traffic road information in different information extraction manners based on the effective security state feature of the current thread state information.
In this way, by applying the above steps S1511 to S1514, the validity of the security features between the acquired different block traffic road information can be ensured.
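The fusion of step S1513 can be sketched as follows. The element-wise average is an assumed fusion rule, since the patent does not specify one, and all names are illustrative:

```python
def effective_safety_feature(current, previous, state_changed):
    """Sketch of steps S1512-S1513 (names assumed): when the operable
    state changed relative to the previous thread state information, the
    current security state feature stands alone; otherwise it is fused
    with the previous effective feature at the corresponding position
    (element-wise average is an assumed fusion rule)."""
    if state_changed:
        return list(current)
    return [(c + p) / 2 for c, p in zip(current, previous)]
```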
Based on the same inventive concept, and referring to the block diagram of fig. 2, the invention further provides a smart city video monitoring apparatus 20, which may include the following functional modules:
the association degree determining module 21 is configured to determine association degrees between multiple city monitoring videos according to key video information between the multiple city monitoring videos;
the area image dividing module 22 is configured to divide the multiple city monitoring videos into multiple monitoring area images according to the association degrees among the multiple city monitoring videos, and determine a ratio of each city monitoring video in the multiple city monitoring videos in the monitoring area images;
a ratio sum calculating module 23, configured to calculate, when a video monitoring instruction sent by a monitoring device corresponding to a target city monitoring video in the multiple city monitoring videos is detected, a sum of relevance degrees between the target city monitoring video and a sample city monitoring video in which a video to be monitored is operated in the multiple sample city monitoring videos, and a sum of ratios of the city monitoring video in which the video to be monitored is operated in a monitoring area image to which the target city monitoring video belongs in the monitoring area image to which the target city monitoring video belongs, including: if a plurality of city monitoring videos operate the video to be monitored in the monitoring area image to which the target city monitoring video belongs, acquiring the occupation ratio of the plurality of city monitoring videos in the monitoring area image to which the target city monitoring video belongs, and calculating the sum of the occupation ratios of the plurality of city monitoring videos in the monitoring area image; obtaining the viewing probability of the target city monitoring video to the video to be monitored, wherein the sample city monitoring video is a city monitoring video which is in a direct connection relation with the target city monitoring video in the plurality of city monitoring videos;
the video monitoring module 24 is configured to monitor the video to be monitored in the target city monitoring video according to a sum of the correlation degrees between the target city monitoring video and the sample city monitoring video in which the video to be monitored is operated in the plurality of sample city monitoring videos, a sum of proportions of the city monitoring video in which the video to be monitored is operated in the monitoring area image to which the target city monitoring video belongs, and a viewing probability of the target city monitoring video to the video to be monitored.
Further, the apparatus further comprises: and the monitoring block safety judgment module 25 is configured to monitor a to-be-monitored video in the target city monitoring video, and judge whether a traffic safety risk exists in a target monitoring block corresponding to the to-be-monitored video.
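The weighted-average monitoring decision carried out by the video monitoring module 24 (and recited in claim 4) can be sketched as follows. Equal weights are an assumption, since the patent leaves the weighting unspecified, and all names are illustrative:

```python
def comprehensive_viewing_probability(assoc_sum, ratio_sum, view_prob,
                                      weights=(1 / 3, 1 / 3, 1 / 3)):
    """Sketch of module 24 / claim 4 (names assumed): weighted average of
    the sum of association degrees with the sample city monitoring videos,
    the sum of in-image ratios of the city monitoring videos operating the
    video to be monitored, and the target video's viewing probability.
    The resulting value is the comprehensive viewing probability used to
    monitor the video to be monitored."""
    w1, w2, w3 = weights
    return w1 * assoc_sum + w2 * ratio_sum + w3 * view_prob
```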
On this basis, and referring to fig. 3, a video monitoring terminal 110 is further provided, including a processor 111, and a memory 112 and a bus 113 connected to the processor 111, where the processor 111 and the memory 112 communicate with each other through the bus 113, and the processor 111 is configured to call program instructions in the memory 112 to perform the above-described method.
Further, a readable storage medium is provided, on which a program is stored, which when executed by a processor implements the method described above.
It should be understood that, for technical terms not expressly defined above, a person skilled in the art can unambiguously determine their meaning from the foregoing disclosure. For example, for certain values, coefficients, and weights, suitable ranges can be selected according to the actual situation, such as 0 to 1, 1 to 10, or 50 to 100, without limitation thereto; likewise, the preset, reference, predetermined, set, and target technical features can be unambiguously determined from the foregoing disclosure. For technical feature terms that are not explained, a person skilled in the art can clearly and completely implement the technical solution by reasonable and unambiguous derivation from the logical relations in the surrounding paragraphs. The process of so deriving and analyzing unexplained technical terms is based on the contents described in the present application, and therefore involves no inventive judgment of the overall scheme.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (8)

1. A smart city video monitoring method is characterized by comprising the following steps:
determining the association degree among a plurality of city monitoring videos according to key video information among the plurality of city monitoring videos;
dividing the plurality of city monitoring videos into a plurality of monitoring area images according to the association degree among the plurality of city monitoring videos, and determining the occupation ratio of each city monitoring video in the plurality of city monitoring videos in the monitoring area images;
when a video monitoring instruction sent by monitoring equipment corresponding to a target city monitoring video in the plurality of city monitoring videos is detected, calculating the sum of the correlation degrees between the target city monitoring video and a sample city monitoring video operating a video to be monitored in the plurality of sample city monitoring videos, and the sum of the proportions of the city monitoring video operating the video to be monitored in a monitoring area image to which the target city monitoring video belongs in the monitoring area image to which the target city monitoring video belongs, wherein the steps of: if a plurality of city monitoring videos operate the video to be monitored in the monitoring area image to which the target city monitoring video belongs, acquiring the occupation ratio of the plurality of city monitoring videos in the monitoring area image to which the target city monitoring video belongs, and calculating the sum of the occupation ratios of the plurality of city monitoring videos in the monitoring area image; obtaining the viewing probability of the target city monitoring video to the video to be monitored, wherein the sample city monitoring video is a city monitoring video which is in a direct connection relation with the target city monitoring video in the plurality of city monitoring videos;
monitoring the video to be monitored in the target city monitoring video according to the sum of the association degrees between the target city monitoring video and the sample city monitoring videos that operate the video to be monitored, the sum of the occupation ratios, in the monitoring area image to which the target city monitoring video belongs, of the city monitoring videos that operate the video to be monitored, and the viewing probability of the target city monitoring video for the video to be monitored.
2. The method of claim 1, wherein each of the plurality of city monitoring videos belongs to N monitoring area images of the plurality of monitoring area images, wherein N is a positive integer greater than or equal to 1; and the determining the occupation ratio of each city monitoring video in the monitoring area images comprises: acquiring the frequency with which each city monitoring video repeatedly appears in each of the N monitoring area images; and determining the occupation ratio of each city monitoring video in each of the N monitoring area images according to that repeated-appearance frequency.
3. The method of claim 1, wherein the obtaining the viewing probability of the target city monitoring video for the video to be monitored comprises: acquiring historical viewing records of the target city monitoring video for videos at different time nodes; and determining the viewing probability of the target city monitoring video for the video to be monitored according to those historical viewing records.
4. The method of claim 1, wherein the monitoring the video to be monitored in the target city monitoring video according to the sum of the association degrees, the sum of the occupation ratios, and the viewing probability comprises: calculating a weighted average of the sum of the association degrees between the target city monitoring video and the sample city monitoring videos that operate the video to be monitored, the sum of the occupation ratios, in the monitoring area image to which the target city monitoring video belongs, of the city monitoring videos that operate the video to be monitored, and the viewing probability of the target city monitoring video for the video to be monitored; and taking the weighted average as the comprehensive viewing probability of the video to be monitored, and monitoring the video to be monitored in the target city monitoring video according to the comprehensive viewing probability.
5. The method of claim 4, wherein the monitoring the video to be monitored in the target city monitoring video according to the comprehensive viewing probability comprises: monitoring the videos to be monitored in the target city monitoring videos whose comprehensive viewing probabilities rank in the top K positions, wherein K is a positive integer greater than or equal to 1.
6. The method of claim 1, further comprising: monitoring the video to be monitored in the target city monitoring video, and judging whether a traffic safety risk exists in the target monitoring block corresponding to the monitored video.
7. A smart city video monitoring device, the device comprising:
a relevancy determining module, configured to determine the association degrees among a plurality of city monitoring videos according to key video information among the plurality of city monitoring videos;
a regional image dividing module, configured to divide the plurality of city monitoring videos into a plurality of monitoring area images according to the association degrees among the plurality of city monitoring videos, and to determine, for each of the plurality of city monitoring videos, its occupation ratio in the monitoring area images;
an occupation ratio sum calculating module, configured to: when a video monitoring instruction sent by the monitoring device corresponding to a target city monitoring video among the plurality of city monitoring videos is detected, calculate the sum of the association degrees between the target city monitoring video and those sample city monitoring videos, among a plurality of sample city monitoring videos, that operate a video to be monitored, and the sum of the occupation ratios, in the monitoring area image to which the target city monitoring video belongs, of the city monitoring videos that operate the video to be monitored, wherein: if a plurality of city monitoring videos operate the video to be monitored in the monitoring area image to which the target city monitoring video belongs, the occupation ratios of those city monitoring videos in that monitoring area image are acquired, and the sum of the occupation ratios is calculated; and obtain the viewing probability of the target city monitoring video for the video to be monitored, wherein a sample city monitoring video is a city monitoring video, among the plurality of city monitoring videos, that is in a direct connection relation with the target city monitoring video;
and a video monitoring module, configured to monitor the video to be monitored in the target city monitoring video according to the sum of the association degrees between the target city monitoring video and the sample city monitoring videos that operate the video to be monitored, the sum of the occupation ratios, in the monitoring area image to which the target city monitoring video belongs, of the city monitoring videos that operate the video to be monitored, and the viewing probability of the target city monitoring video for the video to be monitored.
8. The apparatus of claim 7, further comprising: a monitoring block safety judging module, configured to monitor the video to be monitored in the target city monitoring video, and to judge whether a traffic safety risk exists in the target monitoring block corresponding to the monitored video.
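The scoring pipeline recited in claims 1 through 5 can be sketched in code. The following is an illustrative reading only: the function names, data structures, and the weight vector are assumptions chosen for demonstration, not part of the claimed method, which leaves the weighting of the three quantities unspecified.

```python
# Illustrative sketch of the pipeline in claims 1-5.
# Weights, names, and data layouts are assumptions, not claimed subject matter.

def occupation_ratio(appearance_counts):
    """Claim 2: derive each video's occupation ratio in a monitoring
    area image from its repeated-appearance frequency there."""
    total = sum(appearance_counts.values())
    return {vid: count / total for vid, count in appearance_counts.items()}

def viewing_probability(history, video_id):
    """Claim 3: estimate the viewing probability as the fraction of
    historical viewing records (across time nodes) that involve the video."""
    if not history:
        return 0.0
    return sum(1 for record in history if record == video_id) / len(history)

def comprehensive_probability(assoc_sum, ratio_sum, view_prob,
                              weights=(0.4, 0.3, 0.3)):
    """Claim 4: weighted average of the association-degree sum, the
    occupation-ratio sum, and the viewing probability."""
    w1, w2, w3 = weights
    return w1 * assoc_sum + w2 * ratio_sum + w3 * view_prob

def top_k_to_monitor(scores, k=1):
    """Claim 5: select the videos whose comprehensive viewing
    probabilities rank in the top K positions."""
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

For example, appearance counts of 3 and 1 in one monitoring area image yield occupation ratios of 0.75 and 0.25, and `top_k_to_monitor` then picks the K highest comprehensive probabilities for monitoring.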
CN202011428869.8A 2020-12-09 2020-12-09 Smart city video monitoring method and device Withdrawn CN112601050A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011428869.8A CN112601050A (en) 2020-12-09 2020-12-09 Smart city video monitoring method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011428869.8A CN112601050A (en) 2020-12-09 2020-12-09 Smart city video monitoring method and device

Publications (1)

Publication Number Publication Date
CN112601050A true CN112601050A (en) 2021-04-02

Family

ID=75191408

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011428869.8A Withdrawn CN112601050A (en) 2020-12-09 2020-12-09 Smart city video monitoring method and device

Country Status (1)

Country Link
CN (1) CN112601050A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114863364A (en) * 2022-05-20 2022-08-05 碧桂园生活服务集团股份有限公司 Security detection method and system based on intelligent video monitoring
CN114863364B (en) * 2022-05-20 2023-03-07 碧桂园生活服务集团股份有限公司 Security detection method and system based on intelligent video monitoring

Similar Documents

Publication Publication Date Title
CN106887137B (en) Congestion event prompting method and device
CN110766258B (en) Road risk assessment method and device
CN109656973B (en) Target object association analysis method and device
CN112183367B (en) Vehicle data error detection method, device, server and storage medium
WO2022213565A1 (en) Review method and apparatus for prediction result of artificial intelligence model
CN110322687B (en) Method and device for determining running state information of target intersection
CN115965655A (en) Traffic target tracking method based on radar-vision integration
CN114627394B (en) Muck vehicle fake plate identification method and system based on unmanned aerial vehicle
CN111476685B (en) Behavior analysis method, device and equipment
CN112601050A (en) Smart city video monitoring method and device
CN114003672B (en) Method, device, equipment and medium for processing road dynamic event
CN116384844B (en) Decision method and device based on geographic information cloud platform
CN113515606A (en) Big data processing method based on intelligent medical safety and intelligent medical AI system
CN103377479A (en) Event detecting method, device and system and video camera
CN116192459A (en) Edge node network security threat monitoring method based on edge-to-edge cooperation
CN112581776B (en) Intelligent traffic scheduling method and device and scheduling center
CN116363863A (en) Traffic data anomaly detection method and device and traffic operation and maintenance system
CN112581760B (en) Traffic data matching method and device for intelligent traffic
CN113793501B (en) Road perception management and application service method and system
CN113727070B (en) Equipment resource management method and device, electronic equipment and storage medium
CN114998839B (en) Data management method and system based on hierarchical distribution
CN115472014B (en) Traffic tracing method, system, server and computer storage medium
CN114973165B (en) Event recognition algorithm testing method and device and electronic equipment
CN111611406B (en) Data storage system and method for artificial intelligence learning mode
CN113095306B (en) Security alarm method and chip

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210402