CN112381022A - Intelligent driving monitoring method, system, equipment and storable medium - Google Patents


Info

Publication number
CN112381022A
CN112381022A (application CN202011309801.8A)
Authority
CN
China
Prior art keywords: monitoring; image; determining; early warning; attribute information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011309801.8A
Other languages
Chinese (zh)
Other versions
CN112381022B (en)
Inventor
胡坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huixin Video Electronics Co., Ltd.
Original Assignee
Shenzhen Huixin Video Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huixin Video Electronics Co., Ltd.
Priority to CN202011309801.8A
Publication of CN112381022A
Application granted
Publication of CN112381022B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 31/00 Predictive alarm systems characterised by extrapolation or other computation using updated historic data

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Computing Systems (AREA)
  • Emergency Management (AREA)
  • Alarm Systems (AREA)
  • Emergency Alarm Devices (AREA)

Abstract

The invention relates to the technical field of internet of things applications and intelligent driving monitoring, and in particular to an intelligent driving monitoring method, system, equipment and storage medium. The method comprises: obtaining a safety monitoring image set corresponding to each image update cycle of a target driving monitoring area within a first monitoring early warning activation period; determining an image feature correlation matrix between the safety monitoring image sets corresponding to the image update cycles in the first monitoring early warning activation period; determining the monitoring index distribution of the target driving monitoring area in the first monitoring early warning activation period; and determining a safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation period. The invention realizes depth recognition of biometric images based on the monitoring index distribution, can quickly and accurately judge various safety events in the target driving monitoring area according to the safety monitoring image set, and achieves intelligent monitoring with timely recognition of emergency events.

Description

Intelligent driving monitoring method, system, equipment and storable medium
Technical Field
The invention relates to the technical field of application of internet of things and intelligent driving monitoring, in particular to an intelligent driving monitoring method, system, equipment and a storage medium.
Background
In some non-open places, such as parking lots, continuous driving monitoring is required in order to avoid accidents, and analysis of the monitoring picture is the key to judging whether an accident risk exists. With the development of science and technology, monitoring of a target area is now mostly performed by computer equipment; yet without a manager analyzing the monitored images, the computer equipment struggles to analyze and recognize the monitored images adaptively, making it difficult to judge abnormal or emergency events. In conclusion, common monitoring technology suffers from a low degree of intelligence.
Disclosure of Invention
In order to solve the technical problems in the related art, the invention provides an intelligent driving monitoring method, an intelligent driving monitoring system, intelligent driving monitoring equipment and a storage medium.
The first aspect is an intelligent driving monitoring method, which comprises the following steps:
acquiring a safety monitoring image set corresponding to each image updating cycle in a first monitoring early warning activation period of a target driving monitoring area, wherein the first monitoring early warning activation period comprises at least two image updating cycles, and the safety monitoring image set corresponding to each image updating cycle comprises biological characteristic images of a target driving monitoring object, which are shot or received by an intelligent camera of the internet of things in the target driving monitoring area in the corresponding image updating cycle;
determining an image characteristic correlation matrix between safety monitoring image sets corresponding to each image updating period in the first monitoring early warning activation period;
determining the monitoring index distribution of the target driving monitoring area in the first monitoring early warning activation period according to the image characteristic correlation matrix among the safety monitoring image sets corresponding to each image updating period in the first monitoring early warning activation period;
and determining a safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period according to the monitoring index distribution.
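The four steps above form a simple pipeline. As a minimal, hedged sketch (one feature vector per update cycle, the Pearson measure, the 0.5 threshold and the "normal"/"alert" labels are all illustrative assumptions, not taken from the patent):

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length feature vectors."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def monitor_period(cycle_features, index_threshold=0.5):
    """The claimed steps in miniature: given one feature vector per image
    update cycle (acquisition), correlate adjacent cycles (correlation
    matrix), treat the correlations as the monitoring index distribution,
    and map the distribution to a recognition result."""
    corr = [pearson(cycle_features[i], cycle_features[i + 1])
            for i in range(len(cycle_features) - 1)]
    return "normal" if mean(corr) >= index_threshold else "alert"
```

A high average correlation between consecutive cycles is read here as an unchanged, safe scene; a real implementation would derive far richer indexes from the biometric images.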
Optionally, the acquiring a set of safety monitoring images corresponding to each image update cycle of the target driving monitoring area in the first monitoring early warning activation period includes:
acquiring a biological characteristic image of a target driving monitoring object shot by an intelligent camera of the internet of things in the target driving monitoring area within a set shooting time step after a first image updating period begins, and determining a safety monitoring image set corresponding to the first image updating period according to the biological characteristic image of the target driving monitoring object shot by the intelligent camera of the internet of things in the target driving monitoring area within the set shooting time step after the first image updating period begins, wherein the first image updating period is any image updating period within the first monitoring early warning activation period;
under the condition that the target driving monitoring object is not shot within a set shooting time step after a second image updating period begins, determining a safety monitoring image set corresponding to the second image updating period according to a biological characteristic image of the target driving monitoring object received by the intelligent camera of the internet of things in the target driving monitoring area, wherein the second image updating period is any image updating period except for the first image updating period within the first monitoring early warning activation period;
wherein the method further comprises:
under the condition that the intelligent camera of the internet of things in the target driving monitoring area does not shoot the target driving monitoring object within the set shooting time step after a third image update cycle begins, and the safety monitoring image sets corresponding to a first set number of consecutive image update cycles before the third image update cycle are all determined according to biometric images of the target driving monitoring object received by the intelligent camera of the internet of things, sending a shooting instruction for the target driving monitoring object to the intelligent camera of the internet of things, so that the intelligent camera of the internet of things responds to the shooting instruction by shooting the target driving monitoring object, wherein the third image update cycle is any one of the first image update cycle and the second image update cycle within the first monitoring early warning activation period;
and acquiring a biological characteristic image of the target driving monitoring object shot by the intelligent camera of the internet of things in response to the shooting instruction of the target driving monitoring object, and determining a safety monitoring image set corresponding to the third image updating period according to the biological characteristic image of the target driving monitoring object shot by the intelligent camera of the internet of things in response to the shooting instruction of the target driving monitoring object.
Optionally, the determining of the image feature correlation matrix between the safety monitoring image sets corresponding to the image update cycles in the first monitoring early warning activation period includes:
determining a dynamic biological characteristic image set from the safety monitoring image sets corresponding to each image updating period in the first monitoring early warning activation period; respectively determining an image feature correlation matrix between each security monitoring image set except the dynamic biological feature image set in a security monitoring image set corresponding to each image updating cycle in the first monitoring early warning activation period and the dynamic biological feature image set;
or
And respectively determining an image characteristic correlation matrix between the safety monitoring image sets corresponding to every two adjacent image updating periods in the first monitoring early warning activation period.
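The two alternative branches above differ only in which pairs of image sets are compared. A hedged sketch of the adjacent-pair branch (per-image feature vectors and cosine similarity are illustrative assumptions, not specified by the patent):

```python
def cosine(x, y):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = sum(a * a for a in x) ** 0.5
    ny = sum(b * b for b in y) ** 0.5
    return dot / (nx * ny) if nx and ny else 0.0

def adjacent_correlation_matrices(image_sets):
    """One matrix per pair of adjacent update cycles: entry [i][j]
    relates image i of a cycle to image j of the following cycle."""
    return [[[cosine(fi, fj) for fj in nxt] for fi in cur]
            for cur, nxt in zip(image_sets, image_sets[1:])]
```

The first branch is obtained by fixing one reference cycle (the dynamic biometric image set) and pairing every other cycle's set with it instead of with its neighbour.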
Optionally, the security monitoring image sets corresponding to the image update cycles in the first monitoring and early warning activation period include a markable security monitoring image set and a non-markable security monitoring image set, and the monitoring index distribution includes a first monitoring index distribution determined according to the image feature correlation matrix corresponding to the markable security monitoring image set of each image update cycle specified in the first monitoring and early warning activation period, and a second monitoring index distribution determined according to the image feature correlation matrix corresponding to the non-markable security monitoring image set of each image update cycle specified in the first monitoring and early warning activation period;
the step of determining the safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation period according to the monitoring index distribution comprises the following steps:
and determining a safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period according to the first monitoring index distribution and the second monitoring index distribution.
Optionally, the determining, according to the image feature correlation matrix between the safety monitoring image sets corresponding to the image update cycles in the first monitoring and early warning activation period, the monitoring index distribution of the target driving monitoring area in the first monitoring and early warning activation period includes:
determining, from the safety monitoring image sets corresponding to the image update cycles in the first monitoring early warning activation period, at least one target markable safety monitoring image set whose safety evaluation index corresponding to the behavior feature identification result of the target driving monitoring object is higher than a first evaluation index threshold, and at least one target non-markable safety monitoring image set whose safety evaluation index corresponding to the behavior feature identification result of the target driving monitoring object is higher than a second evaluation index threshold;
determining the first monitoring index distribution according to the image characteristic correlation matrix corresponding to the at least one target markable safety monitoring image set, and determining the second monitoring index distribution according to the image characteristic correlation matrix corresponding to the at least one target non-markable safety monitoring image set;
wherein, according to the first monitoring index distribution and the second monitoring index distribution, determining the safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation period comprises:
determining that the safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period is a first identification result under the condition that the index concentration of the first monitoring index distribution is not less than a preset first target concentration and the index concentration of the second monitoring index distribution is not less than a preset second target concentration;
determining that the safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period is a second identification result under the condition that the index concentration of the first monitoring index distribution is not less than the first target concentration and the index concentration of the second monitoring index distribution is less than the second target concentration;
the index concentration that first control index distributes is less than first target concentration, just the index concentration that the second control index distributes is less than under the condition of second target concentration, confirm that target driving monitoring area is in safety monitoring identification result in the first control early warning activation period is the third identification result.
Optionally, if the security monitoring identification result is a third identification result, the method further includes:
acquiring N result attribute information sets corresponding to a third identification result and an attribute label set corresponding to each result attribute information set, wherein each result attribute information set comprises M different real-time attribute information, and N and M are positive integers greater than or equal to 1;
determining a current time attribute label corresponding to the result attribute information set in an attribute label set corresponding to the result attribute information set;
extracting label features by adopting the current time attribute label corresponding to the result attribute information set to obtain label attribute fusion features of each piece of real-time attribute information in the result attribute information set;
updating the label of the attribute label at the current moment corresponding to the result attribute information set based on the label attribute fusion characteristics of each real-time attribute information in the N result attribute information sets to obtain a real-time dynamic attribute label corresponding to the result attribute information set;
adding the real-time dynamic attribute tags corresponding to the result attribute information set into the attribute tag set corresponding to the result attribute information set;
returning and executing the step of determining the attribute tag at the current moment corresponding to the result attribute information set in the attribute tag set corresponding to the result attribute information set until the global security attribute coefficient corresponding to the N kinds of result attribute information sets is greater than a set security coefficient, and obtaining the adjustment information of the monitoring picture angle corresponding to the N kinds of result attribute information sets according to the global security attribute coefficient;
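The iterative refinement described above, which updates the current-time attribute labels and repeats until the global security attribute coefficient exceeds the set threshold, reduces to a fixed-point loop. A hedged sketch (the scoring and fusion functions are caller-supplied placeholders, since the patent does not define them):

```python
def refine_labels(attr_sets, security_coeff, target_coeff, max_rounds=50):
    """attr_sets: one dict per result-attribute-information set, holding an
    'initial_label' and a 'fuse' function performing the label-attribute
    fusion step. security_coeff(labels) returns the global security
    attribute coefficient for the current labels."""
    labels = [s["initial_label"] for s in attr_sets]
    for _ in range(max_rounds):
        coeff = security_coeff(labels)
        if coeff > target_coeff:
            return labels, coeff          # threshold met: loop terminates
        # update step: fuse each current-time label with its set's attributes
        labels = [s["fuse"](lab) for s, lab in zip(attr_sets, labels)]
    return labels, security_coeff(labels)  # safety cap after max_rounds
```

The returned coefficient would then drive the monitoring-picture angle adjustment described in the claim; `max_rounds` is an added safeguard the claim does not mention.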
wherein the determining of the current-time attribute tag corresponding to the result attribute information set in the attribute tag set corresponding to the result attribute information set includes:
determining a last-time attribute tag and current-time monitoring angle information corresponding to the result attribute information set, and current-time monitoring angle information corresponding to the target result attribute information set;
comparing the current-time monitoring angle information corresponding to the result attribute information set with the current-time monitoring angle information corresponding to a target result attribute information set to obtain first monitoring range coverage information of the current-time monitoring angle information corresponding to the result attribute information set, wherein the target result attribute information set is all result attribute information sets including the result attribute information set in the N kinds of result attribute information sets;
comparing the current-time monitoring angle information corresponding to the result attribute information set with the last-time attribute label corresponding to the result attribute information set to obtain second monitoring range coverage information of the current-time monitoring angle information of the result attribute information set;
determining the last-time attribute tag corresponding to the result attribute information set or the current-time monitoring angle information corresponding to the result attribute information set as the attribute tag corresponding to the current time of the result attribute information set based on the second monitoring range coverage information and the first monitoring range coverage information;
the determining of the monitoring angle information at the current moment corresponding to the target result attribute information set includes:
acquiring an attribute characteristic track of the target result attribute information set, and determining a plurality of monitoring result description information at the last moment corresponding to the target result attribute information set;
determining current-time monitoring angle information corresponding to the target result attribute information set in a plurality of monitoring result description information at the last time corresponding to the target result attribute information set according to the attribute characteristic track of the target result attribute information set;
wherein the determining of the description information of the plurality of monitoring results at the previous time corresponding to the target result attribute information set includes:
determining second monitoring range coverage information and first monitoring range coverage information of each attribute label set in the attribute label sets corresponding to the target result attribute information set;
calculating the angle adjustment characteristic of each monitoring angle information in the attribute label set corresponding to the target result attribute information set based on the second monitoring range coverage information and the first monitoring range coverage information;
sequencing the monitoring angle information in the attribute label set corresponding to the target result attribute information set in descending order of the feature weight corresponding to the angle adjustment feature, determining the first set-threshold number of monitoring angle information items at the front of the sequence as a first monitoring angle information set, and determining the second set-threshold number of monitoring angle information items at the rear of the sequence as a second monitoring angle information set;
and determining the second monitoring angle information set as the last-time monitoring result description information corresponding to the target result attribute information set.
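The ranking step above can be sketched directly; `weight` stands in for the feature weight of the angle adjustment feature, and all names are illustrative, not from the patent:

```python
def split_angle_info(angle_infos, weight, first_k, second_k):
    """Rank monitoring-angle records by feature weight (descending),
    keep the top first_k as the first set and the bottom second_k as
    the second set; per the claim, only the second set is retained as
    the previous-time monitoring-result description information."""
    ranked = sorted(angle_infos, key=weight, reverse=True)
    first_set = ranked[:first_k]
    second_set = ranked[len(ranked) - second_k:] if second_k else []
    return first_set, second_set
```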
Optionally, the determining, according to the image feature correlation matrix between the safety monitoring image sets corresponding to the image update cycles in the first monitoring and early warning activation period, the monitoring index distribution of the target driving monitoring area in the first monitoring and early warning activation period includes:
determining a matrix correction coefficient of each image characteristic correlation matrix according to the number of safety monitoring images contained in a safety monitoring image set corresponding to each image updating period in the first monitoring early warning activation period;
determining the monitoring index distribution of the target driving monitoring area in the first monitoring early warning activation period according to the image characteristic correlation matrix among the safety monitoring image sets corresponding to the image updating periods in the first monitoring early warning activation period and the matrix correction coefficient of each image characteristic correlation matrix;
wherein the method further comprises: and sending prompt information to a target driving monitoring object through prompt equipment in the target driving monitoring area according to a safety monitoring recognition result of the target driving monitoring area in the first monitoring early warning activation time period.
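The correction-coefficient weighting described above can be sketched as follows. The patent does not give the coefficient rule, so size-proportional weights are assumed purely for illustration:

```python
def weighted_index_distribution(matrices, set_sizes):
    """Weight each cycle's image-feature correlation matrix by a matrix
    correction coefficient derived from how many safety monitoring images
    its set contains, then reduce each matrix to one monitoring index."""
    total = sum(set_sizes)
    coeffs = [n / total for n in set_sizes]   # assumed proportional rule
    return [c * sum(sum(row) for row in m) / (len(m) * len(m[0]))
            for c, m in zip(coeffs, matrices)]
```

Cycles backed by more images thus contribute more to the monitoring index distribution, matching the intuition that a larger image set yields a more reliable correlation estimate.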
The second aspect is an intelligent driving monitoring system, which comprises intelligent driving monitoring equipment and an internet of things intelligent camera that communicate with each other; wherein the intelligent driving monitoring equipment is configured to:
acquiring a safety monitoring image set corresponding to each image updating cycle in a first monitoring early warning activation period of a target driving monitoring area, wherein the first monitoring early warning activation period comprises at least two image updating cycles, and the safety monitoring image set corresponding to each image updating cycle comprises biological characteristic images of a target driving monitoring object, which are shot or received by an intelligent camera of the internet of things in the target driving monitoring area in the corresponding image updating cycle;
determining an image characteristic correlation matrix between safety monitoring image sets corresponding to each image updating period in the first monitoring early warning activation period;
determining the monitoring index distribution of the target driving monitoring area in the first monitoring early warning activation period according to the image characteristic correlation matrix among the safety monitoring image sets corresponding to each image updating period in the first monitoring early warning activation period;
and determining a safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period according to the monitoring index distribution.
A third aspect is an intelligent driving monitoring device comprising a processor and a memory that communicate with each other, the processor performing the method of the first aspect when executing a computer program read from the memory.
A fourth aspect is a storable medium having stored thereon a computer program which, when executed, implements the method of the first aspect.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects.
Firstly, a safety monitoring image set corresponding to each image updating period of a target driving monitoring area in a first monitoring early warning activation period is obtained, secondly, an image characteristic correlation matrix between the safety monitoring image sets corresponding to each image updating period in the first monitoring early warning activation period is determined, then, monitoring index distribution of the target driving monitoring area in the first monitoring early warning activation period is determined, and finally, a safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation period is determined. By the design, on one hand, continuity and global analysis of a safety monitoring image set can be realized on the basis of the image characteristic correlation matrix, and on the other hand, depth identification of the biological characteristic image can be realized on the basis of monitoring index distribution. Therefore, various safety events in the target driving monitoring area can be rapidly and accurately judged according to the safety monitoring image set, the monitoring image is analyzed and identified in a self-adaptive manner, and the monitoring intelligent degree is improved so as to identify the emergency in time.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart illustrating an intelligent driving monitoring method according to an exemplary embodiment.
Fig. 2 is an architecture diagram illustrating an intelligent vehicle monitoring system according to an exemplary embodiment.
Fig. 3 is a hardware diagram illustrating an intelligent driving monitoring device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
In order to solve the technical problem that the common monitoring technology is difficult to perform biological feature depth identification and global analysis of a monitored image during image monitoring, the embodiment of the invention provides an intelligent driving monitoring method, an intelligent driving monitoring system, intelligent driving monitoring equipment and a storage medium.
Referring first to fig. 1, an intelligent driving monitoring method is shown, which may be applied to an intelligent driving monitoring device and may include the following steps S11 to S14.
Step S11, acquiring a safety monitoring image set corresponding to each image updating cycle of the target driving monitoring area in the first monitoring early warning activation period.
For example, the first monitoring early warning activation period includes at least two image update cycles, and the safety monitoring image set corresponding to each image update cycle includes a biological feature image of the target driving monitoring object, which is captured or received by the internet of things intelligent camera in the target driving monitoring area in the corresponding image update cycle. The target driving monitoring area may be an industrial park, a residential home, or other non-open location. The biometric image includes a face image and a motion image. The target driving monitoring object can be a person or a small animal.
Step S12, determining an image feature correlation matrix between the security monitoring image sets corresponding to each image update cycle in the first monitoring and early warning activation period.
For example, the image feature correlation matrix is a matrix describing the correlation and temporal continuity between the sets of security monitoring images corresponding to different image update cycles, which facilitates global analysis of the complete monitoring image.
Step S13, determining, according to the image feature correlation matrix between the safety monitoring image sets corresponding to each image update cycle in the first monitoring and early warning activation period, the monitoring index distribution of the target driving monitoring area in the first monitoring and early warning activation period.
For example, the monitoring index distribution may be a distribution list or distribution map formed jointly by different monitoring indexes and used for judging the monitoring safety of the target driving monitoring area.
And step S14, determining the safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period according to the monitoring index distribution.
For example, the safety monitoring identification result may be used to represent monitoring safety corresponding to the target driving monitoring area (e.g., whether there is an abnormal person performing an abnormal behavior, etc.).
It can be understood that through the descriptions in the foregoing steps S11 to S14, firstly, the security monitoring image sets corresponding to the image update cycles of the target driving monitoring region in the first monitoring and early warning activation period are obtained, secondly, the image feature correlation matrix between the security monitoring image sets corresponding to the image update cycles in the first monitoring and early warning activation period is determined, then, the monitoring index distribution of the target driving monitoring region in the first monitoring and early warning activation period is determined, and finally, the security monitoring identification result of the target driving monitoring region in the first monitoring and early warning activation period is determined. By the design, on one hand, continuity and global analysis of a safety monitoring image set can be realized on the basis of the image characteristic correlation matrix, and on the other hand, depth identification of the biological characteristic image can be realized on the basis of monitoring index distribution. Therefore, various safety events in the target driving monitoring area can be rapidly and accurately judged according to the safety monitoring image set, the monitoring image is analyzed and identified in a self-adaptive manner, and the monitoring intelligent degree is improved so as to identify the emergency in time.
In some examples, the acquiring of the set of safety monitoring images corresponding to each image update cycle of the target driving monitoring area in the first monitoring early warning activation period described in step S11 may further include the following contents described in step S111 and step S112.
Step S111, obtaining a biological characteristic image of the target driving monitoring object shot by the intelligent camera of the Internet of things in the target driving monitoring area within a set shooting time step after a first image updating period begins, and determining a safety monitoring image set corresponding to the first image updating period according to the biological characteristic image of the target driving monitoring object shot by the intelligent camera of the Internet of things in the target driving monitoring area within the set shooting time step after the first image updating period begins, wherein the first image updating period is any image updating period within the first monitoring early warning activation period.
Step S112, under the condition that the internet of things intelligent camera in the target driving monitoring area does not shoot the target driving monitoring object within a set shooting time step after a second image update period begins, determining a safety monitoring image set corresponding to the second image update period according to a biometric image of the target driving monitoring object received by the internet of things intelligent camera in the target driving monitoring area, where the second image update period is any image update period except for the first image update period within the first monitoring early warning activation period.
It is understood that, on the basis of the above steps S111 and S112, the contents described in the following steps S113 and S114 may be further included.
Step S113, in the case that the internet of things intelligent camera in the target driving monitoring area does not shoot the target driving monitoring object within the set shooting time step after a third image update period begins, and the safety monitoring image sets corresponding to a first set number of consecutive image update periods before the third image update period are all determined according to biometric images of the target driving monitoring object received by the internet of things intelligent camera, sending a shooting instruction for the target driving monitoring object to the internet of things intelligent camera, so that the internet of things intelligent camera shoots the target driving monitoring object in response to the shooting instruction, where the third image update period is any one of the first image update period and the second image update period within the first monitoring early warning activation period.
Step S114, obtaining a biological characteristic image of the target driving monitoring object shot by the intelligent camera of the internet of things in response to the shooting instruction of the target driving monitoring object, and determining a safety monitoring image set corresponding to the third image updating period according to the biological characteristic image of the target driving monitoring object shot by the intelligent camera of the internet of things in response to the shooting instruction of the target driving monitoring object.
In the practical application of the steps S111 to S114, a shooting instruction can be issued based on whether the internet of things intelligent camera shoots the target driving monitoring object, so that the biological characteristic image of the target driving monitoring object can be obtained under different conditions, and the complete biological characteristic image can be analyzed and identified subsequently.
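The fallback logic of steps S111 to S114 can be illustrated with a small sketch; the `Camera` stub, the `idle_limit` of three cycles, and the string image stand-ins are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Camera:
    """Hypothetical IoT smart-camera stub: one capture attempt per time
    step plus a buffer of biometric images received from elsewhere."""
    captured: Optional[str] = None          # image shot within the time step
    received: List[str] = field(default_factory=list)

    def shoot_on_instruction(self) -> str:
        return "instructed_capture"

def image_set_for_cycle(cam: Camera, idle_cycles: int, idle_limit: int = 3) -> List[str]:
    """Mirror steps S111-S114: prefer a fresh capture (S111); after a run
    of idle cycles, issue an explicit shooting instruction and use its
    result (S113-S114); otherwise fall back to received images (S112)."""
    if cam.captured is not None:                       # S111
        return [cam.captured]
    if idle_cycles >= idle_limit:                      # S113-S114
        return [cam.shoot_on_instruction()]
    return list(cam.received)                          # S112

sets = [
    image_set_for_cycle(Camera(captured="live"), idle_cycles=0),
    image_set_for_cycle(Camera(received=["cached"]), idle_cycles=1),
    image_set_for_cycle(Camera(received=["cached"]), idle_cycles=3),
]
```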
In another example, the determining of the image feature correlation matrix between the safety monitoring image sets corresponding to the image update periods in the first monitoring early warning activation period described in step S12 may be implemented in either of the following two ways.
In a first implementation manner, a dynamic biological characteristic image set is determined from a security monitoring image set corresponding to each image updating cycle in a first monitoring early warning activation period; and respectively determining an image feature correlation matrix between each security monitoring image set except the dynamic biological feature image set in the security monitoring image set corresponding to each image updating cycle in the first monitoring early warning activation period and the dynamic biological feature image set.
In a second implementation manner, image feature correlation matrixes between security monitoring image sets corresponding to every two adjacent image update cycles in the first monitoring and early warning activation period are respectively determined.
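Both implementation manners can be sketched as follows, assuming each image set is reduced to a matrix of feature vectors and "correlation" is taken as cosine similarity; both assumptions are illustrative, not part of the disclosure.

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def anchored_matrices(sets, anchor_idx=0):
    """First manner: correlate every other image set against a single
    dynamic biometric image set chosen as the anchor."""
    anchor = normalize(sets[anchor_idx])
    return [normalize(s) @ anchor.T
            for i, s in enumerate(sets) if i != anchor_idx]

def pairwise_matrices(sets):
    """Second manner: correlate the image sets of every two adjacent
    image update periods."""
    return [normalize(sets[i]) @ normalize(sets[i + 1]).T
            for i in range(len(sets) - 1)]

rng = np.random.default_rng(1)
image_sets = [rng.normal(size=(3, 6)) for _ in range(4)]
m_anchor = anchored_matrices(image_sets)   # three matrices vs. the anchor
m_pair = pairwise_matrices(image_sets)     # three adjacent-pair matrices
```

The first manner gives every set a common reference frame; the second preserves the temporal ordering between cycles.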
In some possible examples, the security monitoring image sets corresponding to the respective image update cycles in the first monitoring and early warning activation period include a markable security monitoring image set and a non-markable security monitoring image set, and the monitoring index distribution includes a first monitoring index distribution determined according to an image feature correlation matrix corresponding to the markable security monitoring image set of the respective image update cycle specified in the first monitoring and early warning activation period, and a second monitoring index distribution determined according to an image feature correlation matrix corresponding to the non-markable security monitoring image set of the respective image update cycle specified in the first monitoring and early warning activation period. Based on the above, the step S14 of determining the safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation period according to the monitoring index distribution includes the following step S140: determining a safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period according to the first monitoring index distribution and the second monitoring index distribution.
In a possible embodiment, the determining, according to the image feature correlation matrix between the safety monitoring image sets corresponding to the image update cycles in the first monitoring and early warning activation period, the monitoring index distribution of the target driving monitoring area in the first monitoring and early warning activation period, which is described in step S13, may be implemented by the following contents described in step S131 and step S132.
Step S131, determining at least one target markable security monitoring image set whose security evaluation index corresponding to the behavior feature recognition result of the target driving monitoring object is higher than a first evaluation index threshold value, and at least one target non-marked security monitoring image set whose security evaluation index corresponding to the behavior feature recognition result of the target driving monitoring object is higher than a second evaluation index threshold value, from the security monitoring image sets corresponding to each image update cycle within the first monitoring and early warning activation period.
Step S132, determining the first monitoring index distribution according to the image feature correlation matrix corresponding to the at least one target markable security monitoring image set, and determining the second monitoring index distribution according to the image feature correlation matrix corresponding to the at least one target non-markable security monitoring image set.
In this way, based on the above steps S131 and S132, the monitoring index distribution can be determined based on the evaluation index threshold, so as to ensure that there is a correlation and influence between the monitoring indexes in the monitoring index distribution, and thus ensure global analysis and identification of the monitoring image.
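Steps S131 and S132 can be illustrated with a toy filter; the dictionary layout, the threshold values, and the mean-based reduction of each correlation matrix are assumptions made only for the sketch.

```python
import numpy as np

def split_by_evaluation_index(image_sets, thr_markable=0.6, thr_non_markable=0.4):
    """Mirror S131: keep markable sets whose safety evaluation index
    exceeds the first threshold and non-markable sets exceeding the
    second; then (S132) reduce each group's correlation matrices to a
    monitoring index distribution."""
    first = [s["matrix"].mean() for s in image_sets
             if s["markable"] and s["eval_index"] > thr_markable]
    second = [s["matrix"].mean() for s in image_sets
              if not s["markable"] and s["eval_index"] > thr_non_markable]
    return np.array(first), np.array(second)

sets = [
    {"markable": True,  "eval_index": 0.9, "matrix": np.full((2, 2), 0.8)},
    {"markable": True,  "eval_index": 0.5, "matrix": np.full((2, 2), 0.2)},  # below threshold, dropped
    {"markable": False, "eval_index": 0.7, "matrix": np.full((2, 2), 0.6)},
]
first_dist, second_dist = split_by_evaluation_index(sets)
```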
Further, the step S140 of determining the safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation period according to the first monitoring index distribution and the second monitoring index distribution may be performed through the following steps S141 to S143.
Step S141, under the condition that the index concentration of the first monitoring index distribution is not less than a preset first target concentration and the index concentration of the second monitoring index distribution is not less than a preset second target concentration, determining that the safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period is a first identification result.
And S142, determining that the safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period is a second identification result under the condition that the index concentration of the first monitoring index distribution is not less than the first target concentration and the index concentration of the second monitoring index distribution is less than the second target concentration.
Step S143, under the condition that the index concentration of the first monitoring index distribution is smaller than the first target concentration and the index concentration of the second monitoring index distribution is smaller than the second target concentration, determining that the safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period is a third identification result.
In the above steps S141 to S143, the first recognition result is used to represent that the target driving monitoring area is normal, the second recognition result is used to represent that the target driving monitoring area is abnormal, and the third recognition result is used to represent that an emergency occurs in the target driving monitoring area. Therefore, multiple recognition results can be completely determined based on the comparison between the index concentrations of the different monitoring index distributions and the corresponding target concentrations, so that monitoring recognition is achieved comprehensively and intelligently.
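The decision table of steps S141 to S143 maps directly onto a small function; the concentration values and target thresholds below are placeholders, and the fourth branch covers the one combination the three steps leave unspecified.

```python
def recognition_result(conc1: float, conc2: float,
                       target1: float = 0.7, target2: float = 0.5) -> str:
    """Decision table of S141-S143; 'concentration' is taken here as a
    scalar summary of each monitoring index distribution."""
    if conc1 >= target1 and conc2 >= target2:
        return "first: area normal"          # S141
    if conc1 >= target1 and conc2 < target2:
        return "second: area abnormal"       # S142
    if conc1 < target1 and conc2 < target2:
        return "third: emergency"            # S143
    return "undetermined"                    # combination not covered by S141-S143
```

Note that the case where only the second concentration clears its target is not assigned a result by the three steps, so the sketch returns a sentinel for it.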
On the basis of the step S143, in order to implement omni-directional monitoring of the target driving monitoring area and provide a decision basis for the subsequent formulation of emergency measures, the following steps S144 to S149 may be further included.
Step S144, acquiring N result attribute information sets corresponding to the third recognition result and an attribute tag set corresponding to each result attribute information set, where each result attribute information set includes M different real-time attribute information, and N and M are positive integers greater than or equal to 1.
Step S145, determining the current time attribute tag corresponding to the result attribute information set in the attribute tag set corresponding to the result attribute information set.
Step S146, extracting the tag feature by using the current time attribute tag corresponding to the result attribute information set, to obtain the tag attribute fusion feature of each piece of real-time attribute information in the result attribute information set.
Step S147, updating the label of the current time attribute label corresponding to the result attribute information set based on the label attribute fusion characteristic of each real-time attribute information in the N result attribute information sets, to obtain a real-time dynamic attribute label corresponding to the result attribute information set.
Step S148, adding the real-time dynamic attribute tag corresponding to the result attribute information set to the attribute tag set corresponding to the result attribute information set.
Step S149, returning to and executing the step of determining the current time attribute tag corresponding to the result attribute information set in the attribute tag set corresponding to the result attribute information set until the global security attribute coefficients corresponding to the N kinds of result attribute information sets are greater than the set security coefficient, and obtaining the adjustment information of the monitoring screen angles corresponding to the N kinds of result attribute information sets according to the global security attribute coefficients.
It can be understood that, through the above steps S144 to S149, the global security attribute coefficient can be determined based on the N result attribute information sets, so that the adjustment information of the monitoring screen angles corresponding to the N result attribute information sets is obtained when the global security attribute coefficient is greater than the set security coefficient. Therefore, the monitoring angle can be adjusted based on the adjustment information, so that the target driving monitoring area is monitored comprehensively and an accurate decision basis is provided for subsequent emergency measures.
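The iterative loop of steps S145 to S149 can be sketched with scalar tags standing in for the label structures; the fusion rule, the +0.1 update, and the angle-adjustment formula are toy assumptions chosen only to show the loop's shape.

```python
def refine_attribute_tags(info_sets, safety_threshold=0.9, max_rounds=10):
    """Loop of S145-S149 under toy assumptions: each round fuses the
    current tags into updated dynamic tags (S146-S148) and recomputes a
    global security attribute coefficient; the loop stops once that
    coefficient exceeds the set safety coefficient (S149), yielding
    monitoring-screen-angle adjustment information."""
    tags = [s["initial_tag"] for s in info_sets]          # S145
    for _ in range(max_rounds):
        coeff = sum(tags) / len(tags)                     # global coefficient
        if coeff > safety_threshold:                      # S149 exit condition
            return [round(1.0 - t, 3) for t in tags]      # angle adjustments
        tags = [min(1.0, t + 0.1) for t in tags]          # S146-S148 update
    return []

adjustments = refine_attribute_tags(
    [{"initial_tag": 0.8}, {"initial_tag": 0.9}]
)
```

Here the first round's coefficient (0.85) fails the 0.9 threshold, the tags are updated once, and the loop then exits with per-set adjustments.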
Further, the determining, in the attribute tag set corresponding to the result attribute information set, the current-time attribute tag corresponding to the result attribute information set in step S145 may include the following contents described in steps S1451 to S1454.
Step S1451, determining the previous-time attribute tag and the current-time monitoring angle information corresponding to the result attribute information set, and the current-time monitoring angle information corresponding to the target result attribute information set.
Step S1452, comparing the current-time monitoring angle information corresponding to the result attribute information set with the current-time monitoring angle information corresponding to a target result attribute information set, to obtain first monitoring range coverage information of the current-time monitoring angle information corresponding to the result attribute information set, where the target result attribute information set is all result attribute information sets including the result attribute information set in the N kinds of result attribute information sets.
Step S1453, comparing the current-time monitoring angle information corresponding to the result attribute information set with the previous-time attribute tag corresponding to the result attribute information set, to obtain second monitoring range coverage information of the current-time monitoring angle information of the result attribute information set.
Step S1454, based on the second monitoring range coverage information and the first monitoring range coverage information, determining the previous-time attribute tag corresponding to the result attribute information set or the current-time monitoring angle information corresponding to the result attribute information set as the attribute tag corresponding to the current time of the result attribute information set.
In this way, based on the contents described in steps S1451 to S1454, the current time attribute tag can be determined accurately and quickly.
In step S1451, determining the current-time monitoring angle information corresponding to the target result attribute information set includes: acquiring an attribute feature track of the target result attribute information set, and determining a plurality of pieces of previous-time monitoring result description information corresponding to the target result attribute information set; and determining, according to the attribute feature track of the target result attribute information set, the current-time monitoring angle information corresponding to the target result attribute information set from the plurality of pieces of previous-time monitoring result description information corresponding to the target result attribute information set.
Further, the determining of the plurality of pieces of previous-time monitoring result description information corresponding to the target result attribute information set includes: determining the second monitoring range coverage information and the first monitoring range coverage information of each attribute tag in the attribute tag set corresponding to the target result attribute information set; calculating the angle adjustment feature of each piece of monitoring angle information in the attribute tag set corresponding to the target result attribute information set based on the second monitoring range coverage information and the first monitoring range coverage information; sorting the pieces of monitoring angle information in the attribute tag set corresponding to the target result attribute information set in descending order of the feature weights corresponding to their angle adjustment features, determining the first set threshold number of pieces of monitoring angle information at the front of the ranking as a first monitoring angle information set, and determining the second set threshold number of pieces of monitoring angle information at the rear of the ranking as a second monitoring angle information set; and determining the second monitoring angle information set as the previous-time monitoring result description information corresponding to the target result attribute information set.
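The ranking step in this sub-procedure can be illustrated as a simple sort; the `weight` field and the two set-threshold counts below are hypothetical stand-ins for the angle adjustment features and the first and second set thresholds.

```python
def describe_previous_results(angle_infos, first_k=1, last_k=2):
    """Sketch of the sorting sub-procedure: rank each monitoring-angle
    record by the weight of its angle-adjustment feature (descending),
    take the top first_k records as the first monitoring angle
    information set and the trailing last_k records as the second set,
    which serves as the previous-time monitoring result description."""
    ranked = sorted(angle_infos, key=lambda a: a["weight"], reverse=True)
    first_set = ranked[:first_k]
    second_set = ranked[-last_k:]
    return first_set, second_set

infos = [{"angle": 30, "weight": 0.2},
         {"angle": 60, "weight": 0.9},
         {"angle": 90, "weight": 0.5}]
first_set, second_set = describe_previous_results(infos)
```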
In an alternative embodiment, the determining, according to the image feature correlation matrix between the safety monitoring image sets corresponding to the image update cycles in the first monitoring and early warning activation period, the monitoring index distribution of the target driving monitoring area in the first monitoring and early warning activation period described in step S13 may further include the following steps (1) and (2).
(1) Determining a matrix correction coefficient of each image characteristic correlation matrix according to the number of safety monitoring images contained in the safety monitoring image set corresponding to each image updating period in the first monitoring early warning activation period.
(2) Determining the monitoring index distribution of the target driving monitoring area in the first monitoring early warning activation period according to the image characteristic correlation matrices between the safety monitoring image sets corresponding to the image updating periods in the first monitoring early warning activation period and the matrix correction coefficient of each image characteristic correlation matrix.
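Steps (1) and (2) can be sketched with a simple correction rule, assuming each adjacent-cycle matrix is weighted by the relative image count of its source cycle; this particular weighting is an assumption, since the disclosure does not fix the coefficient formula.

```python
import numpy as np

def corrected_index_distribution(image_sets, matrices):
    """Weight each adjacent-cycle correlation matrix by the relative
    number of safety monitoring images in its source cycle (step (1)),
    then reduce to the monitoring index distribution (step (2))."""
    counts = np.array([len(s) for s in image_sets[:-1]], dtype=float)
    coeffs = counts / counts.sum()          # matrix correction coefficients
    return np.array([c * m.mean() for c, m in zip(coeffs, matrices)])

sets = [["img"] * 3, ["img"] * 1, ["img"] * 4]       # image counts 3, 1, 4
mats = [np.full((2, 2), 1.0), np.full((2, 2), 1.0)]  # two adjacent-pair matrices
dist = corrected_index_distribution(sets, mats)
```

Cycles with more captured images thus contribute proportionally more to the distribution than sparse cycles.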
Further, on the basis of the above, the method further includes: sending prompt information to the target driving monitoring object through a prompt device in the target driving monitoring area according to the safety monitoring recognition result of the target driving monitoring area in the first monitoring early warning activation time period. For example, the prompt information may be a voice broadcast or a sound and light warning message.
Based on the same inventive concept, and referring to fig. 2 in combination, an intelligent driving monitoring system 100 is further provided, which includes an intelligent driving monitoring device 200 and an internet of things intelligent camera 300 that communicate with each other; wherein the intelligent driving monitoring device 200 is configured to:
acquiring a safety monitoring image set corresponding to each image updating cycle in a first monitoring early warning activation period of a target driving monitoring area, wherein the first monitoring early warning activation period comprises at least two image updating cycles, and the safety monitoring image set corresponding to each image updating cycle comprises biological characteristic images of a target driving monitoring object, which are shot or received by an intelligent camera of the internet of things in the target driving monitoring area in the corresponding image updating cycle;
determining an image characteristic correlation matrix between safety monitoring image sets corresponding to each image updating period in the first monitoring early warning activation period;
determining the monitoring index distribution of the target driving monitoring area in the first monitoring early warning activation period according to the image characteristic correlation matrix among the safety monitoring image sets corresponding to each image updating period in the first monitoring early warning activation period;
and determining a safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period according to the monitoring index distribution.
It is understood that, for the description of the embodiment of the system, reference may be made to the description of the embodiment of the method shown in fig. 1, and details are not described here.
On the basis of the above, and referring to fig. 3 in combination, an intelligent driving monitoring device 200 is further provided, which includes a processor 210 and a memory 220 that communicate with each other, wherein the processor 210 implements the above method when executing a computer program read from the memory 220.
Furthermore, a storable medium is provided on which a computer program is stored which, when executed, carries out the method described above.
In summary, by applying the above scheme, compared with the prior art, on one hand, continuity and global analysis of the security monitoring image sets can be realized based on the image feature correlation matrices, and on the other hand, in-depth recognition of the biometric images can be realized based on the monitoring index distribution. Therefore, various safety events in the target driving monitoring area can be judged rapidly and accurately according to the security monitoring image sets, the monitoring images are analyzed and identified adaptively, and the degree of monitoring intelligence is improved so that emergencies can be identified in time.
It is to be understood that the present invention is not limited to what has been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. An intelligent driving monitoring method is characterized by comprising the following steps:
acquiring a safety monitoring image set corresponding to each image updating cycle in a first monitoring early warning activation period of a target driving monitoring area, wherein the first monitoring early warning activation period comprises at least two image updating cycles, and the safety monitoring image set corresponding to each image updating cycle comprises biological characteristic images of a target driving monitoring object, which are shot or received by an intelligent camera of the internet of things in the target driving monitoring area in the corresponding image updating cycle;
determining an image characteristic correlation matrix between safety monitoring image sets corresponding to each image updating period in the first monitoring early warning activation period;
determining the monitoring index distribution of the target driving monitoring area in the first monitoring early warning activation period according to the image characteristic correlation matrix among the safety monitoring image sets corresponding to each image updating period in the first monitoring early warning activation period;
and determining a safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period according to the monitoring index distribution.
2. The method of claim 1, wherein the obtaining of the set of safety monitoring images corresponding to each image update cycle of the target driving monitoring area in the first monitoring early warning activation period comprises:
acquiring a biological characteristic image of a target driving monitoring object shot by an intelligent camera of the internet of things in the target driving monitoring area within a set shooting time step after a first image updating period begins, and determining a safety monitoring image set corresponding to the first image updating period according to the biological characteristic image of the target driving monitoring object shot by the intelligent camera of the internet of things in the target driving monitoring area within the set shooting time step after the first image updating period begins, wherein the first image updating period is any image updating period within the first monitoring early warning activation period;
under the condition that the target driving monitoring object is not shot within a set shooting time step after a second image updating period begins, determining a safety monitoring image set corresponding to the second image updating period according to a biological characteristic image of the target driving monitoring object received by the intelligent camera of the internet of things in the target driving monitoring area, wherein the second image updating period is any image updating period except for the first image updating period within the first monitoring early warning activation period;
wherein the method further comprises:
in the case that the internet of things intelligent camera in the target driving monitoring area does not shoot the target driving monitoring object within the set shooting time step after a third image update period begins, and the safety monitoring image sets corresponding to a first set number of consecutive image update periods before the third image update period are all determined according to biometric images of the target driving monitoring object received by the internet of things intelligent camera, sending a shooting instruction for the target driving monitoring object to the internet of things intelligent camera, so that the internet of things intelligent camera shoots the target driving monitoring object in response to the shooting instruction, wherein the third image update period is any one of the first image update period and the second image update period within the first monitoring early warning activation period;
and acquiring a biological characteristic image of the target driving monitoring object shot by the intelligent camera of the internet of things in response to the shooting instruction of the target driving monitoring object, and determining a safety monitoring image set corresponding to the third image updating period according to the biological characteristic image of the target driving monitoring object shot by the intelligent camera of the internet of things in response to the shooting instruction of the target driving monitoring object.
3. The method of claim 1, wherein the determining of the image feature correlation matrix between the safety monitoring image sets corresponding to the image update cycles in the first monitoring early warning activation period includes:
determining a dynamic biological characteristic image set from the safety monitoring image sets corresponding to each image updating period in the first monitoring early warning activation period; respectively determining an image feature correlation matrix between each security monitoring image set except the dynamic biological feature image set in a security monitoring image set corresponding to each image updating cycle in the first monitoring early warning activation period and the dynamic biological feature image set;
or
Respectively determining an image characteristic correlation matrix between the safety monitoring image sets corresponding to every two adjacent image updating periods in the first monitoring early warning activation period.
4. The method according to claim 1, wherein the security monitoring image sets corresponding to the respective image update cycles in the first monitoring and forewarning activation period include a markable security monitoring image set and a non-markable security monitoring image set, and the monitoring index distribution includes a first monitoring index distribution determined according to the image feature correlation matrix corresponding to the markable security monitoring image set of the respective image update cycle specified in the first monitoring and forewarning activation period, and a second monitoring index distribution determined according to the image feature correlation matrix corresponding to the non-markable security monitoring image set of the respective image update cycle specified in the first monitoring and forewarning activation period;
the step of determining the safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation period according to the monitoring index distribution comprises the following steps:
and determining a safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period according to the first monitoring index distribution and the second monitoring index distribution.
5. The method according to claim 4, wherein the determining, according to the image feature correlation matrix between the safety monitoring image sets corresponding to the image update cycles in the first monitoring and early warning activation period, the monitoring index distribution of the target driving monitoring area in the first monitoring and early warning activation period includes:
determining at least one target markable safety monitoring image set with a safety evaluation index corresponding to the behavior feature identification result of the target driving monitoring object higher than a first evaluation index threshold value and at least one target non-marked safety monitoring image set with a safety evaluation index corresponding to the behavior feature identification result of the target driving monitoring object higher than a second evaluation index threshold value from the safety monitoring image sets corresponding to the image update cycles in the first monitoring early warning activation period;
determining the first monitoring index distribution according to the image characteristic correlation matrix corresponding to the at least one target markable safety monitoring image set, and determining the second monitoring index distribution according to the image characteristic correlation matrix corresponding to the at least one target non-markable safety monitoring image set;
wherein, according to the first monitoring index distribution and the second monitoring index distribution, determining the safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation period comprises:
determining that the safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period is a first identification result under the condition that the index concentration of the first monitoring index distribution is not less than a preset first target concentration and the index concentration of the second monitoring index distribution is not less than a preset second target concentration;
determining that the safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period is a second identification result under the condition that the index concentration of the first monitoring index distribution is not less than the first target concentration and the index concentration of the second monitoring index distribution is less than the second target concentration;
and determining that the safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period is a third identification result under the condition that the index concentration of the first monitoring index distribution is less than the first target concentration and the index concentration of the second monitoring index distribution is less than the second target concentration.
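The three-way decision in claim 5 can be illustrated with a minimal sketch; the function name, string results, and the idea of returning "undetermined" for the combination the claim does not cover are hypothetical, chosen only to mirror the claim's conditions.

```python
def classify_recognition_result(first_concentration, second_concentration,
                                first_target, second_target):
    """Map the concentrations of the two monitoring index distributions
    to the claim's identification results (illustrative sketch)."""
    if first_concentration >= first_target and second_concentration >= second_target:
        return "first"    # both distributions sufficiently concentrated
    if first_concentration >= first_target:
        return "second"   # only the first distribution is concentrated
    if second_concentration < second_target:
        return "third"    # neither distribution is concentrated
    return "undetermined" # first below target, second above: not covered by the claim
```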
6. The method of claim 5, wherein, if the safety monitoring identification result is the third identification result, the method further comprises:
acquiring N result attribute information sets corresponding to a third identification result and an attribute label set corresponding to each result attribute information set, wherein each result attribute information set comprises M different real-time attribute information, and N and M are positive integers greater than or equal to 1;
determining a current-time attribute label corresponding to the result attribute information set in the attribute label set corresponding to the result attribute information set;
performing label feature extraction using the current-time attribute label corresponding to the result attribute information set, to obtain label attribute fusion features of each piece of real-time attribute information in the result attribute information set;
updating the current-time attribute label corresponding to the result attribute information set based on the label attribute fusion features of each piece of real-time attribute information in the N result attribute information sets, to obtain a real-time dynamic attribute label corresponding to the result attribute information set;
adding the real-time dynamic attribute label corresponding to the result attribute information set into the attribute label set corresponding to the result attribute information set;
returning to and executing the step of determining the current-time attribute label corresponding to the result attribute information set in the attribute label set corresponding to the result attribute information set, until the global security attribute coefficient corresponding to the N result attribute information sets is greater than a set security coefficient, and obtaining adjustment information of the monitoring picture angle corresponding to the N result attribute information sets according to the global security attribute coefficient;
wherein the determining of the current-time attribute label corresponding to the result attribute information set in the attribute label set corresponding to the result attribute information set includes:
determining the last-time attribute label and the current-time monitoring angle information corresponding to the result attribute information set, and the current-time monitoring angle information corresponding to a target result attribute information set;
comparing the current-time monitoring angle information corresponding to the result attribute information set with the current-time monitoring angle information corresponding to the target result attribute information set to obtain first monitoring range coverage information of the current-time monitoring angle information corresponding to the result attribute information set, wherein the target result attribute information set refers to all result attribute information sets, among the N result attribute information sets, that include the result attribute information set;
comparing the current-time monitoring angle information corresponding to the result attribute information set with the last-time attribute label corresponding to the result attribute information set to obtain second monitoring range coverage information of the current-time monitoring angle information of the result attribute information set;
determining, based on the second monitoring range coverage information and the first monitoring range coverage information, either the last-time attribute label corresponding to the result attribute information set or the current-time monitoring angle information corresponding to the result attribute information set as the current-time attribute label of the result attribute information set;
the determining of the monitoring angle information at the current moment corresponding to the target result attribute information set includes:
acquiring an attribute characteristic track of the target result attribute information set, and determining a plurality of pieces of last-time monitoring result description information corresponding to the target result attribute information set;
determining, according to the attribute characteristic track of the target result attribute information set, the current-time monitoring angle information corresponding to the target result attribute information set from the plurality of pieces of last-time monitoring result description information corresponding to the target result attribute information set;
wherein the determining of the plurality of pieces of last-time monitoring result description information corresponding to the target result attribute information set includes:
determining second monitoring range coverage information and first monitoring range coverage information of each attribute label set in the attribute label sets corresponding to the target result attribute information set;
calculating the angle adjustment characteristic of each monitoring angle information in the attribute label set corresponding to the target result attribute information set based on the second monitoring range coverage information and the first monitoring range coverage information;
sorting the monitoring angle information items in the attribute label set corresponding to the target result attribute information set in descending order of the feature weight corresponding to the angle adjustment characteristic, determining the first set-threshold number of monitoring angle information items at the top of the ranking as a first monitoring angle information set, and determining the second set-threshold number of monitoring angle information items at the bottom of the ranking as a second monitoring angle information set;
and determining the second monitoring angle information set as the last-time monitoring result description information corresponding to the target result attribute information set.
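The ranking-and-split step at the end of claim 6 can be sketched as follows; the function name, the plain-list representation of angle information, and the precomputed weight list are illustrative assumptions, since the patent does not specify data structures.

```python
def split_monitoring_angles(angle_infos, weights, first_threshold, second_threshold):
    """Rank monitoring angle information by descending feature weight, then
    take the top `first_threshold` items as the first monitoring angle
    information set and the bottom `second_threshold` items as the second
    set (used as the last-time monitoring result description information)."""
    ranked = [info for _, info in sorted(zip(weights, angle_infos),
                                         key=lambda pair: pair[0],
                                         reverse=True)]
    first_set = ranked[:first_threshold]
    second_set = ranked[len(ranked) - second_threshold:]
    return first_set, second_set
```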
7. The method according to claim 1, wherein the determining, according to the image feature correlation matrix between the safety monitoring image sets corresponding to the image update cycles in the first monitoring and early warning activation period, the monitoring index distribution of the target driving monitoring area in the first monitoring and early warning activation period includes:
determining a matrix correction coefficient of each image characteristic correlation matrix according to the number of safety monitoring images contained in a safety monitoring image set corresponding to each image updating period in the first monitoring early warning activation period;
determining the monitoring index distribution of the target driving monitoring area in the first monitoring early warning activation period according to the image characteristic correlation matrix among the safety monitoring image sets corresponding to the image updating periods in the first monitoring early warning activation period and the matrix correction coefficient of each image characteristic correlation matrix;
wherein the method further comprises: sending prompt information to a target driving monitoring object through prompt equipment in the target driving monitoring area according to a safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period.
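Claim 7's weighting of the correlation matrices by image counts might look like the following sketch; deriving each matrix correction coefficient as the normalized image count is an assumption, as the patent only states that the coefficient depends on the number of images in the set.

```python
import numpy as np

def weighted_index_distribution(correlation_matrices, image_counts):
    """Aggregate per-cycle image feature correlation matrices into a
    monitoring index distribution, weighting each matrix by a correction
    coefficient derived from its set's image count (assumed formula:
    count normalized over the total count)."""
    counts = np.asarray(image_counts, dtype=float)
    coeffs = counts / counts.sum()  # one correction coefficient per matrix
    stacked = np.stack([np.asarray(m, dtype=float) for m in correlation_matrices])
    # Scale each matrix by its coefficient, then sum over the cycle axis.
    return (stacked * coeffs[:, None, None]).sum(axis=0)
```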
8. An intelligent driving monitoring system, characterized by comprising intelligent driving monitoring equipment and an Internet of Things intelligent camera which communicate with each other, wherein the intelligent driving monitoring equipment is configured to:
acquiring a safety monitoring image set corresponding to each image updating cycle in a first monitoring early warning activation period of a target driving monitoring area, wherein the first monitoring early warning activation period comprises at least two image updating cycles, and the safety monitoring image set corresponding to each image updating cycle comprises biological characteristic images of a target driving monitoring object that are shot or received by the Internet of Things intelligent camera in the target driving monitoring area in the corresponding image updating cycle;
determining an image characteristic correlation matrix between safety monitoring image sets corresponding to each image updating period in the first monitoring early warning activation period;
determining the monitoring index distribution of the target driving monitoring area in the first monitoring early warning activation period according to the image characteristic correlation matrix among the safety monitoring image sets corresponding to each image updating period in the first monitoring early warning activation period;
and determining a safety monitoring identification result of the target driving monitoring area in the first monitoring early warning activation time period according to the monitoring index distribution.
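One plausible reading of the "image feature correlation matrix between safety monitoring image sets" is a pairwise similarity matrix over per-cycle feature summaries; the mean pooling and cosine-similarity measure in the sketch below are illustrative assumptions, not the patent's stated method.

```python
import numpy as np

def image_feature_correlation_matrix(feature_sets):
    """Build a pairwise correlation matrix between safety monitoring image
    sets, one set per image update cycle.

    Each element of `feature_sets` is an (n_images, n_features) array of
    image feature vectors; the set-level summary (mean pooling) and the
    cosine-similarity measure are assumptions for illustration.
    """
    means = np.stack([np.asarray(fs, dtype=float).mean(axis=0)
                      for fs in feature_sets])
    unit = means / np.linalg.norm(means, axis=1, keepdims=True)
    return unit @ unit.T  # entry (i, j): similarity between cycles i and j
```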
9. An intelligent driving monitoring device, comprising a processor and a memory communicating with each other, wherein the processor performs the method of any one of claims 1-7 when running a computer program read from the memory.
10. A storable medium having stored thereon a computer program which, when executed, implements the method of any one of claims 1-7.
CN202011309801.8A 2020-11-20 2020-11-20 Intelligent driving monitoring method, system, equipment and storable medium Active CN112381022B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011309801.8A CN112381022B (en) 2020-11-20 2020-11-20 Intelligent driving monitoring method, system, equipment and storable medium

Publications (2)

Publication Number Publication Date
CN112381022A true CN112381022A (en) 2021-02-19
CN112381022B CN112381022B (en) 2021-05-18

Family

ID=74584532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011309801.8A Active CN112381022B (en) 2020-11-20 2020-11-20 Intelligent driving monitoring method, system, equipment and storable medium

Country Status (1)

Country Link
CN (1) CN112381022B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1703437A2 (en) * 2005-03-15 2006-09-20 Omron Corporation Monitoring system, monitoring device and method, recording medium, and program
CN103258427A (en) * 2013-04-24 2013-08-21 北京工业大学 Urban expressway traffic real-time monitoring system and method based on information physical network
WO2017217865A1 (en) * 2016-06-16 2017-12-21 Roxel Aanestad As Tunnel monitoring system and method of operation
CN107528757A (en) * 2017-08-30 2017-12-29 北京润科通用技术有限公司 A kind of monitoring method of MVB data, apparatus and system
CN109447048A (en) * 2018-12-25 2019-03-08 苏州闪驰数控系统集成有限公司 A kind of artificial intelligence early warning system
CN110164128A (en) * 2019-04-23 2019-08-23 银江股份有限公司 A kind of City-level intelligent transportation analogue system
CN110349292A (en) * 2019-07-15 2019-10-18 西安邮电大学 A kind of driving states monitoring method based on intelligent mobile phone platform
CN110929224A (en) * 2019-11-15 2020-03-27 上海电科智能系统股份有限公司 Safety index system establishing method based on bus driving safety
CN111145545A (en) * 2019-12-25 2020-05-12 西安交通大学 Road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning
CN111667697A (en) * 2019-03-06 2020-09-15 北京京东尚科信息技术有限公司 Abnormal vehicle identification method and device, and computer-readable storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CÉSAR PEDRAZA 等: "PCIV, an RFID-Based Platform for Intelligent Vehicle Monitoring", 《IEEE INTELLIGENT TRANSPORTATION SYSTEMS MAGAZINE》 *
CONG LAI 等: "Zynq-based Full HD Around View Monitor System for Intelligent Vehicle", 《PROCEEDINGS OF APSIPA ANNUAL SUMMIT AND CONFERENCE 2017》 *
KUNDJANASITH THONGLEK 等: "IVAA: Intelligent Vehicle Accident Analysis System", 《2019 16TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER SCIENCE AND SOFTWARE ENGINEERING (JCSSE)》 *
MA XIAOBO et al.: "Research on the Comprehensive Security of Internet of Vehicles Cyber-Physical Fusion Systems in the 5G Era", 《中国科学》 (SCIENTIA SINICA) *

Also Published As

Publication number Publication date
CN112381022B (en) 2021-05-18

Similar Documents

Publication Publication Date Title
KR101995107B1 (en) Method and system for artificial intelligence based video surveillance using deep learning
CN112164227B (en) Parking violation vehicle warning method and device, computer equipment and storage medium
CN118247775B (en) Intelligent fatigue driving monitoring and early warning method and system based on camera
CN114637884B (en) Method, device and equipment for matching cable-stayed cable-computed space-time trajectory with road network
CN112686186A (en) High-altitude parabolic recognition method based on deep learning and related components thereof
CN114040094B (en) Preset position adjusting method and device based on cradle head camera
CN111383248B (en) Pedestrian red light running judging method and device and electronic equipment
CN113674317B (en) Vehicle tracking method and device for high-level video
JP2009506468A (en) Improved event detection
CN112381022B (en) Intelligent driving monitoring method, system, equipment and storable medium
JP7525561B2 (en) Validation of updated analytical procedures in surveillance systems
CN111325178A (en) Warning object detection result acquisition method and device, computer equipment and storage medium
CN113593256B (en) Unmanned aerial vehicle intelligent driving-away control method and system based on city management and cloud platform
CN115169588A (en) Electrographic computation space-time trajectory vehicle code correlation method, device, equipment and storage medium
CN115049987A (en) Pet safety management method, system, computer equipment and storage medium
CN114743140A (en) Fire fighting access occupation identification method and device based on artificial intelligence technology
CN114119531A (en) Fire detection method and device applied to campus smart platform and computer equipment
CN111145558B (en) Illegal behavior identification method based on high-point video monitoring
CN113592902A (en) Target tracking method and device, computer equipment and storage medium
CN118433330B (en) Method for reducing false alarm rate of side monitoring by using large model
CN115714717B (en) Internet of things terminal communication link fault positioning method based on flow characteristics
CN118366109B (en) Expressway monitoring management method and management platform based on artificial intelligence
CN115861926A (en) Passenger behavior identification method and system in car type elevator and electronic equipment
CN114399725A (en) Monitoring method, monitoring device and computer readable storage medium
CN115689297A (en) Child abduction risk early warning method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant