CN115620248B - Camera calling method and system based on traffic monitoring - Google Patents

Camera calling method and system based on traffic monitoring

Info

Publication number
CN115620248B
Authority
CN
China
Prior art keywords
vehicle
data
target object
arm
curve
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211410545.0A
Other languages
Chinese (zh)
Other versions
CN115620248A (en)
Inventor
张伟
罗鑫
钟星
黄俊涛
向宇舟
陈智行
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Central China Technology Development Of Electric Power Co ltd
Wuhan Shiyun Technology Co ltd
Original Assignee
Wuhan Shiyun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Shiyun Technology Co ltd filed Critical Wuhan Shiyun Technology Co ltd
Priority to CN202211410545.0A priority Critical patent/CN115620248B/en
Publication of CN115620248A publication Critical patent/CN115620248A/en
Application granted granted Critical
Publication of CN115620248B publication Critical patent/CN115620248B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/586Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of parking space
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a camera calling method and system based on traffic monitoring, wherein the method comprises the following steps: acquiring current position information of a vehicle and a first threshold; searching traffic monitoring videos around the vehicle according to the current position information, locating each target object, and calculating the average distance between each target object and the vehicle within a preset time period by using a clustering algorithm and an anomaly detection algorithm; comparing the average distance between each target object and the vehicle with the first threshold, and identifying arm key points of a target object if the average distance is smaller than the first threshold; fitting the arm key points to obtain an arm motion curve, and obtaining the degree of danger the target object poses to the vehicle from the arm motion curve; and calling different cameras on the vehicle, according to the degree of danger, to shoot the target object and obtain shooting results. The video data obtained by the method increases the possibility of recovering lost articles and reduces the loss to the vehicle owner.

Description

Camera calling method and system based on traffic monitoring
Technical Field
The invention relates to the technical field of traffic, in particular to a camera calling method and system based on traffic monitoring.
Background
At present, many parking spaces are arranged on both sides of roads, and many vehicles are parked in these surface parking spaces. A vehicle parked in a surface parking space may be pried open; in that situation the vehicle owner is usually not beside the vehicle and cannot intervene, so articles in the vehicle may be lost. Moreover, once the articles are lost, they often cannot be retrieved because no corresponding video data is available.
Disclosure of Invention
The invention aims to provide a camera calling method and a camera calling system based on traffic monitoring so as to solve the problems.
In order to achieve the above purpose, the embodiment of the present application provides the following technical solutions:
in one aspect, an embodiment of the present application provides a camera invoking method based on traffic monitoring, where the method includes:
acquiring current position information and a first threshold value of a vehicle, wherein at least two camera devices are installed on the vehicle, the installation positions of the camera devices are different, and the position information at least comprises street information of the vehicle;
searching traffic monitoring videos around the vehicle according to current position information of the vehicle, positioning each target object in the traffic monitoring videos, and calculating the average distance between each target object and the vehicle within a preset time period by using a clustering algorithm and an anomaly detection algorithm;
comparing the average distance between each target object and the vehicle with the first threshold value, and identifying arm key points of the target objects if the average distance is smaller than the first threshold value;
fitting the arm key points by adopting a Bezier curve to obtain an arm motion curve, obtaining a motion trail identification result according to the arm motion curve, and obtaining the dangerous degree of the target object to the vehicle based on the motion trail identification result;
and according to the different dangerous degrees, calling different cameras on the vehicle to shoot the target object, so as to obtain shooting results.
In a second aspect, an embodiment of the present application provides a camera invoking system based on traffic monitoring, where the system includes an acquisition module, a first calculation module, an identification module, a second calculation module, and a calling module.
The acquisition module is used for acquiring current position information of the vehicle and a first threshold, wherein at least two camera devices with different installation positions are installed on the vehicle, and the position information at least comprises the street on which the vehicle is located;
the first calculation module is used for searching traffic monitoring videos around the vehicle according to the current position information of the vehicle, positioning each target object in the traffic monitoring videos, and calculating the average distance from the vehicle to each target object in a preset time period by using a clustering algorithm and an anomaly detection algorithm;
the identification module is used for comparing the average distance between each target object and the vehicle with the first threshold value, and if the average distance is smaller than the first threshold value, identifying arm key points of the target objects;
the second calculation module is used for fitting the arm key points by adopting a Bezier curve to obtain an arm motion curve, obtaining a motion track recognition result according to the arm motion curve, and obtaining the dangerous degree of the target object to the vehicle based on the motion track recognition result;
and the calling module is used for calling different cameras on the vehicle to shoot the target object according to the different dangerous degrees to obtain shooting results.
In a third aspect, embodiments of the present application provide a camera calling device based on traffic monitoring, the device including a memory and a processor. The memory is used for storing a computer program; the processor is used for implementing the steps of the camera calling method based on traffic monitoring when executing the computer program.
In a fourth aspect, embodiments of the present application provide a readable storage medium having a computer program stored thereon, where the computer program when executed by a processor implements the steps of the above-described traffic monitoring-based camera invoking method.
The beneficial effects of the invention are as follows:
according to the invention, the fact that the dangerous degree of the target object close to the vehicle is high is considered, so that a reasonable range is divided for frame selection of the target object; on the basis, the average distance of a period of time is used for measuring the distance between the target object and the vehicle in the period of time, and compared with a method of comparing the distance of each moment with a threshold value, the average distance is used for reflecting the distance degree between the target object and the vehicle; based on the comparison of the average distance and the threshold value, the motion trail of the arm is obtained in consideration of the fact that the motion of the arm is generally accompanied when the sled is implemented, whether the vehicle is dangerous or not is judged by utilizing the motion trail, and then the motion trail of the handheld device is calculated in consideration of the fact that the sled generally uses tools, so that the degree of danger to the vehicle is finally determined by the method; finally, according to the dangerous degree, different camera devices are called to capture the behavior action of the target object so as to facilitate the back-up of the subsequent articles. According to the method, different calculation modes are selected through a layer-by-layer judgment progressive mode to calculate the calling method of the final camera device, so that resources can be reasonably utilized, and the calculation workload is reduced; the method can also acquire the video data of the target object, and under the condition that the object is lost due to the actual occurrence of the prying event, the possibility of object recovery can be increased through the acquired video data, and the loss of the vehicle owner is reduced.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a camera calling method based on traffic monitoring according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a camera call system based on traffic monitoring according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a camera calling device based on traffic monitoring according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals or letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
Example 1
As shown in fig. 1, the present embodiment provides a camera calling method based on traffic monitoring, which includes step S1, step S2, step S3, step S4 and step S5.
S1, acquiring current position information and a first threshold value of a vehicle, wherein at least two camera devices are installed on the vehicle, the installation positions of the camera devices are different, and the position information at least comprises street information of the vehicle;
in this step, the mounting positions of the image capturing devices may be, for example, in a front-rear relationship, a front-side relationship, or a rear-side relationship, and the specific mounting manner may be chosen according to the needs of the user; in addition, the first threshold in this step is also set in a user-defined way according to the needs of the user;
s2, searching traffic monitoring videos around the vehicle according to current position information of the vehicle, positioning each target object in the traffic monitoring videos, and calculating the average distance between each target object and the vehicle within a preset time period by using a clustering algorithm and an anomaly detection algorithm;
in this step, according to the current position information of the vehicle, namely the street information, the traffic monitoring videos around the vehicle can be found, and the target objects around the vehicle, namely target persons, are then selected from the monitoring videos; the preset time period in this step can be set by the user as required, for example 10 s or 20 s; the specific implementation of calculating the average distance between each target object and the vehicle within the preset time period by using a clustering algorithm and an anomaly detection algorithm comprises steps S21, S22, S23 and S24;
step S21, setting a second threshold value, and determining a monitoring range according to the second threshold value, wherein the monitoring range is a circular range formed by taking the vehicle as a center point and taking the second threshold value as a radius;
in practice, it is generally the target persons close to the vehicle who are likely to pose a danger to it; therefore, in order to narrow the screening range and reduce the calculation work, a second threshold is set in this embodiment, and the target objects to be analyzed and calculated are determined through the second threshold;
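To make step S21 concrete, the following is a minimal Python sketch of the circular monitoring range check. The coordinate format, the function name and the 5 m radius used in the example are illustrative assumptions and are not taken from the patent.

```python
import math

def targets_in_monitoring_range(vehicle_xy, target_positions, second_threshold):
    """Keep only the targets whose straight-line distance to the vehicle is within the radius."""
    vx, vy = vehicle_xy
    selected = []
    for target_id, (tx, ty) in target_positions.items():
        if math.hypot(tx - vx, ty - vy) <= second_threshold:  # inside the circle centred on the vehicle
            selected.append(target_id)
    return selected

# Example: vehicle at the origin, two pedestrians, a 5 m monitoring radius.
print(targets_in_monitoring_range((0.0, 0.0), {"p1": (1.2, 3.0), "p2": (9.0, 4.0)}, 5.0))  # ['p1']
```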
step S22, acquiring a plurality of data sets, wherein each data set comprises the distances from one target object within the circular range to the vehicle at each moment, and first data, second data and third data in each data set are each taken as a first cluster center, the first data, the second data and the third data being distances from the target object to the vehicle at different moments in the data set;
this step can be understood as follows: for example, one data set includes 10 data points, each corresponding to the distance from the vehicle at a different moment; 3 of the 10 data points are then randomly selected, and each of the 3 is recorded as a first cluster center;
step S23, calculating the distance between each datum in the data set and the first data, the second data and the third data respectively, screening out, for each datum, the nearest first cluster center, and assigning the datum to that center, so as to obtain a plurality of data groups;
this step can be understood as follows: for example, if a datum in the data set is closest to the first data, it is assigned to the first data; following this logic, the data can be divided into multiple classes, namely multiple data groups;
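The following is a minimal sketch of steps S22 and S23 under the worked example above: three samples of one data set are taken as first cluster centers and every sample is attributed to its nearest center. The function name, the random selection with a fixed seed and the list format are assumptions made only for illustration.

```python
import random

def group_by_nearest_center(distances, seed=0):
    """distances: the per-moment target-to-vehicle distances of one data set."""
    rng = random.Random(seed)
    centers = rng.sample(distances, 3)                      # three samples taken as first cluster centers
    groups = {i: [] for i in range(len(centers))}
    for d in distances:
        nearest = min(range(len(centers)), key=lambda i: abs(d - centers[i]))  # nearest center wins
        groups[nearest].append(d)
    return centers, groups

# Ten distances sampled at different moments; two clearly separated bands are visible.
print(group_by_nearest_center([4.1, 4.3, 4.0, 8.9, 9.2, 4.2, 8.8, 4.4, 9.0, 4.5]))
```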
s24, analyzing the data sets to obtain abnormal data in each data set, removing the abnormal data to obtain the removed data sets, and calculating the average value of all data in all the removed data sets to obtain the average distance between a target object and the vehicle;
the specific implementation steps of the step comprise a step S241 and a step S242;
s241, improving a K-means++ algorithm, and combining an Rsim function and a cosine coefficient as a similarity measurement function to obtain the improved K-means++ algorithm;
step S242, training the improved K-means++ algorithm on each data set to obtain the configuration parameters of the improved K-means++ algorithm corresponding to that data set, identifying the abnormal data of each data set by using the improved K-means++ algorithm configured with those parameters, and removing the identified abnormal data to obtain the data set after removal.
In this step, abnormal data are removed through the clustering algorithm, and the average distance calculated from the remaining data is more accurate and better reflects how far the target object is from the vehicle;
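The patent does not define the Rsim function, so the sketch below uses a reciprocal-distance similarity purely as a placeholder; the blend weight, the cut-off and all names are assumptions. It only illustrates the general idea of steps S241 and S242, reusing a grouping of the form produced in steps S22 and S23 (a mapping from cluster-center index to member distances): score each sample against its group center with a combined similarity measure, drop low-similarity samples as abnormal data, and average what remains.

```python
import numpy as np

def rsim_placeholder(a, b):
    # Assumed stand-in for the patent's undefined Rsim function.
    return 1.0 / (1.0 + abs(a - b))

def cosine_coefficient(a, b):
    # For scalar distances the cosine term degenerates to ~1 for positive values;
    # it is kept here only to mirror the patent's combined-measure wording.
    return (a * b) / (abs(a) * abs(b) + 1e-9)

def trimmed_average_distance(centers, groups, alpha=0.5, sim_cutoff=0.6):
    """groups: {center_index: [distances]}; returns the average over the non-abnormal data."""
    kept = []
    for idx, members in groups.items():
        c = centers[idx]
        for d in members:
            sim = alpha * rsim_placeholder(d, c) + (1 - alpha) * cosine_coefficient(d, c)
            if sim >= sim_cutoff:   # low combined similarity -> treated as abnormal data and removed
                kept.append(d)
    return float(np.mean(kept)) if kept else float("nan")
```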
s3, comparing the average distance between each target object and the vehicle with the first threshold value, and identifying arm key points of the target objects if the average distance is smaller than the first threshold value;
in this step, considering that a target object close to the vehicle may cause damage to it, this embodiment compares the average distance with the first threshold and decides from the comparison result whether the next step is required: if the average distance is greater than or equal to the first threshold, the following steps are not executed and the method ends. Meanwhile, since prying a vehicle is usually done with the arm, the arm key points of the target object are acquired in this embodiment;
s4, fitting the arm key points by adopting a Bezier curve to obtain an arm motion curve, obtaining a motion trail identification result according to the arm motion curve, and obtaining the dangerous degree of the target object to the vehicle based on the motion trail identification result; the specific implementation steps of the step comprise a step S41, a step S42 and a step S43;
step S41, recognizing the arm key points of each target object in the traffic monitoring video at each moment, and fitting the motion trail of the arm key points of the target object by utilizing a Bezier curve according to the recognized arm key points to obtain an arm motion trail curve;
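A minimal sketch of the Bezier fit in step S41 follows. The patent only states that a Bezier curve is used; the cubic degree, the uniform parameterisation and the function names are assumptions.

```python
import numpy as np

def bernstein3(t):
    # Cubic Bernstein basis: one row per sample, one column per control point.
    return np.stack([(1 - t) ** 3,
                     3 * t * (1 - t) ** 2,
                     3 * t ** 2 * (1 - t),
                     t ** 3], axis=1)

def fit_cubic_bezier(keypoints):
    """keypoints: (N, 2) array of one arm key point's (x, y) positions over time."""
    pts = np.asarray(keypoints, dtype=float)
    t = np.linspace(0.0, 1.0, len(pts))                          # uniform parameterisation over the time window
    ctrl, *_ = np.linalg.lstsq(bernstein3(t), pts, rcond=None)   # least-squares control points
    return ctrl                                                  # (4, 2) control points of the fitted curve

def sample_curve(ctrl, n=50):
    # Dense samples of the fitted curve, usable as the arm motion trail curve downstream.
    return bernstein3(np.linspace(0.0, 1.0, n)) @ ctrl
```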
step S42, obtaining a motion trail recognition result based on a CART algorithm and the arm motion trail curve, wherein the motion trail recognition result is either dangerous or non-dangerous; if the result is non-dangerous, information that there is no danger is sent to the user; if the result is dangerous, it is judged whether the target object is holding equipment for prying open a vehicle door or window; if no such equipment is held, an alarm device on the vehicle is started and is used to warn the target object away from the vehicle; if such equipment is held, the alarm device on the vehicle is started, key points of the equipment are identified, and motion trail fitting is performed with a Bezier curve to obtain an equipment motion fitting curve. In this step, the specific implementation of obtaining the motion trail recognition result based on the CART algorithm and the arm motion trail curve comprises steps S421, S422 and S423;
step S421, acquiring historical arm motion trail curves, labeling (calibrating) each historical arm motion trail curve, and dividing the labeled historical arm motion trail curves into four data subsets;
step S422, obtaining an initial decision tree based on the CART algorithm and each data subset, and pruning the CART decision trees to obtain four target decision trees;
step S423, recognizing the arm motion trail curve to be recognized by using the four target decision trees, and taking the mode of the four recognition results as the recognition result of the arm motion trail curve to be recognized;
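The following sketch illustrates steps S421 to S423 with scikit-learn's DecisionTreeClassifier as an assumed CART implementation. How feature vectors are extracted from a trail curve is not specified by the patent and is left to the caller, and the depth limit standing in for pruning is likewise an assumption.

```python
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

def train_target_trees(subsets, max_depth=4):
    """subsets: four (X, y) pairs built from the labeled historical arm motion trail curves."""
    trees = []
    for X, y in subsets:
        tree = DecisionTreeClassifier(max_depth=max_depth)  # depth limit used in place of explicit pruning
        tree.fit(X, y)
        trees.append(tree)
    return trees

def classify_trajectory(trees, features):
    """features: one feature vector describing the curve to be recognized."""
    votes = [tree.predict([features])[0] for tree in trees]
    return Counter(votes).most_common(1)[0][0]              # mode of the four recognition results
```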
and step S43, judging the distance between the equipment and the vehicle according to the equipment motion fitting curve, and obtaining the risk degree of the target object according to the distance between the equipment and the vehicle.
In this step, a correspondence table between the distance of the equipment from the vehicle and the risk degree can be preset manually, and the risk degree of the target object is then looked up in this table;
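A minimal sketch of such a manually preset distance-to-risk correspondence table follows; the break points and the two labels are illustrative assumptions only.

```python
# (upper bound of the device-to-vehicle distance in metres, risk degree)
RISK_TABLE = [(0.5, "severe"), (1.5, "mild")]

def risk_from_device_distance(distance_m):
    for upper_bound, degree in RISK_TABLE:
        if distance_m <= upper_bound:
            return degree
    return "none"

print(risk_from_device_distance(0.3))   # -> "severe"
print(risk_from_device_distance(1.0))   # -> "mild"
```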
and S5, according to the different dangerous degrees, calling different cameras on the vehicle to shoot the target object to obtain shooting results. The specific implementation of this step comprises step S51;
step S51, analyzing the risk degree; if the risk degree is a mild risk, calling the camera device closest in straight-line distance to the target object to shoot the target object; and if the risk degree is a severe risk, calling all the camera devices to shoot the target object.
In this step, if the risk is mild, only one camera device is called for shooting in order to avoid wasting resources; if the risk is severe, all cameras are called for all-round shooting in order to support the subsequent recovery of the articles; this allocation makes reasonable use of the shooting resources;
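A minimal sketch of the camera allocation in step S51; the camera record format and the risk labels ("mild", "severe") are illustrative assumptions.

```python
def cameras_to_call(risk_degree, cameras, target_xy):
    """cameras: list of records like {"id": "front", "xy": (x, y)}; returns the ids of the cameras to call."""
    if risk_degree == "mild":
        # Only the camera closest in straight-line distance to the target object is called.
        nearest = min(cameras, key=lambda c: (c["xy"][0] - target_xy[0]) ** 2
                                             + (c["xy"][1] - target_xy[1]) ** 2)
        return [nearest["id"]]
    if risk_degree == "severe":
        return [c["id"] for c in cameras]          # all cameras, for all-round shooting
    return []                                      # no danger: no camera is called
```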
in this embodiment, after step S5, the collected resources may also be sent to the user and displayed on the display interface of the user, which specifically includes:
step S6, displaying a first object in a first area of a display interface, wherein the first object comprises the shooting resources acquired by the camera devices; and displaying two second objects in a second area of the display interface, namely an image enhancement object and an image magnification object, which are arranged vertically in sequence;
step S7, acquiring a selection operation, wherein the selection operation comprises a selection operation on any position in the first object; and acquiring a clicking operation, wherein the clicking operation comprises a clicking operation on either the image enhancement object or the image magnification object;
and step S8, in response to the clicking operation, displaying a third object in a third area of the display interface, wherein the third object comprises an image enhancement result or an image magnification result of the selected position.
Through the display and selection operations on the display interface, details in the video can be observed in a targeted manner, which helps the user obtain more useful video details;
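A minimal sketch of the display interaction in steps S6 to S8 follows; the handler name, the names of the two second objects and the result format are assumptions, and the actual image enhancement and magnification routines are outside the scope of this sketch.

```python
def handle_click(selected_xy, clicked_object, enhance_fn, magnify_fn):
    """Route a position selection plus a click on a second object to a result for the third display area."""
    if clicked_object == "image_enhancement":
        return {"area": "third", "result": enhance_fn(selected_xy)}
    if clicked_object == "image_magnification":
        return {"area": "third", "result": magnify_fn(selected_xy)}
    raise ValueError("unknown second object: " + str(clicked_object))
```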
through the above steps it can be seen that, first, this embodiment considers that a target object close to the vehicle poses a higher degree of danger, so a reasonable range is defined for selecting target objects. On this basis, the average distance over a period of time is used to measure how far each target object is from the vehicle during that period; compared with comparing the distance at every single moment against a threshold, the average distance better reflects how close the target object stays to the vehicle. After comparing the average distance with the threshold, and considering that prying a vehicle is usually accompanied by arm movement, the motion trail of the arm is obtained and used to judge whether the vehicle is in danger; then, considering that prying usually involves a tool, the motion trail of the handheld device is calculated, so that the degree of danger to the vehicle is finally determined. Finally, according to the degree of danger, different camera devices are called to capture the behavior of the target object, to facilitate the subsequent recovery of the articles.
Therefore, in this embodiment, different calculation modes are selected through layer-by-layer, progressive judgment to determine how the camera devices are finally called, which makes reasonable use of resources and reduces the calculation workload. The method also acquires video data of the target object; if a prying event actually occurs and articles are lost, the acquired video data increases the possibility of recovering them and reduces the loss to the vehicle owner.
Example 2
As shown in fig. 2, the present embodiment provides a camera invoking system based on traffic monitoring, which includes an acquisition module 701, a first calculation module 702, an identification module 703, a second calculation module 704, and an invoking module 705.
An obtaining module 701, configured to obtain current location information of a vehicle and a first threshold, where at least two image capturing devices are installed on the vehicle, and installation locations of the image capturing devices are different, where the location information at least includes street information on which the vehicle is located;
the first calculating module 702 is configured to find a traffic monitoring video around the vehicle according to current position information of the vehicle, locate each target object in the traffic monitoring video, and calculate an average distance between each target object and the vehicle in a preset time period by using a clustering algorithm and an anomaly detection algorithm;
an identifying module 703, configured to compare an average distance from the vehicle to each target object with the first threshold, and identify an arm key point of the target object if the average distance is less than the first threshold;
the second calculation module 704 is configured to fit the arm key points with a bezier curve to obtain an arm motion curve, obtain a motion track recognition result according to the arm motion curve, and obtain a risk degree of the target object on the vehicle based on the motion track recognition result;
and the calling module 705 is configured to call different cameras on the vehicle to shoot the target object according to the different dangerous degrees, so as to obtain a shooting result.
In a specific embodiment of the disclosure, the first calculating module 702 further includes a setting unit 7021, a first obtaining unit 7022, a first calculating unit 7023, and an analyzing unit 7024.
A setting unit 7021, configured to set a second threshold and determine a monitoring range according to the second threshold, where the monitoring range is a circular range formed by taking the vehicle as the center point and the second threshold as the radius;
a first obtaining unit 7022, configured to obtain a plurality of data sets, each data set including a distance from the vehicle at each moment of a target object in the circular range, and using first data, second data, and third data in each data set as a first cluster center, where the first data, the second data, and the third data are distances from the vehicle at different moments of the target object in the data set;
a first calculating unit 7023, configured to calculate distances between each data in the dataset and the first data, the second data, and the third data, screen out the first cluster center with the closest distance corresponding to each data, and assign the first cluster center to the first cluster center, so as to obtain a plurality of data groups;
the analysis unit 7024 is configured to analyze the data sets to obtain abnormal data in each data set, reject the abnormal data to obtain the rejected data sets, and perform average calculation on all data in all the rejected data sets to obtain an average distance between the target object and the vehicle.
In one embodiment of the present disclosure, the analysis unit 7024 further comprises a modification unit 70241 and a training unit 70242.
The improving unit 70241 is used for improving the K-means++ algorithm, combining the Rsim function and the cosine coefficient as a similarity measurement function, and obtaining the improved K-means++ algorithm;
the training unit 70242 is configured to train the modified K-means++ algorithm by using each data set to obtain configuration parameters of the modified K-means++ algorithm corresponding to each data set, identify abnormal data of each data set by using the modified K-means++ algorithm after the configuration parameters, and reject the abnormal data after the identifying, so as to obtain the data set after rejection.
In a specific embodiment of the disclosure, the second calculating module 704 further includes a first fitting unit 7041, a second fitting unit 7042, and a second calculating unit 7043.
A first fitting unit 7041, configured to identify an arm key point of each target object in the traffic monitoring video at each time, and perform motion trail fitting on the arm key point of the target object by using a bezier curve according to the identified arm key point, so as to obtain an arm motion trail curve;
the second fitting unit 7042 is configured to obtain a motion trail identification result based on a CART algorithm and the arm motion trail curve, where the motion trail identification result includes two results, i.e., dangerous and non-dangerous, if the motion trail identification result is non-dangerous, send information that there is no danger to a user, if the motion trail identification result is dangerous, determine whether the target object has equipment for prying open a door or a window on the hand, and if the target object does not have equipment for prying open a door or a window on the hand, start an alarm device on a vehicle, and wake the target object away from the vehicle by using the alarm device; if the motion trail is held, starting an alarm device on the vehicle, identifying key points of the equipment, and performing motion trail fitting by using a Bezier curve to obtain an equipment motion fitting curve;
and the second calculating unit 7043 is configured to determine a distance between the device and the vehicle according to the motion fitting curve of the device, and obtain the risk degree of the target object according to the distance between the device and the vehicle.
In a specific embodiment of the disclosure, the second fitting unit 7042 further includes a second obtaining unit 70421, a clipping unit 70422, and an identifying unit 70423.
The second obtaining unit 70421 is configured to obtain a historical arm motion trajectory curve, calibrate each historical arm motion trajectory curve, and divide the calibrated historical arm motion trajectory curve into four data subsets;
the clipping unit 70422 is configured to obtain an initial decision tree based on a CART algorithm and each data subset, and clip the CART decision tree to obtain four target decision trees;
the recognition unit 70423 is configured to recognize the arm motion trajectory curve to be recognized by using four target decision trees, and take the mode of the recognition result as the recognition result of the arm motion trajectory curve to be recognized.
In one specific embodiment of the present disclosure, the calling module 705 further includes a calling unit 7051.
The calling unit 7051 is configured to analyze the risk degree and, if the risk degree is a mild risk, call the camera device closest in straight-line distance to the target object to shoot the target object; and if the risk degree is a severe risk, call all the camera devices to shoot the target object.
It should be noted that, regarding the apparatus in the above embodiments, the specific manner in which the respective modules perform the operations has been described in detail in the embodiments regarding the method, and will not be described in detail herein.
Example 3
Corresponding to the above method embodiments, the embodiments of the present disclosure further provide a traffic monitoring-based camera calling device, where the traffic monitoring-based camera calling device described below and the traffic monitoring-based camera calling method described above may be referred to correspondingly with each other.
Fig. 3 is a block diagram illustrating a traffic monitoring-based camera calling device 800 according to an exemplary embodiment. As shown in Fig. 3, the traffic monitoring-based camera calling device 800 may include a processor 801 and a memory 802. The traffic monitoring-based camera calling device 800 may also include one or more of a multimedia component 803, an I/O interface 804, and a communication component 805.
The processor 801 is configured to control the overall operation of the traffic monitoring-based camera calling device 800, so as to complete all or part of the steps in the traffic monitoring-based camera calling method. The memory 802 is used to store various types of data to support operation at the traffic monitoring-based camera calling device 800, which may include, for example, instructions for any application or method operating on the device, as well as application-related data such as contact data, sent and received messages, pictures, audio, video, and the like. The memory 802 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (Static Random Access Memory, SRAM for short), electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM for short), erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM for short), programmable read-only memory (Programmable Read-Only Memory, PROM for short), read-only memory (Read-Only Memory, ROM for short), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 803 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signals may be further stored in the memory 802 or transmitted through the communication component 805. The audio component further comprises at least one speaker for outputting audio signals. The I/O interface 804 provides an interface between the processor 801 and other interface modules, which may be a keyboard, a mouse, buttons, etc.; these buttons may be virtual buttons or physical buttons. The communication component 805 is configured to perform wired or wireless communication between the traffic monitoring-based camera calling device 800 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC for short), 2G, 3G or 4G, or a combination of one or more of them; accordingly, the communication component 805 may comprise a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the traffic monitoring-based camera calling device 800 may be implemented by one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated as ASIC), digital signal processors (Digital Signal Processor, abbreviated as DSP), digital signal processing devices (Digital Signal Processing Device, abbreviated as DSPD), programmable logic devices (Programmable Logic Device, abbreviated as PLD), field programmable gate arrays (Field Programmable Gate Array, abbreviated as FPGA), controllers, microcontrollers, microprocessors, or other electronic components, for performing the traffic monitoring-based camera calling method described above.
In another exemplary embodiment, a computer readable storage medium is also provided, comprising program instructions which, when executed by a processor, implement the steps of the traffic monitoring-based camera calling method described above. For example, the computer readable storage medium may be the memory 802 described above, including program instructions executable by the processor 801 of the traffic monitoring-based camera calling device 800 to perform the traffic monitoring-based camera calling method described above.
Example 4
Corresponding to the above method embodiments, the present disclosure further provides a readable storage medium, where a readable storage medium described below and a camera invoking method based on traffic monitoring described above may be referred to correspondingly.
A readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the camera invoking method based on traffic monitoring of the above-described method embodiments.
The readable storage medium may be a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or the like.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. The camera calling method based on traffic monitoring is characterized by comprising the following steps:
acquiring current position information and a first threshold value of a vehicle, wherein at least two camera devices are installed on the vehicle, the installation positions of the camera devices are different, and the position information at least comprises street information of the vehicle;
searching traffic monitoring videos around the vehicle according to current position information of the vehicle, positioning each target object in the traffic monitoring videos, and calculating the average distance between each target object and the vehicle within a preset time period by using a clustering algorithm and an anomaly detection algorithm;
comparing the average distance between each target object and the vehicle with the first threshold value, and identifying arm key points of the target objects if the average distance is smaller than the first threshold value;
fitting the arm key points by adopting a Bezier curve to obtain an arm motion curve, obtaining a motion trail identification result according to the arm motion curve, and obtaining the dangerous degree of the target object to the vehicle based on the motion trail identification result;
according to the different dangerous degrees, different cameras on the vehicle are called to shoot the target object, and shooting results are obtained;
calculating the average distance from each target object to the vehicle in a preset time period by using a clustering algorithm and an abnormality detection algorithm, wherein the method comprises the following steps of:
setting a second threshold value, and determining a monitoring range according to the second threshold value, wherein the monitoring range is a circular range formed by taking the vehicle as a center point and taking the second threshold value as a radius;
acquiring a plurality of data sets, wherein each data set comprises the distance from a target object to the vehicle at each moment in the circular range, and the first data, the second data and the third data in each data set are used as a first clustering center, and are the distances from the target object to the vehicle at different moments in the data set;
calculating the distance between each data in the data set and the first data, the second data and the third data respectively, screening out, for each data, the nearest first clustering center, and attributing the data to that clustering center, so as to obtain a plurality of data groups;
analyzing the data sets to obtain abnormal data in each data set, removing the abnormal data to obtain removed data sets, and carrying out average value calculation on all data in all the removed data sets to obtain the average distance between a target object and the vehicle;
analyzing the data sets to obtain abnormal data in each data set, wherein the abnormal data comprises the following steps:
improving the K-means++ algorithm, and combining an Rsim function and a cosine coefficient as a similarity measurement function to obtain the improved K-means++ algorithm;
training the improved K-means++ algorithm by using each data set to obtain configuration parameters of the improved K-means++ algorithm corresponding to each data set, identifying abnormal data of each data set by using the improved K-means++ algorithm configured with those parameters, and removing the identified abnormal data to obtain the data set after removal.
2. The traffic monitoring-based camera invoking method according to claim 1, wherein fitting the arm key points by using a bezier curve to obtain an arm motion curve, obtaining a motion track recognition result according to the arm motion curve, and obtaining the hazard degree of the target object to the vehicle by using the motion track recognition result comprises:
identifying the arm key points of each target object in the traffic monitoring video at each moment, and fitting the motion trail of the arm key points of the target object by utilizing a Bezier curve according to the identified arm key points to obtain an arm motion trail curve;
based on a CART algorithm and the arm motion trail curve, obtaining a motion trail recognition result, wherein the motion trail recognition result is either dangerous or non-dangerous; if the motion trail recognition result is non-dangerous, sending information that there is no danger to a user; if the motion trail recognition result is dangerous, judging whether the target object is holding equipment for prying open a vehicle door or window; if no such equipment is held, starting an alarm device on the vehicle and using the alarm device to warn the target object away from the vehicle; if such equipment is held, starting the alarm device on the vehicle, identifying key points of the equipment, and performing motion trail fitting with a Bezier curve to obtain an equipment motion fitting curve;
and judging the distance between the equipment and the vehicle according to the equipment motion fitting curve, and obtaining the risk degree of the target object according to the distance between the equipment and the vehicle.
3. A traffic monitoring-based camera calling system for implementing the traffic monitoring-based camera calling method of claim 1, comprising:
the acquisition module is used for acquiring current position information of a vehicle and a first threshold, wherein at least two camera devices with different installation positions are installed on the vehicle, and the position information at least comprises the street on which the vehicle is located;
the first calculation module is used for searching traffic monitoring videos around the vehicle according to the current position information of the vehicle, positioning each target object in the traffic monitoring videos, and calculating the average distance from the vehicle to each target object in a preset time period by using a clustering algorithm and an anomaly detection algorithm;
the identification module is used for comparing the average distance between each target object and the vehicle with the first threshold value, and if the average distance is smaller than the first threshold value, identifying arm key points of the target objects;
the second calculation module is used for fitting the arm key points by adopting a Bezier curve to obtain an arm motion curve, obtaining a motion track recognition result according to the arm motion curve, and obtaining the dangerous degree of the target object to the vehicle based on the motion track recognition result;
and the calling module is used for calling different cameras on the vehicle to shoot the target object according to the different dangerous degrees to obtain shooting results.
4. The traffic monitoring-based camera calling system of claim 3, wherein the first calculation module comprises:
the setting unit is used for setting a second threshold value, and determining a monitoring range according to the second threshold value, wherein the monitoring range is a circular range formed by taking the vehicle as a center point and the second threshold value as a radius;
a first acquisition unit, configured to acquire a plurality of data sets, where each data set includes a distance from a target object to the vehicle at each moment in the circular range, and a first data, a second data, and a third data in each data set are used as a first cluster center, and the first data, the second data, and the third data are distances from the target object to the vehicle at different moments in the data set;
the first calculation unit is used for calculating the distance between each data in the data set and the first data, the second data and the third data respectively, screening out, for each data, the nearest first clustering center, and attributing the data to that clustering center, so as to obtain a plurality of data groups;
the analysis unit is used for analyzing the data sets to obtain abnormal data in each data set, eliminating the abnormal data to obtain an eliminated data set, and carrying out average value calculation on all data in all the eliminated data sets to obtain the average distance between the target object and the vehicle.
5. The traffic monitoring-based camera calling system of claim 4, wherein the analysis unit comprises:
the improvement unit is used for improving the K-means++ algorithm, combining the Rsim function and the cosine coefficient as a similarity measurement function, and obtaining the improved K-means++ algorithm;
the training unit is used for training the improved K-means++ algorithm by using each data set to obtain configuration parameters of the improved K-means++ algorithm corresponding to each data set, identifying abnormal data of each data set by using the improved K-means++ algorithm configured with those parameters, and removing the identified abnormal data to obtain the data set after removal.
6. The traffic monitoring-based camera calling system of claim 3, wherein the second calculation module comprises:
the first fitting unit is used for identifying the arm key points of each target object in the traffic monitoring video at each moment, and fitting the motion trail of the arm key points of the target objects by utilizing a Bezier curve according to the identified arm key points to obtain an arm motion trail curve;
the second fitting unit is used for obtaining a motion trail recognition result based on a CART algorithm and the arm motion trail curve, wherein the motion trail recognition result is either dangerous or non-dangerous; if the motion trail recognition result is non-dangerous, information that there is no danger is sent to a user; if the motion trail recognition result is dangerous, whether the target object is holding equipment for prying open a vehicle door or window is judged; if no such equipment is held, an alarm device on the vehicle is started and is used to warn the target object away from the vehicle; if such equipment is held, the alarm device on the vehicle is started, key points of the equipment are identified, and motion trail fitting is performed with a Bezier curve to obtain an equipment motion fitting curve;
and the second calculation unit is used for judging the distance between the equipment and the vehicle according to the equipment motion fitting curve and obtaining the risk degree of the target object according to the distance between the equipment and the vehicle.
7. A camera calling device based on traffic monitoring, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the traffic monitoring based camera invoking method according to any of claims 1 to 2 when executing the computer program.
8. A readable storage medium, characterized by: the readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the traffic monitoring based camera invoking method according to any of claims 1 to 2.
CN202211410545.0A 2022-11-11 2022-11-11 Camera calling method and system based on traffic monitoring Active CN115620248B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211410545.0A CN115620248B (en) 2022-11-11 2022-11-11 Camera calling method and system based on traffic monitoring

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211410545.0A CN115620248B (en) 2022-11-11 2022-11-11 Camera calling method and system based on traffic monitoring

Publications (2)

Publication Number Publication Date
CN115620248A CN115620248A (en) 2023-01-17
CN115620248B true CN115620248B (en) 2023-06-16

Family

ID=84879243

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211410545.0A Active CN115620248B (en) 2022-11-11 2022-11-11 Camera calling method and system based on traffic monitoring

Country Status (1)

Country Link
CN (1) CN115620248B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114120561A (en) * 2021-11-09 2022-03-01 中国第一汽车股份有限公司 Reminding method, device, equipment and storage medium
CN115240141A (en) * 2022-07-21 2022-10-25 北京交通大学 Method and system for identifying abnormal behavior of passenger in urban rail station

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104527570A (en) * 2014-12-22 2015-04-22 清华大学苏州汽车研究院(吴江) Vehicle anti-theft system based on panoramic picture
DE102015214892A1 (en) * 2015-08-05 2017-02-09 Robert Bosch Gmbh Device and method for theft detection
CN110008857A (en) * 2019-03-21 2019-07-12 浙江工业大学 A kind of human action matching methods of marking based on artis
CN113469115A (en) * 2021-07-20 2021-10-01 阿波罗智联(北京)科技有限公司 Method and apparatus for outputting information

Also Published As

Publication number Publication date
CN115620248A (en) 2023-01-17


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231214

Address after: 430000, No. 13-1 Daxueyuan Road, Donghu New Technology Development Zone, Wuhan City, Hubei Province, China. Buildings 10 and 11, 8th Floor, Building 1, Modern Service Industry Base, Huazhong University of Science and Technology Science Park

Patentee after: Wuhan Shiyun Technology Co.,Ltd.

Patentee after: HUBEI CENTRAL CHINA TECHNOLOGY DEVELOPMENT OF ELECTRIC POWER Co.,Ltd.

Address before: No. 10 and 11, Floor 8, Building 1, Modern Service Base, Science Park, Huazhong University of Science and Technology, No. 13, Daxueyuan Road, Donghu New Technology Development Zone, Wuhan, Hubei Province, 430200

Patentee before: Wuhan Shiyun Technology Co.,Ltd.

TR01 Transfer of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Camera Calling Method and System Based on Traffic Monitoring

Effective date of registration: 20231226

Granted publication date: 20230616

Pledgee: Guanggu Branch of Wuhan Rural Commercial Bank Co.,Ltd.

Pledgor: Wuhan Shiyun Technology Co.,Ltd.

Registration number: Y2023980074042

PE01 Entry into force of the registration of the contract for pledge of patent right