CN114283361A - Method and apparatus for determining status information, storage medium, and electronic apparatus


Info

Publication number
CN114283361A
Authority
CN
China
Prior art keywords
target
information
image
target object
target area
Prior art date
Legal status
Pending
Application number
CN202111567464.7A
Other languages
Chinese (zh)
Inventor
彭垚
侯仁政
林亦宁
赵之健
邱爽
Current Assignee
Hangzhou Shanma Zhiqing Technology Co Ltd
Shanghai Supremind Intelligent Technology Co Ltd
Original Assignee
Hangzhou Shanma Zhiqing Technology Co Ltd
Shanghai Supremind Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Shanma Zhiqing Technology Co Ltd, Shanghai Supremind Intelligent Technology Co Ltd filed Critical Hangzhou Shanma Zhiqing Technology Co Ltd
Priority to CN202111567464.7A
Publication of CN114283361A
Legal status: Pending

Landscapes

  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method and an apparatus for determining state information, a storage medium and an electronic apparatus. The method comprises: acquiring image information of a target number of frames, wherein the image information comprises first object information of a target object; performing tracking judgment on the target object based on the first object information to determine the state of the target object; when the target object is determined to be in a parking state, performing a clustering operation on the target object by means of a clustering algorithm based on second object information of the target object to obtain a target area; and classifying the target area with a video classification network model according to historical data of the target area to obtain target state information. This technical scheme solves problems in the related art such as the inability to accurately determine the state of a target object in video image information.

Description

Method and apparatus for determining status information, storage medium, and electronic apparatus
Technical Field
The present application relates to the field of communications, and in particular, to a method and an apparatus for determining status information, a storage medium, and an electronic apparatus.
Background
In the related art, traffic accident detection is one of the most important components of an intelligent traffic system, and a real-time and robust traffic accident detection method can make a great contribution to reducing casualties and property loss. With the rapid development of intelligent transportation systems, research on traffic accident detection systems based on computer vision and image processing technology has attracted extensive attention, and many researchers have made significant progress in this field.
However, due to the complexity of the traffic environment, currently proposed methods still have limitations in practical application: existing methods either run slowly or lack robustness in complex traffic environments. In particular, the following drawbacks exist: traditional logic-based judgment, such as deciding whether an accident has occurred according to whether people are present around a vehicle, is not robust; and directly applying a video classification method introduces too much irrelevant background information, which makes it difficult to focus on the area where the accident occurs.
For problems in the related art such as the inability to accurately determine the state of a target object in video image information, no effective solution has yet been found.
Disclosure of Invention
The embodiment of the application provides a method and a device for determining state information, a storage medium and an electronic device, so as to at least solve the problem that the state of a target object in video image information cannot be accurately determined in the related art.
According to an embodiment of the present application, there is provided a method for determining status information, including: acquiring image information of a target number of frames, wherein the image information comprises first object information of a target object; performing tracking judgment on the target object based on the first object information to determine the state of the target object; under the condition that the target object is determined to be in the parking state, clustering operation is carried out on the target object based on second object information of the target object to obtain a target area; and classifying the target area according to the historical data of the target area to obtain target state information.
In an exemplary embodiment, classifying the target area according to the historical data of the target area to obtain target state information includes: and acquiring historical frame images of the target area from the image information, and analyzing a target area image sequence in the historical frame images through a video classification network model to determine whether the target area has an accident.
In one exemplary embodiment, the target object is determined to be in the parking state by: acquiring first image information of a target object in a frame image which is previous to a target frame image, and second image information of the target object in the target frame image, wherein the image information comprises: the target frame image and the previous frame image; and under the condition that the intersection ratio of the first image information and the second image information exceeds a preset threshold value, determining that the target object is in a parking state.
In an exemplary embodiment, clustering the target object based on the second object information of the target object to obtain a target area includes: acquiring size information and inter-object distance information of the target object, wherein the inter-object distance information is the adjacent distance between the target object and an adjacent object of the target object; and carrying out clustering analysis on the size information and the inter-object distance information according to a target clustering algorithm to obtain one or more target areas.
In an exemplary embodiment, after obtaining the historical frame image of the target area from the image information, the method further includes: acquiring a plurality of historical frame images with the target area from the historical frame images of the target area; and matting the plurality of historical frame images to obtain the target area image sequence.
In one exemplary embodiment, matting the sequence of parking area images from the plurality of historical frame images comprises: acquiring an analysis standard of the video classification network model, wherein the analysis standard is used for indicating the frame number requirement of the video classification network model on an input target area image sequence; and matting the target area image sequence from the plurality of historical frame images according to the analysis standard.
In one exemplary embodiment, analyzing the target area image sequence in the historical frame images through a video classification network model to determine whether the target area has an accident comprises: inputting a sequence of target region images in the historical frame images into the video classification network model; acquiring a first image characteristic of a pre-stored image sequence with an accident through the video classification network model, and acquiring a second image characteristic of the image sequence of the target area; matching the first image characteristic and the second image characteristic through the video classification network model, determining that an accident occurs in the target area under the condition that the similarity of the first image characteristic and the second image characteristic is greater than a preset threshold, and determining that no accident occurs in the target area under the condition that the similarity of the first image characteristic and the second image characteristic is less than the preset threshold.
According to another embodiment of the present application, there is also provided a device for determining status information, including: an acquisition module, used for acquiring image information of a target number of frames, wherein the image information comprises first object information of a target object; a first determining module, used for performing tracking judgment on the target object based on the first object information so as to determine the state of the target object; a second determining module, used for performing a clustering operation on the target object based on second object information of the target object to obtain a target area when the target object is determined to be in a parking state; and a processing module, used for classifying the target area according to the historical data of the target area to obtain target state information.
According to another aspect of the embodiments of the present application, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to execute the above-mentioned determination method of status information when running.
According to another aspect of the embodiments of the present application, there is also provided an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the above-mentioned method for determining status information through the computer program.
In the embodiments of the application, image information of a target number of frames is acquired, wherein the image information comprises first object information of a target object; tracking judgment is performed on the target object based on the first object information to determine the state of the target object; when the target object is determined to be in a parking state, a clustering operation is performed on the target object by means of a clustering algorithm based on second object information of the target object to obtain a target area; and the target area is classified with a video classification network model according to historical data of the target area to obtain target state information. This technical scheme solves problems in the related art such as the inability to accurately determine the state of a target object in video image information, so that the state of the target object can be accurately determined in real time.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of the hardware configuration of a traffic accident detection system running an alternative method for determining status information according to an embodiment of the present application;
FIG. 2 is a flow chart of an alternative method of determining status information according to an embodiment of the present application;
FIG. 3 is a flow chart illustrating an alternative method for determining status information according to an embodiment of the present application;
FIG. 4 is a structural assembly diagram of an alternative traffic accident detection system in accordance with an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating an alternative implementation of status information determination according to an embodiment of the present application;
fig. 6 is a block diagram of an alternative status information determination apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The method provided by the embodiment of the application can be executed in a traffic accident detection system or a similar operation system. Taking the example of the operation on the traffic accident detection system, fig. 1 is a hardware structure block diagram of the traffic accident detection system of a method for determining status information according to the embodiment of the present application. As shown in fig. 1, the traffic accident detection system may include one or more (only one shown in fig. 1) processors 102 (the processors 102 may include, but are not limited to, a processing system such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, which in an exemplary embodiment may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the configuration shown in fig. 1 is merely illustrative and is not intended to limit the configuration of the traffic accident detection system described above. For example, the traffic accident detection system may also include more or fewer components than shown in FIG. 1, or have a different configuration with equivalent functionality to that shown in FIG. 1 or more functionality than that shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the determination method of the state information in the embodiment of the present application, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory and may also include non-volatile memory, such as one or more magnetic storage systems, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the traffic accident detection system via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the traffic accident detection system. In one example, the transmission device 106 includes a network interface card (NIC) that can be connected to other network devices through a base station so as to communicate with the internet.
In this embodiment, a method for determining status information is provided, which is applied to the above traffic accident detection system, and fig. 2 is a flowchart of an alternative method for determining status information according to an embodiment of the present application, where the flowchart includes the following steps:
step S202, acquiring image information of a target number of frames, wherein the image information comprises first object information of a target object;
step S204, based on the first object information, performing tracking judgment on the target object to determine the state of the target object;
step S206, under the condition that the target object is determined to be in the parking state, clustering operation is carried out on the target object based on second object information of the target object to obtain a target area;
and step S208, classifying the target area according to the historical data of the target area to obtain target state information.
It should be noted that the above clustering operation on the target object may use a clustering algorithm, and the clustering algorithm may include: DBSCAN (Density-Based Spatial Clustering of Applications with Noise, a representative density-based clustering algorithm), K-means (the K-means clustering algorithm), GMM (Gaussian Mixture Model), and the like, which is not limited in this application.
It should be noted that the above acquisition of image information of a target number of frames may be carried out by performing vehicle detection on the images, and the detection may use a common deep learning detection network such as the YOLO series or the Faster R-CNN series, which is not limited in this application.
It should be noted that the target object may be tracked and judged using a SORT/DeepSORT multi-target tracking method, which is not limited in this application.
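As an illustrative, non-limiting sketch of the detection-and-tracking step described above (the application does not prescribe a concrete implementation), the per-frame loop could be organized as follows. detect_vehicles and SortTracker are hypothetical placeholders standing in for a YOLO-series detector and a SORT/DeepSORT-style tracker; they are assumptions introduced here for illustration only.

from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) bounding box

def detect_vehicles(frame) -> List[Box]:
    """Placeholder: run a vehicle detector (e.g. a YOLO-series model) on one frame."""
    raise NotImplementedError

class SortTracker:
    """Placeholder for a SORT/DeepSORT-style multi-target tracker."""
    def update(self, detections: List[Box]) -> List[Tuple[int, Box]]:
        """Return (track_id, box) pairs for the current frame."""
        raise NotImplementedError

def track_vehicles(frames) -> Dict[int, List[Box]]:
    """Accumulate per-track box histories over a target number of frames."""
    tracker = SortTracker()
    histories: Dict[int, List[Box]] = {}  # track_id -> one box per frame
    for frame in frames:
        for track_id, box in tracker.update(detect_vehicles(frame)):
            histories.setdefault(track_id, []).append(box)
    return histories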
Through the above steps, in the embodiments of the application, image information of a target number of frames is acquired by means of vehicle detection, wherein the image information comprises first object information of a target object; the tracking module performs tracking judgment on the target object based on the first object information to determine the state of the target object; when the target object is determined to be in a parking state, a clustering operation is performed on the target object based on second object information of the target object to obtain a target area, and the target area is classified with a video classification network model according to its historical image frame data to obtain target state information. This technical scheme solves problems in the related art such as the inability to accurately determine the state of a target object in video image information, and achieves the technical effect of accurately determining the state of the target object in real time.
In an exemplary embodiment, classifying the target area according to the historical data of the target area to obtain target state information includes: and acquiring historical frame images of the target area from the image information, and analyzing a target area image sequence in the historical frame images through a video classification network model to determine whether the target area has an accident.
That is, in this embodiment, the traffic accident detection system first obtains the historical image frame data of a target area in which a traffic accident may exist, forms a target area image sequence from the historical image frames of the target area and the current image frame, and analyzes and classifies the target area image sequence through the video classification network model to determine whether a traffic accident exists in the target area.
Based on the above process, the target object is determined to be in the parking state as follows: first image information of the target object in the frame image preceding a target frame image and second image information of the target object in the target frame image are acquired, wherein the image information comprises the target frame image and the preceding frame image; and the target object is determined to be in a parking state when the intersection-over-union ratio of the first image information and the second image information exceeds a preset threshold.
When an accident occurs, other vehicles are usually present around the accident vehicle, and the accident vehicle inevitably stops moving. Therefore, whether an accident may exist can be judged preliminarily by detecting whether a parked vehicle exists, which reduces the workload of the video classification network model. Specifically, whether the target object is in the parking state is determined as follows: first image information of the target object in the frame image preceding a target frame image and second image information of the target object in the target frame image are acquired, and the target object is determined to be in a parking state when the intersection-over-union ratio of the first image information and the second image information exceeds a preset threshold.
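The intersection-over-union test described above can be written as a short, self-contained sketch; the 0.95 threshold below is an assumed value for illustration and is not fixed by the application.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def is_parked(prev_box, curr_box, threshold=0.95):
    """Treat the object as parked when its boxes in consecutive frames barely move."""
    return iou(prev_box, curr_box) > threshold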
In an exemplary embodiment, clustering the target object based on the second object information of the target object to obtain a target area includes: acquiring size information and inter-object distance information of the target object, wherein the inter-object distance information is the adjacent distance between the target object and an adjacent object of the target object; and carrying out clustering analysis on the size information and the inter-object distance information according to a target clustering algorithm to obtain one or more target areas.
After the vehicle to be detected is determined to be a parked vehicle, the parked vehicles around it need to be detected and clustered in order to further determine the area where an accident may have occurred. In practice, other parked vehicles may exist near a traffic accident scene, but they are generally far away. When parked vehicles are clustered, their mutual distances are judged, and vehicles whose distance is smaller than a preset value are grouped into the same parking area; this prevents unrelated surrounding parked vehicles from being introduced into the recognition image and increasing the recognition workload. As can be seen from fig. 5, all parked vehicles around a parked vehicle are detected, and vehicles whose mutual distance is smaller than a predetermined threshold are grouped into a parking area; since there may be several concentrations of parked vehicles at an accident site, all existing parking areas are obtained for further detection.
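As one possible realization of the clustering step (the application leaves the algorithm open, naming DBSCAN, K-means and GMM as options), parked-vehicle boxes could be grouped by the distance between their centres using scikit-learn's DBSCAN. The eps_pixels radius and min_samples value are assumptions for illustration; box size could be appended to the feature vector in the same way.

import numpy as np
from sklearn.cluster import DBSCAN

def cluster_parking_areas(parked_boxes, eps_pixels=80.0):
    """Group parked-vehicle boxes (x1, y1, x2, y2) into parking areas by centre distance."""
    if len(parked_boxes) < 2:
        return []
    centres = np.array([[(x1 + x2) / 2.0, (y1 + y2) / 2.0]
                        for x1, y1, x2, y2 in parked_boxes])
    labels = DBSCAN(eps=eps_pixels, min_samples=2).fit(centres).labels_
    areas = {}
    for box, label in zip(parked_boxes, labels):
        if label == -1:  # isolated vehicle, far from any other parked vehicle
            continue
        areas.setdefault(label, []).append(box)
    # Summarise each area by the union of its member boxes (the ROI to crop later).
    return [(min(b[0] for b in boxes), min(b[1] for b in boxes),
             max(b[2] for b in boxes), max(b[3] for b in boxes))
            for boxes in areas.values()]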
In an exemplary embodiment, after obtaining the historical frame image of the target area from the image information, the method further includes: acquiring a plurality of historical frame images with the target area from the historical frame images of the target area; and matting the plurality of historical frame images to obtain the target area image sequence.
It can be understood that an accurate detection result cannot be obtained by examining only the parking-area image at the target moment. The traffic accident detection system therefore first obtains, from the cached historical frame images, a plurality of historical frame images in which the parking area exists, and cuts the parking-area images out of those frames for subsequent recognition and classification.
Based on the above process, matting the parking area image sequence from the plurality of historical frame images comprises: acquiring an analysis standard of the video classification network model, wherein the analysis standard indicates the frame-count requirement of the video classification network model on the input target area image sequence; and matting the target area image sequence from the plurality of historical frame images according to the analysis standard.
Because different video classification networks have different requirements on the input images, in order to help the video classification network complete its analysis quickly, the analysis standard of the video classification network is obtained before the parking area image sequence is matted from the plurality of historical frame images; specifically, the analysis standard includes the frame-count requirement on the input parking area image sequence, and the parking area image sequence is obtained according to this standard.
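A minimal sketch of how the cached history could be turned into a fixed-length region-of-interest sequence; frames are assumed to be numpy arrays, num_frames reflects the classifier's frame-count requirement (for example 8 or 16), and uniform sampling is one assumed strategy among others.

import numpy as np

def crop_roi_sequence(history_frames, roi, num_frames=8):
    """Cut the parking-area ROI out of cached frames and sample a fixed-length sequence."""
    if not history_frames:
        raise ValueError("the frame cache must not be empty")
    x1, y1, x2, y2 = (int(v) for v in roi)
    crops = [frame[y1:y2, x1:x2] for frame in history_frames]
    # Uniformly sample the number of frames required by the video classification network.
    idx = np.linspace(0, len(crops) - 1, num=num_frames).round().astype(int)
    return np.stack([crops[i] for i in idx])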
In one exemplary embodiment, analyzing the target area image sequence in the historical frame images through a video classification network model to determine whether the target area has an accident comprises: inputting a sequence of target region images in the historical frame images into the video classification network model; acquiring a first image characteristic of a pre-stored image sequence with an accident through the video classification network model, and acquiring a second image characteristic of the image sequence of the target area; matching the first image characteristic and the second image characteristic through the video classification network model, determining that an accident occurs in the target area under the condition that the similarity of the first image characteristic and the second image characteristic is greater than a preset threshold, and determining that no accident occurs in the target area under the condition that the similarity of the first image characteristic and the second image characteristic is less than the preset threshold.
After the target area image sequence of a target parking area is obtained, the sequence is input into the video classification network model. The first image characteristic of a pre-stored image sequence in which an accident occurred and the second image characteristic of the current target area image sequence are acquired and matched; an accident is determined to have occurred in the target area when the similarity of the two characteristics exceeds the preset threshold, and no accident is determined to have occurred when the similarity is smaller than the preset threshold.
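The matching step can be sketched as a similarity comparison between pooled feature vectors; the backbone that produces the features (e.g. a TSN or I3D model) is abstracted away, and cosine similarity with a 0.8 threshold is an assumption used only for illustration.

import numpy as np

def cosine_similarity(feat_a, feat_b):
    """Cosine similarity between two pooled feature vectors."""
    a = np.asarray(feat_a, dtype=float)
    b = np.asarray(feat_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def accident_in_area(stored_accident_feature, area_feature, threshold=0.8):
    """Declare an accident when the candidate feature matches the stored accident feature."""
    return cosine_similarity(stored_accident_feature, area_feature) > threshold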
It should be noted that, after determining that an accident occurs in the target area, the traffic accident detection system may also archive the information to facilitate subsequent information search, where the archived information may include: parking area image sequences, video classification results, target time, accident occurrence places and other information. This is not limited by the present application.
It should be noted that the video classification network model is a deep learning video classification network, and may be a 2D network, such as a TSN model, or a 3D network, such as an I3D model; this is not limited in this application.
It should be noted that the parking area image sequence is obtained by cropping the parking-area images out of the historical image sequence frame by frame, and the sequence may contain 8 or 16 frames depending on the deep learning video classification network used, which is not limited in this application.
Based on the above process, after the parking area image sequence in the historical frame images is analyzed through the video classification network model to determine whether an accident exists in the parking area, the method further includes: when it is determined that an accident has occurred in the parking area, performing image recognition on the target frame image to acquire description information of the accident, wherein the description information comprises at least one of the following: the accident location and the license plate information of the vehicles involved; and initiating an alarm operation according to the description information.
After it is determined that a traffic accident has occurred, the traffic accident detection system may continue to perform image recognition on the target frame image to acquire information such as the address of the accident location and the license plates of the vehicles involved, and initiate an alarm operation with this information to prompt people to take countermeasures.
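A brief sketch of how the description information named above could be packaged and handed to the alarm unit; the field names and the send_alarm hook are illustrative assumptions, not part of the application.

from datetime import datetime

def raise_alarm(location, plate_numbers, send_alarm):
    """Package the accident description information and pass it to an alarm hook."""
    description = {
        "time": datetime.now().isoformat(),  # target time of the detected accident
        "location": location,                # accident location found by image recognition
        "plates": plate_numbers,             # license plate information of involved vehicles
    }
    send_alarm(description)                  # e.g. forward to the alarm unit
    return description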
It should be noted that after the information is acquired and the alarm is given, the traffic accident detection system may also archive the information to facilitate subsequent information search, and the archived information may include: parking area image sequences, video classification results, target time, accident occurrence places and other information. This is not limited by the present application.
In order to better understand the technical solutions of the above embodiments, the embodiments of the present invention explain the following terms.
Robustness: the ability of a system to survive abnormal and dangerous conditions.
Robustness (in control): the property that a control system maintains certain performance characteristics under certain parameter perturbations (in structure and magnitude).
In order to better understand the process of the determination method of the state information, the following describes an implementation method flow of the determination of the state information with reference to an optional embodiment, but the implementation method flow is not limited to the technical solution of the embodiment of the present application.
In this embodiment, a method for determining status information is provided, and fig. 3 is a detection flowchart of the method for determining status information according to the embodiment of the present application, and as shown in fig. 3, the following steps are specifically performed:
step S302: starting to detect accidents;
step S304: inputting a video stream to a traffic accident detection unit;
step S306: carrying out vehicle detection and tracking on the image;
step S308: searching for parked vehicles using the tracking information;
step S310: judging whether a parked vehicle exists; if not, returning to step S304;
step S312: if parked vehicles exist, clustering them into parking areas through a clustering algorithm;
step S314: cropping an ROI (region of interest) image sequence of each parking area from the cached historical video frames;
step S316: performing video classification on the image sequence with a video classification network to judge whether an accident has occurred; if no accident has occurred, returning to step S304;
step S318: if an accident is judged to have occurred, raising an alarm through the alarm unit;
step S320: the video stream is buffered while the input video stream is detected.
In this embodiment, the traffic accident detection system continuously acquires a video stream through the video stream acquisition unit and feeds it to the traffic accident detection unit. The traffic accident detection unit performs vehicle detection and tracking on the images in the video stream to judge whether a parked vehicle exists; if not, it continues with the next part of the video stream. If parked vehicles exist, they are clustered into parking areas, an image sequence containing each parking area is cut from the cached historical video frames, and the sequence is classified with a video classification network to judge whether a traffic accident has occurred; if no accident has occurred, detection of the next part of the video stream continues, and if an accident is judged to have occurred, an alarm is raised through the alarm unit.
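Putting steps S304 to S320 together, an end-to-end sketch of the detection loop might look as follows; all helper names (SortTracker, detect_vehicles, is_parked, cluster_parking_areas, crop_roi_sequence) refer to the illustrative sketches above, and classifier is assumed to be a callable that maps an ROI image sequence to an accident / no-accident decision.

def detect_accidents(video_stream, classifier, num_frames=8, iou_thresh=0.95):
    """Illustrative detection loop: detect, track, find parked vehicles, cluster, classify."""
    tracker = SortTracker()  # hypothetical SORT/DeepSORT wrapper (see the earlier sketch)
    frame_cache, last_boxes = [], {}
    for frame in video_stream:                                         # S304: input video stream
        frame_cache.append(frame)                                      # S320: buffer history frames
        parked = []
        for track_id, box in tracker.update(detect_vehicles(frame)):   # S306: detect and track
            prev = last_boxes.get(track_id)
            if prev is not None and is_parked(prev, box, iou_thresh):  # S308/S310: parked vehicles
                parked.append(box)
            last_boxes[track_id] = box
        for roi in cluster_parking_areas(parked):                      # S312: parking area clustering
            sequence = crop_roi_sequence(frame_cache, roi, num_frames) # S314: crop the ROI sequence
            if classifier(sequence):                                   # S316: video classification
                yield roi, frame                                       # S318: hand off to the alarm unit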
Through the steps, the state of the target object can be accurately determined in real time, and the problems that the state of the target object in the video image information cannot be accurately determined in the related technology and the like are solved.
FIG. 4 is a structural assembly diagram of an alternative traffic accident detection system in accordance with an embodiment of the present invention; as shown in fig. 4, the system specifically includes:
a video stream acquisition unit: used for continuously acquiring a traffic video stream and inputting the video stream to the traffic accident detection unit;
a traffic accident detection unit: receiving the video stream transmitted by the video stream acquisition unit, and detecting each frame of image in the video stream; under the condition that a parking vehicle exists in a target frame image in the video stream, clustering the parking vehicle in the target frame image by using a clustering algorithm to obtain a parking area, wherein the parking area comprises: the parking vehicle and an adjacent vehicle adjacent to the parking vehicle, wherein the target frame image corresponds to a target moment; and acquiring historical frame images of the parking area before the target moment from the video stream, and analyzing a parking area image sequence in the historical frame images through a video classification network model to determine whether an accident exists in the parking area.
an alarm unit: used for raising an alarm when it is determined that an accident exists in the parking area; corresponding instructions can also be executed according to the accident grade output by the traffic accident detection unit.
an event retrieval and query unit: used for integrating and archiving the parking area image sequence, the video classification result, the target time and the accident location when it is determined that an accident exists in the parking area; the parking area image sequence and the video classification result can also be input into the video classification network model to further strengthen the model through deep learning.
FIG. 5 is a schematic diagram illustrating an alternative implementation of status information determination according to an embodiment of the present application; as shown in fig. 5, the implementation is as follows:
when a traffic accident occurs, one or more automobiles can be parked on a road to process the accident or cannot be driven away due to the accident, at the moment, a detection algorithm is used for detecting the parked vehicles on the road, one or more clustering areas are clustered through the clustering algorithm according to all the detected parked vehicles, as shown in fig. 5, two automobiles on a highway have a collision accident and are parked on the right side of the road to process the accident, a traffic accident detection system carries out detection and identification through an input video stream to capture the two automobiles, and divides the parking areas, and then video classification is carried out on the parking areas to obtain an accident detection result and output the accident detection result.
In this embodiment, a traffic accident detection system is provided for detecting whether an accident has occurred. The system determines a parking area by detecting parked vehicles, acquires historical frame images of the parking area, and inputs the parking area image sequence into a video classification network model for recognition and classification to determine whether an accident has occurred. Accurately locating potential accident areas reduces the computational load of the subsequent video classification network, and judging whether an accident exists through the video classification network offers better robustness than a logic-based judgment method.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method of the embodiments of the present application.
Fig. 6 is a block diagram of a status information determination apparatus according to an embodiment of the present application; as shown in fig. 6, the apparatus includes:
an obtaining module 62, configured to obtain image information of a target number of frames, where the image information includes first object information of a target object;
a first determining module 64, configured to perform tracking judgment on the target object based on the first object information to determine a state of the target object;
a second determining module 66, configured to, in a case that it is determined that the target object is in a parking state, perform a clustering operation on the target object based on second object information of the target object to obtain a target area;
and the processing module 68 is configured to perform classification processing on the target area according to the historical data of the target area to obtain target state information.
With the above apparatus, image information of a target number of frames is acquired, wherein the image information comprises first object information of a target object; tracking judgment is performed on the target object based on the first object information to determine the state of the target object; when the target object is determined to be in a parking state, a clustering operation is performed on the target object by means of a clustering algorithm based on second object information of the target object to obtain a target area; and the target area is classified with a video classification network model according to historical data of the target area to obtain target state information. This technical scheme solves problems in the related art such as the inability to accurately determine the state of a target object in video image information, so that the state of the target object can be accurately determined in real time.
In an exemplary embodiment, the processing module is further configured to classify the target area according to the historical data of the target area to obtain target state information by: acquiring historical frame images of the target area from the image information, and analyzing the target area image sequence in the historical frame images through a video classification network model to determine whether an accident exists in the target area.
That is, in this embodiment, the traffic accident detection system first obtains the historical image frame data of a target area in which a traffic accident may exist, forms a target area image sequence from the historical image frames of the target area and the current image frame, and analyzes and classifies the target area image sequence through the video classification network model to determine whether a traffic accident exists in the target area.
In an exemplary embodiment, the first determining module is further configured to determine that the target object is in the parking state by: acquiring first image information of a target object in a frame image which is previous to a target frame image, and second image information of the target object in the target frame image, wherein the image information comprises: the target frame image and the previous frame image; and under the condition that the intersection ratio of the first image information and the second image information exceeds a preset threshold value, determining that the target object is in a parking state.
When an accident occurs, other vehicles are usually present around the accident vehicle, and the accident vehicle inevitably stops moving. Therefore, whether an accident may exist can be judged preliminarily by detecting whether a parked vehicle exists, which reduces the workload of the video classification network model. Specifically, whether the target object is in the parking state is determined as follows: first image information of the target object in the frame image preceding a target frame image and second image information of the target object in the target frame image are acquired, and the target object is determined to be in a parking state when the intersection-over-union ratio of the first image information and the second image information exceeds a preset threshold.
In an exemplary embodiment, the second determining module is further configured to perform a clustering operation on the target object based on second object information of the target object to obtain a target area, and the clustering operation includes: acquiring size information and inter-object distance information of the target object, wherein the inter-object distance information is the adjacent distance between the target object and an adjacent object of the target object; and carrying out clustering analysis on the size information and the inter-object distance information according to a target clustering algorithm to obtain one or more target areas.
After the vehicle to be detected is determined to be a parked vehicle, the parked vehicles around it need to be detected and clustered in order to further determine the area where an accident may have occurred. In practice, other parked vehicles may exist near a traffic accident scene, but they are generally far away. When parked vehicles are clustered, their mutual distances are judged, and vehicles whose distance is smaller than a preset value are grouped into the same parking area; this prevents unrelated surrounding parked vehicles from being introduced into the recognition image and increasing the recognition workload. As can be seen from fig. 5, all parked vehicles around a parked vehicle are detected, and vehicles whose mutual distance is smaller than a predetermined threshold are grouped into a parking area; since there may be several concentrations of parked vehicles at an accident site, all existing parking areas are obtained for further detection.
In an exemplary embodiment, the obtaining module is further configured to, after obtaining the historical frame images of the target area from the image information: acquire a plurality of historical frame images containing the target area from the historical frame images of the target area; and matte the plurality of historical frame images to obtain the target area image sequence.
It can be understood that an accurate detection result cannot be obtained by examining only the parking-area image at the target moment. The traffic accident detection system therefore first obtains, from the cached historical frame images, a plurality of historical frame images in which the parking area exists, and cuts the parking-area images out of those frames for subsequent recognition and classification.
In an exemplary embodiment, the obtaining module is further configured to matte the parking area image sequence from the plurality of historical frame images by: acquiring an analysis standard of the video classification network model, wherein the analysis standard indicates the frame-count requirement of the video classification network model on the input target area image sequence; and matting the target area image sequence from the plurality of historical frame images according to the analysis standard.
Because different video classification networks have different requirements on the input images, in order to help the video classification network complete its analysis quickly, the analysis standard of the video classification network is obtained before the parking area image sequence is matted from the plurality of historical frame images; specifically, the analysis standard includes the frame-count requirement on the input parking area image sequence, and the parking area image sequence is obtained according to this standard.
In an exemplary embodiment, the processing module is further configured to analyze a target area image sequence in the historical frame images through a video classification network model to determine whether an accident exists in the target area, and includes: inputting a sequence of target region images in the historical frame images into the video classification network model; acquiring a first image characteristic of a pre-stored image sequence with an accident through the video classification network model, and acquiring a second image characteristic of the image sequence of the target area; matching the first image characteristic and the second image characteristic through the video classification network model, determining that an accident occurs in the target area under the condition that the similarity of the first image characteristic and the second image characteristic is greater than a preset threshold, and determining that no accident occurs in the target area under the condition that the similarity of the first image characteristic and the second image characteristic is less than the preset threshold.
After the target area image sequence of a target parking area is obtained, the sequence is input into the video classification network model. The first image characteristic of a pre-stored image sequence in which an accident occurred and the second image characteristic of the current target area image sequence are acquired and matched; an accident is determined to have occurred in the target area when the similarity of the two characteristics exceeds the preset threshold, and no accident is determined to have occurred when the similarity is smaller than the preset threshold.
In an exemplary embodiment, after the parking area image sequence in the historical frame images is analyzed through the video classification network model to determine whether an accident exists in the parking area, the apparatus is further configured to: when it is determined that an accident has occurred in the parking area, perform image recognition on the target frame image to acquire description information of the accident, wherein the description information comprises at least one of the following: the accident location and the license plate information of the vehicles involved; and initiate an alarm operation according to the description information.
After it is determined that a traffic accident has occurred, the traffic accident detection system may continue to perform image recognition on the target frame image to acquire information such as the address of the accident location and the license plates of the vehicles involved, and initiate an alarm operation with this information to prompt people to take countermeasures.
Embodiments of the present application also provide a storage medium including a stored program, where the program performs any one of the methods described above when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store program codes for performing the following steps:
s1, acquiring image information of a target number of frames, wherein the image information comprises first object information of a target object;
s2, performing tracking judgment on the target object based on the first object information to determine the state of the target object;
s3, under the condition that the target object is determined to be in the parking state, clustering operation is carried out on the target object based on second object information of the target object to obtain a target area;
and S4, classifying the target area according to the historical data of the target area to obtain target state information.
Embodiments of the present application further provide an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring image information of a target number of frames, wherein the image information comprises first object information of a target object;
s2, performing tracking judgment on the target object based on the first object information to determine the state of the target object;
s3, under the condition that the target object is determined to be in the parking state, clustering operation is carried out on the target object based on second object information of the target object to obtain a target area;
and S4, classifying the target area according to the historical data of the target area to obtain target state information.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing program codes, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented with a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices, and they may be implemented in program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device, in some cases in an order different from that shown or described here, or they may be fabricated separately as individual integrated circuit modules, or several of them may be fabricated as a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A method for determining status information, comprising:
acquiring image information of a target number of frames, wherein the image information comprises first object information of a target object;
performing tracking judgment on the target object based on the first object information to determine the state of the target object;
under the condition that the target object is determined to be in the parking state, clustering operation is carried out on the target object based on second object information of the target object to obtain a target area;
and classifying the target area according to the historical data of the target area to obtain target state information.
2. The method for determining status information according to claim 1, wherein classifying the target area according to the historical data of the target area to obtain the target status information comprises:
and acquiring historical frame images of the target area from the image information, and analyzing a target area image sequence in the historical frame images through a video classification network model to determine whether the target area has an accident.
3. The method for determining status information according to claim 1, wherein the target object is determined to be in the parking state by:
acquiring first image information of the target object in a frame image previous to a target frame image, and second image information of the target object in the target frame image, wherein the image information comprises: the target frame image and the previous frame image;
and in a case where an intersection-over-union ratio of the first image information and the second image information exceeds a preset threshold, determining that the target object is in the parking state.
4. The method for determining status information according to claim 1, wherein performing the clustering operation on the target object based on the second object information of the target object to obtain the target area comprises:
acquiring size information and inter-object distance information of the target object, wherein the inter-object distance information is a distance between the target object and an object adjacent to the target object;
and performing cluster analysis on the size information and the inter-object distance information according to a target clustering algorithm to obtain one or more target areas.
5. The method for determining status information according to claim 2, wherein after acquiring the historical frame images of the target area from the image information, the method further comprises:
acquiring, from the historical frame images of the target area, a plurality of historical frame images that contain the target area;
and matting the plurality of historical frame images to obtain the target area image sequence.
6. The method for determining status information according to claim 5, wherein matting the target area image sequence from the plurality of historical frame images comprises:
acquiring an analysis standard of the video classification network model, wherein the analysis standard is used for indicating a frame number requirement of the video classification network model on an input target area image sequence;
and matting the target area image sequence from the plurality of historical frame images according to the analysis standard.
7. The method for determining status information according to claim 2, wherein analyzing the target area image sequence in the historical frame images through the video classification network model to determine whether an accident has occurred in the target area comprises:
inputting the target area image sequence in the historical frame images into the video classification network model;
acquiring, through the video classification network model, a first image feature of a pre-stored image sequence in which an accident occurs, and acquiring a second image feature of the target area image sequence;
and matching the first image feature with the second image feature through the video classification network model, determining that an accident has occurred in the target area in a case where a similarity between the first image feature and the second image feature is greater than a preset threshold, and determining that no accident has occurred in the target area in a case where the similarity is less than the preset threshold.
8. An apparatus for determining status information, comprising:
an acquisition module, configured to acquire image information of a target number of frames, wherein the image information comprises first object information of a target object;
a first determining module, configured to perform tracking judgment on the target object based on the first object information to determine a state of the target object;
a second determining module, configured to perform, in a case where the target object is determined to be in a parking state, a clustering operation on the target object based on second object information of the target object to obtain a target area;
and a processing module, configured to classify the target area according to historical data of the target area to obtain target state information.
9. A computer-readable storage medium, comprising a stored program, wherein the program, when run, performs the method of any one of claims 1 to 7.
10. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the method of any one of claims 1 to 7.
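Claim 3 determines the parking state from how much the target object's bounding box overlaps between two consecutive frames. A minimal sketch of such an intersection-over-union check is given below; the (x1, y1, x2, y2) box format and the 0.95 threshold are assumptions of this sketch, not values taken from the disclosure.

    def iou(box_a, box_b):
        """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter) if inter > 0 else 0.0

    def is_parked(prev_box, curr_box, threshold=0.95):
        """Treat the object as parked when its box barely moves between frames."""
        return iou(prev_box, curr_box) > threshold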
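Claim 4 groups parked objects into target areas using their size information and inter-object distances, but leaves the choice of "target clustering algorithm" open. The sketch below uses DBSCAN from scikit-learn as one possible choice; the eps_scale and min_samples values are placeholders rather than parameters from the patent.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def cluster_parked_objects(boxes, eps_scale=1.5, min_samples=1):
        """boxes: list of (x1, y1, x2, y2) for parked objects; returns a cluster label per box."""
        boxes = np.asarray(boxes, dtype=float)
        centers = np.column_stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                                   (boxes[:, 1] + boxes[:, 3]) / 2])
        # Use the mean box diagonal as a size-aware distance threshold between objects.
        diag = np.hypot(boxes[:, 2] - boxes[:, 0], boxes[:, 3] - boxes[:, 1]).mean()
        labels = DBSCAN(eps=eps_scale * diag, min_samples=min_samples).fit(centers).labels_
        return labels  # boxes sharing a label form one candidate target area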
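Claims 5 and 6 matte the target area out of the stored historical frames and fit the resulting image sequence to the frame-number requirement of the video classification network model. The sketch below crops the area and samples the crops evenly; the 16-frame requirement is an assumed example, not a value from the patent.

    import numpy as np

    def build_region_sequence(historical_frames, region_box, required_frames=16):
        """historical_frames: list of H x W x 3 arrays; region_box: (x1, y1, x2, y2)."""
        x1, y1, x2, y2 = (int(v) for v in region_box)
        crops = [frame[y1:y2, x1:x2] for frame in historical_frames]
        # Sample the crops evenly so the sequence length matches the model's input spec.
        indices = np.linspace(0, len(crops) - 1, num=required_frames).round().astype(int)
        return np.stack([crops[i] for i in indices])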
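Claim 7 flags an accident when the feature of the target area image sequence is sufficiently similar to the feature of a pre-stored accident sequence. The sketch below uses cosine similarity as the matching measure; the extract_features callable and the 0.8 threshold are assumptions, since the patent does not fix either.

    import numpy as np

    def cosine_similarity(a, b):
        """Cosine similarity between two 1-D feature vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def accident_occurred(extract_features, accident_sequence, region_sequence, threshold=0.8):
        first = extract_features(accident_sequence)   # feature of the stored accident clip
        second = extract_features(region_sequence)    # feature of the target-area clip
        return cosine_similarity(first, second) > threshold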
CN202111567464.7A 2021-12-20 2021-12-20 Method and apparatus for determining status information, storage medium, and electronic apparatus Pending CN114283361A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111567464.7A CN114283361A (en) 2021-12-20 2021-12-20 Method and apparatus for determining status information, storage medium, and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111567464.7A CN114283361A (en) 2021-12-20 2021-12-20 Method and apparatus for determining status information, storage medium, and electronic apparatus

Publications (1)

Publication Number Publication Date
CN114283361A true CN114283361A (en) 2022-04-05

Family

ID=80873355

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111567464.7A Pending CN114283361A (en) 2021-12-20 2021-12-20 Method and apparatus for determining status information, storage medium, and electronic apparatus

Country Status (1)

Country Link
CN (1) CN114283361A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180150704A1 (en) * 2016-11-28 2018-05-31 Kwangwoon University Industry-Academic Collaboration Foundation Method of detecting pedestrian and vehicle based on convolutional neural network by using stereo camera
WO2020042984A1 (en) * 2018-08-28 2020-03-05 杭州海康威视数字技术股份有限公司 Vehicle behavior detection method and apparatus
CN111325262A (en) * 2020-02-14 2020-06-23 逸驾智能科技有限公司 Method and device for analyzing vehicle faults
CN113808066A (en) * 2020-05-29 2021-12-17 Oppo广东移动通信有限公司 Image selection method and device, storage medium and electronic equipment
CN112509315A (en) * 2020-11-04 2021-03-16 杭州远眺科技有限公司 Traffic accident detection method based on video analysis

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115311824A (en) * 2022-07-05 2022-11-08 南京邮电大学 Campus security management system and method based on Internet

Similar Documents

Publication Publication Date Title
CN108062349B (en) Video monitoring method and system based on video structured data and deep learning
CN112085952B (en) Method and device for monitoring vehicle data, computer equipment and storage medium
CN110738857B (en) Vehicle violation evidence obtaining method, device and equipment
CN104200671B (en) A kind of virtual bayonet socket management method based on large data platform and system
CN111274881A (en) Driving safety monitoring method and device, computer equipment and storage medium
CN110866427A (en) Vehicle behavior detection method and device
CN113155173B (en) Perception performance evaluation method and device, electronic device and storage medium
CN110738150B (en) Camera linkage snapshot method and device and computer storage medium
CN110895662A (en) Vehicle overload alarm method and device, electronic equipment and storage medium
CN109377694B (en) Monitoring method and system for community vehicles
CN202940921U (en) Real-time monitoring system based on face identification
CN112434566B (en) Passenger flow statistics method and device, electronic equipment and storage medium
CN113723176B (en) Target object determination method and device, storage medium and electronic device
CN112949439A (en) Method and system for monitoring invasion of personnel in key area of oil tank truck
CN111191507A (en) Safety early warning analysis method and system for smart community
CN110838230A (en) Mobile video monitoring method, monitoring center and system
Sikirić et al. Image representations on a budget: Traffic scene classification in a restricted bandwidth scenario
CN111967377A (en) Method, device and equipment for identifying state of engineering vehicle and storage medium
CN109766821A (en) Vehicle driving law analytical method, system, computer equipment and storage medium
CN111062319B (en) Driver call detection method based on active infrared image
CN115984830A (en) Safety belt wearing detection method, device, equipment and storage medium
CN114283361A (en) Method and apparatus for determining status information, storage medium, and electronic apparatus
CN111783618A (en) Garden brain sensing method and system based on video content analysis
CN115880632A (en) Timeout stay detection method, monitoring device, computer-readable storage medium, and chip
CN113593256B (en) Unmanned aerial vehicle intelligent driving-away control method and system based on city management and cloud platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination