CN115147752A - Video analysis method and device and computer equipment - Google Patents

Video analysis method and device and computer equipment

Info

Publication number
CN115147752A
CN115147752A (application CN202210542759.7A)
Authority
CN
China
Prior art keywords
information
identification
video
algorithm
image information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210542759.7A
Other languages
Chinese (zh)
Inventor
王琳琛
穆翀
马洪民
王亚茹
孙韵佳
刘昌毅
吕晓鹏
张星
史晓蒙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing E Hualu Information Technology Co Ltd
Original Assignee
Beijing E Hualu Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing E Hualu Information Technology Co Ltd filed Critical Beijing E Hualu Information Technology Co Ltd
Priority to CN202210542759.7A
Publication of CN115147752A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/44 Event detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a video analysis method, a video analysis device and computer equipment. The method is executed by a node corresponding to a target area and comprises the following steps: acquiring data information collected by at least one camera device in the target area, wherein the data information comprises identification information, time information and video information; selecting a target algorithm for identifying the video information from a preset model algorithm set based on the identification information and the time information; extracting images from the video information at a preset frequency to obtain a preset number of frames of image information; identifying each frame of image information by using the target algorithm to obtain an identification result corresponding to each frame; and determining whether the video information contains abnormal information according to the identification results. By this method, waste of network resources is avoided, the problem of insufficient computing resources is alleviated, and identification accuracy is improved.

Description

Video analysis method and device and computer equipment
Technical Field
The invention relates to the technical field of image recognition, in particular to a video analysis method, a video analysis device and computer equipment.
Background
With the increasingly wide application of deep learning and computer vision technology in smart cities, smart traffic, intelligent surveillance, security and the like, the demand for computing resources also increases exponentially. In existing video analysis, video streams are processed centrally in a high-capacity server shared by a plurality of communities, which consumes a large amount of network resources and places great strain on the server's computing resources. Network conditions and the computing power of equipment differ between communities, as do the formats and specifications of the transmitted data, so centralized cluster-style video analysis cannot meet the quick-response requirements of community safety supervision tasks.
Disclosure of Invention
Therefore, to overcome the defects in the prior art, embodiments of the present invention provide a video analysis method, an apparatus, and a computer device.
According to a first aspect, an embodiment of the present invention discloses a video analysis method, which is performed by a node corresponding to a target area, and includes:
acquiring data information acquired by at least one camera device in a target area, wherein the data information comprises identification information, time information and video information;
selecting a target algorithm for identifying the video information from a preset model algorithm set based on the identification information and the time information;
extracting images of the video information according to a preset frequency to obtain image information with a preset frame number;
identifying each frame of image information by using a target algorithm to obtain an identification result corresponding to each frame of image information;
and determining whether the video information has abnormal information according to the identification result.
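As a minimal sketch, the five steps above can be wired together as an edge-node analysis loop. All names and interfaces below are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence


@dataclass
class DataInfo:
    identification: str      # e.g. camera installation-position identifier
    time: float              # acquisition timestamp
    video: List[object]      # decoded video frames


def analyze(data: DataInfo,
            select_algorithm: Callable[[str, float], Callable],
            extract_frames: Callable[[List[object]], Sequence[object]],
            has_anomaly: Callable[[Sequence[object]], bool]) -> bool:
    # Select a target algorithm from the preset set by identification + time
    algorithm = select_algorithm(data.identification, data.time)
    # Extract a preset number of frames at a preset frequency
    frames = extract_frames(data.video)
    # Identify each extracted frame with the target algorithm
    results = [algorithm(frame) for frame in frames]
    # Decide from all identification results whether abnormal info exists
    return has_anomaly(results)
```

The selection, extraction, and decision strategies are injected as callables, mirroring how the later optional embodiments refine each step independently.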
Optionally, determining whether the video information has abnormal information according to the identification result, specifically including:
and if the video information is determined to have abnormal information according to at least two identification results, generating prompt information.
Optionally, the method further comprises: if the identification results are inconsistent, determining that no abnormal information exists in the video information.
Optionally, based on the identification information and the time information, selecting a target algorithm for identifying the video information from a preset model algorithm set, specifically including:
determining a first algorithm subset from a preset model algorithm set according to the identification information, wherein the first algorithm subset comprises at least one candidate algorithm;
and determining a target algorithm from the first subset of algorithms based on the time information.
optionally, before the target algorithm is used to identify each frame of image information and obtain the identification result corresponding to each frame of image information, the method further includes:
and preprocessing each frame of image information in the video information according to the target algorithm to obtain preprocessed image information of each frame, and inputting the preprocessed image information into the target algorithm for identification.
Optionally, before each image information is identified by using the target algorithm to obtain a preset number of identification results corresponding to the image information, the method further includes:
training a target algorithm according to a pre-acquired labeled sample;
and when the target algorithm reaches the preset precision, stopping training so as to identify the image information by using the trained target algorithm.
Optionally, when there are a plurality of image capturing apparatuses, the method further includes:
respectively identifying each frame of image information in the video information of each camera by using a target algorithm to obtain an identification result corresponding to the image information;
if the identification result corresponding to the image information of at least one camera device comprises the object to be identified, recording identification information and time information of the image information of the object to be identified;
sorting all the image information containing the object to be identified according to the time information of that image information, and obtaining a sorting result;
and determining the moving path of the object to be recognized according to the identification information in the sorting result.
According to a second aspect, an embodiment of the present invention further discloses a video analysis apparatus, which is disposed in a node corresponding to a target area, and includes:
the acquisition module is used for acquiring data information acquired by at least one camera device in a target area, wherein the data information comprises identification information, time information and video information;
the selection module is used for selecting a target algorithm for identifying the video information from a preset model algorithm set based on the identification information and the time information;
the extraction module is used for extracting images of the video information according to the preset frequency to obtain image information with the preset frame number;
the identification module is used for identifying each frame of image information by using a target algorithm to obtain an identification result corresponding to each frame of image information;
and the determining module is used for determining whether the video information has abnormal information according to the identification result.
According to a third aspect, an embodiment of the present invention further discloses a computer device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the steps of the video analytics method as described in the first aspect or any one of the optional embodiments of the first aspect.
According to a fourth aspect, the present invention further discloses a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the video analysis method according to the first aspect or any one of the optional embodiments of the first aspect.
The technical scheme of the invention has the following advantages:
the video analysis method provided by the invention is implemented by nodes corresponding to the target area, and comprises the following steps: acquiring data information acquired by at least one camera device in a target area, wherein the data information comprises identification information, time information and video information; according to the identification information and the time information, selecting a corresponding target algorithm from a preset model algorithm set to identify the video information; and extracting images of the video information according to a preset frequency to obtain image information with a preset frame number, identifying each image information by using a target algorithm, and determining whether the corresponding video information has abnormal information according to each identification result.
The data information collected by at least one camera device in the target area is acquired through the node corresponding to the target area and identified and judged there, so that the waste of network resources caused by sending the data information to a central cluster server is avoided, and the problem of insufficient computing resources is alleviated; further, the target algorithm that best matches the video information can be accurately selected from the preset model algorithm set according to the identification information and the time information in the data information; the target algorithm then identifies only a preset number of frames of image information in the video information to obtain a preset number of identification results, which avoids identifying all image information in the video and reduces the waste of computing resources while ensuring accuracy; finally, whether the video information contains abnormal information is determined from these identification results, improving identification accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a specific example of a video analysis method according to an embodiment of the present invention;
fig. 2 is a flowchart of a specific example of a video analysis method according to an embodiment of the present invention;
fig. 3 is a flowchart of a specific example of a video analysis method according to an embodiment of the present invention;
fig. 4 is a flowchart of a specific example of a video analysis method according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a specific example of a video analysis method in the embodiment of the present invention;
fig. 6 is a schematic block diagram of a specific example of a video analysis apparatus in the embodiment of the present invention;
FIG. 7 is a diagram showing an exemplary embodiment of a computer device.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; the two elements may be directly connected or indirectly connected through an intermediate medium, or may be communicated with each other inside the two elements, or may be wirelessly connected or wired connected. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
In order to solve the problems of network resource waste and insufficient computing power when a central cluster processes video analysis of various communities, a node (also called an edge computing node) is arranged in each community, and the video of the corresponding community is identified and analyzed in the node.
To address the technical problems mentioned in the background art, an embodiment of the present application provides a video analysis method, as shown in fig. 1. A node is arranged in each community to complete video identification and analysis for that community. The node is provided with identification algorithms for the data information collected by all camera devices in the community, so that the data information can be identified and analyzed directly at the node.
Fig. 1 is a video analysis method according to an embodiment of the present invention, which is executed by a node corresponding to a target area, that is, the above-mentioned node disposed in a corresponding community (target area), and the method includes the following steps:
step 101, data information collected by at least one camera device in a target area is obtained.
Wherein the data information includes identification information, time information, and video information.
Illustratively, the target area may be a community, square, supermarket, mall, etc. that requires video analysis. The method comprises the steps that at least one camera device is arranged in a target area and used for collecting data information in the corresponding target area, wherein the data information comprises identification information, time information and video information, the identification information is area position information collected by the camera device or installation position information of the camera device, and the time information is time information of the data information collected by the camera device. The type of the target area is not limited in the embodiment of the present invention, and can be determined by those skilled in the art according to actual needs.
And 102, selecting a target algorithm for identifying the video information from a preset model algorithm set based on the identification information and the time information.
Illustratively, according to the identification information and the time information of the data information, the most suitable model algorithm is selected from a preset model algorithm set to identify the corresponding video information.
The preset model algorithm set consists of the model algorithms stored in the node; each model algorithm can be compressed and calibrated before storage, reducing its memory footprint on the node.
In a specific embodiment, the model algorithm can be compressed and accelerated by quantization, interlayer fusion and operator optimization. Since compression and acceleration may reduce identification accuracy, the compressed and accelerated model algorithm needs to be calibrated; calibration can be performed by training the compressed and accelerated model algorithm with a small amount of multi-scene picture data.
Alternatively, the method for training and calibrating the model algorithm after compression and acceleration can be realized by the following steps:
training a target algorithm according to a pre-acquired labeled sample;
and when the target algorithm reaches the preset precision, stopping training so as to identify the image information by using the trained target algorithm.
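The train-until-preset-precision loop described above can be sketched as follows; `model_step`, `evaluate`, and the default threshold are assumptions for illustration, not interfaces from the patent:

```python
def calibrate(model_step, evaluate, target_precision=0.9, max_epochs=100):
    """Train/calibrate a compressed model until it reaches a preset precision.

    `model_step` runs one training pass over the pre-acquired labeled
    samples; `evaluate` returns the current precision. Training stops as
    soon as the preset precision is reached.
    """
    for epoch in range(max_epochs):
        model_step()
        if evaluate() >= target_precision:
            return epoch + 1  # number of passes actually performed
    return max_epochs
```

A cap on epochs is added so the sketch terminates even if the preset precision is never reached, a safeguard the patent text does not mention.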
Optionally, as shown in fig. 2, the specific process of selecting the target algorithm can be implemented through the following steps:
step 1021, determining a first algorithm subset from the preset model algorithm set according to the identification information.
Wherein the first subset of algorithms comprises at least one candidate algorithm.
Step 1022, determining a target algorithm from the first subset of algorithms based on the time information.
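Steps 1021 and 1022 amount to a two-stage lookup, sketched below. The data layout (time-windowed candidate lists keyed by identification) is an assumption for illustration:

```python
def select_target_algorithm(algorithm_set, identification, hour):
    """Two-stage selection: the identification information narrows the
    preset set to a first algorithm subset, then the time information
    picks the target algorithm from that subset.
    """
    # Stage 1 (step 1021): candidates registered for this camera location
    first_subset = algorithm_set.get(identification, [])
    # Stage 2 (step 1022): candidate whose time window covers the timestamp
    for start_hour, end_hour, algorithm in first_subset:
        if start_hour <= hour < end_hour:
            return algorithm
    return None
```

Returning `None` when no window matches leaves the fallback policy to the caller.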
For example, a configuration file may be written for each model algorithm in the preset set, describing the algorithm's input data, stored file address, training precision and the like, so that the model algorithm can be called directly during use.
The types of the data information collected by each camera device are different, and the contents of the data information to be identified are also different, so that the model algorithm corresponding to each camera device is also different. Therefore, the selection of the proper target algorithm is an important guarantee for the accuracy of data information identification.
In a specific embodiment, when selecting the model algorithm, a set of model algorithms corresponding to the image capture device is selected based on the identification information of the image capture device. For example, if the camera is installed in the elevator, the selected model algorithm is the algorithm related to whether the elevator is faulty or not and whether the elevator is occupied or not.
Within the time span of the data information acquired by the camera device, the abnormal problems that mainly need to be judged differ between time periods, so for each time period the model algorithm to be used is selected from the chosen model algorithm set as the target algorithm. For example, during the morning rush hour, elevator overload is the main point of attention, while in other time periods overload and elevator failure can be checked alternately across time slices; of course, if computing power is sufficient, both overload and failure can also be judged during the morning peak. The embodiment of the invention does not limit the specific selection of the model algorithm, which can be determined by those skilled in the art according to actual needs.
And 103, extracting the image of the video information according to a preset frequency to obtain image information with a preset frame number.
Illustratively, in the process of identifying the video information, the video information is subjected to frame extraction at a preset frequency, and the image information after frame extraction is identified. Therefore, the calculation pressure of the node calculation force is reduced, and the identification accuracy is also ensured. The preset frequency may be preset in a configuration file of the model algorithm or in time information of the image capturing device, and the size of the preset frequency and the configuration mode of the preset frequency are not limited in the embodiment of the present invention, and can be determined by a person skilled in the art according to actual needs.
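The frame-extraction step reduces to deciding which frame indices to keep. A decoder-agnostic sketch (parameter names are assumptions):

```python
def sample_frame_indices(total_frames, fps, sample_hz, max_frames):
    """Indices of frames to extract when sampling a video at a preset
    frequency (`sample_hz`), keeping at most a preset number of frames
    (`max_frames`). Pure index arithmetic, so it works with any decoder.
    """
    # Decode every `step`-th frame to approximate the preset frequency
    step = max(1, round(fps / sample_hz))
    return list(range(0, total_frames, step))[:max_frames]
```

For a 25 fps stream sampled at 5 Hz this keeps every fifth frame, which is the kind of decimation that relieves the node's computing pressure while preserving accuracy.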
And 104, identifying each frame of image information by using a target algorithm to obtain an identification result corresponding to each frame of image information.
And 105, determining whether the video information has abnormal information according to the identification result.
Illustratively, after each frame of image information obtained by frame extraction is identified by the target algorithm, a corresponding identification result is obtained, and whether the corresponding video segment contains abnormal information is judged according to all the identification results.
In a specific embodiment, as shown in fig. 3, the process of determining whether there is abnormal information according to the recognition result is implemented by the following steps:
step 1051, if the video information is determined to have abnormal information according to at least two recognition results, prompt information is generated.
Step 1052, if each recognition result is inconsistent, determining that no abnormal information exists in the video information.
Illustratively, within the time-series window of the corresponding time period, if all recognition results are consistent, the abnormality information is output; if at least one recognition result is inconsistent, no abnormality information is output.
For example, when judging whether the elevator door is being blocked, if a person or vehicle is identified blocking the elevator door continuously for 3 s so that the door cannot close, abnormality information for the blocked elevator is output and rendered on the video information interface, for example as a target frame or text; if at some moment within the 3 s no obstruction is identified, no abnormality information is output, and the elevator is considered to be operating normally.
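One plausible reading of this consistency rule can be sketched as follows (an illustrative assumption: anomaly information is emitted only when every sampled frame agrees on the same label):

```python
def detect_anomaly(results):
    """Return the anomaly label if all per-frame recognition results agree
    on it, otherwise None (the segment is treated as normal)."""
    if not results:
        return None
    first = results[0]
    # A single inconsistent frame means no abnormality is output
    if first is not None and all(r == first for r in results):
        return first
    return None
```

This matches the elevator example: three seconds of frames all reporting an obstruction yields the abnormality, while any clear frame in between suppresses it.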
On the basis of the foregoing embodiment, an embodiment of the present invention further provides another video analysis method; details already described above are not repeated. This embodiment considers the case of multiple image capture devices: a corresponding model algorithm is determined in the node according to the data information of each image capture device, and after the model algorithms are determined, all video information in all the data information is identified in parallel.
Each model algorithm has different requirements for its input data, so before the corresponding image information is input into a model algorithm, it needs to be preprocessed according to those requirements.
Optionally, according to a target algorithm, preprocessing each frame of image information in the video information to obtain each frame of image information after preprocessing, and inputting the frame of image information into the target algorithm for identification.
Illustratively, the image information obtained by frame extraction can be processed by a preprocessing plug-in, and operations such as scaling, aspect ratio keeping and filling, clipping and the like can be performed on the image information according to the requirements of the model algorithm input data, wherein the preprocessing process is different due to different model algorithms. The embodiment of the present invention does not limit the pretreatment method, and those skilled in the art can determine the pretreatment method according to actual needs.
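The "aspect ratio keeping and filling" operation mentioned above reduces to computing a scale factor and symmetric padding. A sketch of this one preprocessing step (the exact pipeline is algorithm-specific, and the function name is assumed):

```python
def letterbox_params(src_w, src_h, dst_w, dst_h):
    """Parameters for scaling with aspect-ratio keeping and filling: the
    resized size and the left/top padding needed to fit a src_w x src_h
    frame into a dst_w x dst_h model input without distortion.
    """
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_left = (dst_w - new_w) // 2   # symmetric horizontal padding
    pad_top = (dst_h - new_h) // 2    # symmetric vertical padding
    return new_w, new_h, pad_left, pad_top
```

For a 1920x1080 frame going into a 640x640 model input, the frame is scaled to 640x360 and padded by 140 pixels top and bottom.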
In another alternative embodiment, when there are a plurality of cameras and tracking of the target object is required in the data information collected by the cameras, the moving path of the target object is determined in the following manner, as shown in fig. 4.
Step 401, respectively identifying each frame of image information in the video information of each camera by using a target algorithm, and obtaining an identification result corresponding to the image information.
Step 402, if the identification result corresponding to the image information of at least one camera device includes the object to be identified, recording the identification information and the time information of the image information where the object to be identified is located.
Step 403, sorting all the image information containing the object to be recognized according to the time information of that image information, and obtaining a sorting result.
Step 404, determining the moving path of the object to be recognized according to the identification information in the sorting result.
For example, when tracking a target object and determining its moving path, the video information from all image capturing devices is identified using the target identification algorithm, and the identification information and time information corresponding to the video information in which the target object is recognized are stored. All the time information is sorted in chronological order, and the moving path of the target object is determined from the identification information corresponding to the sorted result, thereby realizing cross-camera tracking of the target object.
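The sort-then-read-off procedure can be sketched directly; the `(time, camera)` pair layout is an assumption for illustration:

```python
def moving_path(detections):
    """Derive the target object's moving path across cameras.

    `detections` holds (time_info, identification_info) pairs recorded
    wherever a recognition result contained the target. Sorting them
    chronologically and reading off the camera identifications (collapsing
    consecutive repeats from the same camera) yields the path.
    """
    path = []
    for _, camera in sorted(detections):
        if not path or path[-1] != camera:
            path.append(camera)
    return path
```

Collapsing consecutive repeats is a design choice here: a target lingering in one camera's view contributes a single hop to the path rather than one per frame.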
A video analysis method is introduced in a specific embodiment, as shown in fig. 5. A plurality of video streams are accessed, i.e. the video information in the data information acquired by the plurality of image capturing devices in the above embodiment. The polling rule is written into a preset configuration file, which specifies which model algorithm is used to identify the video information collected by each camera device in each time period; polling configuration parsing analyzes this configuration file and matches the input video streams with model algorithms. After matching, the matched model algorithms are loaded from the node memory. The model algorithms stored in the node need to be compressed, integrated and retrained to reduce memory occupation and improve their accuracy, and configuration files are written recording each model algorithm's input data, training precision, storage address and other information, so as to improve the calling efficiency of the model algorithms.
After the model algorithms are matched, the multiple video streams are computed in parallel according to the matched model algorithms. The computation can proceed by single-frame decoding of each video stream and preprocessing the decoded images according to the matched model algorithm so that the image data meet the algorithm's data standard, further increasing identification accuracy; the preprocessed image data are input into the model algorithm to obtain a structured recognition result, which is transmitted through Kafka middleware to the node back end for analysis, where the multi-frame structured result sequence within a time-series window is integrated and abnormality information is output according to the actual condition of the video stream.
By executing the method, the data information collected by at least one camera device in the target area is acquired through the node corresponding to the target area and identified and judged there, so that the waste of network resources caused by sending the data information to the central cluster server is avoided, and the problem of insufficient computing resources is alleviated; according to the identification information and the time information in the data information, the target algorithm that best matches the video information can be accurately selected from the preset model algorithm set; identifying only a preset number of frames of image information in the video information with the target algorithm avoids identifying all image information in the video, reducing the waste of computing resources while ensuring accuracy; and determining whether the video information contains abnormal information from the preset number of identification results improves identification accuracy.
The above describes embodiments of the video analysis method provided in the present application; other embodiments of the video analysis apparatus provided in the present application are described below, with specific reference to the following.
An embodiment of the present invention further discloses a video analysis apparatus. As shown in fig. 6, the apparatus includes:
the acquiring module 601 is configured to acquire data information acquired by at least one camera in a target area.
The data information includes identification information, time information, and video information.
A selecting module 602, configured to select a target algorithm for identifying the video information from a preset model algorithm set based on the identification information and the time information.
The extracting module 603 is configured to perform image extraction on the video information according to a preset frequency to obtain image information with a preset number of frames.
The identifying module 604 is configured to identify each frame of image information by using a target algorithm, so as to obtain an identification result corresponding to each frame of image information.
And a determining module 605, configured to determine whether the video information has abnormal information according to the identification result.
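The extracting module's sampling at a preset frequency reduces to simple index arithmetic; the following sketch computes which frame indices to decode (the frame rate and sampling frequency are illustrative values, not taken from the patent):

```python
def sample_frame_indices(total_frames: int, fps: float, hz: float) -> list[int]:
    """Indices of the frames to extract when sampling a video stream at
    `hz` frames per second instead of decoding every frame."""
    step = max(1, round(fps / hz))
    return list(range(0, total_frames, step))


# A 25 fps stream sampled at 5 Hz keeps every 5th frame.
indices = sample_frame_indices(total_frames=100, fps=25.0, hz=5.0)
```

This is how the "image information with a preset number of frames" can be obtained without identifying all image information in the video.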
As an optional implementation manner of the present invention, the determining module 605 specifically includes:
and the first determining submodule is used for generating prompt information if it is determined, according to at least two identification results, that abnormal information exists in the video information.
As an optional embodiment of the present invention, the determining module 605 further includes:
and the second determining submodule is used for determining that no abnormal information exists in the video information if the identification results are inconsistent with one another.
As an optional embodiment of the present invention, the selecting module 602 specifically includes:
the first selection sub-module is used for determining a first algorithm subset from a preset model algorithm set according to the identification information, wherein the first algorithm subset comprises at least one candidate algorithm;
and the second selection submodule is used for determining the target algorithm from the first algorithm subset according to the time information.
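The two-stage selection performed by these submodules can be sketched as a lookup keyed first on identification information, then on time information; the registry contents and algorithm names below are hypothetical placeholders:

```python
# Hypothetical preset model algorithm set: identification info (camera ID)
# maps to candidate algorithms, each tagged with the hours it suits.
MODEL_ALGORITHM_SET = {
    "cam-entrance": [
        {"name": "daytime_person_detector", "hours": range(6, 18)},
        {"name": "lowlight_person_detector", "hours": range(18, 24)},
    ],
}


def select_target_algorithm(identification: str, hour: int) -> str:
    # Step 1: determine the first algorithm subset from the identification info.
    candidates = MODEL_ALGORITHM_SET[identification]
    # Step 2: determine the target algorithm from that subset by time info.
    for algo in candidates:
        if hour in algo["hours"]:
            return algo["name"]
    return candidates[0]["name"]  # fall back to the first candidate


day_choice = select_target_algorithm("cam-entrance", 10)
night_choice = select_target_algorithm("cam-entrance", 21)
```

Keying on time of day is one plausible reading of "time information"; the patent leaves the exact matching rule open.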
As an optional embodiment of the present invention, before the identifying module 604, the apparatus further comprises:
and the preprocessing module is used for preprocessing each frame of image information in the video information according to the target algorithm to obtain preprocessed image information of each frame, which is then input into the target algorithm for identification.
As an optional embodiment of the present invention, before the identifying module 604, the apparatus further comprises:
the training submodule is used for training a target algorithm according to the pre-acquired labeled sample;
and the correction submodule is used for stopping the training when the target algorithm reaches a preset precision, so that the image information can be identified using the trained target algorithm.
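The train-then-stop-at-preset-precision behaviour of these two submodules amounts to an early-stopping loop; the step and evaluation functions below are toy stand-ins, not the patent's training procedure:

```python
def train_until_precision(train_step, evaluate, target_precision: float,
                          max_epochs: int = 100) -> tuple[int, float]:
    """Run training epochs on the pre-acquired labeled samples until the
    evaluated precision reaches the preset threshold, then stop."""
    precision = 0.0
    for epoch in range(1, max_epochs + 1):
        train_step()              # one pass over the labeled samples
        precision = evaluate()    # precision measured after this epoch
        if precision >= target_precision:
            return epoch, precision
    return max_epochs, precision


# Toy stand-ins: each "epoch" improves precision by a fixed amount.
state = {"p": 0.0}


def step():
    state["p"] = min(1.0, state["p"] + 0.2)


def evaluate():
    return state["p"]


epochs, reached = train_until_precision(step, evaluate, target_precision=0.9)
```

The cap on `max_epochs` guards against a model that never reaches the preset precision, a detail the patent does not specify.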
As an optional embodiment of the present invention, when the image pickup apparatus includes a plurality of, the apparatus further includes:
the recognition submodule is used for respectively recognizing each frame of image information in the video information of each camera by using a target algorithm to obtain a recognition result corresponding to the image information;
the recording submodule is used for recording the identification information and the time information of the image information in which the object to be recognized is located, if the recognition result corresponding to the image information of at least one camera device includes the object to be recognized;
the sorting submodule is used for sorting all the image information containing the object to be recognized according to the time information of that image information, and obtaining a sorting result;
and the determining submodule is used for determining the moving path of the object to be identified according to the identification information in the sequencing result.
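Taken together, the recording, sorting, and determining submodules recover a cross-camera moving path by ordering sightings by time information and reading off the identification information; the camera IDs and timestamps below are invented for illustration:

```python
from datetime import datetime

# Hypothetical recognition records in which the object to be recognized
# appeared, as (identification info, time info) pairs from three cameras.
records = [
    ("cam-3", datetime(2022, 5, 17, 9, 2)),
    ("cam-1", datetime(2022, 5, 17, 9, 0)),
    ("cam-2", datetime(2022, 5, 17, 9, 1)),
]


def moving_path(records):
    # Sort by time info, then read off the identification info in order:
    # the sequence of camera IDs is the object's moving path.
    ordered = sorted(records, key=lambda rec: rec[1])
    return [camera_id for camera_id, _ in ordered]


path = moving_path(records)
```

With the records above, the object is inferred to have moved past cam-1, then cam-2, then cam-3.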
The functions executed by each component in the video analysis apparatus provided in the embodiment of the present invention have been described in detail in any of the above method embodiments, and therefore, are not described herein again.
With this apparatus, the node corresponding to the target area acquires the data information collected by at least one camera device in the target area and performs the recognition and judgment itself, which avoids the waste of network resources caused by sending all data information in the target area to the central cluster server and relieves congestion and the shortage of computing resources. According to the identification information and the time information in the data information, the target algorithm that best matches the video information can be accurately selected from the preset model algorithm set. The target algorithm identifies only a preset number of frames of image information in the video information to obtain the corresponding recognition results, which avoids identifying every frame in the video information and reduces the waste of computing resources while ensuring accuracy. Finally, whether the video information contains abnormal information is determined according to the recognition results of the preset number of frames, which improves recognition accuracy.
An embodiment of the present invention further provides a computer device. As shown in fig. 7, the computer device may include a processor 701 and a memory 702, where the processor 701 and the memory 702 may be connected by a bus or in another manner; fig. 7 takes connection by a bus as an example.
The processor 701 may be a Central Processing Unit (CPU). The processor 701 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory 702, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the video analysis method in the embodiments of the present invention. By running the non-transitory software programs, instructions, and modules stored in the memory 702, the processor 701 executes various functional applications and data processing, that is, implements the video analysis method in the above method embodiments.
The memory 702 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor 701, and the like. Further, the memory 702 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 702 may optionally include memory located remotely from processor 701, which may be connected to processor 701 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 702 and, when executed by the processor 701, perform the video analysis method as in the embodiment shown in fig. 1.
The details of the computer device can be understood with reference to the corresponding related descriptions and effects in the embodiment shown in fig. 1, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory, a Hard Disk Drive (HDD), a Solid-State Drive (SSD), or the like; the storage medium may also comprise a combination of the above kinds of memories.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. A video analysis method, performed by a node corresponding to a target area, the method comprising:
acquiring data information acquired by at least one camera device in the target area, wherein the data information comprises identification information, time information and video information;
selecting a target algorithm for identifying the video information from a preset model algorithm set based on the identification information and the time information;
performing image extraction on the video information according to a preset frequency to obtain image information with a preset frame number;
identifying each frame of image information by using the target algorithm to obtain an identification result corresponding to each frame of image information;
and determining whether the video information has abnormal information or not according to the identification result.
2. The method according to claim 1, wherein the determining whether the video information has abnormal information according to the identification result specifically includes:
and if the video information is determined to have abnormal information according to at least two identification results, generating prompt information.
3. The method of claim 2, further comprising:
and if the identification results are inconsistent with one another, determining that no abnormal information exists in the video information.
4. The method according to any one of claims 1 to 3, wherein the selecting a target algorithm for identifying the video information from a preset model algorithm set based on the identification information and the time information specifically comprises:
determining a first algorithm subset from the preset model algorithm set according to the identification information, wherein the first algorithm subset comprises at least one candidate algorithm;
determining the target algorithm from the first subset of algorithms based on the time information.
5. The method according to any one of claims 1 to 3, wherein before the identifying the image information of each frame by using the target algorithm and obtaining the identification result corresponding to the image information of each frame, the method further comprises:
and preprocessing each frame of image information in the video information according to the target algorithm to obtain each frame of image information after preprocessing, and inputting the image information into the target algorithm for identification.
6. The method according to any one of claims 1 to 3, wherein before the step of identifying each of the image information by the target algorithm to obtain a preset number of identification results corresponding to the image information, the method further comprises:
training the target algorithm according to the pre-acquired labeled sample;
and when the target algorithm reaches the preset precision, stopping training so as to identify the image information by using the trained target algorithm.
7. The method according to any one of claims 1 to 3, wherein when a plurality of image pickup devices are included, the method further comprises:
respectively identifying each frame of image information in the video information of each camera by using the target algorithm to obtain an identification result corresponding to the image information;
if the identification result corresponding to the image information of the at least one camera device comprises an object to be identified, recording identification information and time information of the image information where the object to be identified is located;
sequencing all the image information containing the object to be identified according to all the time information containing the image information of the object to be identified, and acquiring a sequencing result;
and determining the moving path of the object to be recognized according to the identification information in the sequencing result.
8. A video analysis apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring data information acquired by at least one camera device in a target area, wherein the data information comprises identification information, time information and video information;
the selection module is used for selecting a target algorithm for identifying the video information from a preset model algorithm set based on the identification information and the time information;
the extraction module is used for extracting images of the video information according to a preset frequency to obtain image information with a preset frame number;
the identification module is used for identifying each frame of image information by using the target algorithm to obtain an identification result corresponding to each frame of image information;
and the determining module is used for determining whether the video information has abnormal information or not according to the identification result.
9. A computer device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the steps of the video analytics method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the video analysis method according to any one of claims 1 to 7.
CN202210542759.7A 2022-05-17 2022-05-17 Video analysis method and device and computer equipment Pending CN115147752A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210542759.7A CN115147752A (en) 2022-05-17 2022-05-17 Video analysis method and device and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210542759.7A CN115147752A (en) 2022-05-17 2022-05-17 Video analysis method and device and computer equipment

Publications (1)

Publication Number Publication Date
CN115147752A true CN115147752A (en) 2022-10-04

Family

ID=83405916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210542759.7A Pending CN115147752A (en) 2022-05-17 2022-05-17 Video analysis method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN115147752A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116469025A (en) * 2022-12-30 2023-07-21 以萨技术股份有限公司 Processing method for identifying task, electronic equipment and storage medium
CN116469025B (en) * 2022-12-30 2023-11-24 以萨技术股份有限公司 Processing method for identifying task, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
EP2688296A1 (en) Video monitoring system and method
CN110348522B (en) Image detection and identification method and system, electronic equipment, and image classification network optimization method and system
CN111476191B (en) Artificial intelligent image processing method based on intelligent traffic and big data cloud server
CN108734684B (en) Image background subtraction for dynamic illumination scene
WO2022105019A1 (en) Snapshot quality evaluation method and apparatus for vehicle bayonet device, and readable medium
CN115018840B (en) Method, system and device for detecting cracks of precision casting
CN113591758A (en) Human behavior recognition model training method and device and computer equipment
CN110796039B (en) Face flaw detection method and device, electronic equipment and storage medium
CN115147752A (en) Video analysis method and device and computer equipment
CN111126112B (en) Candidate region determination method and device
CN115131634A (en) Image recognition method, device, equipment, storage medium and computer program product
CN114139016A (en) Data processing method and system for intelligent cell
CN114663871A (en) Image recognition method, training method, device, system and storage medium
CN113902993A (en) Environmental state analysis method and system based on environmental monitoring
CN112287905A (en) Vehicle damage identification method, device, equipment and storage medium
CN108073854A (en) A kind of detection method and device of scene inspection
EP4332910A1 (en) Behavior detection method, electronic device, and computer readable storage medium
CN115147756A (en) Video stream processing method and device, electronic equipment and storage medium
CN113902412A (en) Environment monitoring method based on data processing
CN114039279A (en) Control cabinet monitoring method and system in rail transit station
WO2020185432A1 (en) Pre-processing image frames based on camera statistics
CN115272831B (en) Transmission method and system for monitoring images of suspension state of contact network
CN113542866B (en) Video processing method, device, equipment and computer readable storage medium
CN116977782A (en) Training method and related device for small sample detection model
CN114173190B (en) Video data detection method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination