CN117201873B - Intelligent analysis method and device for video image - Google Patents


Info

Publication number
CN117201873B
Authority
CN
China
Prior art keywords
video
processing
index structure
sampling
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311466662.3A
Other languages
Chinese (zh)
Other versions
CN117201873A (en)
Inventor
谷志军
游望星
连启慧
肖浩然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Boyuanxiang Electronic Technology Co ltd
Original Assignee
Hunan Boyuanxiang Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Boyuanxiang Electronic Technology Co ltd filed Critical Hunan Boyuanxiang Electronic Technology Co ltd
Priority to CN202311466662.3A
Publication of CN117201873A
Application granted
Publication of CN117201873B
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses an intelligent analysis method and device for video images. The method comprises the following steps: acquiring a video to be analyzed, and an analysis target for it, through an index control module; forming a plurality of video clips through a video segmentation module; establishing, in the index control module, an index structure model for each video clip; establishing in the index control module a sub-control unit for controlling each video clip, sending the analysis target and the corresponding video clip to the different sub-control units, and allocating a group of processing units to each sub-control unit; each sub-control unit then arranges its processing units, according to the architecture of the corresponding index structure model and the load parameters of the allocated group of processing units, into a processing unit index structure, so that fast processing of the video clip is achieved by exploiting the load differences among the processing units in the processing unit index structure. The invention helps to solve the problem of poor real-time performance of video image processing in the prior art.

Description

Intelligent analysis method and device for video image
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an intelligent video image analysis method and an image analysis device.
Background
When video images are analyzed in the traditional scheme, an analysis terminal must analyze the video frames one by one; because a video contains a great many frames, processing is difficult and real-time performance is poor.
There is therefore a need for an intelligent analysis method and device for video images that improve the real-time performance of video image processing.
Disclosure of Invention
The invention mainly aims to provide an intelligent analysis method for video images and an image analysis device, which aim to solve the problem of poor real-time performance of video image processing in the prior art.
In order to achieve the above purpose, the invention provides an intelligent analysis method for video images, which is applied to an image analysis device, wherein the image analysis device comprises an index control module, and a video segmentation module and a processing module which are respectively connected with the index control module in a communication way, and the processing module comprises a plurality of processing units; the method comprises the following steps:
acquiring a video to be analyzed and an analysis target of the video to be analyzed through an index control module;
splitting the video to be analyzed through a video splitting module to form a plurality of video clips;
The index control module establishes an index structure model for each video clip according to the number of video clips formed by segmentation and the number of processing units, wherein the index structure model comprises a plurality of processing layers, each containing a plurality of processing nodes; between two adjacent processing layers, a processing node of the earlier layer serves as a parent node linked to child nodes in the later layer, thereby constructing the hierarchy and the index relations among the processing nodes of the index structure model;
establishing sub-control units for controlling each video clip in the index control module, sending the analysis target and the corresponding video clip to different sub-control units, and distributing a group of processing units for each sub-control unit;
each sub-control unit arranges its allocated group of processing units, according to the architecture of the corresponding index structure model and the units' load parameters, into a processing unit index structure, so that fast processing of the video clip is achieved by exploiting the load differences among the processing units in the processing unit index structure.
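The overall flow described in the steps above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: the function name `analyze`, the per-clip load lists, and the dictionary fields are all assumed for the example.

```python
def analyze(video_frames, target, num_clips, unit_loads_per_clip):
    """Sketch of the claimed flow: split the video into clips, give each
    clip a sub-controller with its own group of processing units, and
    sort each group by load so the least-loaded unit leads its clip."""
    clip_len = -(-len(video_frames) // num_clips)   # ceiling division
    clips = [video_frames[i:i + clip_len]
             for i in range(0, len(video_frames), clip_len)]
    controllers = []
    for clip, loads in zip(clips, unit_loads_per_clip):
        # ascending load order: lowest-load unit receives tasks first
        order = sorted(range(len(loads)), key=lambda u: loads[u])
        controllers.append({"clip": clip, "target": target,
                            "unit_order": order})
    return controllers

ctrls = analyze(list(range(90)), "target A", 3,
                [[0.5, 0.2], [0.1, 0.9], [0.3, 0.3]])
# → 3 sub-controllers, each holding its clip and a load-sorted unit order
```

Each returned dictionary stands in for one sub-control unit; the later sections refine how the unit order is mapped onto the layered index structure.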
Preferably, the step of establishing an index structure model for each video clip by the index control module according to the number of video clips and the number of processing units formed by segmentation includes:
The index control module establishes an index structure model for each video segment;
the index control module acquires the number of video frames corresponding to each video segment, and determines the number of processing nodes of a first processing layer of the index structure model corresponding to the video segment according to the number of video frames;
according to the number of processing nodes of the first processing layer, the numbers of nodes of the other processing layers in the index structure model are established, wherein the number of nodes of each processing layer increases as the layer level in the index structure model increases;
determining a corresponding parent node in a previous processing layer for a node in each processing layer after the first processing layer in the index structure model;
and establishing a corresponding index structure model for each video segment according to the number of nodes of each processing layer and the parent node corresponding to the node in each processing layer after the first processing layer.
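The model-building steps above can be illustrated with a small Python sketch. The class and function names (`Node`, `build_index_model`), the fixed per-layer fan-out, and the round-robin parent assignment are assumptions made for the example; the patent only requires that layer sizes grow and that every node after the first layer has a parent in the previous layer.

```python
from dataclasses import dataclass, field

@dataclass(eq=False)          # identity comparison; the graph is cyclic
class Node:
    layer: int                              # processing-layer index, from 1
    parent: "Node | None" = None            # parent node in previous layer
    children: list = field(default_factory=list, repr=False)

def build_index_model(first_layer_nodes, num_layers, fanout=2):
    """Build a layered index structure model: layer sizes grow by `fanout`
    per layer, and every node after layer 1 is linked to a parent node in
    the previous layer (round-robin assignment)."""
    layers = [[Node(layer=1) for _ in range(first_layer_nodes)]]
    for j in range(2, num_layers + 1):
        prev, cur = layers[-1], []
        for idx in range(len(prev) * fanout):
            parent = prev[idx % len(prev)]      # parent in previous layer
            node = Node(layer=j, parent=parent)
            parent.children.append(node)
            cur.append(node)
        layers.append(cur)
    return layers

model = build_index_model(first_layer_nodes=2, num_layers=4)
# layer sizes: 2, 4, 8, 16
```

The four-layer minimum and the increasing layer sizes match the later embodiment description.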
Preferably, the step of establishing a sub-control unit for controlling each video clip in the index control module and sending the analysis target and the corresponding video clip to different sub-control units includes:
establishing a sub-control unit for controlling each video clip according to the number of the video clips in the index control module;
Establishing a one-to-one correspondence between video clips and sub-control units;
and sending the analysis targets and the video clips to the corresponding sub-control units according to the one-to-one correspondence.
Preferably, the step in which each sub-control unit arranges its allocated group of processing units into a processing unit index structure, according to the architecture of the corresponding index structure model and the units' load parameters, so as to process the video clip quickly by exploiting their load differences, comprises the following steps:
the sub-control unit detects the load parameter of each processing unit in its allocated group and arranges the processing units into a processing unit sequence in ascending order of load parameter;
according to the processing unit sequence and the node order of the index structure model, each processing unit in the sequence is placed in turn on a node, so that the arrangement forms the processing unit index structure;
and distributing each video frame in the video clips distributed by the sub-control unit to a processing unit of each processing layer in the processing unit index structure for quick processing.
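A minimal sketch of the two arrangement steps, assuming load parameters are plain numbers and the model is summarized by its per-layer sizes (both simplifications for the example):

```python
def arrange_units(load_params, layer_sizes):
    """Sort processing units by load parameter (ascending) and place them
    layer by layer: the least-loaded units fill the first (smallest)
    layer, so they receive analysis tasks first."""
    order = sorted(range(len(load_params)), key=lambda u: load_params[u])
    structure, pos = [], 0
    for size in layer_sizes:
        structure.append(order[pos:pos + size])   # unit ids for this layer
        pos += size
    return structure

loads = [0.7, 0.1, 0.4, 0.3, 0.9, 0.2]    # hypothetical load parameters
structure = arrange_units(loads, [2, 4])
# units 1 and 5 (lowest load) form layer 1; the rest form layer 2
```

Because units within one layer have adjacent positions in the sorted sequence, their load parameters are close, which is what lets results from the same layer arrive at similar times.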
Preferably, the analysis target is a target image extracted from a video image; the step of allocating each video frame in the video clips allocated by the sub-control unit to the processing unit of each processing layer in the processing unit index structure to perform rapid processing includes:
determining a sampling interval according to the number of video frames in the video clips distributed by the sub-control units and the number of processing units of the first processing layer in the processing unit index structure corresponding to the sub-control units;
extracting sampling video frames from the video clips according to sampling intervals by adopting a processing unit of a first processing layer in a processing unit index structure corresponding to the sub-control unit;
the processing unit of the first processing layer extracts a target image from the sampled video frames, and takes the sampled video frames with the target image as core target video frames;
distributing the first target video frames, which lie within a first sampling range consisting of a preset sampling number of frames before and after each core target video frame, to the second-layer processing units in the processing unit index structure for target image extraction;
defining, according to the boundary video frames containing the target image in each first sampling range, the core target video frame corresponding to the first target video frames, and the preset sampling number, a second sampling range that does not overlap the first sampling range;
Distributing a second target video frame in a second sampling range to a third layer processing unit to extract a target image;
judging whether a target image exists in the second target video frame or not;
if so, according to the boundary video frames of the target image in each second target video frame and the preset sampling quantity, a third sampling range which is not overlapped with the second sampling range and the first target video frame is defined; distributing a third target video frame in a third sampling range to a fourth layer processing unit to extract a target image;
if not, ending the extraction of the target image of the round;
and forming an image analysis result according to all the target images extracted by the processing unit index structure.
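The first two stages of this coarse-to-fine search can be sketched as follows. The detector callback `has_target`, the neighbourhood width `span`, and the simplified two-layer depth are assumptions for the example; in the patent, further layers keep extending the search ranges.

```python
def coarse_to_fine(num_frames, has_target, n_first_layer, span):
    """Layer 1 samples frames at a fixed interval; around every sampled
    frame containing the target (a 'core target frame'), layer 2 examines
    `span` neighbouring frames on each side. `has_target(frame)` stands
    in for the per-frame target-image extractor."""
    interval = max(1, num_frames // n_first_layer)
    sampled = list(range(0, num_frames, interval))          # layer 1
    cores = [f for f in sampled if has_target(f)]
    hits = set(cores)
    for c in cores:                                         # layer 2
        for f in range(max(0, c - span), min(num_frames, c + span + 1)):
            if has_target(f):
                hits.add(f)
    return sorted(hits)

# toy clip of 30 frames with the target present in frames 10..14
found = coarse_to_fine(30, lambda f: 10 <= f <= 14,
                       n_first_layer=6, span=2)
```

Here layer 1 samples frames 0, 5, 10, 15, 20, 25 and finds the target only at frame 10; layer 2 then recovers frames 10 to 12. Frames 13 and 14 would be reached by the non-overlapping second sampling range handled by the third layer.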
Preferably, the step of determining the sampling interval according to the number of video frames in the video clips allocated to the sub-control unit and the number of processing units of the first processing layer in the processing unit index structure corresponding to the sub-control unit includes:
acquiring a preset maximum sampling interval;
determining a temporary sampling interval according to the number of video frames in the video clips distributed by the sub-control unit and the number of processing units of the first processing layer in the processing unit index structure corresponding to the sub-control unit;
Judging whether the temporary sampling interval does not exceed the maximum sampling interval;
if yes, determining the temporary sampling interval as a sampling interval, so that a processing unit of the first processing layer extracts all sampled video frames through one-time sampling;
if not, taking the maximum sampling interval as the sampling interval, so that the processing unit of the first processing layer extracts all the sampled video frames through multiple times of sampling.
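The interval-capping rule above can be sketched in Python. Treating the number of passes as the ratio of the tentative interval to the cap is an assumption for the example; the patent only states that exceeding the cap forces multiple samplings.

```python
import math

def sampling_interval(num_frames, first_layer_units, max_interval):
    """Temporary interval spreads the first-layer units evenly over the
    clip; if it would exceed the preset maximum, fall back to the
    maximum, which forces the first layer to sample in several passes."""
    tentative = math.ceil(num_frames / first_layer_units)
    if tentative <= max_interval:
        return tentative, 1                       # one sampling pass
    passes = math.ceil(tentative / max_interval)  # multiple passes needed
    return max_interval, passes

interval, passes = sampling_interval(1000, 10, 50)
# 1000 frames over 10 units would need interval 100 > cap 50,
# so the capped interval 50 is used and two passes are required
```

With 200 frames instead, the tentative interval 20 is below the cap, so a single pass suffices.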
Preferably, the step of forming the image analysis result according to all the target images extracted by the processing unit index structure includes:
acquiring video frame numbers corresponding to all extracted target images;
acquiring a time identification point corresponding to a video frame sequence number;
acquiring the positions of all extracted target images in a video frame;
and forming an image analysis result according to the time identification point corresponding to the video frame number of the target image and the corresponding position.
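A sketch of assembling the analysis result from the three pieces of information listed above. The record layout, the bounding-box tuple, and deriving the time identification point as frame number divided by frame rate are all assumptions for the example.

```python
def analysis_result(hits, fps):
    """For each extracted target image, record the video frame number,
    the corresponding time identification point, and the target's
    position inside the frame. `hits` maps frame number to a
    hypothetical bounding box (x, y, w, h)."""
    return [{"frame": f,
             "time_s": round(f / fps, 3),   # time identification point
             "position": box}
            for f, box in sorted(hits.items())]

result = analysis_result({50: (12, 30, 64, 64), 25: (10, 28, 64, 64)},
                         fps=25.0)
# → frame 25 at t = 1.0 s, then frame 50 at t = 2.0 s
```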
Preferably, the number of nodes of each processing layer in each index structure model is determined by:
$$N_{i,j} \;=\; \prod_{k=1}^{j} c_{i,k} \;+\; r_{i,j}, \qquad 1 \le i \le I,\; 1 \le j \le J_i,$$
wherein $i$ is the sequence number of the index structure model and $j$ is the sequence number of the processing layer in the index structure model; $I$ is the number of index structure models and $J_i$ is the number of processing layers in the $i$-th index structure model; $N_{i,j}$ is the total number of processing nodes of the $j$-th processing layer in the $i$-th index structure model; $N_{i,1}$ is the number of processing nodes of the first processing layer in the $i$-th index structure model; $c_{i,k}$, with $c_{i,k} \ge 2$ for $k \ge 2$ and the convention $c_{i,1} = N_{i,1}$, is the number of child nodes that each parent node in the $(k-1)$-th processing layer links to in the $k$-th processing layer of the $i$-th index structure model; $r_{i,j} \ge 0$ is the number of redundant nodes of the $j$-th processing layer in the $i$-th index structure model.
$N_{i,1}$ is determined from $F_i$, the number of video frames of the $i$-th video clip corresponding to the $i$-th index structure model: a mapping relation table, which records the mapping between video frame counts and first-processing-layer node counts, is queried with $F_i$ to obtain $N_{i,1}$.
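Assuming the node-count rule $N_{i,j} = \prod_{k=1}^{j} c_{i,k} + r_{i,j}$ implied by the symbol definitions above (with $c_{i,1}$ equal to the first-layer node count), the per-layer totals for one model can be computed as follows; the function name and inputs are illustrative.

```python
def layer_nodes(c, r):
    """Per-layer node counts: N_j = (c_1 * c_2 * ... * c_j) + r_j,
    where c[0] is the first-layer node count, c[k] for k >= 1 is the
    per-parent fan-out into layer k+1, and r[j] is the number of
    redundant nodes assigned to layer j+1."""
    counts, prod = [], 1
    for ck, rj in zip(c, r):
        prod *= ck
        counts.append(prod + rj)
    return counts

# 3 first-layer nodes, fan-out 2 thereafter, one redundant node on layer 4
counts = layer_nodes([3, 2, 2, 2], [0, 0, 0, 1])
# → [3, 6, 12, 25]
```

The redundant-node terms let leftover processing units be absorbed into existing layers, as the embodiment describes later.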
Preferably, the specific way of defining the second sampling range is:
determining the boundary video frames in which the target image exists within the first sampling range, the boundary video frames being the video frame of minimum frame order and the video frame of maximum frame order in which the target image exists within the first sampling range;
determining the first frame-order difference, between the core target video frame corresponding to the first target video frames and the minimum boundary video frame, and the second frame-order difference, between the maximum boundary video frame and the core target video frame;
determining the sampling direction of the second sampling range according to the magnitude relation between the first frame-order difference and the second frame-order difference, specifically:
when $\Delta_1 > \Delta_2$, the sampling direction of the second sampling range is the direction of decreasing frame order;
when $\Delta_1 < \Delta_2$, the sampling direction of the second sampling range is the direction of increasing frame order;
when $\Delta_1 = \Delta_2$, the sampling directions of the second sampling range are both the direction of decreasing frame order and the direction of increasing frame order;
wherein $\Delta_1 = s_0 - s_{\min}$ is the first frame-order difference, $\Delta_2 = s_{\max} - s_0$ is the second frame-order difference, $s_0$ is the frame order of the core target video frame corresponding to the first target video frames, $s_{\min}$ is the frame order of the minimum boundary video frame in which the target image exists in the first sampling range, and $s_{\max}$ is the frame order of the maximum boundary video frame in which the target image exists in the first sampling range;
taking a boundary video frame in the sampling direction as a starting point, sampling in the sampling direction according to a preset sampling quantity, and defining a second sampling range which is not overlapped with the first sampling range.
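The direction rule can be sketched as follows, under the assumption (reconstructed from the boundary-frame definitions) that the second range extends toward the side whose boundary frame lies farther from the core target frame, and toward both sides on a tie:

```python
def second_range_direction(s_core, s_min, s_max):
    """Decide where the second sampling range extends, from the frame
    orders of the core target frame (s_core) and of the minimum/maximum
    boundary frames containing the target (s_min, s_max)."""
    d1 = s_core - s_min      # first frame-order difference
    d2 = s_max - s_core      # second frame-order difference
    if d1 > d2:
        return "decreasing"  # target extends farther toward lower frames
    if d1 < d2:
        return "increasing"  # target extends farther toward higher frames
    return "both"

direction = second_range_direction(s_core=100, s_min=92, s_max=104)
# the lower boundary is 8 frames away versus 4, so sample toward
# decreasing frame order
```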
In order to achieve the above purpose, the invention also provides an image analysis device, which applies the intelligent analysis method of video images, wherein the image analysis device comprises an index control module, a video segmentation module and a processing module which are respectively connected with the index control module in a communication way, and the processing module comprises a plurality of processing units.
In the technical scheme of the invention, the index control module in the image analysis device acquires the video to be analyzed and its analysis target; the video segmentation module segments the video to be analyzed into a plurality of video clips; the index control module establishes an index structure model for each video clip, the index structure model comprising a plurality of processing layers, each containing a plurality of processing nodes, where, between two adjacent processing layers, a processing node of the earlier layer serves as a parent node linked to child nodes in the later layer, thereby constructing the hierarchy and the index relations among the processing nodes of the index structure model. Each processing node is used to place one processing unit, so the index structure model guides the processing units that handle the same video clip to be arranged into a plurality of processing layers, and determines the number of processing units in each processing layer and the link relations between processing units of adjacent layers. The index control module establishes a corresponding sub-control unit for each video clip and allocates a group of processing units to each sub-control unit; the sub-control unit arranges the processing units of its allocated group, according to their load parameters and the index structure model, into a processing unit index structure.
Therefore, when the processing units in each processing unit index structure are ordered by load parameter from small to large, and the analysis tasks of a video clip are distributed to the processing units arranged in that structure, several processing units can process the same video clip simultaneously, and the processing units with low load parameters are used for image processing first, which improves the real-time performance of video image analysis.
Drawings
Fig. 1 is a schematic flow chart of a video image intelligent analysis method according to a first embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the following description, suffixes such as "module", "part" or "unit" used to denote elements are adopted only to facilitate the description of the present invention and have no specific meaning in themselves. Thus, "module", "part" and "unit" may be used interchangeably.
Referring to fig. 1, a first embodiment of the present invention provides an intelligent analysis method for video images, which is applied to an image analysis device, wherein the image analysis device includes an index control module, and a video segmentation module and a processing module which are respectively connected with the index control module in a communication manner, and the processing module includes a plurality of processing units; the method comprises the following steps:
step S10, acquiring a video to be analyzed and an analysis target of the video to be analyzed through an index control module;
step S20, segmenting the video to be analyzed through a video segmentation module to form a plurality of video clips;
Step S30, the index control module establishes an index structure model for each video clip according to the number of video clips formed by segmentation and the number of processing units, wherein the index structure model comprises a plurality of processing layers, each containing a plurality of processing nodes; between two adjacent processing layers, a processing node of the earlier layer serves as a parent node linked to child nodes in the later layer, thereby constructing the hierarchy and the index relations among the processing nodes of the index structure model;
step S40, establishing a sub-control unit for controlling each video segment in the index control module, sending an analysis target and the corresponding video segment to different sub-control units, and distributing a group of processing units for each sub-control unit;
in step S50, each sub-control unit arranges its allocated group of processing units, according to the architecture of the corresponding index structure model and the units' load parameters, into a processing unit index structure, so that fast processing of the video clip is achieved by exploiting the load differences among the processing units in the processing unit index structure.
In the technical scheme of the invention, the index control module in the image analysis device acquires the video to be analyzed and its analysis target; the video segmentation module segments the video to be analyzed into a plurality of video clips; the index control module establishes an index structure model for each video clip, the index structure model comprising a plurality of processing layers, each containing a plurality of processing nodes, where, between two adjacent processing layers, a processing node of the earlier layer serves as a parent node linked to child nodes in the later layer, thereby constructing the hierarchy and the index relations among the processing nodes of the index structure model. Each processing node is used to place one processing unit, so the index structure model guides the processing units that handle the same video clip to be arranged into a plurality of processing layers, and determines the number of processing units in each processing layer and the link relations between processing units of adjacent layers. The index control module establishes a corresponding sub-control unit for each video clip and allocates a group of processing units to each sub-control unit; the sub-control unit arranges the processing units of its allocated group, according to their load parameters and the index structure model, into a processing unit index structure.
Therefore, when the processing units in each processing unit index structure are ordered by load parameter from small to large, and the analysis tasks of a video clip are distributed to the processing units arranged in that structure, several processing units can process the same video clip simultaneously, and the processing units with low load parameters are used for image processing first, which improves the real-time performance of video image analysis.
Specifically, each processing unit is a separate image analysis unit, and the processing module may be an independent computing terminal, or a set of several computing terminals. The kind of image analysis is not limited, and may be, for example, image classification, image recognition, object detection, and object extraction.
The analysis target of the video to be analyzed is input to the image analysis device by the user according to the kind of image analysis. In the present invention, the analysis targets of each video clip may be the same or different. For example, the analysis target may be to extract target a from a first video segment and target B from a second video segment.
The video segmentation module segments the video to be analyzed into a plurality of video clips so that they can be analyzed in parallel, thereby improving the real-time response of image analysis. Specifically, the video to be analyzed can be divided into a plurality of video clips according to its duration or its number of video frames.
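Splitting by frame count, as described above, can be sketched in one function (the half-open range representation of a clip is an assumption for the example):

```python
def split_by_frames(total_frames, clip_frames):
    """Split a video into clips of roughly `clip_frames` frames each,
    returned as (start, end) half-open frame ranges; the final clip may
    be shorter."""
    return [(s, min(s + clip_frames, total_frames))
            for s in range(0, total_frames, clip_frames)]

clips = split_by_frames(250, 100)
# → [(0, 100), (100, 200), (200, 250)]
```

Splitting by duration works the same way with seconds in place of frame counts, and user-defined segmentation (next paragraph) simply supplies the boundaries directly.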
In a specific embodiment, the video segmentation module may further segment the video to be analyzed into a plurality of video segments according to a segmentation instruction input by a user in a user-defined manner. Thus, the length of each video segment may be the same or different.
The index control module builds an index structure model for each video clip and allocates a sub-control unit to each video clip, each sub-control unit controlling a group of processing units. Each sub-control unit arranges its allocated group of processing units according to the order of the processing nodes in the index structure model corresponding to the video clip; this arrangement fixes the order in which the processing units receive image analysis tasks. The processing unit with the smallest load parameter is placed in the first processing layer and receives image analysis tasks first; because its load parameter is small, a first processing result is obtained quickly. According to the first processing result, the sub-control unit distributes further image analysis tasks to the processing units with the next-smallest load parameters and quickly obtains a second processing result; according to the second processing result, it then distributes image analysis tasks to the processing units with slightly larger load parameters, and so on. In the task distribution process, tasks are therefore distributed according to the sequential index relation of the processing units rather than indiscriminately, and each batch of image analysis tasks goes to processing units with similar load parameters, so that the processing results of the units in the same processing layer arrive at similar times and no individual processing unit is left unresponsive for a long time. This improves the real-time performance of video image analysis.
Based on the first embodiment of the intelligent analysis method for video images of the present invention, in the second embodiment of the intelligent analysis method for video images of the present invention, the step S30 includes:
step S31, an index control module establishes an index structure model for each video clip; at this time, the established index structure model is a model with variable parameters including: indexing the number of processing layers of the structure model, the number of processing nodes of each processing layer and the link relation of the processing nodes of the front processing layer and the rear processing layer;
step S32, the index control module obtains the number of video frames corresponding to each video clip, and determines the number of processing nodes of a first processing layer of the index structure model corresponding to the video clip according to the number of video frames;
step S33, according to the number of processing nodes of the first processing layer, the numbers of nodes of the other processing layers in the index structure model are established, wherein the number of nodes of each processing layer increases as the layer level in the index structure model increases;
step S34, determining a corresponding parent node in the last processing layer for the node in each processing layer after the first processing layer in the index structure model;
step S35, corresponding index structure models are built for each video segment according to the number of nodes of each processing layer and the parent nodes corresponding to the nodes in each processing layer after the first processing layer.
When the index control module determines the number of processing nodes of the first processing layer of the index structure model corresponding to a video clip, it does so according to the number of video frames of that clip: the more video frames, the more processing nodes are set for the first processing layer; the fewer video frames, the fewer processing nodes are set.
Meanwhile, the index structure model adopts at least four layers, and the number of processing nodes in each processing layer increases progressively from the first processing layer onward.
In the present invention, the number of nodes of each processing layer in each index structure model is determined by:
$$N_{i,j} \;=\; \prod_{k=1}^{j} c_{i,k} \;+\; r_{i,j}, \qquad 1 \le i \le I,\; 1 \le j \le J_i,$$
wherein $i$ is the sequence number of the index structure model and $j$ is the sequence number of the processing layer in the index structure model; $I$ is the number of index structure models and $J_i$ is the number of processing layers in the $i$-th index structure model; $N_{i,j}$ is the total number of processing nodes of the $j$-th processing layer in the $i$-th index structure model; $N_{i,1}$ is the number of processing nodes of the first processing layer in the $i$-th index structure model; $c_{i,k}$, with $c_{i,k} \ge 2$ for $k \ge 2$, is the number of child nodes that each parent node in the $(k-1)$-th processing layer links to in the $k$-th processing layer of the $i$-th index structure model; $r_{i,j} \ge 0$ is the number of redundant nodes of the $j$-th processing layer in the $i$-th index structure model.
$N_{i,1}$ is determined from $F_i$, the number of video frames of the $i$-th video clip corresponding to the $i$-th index structure model: a mapping relation table, which records the mapping between video frame counts and first-processing-layer node counts, is queried with $F_i$ to obtain $N_{i,1}$.
When $k$ is 1, $c_{i,1}$ is the number of child nodes that each parent node in a notional 0-th processing layer links to in the 1st processing layer of the $i$-th index structure model, that is, the number of processing nodes of the first processing layer, $N_{i,1}$.
When the number of nodes of each processing layer is first calculated, the corresponding number of redundant nodes can be set to 0. After the number of processing nodes of each layer has been calculated, the number of processing layers of the index structure model can be determined, because the number of processing units allocated to each video clip is fixed; once the number of layers is determined, the leftover processing nodes, too few to form a further processing layer, can be allocated to one or more existing processing layers as redundant nodes.
Based on the second embodiment of the intelligent analysis method for video images of the present invention, in a third embodiment of the intelligent analysis method for video images of the present invention, the step S40 of establishing a sub-control unit for controlling each video clip at the index control module and sending the analysis target and the corresponding video clips to different sub-control units includes:
Step S41, establishing a sub-control unit for controlling each video clip according to the number of the video clips in the index control module;
step S42, establishing a one-to-one correspondence between video clips and sub-control units;
and step S43, according to the one-to-one correspondence, the analysis targets and the video clips are sent to the corresponding sub-control units.
Based on the third embodiment of the intelligent analysis method for video image of the present invention, in a fourth embodiment of the intelligent analysis method for video image of the present invention, the step S50 includes:
step S51, the sub-control unit detects the load parameters of each processing unit in the distributed group of processing units, and the processing units are arranged to form a processing unit sequence according to the order from small to large of the load parameters;
step S52, according to the processing unit sequence and the node order of the index structure model, arranging each processing unit in the processing unit sequence at each node position in turn, thereby forming the processing unit index structure;
step S53, each video frame in the video clips distributed by the sub-control unit is distributed to the processing units of each processing layer in the processing unit index structure, and quick processing is performed.
The load parameter represents the response rate of each processing unit and may be determined by detecting the response rate of each processing unit over a period of time.
The index structure model comprises a plurality of processing layers, each processing layer comprises a plurality of processing nodes, and each processing node is provided with different processing node serial numbers according to the sequence of the processing layer and the position of each processing layer.
And sequentially extracting processing units from the processing unit sequence, wherein the processing units are arranged at the positions of corresponding processing nodes of the index structure model, so that the processing units with small load parameters are arranged in a front processing layer, and the processing units with large load parameters are arranged in a rear processing layer.
The hierarchy of the processing layers determines the order in which the processing units are assigned image analysis tasks. After a processing unit with a small load parameter is assigned an image analysis task, it can obtain the image analysis result more quickly, form the next image analysis task, and pass it to the processing units arranged in the subsequent processing layers; during this processing time, a processing unit with a large load parameter is left a certain waiting time in which to finish its previous processing tasks.
The load parameter may be a load rate, and may also include the number of waiting tasks in the processing queue and the completion progress of the processing queue.
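A minimal sketch of this load-ordered arrangement (the function names and the dict-based load representation are assumptions for illustration):

```python
def build_processing_index(index_layers, units):
    """Arrange processing units into an index structure by load.

    index_layers: node count per processing layer, front to back.
    units: dict of unit_id -> load parameter (e.g. measured response
    time). Lighter-loaded units are placed in earlier layers so the
    first sampling/extraction steps start as fast as possible.
    """
    assert sum(index_layers) <= len(units), "not enough processing units"
    # sort unit ids by ascending load parameter
    ordered = sorted(units, key=units.get)
    structure, pos = [], 0
    for count in index_layers:
        structure.append(ordered[pos:pos + count])
        pos += count
    return structure
```

With layers of 1 and 2 nodes and loads `{"a": 5, "b": 1, "c": 3}`, unit `b` (lightest load) lands in the first layer and `c`, `a` fill the second.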
Based on the fourth embodiment of the intelligent analysis method for video image of the present invention, in a fifth embodiment of the intelligent analysis method for video image of the present invention, the step S53 includes:
step S531, determining a sampling interval according to the number of video frames in the video clips distributed by the sub-control units and the number of processing units of the first processing layer in the processing unit index structure corresponding to the sub-control units;
step S532, extracting sampled video frames from the video clips according to the sampling interval by adopting a processing unit of a first processing layer in a processing unit index structure corresponding to the sub-control unit;
step S533, the processing unit of the first processing layer extracts a target image from the sampled video frames, so as to take the sampled video frame with the target image as a core target video frame;
step S534, distributing the first target video frames in the first sampling range of the preset sampling number before and after the core target video frame to a second layer processing unit in the processing unit index structure to extract the target image;
step S535, defining a second sampling range which is not overlapped with the first sampling range according to the boundary video frames of the target image, the core target video frames corresponding to the first target video frames and the preset sampling quantity in each first sampling range;
Step S536, distributing the second target video frame in the second sampling range to the third layer processing unit to extract the target image;
step S537, judging whether a target image exists in the second target video frame;
if yes, go to step S538: according to the boundary video frames of the target image in each second target video frame and the preset sampling quantity, a third sampling range which is not overlapped with the second sampling range and the first target video frame is defined; distributing a third target video frame in a third sampling range to a fourth layer processing unit to extract a target image;
if not, go to step S539: ending the extraction of the target image of the round;
in step S5310, image analysis results are formed from all the target images extracted by the processing unit index structure.
Further, the first, second, and third sampling ranges, taken together, should cover a sampling interval. If a sampling interval cannot be covered initially, the preset sampling number needs to be increased.
It is easy to understand that a video is formed by a large number of logically related pictures connected in sequence at a set time interval. Therefore, video image analysis requires that the target image first be extracted from the video (i.e., the image is extracted from each video frame of the video), and then the extracted images are analyzed.
For example, the video image analysis may be to extract a video frame having a certain target from a video recording, and then analyze a period of occurrence of the target according to all the extracted video frames.
The embodiment is used for determining a specific scheme for rapidly extracting the target image from the video clip to be analyzed for image analysis.
Specifically, in this embodiment, the first-layer processing units in the processing unit index structure serve as the first batch of sampling units. A video clip consists of a number of consecutive video frames, from which the first processing layer samples one video frame per sampling interval. For example, the frame numbers of the video frames to be sampled in the video clip are calculated at the sampling interval, and those frame numbers are then distributed among the processing units of the first processing layer. Each processing unit of the first processing layer extracts one video frame from the video clip according to its assigned frame number, and then analyzes the extracted video frame to determine whether a target image can be extracted.
It is easy to understand that, assuming the video to be analyzed is a section of monitoring footage from which a target vehicle is to be extracted, the target vehicle may stay for a period of time or be absent for a period of time within the shooting period corresponding to the footage; it may also stay in the picture throughout, or never appear in it at all.
In this embodiment, after the video to be analyzed is divided into a plurality of video clips, the first processing layer in the processing unit index structure corresponding to each video clip extracts sampled video frames from the clip according to the sampling interval. The extracted sampled video frames are spaced apart in time, so the target image may be present in some sampled video frames and absent from others. It is easy to understand that a target image appearing in one of a series of consecutive pictures is unlikely to exist in only that single picture; it usually persists across several pictures. Therefore, the video frames immediately before and after (in frame order) a sampled video frame that contains the target image have a high probability of also containing it, while the video frames before and after a sampled video frame that does not contain the target image have a high probability of not containing it either.
Thus, after the first processing layer in the processing unit index structure has sampled the video frames and extracted target images, the sampled video frames in which the target image exists are taken as core target video frames. Since the probability of the target image appearing in the video frames around a core target video frame is greater, the sub-control unit extracts the preset sampling number of video frames before the frame order of each core target video frame, and the preset sampling number of video frames after it, as the first target video frames, and distributes the first target video frames to the second-layer processing nodes.
Specifically, when the first target video frame is allocated to the second layer processing unit, the following manner is adopted:
and the processing unit extracting the target image in the first processing layer distributes the first target video frames extracted before and after the core target video frame to the child nodes in the second processing layer.
The sub-control unit re-determines a sampling interval (e.g., sampling interval decreases) according to a video frame range corresponding to the sampled video frame from which the target image is not extracted, and re-extracts the sampled video frame from the video clip according to the re-determined sampling interval using the processing unit of the first processing layer. If a new core target video frame is found in the resampling process, the first target video frame corresponding to the new core target video frame is also distributed to the sub-nodes in the second layer processing unit.
Since the first target video frames are sampled according to the preset sampling number from the video frames located before and after the core target video frame, the first target video frame corresponding to one core target video frame is a continuous video frame (except the core target video frame) and corresponds to the first sampling range.
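The first-layer sampling and the construction of first sampling ranges around core target video frames might be sketched as follows; frame numbering from 0, the function names, and the clamping at clip boundaries are illustrative assumptions:

```python
def sample_frame_numbers(n_frames, interval):
    """Frame numbers the first processing layer samples from a clip."""
    return list(range(0, n_frames, interval))

def first_sampling_ranges(core_frames, n_frames, preset):
    """For each core target frame (a sampled frame where the target
    image was found), take `preset` frames before and after it as the
    first target video frames (the core frame itself excluded)."""
    ranges = {}
    for c in core_frames:
        lo, hi = max(0, c - preset), min(n_frames - 1, c + preset)
        ranges[c] = [f for f in range(lo, hi + 1) if f != c]
    return ranges
```

For a 10-frame clip with interval 4, frames 0, 4, 8 are sampled; if frame 4 is a core target frame and the preset number is 2, its first target video frames are 2, 3, 5, 6.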
The specific way of defining the second sampling range is as follows:
determining boundary video frames with target images in a first sampling range, wherein the boundary video frames refer to video frames with minimum frame sequence and video frames with maximum frame sequence with the target images in the first sampling range;
determining a first frame-order difference between the core target video frame corresponding to the first target video frames and the minimum boundary video frame, and a second frame-order difference between the maximum boundary video frame and the core target video frame;
determining the sampling direction of the second sampling range according to the magnitude relation between the difference of the first frame sequence and the difference of the second frame sequence, and specifically:
when $d_1 > d_2$, the sampling direction of the second sampling range is the direction of decreasing frame order;

when $d_1 < d_2$, the sampling direction of the second sampling range is the direction of increasing frame order;

when $d_1 = d_2$, the sampling directions of the second sampling range are both the direction of decreasing frame order and the direction of increasing frame order;

wherein $d_1 = s_c - s_{\min}$ is the first frame-order difference, $d_2 = s_{\max} - s_c$ is the second frame-order difference, $s_c$ is the frame order of the core target video frame corresponding to the first target video frames, $s_{\min}$ is the frame order of the minimum boundary video frame in which the target image exists in the first sampling range, and $s_{\max}$ is the frame order of the maximum boundary video frame in which the target image exists in the first sampling range;
taking a boundary video frame in the sampling direction as a starting point, sampling in the sampling direction according to a preset sampling quantity, and defining a second sampling range which is not overlapped with the first sampling range.
Wherein, when the first frame-order difference is larger or smaller than the second frame-order difference, the sampling number of the second sampling range is the preset sampling number; when the first frame-order difference equals the second frame-order difference, the sampling numbers of the second sampling range in the two directions are each half of the preset sampling number, so that the total sampling number remains the preset sampling number.
For example, when the sampling direction is sampling in a direction increasing toward the frame order, the video frame of the preset number of samples is extracted in the direction increasing toward the frame order with the largest boundary video frame in which the target image exists in the first sampling range as the start point, and the video frame overlapping with the first sampling range is removed, thereby obtaining the second sampling range.
Each second-layer processing unit that was allocated first target video frames corresponds to a second sampling range; it therefore allocates the second target video frames, that is, the video frames in the part of its second sampling range that does not overlap the first target video frames, to the corresponding child nodes in the third processing layer.
And distributing the second target video frames in the second sampling range to a third layer processing unit to extract target images.
The third sampling range is determined by analogy with the method of the second sampling range.
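A sketch of the second-sampling-range selection under the rules above (integer frame orders; the function name and the edge handling at frame 0 are assumptions):

```python
def second_sampling_range(core, s_min, s_max, preset, first_range):
    """Pick the second sampling range next to a first sampling range.

    core: frame order of the core target video frame; s_min / s_max:
    minimum / maximum boundary frames in the first range where the
    target image was still found; preset: preset sampling number.
    Frames already in first_range (or the core frame) are excluded,
    per the non-overlap requirement.
    """
    d1, d2 = core - s_min, s_max - core
    candidates = []
    if d1 >= d2:   # target extends further toward smaller frame orders
        n = preset if d1 > d2 else preset // 2
        candidates += range(s_min - n, s_min)
    if d1 <= d2:   # target extends further toward larger frame orders
        n = preset if d2 > d1 else preset - preset // 2
        candidates += range(s_max + 1, s_max + 1 + n)
    excluded = set(first_range) | {core}
    return [f for f in candidates if f >= 0 and f not in excluded]
```

For example, with core frame 10, boundary frames 8 and 11, preset 4, and a first range of frames 6-14 (core excluded), the direction is decreasing frame order; frames 4-7 are drawn from boundary frame 8, and removing the overlap with the first range leaves frames 4 and 5.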
In this embodiment, each processing layer is used to find, within a video clip, the video passages in which the target image exists. In the search process, the video frames without the target image are filtered out by means of the boundary video frames in which the target image exists, which reduces the task amount of image analysis and improves its real-time performance. Thus, not every video frame needs to be analyzed; instead, the video frames in which the target image is present can be screened out for image analysis.
Based on the fifth embodiment of the intelligent analysis method for video image of the present invention, in a sixth embodiment of the intelligent analysis method for video image of the present invention, the step S531 includes:
Step S531a, obtaining a preset maximum sampling interval;
step S531b, determining a temporary sampling interval according to the number of video frames in the video clips distributed by the sub-control units and the number of processing units of the first processing layer in the processing unit index structure corresponding to the sub-control units;
step S531c, judging whether the temporary sampling interval does not exceed the maximum sampling interval;
if yes, step S531d is executed: determining the temporary sampling interval as a sampling interval so that a processing unit of the first processing layer extracts all sampled video frames through one sampling;
if not, go to step S531e: the maximum sampling interval is taken as the sampling interval, so that the processing unit of the first processing layer extracts all sampled video frames through multiple sampling.
Specifically, the sampling interval is determined as follows:
when $\lceil F_i / P_i \rceil \le T_{\max}$, $T_i = \lceil F_i / P_i \rceil$;

when $\lceil F_i / P_i \rceil > T_{\max}$, $T_i = T_{\max}$;

wherein $T_i$ is the sampling interval of the $i$th video clip, $\lceil F_i / P_i \rceil$ is the temporary sampling interval, $T_{\max}$ is the maximum sampling interval, $F_i$ is the number of video frames in the $i$th video clip, and $P_i$ is the number of processing units of the first processing layer in the $i$th processing unit index structure.
It will be readily appreciated that when the temporary sampling interval exceeds the maximum sampling interval, after all of the processing units of the first processing layer complete the first sampling, at least a portion of the processing units need to continue the second sampling to complete all of the sampling.
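A sketch of this interval determination, assuming the temporary interval is computed as the ceiling of frames per first-layer processing unit (the names and the rounds helper are illustrative):

```python
import math

def sampling_interval(n_frames, n_units, max_interval):
    """Sampling interval for one video clip: the temporary interval
    ceil(F/P), capped at the preset maximum sampling interval."""
    tentative = math.ceil(n_frames / n_units)
    return min(tentative, max_interval)

def sampling_rounds(n_frames, n_units, interval):
    """How many sampling rounds the first layer needs: one round when
    the temporary interval was used, several when the cap applied."""
    sampled = math.ceil(n_frames / interval)
    return math.ceil(sampled / n_units)
```

For 1000 frames, 10 first-layer units, and a maximum interval of 25, the temporary interval 100 exceeds the cap, so the interval is 25; the 40 sampled frames then require 4 sampling rounds.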
In a seventh embodiment of the intelligent video image analysis method according to the present invention, the step S5310 includes:
step S5310a, obtaining video frame numbers corresponding to all extracted target images;
step S5310b, obtaining a time identification point corresponding to the video frame number;
step S5310c, obtaining the positions of all the extracted target images in the video frame;
in step S5310d, an image analysis result is formed according to the time identification point corresponding to the video frame number and the corresponding position of the target image.
In a video image, each frame corresponds to a point in a time sequence, and the image analysis result of this embodiment can be obtained in various situations. For example, in a video recording, each video frame corresponds to a recording time point. From the frame numbers of the video frames corresponding to the extracted target images, the corresponding time identification points can be obtained; the time period during which the target appears can be determined from these time identification points, and the position of the target can be determined from the position of the target image within the video frame, so that the monitoring result of the target object can be determined.
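A sketch of forming the analysis result from the extracted target images, assuming a constant frame rate maps frame numbers to time identification points (the result structure is illustrative):

```python
def image_analysis_result(detections, fps, start_time=0.0):
    """Combine per-frame detections into an analysis result.

    detections: dict of frame_number -> (x, y) position of the target
    image in that frame; fps and start_time map a frame number to its
    time identification point (a constant frame rate is assumed).
    """
    records = sorted(
        (frame, start_time + frame / fps, pos)
        for frame, pos in detections.items()
    )
    if not records:
        return {"present": False, "records": []}
    return {
        "present": True,
        "first_seen": records[0][1],   # start of the target's time period
        "last_seen": records[-1][1],   # end of the target's time period
        "records": records,            # (frame number, time point, position)
    }
```

For detections at frames 25 and 50 of a 25 fps recording, the target's time period spans 1.0 s to 2.0 s.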
In order to achieve the above purpose, the invention also provides an image analysis device, which applies the intelligent analysis method of video images, wherein the image analysis device comprises an index control module, a video segmentation module and a processing module which are respectively connected with the index control module in a communication way, and the processing module comprises a plurality of processing units.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present invention may be embodied essentially, or in the part contributing to the prior art, in the form of a software product stored in a computer-readable storage medium (e.g., ROM/RAM, magnetic disk, optical disk), comprising instructions for causing a terminal device to perform the method according to the embodiments of the present invention.
In the description of the present specification, descriptions of terms "one embodiment," "another embodiment," "other embodiments," or "first embodiment through X-th embodiment," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, method steps or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (7)

1. The intelligent analysis method for the video image is characterized by being applied to an image analysis device, wherein the image analysis device comprises an index control module, a video segmentation module and a processing module which are respectively in communication connection with the index control module, and the processing module comprises a plurality of processing units; the method comprises the following steps:
Acquiring a video to be analyzed and an analysis target of the video to be analyzed through an index control module;
splitting the video to be analyzed through a video splitting module to form a plurality of video clips;
the index control module establishes an index structure model for each video segment according to the number of the video segments and the number of the processing units formed by segmentation, wherein the index structure model comprises a plurality of processing layers, each processing layer comprises a plurality of processing nodes, in two adjacent processing layers, the processing node of the previous processing layer is provided with a parent node, and the parent node is linked with a child node in the next processing layer so as to construct a hierarchy and an index relation among the processing nodes in the index structure model;
establishing sub-control units for controlling each video clip in the index control module, sending the analysis target and the corresponding video clip to different sub-control units, and distributing a group of processing units for each sub-control unit;
each sub control unit arranges the processing units according to the index structure model to form a processing unit index structure according to the framework of the corresponding index structure model and the distributed load parameters of a group of processing units, so that the quick processing of the video clips is completed by utilizing the load parameter difference of the processing units in the processing unit index structure;
The index control module establishes an index structure model for each video clip according to the number of video clips formed by segmentation and the number of processing units, and the index structure model comprises the following steps:
the index control module establishes an index structure model for each video segment;
the index control module acquires the number of video frames corresponding to each video segment, and determines the number of processing nodes of a first processing layer of the index structure model corresponding to the video segment according to the number of video frames;
according to the number of processing nodes of the first processing layer, the number of nodes of other processing layers in the index structure model is established, wherein as the level in the index structure model is increased, the number of nodes corresponding to each processing layer is increased;
determining a corresponding parent node in a previous processing layer for a node in each processing layer after the first processing layer in the index structure model;
establishing a corresponding index structure model for each video segment according to the number of nodes of each processing layer and the parent node corresponding to the node in each processing layer after the first processing layer;
each sub-control unit arranges the processing units according to the index structure model to form a processing unit index structure according to the load parameters of a group of allocated processing units according to the architecture of the corresponding index structure model, so as to complete the rapid processing of the video clips by utilizing the load parameter differences of the processing units in the processing unit index structure, and the method comprises the following steps:
The sub-control unit detects the load parameters of each processing unit in the distributed group of processing units, and the processing units are arranged to form a processing unit sequence according to the sequence from small to large of the load parameters;
according to the processing unit sequence and the node order of the index structure model, arranging each processing unit in the processing unit sequence at each node position in turn, thereby forming the processing unit index structure;
each video frame in the video clips distributed by the sub-control units is distributed to the processing units of each processing layer in the processing unit index structure, and rapid processing is carried out;
the analysis target is to extract a target image from the video image; the step of allocating each video frame in the video clips allocated by the sub-control unit to the processing unit of each processing layer in the processing unit index structure to perform rapid processing includes:
determining a sampling interval according to the number of video frames in the video clips distributed by the sub-control units and the number of processing units of the first processing layer in the processing unit index structure corresponding to the sub-control units;
extracting sampling video frames from the video clips according to sampling intervals by adopting a processing unit of a first processing layer in a processing unit index structure corresponding to the sub-control unit;
The processing unit of the first processing layer extracts a target image from the sampled video frames, and takes the sampled video frames with the target image as core target video frames;
distributing first target video frames in a first sampling range of a preset sampling number positioned before and after a core target video frame to a second layer of processing units in the processing unit index structure to extract a target image;
defining a second sampling range which is not overlapped with the first sampling range according to boundary video frames with target images in each first sampling range, core target video frames corresponding to the first target video frames and preset sampling quantity, wherein the boundary video frames refer to video frames with minimum frame sequence and video frames with maximum frame sequence of the target images in the first sampling range;
distributing a second target video frame in a second sampling range to a third layer processing unit to extract a target image;
judging whether a target image exists in the second target video frame or not;
if so, according to the boundary video frames of the target image in each second target video frame and the preset sampling quantity, a third sampling range which is not overlapped with the second sampling range and the first target video frame is defined; distributing a third target video frame in a third sampling range to a fourth layer processing unit to extract a target image;
If not, ending the extraction of the target image of the round;
and forming an image analysis result according to all the target images extracted by the processing unit index structure.
2. The intelligent analysis method of video images according to claim 1, wherein the step of establishing a sub-control unit for controlling each video clip at the index control module and transmitting the analysis target and the corresponding video clip to different sub-control units comprises:
establishing a sub-control unit for controlling each video clip according to the number of the video clips in the index control module;
establishing a one-to-one correspondence between video clips and sub-control units;
and sending the analysis targets and the video clips to the corresponding sub-control units according to the one-to-one correspondence.
3. The intelligent analysis method according to claim 1, wherein the step of determining the sampling interval according to the number of video frames in the video clips allocated to the sub-control units and the number of processing units of the first processing layer in the processing unit index structure corresponding to the sub-control units comprises:
acquiring a preset maximum sampling interval;
determining a temporary sampling interval according to the number of video frames in the video clips distributed by the sub-control unit and the number of processing units of the first processing layer in the processing unit index structure corresponding to the sub-control unit;
Judging whether the temporary sampling interval does not exceed the maximum sampling interval;
if yes, determining the temporary sampling interval as a sampling interval, so that a processing unit of the first processing layer extracts all sampled video frames through one-time sampling;
if not, taking the maximum sampling interval as the sampling interval, so that the processing unit of the first processing layer extracts all the sampled video frames through multiple times of sampling.
4. The intelligent analysis method according to claim 1, wherein the step of forming the image analysis result according to all the target images extracted from the processing unit index structure comprises:
acquiring video frame numbers corresponding to all extracted target images;
acquiring a time identification point corresponding to a video frame sequence number;
acquiring the positions of all extracted target images in a video frame;
and forming an image analysis result according to the time identification point corresponding to the video frame number of the target image and the corresponding position.
5. The intelligent analysis method of video images according to claim 1, wherein the number of nodes of each processing layer in each index structure model is determined by:

N(i,j) = n(i,1) × c(i,2) × c(i,3) × … × c(i,j) + r(i,j)

wherein i is the sequence number of the index structure model, j is the sequence number of the processing layer in the index structure model, I is the number of index structure models, 1 ≤ i ≤ I, 1 ≤ j ≤ J(i), and J(i) is the number of processing layers in the i-th index structure model; N(i,j) is the total number of processing nodes of the j-th processing layer of the i-th index structure model; n(i,1) is the number of processing nodes of the first processing layer in the i-th index structure model; c(i,k) is the number of child nodes to which each parent node in the (k-1)-th processing layer links in the k-th processing layer of the i-th index structure model, 2 ≤ k ≤ j, the product being empty when j = 1; r(i,j) is the number of redundant nodes of the j-th processing layer in the i-th index structure model;
n(i,1) is determined according to F(i), where F(i) is the video frame number of the i-th video segment corresponding to the i-th index structure model, by querying a mapping relation table with the video frame number of the i-th video segment, the mapping relation table recording the mapping relation between video frame numbers and the node number of the first processing layer.
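Since the claim's formula rendering is lost, the variable definitions imply a layer count of the form N(i,j) = n(i,1)·∏c(i,k) + r(i,j); the sketch below computes it under that reconstruction, with the dict-based branching and redundancy representation being an assumption:

```python
from math import prod

def layer_node_count(n_first, branching, redundancy, j):
    """Total processing nodes of layer j (1-based) in one index structure
    model: first-layer node count times the chain of per-layer fan-outs,
    plus the layer's redundant nodes.

    branching[k]: child nodes each parent in layer k-1 links to in layer k
    redundancy[j]: redundant-node count of layer j
    """
    return n_first * prod(branching[k] for k in range(2, j + 1)) + redundancy[j]

branching = {2: 3, 3: 2}          # hypothetical fan-outs
redundancy = {1: 0, 2: 1, 3: 2}   # hypothetical redundant nodes
print(layer_node_count(4, branching, redundancy, 3))  # 4*3*2 + 2 = 26
```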
6. The intelligent analysis method of video images according to claim 3, wherein the second sampling range is defined in the following manner:
determining the boundary video frames within the first sampling range in which the target image exists;
determining a first frame-order difference between the core target video frame corresponding to the first target video frame and the minimum boundary video frame, and a second frame-order difference between the maximum boundary video frame and the core target video frame;
determining the sampling direction of the second sampling range according to the magnitude relation between the first frame-order difference and the second frame-order difference, specifically:
when Δ1 > Δ2, the sampling direction of the second sampling range is the direction of decreasing frame order;
when Δ1 < Δ2, the sampling direction of the second sampling range is the direction of increasing frame order;
when Δ1 = Δ2, the sampling directions of the second sampling range are both the direction of decreasing frame order and the direction of increasing frame order;
wherein Δ1 = f_c − f_min is the first frame-order difference, Δ2 = f_max − f_c is the second frame-order difference, f_c is the frame order of the core target video frame corresponding to the first target video frame, f_min is the frame order of the minimum boundary video frame in which the target image exists in the first sampling range, and f_max is the frame order of the maximum boundary video frame in which the target image exists in the first sampling range;
taking the boundary video frame in the sampling direction as a starting point and sampling in that direction according to a preset sampling quantity, thereby defining a second sampling range that does not overlap the first sampling range.
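A sketch of the direction choice in claim 6; because the original inequalities did not survive extraction, the mapping from the larger frame-order difference to the decreasing direction is an assumption:

```python
def second_range_direction(f_core, f_min, f_max):
    """Choose the sampling direction(s) for the second range from the two
    frame-order differences: d1 = f_core - f_min, d2 = f_max - f_core.
    Equal differences sample in both directions."""
    d1, d2 = f_core - f_min, f_max - f_core
    if d1 > d2:
        return ["decreasing"]
    if d1 < d2:
        return ["increasing"]
    return ["decreasing", "increasing"]

print(second_range_direction(f_core=50, f_min=40, f_max=55))  # ['decreasing']
```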
7. An image analysis device, characterized in that it applies the intelligent analysis method of video images according to any one of claims 1 to 6, the image analysis device comprising an index control module, and a video segmentation module and a processing module each communicatively connected to the index control module, the processing module comprising a plurality of processing units.
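The module topology of claim 7 can be sketched as follows; the class names, the unit count of four, and the modulo dispatch policy are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessingUnit:
    unit_id: int

@dataclass
class ImageAnalysisDevice:
    """Claim-7 layout: the video segmentation module and the processing
    module each communicate with the index control module; the processing
    module holds several processing units."""
    processing_units: List[ProcessingUnit] = field(
        default_factory=lambda: [ProcessingUnit(i) for i in range(4)])

    def dispatch(self, segment_id: int) -> ProcessingUnit:
        # index control module routes a video segment to a processing unit
        return self.processing_units[segment_id % len(self.processing_units)]

dev = ImageAnalysisDevice()
print(dev.dispatch(6).unit_id)  # 6 % 4 = 2
```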
CN202311466662.3A 2023-11-07 2023-11-07 Intelligent analysis method and device for video image Active CN117201873B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311466662.3A CN117201873B (en) 2023-11-07 2023-11-07 Intelligent analysis method and device for video image

Publications (2)

Publication Number Publication Date
CN117201873A (en) 2023-12-08
CN117201873B true CN117201873B (en) 2024-01-02

Family

ID=89003828

Country Status (1)

Country Link
CN (1) CN117201873B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11288424A (en) * 1998-02-03 1999-10-19 Jisedai Joho Hoso System Kenkyusho:Kk Recording medium for recording video index information and method for managing video information using video index information and recording medium for recording voice index information and method for managing voice information using voice index information
US6771875B1 (en) * 1998-09-03 2004-08-03 Ricoh Company Ltd. Recording medium with video index information recorded therein video information management method which uses the video index information recording medium with audio index information recorded therein audio information management method which uses the audio index information and a video retrieval system
US7287180B1 (en) * 2003-03-20 2007-10-23 Info Value Computing, Inc. Hardware independent hierarchical cluster of heterogeneous media servers using a hierarchical command beat protocol to synchronize distributed parallel computing systems and employing a virtual dynamic network topology for distributed parallel computing system
KR20080078217A (en) * 2007-02-22 2008-08-27 정태우 Method for indexing object in video, method for annexed service using index of object and apparatus for processing video
CN102402612A (en) * 2011-12-20 2012-04-04 广州中长康达信息技术有限公司 Video semantic gateway
CA2824330A1 (en) * 2011-01-12 2012-07-19 Videonetics Technology Private Limited An integrated intelligent server based system and method/systems adapted to facilitate fail-safe integration and/or optimized utilization of various sensory inputs
CN110324672A (en) * 2019-05-30 2019-10-11 腾讯科技(深圳)有限公司 A kind of video data handling procedure, device, system and medium
CN111950393A (en) * 2020-07-24 2020-11-17 杭州电子科技大学 Time sequence action fragment segmentation method based on boundary search agent
CN113343029A (en) * 2021-06-18 2021-09-03 中国科学技术大学 Social relationship enhanced complex video character retrieval method
CN114363721A (en) * 2022-01-19 2022-04-15 平安国际智慧城市科技股份有限公司 HLS-based video playing method, device, equipment and storage medium
CN116489449A (en) * 2023-04-04 2023-07-25 清华大学 Video redundancy fragment detection method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7697026B2 (en) * 2004-03-16 2010-04-13 3Vr Security, Inc. Pipeline architecture for analyzing multiple video streams
US9596447B2 (en) * 2010-07-21 2017-03-14 Qualcomm Incorporated Providing frame packing type information for video coding


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant