CN111611433A - Video processing method, device, equipment and medium - Google Patents

Video processing method, device, equipment and medium

Publication number
CN111611433A
CN111611433A (application CN202010442760.3A; granted as CN111611433B)
Authority
CN
China
Prior art keywords
video
detection block
frame
time
dynamic detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010442760.3A
Other languages
Chinese (zh)
Other versions
CN111611433B (en)
Inventor
尤兰婷
陈明珠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010442760.3A priority Critical patent/CN111611433B/en
Publication of CN111611433A publication Critical patent/CN111611433A/en
Application granted granted Critical
Publication of CN111611433B publication Critical patent/CN111611433B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/732Query formulation
    • G06F16/7328Query by example, e.g. a complete video frame or video sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/71Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The invention discloses a video processing method, apparatus, device and medium, which address two problems in the prior art: the process of determining a motion detection video is cumbersome, and the target video segments stored as a motion detection video occupy a large amount of storage space. In an embodiment of the invention, at least two pre-stored motion detection blocks of a video to be analyzed are obtained, each motion detection block recording the time of at least one motion detection frame. When behavior analysis is performed, a corresponding target video segment is determined according to the maximum time and the minimum time contained in each motion detection block, and behavior analysis is performed on the moving objects in each target video segment. Because only the times of motion detection frames are recorded in a motion detection block, its data volume is small, so the storage space needed to store motion detection blocks is small; in addition, storing the times of motion detection frames in a motion detection block requires no processing such as re-encoding, so motion detection blocks are generated efficiently.

Description

Video processing method, device, equipment and medium
Technical Field
The present invention relates to the field of video processing, and in particular, to a method, an apparatus, a device, and a medium for processing a video.
Background
With the development of informatization, video surveillance has become increasingly widespread and generates ever more video information. This massive amount of video information not only requires a large amount of storage space, but also makes it difficult for staff to subsequently look up relevant footage, consuming considerable time and labor. Therefore, how to process video effectively and conveniently has attracted much attention in recent years.
In the related art, after an analysis instruction for performing behavior analysis on a video to be analyzed is received, a pre-stored motion detection video corresponding to the video to be analyzed is determined. The motion detection video is generated in advance by re-encoding the motion detection frames, i.e., the frames of the video to be analyzed that contain motion detection data, and behavior analysis is performed on the moving objects contained in the motion detection video by traversing it.
Because this method must spend time and resources re-encoding the motion detection frames containing motion detection data into a motion detection video before the motion detection video can be determined, the process of determining the motion detection video is cumbersome. Moreover, the motion detection video must be stored before analysis, and because it contains a large amount of data it occupies a very large storage space.
Disclosure of Invention
The embodiment of the invention provides a video processing method, apparatus, device and medium, which are used to solve the problems that the existing process of determining a motion detection video is cumbersome and that storing the motion detection video occupies a large amount of storage space.
The embodiment of the invention provides a video processing method, which comprises the following steps:
acquiring at least two pre-stored motion detection blocks of a video to be analyzed, wherein each motion detection block contains time information of at least two motion detection frames in the video to be analyzed, and the times contained in the respective motion detection blocks are different;
and determining corresponding target video segments according to the maximum time and the minimum time contained in each motion detection block, respectively, and performing behavior analysis on the moving objects in each target video segment.
Further, the at least two motion detection blocks are determined as follows:
if a motion detection frame containing motion detection data in the video to be analyzed is received, acquiring the time information of the motion detection frame in the video to be analyzed;
and when it is determined that a preset encapsulation requirement is met, encapsulating the acquired time information into a motion detection block.
Further, determining that the preset encapsulation requirement is met includes:
determining that the preset encapsulation requirement is met if a set amount of time information has been acquired; and/or
determining that the preset encapsulation requirement is met if the current moment is an encapsulation moment.
Further, after the acquired time information is encapsulated into the motion detection block, the method further includes:
judging whether the motion detection block contains at least two pieces of time information;
and if not, discarding the motion detection block.
Further, after the acquired time information is encapsulated into the motion detection block, the method further includes:
storing the motion detection block, determining a position offset according to the storage position of the motion detection block and the storage position of the previous motion detection block of the video to be analyzed, and storing the position offset and the data length of the motion detection block into the previous motion detection block, so that the storage position of the motion detection block can be found from the position offset and data length stored in the previous motion detection block.
Further, determining the corresponding target video segment according to the maximum time and the minimum time contained in each motion detection block includes:
for each motion detection block, acquiring the first video frame corresponding to the minimum time and the second video frame corresponding to the maximum time contained in the motion detection block; taking a first target video frame a first set number of frames before the first video frame as the starting frame of the target video segment corresponding to the motion detection block; taking a second target video frame a second set number of frames after the second video frame as the ending frame of the target video segment corresponding to the motion detection block; and determining the target video segment corresponding to the motion detection block from the video frames of the video to be analyzed between the starting frame and the ending frame.
Further, the time is the timestamp at which the corresponding motion detection frame was collected, or the relative time of the corresponding motion detection frame within the video to be analyzed.
An embodiment of the present invention further provides a video processing apparatus, where the apparatus includes:
an acquisition module, configured to acquire at least two pre-stored motion detection blocks of a video to be analyzed, wherein each motion detection block contains time information of at least two motion detection frames in the video to be analyzed, and the times contained in the respective motion detection blocks are different;
and a processing module, configured to determine corresponding target video segments according to the maximum time and the minimum time contained in each motion detection block, respectively, and to perform behavior analysis on the moving objects in each target video segment.
Further, the apparatus further comprises:
a preprocessing module, configured to acquire the time information of a motion detection frame in the video to be analyzed if the motion detection frame containing motion detection data is received, and to encapsulate the acquired time information into a motion detection block when it is determined that a preset encapsulation requirement is met.
Further, the preprocessing module is configured to determine that the preset encapsulation requirement is met if a set amount of time information has been acquired; and/or to determine that the preset encapsulation requirement is met if the current moment is an encapsulation moment.
Further, the preprocessing module is further configured to judge whether the motion detection block contains at least two pieces of time information, and if not, to discard the motion detection block.
Further, the preprocessing module is further configured to store the motion detection block, determine a position offset according to the storage position of the motion detection block and the storage position of the previously stored motion detection block of the video to be analyzed, and store the position offset and the data length of the motion detection block into the previous motion detection block, so that the storage position of the motion detection block can be found from the position offset and data length stored in the previous motion detection block.
Further, the processing module is configured to, for each motion detection block, acquire the first video frame corresponding to the minimum time and the second video frame corresponding to the maximum time contained in the motion detection block; take a first target video frame a first set number of frames before the first video frame as the starting frame of the target video segment corresponding to the motion detection block; take a second target video frame a second set number of frames after the second video frame as the ending frame of the target video segment corresponding to the motion detection block; and determine the target video segment corresponding to the motion detection block from the video frames of the video to be analyzed between the starting frame and the ending frame.
An embodiment of the present invention further provides an electronic device, where the electronic device at least includes a processor and a memory, and the processor is configured to implement the steps of the video processing method according to any one of the above descriptions when executing the computer program stored in the memory.
The embodiment of the present invention further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of any one of the video processing methods described above.
In the embodiment of the invention, at least two pre-stored motion detection blocks of a video to be analyzed are obtained, each motion detection block recording the time of at least one motion detection frame. When behavior analysis is performed, a corresponding target video segment is determined according to the maximum time and the minimum time contained in each motion detection block, and behavior analysis is performed on the moving objects in each target video segment. Because only the times of motion detection frames are recorded in a motion detection block, its data volume is small, so the storage space needed to store motion detection blocks is small; in addition, storing the times of motion detection frames in a motion detection block requires no processing such as re-encoding, so motion detection blocks are generated efficiently.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic diagram of a video processing process according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating a specific process of determining a motion detection block of a video to be analyzed according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of behavior analysis of a video to be analyzed according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a specific video processing procedure according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present invention.
The embodiment of the invention provides a video processing method, apparatus, device and medium, which are used to reduce the storage space occupied by motion detection video.
Example 1:
fig. 1 is a schematic view of a video processing process according to an embodiment of the present invention, where the process includes the following steps:
s101: the method comprises the steps of obtaining at least two pre-stored dynamic examination blocks of a video to be analyzed, wherein each dynamic examination block comprises time information of at least two dynamic examination frames in the video to be analyzed, and the time contained in each dynamic examination block is different.
The video processing method provided by the embodiment of the invention is applied to an electronic device, which may be a smart device with an image acquisition function, or a device such as a PC or a server.
For each video, when the video is received, at least two motion detection blocks corresponding to the video are generated in real time, the time information of at least two motion detection frames in the video is recorded in each motion detection block, and each generated motion detection block is stored. The video and its motion detection blocks are stored correspondingly, and a file path mapping table between the storage position of each video and the storage position of its motion detection blocks is maintained in advance, so that when a certain video is to be analyzed, the storage position of the motion detection blocks corresponding to that video can be determined from the video's storage position and the pre-stored file path mapping table.
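The file path mapping table described above can be sketched as a simple lookup structure. This is an illustrative Python sketch; the paths, the dictionary form, and the function name are assumptions, not details from the patent.

```python
# Hypothetical mapping: video storage path -> storage path of its
# motion detection blocks (all paths are illustrative).
file_path_map = {
    "/storage/videos/cam01/20200112.mp4": "/storage/blocks/cam01/20200112.blk",
}

def blocks_location(video_path: str) -> str:
    """Look up where the motion detection blocks of a video are stored."""
    return file_path_map[video_path]
```

When a video is selected for analysis, a single lookup yields the block file, which is then read instead of re-encoding or scanning any video frames.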
When the electronic device receives an analysis instruction for performing behavior analysis on a video to be analyzed, it obtains the video to be analyzed and its at least two corresponding motion detection blocks from the storage space according to the identification information of the video carried in the analysis instruction.
A motion detection frame is a video frame in the video that contains motion detection data, i.e., a video frame in which an object is moving. The identification information of the video to be analyzed may be the storage location of the video, or a file identifier of the video, such as its name or index number. It can be set flexibly according to actual requirements and is not specifically limited herein.
In addition, the time is the timestamp at which the corresponding motion detection frame was collected, or the relative time of the corresponding motion detection frame within the video to be analyzed. For example, if the time is the timestamp of the corresponding motion detection frame, and a certain motion detection frame of the video to be analyzed was collected at 13:50:48 on January 12, 2020 (UTC+8), the time of that motion detection frame is 1578808248. If the time is the relative time of the corresponding motion detection frame within the video to be analyzed, the total duration of the video to be analyzed is 20 minutes, and the video frame at the 3rd minute and 50th second of the video is a motion detection frame, the time of that motion detection frame may be 000350.
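The two time representations in this example can be reproduced as follows. This is an illustrative Python sketch; the HHMMSS rendering of the relative time is inferred from the "000350" example.

```python
from datetime import datetime, timezone, timedelta

# Absolute form: Unix timestamp of the capture time.
# 13:50:48 on 12 January 2020, UTC+8, matches the example value 1578808248.
capture = datetime(2020, 1, 12, 13, 50, 48,
                   tzinfo=timezone(timedelta(hours=8)))
unix_time = int(capture.timestamp())

# Relative form: offset within the video, rendered as HHMMSS.
offset_seconds = 3 * 60 + 50          # frame at minute 3, second 50
h, rem = divmod(offset_seconds, 3600)
m, s = divmod(rem, 60)
relative_time = f"{h:02d}{m:02d}{s:02d}"
```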
S102: and determining corresponding target video segments according to the maximum time and the minimum time contained in each dynamic detection block, and performing behavior analysis on the moving object in each target video segment.
After each motion detection block of the video to be analyzed is acquired, for each motion detection block, because times are recorded in the block, the first video frame corresponding to the minimum time in the block can be used directly as the starting frame of the target video segment corresponding to the block, the second video frame corresponding to the maximum time can be used as the ending frame, and the segment of the video to be analyzed between the starting frame and the ending frame is determined as the target video segment corresponding to the block.
For example, if the times contained in a certain motion detection block are "1578807948", "1578807959", "1578807960", "1578807961" and "1578808248", then the minimum time contained in the block is "1578807948" and the maximum time is "1578808248". The first video frame corresponding to the minimum time "1578807948" is determined as the starting frame of the target video segment corresponding to the block, the second video frame corresponding to the maximum time "1578808248" is determined as the ending frame, and the segment of the video to be analyzed between them is determined as the target video segment corresponding to the block.
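The segment-selection step above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; the frame rate and the optional padding (the "first/second set frame number" of the later embodiment) are assumed parameters.

```python
def target_segment(block_times, fps=25, pad_before=0, pad_after=0):
    """Map the times in a motion detection block (relative seconds) to a
    (start_frame, end_frame) range within the video to be analyzed.

    pad_before / pad_after widen the segment around the earliest and
    latest motion detection frames, as in the padded variant.
    """
    start = int(min(block_times) * fps) - pad_before
    end = int(max(block_times) * fps) + pad_after
    return max(start, 0), end

# Block with motion at 230 s, 235 s and 250 s into the video,
# 25 fps, widened by 10 frames on each side.
segment = target_segment([230.0, 235.0, 250.0], fps=25,
                         pad_before=10, pad_after=10)
```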
After the target video segments of the video to be analyzed have been determined, behavior analysis is performed on the moving objects in each target video segment, for example analyzing the speed of a moving object in the segment or the motion trajectory of a moving object in the segment.
It should be noted that the process of performing behavior analysis on the moving object in each target video segment belongs to the prior art, and is not described herein again.
In the embodiment of the invention, at least two pre-stored motion detection blocks of a video to be analyzed are obtained, each motion detection block recording the time of at least one motion detection frame. When behavior analysis is performed, a corresponding target video segment is determined according to the maximum time and the minimum time contained in each motion detection block, and behavior analysis is performed on the moving objects in each target video segment. Because only the times of motion detection frames are recorded in a motion detection block, its data volume is small, so the storage space needed to store motion detection blocks is small; in addition, storing the times of motion detection frames in a motion detection block requires no processing such as re-encoding, so motion detection blocks are generated efficiently.
Example 2:
on the basis of the above embodiment, in the embodiment of the present invention, in order to determine the target video segments for behavior analysis conveniently and quickly, the at least two motion detection blocks are determined as follows:
if a motion detection frame containing motion detection data in the video to be analyzed is received, acquiring the time information of the motion detection frame in the video to be analyzed;
and when it is determined that a preset encapsulation requirement is met, encapsulating the acquired time information into a motion detection block.
In the embodiment of the invention, when the electronic device receives the information of a video frame of a certain video sent sequentially by an image acquisition device, it determines whether the video frame is a motion detection frame according to whether the frame's information contains motion detection data. If the video frame is not a motion detection frame, its information is stored directly at the video's storage position; if it is a motion detection frame, the time information of the frame within the video to be analyzed is obtained from the frame's information, and when the electronic device determines that the preset encapsulation requirement is met, the acquired time information is encapsulated into a motion detection block.
In a specific implementation, since the duration of each motion detection frame is very small, often on the order of milliseconds, while the video segment covering an object's movement is generally on the order of seconds, such a segment generally contains many motion detection frames. To avoid the CPU cost of frequently writing the time information of each motion detection frame, the acquired time information may be cached in a buffer. When the electronic device determines that the preset encapsulation requirement is met, the time information currently cached in the buffer is encapsulated into a motion detection block.
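The buffering scheme described above can be sketched as follows. This is an illustrative Python sketch assuming a count-based encapsulation requirement; the patent does not specify the block layout, so a block is modeled as a plain list of times.

```python
class MotionTimeBuffer:
    """Cache motion detection frame times; encapsulate them into a
    'motion detection block' once the buffer is full."""

    def __init__(self, capacity=64):
        self.capacity = capacity   # the set amount of time entries
        self.times = []            # the buffer
        self.blocks = []           # encapsulated motion detection blocks

    def add(self, t):
        self.times.append(t)
        if len(self.times) >= self.capacity:  # preset requirement met
            self.flush()

    def flush(self):
        if self.times:
            self.blocks.append(list(self.times))
            self.times.clear()

buf = MotionTimeBuffer(capacity=3)
for t in [10.0, 10.2, 10.4, 12.0]:
    buf.add(t)
# One full block was encapsulated; the time 12.0 is still buffered.
```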
To facilitate the encapsulation of motion detection blocks, determining that the preset encapsulation requirement is met includes:
determining that the preset encapsulation requirement is met if a set amount of time information has been acquired; and/or
determining that the preset encapsulation requirement is met if the current moment is an encapsulation moment.
In a specific implementation, the length of the time information of each motion detection frame of the video to be analyzed is relatively fixed; for example, a timestamp is generally 10 digits and a relative time is generally 6 digits. For a buffer of fixed size, the amount of time information that can be cached is therefore also fixed. In the embodiment of the present invention, to prevent frequent writes of time information from consuming CPU resources, the acquired time information is cached in the buffer. Accordingly, the number of time entries that fill the buffer can be determined in advance from the buffer size and the length of one time entry, and the set amount of time information can be configured from that number. When the electronic device has cached the set amount of time information in the buffer, the buffer is considered full and the preset encapsulation requirement is determined to be met.
For example, if the buffer's storage space is 1024 KB and the time information of one motion detection frame occupies 16 KB, then at most 1024 / 16 = 64 time entries can be cached in the buffer, so the set amount may be 64. Of course, the set amount may also be smaller than 64 and can be configured flexibly as needed.
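The capacity arithmetic of this example can be checked directly:

```python
buffer_kb = 1024                     # buffer size from the example
entry_kb = 16                        # space per time entry from the example
set_amount = buffer_kb // entry_kb   # 1024 / 16 = 64 entries
```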
In another possible implementation, whether the encapsulation requirement is met may also be determined according to whether the current moment is an encapsulation moment. In a specific implementation, the encapsulation moments can be determined in the following ways:
mode 1: an initial time and a time interval are preset, the initial time can be any time of the day, such as 00: 00/second, 13: 01/00/second, and the time interval can be any duration value, such as 20 seconds, 30 seconds, 15 seconds, and the like. When the packaging time of a certain day is determined, the time corresponding to the backward integral multiple of the preset time interval of the initial time of the day before the initial time of the next day is determined as the packaging time. For example, the preset initial time is 00 minutes 00 seconds at 00 point of each day, the preset time interval is 20 seconds, the time corresponding to the preset time interval 2 times backward from the initial time of the next day is 00 minutes 40 seconds at 00 point, and the 00 minutes 40 seconds is determined as the packaging time.
Mode 2: the encapsulation moments are preset manually. Because moving objects are more likely to appear in video during the daytime, more encapsulation moments can be set for the daytime and fewer for the night, for example: one encapsulation moment every 40 seconds between 0 o'clock and 8 o'clock, one every 20 seconds between 8 o'clock and 20 o'clock, and one every 30 seconds between 20 o'clock and 24 o'clock.
For example, continuing the above settings, the encapsulation moments between 0 o'clock and 8 o'clock may include 00:00:40, 01:00:00, 08:00:00, and so on; the encapsulation moments between 8 o'clock and 20 o'clock may include 15:00:20, 18:40:00, and so on; and the encapsulation moments between 20 o'clock and 24 o'clock may include 20:00:30, 21:10:00, and so on.
Mode 3: a time interval is preset, the moment at which the first motion detection frame's time is cached in the buffer is taken as the initial time, and the encapsulation moments are determined from the initial time and the preset time interval. For example, if the preset time interval is 20 s and the first motion detection frame is cached in the buffer at 15:30:00, the encapsulation moment determined from the initial time and the preset time interval is 15:30:20.
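Modes 1 and 3 differ only in the choice of initial time (a fixed daily time versus the moment the first motion detection frame is buffered), so both can be sketched with one helper. Illustrative Python; times are modeled as seconds since midnight.

```python
def next_encapsulation_moment(initial, interval, now):
    """Smallest initial + k*interval (k >= 1) that is not before `now`.
    All arguments are in seconds since midnight."""
    elapsed = max(now - initial, 0)
    k = max(1, -(-elapsed // interval))   # ceiling division
    return initial + k * interval

# Mode 1: initial time 00:00:00, interval 20 s; at 00:00:35 the next
# encapsulation moment is 00:00:40.
mode1 = next_encapsulation_moment(0, 20, 35)

# Mode 3: first frame buffered at 15:30:00, interval 20 s; the
# encapsulation moment is 15:30:20.
start = 15 * 3600 + 30 * 60
mode3 = next_encapsulation_moment(start, 20, start)
```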
Based on the encapsulation moments determined in the above embodiments, the electronic device judges whether the current moment is an encapsulation moment; if so, it determines that the preset encapsulation requirement is met; otherwise, it continues to cache the time information of motion detection frames in the buffer.
When setting the time interval, different values can be chosen for different scenarios: if the goal is to reduce the CPU cost of writing motion detection blocks, a larger value can be used; if the goal is to shorten the target video segments subsequently determined from the maximum and minimum times of the motion detection frames, a smaller value can be used.
In another possible implementation, the set amount of time information may never be received; for example, if the set amount is 60 and the video ends after only 50 time entries have been received, the time information in the buffer cannot be encapsulated into a motion detection block, so no target video segment can be determined from those 50 times and no behavior analysis can be performed on it. Therefore, to determine as many of the video segments containing motion detection frames as possible, the two implementations above can be combined, i.e., both the set amount of time information and the encapsulation moments are preset. When either encapsulation requirement is met, that is, when the electronic device has acquired the set amount of time information or the current moment is an encapsulation moment, the preset encapsulation requirement is determined to be met, and the time information of the motion detection frames of the video is encapsulated into a motion detection block.
Since a motion detection frame is a video frame containing a moving object, and it is unpredictable when a moving object will appear, there may be periods during which no motion detection frame appears. If whether the packaging requirement is met is judged only by the current time, the buffer may contain no time information, or only one piece, when the requirement is met, and a motion detection block packaged in that situation cannot determine a target video segment. Therefore, in order to reduce the space for storing motion detection blocks and avoid subsequently failing to determine a target video segment, for each motion detection block of the video to be analyzed it can be judged whether the block contains at least two pieces of time information, that is, whether a target video segment can be determined from it. Specifically, after the acquired time information is encapsulated into the motion detection block, the method further includes:
judging whether the number of pieces of time information contained in the motion detection block is not less than two;
and if not, filtering out the motion detection block.
In a specific implementation, if the motion detection block contains fewer than two pieces of time information, a target video segment cannot be determined from it, and the motion detection block is filtered out; otherwise, the motion detection block is saved, since a target video segment can be determined from the time information it contains.
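The filtering step can be sketched in a few lines. In this illustration (names are ours) a motion detection block is represented as a plain list of times:

```python
def filter_motion_blocks(blocks):
    """Keep only motion detection blocks that contain at least two pieces
    of time information, since a target video segment needs both a
    minimum and a maximum time to be bounded."""
    return [block for block in blocks if len(block) >= 2]

print(filter_motion_blocks([[10.0], [12.0, 15.5, 16.0], []]))  # [[12.0, 15.5, 16.0]]
```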
In addition, in order to facilitate the acquisition of each motion detection block, after the acquired time information is packaged into a motion detection block, the method further includes:
storing the motion detection block, determining a position offset according to the storage position of this motion detection block and the storage position of the previous motion detection block of the video to be analyzed, and storing the position offset and the data length of this motion detection block into the previous motion detection block, so that the storage position of this motion detection block can be found from the position offset and data length stored in the previous one.
After a motion detection block of the video to be analyzed is packaged, it is stored in a database, so that each motion detection block of the video to be analyzed can be found conveniently later. The process of determining the position offset belongs to the prior art and is not described herein.
After the position offset is obtained, the position offset and the data length of the motion detection block are stored in the previous motion detection block of the video to be analyzed. When the motion detection blocks of the video to be analyzed are traversed subsequently, the storage position of each block can be found from the position offset and data length stored in the previous block, thereby obtaining the time information stored in the block as well as the position offset and data length of the next block.
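The offset-linked storage described above can be sketched as follows. This is a minimal sketch under our own assumptions: the class, field, and function names are illustrative, and the concrete positions and lengths are arbitrary.

```python
class StoredBlock:
    """Stand-in for a stored motion detection block."""

    def __init__(self, position, data_length, times):
        self.position = position        # storage position of this block
        self.data_length = data_length  # data length of this block
        self.times = times              # time information it contains
        self.next_offset = None         # position offset to the next block
        self.next_length = None         # data length of the next block

def link_blocks(previous, new):
    """Store the position offset and data length of the new block in the
    previous block of the same video."""
    previous.next_offset = new.position - previous.position
    previous.next_length = new.data_length

def next_position(previous):
    """Find the next block's storage position from the previous block."""
    if previous.next_offset is None:
        return None
    return previous.position + previous.next_offset

a = StoredBlock(position=1000, data_length=48, times=[1.0, 2.5])
b = StoredBlock(position=1048, data_length=64, times=[7.0, 9.0])
link_blocks(a, b)
print(next_position(a), a.next_length)  # 1048 64
```

This makes the blocks of one video walkable like a singly linked list, which is what the subsequent behavior-analysis flow (S305 below) relies on.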
Fig. 2 is a schematic diagram of a specific process for determining a motion detection block of a video to be analyzed according to an embodiment of the present invention, where the process includes:
s201: and receiving the information of a certain video frame of the video C, and storing the information of the video frame to a storage position corresponding to the video C.
S202: and judging whether the video frame is a motion detection frame or not according to the information of the video frame, if so, executing S203, and otherwise, executing S207.
S203: information of the time of the motion detection frame in the video C is determined.
S204: the information of the time is buffered in a buffer.
S205: and judging whether the current time is the packaging time, if so, executing S208, and otherwise, executing S206.
S206: and judging whether the information of the set amount of time is acquired, if so, executing S208, and otherwise, executing S207.
It should be noted that the execution order of step S205 and step S206 is not fixed; that is, step S205 may be executed first and then step S206, or step S206 may be executed first and then step S205.
S207: information of a next video frame of the video frame is received and then S202 is performed.
S208: and packaging the information of the time cached in the buffer area into a dynamic detection block, and emptying the buffer area.
S209: and saving the dynamic detection block to a corresponding storage position.
Example 3:
in order to further accurately determine a target video segment for behavior analysis, on the basis of the above embodiments, in an embodiment of the present invention, the determining a corresponding target video segment according to the maximum time and the minimum time included in each of the motion blocks respectively includes:
for each motion detection block, obtaining a first video frame corresponding to the minimum time contained in the motion detection block and a second video frame corresponding to the maximum time; taking a first target video frame a first set number of frames before the first video frame as the start frame of the target video segment corresponding to the motion detection block; taking a second target video frame a second set number of frames after the second video frame as the end frame of the target video segment corresponding to the motion detection block; and determining the target video segment corresponding to the motion detection block from the video frames of the video to be analyzed between the start frame and the end frame.
In order to analyze the behavior of a moving object in a target video segment more completely, in the embodiment of the present invention a first set frame number and a second set frame number are preset. After each motion detection block of the video to be analyzed is obtained based on the above embodiments, for each motion detection block the first video frame corresponding to the minimum time in the block and the second video frame corresponding to the maximum time in the block are determined first; a first target video frame a first set number of frames before the first video frame is taken as the start frame of the target video segment corresponding to the block; a second target video frame a second set number of frames after the second video frame is taken as the end frame; and the target video segment corresponding to the block is determined from the video frames of the video to be analyzed between the start frame and the end frame.
The first set frame number and the second set frame number may be the same or different, and may be set to different values for different usage scenarios: if more accurate analysis results are desired, they can be set larger, such as 300 or 500 frames; if higher analysis efficiency is desired, they can be set smaller, such as 50 or 80 frames.
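The segment-boundary computation above can be sketched numerically. Converting times to frame indices through a constant fps, and the concrete numbers in the usage line, are our illustrative assumptions, not part of the patent.

```python
def target_segment(block_times, first_set_frames, second_set_frames, fps, total_frames):
    """Determine a target video segment from one motion detection block:
    step back a first set number of frames from the frame at the minimum
    time, and forward a second set number of frames from the frame at
    the maximum time, clamped to the video's frame range."""
    first_frame = int(min(block_times) * fps)    # frame at the minimum time
    second_frame = int(max(block_times) * fps)   # frame at the maximum time
    start = max(0, first_frame - first_set_frames)
    end = min(total_frames - 1, second_frame + second_set_frames)
    return start, end

print(target_segment([2.0, 5.0], 50, 80, fps=25, total_frames=1000))  # (0, 205)
```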
Example 4:
Fig. 3 is a schematic flowchart of a specific process for performing behavior analysis on a video to be analyzed according to an embodiment of the present invention, where the process includes:
S301: reading the video to be analyzed from the database.
S302: and searching a first motion detection block of the video to be analyzed.
S303: and determining a target video segment corresponding to the dynamic detection block in the video to be analyzed.
The process of determining the target video segment corresponding to the motion detection block includes: obtaining a first video frame corresponding to the minimum time contained in the motion detection block and a second video frame corresponding to the maximum time; taking a first target video frame a first set number of frames before the first video frame as the start frame of the target video segment corresponding to the motion detection block; taking a second target video frame a second set number of frames after the second video frame as the end frame of the target video segment corresponding to the motion detection block; and determining the target video segment corresponding to the motion detection block from the video frames of the video to be analyzed between the start frame and the end frame.
S304: and performing behavior analysis on the moving object in the target video segment, and storing the analysis result.
S305: according to the position offset and the data length included in the motion detection block, the next motion detection block is found, and then S303 is executed.
Fig. 4 is a schematic diagram of a specific video processing flow provided by an embodiment of the present invention. The flow includes two parts, motion detection block generation and behavior analysis, and is described in detail below with a server as the execution subject of the video processing:
First part: motion detection block generation, specifically as follows:
s401: and if the server receives a dynamic inspection frame containing dynamic inspection data in the video to be analyzed, acquiring the time information of the dynamic inspection frame in the video to be analyzed.
The time may be a timestamp of the collected corresponding motion detection frame, or a relative time of the corresponding motion detection frame in the video to be analyzed.
S402: and when the server determines that the preset packaging requirement is met, packaging the acquired time information into a dynamic inspection block.
Wherein determining that the predetermined packaging requirement is met comprises:
if the information of the set amount of time is acquired, determining that the preset packaging requirement is met; and/or
And if the current moment is the packaging moment, determining that the preset packaging requirement is met.
S403: the server determines whether the number of pieces of time information included in the motion check block is not less than two, if so, executes S404, and if not, executes S405.
S404: the server stores the dynamic detection block, determines the position offset according to the storage position of the dynamic detection block and the storage position of the last dynamic detection block of the stored video to be analyzed, and stores the position offset and the data length of the dynamic detection block into the last dynamic detection block, so that the storage position of the dynamic detection block is searched according to the position offset and the data length stored in the last dynamic detection block.
S405: the biopsy block is filtered.
A second part: behavior analysis, the concrete implementation process comprises the following steps:
s406: the server receives an analysis instruction for performing behavior analysis on the video to be analyzed, and sequentially obtains at least two pre-stored dynamic detection blocks of the video to be analyzed.
S407: aiming at each dynamic detection block, the server acquires a first video frame corresponding to the minimum time and a second video frame corresponding to the maximum time, wherein the first video frame corresponds to the minimum time and is contained in the dynamic detection block; taking a first target video frame corresponding to a first set frame number ahead of the first video frame as a starting frame of a target video segment corresponding to the dynamic detection block; taking a second target video frame corresponding to a second set frame number backward from the second video frame as a termination frame of a target video segment corresponding to the dynamic detection block; and determining a target video segment corresponding to the dynamic detection block according to each video frame of the video to be analyzed between the starting frame and the ending frame.
S408: and aiming at each target video segment, the server performs behavior analysis on the moving object in the target video segment and stores the analysis result.
After the behavior analysis of the video to be analyzed is completed based on the above embodiments, the analysis result can be stored in a database; when behavior analysis of the same video is requested again, its analysis result can be retrieved directly by video acquisition time period, video acquisition device, and the like. Track information is structured according to the analysis result, and the structured track information is stored separately in a file. When the video is subsequently played, downloaded, or forwarded, analysis results such as the structured track information are composited into the video frames of the video to be analyzed.
Example 5:
Fig. 5 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present invention, where the apparatus includes:
an obtaining module 51, configured to obtain at least two motion detection blocks of a pre-stored video to be analyzed, where each motion detection block includes information of time of at least two motion detection frames in the video to be analyzed, and the time included in each motion detection block is different;
the processing module 52 is configured to determine a corresponding target video segment according to the maximum time and the minimum time included in each of the motion detection blocks, and perform behavior analysis on a moving object in each of the target video segments.
Further, the apparatus further comprises:
the preprocessing module 53 is configured to, if a motion detection frame including motion detection data in the video to be analyzed is received, obtain time information of the motion detection frame in the video to be analyzed; and when the preset packaging requirement is determined to be met, packaging the acquired time information into the dynamic detection block.
Further, the preprocessing module 53 is configured to determine that a preset encapsulation requirement is met if the information of a set amount of time is acquired; and/or determining that the preset packaging requirement is met if the current moment is the packaging moment.
Further, the preprocessing module 53 is further configured to determine whether the number of pieces of time information included in the motion detection block is not less than two; and if not, filtering the dynamic detection block.
Further, the preprocessing module 53 is further configured to store the motion detection block, determine a position offset according to the storage position of the motion detection block and the storage position of the last motion detection block of the stored video to be analyzed, and store the position offset and the data length of the motion detection block in the last motion detection block, so as to search the storage position of the motion detection block according to the position offset and the data length stored in the last motion detection block.
Further, the processing module 52 is configured to, for each motion detection block, obtain a first video frame corresponding to the minimum time contained in the motion detection block and a second video frame corresponding to the maximum time; take a first target video frame a first set number of frames before the first video frame as the start frame of the target video segment corresponding to the motion detection block; take a second target video frame a second set number of frames after the second video frame as the end frame of the target video segment corresponding to the motion detection block; and determine the target video segment corresponding to the motion detection block from the video frames of the video to be analyzed between the start frame and the end frame.
In the embodiment of the invention, at least two pre-stored motion detection blocks of a video to be analyzed are obtained, where each motion detection block records the times of motion detection frames. When behavior analysis is performed, the corresponding target video segment is determined from the maximum and minimum times contained in each motion detection block, and behavior analysis is performed on the moving object in each target video segment. Because the motion detection block records only the times of motion detection frames, its data volume is small and the storage space it requires is small; in addition, because only times are stored in the motion detection block, no encoding or similar processing is needed, so the generation efficiency of the motion detection block is improved.
Example 6:
as shown in fig. 6, which is a schematic structural diagram of an electronic device according to an embodiment of the present invention, on the basis of the foregoing embodiments, an embodiment of the present invention further provides an electronic device, as shown in fig. 6, including: the system comprises a processor 61, a communication interface 62, a memory 63 and a communication bus 64, wherein the processor 61, the communication interface 62 and the memory 63 complete mutual communication through the communication bus 64;
the memory 63 has stored therein a computer program which, when executed by the processor 61, causes the processor 61 to perform the steps of:
acquiring at least two motion detection blocks of a pre-stored video to be analyzed, wherein each motion detection block comprises time information of at least two motion detection frames in the video to be analyzed, and the time contained in each motion detection block is different;
and determining corresponding target video segments according to the maximum time and the minimum time contained in each dynamic detection block, and performing behavior analysis on the moving object in each target video segment.
Further, the processor 61 is further configured to, if a motion detection frame including motion detection data in the video to be analyzed is received, obtain time information of the motion detection frame in the video to be analyzed; and when the preset packaging requirement is determined to be met, packaging the acquired time information into the dynamic detection block.
Further, the processor 61 is configured to determine that a preset encapsulation requirement is met if the information of the set amount of time is acquired; and/or determining that the preset packaging requirement is met if the current moment is the packaging moment.
Further, the processor 61 is further configured to determine whether the number of pieces of time information included in the motion detection block is not less than two; and if not, filtering the dynamic detection block.
Further, the processor 61 is further configured to store the motion detection block, determine a position offset according to the storage position of the motion detection block and the storage position of the last motion detection block of the stored video to be analyzed, and store the position offset and the data length of the motion detection block in the last motion detection block, so as to search the storage position of the motion detection block according to the position offset and the data length stored in the last motion detection block.
Further, the processor 61 is configured to, for each motion detection block, obtain a first video frame corresponding to the minimum time contained in the motion detection block and a second video frame corresponding to the maximum time; take a first target video frame a first set number of frames before the first video frame as the start frame of the target video segment corresponding to the motion detection block; take a second target video frame a second set number of frames after the second video frame as the end frame of the target video segment corresponding to the motion detection block; and determine the target video segment corresponding to the motion detection block from the video frames of the video to be analyzed between the start frame and the end frame.
Because the principle of the electronic device for solving the problem is similar to the video processing method, the implementation of the electronic device may refer to the implementation of the method, and repeated details are not repeated.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface 62 is used for communication between the above-described electronic apparatus and other apparatuses.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Alternatively, the memory may be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit, a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like.
In the embodiment of the invention, at least two pre-stored motion detection blocks of a video to be analyzed are obtained, where each motion detection block records the times of motion detection frames. When behavior analysis is performed, the corresponding target video segment is determined from the maximum and minimum times contained in each motion detection block, and behavior analysis is performed on the moving object in each target video segment. Because the motion detection block records only the times of motion detection frames, its data volume is small and the storage space it requires is small; in addition, because only times are stored in the motion detection block, no encoding or similar processing is needed, so the generation efficiency of the motion detection block is improved.
Example 7:
on the basis of the foregoing embodiments, the present invention further provides a computer-readable storage medium, in which a computer program executable by a processor is stored, and when the program runs on the processor, the processor is caused to execute the following steps:
acquiring at least two motion detection blocks of a pre-stored video to be analyzed, wherein each motion detection block comprises time information of at least two motion detection frames in the video to be analyzed, and the time contained in each motion detection block is different;
and determining corresponding target video segments according to the maximum time and the minimum time contained in each dynamic detection block, and performing behavior analysis on the moving object in each target video segment.
Further, the at least two motion detection blocks are determined by:
if a dynamic inspection frame containing dynamic inspection data in the video to be analyzed is received, acquiring time information of the dynamic inspection frame in the video to be analyzed;
and when the preset packaging requirement is determined to be met, packaging the acquired time information into the dynamic detection block.
Further, the determining that the preset packaging requirement is met includes:
if the information of the set amount of time is acquired, determining that the preset packaging requirement is met; and/or
And if the current moment is the packaging moment, determining that the preset packaging requirement is met.
Further, after the information of the acquired time is encapsulated as the motion detection block, the method further includes:
judging whether the number of the time information contained in the dynamic detection block is not less than two;
and if not, filtering the dynamic detection block.
Further, after the information of the acquired time is encapsulated as the motion detection block, the method further includes:
and storing the dynamic detection block, determining a position offset according to the storage position of the dynamic detection block and the storage position of the last dynamic detection block of the video to be analyzed, and storing the position offset and the data length of the dynamic detection block into the last dynamic detection block, so that the storage position of the dynamic detection block is searched according to the position offset and the data length stored in the last dynamic detection block.
Further, the determining the corresponding target video segment according to the maximum time and the minimum time contained in each of the motion detection blocks respectively includes:
for each motion detection block, obtaining a first video frame corresponding to the minimum time contained in the motion detection block and a second video frame corresponding to the maximum time; taking a first target video frame a first set number of frames before the first video frame as the start frame of the target video segment corresponding to the motion detection block; taking a second target video frame a second set number of frames after the second video frame as the end frame of the target video segment corresponding to the motion detection block; and determining the target video segment corresponding to the motion detection block from the video frames of the video to be analyzed between the start frame and the end frame.
Further, the time is a timestamp of the collected corresponding motion detection frame, or a relative time of the corresponding motion detection frame in the video to be analyzed.
The computer-readable storage medium may be any available medium or data storage device that can be accessed by a processor in the device, including but not limited to magnetic memories such as floppy disks, hard disks, magnetic tapes and magneto-optical disks (MO); optical memories such as CDs, DVDs, BDs and HVDs; and semiconductor memories such as ROMs, EPROMs, EEPROMs, non-volatile memories (NAND flash) and solid state disks (SSDs).
In the embodiment of the invention, at least two pre-stored motion detection blocks of a video to be analyzed are obtained, where each motion detection block records the times of motion detection frames. When behavior analysis is performed, the corresponding target video segment is determined from the maximum and minimum times contained in each motion detection block, and behavior analysis is performed on the moving object in each target video segment. Because the motion detection block records only the times of motion detection frames, its data volume is small and the storage space it requires is small; in addition, because only times are stored in the motion detection block, no encoding or similar processing is needed, so the generation efficiency of the motion detection block is improved.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method for processing video, the method comprising:
acquiring at least two motion detection blocks of a pre-stored video to be analyzed, wherein each motion detection block comprises time information of at least two motion detection frames in the video to be analyzed, and the time contained in each motion detection block is different;
and determining corresponding target video segments according to the maximum time and the minimum time contained in each dynamic detection block, and performing behavior analysis on the moving object in each target video segment.
2. The method of claim 1, wherein the at least two motion detection blocks are determined by:
if a motion detection frame containing motion detection data in the video to be analyzed is received, acquiring time information of the motion detection frame in the video to be analyzed;
and when it is determined that a preset encapsulation requirement is met, encapsulating the acquired time information into a motion detection block.
3. The method of claim 2, wherein determining that the preset encapsulation requirement is met comprises:
if a set quantity of time information has been acquired, determining that the preset encapsulation requirement is met; and/or
if the current moment is an encapsulation moment, determining that the preset encapsulation requirement is met.
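The buffering and encapsulation criteria of claims 2 and 3 can be sketched as follows. This is a minimal illustrative model, not the patent's implementation; the class and method names (`MotionBlockPackager`, `on_motion_frame`, `max_entries`) are assumptions introduced here.

```python
class MotionBlockPackager:
    """Buffers the times of motion detection frames and encapsulates them
    into motion detection blocks when a preset requirement is met."""

    def __init__(self, max_entries=8):
        self.max_entries = max_entries  # preset requirement: a set quantity of time information
        self.buffer = []                # pending frame times awaiting encapsulation
        self.blocks = []                # completed motion detection blocks

    def on_motion_frame(self, frame_time):
        """Record the time of a received frame that carries motion detection data."""
        self.buffer.append(frame_time)
        if len(self.buffer) >= self.max_entries:
            self.flush()

    def flush(self):
        """Encapsulate buffered times into a block; would also be invoked
        when the current moment is an encapsulation moment."""
        if self.buffer:
            self.blocks.append(tuple(self.buffer))
            self.buffer = []
```

Feeding ten frame times with `max_entries=4` yields two full blocks, with the remaining two times encapsulated on the next flush.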
4. The method of claim 2, wherein after the encapsulating the acquired time information into the motion detection block, the method further comprises:
judging whether the motion detection block contains at least two items of time information;
and if not, filtering out the motion detection block.
5. The method of claim 2, wherein after the encapsulating the acquired time information into the motion detection block, the method further comprises:
storing the motion detection block, determining a position offset according to the storage position of the motion detection block and the storage position of the previous motion detection block of the video to be analyzed, and storing the position offset and the data length of the motion detection block into the previous motion detection block, so that the storage position of the motion detection block can be located according to the position offset and the data length stored in the previous motion detection block.
6. The method according to claim 1, wherein the determining a corresponding target video segment according to the maximum time and the minimum time contained in each of the motion detection blocks comprises:
for each motion detection block, acquiring a first video frame corresponding to the minimum time contained in the motion detection block and a second video frame corresponding to the maximum time contained in the motion detection block; taking a first target video frame that is a first set number of frames before the first video frame as a starting frame of the target video segment corresponding to the motion detection block; taking a second target video frame that is a second set number of frames after the second video frame as an end frame of the target video segment corresponding to the motion detection block; and determining the target video segment corresponding to the motion detection block from the video frames of the video to be analyzed between the starting frame and the end frame.
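The segment derivation of claim 6 can be sketched with frame indices standing in for video frames: locate the frames at the block's minimum and maximum times, then pad backward and forward by set frame counts, clamped to the video bounds. The function name, default frame rate, and default padding are assumptions for illustration.

```python
def target_segment(block_times, total_frames, fps=25,
                   lead_frames=10, tail_frames=10):
    """Return (start_frame, end_frame) indices for one motion detection block."""
    first_frame = int(min(block_times) * fps)   # first video frame: at the minimum time
    second_frame = int(max(block_times) * fps)  # second video frame: at the maximum time
    start = max(first_frame - lead_frames, 0)              # step back a first set frame count
    end = min(second_frame + tail_frames, total_frames - 1)  # step forward a second set frame count
    return start, end
```

For a block spanning 2.0 s to 4.0 s at 25 fps, the segment runs from frame 40 to frame 110; a block near the start of the video is clamped to frame 0.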
7. The method of claim 1, wherein the time is an acquisition timestamp of the corresponding motion detection frame or the relative time of the corresponding motion detection frame within the video to be analyzed.
8. An apparatus for processing video, the apparatus comprising:
an acquisition module, configured to acquire at least two motion detection blocks of a pre-stored video to be analyzed, wherein each motion detection block comprises time information of at least two motion detection frames in the video to be analyzed, and the times contained in different motion detection blocks are different;
and a processing module, configured to determine a corresponding target video segment according to the maximum time and the minimum time contained in each motion detection block, and to perform behavior analysis on the moving object in each target video segment.
9. An electronic device, characterized in that the electronic device comprises at least a processor and a memory, the processor being configured to implement the steps of the method for processing video according to any one of claims 1-7 when executing a computer program stored in the memory.
10. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the steps of the method for processing video according to any one of claims 1-7.
CN202010442760.3A 2020-05-22 2020-05-22 Video processing method, device, equipment and medium Active CN111611433B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010442760.3A CN111611433B (en) 2020-05-22 2020-05-22 Video processing method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN111611433A true CN111611433A (en) 2020-09-01
CN111611433B CN111611433B (en) 2023-06-02

Family

ID=72199575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010442760.3A Active CN111611433B (en) 2020-05-22 2020-05-22 Video processing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN111611433B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113034458A (en) * 2021-03-18 2021-06-25 广州市索图智能电子有限公司 Indoor personnel trajectory analysis method and device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567380A (en) * 2010-12-28 2012-07-11 沈阳聚德视频技术有限公司 Method for searching vehicle information in video image
CN103310467A (en) * 2013-07-04 2013-09-18 中国海洋大学 Motion segmentation and identification method applied to portable terminals
CN109495765A (en) * 2017-09-12 2019-03-19 优酷网络技术(北京)有限公司 Video intercepting method and device
CN110099238A (en) * 2019-05-31 2019-08-06 杭州涂鸦信息技术有限公司 A kind of packaging method of video requency frame data, intelligent terminal and storage device
US20190243854A1 (en) * 2017-03-02 2019-08-08 Ricoh Company, Ltd. Analysis of Operator Behavior Focalized on Machine Events

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant