CN111698555B - Video frame extraction processing method and device - Google Patents

Video frame extraction processing method and device

Info

Publication number
CN111698555B
Authority
CN
China
Prior art keywords
video frame
video
target
queue
frame rate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010568679.XA
Other languages
Chinese (zh)
Other versions
CN111698555A (en)
Inventor
王文杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202010568679.XA
Publication of CN111698555A
Application granted
Publication of CN111698555B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4333 Processing operations in response to a pause request
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263 Reformatting operations by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H04N21/440281 Reformatting operations by altering the temporal resolution, e.g. by frame skipping

Abstract

The invention provides a video frame extraction processing method and device. The method comprises: acquiring the video frame rate of a target video and a target frame rate to be set; determining the video frame numbers to be extracted according to the video frame rate and the target frame rate; and extracting the video frames corresponding to those frame numbers from the target video. This solves the problem in the related art that frame extraction for a video with a fixed frame rate cannot be applied to the self-check of an image algorithm's effect. Video frames at any target frame rate can be extracted from a video at any original frame rate (FPS), so the extracted frames are suitable for self-checking the effect of an image algorithm, and both the content accuracy and the time-interval accuracy of the video frames extracted at the target frame rate from the original-frame-rate video are ensured.

Description

Video frame extraction processing method and device
Technical Field
The invention relates to the field of video processing, in particular to a video frame extraction processing method and device.
Background
When an image algorithm solution is used to verify the algorithm's effect, different requirements are often imposed on the frame rate of the test video. On one hand, in terms of video content, video frames at the required frame rate must be extracted from a video at an existing frame rate; on the other hand, in terms of the time interval at which video frames are fed to the algorithm, the extracted video frames must be input to the algorithm interface at the time interval corresponding to their real frame rate. Therefore, a video frame extraction module that guarantees both correct frame content and accurate frame time intervals is needed for self-checking the effect of an algorithm solution.
In the related art, a video frame extraction method suitable for a holographic display device is provided: before the original frame data is acquired, it is scaled to the target resolution, the frame extraction speed is adjusted in real time according to the frame extraction frame rate, and the original frame data is then acquired, so that the frame rendering speed is improved and the original frame data is obtained more quickly. In that method, the frame time T is calculated from the extraction frame rate FPS as T = 1000.0 ms / FPS, and the frame count C is calculated from the start time T0, the end time T1 and the frame time T as C = (T1 - T0) / T.
The above scheme implicitly assumes that the frame rate of the video itself is 1000 fps and extracts and combines frames of the corresponding frame rate whenever the time difference exceeds the frame time T. It ignores the fact that videos captured by ordinary cameras have frame rates far below 1000 fps; the videos that algorithm engineers typically capture with a camera have a much lower frame rate, for example 25 fps. Therefore, the frame extraction function of that scheme is not suitable for the self-check of the effect of an image algorithm solution.
No effective solution has been proposed for the problem in the related art that frame extraction for a video with a fixed frame rate cannot be applied to the self-check of an image algorithm's effect.
Disclosure of Invention
Embodiments of the invention provide a video frame extraction processing method and device to at least solve the problem in the related art that video frame extraction at a fixed frame rate cannot be applied to the self-check of an image algorithm's effect.
According to an embodiment of the present invention, there is provided a video frame extraction processing method, including:
acquiring a video frame rate of a target video and a target frame rate to be set;
determining a video frame number to be extracted according to the video frame rate and the target frame rate;
and extracting the video frame corresponding to the video frame number from the target video.
Optionally, the method further comprises:
when the video frame corresponding to the video frame number is extracted from the target video, directly storing the extracted video frame to a node memory of a pre-established cache queue, and mounting the node memory of the cache queue to a pre-established temporary queue, wherein the cache queue allocates the node memory, and the temporary queue does not allocate the node memory;
acquiring the video frame from the temporary queue;
determining the time interval of the input image according to the target frame rate;
inputting the video frame into an image input interface based on the time interval.
Optionally, before extracting the video frame corresponding to the video frame number from the target video, the method further includes:
creating a video reading thread and applying for a node memory;
creating the cache queue and the temporary queue with the same length as the node memory;
and mounting the node memory to the cache queue.
Optionally, when extracting the video frame corresponding to the video frame number from the target video, directly storing the extracted video frame into a node memory of a pre-created buffer queue, and mounting the node memory of the buffer queue into a pre-created temporary queue includes:
acquiring a node memory from the cache queue before extracting a video frame corresponding to the video frame number from the target video based on the video reading thread;
if the node memory is successfully acquired, storing the video frame on the node memory, and mounting the node memory on the temporary queue;
and if acquiring the node memory fails, suspending the extraction of the video frame corresponding to the video frame number from the target video, and continuing to extract the video frame corresponding to the video frame number from the target video after the node memory on the temporary queue is returned to the cache queue.
Optionally, after the video frame is input into the image input interface, the method further comprises:
acquiring a processing result of the video frame through a result taking thread;
and returning the node memory to the cache queue.
Optionally, the method further comprises:
after the extraction of the video frames corresponding to the video frame numbers in the target video is finished, judging whether the number of free node memories in the cache queue is equal to the length of the cache queue;
and if so, recovering the node memory, the cache queue and the temporary queue.
Optionally, the method further comprises:
determining the video frame number to be extracted according to the video frame rate and the target frame rate in the following mode:
F(i) = (int)((float)(f1 / f2) * i + 1),
where F(i) is the video frame number, i is the index of the video frame, f1 is the video frame rate, and f2 is the target frame rate.
According to another embodiment of the present invention, there is also provided a video frame extraction processing apparatus, including:
the first acquisition module is used for acquiring the video frame rate of the target video and the target frame rate to be set;
the first determining module is used for determining the video frame number to be extracted according to the video frame rate and the target frame rate;
and the extraction module is used for extracting the video frame corresponding to the video frame number from the target video.
Optionally, the apparatus further comprises:
the storage module is used for extracting the video frame corresponding to the video frame number from the target video, simultaneously directly storing the extracted video frame to a node memory of a pre-established cache queue, and mounting the node memory of the cache queue to a pre-established temporary queue, wherein the cache queue allocates the node memory, and the temporary queue does not allocate the node memory;
a second obtaining module, configured to obtain the video frame from the temporary queue;
the second determining module is used for determining the time interval of the input image according to the target frame rate;
an input module for inputting the video frame into an image input interface based on the time interval.
Optionally, the apparatus further comprises:
the first creating module is used for creating a video reading thread and applying for a node memory;
the second creating module is used for creating the cache queue and the temporary queue with the same length as the node memory;
and the mounting module is used for mounting the node memory to the cache queue.
Optionally, the storage module comprises:
the obtaining submodule is used for obtaining the node memory from the cache queue before the video frame corresponding to the video frame number is extracted from the target video based on the video reading thread;
the storage submodule is used for storing the video frame to the node memory and mounting the node memory to the temporary queue if the node memory is successfully acquired;
and the pause submodule is used for pausing the extraction of the video frame corresponding to the video frame number from the target video if the node acquisition fails, and continuing to extract the video frame corresponding to the video frame number from the target video after the node memory on the temporary queue returns to the cache queue.
Optionally, the apparatus further comprises:
the third acquisition module is used for acquiring the processing result of the video frame through a result-taking thread;
and the return module is used for returning the node memory to the cache queue.
Optionally, the apparatus further comprises:
the judging module is used for judging whether the number of nodes in the idle node memory in the cache queue is equal to the length of the cache queue or not after the extraction of the video frame corresponding to the video frame number in the target video is finished;
and the recovery module is used for recovering the node memory, the cache queue and the temporary queue under the condition that the judgment result is yes.
Optionally, the first determining module is further configured to determine, according to the video frame rate and the target frame rate, a video frame number to be extracted by:
F(i) = (int)((float)(f1 / f2) * i + 1),
where F(i) is the video frame number, i is the index of the video frame, f1 is the video frame rate, and f2 is the target frame rate.
According to a further embodiment of the present invention, a computer-readable storage medium is also provided, in which a computer program is stored, wherein the computer program is configured to perform the steps of any of the above-described method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, the video frame rate of the target video and the target frame rate to be set are obtained; the video frame numbers to be extracted are determined according to the video frame rate and the target frame rate; and the video frames corresponding to those frame numbers are extracted from the target video. This solves the problem in the related art that frame extraction for a video with a fixed frame rate cannot be applied to the self-check of an image algorithm's effect, meets the requirement of extracting video frames at any target frame rate from a video at any original frame rate FPS, makes the extracted video frames usable for self-checking the effect of an image algorithm, and ensures both the content accuracy and the time-interval accuracy of the video frames extracted at the target frame rate from the original-frame-rate video.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware structure of a mobile terminal of a video frame extraction processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a video frame extraction processing method according to an embodiment of the invention;
FIG. 3 is a flow diagram of video reading according to an embodiment of the present invention;
FIG. 4 is a flow diagram of a video framing process according to an embodiment of the present invention;
fig. 5 is a block diagram of a video frame extraction processing apparatus according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Example 1
The method provided by the first embodiment of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, fig. 1 is a hardware structure block diagram of a mobile terminal of a video frame extraction processing method according to an embodiment of the present invention, as shown in fig. 1, a mobile terminal 10 may include one or more processors 102 (only one is shown in fig. 1) (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), and a memory 104 for storing data, and optionally, the mobile terminal may further include a transmission device 106 for a communication function and an input/output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal 10 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to the video frame extraction processing method in the embodiment of the present invention; the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In this embodiment, a video frame extraction processing method operating in the mobile terminal or the network architecture is provided, and fig. 2 is a flowchart of the video frame extraction processing method according to the embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, acquiring a video frame rate of a target video and a target frame rate to be set;
step S204, determining the video frame number to be extracted according to the video frame rate and the target frame rate;
Further, the video frame number to be extracted may be determined according to the video frame rate and the target frame rate in the following manner:
F(i) = (int)((float)(f1 / f2) * i + 1),
where F(i) is the video frame number, i is the index of the video frame, f1 is the video frame rate, and f2 is the target frame rate.
In a security system, the principle of specifying a video frame rate for an image algorithm solution is as follows. The system generally captures images at a certain video frame rate, for example 25 fps, i.e., it outputs 25 YUV frames per second. Because the algorithm itself takes time, the frame rate it requires is lower than the video acquisition frame rate; for example, a required frame rate of 12 fps means that 12 of the 25 frames captured each second are extracted and sent to the algorithm interface, so the algorithm solution receives video at 12 fps. Other frame rates are set in the same way and are not described again here. When reading a test video, the same method can be used for frame extraction.
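As a minimal sketch of the frame-number calculation above (the function names, the choice of C++ and the treatment of f1/f2 as a real-valued ratio are assumptions for illustration, not the claimed implementation), the following program applies F(i) = (int)((float)(f1/f2) * i + 1) to the 25 fps to 12 fps example:

    #include <cstdio>
    #include <vector>

    // Compute the frame numbers F(i) to extract, following the formula
    // F(i) = (int)((float)(f1 / f2) * i + 1) from the description, where
    // f1 is the original video frame rate and f2 is the target frame rate.
    std::vector<int> frames_to_extract(float f1, float f2) {
        std::vector<int> frame_numbers;
        // One second of output contains f2 frames, indexed i = 0 .. f2 - 1.
        for (int i = 0; i < static_cast<int>(f2); ++i) {
            frame_numbers.push_back(static_cast<int>((f1 / f2) * i + 1));
        }
        return frame_numbers;
    }

    int main() {
        // Example from the description: 25 fps source, 12 fps target.
        for (int F_i : frames_to_extract(25.0f, 12.0f)) {
            std::printf("%d ", F_i);   // prints 1 3 5 7 9 11 13 15 17 19 21 23
        }
        std::printf("\n");
        return 0;
    }

With these parameters, 12 of the 25 frames captured in each second are selected, matching the example above.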
Step S206, extracting the video frame corresponding to the video frame number from the target video.
Through steps S202 to S206, the video frame rate of the target video and the target frame rate to be set are obtained, the video frame numbers to be extracted are determined according to the video frame rate and the target frame rate, and the video frames corresponding to those frame numbers are extracted from the target video. This solves the problem in the related art that video frame extraction at a fixed frame rate cannot be applied to the self-check of an image algorithm's effect, and meets the requirement of extracting video frames at any target frame rate from a video at any frame rate, so that the extracted frames can be used for self-checking the algorithm's effect and the content accuracy of the target-frame-rate video frames extracted from the original-frame-rate video is ensured. The frame numbers of the frames to be extracted are derived from the relationship between the original frame rate of the video and the target frame rate to be generated, thereby achieving both the frame extraction function and an accurate time interval for the video at the specified frame rate.
In the embodiment of the invention, while a video frame corresponding to a video frame number is extracted from the target video, the extracted video frame is stored directly in a node memory of a pre-created cache queue, and that node memory is mounted onto a pre-created temporary queue, wherein the cache queue allocates the node memories and the temporary queue does not. The video frame is then acquired from the temporary queue, the time interval of the input images is determined according to the target frame rate, and the video frame is input into the image input interface based on that time interval. By calculating the frame numbers of the video frames to be acquired and setting a video-frame margin mechanism with a dual-queue cooperative management technique, the cooperation of the two queues separates the video-reading and frame-extraction operation from the effect-verification operation of the image algorithm solution and accurately simulates any frame rate.
Further, before extracting the video frame corresponding to the video frame number from the target video, creating a video reading thread and applying for a node memory; creating the cache queue and the temporary queue with the same length as the node memory; and mounting the node memory to the cache queue.
Correspondingly, extracting the video frame corresponding to the video frame number from the target video while directly storing the extracted video frame to a node memory of a pre-created cache queue and mounting that node memory onto a pre-created temporary queue may specifically include: acquiring a node memory from the cache queue, based on the video reading thread, before extracting the video frame corresponding to the video frame number from the target video; if the node memory is successfully acquired, storing the video frame in the node memory and mounting the node memory onto the temporary queue; and if acquiring the node memory fails, suspending the extraction of the video frame corresponding to the video frame number from the target video, and continuing to extract the video frame corresponding to the video frame number from the target video after a node memory on the temporary queue is returned to the cache queue.
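The following is a minimal sketch of the dual-queue cooperation described above; the class and member names are assumptions chosen for illustration rather than the actual implementation. Q1 (the cache queue) owns the pre-allocated node memories, Q2 (the temporary queue) only transfers nodes that currently carry a frame, the reading thread blocks when Q1 is empty, and the sending side blocks on Q2 until a frame arrives or reading is declared finished:

    #include <condition_variable>
    #include <cstdint>
    #include <memory>
    #include <mutex>
    #include <queue>
    #include <vector>

    // One node memory, large enough for a decoded frame (size is illustrative).
    struct FrameNode {
        std::vector<uint8_t> buffer;
        int frame_number = -1;
        explicit FrameNode(size_t bytes) : buffer(bytes) {}
    };

    // Dual-queue margin mechanism: Q1 allocates and owns the node memories,
    // Q2 is only a transfer queue and allocates nothing.
    class DualQueue {
    public:
        DualQueue(size_t length, size_t frame_bytes) {
            for (size_t n = 0; n < length; ++n)
                q1_.push(std::make_shared<FrameNode>(frame_bytes));  // mount node memories on Q1
        }

        // Reading thread: take a free node from Q1, blocking while none is free.
        std::shared_ptr<FrameNode> acquire_free_node() {
            std::unique_lock<std::mutex> lock(mtx_);
            cv_.wait(lock, [this] { return !q1_.empty(); });  // loop-block until a node is returned
            auto node = q1_.front();
            q1_.pop();
            return node;
        }

        // Reading thread: after filling the node with a frame, mount it on Q2.
        void mount_on_temp(std::shared_ptr<FrameNode> node) {
            std::lock_guard<std::mutex> lock(mtx_);
            q2_.push(std::move(node));
            cv_.notify_all();
        }

        // Image-sending thread: wait for the next filled node from Q2; returns
        // nullptr only after finish() has been called and Q2 has been drained.
        std::shared_ptr<FrameNode> fetch_filled_node() {
            std::unique_lock<std::mutex> lock(mtx_);
            cv_.wait(lock, [this] { return !q2_.empty() || finished_; });
            if (q2_.empty()) return nullptr;
            auto node = q2_.front();
            q2_.pop();
            return node;
        }

        // Result-taking thread: return the node memory to Q1 for reuse.
        void return_node(std::shared_ptr<FrameNode> node) {
            std::lock_guard<std::mutex> lock(mtx_);
            q1_.push(std::move(node));
            cv_.notify_all();
        }

        // Reading thread: signal that no more frames will be produced.
        void finish() {
            std::lock_guard<std::mutex> lock(mtx_);
            finished_ = true;
            cv_.notify_all();
        }

        // Teardown check: all node memories are back on Q1 (Q1 is full).
        bool all_nodes_free(size_t length) {
            std::lock_guard<std::mutex> lock(mtx_);
            return q1_.size() == length;
        }

    private:
        std::queue<std::shared_ptr<FrameNode>> q1_;  // cache queue, owns node memories
        std::queue<std::shared_ptr<FrameNode>> q2_;  // temporary queue, no allocation
        std::mutex mtx_;
        std::condition_variable cv_;
        bool finished_ = false;
    };

Blocking on an empty Q1 is exactly the margin mechanism: the reader can only run as far ahead of the algorithm as there are free node memories.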
In an optional embodiment, after the video frame is input into the image input interface, the processing result of the video frame is obtained by a result fetching thread, and the node memory is returned to the cache queue.
In another optional embodiment, after the extraction of the video frames corresponding to the video frame numbers in the target video is finished, it is determined whether the number of free node memories in the cache queue is equal to the length of the cache queue; if so, the node memories, the cache queue and the temporary queue are recycled.
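Continuing the DualQueue sketch above (still an illustrative assumption, not the claimed implementation), the recycling decision described in this paragraph reduces to waiting until the free-node count equals the cache-queue length:

    #include <chrono>
    #include <thread>

    // After video reading ends: recycle only when every node memory has come
    // back to Q1, i.e. the number of free nodes equals the cache-queue length L,
    // which means Q2 has been drained and all frames have been processed.
    void recycle_when_done(DualQueue& queues, size_t L) {
        while (!queues.all_nodes_free(L)) {
            std::this_thread::sleep_for(std::chrono::milliseconds(5));
        }
        // Destroying the DualQueue at this point releases the node memories
        // and both queues.
    }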
The embodiment of the invention combines the dual-queue cooperative management technique with multithreading to set a video-frame margin mechanism and realize an image caching function, thereby accurately simulating the frame rate of the video fed into the algorithm input interface.
Fig. 3 is a flowchart of video reading according to an embodiment of the present invention, as shown in fig. 3, including:
step S301, creating a video reading thread;
step S302, a buffer queue Q1 and a temporary queue Q2 for video reading are created, wherein each node of Q1 is allocated a node memory, and Q2 is a temporary transfer queue whose nodes are not allocated any node memory;
step S303, the video frame extraction operation is executed in a loop: the obtained frame data is saved to a Q1 image node, and that node is mounted onto Q2. Specifically, after the operations of opening the video, calculating the frame numbers of the frames to be extracted, frame skipping, reading, scaling, format conversion and the like are executed, the obtained frame data is the image input required by the image algorithm solution, which meets the requirement of correct frame content. In the image-sending thread, image nodes are obtained from queue Q2 in sequence, the time interval for sending them to the algorithm interface is calculated from the target frame rate fps required by the algorithm solution, and the timing accuracy of the video frames is achieved by setting a delay before each image is sent. The video frame data stream is thus fed to the algorithm solution in a loop. In the result-taking thread, after the algorithm result is obtained, the image node is released from queue Q2 back to queue Q1 so that the node memory can be reused for video reading, saving resource overhead.
Step S304, after the video reading is finished and all the node memories on Q2 have been released (i.e., when queue Q1 is full), the node memories, the queues and the thread resources are recovered, completing the whole video reading task.
Through this design, the usual serial, non-frame-extraction way of verifying the algorithm effect is converted into a parallel mode in which video frame-extraction reading and image sending are separated, which ensures the correctness of the extracted frame data and eliminates the influence of the time spent reading the video on the accuracy of the simulated frame rate.
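A minimal sketch of the image-sending pacing described above (the use of std::chrono and the send_frame callback are assumptions for illustration): the send interval is derived from the target frame rate, and a delay is applied before each frame is handed to the algorithm interface.

    #include <chrono>
    #include <thread>

    // Feed frames from the temporary queue Q2 to the algorithm interface at the
    // target frame rate; send_frame stands in for the actual algorithm input
    // interface and is assumed here for illustration.
    template <typename Queue, typename SendFn>
    void send_loop(Queue& queues, double target_fps, SendFn send_frame) {
        using clock = std::chrono::steady_clock;
        const auto interval = std::chrono::duration_cast<clock::duration>(
            std::chrono::duration<double>(1.0 / target_fps));   // e.g. 1/12 s for 12 fps
        auto next_deadline = clock::now();
        for (;;) {
            auto node = queues.fetch_filled_node();  // image node from the temporary queue Q2
            if (!node) break;                        // reading finished and Q2 drained
            next_deadline += interval;
            std::this_thread::sleep_until(next_deadline);  // delay before sending keeps the interval accurate
            send_frame(*node);                       // hand the frame to the algorithm interface
            queues.return_node(std::move(node));     // in the patent this is done by the result-taking thread
        }
    }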
In order to realize frame extraction, a dual-queue cooperative management technique is adopted to set a video-frame margin mechanism, implementing the design idea of separating video acquisition from algorithm processing, so that video with an accurate frame rate and correct frame content is sent to the algorithm processing interface. Fig. 4 is a flowchart of a video frame extraction procedure according to an embodiment of the present invention. As shown in fig. 4, with the creation and setting of queues Q1 and Q2 completed, the video frame extraction comprises:
step S401, opening a video by using an ffmpeg basic library interface;
optionally, before the step S401, a video reading thread is created, and a corresponding number of node memory pools are applied, where the number of image frames is L; creating a buffer image queue Q1 and a temporary queue Q2 with the length of L; and mounting the memory of the L frame node to L nodes of Q1.
A step S402 of determining whether or not the video reading is finished, and if the determination result is no, executing a step S403, and if the determination result is yes, executing a step S409;
step S403, judging whether the node Q1 is successfully acquired, if so, executing step S405, otherwise, executing step S404;
step S404, circularly blocking and waiting;
step S405, calculating the acquired video frame number according to the FPS and the FPS;
step S406, jumping to a video frame corresponding to the video frame number by using a frame jumping module of the ffmpeg basic library, and reading the video frame;
step S407, scaling and format conversion are carried out according to requirements and the scaling and format conversion are stored in a Q1 node memory;
step S408, mount the node memory on the Q2 queue, and then return to step S402 to continue execution;
step S409, closing the video at the end mark position 1;
step S410, according to the length of Q1, determining that all node images of Q2 are processed completely;
in step S411, the Q1 node memory is destroyed and the queues Q1 and Q2 are deleted.
After the video is successfully opened and a queue Q1 node is obtained, the frame number F(i) of the frame to be read is calculated according to the video frame rate FPS and the target frame rate fps; the frame-skipping interface of the basic library is then used to jump to frame F(i), the frame data is read, scaled and format-converted to the parameters required by the algorithm solution, and the converted video frame is stored in the acquired Q1 node memory; finally the node is mounted onto Q2, and this video-reading and frame-extraction operation is executed in a loop. When the video ends, the end flag is set to 1 and the video is closed; according to the length of queue Q1, it is ensured that all node images of Q2 have been processed, and finally the Q1 node memories are destroyed and queues Q1 and Q2 are deleted, completing the whole frame extraction setup.
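As a hedged illustration only, the frame-skipping read described above could be realized with the FFmpeg libavformat/libavcodec C API roughly as follows. The function read_frame_at and its simplified error handling are assumptions for this sketch; av_seek_frame lands on the nearest preceding key frame, so decoding continues until the requested frame is reached, and the scaling and format conversion (e.g. with libswscale) is only indicated by a comment:

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/rational.h>
    }

    // Seek to frame number F(i) of the given video stream and decode that frame
    // into `out` (allocated by the caller with av_frame_alloc). fmt, dec and
    // stream_index are assumed to have been set up when the video was opened
    // (avformat_open_input / avcodec_open2).
    static bool read_frame_at(AVFormatContext* fmt, AVCodecContext* dec,
                              int stream_index, int64_t frame_number, AVFrame* out) {
        AVStream* st = fmt->streams[stream_index];
        // Convert the frame number into a timestamp in the stream time base.
        int64_t ts = av_rescale_q(frame_number,
                                  av_inv_q(st->avg_frame_rate), st->time_base);
        if (av_seek_frame(fmt, stream_index, ts, AVSEEK_FLAG_BACKWARD) < 0)
            return false;
        avcodec_flush_buffers(dec);                // drop frames buffered before the seek

        AVPacket* pkt = av_packet_alloc();
        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == stream_index) {
                avcodec_send_packet(dec, pkt);
                if (avcodec_receive_frame(dec, out) == 0 && out->pts >= ts) {
                    av_packet_unref(pkt);
                    av_packet_free(&pkt);
                    // Here the frame would be scaled / format-converted
                    // (e.g. with sws_scale) and copied into a Q1 node memory.
                    return true;
                }
            }
            av_packet_unref(pkt);
        }
        av_packet_free(&pkt);
        return false;                              // end of video reached
    }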
In the video reading thread, a node memory pool of the corresponding format is first applied for, with the number of frames determined as L; image cache queues Q1 and Q2 of length L are created, and the L memory segments of the pool are mounted onto the L nodes of queue Q1. Whenever a Q1 queue node is acquired successfully, one frame of image after frame-extraction processing is saved on that Q1 node and the node is mounted onto Q2; these steps are executed in a loop. When acquiring a Q1 queue node fails, all data nodes of Q1 are being processed in the margin buffer and algorithm flow of queue Q2, and the loop blocks and waits. After a node is returned to Q1, the video frame acquisition operation continues.
In order to allow the different threads for video reading and image sending to execute in parallel, queues Q1 and Q2 interact with each other. In the video reading thread, a node memory is first obtained from queue Q1, and the acquired video frame is stored in the corresponding node memory after frame-extraction processing; the node is then mounted onto queue Q2 for the image-sending thread to fetch, while the video reading operation continues in a loop. In the image-sending process, video frame nodes are obtained from queue Q2 according to the frame rate requirement, and a delay is set according to the required video frame rate fps, ensuring the accuracy of the frame rate at which video is sent to the algorithm solution interface; the image-sending operation is executed in a loop until sending is finished. In the result-taking thread, after a result is obtained, the node memory is returned to queue Q1 for cyclic use, saving memory overhead.
The embodiment of the invention derives the frame numbers of the frames to be extracted by theoretical calculation and then actually extracts those frames, which ensures the content accuracy of video frames extracted at the target frame rate fps from a video at the original frame rate FPS. In the special scenario of self-checking an image algorithm solution, a dual-queue cooperative management technique is adopted to set a video-frame margin mechanism, which overcomes the influence of the time consumed by video reading and frame extraction on the real frame rate required by algorithm processing and ensures the timing accuracy of the video frames fed into the algorithm interface. The method is applicable to original videos at various frame rates FPS and can generate videos to be tested at various frame rates fps; it is especially suitable for verifying the effect of an image algorithm solution, and the module is highly functional and practical.
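Tying the earlier sketches together (all names remain illustrative assumptions rather than the claimed implementation), a reading thread and an image-sending thread could cooperate as follows; only the queue handshake is shown, and only the first second of output frames is generated for brevity:

    #include <thread>

    void run_self_check(float source_fps, float target_fps) {
        const size_t L = 8;                        // queue length / number of node memories
        DualQueue queues(L, 1920 * 1080 * 3 / 2);  // e.g. one 1080p YUV420 frame per node

        std::thread reader([&] {
            // frames_to_extract() is the frame-number sketch given earlier.
            for (int frame_number : frames_to_extract(source_fps, target_fps)) {
                auto node = queues.acquire_free_node();   // blocks while Q1 is empty
                node->frame_number = frame_number;
                // ... decode frame `frame_number` into node->buffer
                //     (e.g. with the read_frame_at() sketch) ...
                queues.mount_on_temp(std::move(node));
            }
            queues.finish();                              // no more frames will be produced
        });

        std::thread sender([&] {
            send_loop(queues, target_fps, [](const FrameNode& frame) {
                (void)frame;  // ... hand the frame to the algorithm input interface ...
            });
        });

        reader.join();
        sender.join();
        recycle_when_done(queues, L);   // all node memories are back on Q1
    }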
Example 2
According to another embodiment of the present invention, there is also provided a video frame extracting apparatus, and fig. 5 is a block diagram of the video frame extracting apparatus according to the embodiment of the present invention, as shown in fig. 5, including:
a first obtaining module 52, configured to obtain a video frame rate of a target video and a target frame rate to be set;
a first determining module 54, configured to determine a video frame number to be extracted according to the video frame rate and the target frame rate;
and an extracting module 56, configured to extract a video frame corresponding to the video frame number from the target video.
Optionally, the apparatus further comprises:
the storage module is used for extracting the video frame corresponding to the video frame number from the target video, simultaneously storing the extracted video frame to a node memory of a pre-established cache queue, and mounting the node memory of the cache queue to a pre-established temporary queue, wherein the cache queue allocates the node memory, and the temporary queue does not allocate the node memory;
a second obtaining module, configured to obtain the video frame from the temporary queue;
the second determining module is used for determining the time interval of the input image according to the target frame rate;
an input module for inputting the video frame into an image input interface based on the time interval.
Optionally, the apparatus further comprises:
the first creating module is used for creating a video reading thread and applying for a node memory;
a second creating module, configured to create the cache queue and the temporary queue that have the same length as the memory of the node;
and the mounting module is used for mounting the node memory to the cache queue.
Optionally, the storage module comprises:
the obtaining submodule is used for obtaining the node memory from the cache queue before the video frame corresponding to the video frame number is extracted from the target video based on the video reading thread;
the storage submodule is used for storing the video frame to the node memory and mounting the node memory to the temporary queue if the node memory is successfully acquired;
and the pause submodule is used for pausing the extraction of the video frame corresponding to the video frame number from the target video if the acquisition of the node memory fails, and continuing to extract the video frame corresponding to the video frame number from the target video after the node memory on the temporary queue returns to the cache queue.
Optionally, the apparatus further comprises:
the third acquisition module is used for acquiring the processing result of the video frame through a result-taking thread;
and the return module is used for returning the node memory to the cache queue.
Optionally, the apparatus further comprises:
the judging module is used for judging whether the number of nodes in the idle node memory in the cache queue is equal to the length of the cache queue or not after the extraction of the video frame corresponding to the video frame number in the target video is finished;
and the recycling module is used for recycling the buffer queue and the temporary queue under the condition that the judgment result is yes.
Optionally, the first determining module 54 is further configured to determine, according to the video frame rate and the target frame rate, a video frame number to be extracted by:
F(i) = (int)((float)(f1 / f2) * i + 1),
where F(i) is the video frame number, i is the index of the video frame, f1 is the video frame rate, and f2 is the target frame rate.
It should be noted that the above modules may be implemented by software or hardware; in the latter case, this may be achieved in, but is not limited to, the following forms: the modules are all located in the same processor, or the modules are located in different processors in any combination.
Example 3
Embodiments of the present invention also provide a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring the video frame rate of the target video and the target frame rate to be set;
s2, determining the video frame number to be extracted according to the video frame rate and the target frame rate;
and S3, extracting the video frame corresponding to the video frame number from the target video.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Example 4
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring the video frame rate of the target video and the target frame rate to be set;
s2, determining the video frame number to be extracted according to the video frame rate and the target frame rate;
and S3, extracting the video frame corresponding to the video frame number from the target video.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by a computing device; in some cases, the steps shown or described may be executed in a different order from that given here, or they may be implemented as separate integrated circuit modules, or multiple modules or steps among them may be implemented as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A method for video frame extraction, comprising:
acquiring a video frame rate of a target video and a target frame rate to be set;
determining a video frame number to be extracted according to the video frame rate and the target frame rate;
extracting a video frame corresponding to the video frame number from the target video, simultaneously directly storing the extracted video frame to a node memory of a pre-established cache queue, and mounting the node memory of the cache queue to a pre-established temporary queue, wherein the cache queue allocates the node memory, and the temporary queue does not allocate the node memory;
acquiring the video frame from the temporary queue;
determining the time interval of the input image according to the target frame rate;
inputting the video frame into an image input interface based on the time interval.
2. The method according to claim 1, wherein before extracting the video frame corresponding to the video frame number from the target video, the method further comprises:
creating a video reading thread and applying for a node memory;
creating the cache queue and the temporary queue with the same length as the node memory;
and mounting the node memory to the cache queue.
3. The method according to claim 2, wherein extracting the video frame corresponding to the video frame number from the target video, and simultaneously directly storing the extracted video frame to a node memory of a pre-created buffer queue, and mounting the node memory of the buffer queue to a pre-created temporary queue comprises:
acquiring a node memory from the cache queue before extracting a video frame corresponding to the video frame number from the target video based on the video reading thread;
if the node memory is successfully acquired, directly storing the extracted video frame to the node memory, and mounting the node memory to the temporary queue;
and if acquiring the node memory fails, suspending the extraction of the video frame corresponding to the video frame number from the target video, and continuing to extract the video frame corresponding to the video frame number from the target video after the node memory on the temporary queue is returned to the cache queue.
4. The method of claim 1, wherein after inputting the video frame into an image input interface, the method further comprises:
acquiring a processing result of the video frame through a result taking thread;
and returning the node memory to the cache queue.
5. The method of claim 4, further comprising:
after the extraction of the video frames corresponding to the video frame numbers in the target video is finished, judging whether the number of free node memories in the cache queue is equal to the length of the cache queue;
and if so, recovering the node memory, the cache queue and the temporary queue.
6. The method according to any one of claims 1 to 5, further comprising:
determining the video frame number to be extracted according to the video frame rate and the target frame rate in the following mode:
F(i) = (int)((float)(f1 / f2) * i + 1),
where F(i) is the video frame number, i is the index of the video frame, f1 is the video frame rate, and f2 is the target frame rate.
7. A video frame extraction processing apparatus, comprising:
the first acquisition module is used for acquiring the video frame rate of the target video and the target frame rate to be set;
the first determining module is used for determining the video frame number to be extracted according to the video frame rate and the target frame rate;
the extraction module is used for extracting the video frame corresponding to the video frame number from the target video;
the device further comprises:
the storage module is used for extracting the video frame corresponding to the video frame number from the target video, simultaneously directly storing the extracted video frame to a node memory of a pre-established cache queue, and mounting the node memory of the cache queue to a pre-established temporary queue, wherein the cache queue allocates the node memory, and the temporary queue does not allocate the node memory;
a second obtaining module, configured to obtain the video frame from the temporary queue;
the second determining module is used for determining the time interval of the input image according to the target frame rate;
an input module for inputting the video frame into an image input interface based on the time interval.
8. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to carry out the method of any one of claims 1 to 6 when executed.
9. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 6.
CN202010568679.XA 2020-06-19 2020-06-19 Video frame extraction processing method and device Active CN111698555B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010568679.XA CN111698555B (en) 2020-06-19 2020-06-19 Video frame extraction processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010568679.XA CN111698555B (en) 2020-06-19 2020-06-19 Video frame extraction processing method and device

Publications (2)

Publication Number Publication Date
CN111698555A CN111698555A (en) 2020-09-22
CN111698555B true CN111698555B (en) 2022-08-16

Family

ID=72482325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010568679.XA Active CN111698555B (en) 2020-06-19 2020-06-19 Video frame extraction processing method and device

Country Status (1)

Country Link
CN (1) CN111698555B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565886A (en) * 2020-12-29 2021-03-26 北京奇艺世纪科技有限公司 Video frame extraction method and device, electronic equipment and readable storage medium
CN112887510A (en) * 2021-01-19 2021-06-01 三一重工股份有限公司 Video playing method and system based on video detection
CN113163260B (en) * 2021-03-09 2023-03-24 北京百度网讯科技有限公司 Video frame output control method and device and electronic equipment
CN113438537A (en) * 2021-06-24 2021-09-24 广州欢网科技有限责任公司 Terminal screen saver loading method and device and terminal equipment
CN115514970A (en) * 2022-10-28 2022-12-23 重庆紫光华山智安科技有限公司 Image frame pushing method and system, electronic equipment and readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI332205B (en) * 2007-03-01 2010-10-21 Lite On It Corp Data modulation/encryption method used in holographic stotage system
CN106470323B (en) * 2015-08-14 2019-08-16 杭州海康威视系统技术有限公司 The storage method and equipment of video data
CN105578207A (en) * 2015-12-18 2016-05-11 无锡天脉聚源传媒科技有限公司 Video frame rate conversion method and device

Also Published As

Publication number Publication date
CN111698555A (en) 2020-09-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant