CN111475676B - Video data processing method, system, device, equipment and readable storage medium - Google Patents


Info

Publication number: CN111475676B
Application number: CN202010265441.XA
Authority: CN (China)
Prior art keywords: information, target, video data, template, video
Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Other languages: Chinese (zh)
Other versions: CN111475676A
Inventor: 崔贤浩
Current Assignee: Shenzhen Ultra Hd Technology Co ltd (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Original Assignee: Shenzhen Ultra Hd Technology Co ltd
Priority date: (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)

Events:
• Application filed by Shenzhen Ultra Hd Technology Co ltd
• Priority to CN202010265441.XA
• Publication of CN111475676A
• Application granted
• Publication of CN111475676B
• Active legal status
• Anticipated expiration


Classifications

    • G — PHYSICS
      • G06 — COMPUTING; CALCULATING OR COUNTING
        • G06F — ELECTRIC DIGITAL DATA PROCESSING
          • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
            • G06F16/70 — Information retrieval; Database structures therefor; File system structures therefor of video data
              • G06F16/73 — Querying
                • G06F16/732 — Query formulation
                • G06F16/738 — Presentation of query results
              • G06F16/78 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
                • G06F16/7867 — Retrieval using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • H — ELECTRICITY
      • H04 — ELECTRIC COMMUNICATION TECHNIQUE
        • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
            • H04N23/80 — Camera processing pipelines; Components thereof
          • H04N5/00 — Details of television systems
            • H04N5/222 — Studio circuitry; Studio devices; Studio equipment
              • H04N5/262 — Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Abstract

The application belongs to the technical field of multi-view video and provides a video data processing method, system, apparatus, device, and readable storage medium. The video data processing method comprises: responding to a video generation instruction, wherein the video generation instruction comprises template identification information and target time information; acquiring template parameter information of the corresponding target template from a preset template information base according to the template identification information, wherein the template parameter information comprises search interval information and sequencing information; searching at least one path of target video data according to the search interval information, and acquiring, in the target video data, the target frame image corresponding to the target time information; and sequencing the target frame images according to the sequencing information, then encoding and rendering them to generate a target video. With this method, a user can quickly generate a desired video file simply by selecting a template to refer to; only a simple operation is needed, which solves the prior-art problems of highly repetitive and slow video production.

Description

Video data processing method, system, device, equipment and readable storage medium
Technical Field
The present application relates to the field of multi-view video technologies, and in particular, to a method, a system, an apparatus, a device, and a readable storage medium for processing video data.
Background
A free-view video is a video, produced by synchronously capturing video data from multiple cameras pointed in different viewing directions and then applying certain editing processes, that allows the same moment to be observed from multiple viewpoints. The multiple cameras are synchronized in time and space. To produce a free-view video, it is necessary to analyze the characteristics and positions of the cameras, minimize the state differences between them, and rapidly produce, edit, and output the video through algorithmic processing during playback.
To date, video data from multiple cameras has had to be edited by professionals, and the editing process is highly repetitive and tedious; a method for quickly and effectively producing free-view video is therefore needed.
Summary of the application
The main purpose of the present application is to provide a video data processing method, system, apparatus, device, and readable storage medium, aiming to solve the problems in the prior art that free-view video production involves highly repetitive manual operation and slow generation.
In order to achieve the above object, a first aspect of the embodiments of the present application provides a video data processing method, including:
responding to a video generation instruction, wherein the video generation instruction comprises template identification information and target time information, and acquiring template parameter information of the corresponding target template from a preset template information base according to the template identification information, wherein the template parameter information comprises search interval information and sequencing information;
searching at least one path of target video data according to the search interval information, and acquiring the target frame image corresponding to the target time information in the target video data;
and sequencing the target frame images according to the sequencing information, then encoding and rendering them to generate a target video.
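The three steps above can be illustrated with a minimal Python sketch. All names here (TemplateParams, TEMPLATE_DB, generate_target_frames) are hypothetical stand-ins, not the patent's actual implementation, and frames are represented as plain values rather than images:

```python
from dataclasses import dataclass

@dataclass
class TemplateParams:
    search_interval: range  # camera numbers whose video data paths are searched
    descending: bool        # sequencing information: the preset visual direction

# Stand-in for the preset template information base
TEMPLATE_DB = {"sweep-0-10": TemplateParams(range(0, 11), descending=False)}

def generate_target_frames(template_id, target_index, streams):
    """streams maps camera number -> list of frames (list index = frame time)."""
    params = TEMPLATE_DB[template_id]                 # step 1: resolve the template
    frames = [streams[cam][target_index]              # step 2: one frame per path
              for cam in params.search_interval if cam in streams]
    if params.descending:                             # step 3: apply the sequencing
        frames.reverse()
    return frames                                     # encoding/rendering omitted
```

For the FIG. 3 style template (cameras 0 to 10), this returns 11 frames, one per camera, all taken at the same frame time.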
A second aspect of the embodiments of the present application provides a video data processing apparatus, comprising: an instruction acquisition module, configured to respond to a video generation instruction, wherein the video generation instruction comprises template identification information and target time information, and to acquire template parameter information of the corresponding target template from a preset template information base according to the template identification information, wherein the template parameter information comprises search interval information and sequencing information;
a data acquisition module, configured to search at least one path of target video data according to the search interval information and to acquire the target frame image corresponding to the target time information in the target video data;
and a video production module, configured to sequence the target frame images according to the sequencing information and to generate a target video through encoding and rendering.
A third aspect of the embodiments of the present application provides a video data processing device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor implements the steps of the above method when executing the computer program.
A fourth aspect of the embodiments of the present application provides a readable storage medium storing a computer program which, when executed by a processor or processing unit, implements the steps of the method described above.
According to the embodiments of the present application, template parameter information is obtained from the preset template information base according to the video generation instruction, and the target video can then be generated quickly from that template parameter information. Through these embodiments, a user can quickly generate a desired video file merely by selecting a template to refer to; no professional background is required, and only simple operations are needed, which solves the prior-art problems of highly repetitive manual operation and slow generation in video production.
Drawings
To illustrate the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed for describing them are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a video data processing system according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of an implementation of a video data processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a template provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of a template provided in accordance with another embodiment of the present application;
FIG. 5 is a detailed flowchart of step S22 provided in another embodiment of the present application;
FIG. 6 is a detailed flowchart of step S23 provided in another embodiment of the present application;
FIG. 7 is a schematic diagram of a template provided in accordance with another embodiment of the present application;
FIG. 8 is a schematic diagram of a video generation process according to an embodiment of the present application;
FIG. 9 is a schematic flowchart of an implementation of a video data processing method according to another embodiment of the present application;
FIG. 10 is a schematic diagram of a storage process for a video file according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a video data processing apparatus according to an embodiment of the present application;
FIG. 12 is a schematic hardware configuration diagram of a video data processing device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
In particular implementations, the terminal devices described in the embodiments of the present application include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having touch-sensitive surfaces (e.g., touch-screen displays and/or touch pads). It should also be understood that in some embodiments the device is not a portable communication device but a desktop computer having a touch-sensitive surface (e.g., a touch-screen display and/or touchpad).
In the discussion that follows, a terminal device that includes a display and a touch-sensitive surface is described. However, it should be understood that the terminal device may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The terminal device supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the terminal device may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
In addition, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
The video data processing method provided by the embodiments of the present application is applied to a video data processing device. The video data processing device may be a mobile phone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or other terminal device with processing capability; the embodiments of the present application place no limitation on the specific type of the video data processing device.
Referring to FIG. 1, taking a notebook computer as an example of the video data processing device 10, a video data processing system provided by an embodiment of the present application includes the video data processing device 10, which implements the video data processing method of the embodiments, and a camera array formed by a plurality of camera devices 20 that capture video of a target scene; the video data processing device 10 is communicatively connected to each camera device 20. The communication means include wired and wireless connections. Each camera device 20 in the array is arranged in a preset viewing-angle direction, and the shooting ranges of adjacent camera devices 20 intersect. The camera devices 20 shoot the target scene synchronously to form multiple paths of video data; the video data processing device 10 collects each path of video data from the camera devices 20 and processes the collected video data according to the video data processing method provided in the embodiments of the present application, thereby obtaining a free-view video with a viewing-angle effect. A camera device 20 may be a camera, a mobile phone with a video recording function, a notebook computer, or another terminal device.
In other embodiments, the video data processing system may further include a video capture device that is communicatively connected to each camera device 20 and has multiple capture channels for collecting each path of video data. The video capture device is also communicatively connected to the video data processing device 10, in a wired or wireless manner, and transmits the captured video data to the video data processing device 10.
In order to explain the technical solution of the video data processing method of the present application, the following description is made by using specific embodiments.
Referring to fig. 2, it is a schematic diagram of an implementation flow of a video data processing method provided in the second embodiment of the present application, where the method includes:
s21, responding to a video generation instruction, wherein the video generation instruction comprises template identification information and target time information, and acquiring template parameter information of a corresponding target template from a preset template information base according to the template identification information, wherein the template parameter information comprises search interval information and sequencing information;
in the present embodiment, the video data processing apparatus 10 responds to a video generation instruction. The video generation instruction is generated by triggering at a target moment based on a currently played video by a user. The currently played video is one path of video data in the multiple paths of video data shot by the camera array. The device for playing the currently played video may be the video data processing device 10, or may be another playing device. The video data processing apparatus 10 will be described by taking as an example the case where it plays a currently playing video.
Before step S21, the method further includes: playing the video data from a preset acquisition path. The video data of the preset acquisition path is the video data of the currently played video; for example, the acquisition path of the camera device 20 at the middle position of the camera array may serve as the preset acquisition path. Specifically, the video data processing device 10 acquires video data in real time from the preset acquisition path and plays it, either live or after shooting has finished.
Optionally, before step S21, the method further includes: S21a, receiving a template-making instruction, wherein the template-making instruction comprises template parameter information; S21b, generating template identification information uniquely corresponding to the template-making instruction; and encapsulating the template identification information and the template parameter information and storing them in the preset template information base. In this embodiment, the video data processing device 10 includes a template editing module that provides a template to be edited; the module receives template parameters entered by the user in the preset format of the template to be edited, generates template identification information uniquely corresponding to the template, and finally stores the template identification information and the template parameter information in the preset template information base in one-to-one correspondence. The template parameter information includes search interval information and sequencing information; the search interval information indicates the acquisition paths, or the range of camera device 20 numbers, of the video data the video data processing device 10 needs to acquire. For example, if the search interval information is cameras No. 0 to No. 10, the 11 paths of video data from cameras No. 0 to No. 10 are acquired. The search interval information may also be an acquisition-path interval, such as data acquisition interfaces No. 1 to No. 11. The sequencing information orders the frames according to a preset visual direction, which may be the direction of increasing (or decreasing) camera number, or of increasing (or decreasing) data acquisition interface number. As an example, FIG. 3 is a template diagram in which the camera number in the start information is 0 and the camera number in the end information is 10; the search interval is then cameras No. 0 to No. 10, ordered from camera No. 0 to camera No. 10.
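Steps S21a and S21b can be sketched as follows. This is a hypothetical in-memory stand-in for the preset template information base; the field names (start_camera, end_camera) mirror the FIG. 3 template but are illustrative only:

```python
import uuid

TEMPLATE_DB = {}  # stand-in for the preset template information base

def register_template(template_params: dict) -> str:
    """Generate template identification information uniquely corresponding to
    this template-making instruction and store it with the parameter
    information in one-to-one correspondence."""
    template_id = uuid.uuid4().hex  # unique template identification information
    TEMPLATE_DB[template_id] = dict(template_params)
    return template_id
```

For example, register_template({"start_camera": 0, "end_camera": 10}) registers a template whose search interval is cameras No. 0 to No. 10.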
In other embodiments, the template may be created and stored on other devices, and only the preset template information base needs to be synchronously stored on the video data processing device 10.
When the user triggers a video generation instruction at a target moment during the currently played video, the video data processing device 10 automatically acquires the template identification information of the template the user selected for reference, together with the target time information, and encapsulates them to form the video generation instruction. After responding to the video generation instruction, the video data processing device 10 acquires the template parameter information corresponding to the template identification information from the preset template information base and parses it to obtain the search interval information and the sequencing information.
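The encapsulation of the video generation instruction can be sketched as below. The dictionary layout and field names are assumptions for illustration; the patent does not specify a wire format:

```python
def make_video_generation_instruction(template_id, target_time, center=None):
    """Pack the fields the device encapsulates when the user triggers a video
    generation instruction at the target moment."""
    instruction = {"template_id": template_id, "target_time": target_time}
    if center is not None:
        # optional center coordinate information (the user-selected attention point)
        instruction["center"] = center
    return instruction
```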
Preferably, the video data corresponding to the currently played video is one of the paths of video data covered by the search interval information; more preferably, it is the first path of video data in the search interval, so that the start of the target video is at the same viewing angle as the target moment the user selected, improving the user's viewing experience and the visual effect.
S22, searching at least one path of target video data according to the search interval information, and acquiring a target frame image corresponding to the target time information in the target video data;
the number of video data in the search space information may be 1 or multiple. Preferably, the search interval information includes a plurality of channels of video data to provide a certain visual effect. After the video data processing device 10 acquires the search interval information, at least one path of target video data is searched according to the search interval information, and the searched target video data is acquired. As an example, corresponding header file information is determined from the image pickup apparatus 20 number information contained in the search section information, and target video data is searched for from the header file information. The video data information of each camera device 20 includes header file information, a frame header, and frame data, where the header file information is used to bind the camera devices 20 and may be the numbers of the camera devices 20. The frame header includes frame sequence information and frame size information, where the frame sequence information indicates a frame time of the frame in the video data, and the frame time may be time or a sequence number. In a preferred embodiment, the header information corresponds to the camera device 20 number to speed up the search process.
The target frame image in the target video data is acquired according to the target time information. Specifically, the frame data at the corresponding time in the target video data is decoded according to the target time information to obtain the target frame image; frame data within the video data can be decoded independently using a decoding technique. The target time information may be time information or sequence-number information. Note that because shooting is synchronous, the video data each camera device 20 records over the shooting period contains an equal number of frame images, and the frame images of the respective videos at the same moment are continuous in the visual direction. When the search interval information covers N paths of video data, N target frame images are obtained, and the frame sequence information of each target frame image within its respective video data is the same.
Optionally, the template parameter information includes a time difference value, which is the difference between the occurrence time of the target frame image and the target time corresponding to the target time information; here the occurrence time of the target frame image refers to its sequence position within the target video data. Illustratively, referring to FIG. 5, which is a detailed flowchart of step S22, step S22 includes: S221': searching at least one path of target video data according to the search interval information; S222': computing the target frame time, which is the time corresponding to the target time information plus the time difference; S223': acquiring the target frame image corresponding to the target frame time in the target video data. In this example, the target frame image is taken some time units before or after the target time in the target video data, which can satisfy user needs in different situations and yield the video the user wants.
The time difference value may be negative, zero, or positive. A positive time difference indicates that the occurrence time of the target frame image is after the target time; a negative value indicates that it is before the target time. For example, if the target time is 20 min and the time difference is 100 ms, the frame image located at (20 min + 100 ms) is searched for in each path of target video data; likewise, if the target time is the 800th frame and the time difference is -10 frames, the frame image at (800 frames - 10 frames) is searched for in each path of target video data.
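Step S222'/S223' amounts to an index offset, sketched below with frame times treated as zero-based list indices (an assumption for illustration; the patent allows timestamps as well):

```python
def frame_at(frames, target_index, delta=0):
    """Target frame time = target time + time difference. delta may be
    negative, zero, or positive, as described above."""
    i = target_index + delta
    if not 0 <= i < len(frames):
        raise IndexError("target frame time lies outside the recording")
    return frames[i]
```

Mirroring the example above, frame_at(stream, 800, -10) fetches the frame at position (800 - 10) in a stream.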
And S23, sequencing the target frame images according to the sequencing information, and coding and rendering to generate a target video.
The sequencing information is obtained by parsing the template parameter information, and its form of expression is not limited. Optionally, the direction from the start information to the end information defines the sorting order of the template. Optionally, sorting is performed according to a preset visual direction.
After the target frame images are sequenced, they are encoded and rendered to form the target video.
Optionally, the template parameter information further includes scaling information, which may be set individually for each camera or uniformly for all cameras. As an example, FIG. 4 is another template diagram in which scaling information is set for each camera individually; setting input items for intermediate cameras are added via an add button. The video data processing device 10 scales the target frame image of the corresponding video stream according to the scaling information; the scaling setting helps enhance visual effects such as the picture being enlarged step by step or reduced step by step.
Further, the template parameter information may also include center coordinate information. The center coordinate information is selected by the user from the frame picture played at the target moment of the currently played video, so as to mark the object of attention in that frame. For example, if the frame picture at the target moment shows a badminton player leaping at the net for a smash, and the user wants to watch a free-view video of the player at the moment of the smash, the user can place the center coordinate on the player, indicating that the subsequently acquired target frame images should be rectified around that center coordinate to obtain a free-view video centered on the object of attention. When the user selects the center coordinate, the video data processing device 10 acquires its coordinate information to generate the center coordinate information and encapsulates it in the video generation instruction.
Illustratively, referring to FIG. 6, which is a detailed flowchart of step S23, step S23 may further include: S231: rectifying and zooming each target frame image according to the center coordinate information and the scaling information, respectively, to obtain preprocessed images; S232: sorting the preprocessed images according to the sequencing information; S233: encoding and rendering the sorted preprocessed images to generate the target video. The order of steps S231 and S232 may be exchanged. In this example, after each target frame image is obtained, it is scaled and rectified according to the scaling information and center coordinate information corresponding to it; the scaling information corresponding to each target frame image is that of its camera. The order of the rectification and scaling operations is not limited. Preferably, each target frame image is rectified first and the rectified image is then zoomed to obtain the preprocessed image; the preprocessed images are then sorted, encoded, and rendered to form the free-view video. In other embodiments, rectification and scaling may be performed after sorting.
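Steps S231-S233 can be sketched as below. To stay self-contained, each "frame" is a single (x, y) point standing in for an image; a real implementation would warp and resample pixel data, and the function names are illustrative:

```python
def preprocess(frame_xy, center, scale):
    """Rectify then zoom: translate so the user-selected center coordinate
    becomes the origin, then apply the per-camera scaling ratio."""
    (x, y), (cx, cy) = frame_xy, center
    return ((x - cx) * scale, (y - cy) * scale)

def build_preprocessed_sequence(frames, centers, scales, descending=False):
    """S231: rectify/zoom each target frame image; S232: sort per the
    sequencing information. S233 (encoding/rendering) is omitted."""
    pre = [preprocess(f, c, s) for f, c, s in zip(frames, centers, scales)]
    return list(reversed(pre)) if descending else pre
```

Passing increasing scales across the interval (e.g. 1.0, 1.1, 1.2, ...) would give the step-by-step enlargement effect mentioned above.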
Optionally, the template parameter information further includes a frame repetition value, i.e., the number of times each target frame image is repeated in the arrangement. Exemplarily, step S232 includes: duplicating each preprocessed image the number of times corresponding to the frame repetition value to obtain preprocessed image groups; and sorting the preprocessed image groups according to the sorting information. Correspondingly, step S233 includes: encoding and rendering the sorted preprocessed image groups to generate the target video. In this example, setting a frame repetition value helps improve the picture effect of the target video, for example by slowing the apparent motion.
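A minimal sketch of step S232 with a frame repetition value, assuming the preprocessed images are held in a list; the function name is illustrative, not from the patent:

```python
def group_and_sort(preprocessed, repeat, order):
    # Duplicate each preprocessed image `repeat` times to form its group
    # (step S232), arrange the groups by the sorting information, and
    # flatten the result for encoding and rendering (step S233).
    groups = [[img] * repeat for img in preprocessed]
    ordered = [groups[i] for i in order]
    return [img for group in ordered for img in group]
```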
According to the embodiments of the present application, based on preset templates, a user can quickly generate a desired video file simply by selecting the template to be used and inputting the center coordinate information and the like. The user can thus produce the video file without a professional background, and only simple operations are needed to generate it quickly.
Further, the video data processing method provided in the above embodiments further includes: pausing the currently played video at the target moment. Correspondingly, after the target video is generated by encoding and rendering, the method further includes: playing the target video, and resuming the currently played video after the target video finishes. Optionally, the target video is played in a small window, and the currently played video resumes after the target video ends. Optionally, after the target video is generated, the currently played video is divided into a front segment and a rear segment at the target moment; the front segment, the target video and the rear segment are spliced into a new video, and the new video is played from the target moment. Linking the playback of the generated target video with the currently played video helps provide the user with a sense of on-site presence and a timely viewing experience.
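The optional splicing behavior can be sketched as follows, assuming the currently played video is available as a list of frames and `target_index` marks the target moment; names are illustrative:

```python
def splice_at_target(current_frames, target_frames, target_index):
    # Divide the currently played video into front and rear segments at the
    # target moment, and insert the generated target video between them.
    front = current_frames[:target_index]
    rear = current_frames[target_index:]
    return front + target_frames + rear
```

Playback would then resume from `target_index`, so the viewer sees the free-view clip exactly at the moment it depicts.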
As a specific application example of the embodiments of the present application, refer to fig. 7, which is a schematic structural diagram of a template provided in an embodiment of the present application. In the template shown in fig. 7, the search interval is camera No. 0 to camera No. 9, the zoom ratios are all 100%, the time differences are all 2 ms, the frame repetition number is 1, and the user triggers a video generation instruction at the target moment i based on the video data S0. The video data corresponding to camera No. 0 to camera No. 9 are S0, S1, ..., S9, and Sj(i) denotes the frame image at the i-th moment of the video data Sj. Fig. 8 shows the process by which the video data processing device 10 generates the target video Z according to the template shown in fig. 7, specifically: the video data processing device acquires the video data S0, S1, ..., S9; decodes the frame image at the moment i-2 in each of S0, S1, ..., S9; copies each frame image once; sorts the frame images according to the viewing directions from S0 to S9; and encodes and renders them to generate the target video Z.
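Under the assumptions of the fig. 7 example, the frame-selection step can be sketched as follows, modeling the ten paths of video as lists of labeled frames and applying a uniform time difference to the target moment (the description defines the target frame time as the target time plus the time difference); all names are illustrative:

```python
def build_free_view_sequence(videos, target_time, time_difference, repeat):
    # Decode the frame at (target time + time difference) from each path of
    # video data, duplicate it `repeat` times, and concatenate the frames
    # in camera order to form the free-view sequence.
    t = target_time + time_difference
    sequence = []
    for v in videos:  # videos are assumed pre-sorted S0..S9
        sequence.extend([v[t]] * repeat)
    return sequence

# Ten synchronized paths S0..S9, each modeled as labeled frames Sj(i).
videos = [[f"S{j}({i})" for i in range(10)] for j in range(10)]
frames = build_free_view_sequence(videos, 5, 2, 1)
```

Encoding `frames` in order then yields a clip that sweeps the viewpoint from camera 0 to camera 9 around the chosen moment.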
Referring to fig. 9, a video data processing method according to another embodiment of the present application includes steps S31 to S36, where step S31 is the same as step S21, step S32 is the same as step S22, and step S36 is the same as step S23; the same parts are not repeated herein. The differing steps S33 to S35 are as follows:
S33, searching for a target group file in the target video data according to the group file information;
S34, searching for target frame data in the target group file according to the frame time information;
S35, decoding the target frame data to obtain the target frame image.
In this embodiment, in order to further speed up video production, the video data is stored in a specific format. Specifically, the multi-frame data included in one path of video is grouped to obtain a plurality of group files, and the video data is encapsulated from these group files. Fig. 10 is a schematic diagram of the storage process of one path of video file, in which the frame header records the frame time information, and the frame data within each group file are sorted in generation order. The encapsulation information includes header file information, group file information, and the frame time information of each frame data in the group file. The header file information is used to locate the video data; it is determined from the search interval information and may be the serial number information or the acquisition path information of the camera device. The target time information includes group file information and frame time information, so the video data processing device 10 first finds the group file in which the target frame image is located according to the group file information, and then determines the target frame image within that group file according to the frame time information.
It can be understood that the paths of video data belonging to the same camera array are grouped by the same rule, for example, every 800 frames per group.
According to the technical solution of this embodiment, after the video data processing device 10 locates the target video, it first searches for the group file and then for the frame data within that group file, which speeds up the search process and reduces the computing load on the processor.
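The two-stage lookup can be sketched as a simple index computation, assuming a fixed group size such as the 800 frames per group mentioned above; names are illustrative:

```python
GROUP_SIZE = 800  # frames per group file, as in the example above

def locate_frame(frame_index, group_size=GROUP_SIZE):
    # Map a global frame index to (group file index, offset within the
    # group), so only one group file needs to be opened and scanned.
    return frame_index // group_size, frame_index % group_size
```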
Referring to fig. 11, which is a schematic diagram of a video data processing apparatus 9 provided in an embodiment of the present application, the video data processing apparatus 9 includes units for performing the steps in the corresponding method embodiment above; please refer to the related description of that embodiment. As shown in fig. 11, the video data processing apparatus 9 includes:
the instruction obtaining module 91 is configured to respond to a video generation instruction, where the video generation instruction includes template identification information and target time information, and obtain template parameter information of a corresponding target template from a preset template information base according to the template identification information, where the template parameter information includes search interval information and sorting information;
a data obtaining module 92, configured to search for at least one path of target video data according to the search interval information; acquiring a target frame image corresponding to the target time information in the target video data;
and the video production module 93 is configured to sort the target frame images according to the sorting information, and encode and render the target frame images to generate a target video.
Further, the data obtaining module 92 is further configured to determine corresponding header file information according to the camera device number information included in the search interval information, and search for target video data according to the header file information; and decoding frame data at the corresponding moment in the target video data according to the target moment information to obtain the target frame image.
Further, the data obtaining module 92 is further configured to search a target group file in the target video data according to the group file information; searching target frame data in the target group file according to the frame time information; and decoding the target frame data to obtain the target frame image.
Further, the video production module 93 includes a pre-processing module 931 and a post-production module 932,
the preprocessing module 931 is configured to correct and scale the target frame image according to the center coordinate information and the scaling information, to obtain preprocessed images, and to sort the preprocessed images according to the sorting information;
the post-production module 932 is configured to encode and render the sorted preprocessed images to generate a target video.
Further, the preprocessing module 931 is further configured to duplicate each preprocessed image the number of times corresponding to the frame repetition value to obtain preprocessed image groups, and to sort each preprocessed image group according to the sorting information;
the post-production module 932 is further configured to encode and render the sorted preprocessed image group to generate the target video.
Further, the data acquisition module 92 includes an analysis sub-module 921 and an acquisition sub-module 922,
the analysis submodule 921 is configured to analyze a target frame time, where the target frame time is a time corresponding to the target time information plus the time difference;
the obtaining sub-module 922 is configured to obtain the target frame image corresponding to the target frame time in the target video data.
Further, the video data processing apparatus 9 further includes a template making module 94,
the instruction obtaining module 91 is further configured to receive a template making instruction, where the template making instruction includes template parameter information;
and the template making module 94 is configured to encapsulate the template identification information and the template parameter information and store them in a preset template information base.
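The template making flow (receive parameter information, generate unique identification, encapsulate, store) might be sketched as follows; the dict-based "template information base" and all names are illustrative assumptions, not the patent's implementation:

```python
import uuid

def make_template(template_base, template_params):
    # Generate identification uniquely corresponding to this template making
    # instruction, then encapsulate the id with the parameter information
    # and store it in the preset template information base.
    template_id = uuid.uuid4().hex
    template_base[template_id] = dict(template_params)
    return template_id
```

A later video generation instruction would then carry this `template_id` so the device can fetch the search interval and sorting information back from the base.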
The function implementation of each module in the video data processing apparatus 9 corresponds to each step in the video data processing method embodiment, and the function and implementation process thereof are not described in detail here.
Referring to fig. 12, fig. 12 is a schematic diagram of a hardware structure of a video data processing device 10 according to an embodiment of the present application. As shown in fig. 12, the video data processing device 10 of this embodiment includes: a processor 100, a memory 101, and a computer program 102, such as a video data processing program, stored in the memory 101 and executable on the processor 100. When executing the computer program 102, the processor 100 implements the steps in the above video data processing method embodiments, such as steps S21 to S23 shown in fig. 2, or the functions of the modules/units in the above apparatus embodiments, such as modules 91 to 93 shown in fig. 11.
Illustratively, the computer program 102 may be partitioned into one or more modules/units that are stored in the memory 101 and executed by the processor 100 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 102 in the video data processing device 10. For example, the computer program 102 may be divided into an instruction acquisition module, a data acquisition module, and a video production module (modules in a virtual device), whose specific functions are as follows:
the instruction acquisition module is used for responding to a video generation instruction, wherein the video generation instruction comprises template identification information and target time information, and acquiring template parameter information of a corresponding target template from a preset template information base according to the template identification information, and the template parameter information comprises search interval information and sequencing information;
the data acquisition module is used for searching at least one path of target video data according to the search interval information; acquiring a target frame image corresponding to the target time information in the target video data;
and the video making module is used for sequencing the target frame images according to the sequencing information and generating a target video through coding and rendering.
The video data processing device 10 may be a desktop computer, a notebook computer, a palmtop computer, a cloud transaction management platform, or another computing device. The video data processing device 10 may include, but is not limited to, the processor 100 and the memory 101. Those skilled in the art will appreciate that fig. 12 is merely an example of the video data processing device 10 and does not constitute a limitation thereof; the device may include more or fewer components than shown, combine certain components, or use different components. For example, the video data processing device 10 may also include input/output devices, network access devices, buses, and the like.
The processor 100 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 101 may be an internal storage unit of the video data processing device 10, such as a hard disk or memory of the video data processing device 10. The memory 101 may also be an external storage device of the video data processing device 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the terminal device. Further, the memory 101 may include both an internal storage unit and an external storage device of the video data processing device 10. The memory 101 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow in the methods of the above embodiments may be implemented by a computer program, which may be stored in a readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, executable file form, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application, and are intended to be included within the scope of the present application.

Claims (8)

1. A method of processing video data, comprising:
responding to a video generation instruction, wherein the video generation instruction comprises template identification information and target time information, and acquiring template parameter information of a corresponding target template from a preset template information base according to the template identification information, wherein the template parameter information comprises search interval information and sequencing information;
searching at least one path of target video data according to the search interval information, and acquiring a target frame image corresponding to the target moment information in the target video data;
sequencing the target frame images according to the sequencing information, and coding and rendering to generate a target video;
the video generation instruction comprises center coordinate information, and the template parameter information comprises scaling information; the sorting the target frame images according to the sorting information and encoding and rendering to generate a target video comprises:
correcting and scaling the target frame images according to the center coordinate information and the scaling information, respectively, to obtain preprocessed images;
sorting the preprocessed images according to the sorting information;
encoding and rendering the sorted preprocessed images to generate the target video;
the template parameter information further comprises a frame repetition value, and the sorting the preprocessed images according to the sorting information comprises:
duplicating each preprocessed image the number of times corresponding to the frame repetition value to obtain preprocessed image groups;
sorting the preprocessed image groups according to the sorting information;
correspondingly, the encoding and rendering the sorted preprocessed images to generate a target video comprises:
encoding and rendering the sorted preprocessed image groups to generate the target video.
2. The video data processing method according to claim 1, wherein said searching for at least one path of target video data according to the search interval information, and acquiring a target frame image corresponding to the target time information in the target video data, comprises:
determining corresponding header file information according to the camera equipment number information contained in the search interval information, and searching target video data according to the header file information;
and decoding frame data at the corresponding moment in the target video data according to the target moment information to obtain the target frame image.
3. The video data processing method according to claim 1, wherein the target time information includes group file information and frame time information, and the acquiring a target frame image corresponding to the target time information in the target video data includes:
searching a target group file in the target video data according to the group file information;
searching target frame data in the target group file according to the frame time information;
and decoding the target frame data to obtain the target frame image.
4. The method for processing video data according to any of claims 1 to 3, wherein the template parameter information further includes a time difference value, the time difference value is a difference value between an occurrence time of the target frame image and a target time corresponding to the target time information, and the obtaining the target frame image corresponding to the target time information in the target video data includes:
analyzing a target frame time, wherein the target frame time is the time corresponding to the target time information plus the time difference;
and acquiring the target frame image corresponding to the target frame time in the target video data.
5. The video data processing method according to any one of claims 1 to 3, wherein the responding to the video generation instruction is preceded by:
receiving a template making instruction, wherein the template making instruction comprises template parameter information;
generating template identification information uniquely corresponding to the template making instruction;
and packaging the template identification information and the template parameter information, and storing the template identification information and the template parameter information in a preset template information base.
6. A video data processing apparatus, comprising:
the instruction acquisition module is used for responding to a video generation instruction, wherein the video generation instruction comprises template identification information and target time information, and acquiring template parameter information of a corresponding target template from a preset template information base according to the template identification information, and the template parameter information comprises search interval information and sequencing information;
the data acquisition module is used for searching at least one path of target video data according to the search interval information; acquiring a target frame image corresponding to the target time information in the target video data;
the video making module is used for sequencing the target frame images according to the sequencing information and generating a target video through coding and rendering;
the video generation instruction comprises center coordinate information, the template parameter information comprises scaling information, and the video production module comprises a preprocessing module and a post-production module;
the preprocessing module is used for respectively correcting and scaling the target frame image according to the central coordinate information and the scaling information to obtain a preprocessed image and sequencing the preprocessed image according to the sequencing information;
the post-production module is used for coding and rendering the sequenced preprocessed images to generate a target video;
the template parameter information further comprises a frame repetition value;
the preprocessing module is further configured to duplicate each preprocessed image the number of times corresponding to the frame repetition value to obtain preprocessed image groups, and to sort each preprocessed image group according to the sorting information;
and the post-production module is also used for coding and rendering the sequenced preprocessed image group to generate the target video.
7. A video data processing device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 5 when executing the computer program.
8. A readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
CN202010265441.XA 2020-04-07 2020-04-07 Video data processing method, system, device, equipment and readable storage medium Active CN111475676B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010265441.XA CN111475676B (en) 2020-04-07 2020-04-07 Video data processing method, system, device, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010265441.XA CN111475676B (en) 2020-04-07 2020-04-07 Video data processing method, system, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111475676A CN111475676A (en) 2020-07-31
CN111475676B true CN111475676B (en) 2023-03-24

Family

ID=71750129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010265441.XA Active CN111475676B (en) 2020-04-07 2020-04-07 Video data processing method, system, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111475676B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113542790A (en) * 2020-09-04 2021-10-22 张义满 Remote, quick and convenient instant film production method for children growing micro-film
CN112948628A (en) * 2021-03-25 2021-06-11 智道网联科技(北京)有限公司 Internet of vehicles data processing method, device, equipment and storage medium
CN115150563A (en) * 2021-03-31 2022-10-04 华为技术有限公司 Video production method and system
CN114125556B (en) * 2021-11-12 2024-03-26 深圳麦风科技有限公司 Video data processing method, terminal and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104581354A (en) * 2013-10-25 2015-04-29 腾讯科技(深圳)有限公司 Video buffering method and video buffering device
WO2018076952A1 (en) * 2016-10-24 2018-05-03 杭州海康威视数字技术股份有限公司 Method and apparatus for storage and playback positioning of video file
CN109697245A (en) * 2018-12-05 2019-04-30 百度在线网络技术(北京)有限公司 Voice search method and device based on video web page

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090011070A (en) * 2007-07-25 2009-02-02 삼성전자주식회사 Video processing apparatus and mobile apparatus and control method of video processing apparatus

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104581354A (en) * 2013-10-25 2015-04-29 腾讯科技(深圳)有限公司 Video buffering method and video buffering device
WO2018076952A1 (en) * 2016-10-24 2018-05-03 杭州海康威视数字技术股份有限公司 Method and apparatus for storage and playback positioning of video file
CN109697245A (en) * 2018-12-05 2019-04-30 百度在线网络技术(北京)有限公司 Voice search method and device based on video web page

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and Research of a Video Management Information System for Enterprise Websites; Ma Li et al.; Electric Power Information and Communication Technology; 2014-11-15 (No. 11); full text *

Also Published As

Publication number Publication date
CN111475676A (en) 2020-07-31

Similar Documents

Publication Publication Date Title
CN111475676B (en) Video data processing method, system, device, equipment and readable storage medium
CN111475675B (en) Video processing system
CN109618222B (en) A kind of splicing video generation method, device, terminal device and storage medium
WO2022001593A1 (en) Video generation method and apparatus, storage medium and computer device
CN109635621B (en) System and method for recognizing gestures based on deep learning in first-person perspective
CN113453040B (en) Short video generation method and device, related equipment and medium
CN109168026A (en) Instant video display methods, device, terminal device and storage medium
CN112291627A (en) Video editing method and device, mobile terminal and storage medium
CN107870999B (en) Multimedia playing method, device, storage medium and electronic equipment
EP3917131A1 (en) Image deformation control method and device and hardware device
CN111757175A (en) Video processing method and device
US20230368461A1 (en) Method and apparatus for processing action of virtual object, and storage medium
CN112995418B (en) Video color ring playing method, sending method and related equipment
CN111491187A (en) Video recommendation method, device, equipment and storage medium
CN111770386A (en) Video processing method, video processing device and electronic equipment
JP2017098957A (en) Method for generating user interface presenting videos
CN112950640A (en) Video portrait segmentation method and device, electronic equipment and storage medium
CN111491208A (en) Video processing method and device, electronic equipment and computer readable medium
CN111583348A (en) Image data encoding method and device, display method and device, and electronic device
CN113010738B (en) Video processing method, device, electronic equipment and readable storage medium
CN112906553B (en) Image processing method, apparatus, device and medium
US10924637B2 (en) Playback method, playback device and computer-readable storage medium
CN101924847A (en) Multimedia playing device and playing method thereof
CN111818364B (en) Video fusion method, system, device and medium
CN112738423B (en) Method and device for exporting animation video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant