CN111343502B - Video processing method, electronic device and computer readable storage medium - Google Patents


Info

Publication number
CN111343502B
CN111343502B (application CN202010241528.3A)
Authority
CN
China
Prior art keywords
video, video file, files, preset, file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010241528.3A
Other languages
Chinese (zh)
Other versions
CN111343502A (en)
Inventor
王亚雄
龙喜洋
高宏
Current Assignee
China Merchants Finance Technology Co Ltd
Original Assignee
China Merchants Finance Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Merchants Finance Technology Co Ltd
Priority to CN202010241528.3A
Publication of CN111343502A
Application granted
Publication of CN111343502B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263 Reformatting operations by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H04N21/440272 Reformatting operations by altering the spatial resolution, for performing aspect ratio conversion
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/141 Systems for two-way working between two video terminals, e.g. videophone

Abstract

The invention discloses a video processing method comprising the following steps: pulling video files to be processed; classifying the video files to be processed to obtain a classification result, and determining one or more first video file pairs according to the classification result; normalizing each first video file pair to obtain a second video file pair; determining the two video files in the second video file pair and judging whether they are synchronized; and, if so, synthesizing the two video files into a target video file and storing the target video file in a second preset storage path. The invention also discloses an electronic device and a computer-readable storage medium. The invention improves video processing efficiency and quality.

Description

Video processing method, electronic device and computer readable storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a video processing method, an electronic device, and a computer-readable storage medium.
Background
Current video processing techniques are mainly used for post-processing of video and generally require specific software and manual intervention. Taking the insurance video customer-service scene as an example: because the insurance industry must normalize all customer-service video files for archiving, and the number of such files is huge, labor costs are high, video synthesis is slow, and synthesis efficiency is poor.
Therefore, how to perform video file synthesizing processing with high efficiency and high quality becomes a problem which needs to be solved urgently.
Disclosure of Invention
In view of the foregoing, the present invention provides a video processing method, an electronic device and a computer readable storage medium, which mainly aims to improve video processing efficiency and quality.
To achieve the above object, the present invention provides a video processing method, including:
a pulling step: pulling a video file to be processed from a first preset storage path at preset time intervals;
an analysis step: reading and parsing the name of the video file to be processed to obtain its relevant attribute fields, classifying the video file to be processed according to the relevant attribute fields to obtain a classification result, and determining one or more first video file pairs according to the classification result;
a processing step: normalizing each first video file pair based on a preset normalization rule to obtain a second video file pair;
a judging step: determining the two video files in the second video file pair, and judging whether the two video files are synchronized according to a preset synchronization judgment rule; and
a synthesizing step: when the two video files in the second video file pair are judged to be synchronized, synthesizing the two video files into a target video file and storing the target video file in a second preset storage path.
In addition, to achieve the above object, the present invention also provides an electronic device, including a memory and a processor, wherein the memory stores a video processing program executable on the processor, and the video processing program, when executed by the processor, implements any step of the video processing method described above.
In addition, to achieve the above object, the present invention further provides a computer-readable storage medium storing a video processing program which, when executed by a processor, implements any of the steps of the video processing method described above.
According to the video processing method, the electronic device and the computer-readable storage medium, the name of each video file to be processed is first parsed to determine a first video pair; the first video pair is then normalized to obtain a second video pair; the second video pair undergoes synchronization judgment and processing; and finally the synchronized video files are synthesized. Determining video pairs, normalizing them, and performing synchronization judgment and trimming improve the quality and efficiency of video file synthesis.
Drawings
FIG. 1 is a flow chart of a video processing method according to a preferred embodiment of the present invention;
FIG. 2 is a diagram of an electronic device according to a preferred embodiment of the present invention;
fig. 3 is a schematic diagram of program modules of the video processing program in fig. 2.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides a video processing method. The method may be performed by an apparatus, which may be implemented by software and/or hardware.
Referring to fig. 1, a flow chart of a video processing method according to a preferred embodiment of the invention is shown.
In this embodiment, the video processing method includes: step S1-step S5.
And step S1, pulling the video file to be processed from the first preset storage path at preset time intervals.
The present embodiment is described with an electronic apparatus as an execution subject. The electronic device pulls the video file to be processed from the first preset storage path at preset time intervals so as to perform file splicing synthesis.
The preset time can be adjusted to actual requirements: if the file volume is very large, the interval can be set small, e.g. 1 ms to 1 s; if the file volume is small, the interval can be set larger, e.g. 3 s to 5 s.
The first preset storage path is a storage position which is specified in advance and used for storing the generated video file.
Taking a video call in an insurance video customer-service scene as an example: after the video call ends, two video streams have been recorded, producing two video files, one from the customer-service end and one from the client end.
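The pulling step above can be sketched as a periodic directory scan; this is a minimal illustration under stated assumptions (the `.mp4` extension, directory layout, and function name are not specified by the patent):

```python
import time
from pathlib import Path

def poll_pending_videos(watch_dir, interval_s=1.0, rounds=1):
    """Scan the first preset storage path for pending video files,
    sleeping interval_s between scan rounds (values are illustrative)."""
    seen = []
    for i in range(rounds):
        for f in sorted(Path(watch_dir).glob("*.mp4")):
            if f.name not in seen:
                seen.append(f.name)
        if i < rounds - 1:
            time.sleep(interval_s)
    return seen
```

In a real deployment the interval would follow the tuning rule above (milliseconds under heavy load, seconds otherwise).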
Step S2, reading and analyzing the name of the video file to be processed to obtain the relevant attribute field of the video file to be processed, classifying the video file to be processed according to the relevant attribute field to obtain a classification result, and determining one or more first video file pairs according to the classification result.
It can be understood that the name of the to-be-processed video file pulled from the first preset storage path is determined according to a preset naming rule, and therefore, the name of the video file can be parsed based on the naming rule to determine the category corresponding to each video file. In this embodiment, the preset naming rule includes:
after the video call is finished, acquiring a preset relevant attribute field of the current video call; and
and generating the name of the video file corresponding to the video call based on the preset relevant attribute field and the preset sequence.
For example, the preset relevant attribute fields include the session ID of the video call (unique per call), the call time, the application name, and the user uid. The file name is generated as session + call time (including call start time and call end time) + application name + user uid.
In this embodiment, in the process of classifying the pulled video files, a first attribute field (for example, a session field in the video file name) in the related attribute field of the video file name is obtained, and the video files with the same first attribute field are used as a video file pair, so as to determine one or more first video file pairs.
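The naming-rule parse and session-based grouping can be sketched as follows; the underscore delimiter, field order, and all identifiers are assumptions for illustration, since the patent does not fix a concrete delimiter:

```python
from collections import defaultdict

def parse_name(filename):
    """Split an assumed name like 's1_t0_t1_appA_u1.mp4' into the
    relevant attribute fields (delimiter and order are assumptions)."""
    stem = filename.rsplit(".", 1)[0]
    session, start, end, app, uid = stem.split("_")
    return {"session": session, "start": start, "end": end,
            "app": app, "uid": uid}

def group_by_session(filenames):
    """First-level classification: files sharing the first attribute
    field (session) form one candidate first video file pair."""
    groups = defaultdict(list)
    for name in filenames:
        groups[parse_name(name)["session"]].append(name)
    return dict(groups)
```

Groups with exactly two files become first video file pairs directly; larger groups go through the secondary classification described below.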
It should be noted that there may be only two video files in some classes and there may be more than two video files in some classes in the classification result of the video file classification. The determining one or more first video file pairs according to the classification result comprises:
when the classification result only comprises two video files, directly taking the two video files as a first video file pair; and
when the number of video files in a classification result exceeds two, the video files in that classification result are secondarily classified according to the second attribute field of the file name, the secondarily classified video files are spliced based on a preset video splicing rule to obtain two spliced video files, and those two spliced video files are taken as a first video file pair.
For example, the second attribute field includes a user uid.
It can be understood that, owing to network fluctuation, the same video stream may be written intermittently as several video files, which therefore need to be spliced into one complete video file.
In this embodiment, the preset video stitching rule includes:
respectively acquiring the generation times of the secondarily classified video files, and splicing them in order of generation time from earliest to latest; and
and naming the spliced video file according to a preset naming rule to generate a complete video file.
Further, in order to prevent a situation that a video stitching error occurs, the preset video stitching rule further includes:
judging whether the time of the spliced video file is continuous or not; and
and when the spliced video file is judged to be discontinuous in time, sending out early warning information.
For example, when splicing two video files F1 and F2: the generation times of F1 and F2 are obtained; if sorting by generation time yields the order F2, F1, the end time of F2 and the start time of F1 are obtained (the start time is computed from the generation time and the video duration); whether the two time points are continuous is judged (the time interval must be smaller than a preset interval, e.g. 1 second); if so, the spliced video file is considered correct; if not, exception handling is performed, e.g. warning information is sent to remind an administrator.
Whether the splicing is correct or not is judged by carrying out continuity check on the spliced video files, so that abnormal splicing, such as the condition of file leakage in the middle, can be prevented, and the accuracy of video processing is improved.
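The ordering and continuity check described above can be sketched as follows; the segment tuples and the one-second default gap are illustrative:

```python
def order_segments(segments):
    """segments: list of (name, start_s, duration_s) tuples.
    Sort by start time, i.e. earliest generation first."""
    return sorted(segments, key=lambda s: s[1])

def check_continuity(segments, max_gap_s=1.0):
    """Return False (triggering the early-warning path) when the gap
    between one segment's end and the next segment's start exceeds
    the preset interval."""
    ordered = order_segments(segments)
    for (_, s0, d0), (_, s1, _) in zip(ordered, ordered[1:]):
        if s1 - (s0 + d0) > max_gap_s:
            return False  # discontinuous: a segment is likely missing
    return True
```

A `False` result corresponds to the "send early-warning information" branch rather than silently producing a spliced file with a hole in it.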
And step S3, respectively carrying out standardization processing on the first video file pair based on preset standardization processing rules to obtain a second video file pair.
In this embodiment, the preset normalization processing rule includes:
the resolution of each video file in the first pair of video files is analyzed and compared with a standardized resolution, and the resolution of each video file in the first pair of video files is adjusted based on the comparison result.
It should be noted that, in practical applications, the resolution of a video file is usually reduced. In special cases, for example when the frames showing the user holding an identity document are particularly blurred, the resolution may instead be enhanced using machine-learning techniques, which are not described in detail here.
In other embodiments, the preset normalization processing rule further includes:
respectively reading the aspect ratio of each video file in the first video file pair, comparing the aspect ratio of each video file in the first video file pair with the standardized aspect ratio, and adjusting the aspect ratio of each video file in the first video file pair based on the comparison result.
The standardized resolution and standardized aspect ratio are preset and stored, and can be adjusted to actual requirements during practical application.
In addition to adjusting the resolution and aspect ratio so that files conform to the standard, in other embodiments the preset normalization rule may further include: recognizing the face position in each video file of the one or more video file pairs, adjusting the video picture based on the face position and the aspect ratio, and generating the normalized video file. Face-position recognition is a mature technology and is not described here.
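One way to realize the resolution and aspect-ratio normalization is an FFmpeg `scale` plus `pad` filter chain; the command builder below is a hedged sketch, with the 640x480 standard values and file names as assumptions not taken from the patent:

```python
def normalize_cmd(src, dst, std_w=640, std_h=480):
    """Build an ffmpeg command that scales the input down to fit the
    standardized resolution while preserving its proportions, then pads
    to the standardized aspect ratio (standard values are illustrative)."""
    vf = (f"scale={std_w}:{std_h}:force_original_aspect_ratio=decrease,"
          f"pad={std_w}:{std_h}:(ow-iw)/2:(oh-ih)/2")
    return ["ffmpeg", "-y", "-i", src, "-vf", vf, dst]
```

Running the returned command via `subprocess.run` would perform the actual normalization; building the argument list separately keeps the rule itself testable.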
Step S4, determining two video files in the second video file pair, and determining whether the two video files are synchronized according to a preset synchronization determination rule.
In this embodiment, the determining whether the two video files are synchronized according to a preset synchronization determination rule includes:
respectively acquiring video generation time and video duration of two video files in the second video file pair, and calculating video starting time and time difference of the two video files in the second video file pair; and
and judging whether the time difference degree is smaller than a preset threshold value, if so, judging that the two video files are synchronous, and if not, judging that the two video files are asynchronous.
The degree of start-time difference between two video files can be either the interval between their start times or that interval as a proportion of the total video duration. A certain start-time difference between the two videos is tolerable as long as it is controlled; an excessive difference causes severe desynchronization, so a smaller preset threshold is better. Both the difference measure and the preset threshold can be adjusted to actual requirements in practical use.
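The synchronization judgment can be sketched as follows, using the absolute start-time interval as the difference measure; the start time is derived from generation time minus duration, as noted in the splicing check, and the 0.5 s threshold is an illustrative assumption:

```python
def is_synchronized(gen0, dur0, gen1, dur1, threshold_s=0.5):
    """gen*: generation (end) times in seconds; dur*: durations.
    Start time = generation time minus duration. The pair counts as
    synchronous when the start-time difference is below the threshold
    (threshold value is an assumption, not from the patent)."""
    diff = abs((gen0 - dur0) - (gen1 - dur1))
    return diff < threshold_s, diff
```

The returned difference is what the asynchronous branch would use as the trim offset.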
And step S5, when the two video files in the second video file pair are judged to be synchronous, synthesizing the two video files to obtain a target video file, and storing the target video file into a second preset storage path.
In the process of synthesizing the two video files, naming the synthesized target video file by "session + start time + end time + application name".
The second preset storage path is a predetermined storage location for storing the synthesized target video file meeting the requirement.
To save storage space, intermediate-process videos, such as temporary video files ending with ".part1" or ".part2", are cleaned up after composition is complete.
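A hedged sketch of the synthesis and cleanup steps: the patent does not specify the merge layout, so a side-by-side `hstack` is assumed here (audio mapping omitted for brevity), while the output name follows the stated "session + start time + end time + application name" pattern:

```python
import os

def compose_cmd(left, right, session, start, end, app):
    """Build an ffmpeg command merging the two synchronized streams
    side by side (layout is an assumption); the target file name
    follows the session + start + end + app naming rule."""
    out = f"{session}_{start}_{end}_{app}.mp4"
    cmd = ["ffmpeg", "-y", "-i", left, "-i", right,
           "-filter_complex", "[0:v][1:v]hstack=inputs=2[v]",
           "-map", "[v]", out]
    return cmd, out

def clean_intermediates(paths):
    """Remove temporary '.part1'/'.part2' files once composition is done."""
    for p in paths:
        if (".part1" in p or ".part2" in p) and os.path.exists(p):
            os.remove(p)
```

Picture-in-picture or vertical stacking would only change the `filter_complex` expression; the naming and cleanup logic stay the same.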
If all steps complete normally, verification is usually unnecessary; the customer-service system administrator may randomly sample a certain proportion of files for inspection each week.
In other embodiments, the video processing method further comprises:
when the two video files are judged to be asynchronous, acquiring the starting time and the time difference of the two video files;
determining a video file to be cut in the two video files based on the starting time, and determining a cutting time point corresponding to the video file to be cut based on the time difference; and
and cutting the video file to be cut according to the cutting time point, and generating a synchronous second video file pair based on the cut video file.
In this embodiment, a video file with an earlier start time is taken as a video file to be cut.
After a video file is trimmed, its name needs to be updated, for example by adding a suffix "${session}.part1" or "${session}.part2".
The video splicing and trimming in this embodiment use the open-source FFmpeg tool.
Trimming yields a synchronized video file pair, after which the file synthesis operation is executed.
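Since the description names FFmpeg for splicing and trimming, the cut of the earlier-started file can be sketched as a stream-copy seek; the offset formatting and file names are illustrative:

```python
def trim_cmd(src, dst, offset_s):
    """Build an ffmpeg command that drops the first offset_s seconds
    of the earlier-started file so both files begin together.
    '-ss' before '-i' with '-c copy' avoids re-encoding but can only
    cut at keyframes, so the cut point is approximate."""
    return ["ffmpeg", "-y", "-ss", f"{offset_s:.3f}", "-i", src,
            "-c", "copy", dst]
```

For frame-accurate cuts one would re-encode instead of stream-copying, trading speed for precision.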
In other embodiments, the video processing method further comprises:
receiving and responding to an access/query/deletion instruction sent by a user, and displaying/deleting a video file related to the instruction;
and receiving and responding to a modification instruction sent by a user, modifying the video related to the instruction according to a modification coefficient in the user modification instruction, and storing the modified result.
The steps of this embodiment are decoupled and independently deployed to form a modular video processing chain that can process the input stream of video files in parallel, so the pipeline does not block, CPU utilization rises, and processing efficiency improves.
In the video processing method provided by the above embodiment, the name of each video file to be processed is first parsed to determine a first video pair; the first video pair is normalized to obtain a second video pair; the second video pair undergoes synchronization judgment and processing; and finally the synchronized video files are synthesized. Determining video pairs by parsing, normalizing them, and performing synchronization judgment and trimming improve the quality and efficiency of video file synthesis.
The invention also provides an electronic device 1. Fig. 2 is a schematic diagram of an electronic device 1 according to a preferred embodiment of the invention.
In this embodiment, the electronic device 1 may be a server, a smart phone, a tablet computer, a portable computer, a desktop computer, or other terminal equipment with a data processing function, and the server may be a rack server, a blade server, a tower server, or a cabinet server.
The electronic device 1 includes a memory 11, a processor 12, and a network interface 13.
The memory 11 includes at least one type of readable storage medium, which includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1.
The memory 11 may also be an external storage device of the electronic apparatus 1 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic apparatus 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic apparatus 1.
The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as the video processing program 10, but also to temporarily store data that has been output or is to be output.
The processor 12 may in some embodiments be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data-processing chip for executing program code stored in the memory 11 or processing data, such as the video processing program 10.
The network interface 13 may optionally comprise a standard wired interface, a wireless interface (e.g. WI-FI interface), typically used for establishing a communication connection between the electronic apparatus 1 and other electronic devices, such as a client (not shown in the figure).
Fig. 2 shows the electronic device 1 only with components 11-13; a person skilled in the art will understand that the structure shown in fig. 2 does not limit the electronic device 1, which may include fewer or more components than shown, combine certain components, or arrange components differently.
Optionally, the electronic device 1 may further comprise a user interface, the user interface may comprise a Display (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface may further comprise a standard wired interface, a wireless interface.
Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch screen, or the like. The display, which may also be referred to as a display screen or display unit, is used for displaying information processed in the electronic apparatus 1 and for displaying a visualized user interface.
In the embodiment of the electronic device 1 shown in fig. 2, the memory 11 as a kind of computer storage medium stores the program code of the video processing program 10, and when the processor 12 executes the program code of the video processing program 10, the following steps are implemented:
and a pulling step, namely pulling the video file to be processed from the first preset storage path at preset time intervals.
The electronic device 1 pulls the video file to be processed from the first preset storage path at preset time intervals to perform file splicing synthesis.
The preset time can be adjusted according to actual requirements, if the file amount is very large, the preset time can be set to be a small interval, for example, 1ms to 1s, and if the file amount is small, the preset time can be set to be a large interval, for example, 3s to 5 s.
The first preset storage path is a storage position which is specified in advance and used for storing the generated video file.
Taking a video call in an insurance video customer service scene as an example, two paths of videos are generated after the video call is finished, namely two paths of video files are generated: one path is a video file of the customer service end, and the other path is a video file of the client end.
And an analysis step, namely reading and analyzing the name of the video file to be processed to obtain the related attribute field of the video file to be processed, classifying the video file to be processed according to the related attribute field to obtain a classification result, and determining one or more first video file pairs according to the classification result.
It can be understood that the name of the to-be-processed video file pulled from the first preset storage path is determined according to a preset naming rule, and therefore, the name of the video file can be parsed based on the naming rule to determine the category corresponding to each video file. In this embodiment, the preset naming rule includes:
after the video call is finished, acquiring a preset relevant attribute field of the current video call; and
and generating the name of the video file corresponding to the video call based on the preset relevant attribute field and the preset sequence.
For example, the preset relevant attribute fields include the session ID of the video call (unique per call), the call time, the application name, and the user uid. The file name is generated as session + call time (including call start time and call end time) + application name + user uid.
In this embodiment, when classifying the pulled video files, a first attribute field (for example, the session field in the video file name) is obtained from the related attribute fields of each file name, and video files sharing the same first attribute field are treated as one video file pair, thereby determining one or more first video file pairs.
It should be noted that in the classification result some classes may contain only two video files while others may contain more than two. Determining one or more first video file pairs according to the classification result comprises:
when a class contains only two video files, directly taking the two video files as a first video file pair; and
when a class contains more than two video files, secondarily classifying the video files in that class according to a second attribute field of the video file name, splicing the secondarily classified video files based on a preset video splicing rule to obtain two spliced video files, and taking the two spliced video files as a first video file pair.
For example, the second attribute field includes a user uid.
It can be understood that, owing to network fluctuation, the same video stream may be written out intermittently as several video files; these files need to be spliced into one complete video file.
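The classification logic above (grouping by the session field, then secondary classification by uid for classes larger than two) can be sketched as follows; `first_pairs` is a hypothetical helper, and the actual splicing of per-uid fragments is elided:

```python
from collections import defaultdict

def first_pairs(files):
    """files: parsed attribute dicts. Classes with exactly two members become
    first video file pairs directly; larger classes are secondarily classified
    by uid (their fragments would then be spliced into one file per uid)."""
    by_session = defaultdict(list)
    for f in files:
        by_session[f["session"]].append(f)
    pairs, needs_splicing = [], []
    for members in by_session.values():
        if len(members) == 2:
            pairs.append(tuple(members))
        else:
            by_uid = defaultdict(list)
            for m in members:
                by_uid[m["uid"]].append(m)
            needs_splicing.append(dict(by_uid))
    return pairs, needs_splicing
```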
In this embodiment, the preset video stitching rule includes:
respectively acquiring the generation times of the secondarily classified video files, and splicing them in order of generation time, earliest first; and
naming the spliced video file according to the preset naming rule to generate a complete video file.
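One way to realize this ordering with FFmpeg (which the embodiment states it uses for splicing) is the concat demuxer: sort the fragments by generation time and emit a list file. This sketch assumes the fragments share codecs so stream copy is safe:

```python
def concat_list(fragments):
    """fragments: (path, generation_time) tuples of one secondarily classified
    class; returns the text of an FFmpeg concat-demuxer list file, earliest
    generation time first."""
    ordered = sorted(fragments, key=lambda f: f[1])
    return "".join(f"file '{path}'\n" for path, _ in ordered)
```

The resulting text would be written to a list file and consumed with `ffmpeg -f concat -safe 0 -i list.txt -c copy out.mp4`.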
Further, to guard against splicing errors, the preset video splicing rule further includes:
judging whether the time of the spliced video file is continuous or not;
and when the spliced video file is judged to be discontinuous in time, sending out early warning information.
For example, when splicing two video files F1 and F2, their generation times are obtained first. If sorting by generation time yields the order F2, F1, the end time of F2 and the start time of F1 are obtained (the start time is calculated from the generation time and the video duration), and it is judged whether these two time points are continuous, i.e. whether the interval between them is smaller than a preset interval such as 1 second. If so, the spliced video file is considered correct; otherwise exception handling is performed, for example sending warning information to remind an administrator.
Checking the spliced video file for continuity in this way detects abnormal splicing, such as a fragment missing in the middle, and improves the accuracy of video processing.
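The continuity check in the F1/F2 example can be sketched as below, under the assumption stated in the example that a fragment's generation time is the moment the file was written, i.e. its end time:

```python
def is_continuous(gen_a, dur_a, gen_b, dur_b, max_gap=1.0):
    """Continuity check for two fragments of the same stream.

    Generation time is taken as the fragment's end time, so the later
    fragment's start time is its generation time minus its duration.
    """
    if gen_a <= gen_b:
        end_earlier, start_later = gen_a, gen_b - dur_b
    else:
        end_earlier, start_later = gen_b, gen_a - dur_a
    return abs(start_later - end_earlier) <= max_gap
```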
A processing step, namely normalizing each video file of the first video file pair based on a preset normalization rule to obtain a second video file pair.
In this embodiment, the preset normalization processing rule includes:
the resolution of each video file in the first pair of video files is analyzed and compared with a standardized resolution, and the resolution of each video file in the first pair of video files is adjusted based on the comparison result.
It should be noted that in practical applications the resolution of a video file is usually reduced. In special cases, for example when the identity document held up by the user is particularly blurred, the resolution may instead be enhanced using machine-learning techniques, which are not described in detail here.
In other embodiments, the preset normalization processing rule further includes:
respectively reading the aspect ratio of each video file in the first video file pair, comparing the aspect ratio of each video file in the first video file pair with the standardized aspect ratio, and adjusting the aspect ratio of each video file in the first video file pair based on the comparison result.
The standardized resolution and standardized aspect ratio are preset and stored, and can be adjusted to actual requirements in practical applications.
Besides adjusting resolution and aspect ratio so that the files conform to the standard, in other embodiments the preset normalization rule may further include: recognizing the face position in each video file of the one or more video file pairs, adjusting the video picture based on the face position and the aspect ratio, and generating the normalized video file. Face-position recognition is a mature technology and is not described here.
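A sketch of resolution/aspect-ratio normalization as an FFmpeg command builder; the 640x480 standard value and the pad-to-center layout are assumptions, not taken from the patent:

```python
def normalize_cmd(src, dst, std_w=640, std_h=480):
    """Build an ffmpeg command that scales the video down to fit the standard
    resolution while keeping its original proportions, then pads (centered)
    to the standardized aspect ratio."""
    vf = (f"scale={std_w}:{std_h}:force_original_aspect_ratio=decrease,"
          f"pad={std_w}:{std_h}:(ow-iw)/2:(oh-ih)/2")
    return ["ffmpeg", "-y", "-i", src, "-vf", vf, dst]
```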
A judging step, namely determining the two video files of the second video file pair and judging whether the two video files are synchronized according to a preset synchronization judgment rule.
In this embodiment, the determining whether the two video files are synchronized according to a preset synchronization determination rule includes:
respectively acquiring the video generation time and video duration of the two video files in the second video file pair, and calculating the video start times and the degree of start-time difference of the two video files; and
judging whether the degree of difference is smaller than a preset threshold; if so, the two video files are judged to be synchronous, otherwise they are judged to be asynchronous.
The degree of start-time difference may be the interval between the two files' start times, or the ratio of that interval to the total duration of the video files. A certain difference between the start times of the two videos is allowed as long as it is kept small; an overly large difference makes the videos noticeably out of sync, so a smaller preset threshold is generally better. Both the measure of difference and the preset threshold can be adjusted to actual requirements in practical use.
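The synchronization judgment, with both variants of the difference degree (absolute interval, or that interval as a fraction of the total duration), can be sketched as:

```python
def is_synchronized(gen_a, dur_a, gen_b, dur_b, threshold=0.5, relative=False):
    """Start time = generation time - duration, the generation time being
    taken as the moment the finished file was written. With relative=True,
    the difference degree is the start-time interval divided by the longer
    file's duration instead of the raw interval in seconds."""
    start_a, start_b = gen_a - dur_a, gen_b - dur_b
    diff = abs(start_a - start_b)
    if relative:
        diff /= max(dur_a, dur_b)
    return diff <= threshold
```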
A synthesizing step, namely, when the two video files of the second video file pair are judged to be synchronous, synthesizing the two video files into a target video file and storing the target video file under a second preset storage path.
During synthesis, the synthesized target video file is named "session + start time + end time + application name".
The second preset storage path is a predetermined storage location for storing the synthesized target video file meeting the requirement.
To save storage space, intermediate-process videos, such as temporary files ending with ".part1" or ".part2", are cleaned up after synthesis is complete.
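A sketch of the synthesis and cleanup steps as helpers; the side-by-side hstack layout is an assumption, since the patent does not specify how the two streams are combined:

```python
def compose_cmd(left, right, out):
    """ffmpeg command composing the two synchronized files side by side
    (assumed layout) into the target video file."""
    return ["ffmpeg", "-y", "-i", left, "-i", right,
            "-filter_complex", "[0:v][1:v]hstack=inputs=2[v]",
            "-map", "[v]", out]

def intermediate_files(paths):
    """Select the temporary ".part1"/".part2" files to delete after synthesis."""
    return [p for p in paths if p.endswith((".part1", ".part2"))]
```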
If all the steps complete normally, verification is usually unnecessary; a customer-service system administrator can randomly sample a certain proportion of the results each week for inspection.
In other embodiments, when the processor 12 executes the program code of the video processing program 10, the following steps are further implemented:
when the two video files are judged to be asynchronous, acquiring the start times and the time difference of the two video files;
determining, based on the start times, which of the two video files is to be cut, and determining the cutting time point of that file based on the time difference; and
cutting the video file to be cut at the cutting time point, and generating a synchronized second video file pair based on the cut video file.
In this embodiment, the video file with the earlier start time is taken as the video file to be cut.
After a video file is cut, its name needs to be updated, for example by adding a suffix "${session}.part1" or "${session}.part2".
Video splicing and cutting in this embodiment use the open-source FFmpeg tool.
Cutting yields a synchronized video file pair, after which the file synthesis operation is executed.
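The cut-then-resynchronize logic can be sketched as: pick the earlier-starting file and trim the start-time difference from its head. The stream-copy trim is an assumption for speed; keyframe granularity may shift the actual cut point slightly:

```python
def plan_cut(start_a, start_b):
    """Return (index of the file to cut, seconds to trim from its head);
    the file with the earlier start time is the one to be cut."""
    if start_a <= start_b:
        return 0, start_b - start_a
    return 1, start_a - start_b

def cut_cmd(src, dst, offset):
    """ffmpeg head-trim using input seeking and stream copy (no re-encode)."""
    return ["ffmpeg", "-y", "-ss", f"{offset:.3f}", "-i", src, "-c", "copy", dst]
```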
In the electronic device 1 provided by the above embodiment, the names of the video files to be processed are first parsed to determine the first video pair; the first video pair is then normalized to obtain the second video pair; the second video pair is checked for synchronization and processed accordingly; and finally the synchronized video files are synthesized. Determining video pairs by parsing, normalizing them, and performing synchronization judgment and cutting improves both the quality and the efficiency of video file synthesis.
Alternatively, in other embodiments, the video processing program 10 may be divided into one or more modules, which are stored in the memory 11 and executed by the one or more processors 12 to implement the present invention.
For example, referring to fig. 3, a schematic diagram of the program modules of the video processing program 10 in fig. 2, in this embodiment the video processing program 10 may be divided into modules 110 to 150. The functions or operation steps implemented by modules 110 to 150 are similar to those described above and are not detailed here, wherein:
the pull module 110 is configured to pull the video file to be processed from the first preset storage path at preset intervals;
the analysis module 120 is configured to read and analyze the name of the video file to be processed to obtain a related attribute field of the video file to be processed, classify the video file to be processed according to the related attribute field to obtain a classification result, and determine one or more first video file pairs according to the classification result;
the processing module 130 is configured to perform normalization processing on the first video file pairs respectively based on preset normalization processing rules to obtain second video file pairs;
a determining module 140, configured to determine two video files in the second video file pair, and determine whether the two video files are synchronized according to a preset synchronization determination rule; and
a synthesizing module 150, configured to synthesize the two video files into a target video file when the two video files in the second video file pair are judged to be synchronous, and store the target video file under a second preset storage path.
Furthermore, an embodiment of the present invention provides a computer-readable storage medium including a video processing program 10 which, when executed by a processor, implements the following steps:
a pulling step, namely pulling the video files to be processed from a first preset storage path at intervals of a preset time;
an analysis step, namely reading and parsing the names of the video files to be processed to obtain their related attribute fields, classifying the video files according to the related attribute fields to obtain a classification result, and determining one or more first video file pairs according to the classification result;
a processing step, namely normalizing each video file of the first video file pair based on a preset normalization rule to obtain a second video file pair;
a judging step, namely determining the two video files of the second video file pair and judging whether the two video files are synchronized according to a preset synchronization judgment rule; and
a synthesizing step, namely, when the two video files of the second video file pair are judged to be synchronous, synthesizing the two video files into a target video file and storing the target video file under a second preset storage path.
The specific implementation of the computer-readable storage medium of the present invention is substantially the same as the content in the embodiments of the video processing method, and is not described herein again.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, apparatus, article, or method that includes the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. A video processing method applied to an electronic device is characterized by comprising the following steps:
a pulling step, namely pulling the video file to be processed from a first preset storage path at intervals of preset time;
the method comprises an analysis step, namely reading and analyzing the name of the video file to be processed to obtain a first attribute field of the video file to be processed, classifying the video file to be processed according to the first attribute field, determining the video files to be processed of the same video call as one class, and determining a first video file pair for each class, wherein the analysis step comprises the following steps: when only two video files are included in one class, directly taking the two video files as a first video file pair; when the number of the video files in one class exceeds two, performing secondary classification on the video files in the same classification result according to a second attribute field of the video file name, splicing the video files subjected to secondary classification based on a preset video splicing rule to obtain two spliced video files, and taking the two spliced video files as a first video file pair;
a processing step, namely respectively carrying out standardized processing on the first video file pair based on a preset standardized processing rule to obtain a second video file pair;
a judging step, namely determining the two video files in the second video file pair, calculating the video start times and the time difference of the two video files from their video generation times and video durations, and judging whether the time difference is smaller than a preset threshold; if so, the two video files are judged to be synchronous, otherwise the two video files are judged to be asynchronous; and
and a synthesizing step, namely synthesizing the two video files to obtain a target video file when the two video files in the second video file pair are judged to be synchronous, and storing the target video file into a second preset storage path.
2. The video processing method according to claim 1, wherein the preset video stitching rule comprises:
respectively acquiring the generation times of the secondarily classified video files, and splicing the secondarily classified video files in order of generation time, earliest first; and
naming the spliced video file according to a preset naming rule to generate a complete video file.
3. The video processing method according to claim 2, wherein the preset video stitching rule further comprises:
judging whether the time of the spliced video file is continuous; and
sending out early-warning information when the spliced video file is judged to be discontinuous in time.
4. The video processing method according to claim 1, wherein the preset normalization processing rule comprises: the resolution of each video file in the first pair of video files is analyzed and compared with a standardized resolution, and the resolution of each video file in the first pair of video files is adjusted based on the comparison result.
5. The video processing method according to claim 4, wherein the preset normalization processing rule further comprises: respectively reading the aspect ratio of each video file in the first video file pair, comparing the aspect ratio of each video file in the first video file pair with the standardized aspect ratio, and adjusting the aspect ratio of each video file in the first video file pair based on the comparison result.
6. The video processing method of claim 1, wherein the video processing method further comprises:
when the two video files are judged to be asynchronous,
determining a video file to be cut in the two video files based on the video starting time of the two video files, and determining a cutting time point corresponding to the video file to be cut based on the time difference; and
and cutting the video file to be cut according to the cutting time point, and generating a synchronous second video file pair based on the cut video file.
7. An electronic device, comprising a memory and a processor, wherein the memory stores a video processing program operable on the processor, and the video processing program, when executed by the processor, implements the steps of the video processing method according to any one of claims 1 to 6.
8. A computer-readable storage medium, comprising a video processing program, which when executed by a processor implements the steps of the video processing method according to any one of claims 1 to 6.
CN202010241528.3A 2020-03-30 2020-03-30 Video processing method, electronic device and computer readable storage medium Active CN111343502B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010241528.3A CN111343502B (en) 2020-03-30 2020-03-30 Video processing method, electronic device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010241528.3A CN111343502B (en) 2020-03-30 2020-03-30 Video processing method, electronic device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111343502A CN111343502A (en) 2020-06-26
CN111343502B true CN111343502B (en) 2021-11-09

Family

ID=71187455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010241528.3A Active CN111343502B (en) 2020-03-30 2020-03-30 Video processing method, electronic device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111343502B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115883874A (en) * 2022-01-27 2023-03-31 北京中关村科金技术有限公司 Compliance service detection method and device based on file

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7353236B2 (en) * 2001-03-21 2008-04-01 Nokia Corporation Archive system and data maintenance method
US8682939B2 (en) * 2002-05-22 2014-03-25 Teac Aerospace Technologies, Inc. Video and audio recording using file segmentation to preserve the integrity of critical data
US7609947B2 (en) * 2004-09-10 2009-10-27 Panasonic Corporation Method and apparatus for coordinating playback from multiple video sources
US20080189735A1 (en) * 2006-12-26 2008-08-07 Jason Shawn Barton System and Method for Combining Media Data
JP4569676B2 (en) * 2008-07-02 2010-10-27 株式会社デンソー File operation device
US20110173235A1 (en) * 2008-09-15 2011-07-14 Aman James A Session automated recording together with rules based indexing, analysis and expression of content
CN101751404A (en) * 2008-12-12 2010-06-23 金宝电子工业股份有限公司 Classification method of multimedia files
CN101938623A (en) * 2010-09-09 2011-01-05 宇龙计算机通信科技(深圳)有限公司 Multipath image transmission method and terminal based on video call
CN102982033B (en) * 2011-09-05 2016-08-03 深圳市天趣网络科技有限公司 The storage method and system of small documents
CN103279497B (en) * 2013-05-07 2017-03-15 珠海金山办公软件有限公司 A kind of method, system and device for carrying out automatically sort operation according to data type
CN103544252B (en) * 2013-10-14 2017-11-14 成都云朵技术有限公司 A kind of video source name processing method and processing device
US9608879B2 (en) * 2014-12-02 2017-03-28 At&T Intellectual Property I, L.P. Methods and apparatus to collect call packets in a communications network
CN105786857B (en) * 2014-12-24 2019-12-10 Tcl集团股份有限公司 Method and system for improving video aggregation efficiency
CN109213875A (en) * 2018-07-24 2019-01-15 努比亚技术有限公司 File management method, mobile terminal and computer readable storage medium

Also Published As

Publication number Publication date
CN111343502A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
US10073861B2 (en) Story albums
US9009850B2 (en) Database management by analyzing usage of database fields
CN114998484A (en) Audio and video generation method and device, computer equipment and storage medium
CN111400361A (en) Data real-time storage method and device, computer equipment and storage medium
US9665574B1 (en) Automatically scraping and adding contact information
CN111343502B (en) Video processing method, electronic device and computer readable storage medium
EP3564833B1 (en) Method and device for identifying main picture in web page
US20240037134A1 (en) Method and apparatus for searching for clipping template
CN112363814A (en) Task scheduling method and device, computer equipment and storage medium
CN110502557B (en) Data importing method, device, computer equipment and storage medium
CN111538672A (en) Test case layered test method, computer device and computer-readable storage medium
CN110727576A (en) Web page testing method, device, equipment and storage medium
CN116661936A (en) Page data processing method and device, computer equipment and storage medium
CN111400289B (en) Intelligent user classification method, server and storage medium
CN112632422B (en) Intelligent graph cutting method and device, electronic equipment and storage medium
CN114936269A (en) Document searching platform, searching method, device, electronic equipment and storage medium
CN112950167A (en) Design service matching method, device, equipment and storage medium
CN113382283A (en) Video title identification method and system
WO2016173136A1 (en) Terminal application processing method and device thereof
CN112560938A (en) Model training method and device and computer equipment
CN112308074A (en) Method and device for generating thumbnail
CN111984839A (en) Method and apparatus for rendering a user representation
CN112232320B (en) Printed matter text proofreading method and related equipment
CN118013057A (en) Manuscript material recommendation method and device
CN112650569A (en) Timed task relation network graph generation method based on Oracle code and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant