CN110087123B - Video file production method, device, equipment and readable storage medium - Google Patents


Info

Publication number
CN110087123B
Authority
CN
China
Prior art keywords
video
image
content
screen
screenshot
Prior art date
Legal status
Active
Application number
CN201910402575.9A
Other languages
Chinese (zh)
Other versions
CN110087123A (en)
Inventor
蒋鹏
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910402575.9A
Publication of CN110087123A
Application granted
Publication of CN110087123B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334 Recording operations
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/443 OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N21/4438 Window management, e.g. event handling following interaction with the user interface
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present application discloses a video file production method, apparatus, device, and readable storage medium, relating to the field of video processing. The method comprises the following steps: acquiring a screen recording instruction for recording the display content of a terminal display screen; collecting the display content of the display screen as a video stream; acquiring a screenshot image of the display content from the video stream during screen recording; recording the timestamp of the screenshot image in the video stream when the screenshot image meets the video file production requirement; and clipping the video file according to the timestamp to obtain a target video segment. After the display content of the display screen is recorded, screenshot images in the recorded video stream are matched against the video file production requirement, and when a screenshot image meets the requirement, the target video segment is clipped from the video stream according to the timestamp of that screenshot image. The screen recording video is thereby clipped automatically, which improves the clipping efficiency of the target video segment.

Description

Video file production method, device, equipment and readable storage medium
Technical Field
The embodiments of the present application relate to the field of video processing, and in particular to a video file production method, apparatus, device, and readable storage medium.
Background
Screen recording software records the content displayed by a terminal over a period of time. For example, when a player wants to record a game session, the player can start the screen recording function of the screen recording software before the game starts and stop recording after the game ends to obtain a screen recording video of the session; the player can then clip the screen recording video to extract the more exciting video segments of the game.
In the related art, to clip a highlight video segment from a screen recording video, the entire recording must first be played back so that the start time and end time of the highlight segment can be determined; the recording is then clipped according to that start time and end time to obtain the highlight segment.
However, with this approach the player must watch the entire screen recording video to determine the start time and end time of the highlight segment before clipping can be performed, which makes the clipping process cumbersome.
Disclosure of Invention
The embodiments of the present application provide a video file production method, apparatus, device, and readable storage medium, which can solve the problem that clipping a highlight video segment is cumbersome. The technical solution is as follows:
in one aspect, a video file production method is provided, the method comprising:
acquiring a screen recording instruction for recording the display content of a terminal display screen;
collecting the display content of the display screen as a video stream to perform screen recording;
acquiring a screenshot image of the display content from the collected video stream during screen recording, wherein the screenshot image is acquired either at a preset period or when an acquisition condition is met;
recording the corresponding timestamp of the screenshot image in the video stream when the screenshot image meets the video file production requirement; and
clipping the video file according to the screenshot image that meets the video file production requirement and its corresponding timestamp to obtain a target video segment.
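The claimed steps can be sketched in ordinary code. The following Python sketch is purely illustrative (the function names, the frame representation, and the clip lengths are assumptions, not taken from the patent): frames are tested against a production-requirement predicate, matching timestamps are recorded, and clip boundaries are then derived from each recorded timestamp.

```python
# Hypothetical sketch of the claimed pipeline: collect (timestamp, image) pairs,
# test each screenshot against a "production requirement" predicate, record the
# matching timestamps, then derive clip boundaries from those timestamps.
from typing import Callable, List, Tuple

Frame = Tuple[float, str]  # (timestamp in seconds, stand-in for image data)

def collect_timestamps(frames: List[Frame],
                       meets_requirement: Callable[[str], bool]) -> List[float]:
    """Record the timestamp of every screenshot that meets the requirement."""
    return [ts for ts, image in frames if meets_requirement(image)]

def clip_bounds(ts: float, lead: float = 10.0, tail: float = 2.0) -> Tuple[float, float]:
    """Clip a segment of preset length ending shortly after the matched frame."""
    return max(0.0, ts - lead), ts + tail

frames = [(0.0, "idle"), (12.5, "kill"), (30.0, "idle")]
hits = collect_timestamps(frames, lambda img: img == "kill")
print([clip_bounds(t) for t in hits])  # → [(2.5, 14.5)]
```

The predicate stands in for whatever recognition step (character or image recognition) decides that a screenshot meets the requirement.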
In another aspect, a video file production apparatus is provided, the apparatus comprising:
an acquisition module, configured to acquire a screen recording instruction for recording the display content of a terminal display screen;
a collection module, configured to collect the display content of the display screen as a video stream to perform screen recording;
the acquisition module being further configured to acquire a screenshot image of the display content from the collected video stream during screen recording, wherein the screenshot image is acquired either at a preset period or when an acquisition condition is met;
a recording module, configured to record the corresponding timestamp of the screenshot image in the video stream when the screenshot image meets the video file production requirement; and
a clipping module, configured to clip the video file according to the screenshot image that meets the video file production requirement and its corresponding timestamp to obtain a target video segment.
In another aspect, a computer device is provided, comprising a processor and a memory, where at least one instruction, at least one program, a code set, or an instruction set is stored in the memory and is loaded and executed by the processor to implement the video file production method provided in the embodiments of the present application.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a code set, or an instruction set is stored and is loaded and executed by a processor to implement the video file production method provided in the embodiments of the present application.
In another aspect, a computer program product is provided which, when run on a computer, causes the computer to execute the video file production method provided in the embodiments of the present application.
The beneficial effects brought by the technical solutions of the embodiments of the present application include at least the following:
after the display content of the display screen is recorded, screenshot images in the recorded video stream are matched against the video file production requirement, and when a screenshot image meets the requirement, the target video segment is clipped from the video stream according to the timestamp of that screenshot image. This realizes automatic clipping of the screen recording video, avoids the need for the user to preview the entire recording and manually clip segments from it after recording, and, by clipping the target video segment automatically according to the video file production requirement, improves the clipping efficiency of the target video segment.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a terminal server interaction implementation environment of a video file production method according to an exemplary embodiment of the present application;
FIG. 2 is a flow chart of a method for producing a video file according to an exemplary embodiment of the present application;
fig. 3 is a schematic interface diagram of the smart screen recording function provided based on the embodiment shown in fig. 2;
FIG. 4 is a schematic diagram of an interface for granting a screen recording right to a hypervisor after the screen recording function is turned on according to the embodiment shown in FIG. 2;
fig. 5 is a schematic interface diagram of the smart screen recording function provided based on the embodiment shown in fig. 2, which is displayed in the form of a floating window after being turned on;
FIG. 6 is a schematic data interaction diagram of a video file production method according to an exemplary embodiment of the present application;
FIG. 7 is a flow chart of a method for producing a video file according to another exemplary embodiment of the present application;
FIG. 8 is a schematic diagram provided based on the embodiment shown in FIG. 7, for determining whether the screenshot image meets the production requirement of the video file by performing character recognition on the image content;
fig. 9 is another schematic diagram provided based on the embodiment shown in fig. 7, which is used for determining whether the screenshot image meets the production requirement of the video file by performing character recognition on the image content;
FIG. 10 is a schematic diagram provided based on the embodiment shown in FIG. 7, for determining whether the captured image meets the requirements of video file production by performing image recognition on the image content;
FIG. 11 is a schematic diagram of a process for determining a game character provided based on the embodiment shown in FIG. 7;
FIG. 12 is a schematic illustration of a feature matching algorithm provided by the game character determination process shown in FIG. 11;
fig. 13 is a schematic diagram provided based on the embodiment shown in fig. 7, for judging whether the screenshot image meets the production requirement of the video file by determining whether the image content changes;
FIG. 14 is another schematic diagram provided based on the embodiment shown in FIG. 7, which is used for judging whether the screenshot image meets the production requirement of the video file by judging whether the content of the image changes;
FIG. 15 is a schematic diagram of determining whether a game play is over based on the embodiment shown in FIG. 7;
FIG. 16 is a flowchart of a method for producing a video file according to another exemplary embodiment of the present application;
fig. 17 is a schematic diagram of determining whether an aggregate sub-segment exists for a target video segment provided based on the embodiment shown in fig. 16;
fig. 18 is a schematic diagram of another determination of whether an aggregate sub-segment exists for a target video segment provided based on the embodiment shown in fig. 16;
FIG. 19 is a schematic diagram of stitching a target video segment provided based on the target video segment shown in FIG. 17;
FIG. 20 is a schematic illustration of a stitching of a target video segment provided based on the target video segment shown in FIG. 18;
FIG. 21 is an interface diagram of a clip video list provided based on the embodiment shown in FIG. 16;
FIG. 22 is a flow chart of a method of video file production provided by another exemplary embodiment of the present application;
FIG. 23 is a flowchart of a method for producing a video file according to another exemplary embodiment of the present application;
fig. 24 is an overall process schematic diagram of a video file production method provided based on the embodiment shown in fig. 23;
fig. 25 is a block diagram of a video file production apparatus according to an exemplary embodiment of the present application;
fig. 26 is a block diagram of a video file creation apparatus according to another exemplary embodiment of the present application;
fig. 27 is a block diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in detail below with reference to the accompanying drawings.
First, terms used in the embodiments of the present application are briefly introduced:
screen recording: the screen recording is also called screen recording, and refers to a processing mode in which a terminal acquires display contents of all or part of a display screen and encodes the acquired display contents in a video stream form to obtain a video file. Optionally, the screen recording area may be the entire area of the display screen or a partial area of the display screen, when the screen recording area is the entire area of the display screen, the entire content displayed in the display screen is recorded in the video file, and when the screen recording area is the partial area of the display screen, the content displayed in the partial area is recorded in the video file. Optionally, when the screen recording region is a partial region of the display screen, the screen recording region may be set by a user, or may be automatically determined according to a window of a foreground running program displayed in the display screen, for example: displaying the display content of the application program 1 currently running in the foreground in the area a of the display screen, and setting the area a as a screen recording area according to the display area of the application program 1 to perform screen recording.
Illustratively, the application scenarios of the embodiments of the present application include at least one of the following:
First, the video file production method provided by the embodiments of the present application is applied to a game management program, which automatically clips game highlights through video image recognition. The specific scenario is as follows:
A game management program is installed on the terminal and provides a smart screen recording function, which offers highlight video clipping for a target game. When the user turns on the smart screen recording function and starts the target game, the smart screen recording function in the game management program records the screen during the user's session of the target game and, while recording, checks whether image frames in the recorded video stream meet the video file production requirement. Illustratively, it checks whether the characters in an image frame match the characters associated with a highlight, such as a kill notification; when an image frame meets the video file production requirement, a video segment of preset length before that frame is clipped according to the frame's timestamp in the video file, yielding a highlight video segment.
Second, the video file production method provided by the embodiments of the present application is applied to a live broadcast management program, which automatically clips live broadcast segments through video image recognition. The specific scenario is as follows:
A live broadcast management program is installed on the terminal and provides a smart screen recording function, which offers live segment clipping for a target live broadcast program. When the user turns on the smart screen recording function and starts the target live broadcast program to stream, the smart screen recording function records the screen during the broadcast and, while recording, checks whether image frames in the recorded video stream meet the video file production requirement. For example, when the broadcast is a dance live stream, it identifies whether the person in an image frame is dancing, and the video segment between the first image frame that meets the video file production requirement and the last image frame that meets it is clipped to obtain the live broadcast segment.
It should be noted that the above application scenarios are only illustrative examples; the embodiments of the present application can be applied to any scenario in which a target video segment is obtained by clipping a screen-recorded video.
Optionally, the video file production method provided by the present application can be applied to a terminal alone, or to an implementation environment in which a terminal interacts with a server. When applied to a terminal, an application with a screen recording function is installed on the terminal; the application records the display content of the terminal's display screen and matches images in the recorded video stream against the video file production requirement, thereby clipping the target video segment. When applied to an implementation environment in which a terminal interacts with a server, an application with a screen recording function is installed on the terminal; after the application records the display content of the terminal's display screen, the terminal sends the images to be recognized from the recorded video stream to the server, the server matches the images against the video file production requirement and feeds the matching result back to the terminal, and the terminal then clips the target video segment accordingly.
Referring schematically to FIG. 1, the implementation environment is described by taking terminal-server interaction as an example. As shown in FIG. 1, the implementation environment of the video file production method provided in the embodiments of the present application includes a terminal 110, a server 120, and a communication network 130.
An application 140 is installed on the terminal 110. The application 140 provides a screen recording function for recording the display content of the display screen of the terminal 110. When the terminal 110 starts screen recording, it sends image frames of the recorded video stream 150 to the server 120 through the communication network 130. The server 120 includes an image matching module 121, which matches the image frames sent by the terminal against the video file production requirement and obtains a matching result. The server 120 feeds the matching result back to the terminal 110 through the communication network 130, and the terminal 110 then clips the video stream according to the matching result to obtain a target video segment 160.
It should be noted that the above embodiment takes as an example image frames obtained from the video stream 150 being sent to the server 120; the image frames may also be obtained by directly capturing screenshots of the terminal's display screen.
With the term introduction and implementation environment described above in mind, a video file production method provided in an embodiment of the present application is now described. FIG. 2 is a flowchart of a video file production method provided by an exemplary embodiment of the present application, described by taking its application to a terminal as an example. As shown in FIG. 2, the method includes:
step 201, acquiring a screen recording instruction for recording the display content of the terminal display screen.
Optionally, the screen recording instruction instructs the terminal to start recording the display content of the display screen; optionally, it is the instruction corresponding to turning on the screen recording function.
Optionally, the embodiments of the present application are described by taking as an example the video file production method applied to a management program installed on the terminal, the management program being provided with a screen recording function.
Optionally, the screen recording instruction may be generated automatically by the terminal according to the application currently running in the foreground, or may be obtained when the user selects the screen-recording start control in the application.
The two modes are explained separately below:
First, the screen recording instruction is generated automatically by the terminal according to the application currently running in the foreground:
a screen-recording-function start signal is received and the pre-recording function is started. The pre-recording function monitors the running state of the terminal and starts the screen recording function when the running state meets the recording condition: when the application running on the terminal is the target application, the running state of the terminal is determined to meet the recording condition, and a screen recording instruction for recording the display content of the terminal display screen is acquired.
Schematically, as shown in FIG. 3, an auxiliary tool list 310 is displayed in a display interface 300 of the management program. The auxiliary tool list 310 includes a smart screen recording function 311. When the user selects the smart screen recording function 311, a detailed introduction interface 320 for it is displayed, showing the applications 321 and 322 that support the smart screen recording function 311. For example, when the management program is a game management program, the applications supporting smart screen recording in it are game program A and game program B. The detailed introduction interface further includes a turn-on control 323. When the user selects the turn-on control 323, a screen recording turn-on message 324 is displayed, which prompts that the smart screen recording function has been turned on and that recording will start automatically after a game is launched. The turn-on message 324 indicates that the pre-recording function is on and that the screen recording function will be started after the game starts, i.e., a screen recording instruction will be generated to start recording the display content of the terminal display screen.
Optionally, when monitoring the running state of the terminal, the screen recording function generally monitors the application running in the terminal's foreground. Optionally, this foreground monitoring is implemented through stack-top monitoring, a mode that observes the application-switching behavior of the terminal. For example, through stack-top monitoring, when the application run by the terminal is determined to have switched from program A to program B, the application currently running in the terminal's foreground is determined to be program B.
Optionally, when the application running on the terminal is determined to be the target application through stack-top monitoring, the screen recording instruction is acquired to record the display content of the terminal display screen.
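As a rough illustration of stack-top monitoring, the sketch below replays a sequence of foreground-app samples and emits a recording instruction whenever the top of the stack switches to a target application. The sampling source and package names are stand-ins; a real system would obtain the foreground app through OS-specific usage-access APIs.

```python
# Illustrative stack-top monitoring sketch: detect switches of the foreground
# (top-of-stack) application and emit a "start recording" instruction whenever
# the foreground becomes a target application.
from typing import Iterable, List

TARGET_APPS = {"game_program_a", "game_program_b"}  # hypothetical package names

def watch_foreground(samples: Iterable[str]) -> List[str]:
    """Return a start-recording event each time the foreground switches to a target app."""
    events: List[str] = []
    previous = None
    for app in samples:
        # Only a switch (app != previous) into a target app triggers recording.
        if app != previous and app in TARGET_APPS:
            events.append(f"start_recording:{app}")
        previous = app
    return events

print(watch_foreground(["launcher", "game_program_a", "game_program_a", "browser"]))
# → ['start_recording:game_program_a']
```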
Optionally, stack-top monitoring can only be applied after the management program has obtained the terminal's stack-top monitoring permission. For example, after the user turns on the screen recording function, the system automatically prompts that the management program wants to obtain the foreground application monitoring capability, and after the user confirms the prompt message, stack-top monitoring is started.
Optionally, the screen recording function also needs the terminal's screen recording permission: only after the user grants the screen recording permission to the management program can the management program start the screen recording function to record the display content of the display screen. Optionally, the screen recording function also needs the terminal's floating window permission, which is used to indicate the recording in progress through a floating window while the display content is being recorded. Optionally, the management program may also indicate the recording in progress in other ways, such as a notification bar prompt.
Referring to fig. 4 schematically, after a user selects to start the screen recording function in the management program, a permission opening interface 400 is displayed. A list 410 of permissions to be opened is displayed in the permission opening interface 400, including a screen recording permission 411, a floating window permission 412, and a usage access permission 413, where the usage access permission 413 is the above-mentioned stack top monitoring permission; since "stack top monitoring permission" is a computer-specific term, it is displayed in the permission opening interface 400 as the usage access permission 413. Illustratively, when the user checks the screen recording permission 411, a system message 420 is superimposed on the permission opening interface 400; the system message 420 prompts the user to grant the screen recording permission 411 to the management program, and after the user selects the confirmation control 421 in the system message 420, the terminal grants the management program permission to record the display content of the display screen.
Illustratively, when the floating window permission is granted and the user opens an application program supported by the screen recording function, a floating window is also displayed in the running interface while the application program runs, and the floating window is used to prompt the user that screen recording is currently in progress. Schematically, referring to fig. 5 and taking the management program as a game management program as an example, when the terminal grants the floating window permission to the game management program and the user opens a target game, a game starting interface 500 is displayed in the terminal interface, with a floating window 510 displayed above the game starting interface 500 in an overlapping manner. The floating window 510 is used to prompt the user that the screen is being recorded during the current game match of the target game, and displays the content "smart screen recording is on; tap to collapse".
Secondly, the screen recording instruction is obtained after a user selects a screen recording starting control in an application program;
optionally, with reference to the description in the first case, when the user does not open the usage access right shown in fig. 4, that is, does not open the stack top monitoring right, the management program cannot automatically open the screen recording function according to the application program running on the current terminal foreground, and the user is required to manually control the opening of the screen recording function. Optionally, the user may directly start the screen recording function in the management program, or may start the screen recording function through the floating window when the user starts the floating window right to the management program.
Step 202, collecting display content of a display screen in a video stream mode to perform screen recording processing.
Optionally, in the process of acquiring the display content of the display screen in the form of a video stream, the display content of the display screen is acquired frame by frame, and the acquired image frames are labeled with corresponding timestamps, so as to generate a video stream corresponding to the display content.
Optionally, the timestamp may be labeled with the terminal time, or labeled with the screen recording start time as time 0. Illustratively, when labeling with the terminal time, if the terminal time is 11:11:20, the timestamp labeled on the currently acquired display content is 11:11:20. Optionally, the granularity of the timestamp labels may be determined according to the frequency of capturing the display content; for example, if the display content is acquired once per second, the timestamps are labeled to the second. Illustratively, when the screen recording start time is taken as time 0, the currently acquired display content is labeled according to the time difference between it and the initially acquired display content, and likewise the labeling granularity is determined by the acquisition frequency.
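A minimal sketch of the two labeling schemes — wall-clock ("terminal time") stamps versus 0-based offsets — is shown below; the function name, the frame list, and the fixed `fps` are illustrative assumptions, not part of the claimed method:

```python
import time

def label_frames(frames, fps, use_terminal_clock=False, start_time=None):
    """Attach a timestamp to each captured frame.

    use_terminal_clock=True labels frames with wall-clock ("terminal")
    time; otherwise stamps are offsets from a 0-based recording start.
    Label granularity follows the capture frequency (1/fps seconds).
    """
    if start_time is None:
        start_time = time.time()  # recording start on the terminal clock
    labeled = []
    for i, frame in enumerate(frames):
        offset = i / fps  # seconds since the recording started
        stamp = start_time + offset if use_terminal_clock else offset
        labeled.append((round(stamp, 3), frame))
    return labeled

# 0-based labeling at one frame per second
stream = label_frames(["f0", "f1", "f2"], fps=1, start_time=0.0)
# → [(0.0, "f0"), (1.0, "f1"), (2.0, "f2")]
```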
Step 203, in the process of screen recording, acquiring a screenshot image of the display content from the acquired video stream.
Optionally, the captured image corresponds to a timestamp in the video stream.
Optionally, the screenshot image is acquired either at a preset period or according to an acquisition condition. When acquiring a screenshot image of the display content, the screenshot image may be acquired from the video stream at preset time intervals, or according to a change rule between image frames in the video stream: when the change between image frames conforms to a preset rule, a screenshot image of the display content is acquired. For example, if the image difference between the current image frame and the most recently acquired image frame is greater than a preset difference, the current image frame is acquired as a screenshot image. The screenshot may also be acquired according to image content: when the image content of an image frame meets a preset content requirement, for example when preset content appears in the image frame, that image frame is acquired as a screenshot image.
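The condition-based variant above — capture the current frame when its difference from the last captured frame exceeds a preset difference — can be sketched as follows; the difference metric `diff_fn` is a pluggable assumption, not something the document specifies:

```python
def diff_triggered_shots(stream, min_diff, diff_fn):
    """Capture a frame as a screenshot image when it differs from the
    most recently captured frame by more than min_diff."""
    shots, last = [], None
    for ts, frame in stream:
        if last is None or diff_fn(frame, last) > min_diff:
            shots.append((ts, frame))
            last = frame
    return shots

# toy metric: absolute difference between scalar stand-in "frames"
shots = diff_triggered_shots([(0, 1), (1, 2), (2, 9)], 3, lambda a, b: abs(a - b))
# keeps the frames at t=0 and t=2 only
```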
Optionally, when screenshot images are acquired from the video stream every preset time, the screenshot images may be acquired during the screen recording process, or may be acquired uniformly from the video stream after the screen recording is completed.
Optionally, when the screenshot image is obtained during the screen recording process, the currently acquired display content of the display screen is obtained every preset time interval to obtain a screenshot image of the display content; when the screenshot images are obtained uniformly from the video stream after the screen recording is finished, i image frames are acquired from the video stream as screenshot images of the display content, where the interval between the timestamps of two adjacent acquired image frames is the preset duration.
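The uniform variant — choosing image frames whose timestamps are spaced by the preset duration — might be sketched as below, with the stream represented as (timestamp, frame) pairs:

```python
def sample_screenshots(stream, interval):
    """From a timestamped stream, keep frames so that the timestamps
    of two adjacent kept screenshots are at least `interval` apart."""
    shots, next_at = [], None
    for ts, frame in stream:
        if next_at is None or ts >= next_at:
            shots.append((ts, frame))
            next_at = ts + interval
    return shots

# a 2-second preset duration over a 4-frame stream keeps t=0 and t=2
sample_screenshots([(0, "a"), (1, "b"), (2, "c"), (3, "d")], 2)
```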
Optionally, when the screenshot image is acquired during the screen recording process, the screenshot may be taken from the display content before the corresponding frame is collected into the video stream, or the latest image frame may be taken from the video stream as the screenshot image after collection. When the screenshot is taken from the display content before it is collected into the video stream, the timestamp corresponding to the screenshot image may be determined from the current terminal time, or obtained as the current latest timestamp of the video stream. This may also be implemented as follows: the image frames in the video stream are timestamped according to the terminal time, and the timestamp of the screenshot image is the current terminal time; when determining the position of the screenshot image in the video stream, the timestamp of the image frame corresponding to that terminal time is located in the video stream according to the terminal time of the screenshot image.
Optionally, the screenshot image obtained from the video stream every preset time may be a continuous multi-frame screenshot image, that is, a video sub-segment with a preset length is obtained from the video stream every preset time, and then step 203 may also be implemented by obtaining a currently collected video sub-segment with a preset length in a preset period during the screen recording process, where the preset time is longer than the time corresponding to the preset length; or acquiring i video sub-segments from the video stream, wherein the interval duration between the start timestamps of two adjacent video sub-segments is preset duration.
And step 204, recording the time stamp of the screenshot image in the video stream when the screenshot image meets the video file making requirement.
Optionally, the video file production requirement is used to match screenshot images whose image content meets the clipping requirement. For example, when the management program is implemented as a game management program and the video stream needs to be clipped into highlight segments, a screenshot image is determined to meet the video file production requirement when it includes features corresponding to a highlight combat segment. For example: if the keyword "beat" is included in the screenshot image, it is determined that the player achieved a stage win in the game before the screenshot image was displayed, and the screenshot image is determined to meet the video file production requirement.
Optionally, the method for determining whether the screenshot image meets the production requirement of the video file comprises at least one of the following methods:
firstly, matching first image content in a screenshot image with required content in an image content list, and determining that the screenshot image meets the video file making requirement when the required content in the image content list is matched with the first image content;
and secondly, comparing the first image content in the screenshot image with the second image content of the latest screenshot image before the screenshot image, and determining that the screenshot image meets the video file making requirement when the first image content changes relative to the second image content.
And step 205, performing video file editing processing according to the screenshot image meeting the video file making requirement and the corresponding timestamp to obtain a target video clip.
In summary, in the video file production method provided in this embodiment, after the display content of the display screen is recorded, screenshot images taken from the recorded video stream are matched against the video file production requirement; when a screenshot image meets the requirement, the target video segment is clipped from the video stream according to the timestamp of the screenshot image. This implements automatic clipping of the screen-recorded video, avoids the need for the user to preview the whole recording and manually clip segments after recording, and improves the clipping efficiency of the target video segment.
In an alternative embodiment, the above screenshot image further needs to be subjected to image recognition after image cropping and image preprocessing, schematically, fig. 6 is an overall data interaction diagram of a video file production system according to an exemplary embodiment of the present application, as shown in fig. 6, where the video file production system includes: a screen recording module 610, a highlight moment identification module 620 and a highlight moment data management module 630.
The screen recording module 610 includes a video acquisition unit 611, where the video acquisition unit 611 is configured to acquire display contents of a display screen to obtain a screen recording video stream, and the screen recording module 610 further includes a picture acquisition unit 612, where the picture acquisition unit 612 is configured to acquire a still picture in the video stream every preset time period, and send the still picture to the highlight moment identification module 620.
The highlight moment recognition module 620 includes an image cropping unit 621, an image preprocessing unit 622, and an image recognition unit 623. The image cropping unit 621 crops the still picture sent by the screen recording module 610 according to a preset cropping rule and sends the cropped picture to the image preprocessing unit 622, which preprocesses it; the preprocessing includes at least one of image feature extraction, image segmentation, image grayscale processing, contrast stretching, sharpness adjustment, and character recognition. The image preprocessing unit 622 sends the preprocessed image to the image recognition unit 623, which matches it against the video file production requirement and, when the requirement is met, sends the highlight details corresponding to the still picture to the highlight moment data management module 630. After the screen recording is finished, the highlight moment data management module 630 sends the recorded highlight details to the video clipping unit 613 in the screen recording module 610 for video clipping, so as to obtain the target video segment.
In an optional embodiment, determining whether the screenshot image meets the video file production requirement may be determined by matching between the image content and the image content list, or by determining whether the image content changes, where fig. 7 is a flowchart of a video file production method provided in another exemplary embodiment of the present application, and is described by taking an example in which the method is applied to a terminal, as shown in fig. 7, the method includes:
and 701, acquiring a screen recording instruction for recording the display content of the terminal display screen.
Optionally, the screen recording instruction is used for instructing the terminal to start recording the display content of the display screen, and optionally, the screen recording instruction is an instruction corresponding to a screen recording starting function.
Optionally, the screen recording instruction may be automatically generated by the terminal according to an application program currently running in the foreground, or may be obtained by the user after selecting the screen recording opening control in the application program.
Optionally, the obtaining manner of the screen recording instruction is described in detail in step 201, and is not described herein again.
Step 702, collecting the display content of the display screen in the form of video stream to perform screen recording processing.
Optionally, in the process of acquiring the display content of the display screen in the form of a video stream, the display content of the display screen is acquired frame by frame, and the acquired image frame is labeled with a corresponding timestamp, so as to generate a video stream corresponding to the display content.
And 703, acquiring a screenshot image of the display content from the acquired video stream in the screen recording process.
Optionally, the captured image corresponds to a timestamp in the video stream.
Optionally, the screenshot image may be obtained during the screen recording process, or may be obtained uniformly from the video stream after the screen recording is completed.
Optionally, the manner of obtaining the screenshot image has been described in detail in step 203, and is not described herein again.
Step 704, matching the first image content in the screenshot image with the required content in the image content list.
Optionally, the image content list includes at least one of a requirement character and a requirement graph, where the requirement character is used for matching with character content in the first image content, and the requirement graph is used for matching with a graph appearing in the first image content.
Optionally, before the first image content is matched with the required content, the screenshot image is cut to obtain a cut area, and whether the first image content in the screenshot image meets the required content is determined through the cut area.
Optionally, the manner of determining whether the content of the first image in the screenshot image meets the required content through the cropping area includes any one of the following manners:
firstly, character recognition is carried out on a cutting area, character content in the cutting area is obtained and used as first image content, and required characters in an image content list are matched with the character content.
Secondly, detecting the image content in the cutting area, and matching the obtained graph detection result with the required graph in the image content list.
Step 705, when the required content in the image content list is correspondingly matched with the first image content, determining that the screenshot image meets the video file production requirement.
Optionally, regarding the above-mentioned manner for determining whether the first image content in the screenshot image meets the requirement content, at least one of the following two cases is included:
firstly, when the characters required to be matched with the character content exist in the image content list, determining that the screenshot image meets the video file making requirement;
illustratively, the image content list includes the required characters "beat-beat", "triple win-win", and "frontal awnbilu". Referring to fig. 8, a current game picture is displayed in a game interface 810. The screenshot image corresponding to the game picture is cut according to a preset cutting rule to obtain a cutting area 820, and character recognition is performed on the cutting area 820 to obtain the character content "beat-beat". The character content is matched with the required characters in the image content list; since the required character "beat-beat" in the image content list matches the character content, the screenshot image meets the video file production requirement.
Illustratively, the image content list includes a required character "n beat", where the character "n" may be implemented as any numerical value, please refer to fig. 9, a current game screen is displayed in a game interface 910, a screenshot image corresponding to the game screen is clipped according to a preset clipping rule to obtain a clipping area 920, character recognition is performed on the clipping area 920 to obtain a character content "1 beat", the character content is matched with the required character in the image content list, and the required character "n beat" in the image content list is matched with the character content, so that the screenshot image meets a video file production requirement.
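The character matching just described, including the numeric wildcard of the "n beat" entry, could be sketched as follows; treating a standalone "n" token as the wildcard is an assumption about the list's syntax:

```python
import re

def meets_requirement(text, required):
    """Match recognized character content against the required-character
    list. A token that is exactly 'n' is treated as the numeric wildcard
    ("any numerical value") from the example above."""
    for req in required:
        tokens = [r"\d+" if tok == "n" else re.escape(tok)
                  for tok in req.split()]
        if re.match("^" + r"\s+".join(tokens) + "$", text):
            return True
    return False

meets_requirement("1 beat", ["beat-beat", "n beat"])   # → True
meets_requirement("no match", ["beat-beat", "n beat"]) # → False
```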
Secondly, when the required graph in the image content list is matched with the graph detection result, the screenshot image is determined to meet the video file making requirement.
Schematically, taking circle detection as the graph detection example: after the screenshot image is cut to obtain a cutting area, the cutting area is detected by a Hough circle detection algorithm. If a circle is detected in the cutting area, the circle is matched with the graphs in the image content list; if a graph matching the circle pattern is found in the image content list, the screenshot image meets the video file production requirement.
Optionally, when matching the required graph with the graph detection result, the matching may be performed through a plurality of feature points, for example through 40 feature points, where the required graph is determined to match the graph detection result when more than 90% of the feature points satisfy the mutual matching relationship. Optionally, the matching process may also be performed through a matching algorithm such as a template matching algorithm, a mean hash matching algorithm, or a perceptual hash matching algorithm, which is not limited in the embodiments of the present application.
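As a small illustration of the feature-point criterion above (40 points and a 90% agreement threshold — both values are the document's examples, and the helper name is an assumption):

```python
def graphs_match(matched_points, total_points=40, threshold=0.9):
    """The required graph and the graph detection result are considered
    matched when more than `threshold` of the feature points agree."""
    return matched_points / total_points > threshold

graphs_match(37)  # 37/40 = 92.5% > 90% → True
graphs_match(30)  # 30/40 = 75%         → False
```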
Optionally, after it is determined that the screenshot image meets the video file production requirement, a second clipping region may be obtained by clipping the screenshot image according to another clipping rule, and the highlight category corresponding to the screenshot image is determined according to a graphic detection result of the second clipping region.
Referring to fig. 10, schematically, after a first cropping is performed on a screenshot image 1010, a first cropping area 1020 is obtained. The Hough circle detection algorithm detects that the first cropping area 1020 includes a circle, and the feature matching algorithm determines that 36 feature points of the circle pattern match the pattern in the image content list well, so the screenshot image 1010 is determined to meet the video file production requirement. A second cropping is then performed on the screenshot image 1010 to obtain a second cropping area 1030, which is likewise detected by the Hough circle detection algorithm. When a circle is detected in the second cropping area 1030, the screenshot image 1010 belongs to a first highlight category: the kill category; when no circle is detected in the second cropping area 1030, the screenshot image 1010 belongs to a second highlight category: the tower-push/king-slaying category.
Optionally, in the game process, when the user needs to select a game character, the required graphics in the image content list are different according to the difference of the game character, and then the required graphics in the image content list are determined according to the game character selected by the user after the game character selected by the user is determined.
Referring to fig. 11, schematically, a game match interface 1110 is shown, through which the game character selected by the player in the match is determined. First, the game match interface 1110 is cut according to a preset cutting rule to obtain a first cutting area 1120, and whether a circle of a specific size exists in the first cutting area 1120 is detected by the Hough circle detection algorithm provided by OpenCV. The Hough circle detection algorithm detects whether a circular contour exists in an image according to set parameters, which include: the circle radius range (e.g., a minimum radius of 45 pixels and a maximum radius of 55 pixels for a 1920 x 1080 resolution handset), the minimum distance between circles (e.g., 100 pixels for a 1920 x 1080 resolution handset), the detection confidence (the larger this parameter, the more circles are detected), and other parameters. Then the first cutting area is detected to obtain a detection result 1130; a target circle 1140 is determined and cut out according to the center and radius of each circle in the detection result 1130, the unnecessary parts are removed to obtain a circular portion 1150, the circular portion 1150 is matched against the character patterns in the role table 1160 to determine the character corresponding to the circular portion, and the required graph 1170 in the image content list is determined according to that character.
Optionally, when there are many characters in the role table 1160, a preliminary screening is performed using the perceptual hash algorithm in OpenCV. The matching principle of the perceptual hash algorithm is as follows: compute the perceptual hash values of two images, then compute the Hamming distance between the two values; the smaller the Hamming distance, the higher the similarity between the two images. The perceptual hash value of each character pattern in the role table 1160 is computed in advance; when the circular portion 1150 needs to be matched with the character patterns in the role table 1160, the perceptual hash value of the circular portion 1150 is computed and its Hamming distance to each character pattern is calculated, thereby realizing the preliminary screening. Optionally, in the preliminary screening, character patterns whose Hamming distance is smaller than a preset distance are selected as candidates, and the preset distance may take a value of 15.
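The preliminary screening above might look like the sketch below. For self-containment it uses a simple average hash over a small grayscale pixel list as a stand-in for OpenCV's perceptual hash; the helper names and role data are assumptions:

```python
def avg_hash(pixels):
    """Bit i is set when pixel i is at least the mean — an average-hash
    stand-in for the perceptual hash described above."""
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p >= mean)

def hamming(h1, h2):
    """Hamming distance between two hash values; smaller = more similar."""
    return bin(h1 ^ h2).count("1")

def prescreen(portrait, roles, max_dist=15):
    """Keep role patterns whose hash distance to the cropped circular
    portrait is below the preset distance (15 in the document's example)."""
    ph = avg_hash(portrait)
    return [name for name, px in roles if hamming(ph, avg_hash(px)) < max_dist]
```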
Optionally, the result of the preliminary screening may include a plurality of character patterns, and the circular portion 1150 is then accurately matched against the candidate character patterns. The accurate matching may employ an image feature matching algorithm provided by OpenCV, where the features of an image refer to its edges, corner points, texture, and the like; these elements reflect the overall characteristics of the image, and when the features of two images follow a similar distribution rule, the similarity of the two images is higher. Optionally, the image feature matching algorithm includes at least one of: the Speeded-Up Robust Features (SURF) matching algorithm, the Scale-Invariant Feature Transform (SIFT) algorithm, and the Oriented FAST and Rotated BRIEF (ORB) feature matching algorithm. Illustratively, the ORB feature matching algorithm computes feature points of two images; when the distributions of the feature points of the two images follow a similar rule, a pair of feature points is determined to be successfully matched, and the greater the number of successfully matched feature points, the higher the similarity of the two images. As shown in fig. 12, two points connected by a line between the first image 1210 and the second image 1220 are a pair of successfully matched feature points.
Step 706, comparing the first image content in the screenshot image with the second image content in the most recent screenshot image before the screenshot image.
Optionally, before comparing the first image content with the second image content, the screenshot image and the latest screenshot image before the screenshot image are cut to obtain a cutting area of the mth screenshot image and a cutting area of the (m-1) th screenshot image, and the image contents in the two cutting areas are compared, wherein m is greater than or equal to 2.
And step 707, when the content of the first image changes relative to the content of the second image, determining that the screenshot image meets the production requirement of the video file.
Referring to fig. 13, schematically, a current game screen is displayed in a game interface 1310, a screenshot image corresponding to the game screen is cut according to a preset cutting rule to obtain a cutting area 1320, a screenshot image immediately before the screenshot image is cut according to the same preset rule to obtain a cutting area 1330, the cutting area 1320 is compared with the cutting area 1330, and the content of the cutting area 1320 changes relative to the cutting area 1330, so that the screenshot image corresponding to the game interface 1310 meets the video file production requirement.
Referring to fig. 14, schematically, a current game screen is displayed in a game interface 1410, a screenshot image corresponding to the game screen is cut according to a preset cutting rule to obtain a cutting area 1420, a screenshot image immediately before the screenshot image is cut according to the same preset rule to obtain a cutting area 1430, the cutting area 1420 is compared with the cutting area 1430, and the content of the cutting area 1420 does not change relative to the cutting area 1430, so that the screenshot image corresponding to the game interface 1410 does not meet the video file production requirement.
Optionally, when the screenshot image does not meet the video file production requirement, the next screenshot image is continuously obtained for image recognition.
It should be noted that the first case, corresponding to steps 704 to 705, and the second case, corresponding to steps 706 to 707, may be used independently or together: the determination may be implemented by the first case only, by the second case only, by the second case first and then the first case, or by both cases. The screenshot image is determined to meet the video file production requirement when either the first case or the second case is satisfied.
And 708, recording the time stamp of the screenshot image in the video stream when the screenshot image meets the video file making requirement.
And 709, editing the video file according to the screenshot image meeting the video file making requirement and the corresponding timestamp to obtain a target video clip.
Optionally, the video can be edited after the recording of the screen is finished to obtain the complete video stream, or the video can be directly edited when the captured image is identified to meet the video file production requirement.
Optionally, when the screen recording process needs to be ended before the target video segment is clipped from the video stream, the screen recording may be ended automatically by identifying the display content of the display screen and ending the recording when the identification result meets an end condition; alternatively, the screen recording process may be ended manually by the user, for example by clicking a screen recording control.
Optionally, when the screenshot image does not meet the video file production requirement, the screenshot image is cut to obtain a cutting area used to judge whether the battle process has ended, the cutting area is matched against a preset template, and when the matching succeeds, it is determined that the battle process has ended and the screen recording process is ended. Referring to fig. 15, a screenshot image 1510 is cut to obtain a cutting area 1520 for judging whether the battle process has ended, and the cutting area 1520 is matched against two preset templates: a first template 1531 and a second template 1532. If the cutting area 1520 is successfully matched with the first template 1531, it is determined that the battle process has ended, and the screen recording process is ended. Optionally, the matching of the cutting area 1520 with the first template 1531 and the second template 1532 may be performed by a feature matching algorithm, as described in step 705.
Alternatively, the end of the battle may be determined by performing character recognition on the cutting area and determining that the battle process has ended when the recognized characters are identical to the template characters.
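The character-recognition variant of the end-of-battle check might reduce to a comparison like the one below; the template strings are placeholders, not values from the document:

```python
def battle_ended(recognized, templates=("VICTORY", "DEFEAT")):
    """The battle process is considered ended when the characters
    recognized in the cutting area equal a template string."""
    return recognized.strip().upper() in templates

battle_ended(" victory ")  # → True
battle_ended("mid-battle") # → False
```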
Optionally, when the target video segment is clipped from the video stream according to the timestamp, any one of the following manners is included:
firstly, taking a time stamp corresponding to a screenshot image as an ending time stamp of a target video clip, and acquiring a video clip with a preset time length before the ending time stamp from a video stream as the target video clip;
secondly, determining a timestamp with a first preset duration before a timestamp corresponding to the screenshot image as a starting timestamp of the target video clip; and determining a timestamp with a second preset time length after the timestamp corresponding to the screenshot image as an ending timestamp of the target video segment, and editing to obtain a video segment between the starting timestamp and the ending timestamp as the target video segment.
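The two clipping strategies above can be sketched as a window computation on the recorded timestamps; the default durations are illustrative, not values fixed by the document:

```python
def clip_window(ts, mode, pre=10.0, post=5.0, stream_start=0.0):
    """Return the (start, end) timestamps of the target video segment.

    mode 1: ts ends the clip; take the preceding `pre` seconds.
    mode 2: clip runs from ts - pre to ts + post.
    The start is clamped to the beginning of the video stream.
    """
    start = max(stream_start, ts - pre)
    end = ts if mode == 1 else ts + post
    return (start, end)

clip_window(30.0, 1)  # → (20.0, 30.0)
clip_window(30.0, 2)  # → (20.0, 35.0)
```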
In summary, according to the video file production method provided by this embodiment, after the display content of the display screen is recorded, the screenshot images in the video stream obtained by screen recording are matched against the video file production requirement, and when a screenshot image meets the requirement, the target video segment is clipped from the video stream according to the timestamp of that screenshot image. This implements automatic clipping of the screen-recorded video: the user no longer needs to preview the entire recording and manually clip segments from it after recording, the target video segment is clipped automatically according to the video file production requirement, and the clipping efficiency of the target video segment is improved.
According to the method provided by this embodiment, the first image content in the screenshot image is matched with the required content in the image content list, and when the two match, it is determined that the screenshot image meets the video file production requirement. That is, whether a screenshot image meets the requirement is determined from its image content, the target video segment is clipped accordingly, and the clipping efficiency of the target video segment is improved.
In the method provided by this embodiment, the first image content of the screenshot image is compared with the second image content of the previous screenshot image, and when the first image content has changed relative to the second image content, that is, when the display content has changed in a staged manner, it is determined that the screenshot image meets the video file production requirement, the target video segment is clipped accordingly, and the clipping efficiency of the target video segment is improved.
In the method provided by this embodiment, the target video segment is obtained by clipping the video content of a preset duration before the timestamp, which ensures that the video content leading up to the staged change in the display content is clipped into the target video segment; for example, after an opponent is defeated in a game match, the process leading up to the successful defeat is clipped into the target video segment.
In the method provided by this embodiment, the target video segment is obtained by clipping the video content a first preset duration before the timestamp and a second preset duration after it, which avoids discarding highlight video content after the timestamp and ensures that the video content both before and after the staged change in the display content is clipped into the target video segment.
In an optional embodiment, the video stream includes n screenshot images that meet the video file production requirement, that is, the n screenshot images correspond to n target video segments, where n is a positive integer. Fig. 16 is a flowchart of a video file production method provided in another exemplary embodiment of the present application; the method is described, by way of example, as applied to a terminal and performed after step 204 or step 708 above. As shown in fig. 16, the method includes:
Step 1601, determine whether intersection sub-segments exist among the n target video segments.
Optionally, an intersection sub-segment refers to an overlapping portion shared by at least two target video segments.
Referring to fig. 17, schematically, fig. 17 shows the time points at which events occur on the screen recording time axis. As shown in fig. 17, the screen recording time axis includes a screen recording start time, a game match start time, highlight moment 1, highlight moment 2, highlight moment 3, and a match end time, where highlight moment 1 corresponds to target video segment 1710, highlight moment 2 corresponds to target video segment 1720, and highlight moment 3 corresponds to target video segment 1730. As can be seen from fig. 17, no intersection sub-segments exist among the three target video segments.
Referring to fig. 18, schematically, fig. 18 shows the time points at which events occur on the screen recording time axis. As shown in fig. 18, the screen recording time axis includes a screen recording start time, a game match start time, highlight moment 1, highlight moment 2, highlight moment 3, and a match end time, where highlight moment 1 corresponds to target video segment 1810, highlight moment 2 corresponds to target video segment 1820, and highlight moment 3 corresponds to target video segment 1830. As can be seen from fig. 18, target video segment 1810 and target video segment 1820 have an overlapping portion, that is, they have an intersection sub-segment.
Step 1602, when no intersection sub-segments exist among the n target video segments, splice the n target video segments to obtain a clipped video.
Optionally, when the n target video segments are spliced, they may be directly concatenated, or transition content may be added between every two target video segments, where the transition content may be a sequence-number count of the target video segments, a transition video, a transition image, or a black screen, which is not limited in this embodiment of the application.
For an example of directly concatenating n target video segments, please refer to fig. 19: on the basis of target video segment 1710, target video segment 1720, and target video segment 1730 in fig. 17, the three target video segments are spliced to obtain a clipped video 1900.
Step 1603, when at least two of the n target video segments have an intersection sub-segment, determine a union video segment of the at least two target video segments.
Optionally, when determining the union video segment of the at least two target video segments, the respective start and end timestamps of the at least two target video segments may be determined; the earliest of the start timestamps is selected as the start timestamp of the union video segment, and the latest of the end timestamps is selected as the end timestamp of the union video segment.
Step 1604, splice the target video segments other than the at least two target video segments with the union video segment to obtain the clipped video.
Optionally, when the target video segments and the union video segment are spliced, they may be directly concatenated, or transition content may be added between every two video segments.
Referring to fig. 20, on the basis of target video segment 1810, target video segment 1820, and target video segment 1830 in fig. 18, since target video segment 1810 and target video segment 1820 have an intersection sub-segment, the two are merged to obtain a union video segment 2010, and the union video segment 2010 is spliced with target video segment 1830 to obtain the clipped video 2000.
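Steps 1601 to 1604 amount to the classic merge-overlapping-intervals computation. In the hypothetical sketch below, each target video segment is a `(start, end)` timestamp pair; segments that share an intersection sub-segment are replaced by their union video segment (earliest start, latest end), and the result is returned in splicing order.

```python
def merge_segments(segments):
    """Replace any segments that share an intersection sub-segment with
    their union video segment; keep non-overlapping segments as-is."""
    merged = []
    for start, end in sorted(segments):
        if merged and start <= merged[-1][1]:
            # Intersection sub-segment exists: extend into a union segment.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

Splicing the returned list in order (optionally with transition content between every two segments) produces the clipped video of figs. 19 and 20.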
Referring to fig. 21, a preview list of clipped videos including clip video 2110 and clip video 2120 is displayed in a highlight show interface 2100. Optionally, each clipped video corresponds to a sharing control; as shown in fig. 21, clip video 2110 corresponds to a sharing control 2111, and after the user selects the sharing control, clip video 2110 is shared with other application programs, for example, instant messaging applications, social applications, and the like.
In summary, according to the video file production method provided by this embodiment, after the display content of the display screen is recorded, the screenshot images in the video stream obtained by screen recording are matched against the video file production requirement, and when a screenshot image meets the requirement, the target video segment is clipped from the video stream according to the timestamp of that screenshot image. This implements automatic clipping of the screen-recorded video: the user no longer needs to preview the entire recording and manually clip segments from it after recording, the target video segment is clipped automatically according to the video file production requirement, and the clipping efficiency of the target video segment is improved.
According to the method provided by this embodiment, the plurality of target video segments are spliced to generate the clipped video, which avoids the cumbersome viewing process of watching different target video segments separately, displays multiple highlight segments in a single clipped video, and improves the splicing efficiency of the clipped video.
Schematically, fig. 22 is an overall flowchart of a video file production method provided in another exemplary embodiment of the present application; the method is described, by way of example, as applied to a game management program. As shown in fig. 22, the method includes:
step 2201, the video acquisition module starts to record the screen.
Optionally, after the game management program obtains the screen recording permission of the terminal, the video acquisition module performs screen recording on the display content of the terminal display screen.
Optionally, the screen recording opening manner is described in detail in step 201, and is not described herein again.
Step 2202, the image acquisition module periodically acquires images.
Optionally, the image acquisition module may capture images from the video stream acquired by the video acquisition module, or may directly capture the display content of the display screen.
At step 2203, a different image processing scheme is selected based on the currently running game.
Optionally, the location at which specific content is displayed in the user interface differs between games. Illustratively, the kill prompt message in game A is displayed in a first area, and the kill prompt message in game B is displayed in a second area; therefore the first area is cropped for game A and the second area is cropped for game B.
Step 2204, cutting out the target area A according to the preset configuration.
Step 2205, according to the image processing algorithm, it is determined whether there is an expected image element in the target area a.
Optionally, the expected image element may be at least one of an expected character, an expected pattern, and an expected variation.
Step 2206, when the expected image element appears in the target area A, it is determined that a highlight moment is recognized, and the moment is recorded and saved.
Optionally, when the expected image element appears in target area A, the image containing target area A is an image corresponding to a highlight event, and the highlight moment corresponding to target area A is recorded and saved.
Step 2207, continue to crop other target areas and perform highlight moment identification.
Optionally, a plurality of different highlight types may exist in one image; the different highlight types are identified by separately cropping and recognizing a plurality of image areas.
In step 2208, it is determined whether the game play is over.
Optionally, the area content in a preset area of the image is matched against the template to determine whether the game match is finished, and when the game match is finished, the screen recording process is ended.
Step 2209, when the game match is finished, video clipping is performed to obtain the target video segments.
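Steps 2203 to 2206 can be sketched as a per-game configuration lookup followed by an element check. Everything here is hypothetical: the configuration keys, the crop boxes, and the `has_expected_element` predicate stand in for the preset configuration and the image processing algorithm that the patent leaves abstract.

```python
# Hypothetical per-game configuration: each game maps to the crop box
# (top, left, height, width) of the area in which its prompt content appears.
GAME_CROP_CONFIG = {
    "game_a": {"kill_area": (0, 100, 20, 60)},
    "game_b": {"kill_area": (40, 0, 20, 60)},
}

def detect_highlight(screenshot, game, has_expected_element, timestamp, highlights):
    """Pick the crop box for the currently running game, crop target area A,
    and if the expected image element appears, record the timestamp as a
    highlight moment (steps 2203-2206)."""
    top, left, height, width = GAME_CROP_CONFIG[game]["kill_area"]
    area = [row[left:left + width] for row in screenshot[top:top + height]]
    if has_expected_element(area):
        highlights.append(timestamp)
        return True
    return False
```

Step 2207 would repeat the same call with the crop boxes of the other target areas of the image.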
Referring to fig. 23, the video file production method is described with reference to the game management program and a first game. As shown in fig. 23, the method includes:
step 2301, the image acquisition module periodically acquires images.
Optionally, after the game management program obtains the screen recording permission of the terminal, the video acquisition module records the display content of the terminal's display screen. Optionally, after screen recording starts, the image acquisition module may capture images from the video stream acquired by the video acquisition module, or may directly capture the display content of the display screen.
Step 2302, identify whether a game match has started.
Step 2303, when the game match has started, identify whether a kill occurs in the game match.
Optionally, whether a kill occurs in the game match is identified by recognizing the image content in the first preset area.
Step 2304, when a kill occurs in the game match, record the highlight moment, and repeatedly execute step 2301.
Step 2305, when no kill occurs in the game match, identify whether a death occurs in the game match.
That is, when no kill is recognized in the first preset area, the second preset area is cropped and recognized to determine whether a death occurs in the game match.
Step 2306, when a death occurs in the game match, record the highlight moment, and repeatedly execute step 2301.
Step 2307, when no death occurs in the game match, identify whether the game match is over.
Step 2308, when the game match is over, clip according to the recorded highlight moments to obtain the target video segments; otherwise, repeatedly execute step 2301.
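The branching of steps 2301 to 2308 can be sketched as a loop over periodically captured frames. The three predicate functions are hypothetical stand-ins for the image recognition performed on the first preset area, the second preset area, and the match-end area.

```python
def run_match_loop(frames, is_kill, is_death, is_match_over):
    """For each (timestamp, frame) pair: record a highlight moment when a
    kill or, failing that, a death is recognized; stop when the game match
    is over. The returned timestamps drive the clipping of step 2308."""
    highlights = []
    for ts, frame in frames:
        # `or` short-circuits: the death area is checked only when no kill
        # is recognized, matching step 2305.
        if is_kill(frame) or is_death(frame):
            highlights.append(ts)
        elif is_match_over(frame):
            break
    return highlights
```

The loop deliberately keeps recognizing after a highlight, since several highlight moments can occur in one match (cf. figs. 17 and 18).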
Schematically, the process of fig. 23 is described with reference to fig. 24. As shown in fig. 24, after the game match starts, the game character is identified from the game interface 2410 (please refer to step 705 above) and determined to be an angel; after the screenshot image corresponding to the game interface 2420 is captured, a highlight moment is determined, and the timestamp of that screenshot image is recorded. When the terminal displays the game interface 2430, it is determined from recognition of the game interface 2430 that the game match is over; the target video segment is generated according to the timestamps of the highlight moments, and a target video segment display interface 2440 is displayed.
Fig. 25 is a block diagram of a video file production apparatus according to an exemplary embodiment of the present application. As shown in fig. 25, the apparatus is described as applied to a terminal, and includes: an obtaining module 2510, an acquiring module 2520, a recording module 2530, and a clipping module 2540;
an obtaining module 2510, configured to obtain a screen recording instruction for recording a display content of a display screen of the terminal;
an acquiring module 2520, configured to acquire display content of the display screen in a form of a video stream to perform screen recording processing;
the obtaining module 2510 is further configured to obtain a screenshot image of the display content from the acquired video stream during the screen recording process, where the process of obtaining the screenshot image includes any one of obtaining in a preset period and obtaining according to a obtaining condition;
a recording module 2530, configured to record a corresponding timestamp of the screenshot image in the video stream when the screenshot image meets a video file production requirement;
and the clipping module 2540 is configured to perform clipping processing on the video file according to the screenshot image meeting the video file production requirement and the corresponding timestamp, to obtain the target video segment.
In an alternative embodiment, as shown in fig. 26, the apparatus further comprises:
a matching module 2550, configured to match first image content in the screenshot image with required content in an image content list;
the matching module 2550 is further configured to determine that the screenshot image meets the video file production requirement when the required content in the image content list correspondingly matches the first image content.
In an optional embodiment, the matching module 2550 is further configured to crop the screenshot image according to a preset cropping rule to obtain a cropping area;
the matching module 2550 is further configured to perform character recognition on the cropping area, obtain character content in the cropping area as the first image content, and match the character content with a required character in the image content list; and/or, carrying out pattern detection on the image content in the cutting area, and matching the detected pattern detection result with the required pattern in the image content list.
In an optional embodiment, the apparatus further comprises:
a matching module 2550, configured to compare first image content in the screenshot image with second image content in a last screenshot image before the screenshot image; and when the content of the first image changes relative to the content of the second image, determining that the screenshot image meets the production requirement of the video file.
In an optional embodiment, the obtaining module 2510 is further configured to receive a screen recording function start signal and start a pre-recording function, where the pre-recording function is used to monitor the running process of the terminal and start the screen recording function when the running process of the terminal meets the screen recording condition; and when the application program running in the terminal is the target application program, determine that the running process of the terminal meets the screen recording condition, and obtain the screen recording instruction for recording the display content of the display screen of the terminal.
In an optional embodiment, the clipping module 2540 is further configured to use a timestamp corresponding to the screenshot image as an end timestamp of the target video segment, and obtain, from the video stream, a video segment with a preset duration before the end timestamp as the target video segment.
In an optional embodiment, the clipping module 2540 is further configured to determine a timestamp which is a first preset time length before the timestamp corresponding to the screenshot image, as the start timestamp of the target video segment; determining a timestamp of a second preset time length after the timestamp corresponding to the screenshot image as the ending timestamp of the target video clip; and editing to obtain the video segment between the start time stamp and the end time stamp as the target video segment.
In an optional embodiment, the video stream includes n screenshot images meeting the video file production requirement, n target video segments correspond to the n screenshot images, and n is a positive integer;
the device, still include:
a splicing module 2560, configured to splice the n target video segments to obtain the clipped video when no intersection sub-segments exist among the n target video segments;
the splicing module 2560 is further configured to, when at least two target video segments in the n target video segments have an intersection sub-segment, determine a union video segment of the at least two target video segments, and splice other target video segments except the at least two target video segments with the union video segment to obtain the clip video.
In an optional embodiment, the obtaining module 2510 is further configured to obtain the currently acquired display content of the display screen every preset time interval during the screen recording process, so as to obtain a screenshot image of the display content;
or,
the obtaining module 2510 is further configured to obtain every i-th image frame from the video stream as a screenshot image of the display content, where the interval duration between the timestamps of two adjacent obtained image frames is the preset duration;
or,
the obtaining module 2510 is further configured to obtain a screenshot image of the display content when a change rule between image frames meets a preset rule;
or,
the obtaining module 2510 is further configured to obtain a screenshot image of the display content when the image content of the image frame meets a preset content requirement.
In summary, after the display content of the display screen is recorded, the video file production apparatus provided by this embodiment matches the screenshot images in the video stream obtained by screen recording against the video file production requirement, and when a screenshot image meets the requirement, clips the target video segment from the video stream according to the timestamp of that screenshot image. This implements automatic clipping of the screen-recorded video: the user no longer needs to preview the entire recording and manually clip segments from it after recording, the target video segment is clipped automatically according to the video file production requirement, and the clipping efficiency of the target video segment is improved.
It should be noted that: the video file production apparatus provided in the foregoing embodiment is only illustrated by dividing the functional modules, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the video file production apparatus and the video file production method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiments, and are not described herein again.
Fig. 27 is a block diagram illustrating a structure of a terminal 2700 according to an exemplary embodiment of the present invention. The terminal 2700 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 2700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
In general, the terminal 2700 includes: a processor 2701 and memory 2702.
The processor 2701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 2701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), a Field-Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA). The processor 2701 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in a wake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 2701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 2701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 2702 may include one or more computer-readable storage media, which may be non-transitory. Memory 2702 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 2702 is used to store at least one instruction for execution by processor 2701 to implement the video file production methods provided by the method embodiments herein.
In some embodiments, the terminal 2700 may further include: peripheral interface 2703 and at least one peripheral. The processor 2701, the memory 2702, and the peripheral interface 2703 may be connected by a bus or signal lines. Various peripheral devices may be connected to peripheral interface 2703 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: radio frequency circuitry 2704, touch display 2705, camera 2706, audio circuitry 2707, positioning components 2708, and power source 2709.
The peripheral interface 2703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 2701 and the memory 2702. In some embodiments, processor 2701, memory 2702, and peripherals interface 2703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 2701, the memory 2702, and the peripheral interface 2703 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 2704 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 2704 communicates with communications networks and other communications equipment via electromagnetic signals. The radio frequency circuit 2704 converts an electrical signal into an electromagnetic signal to transmit the electromagnetic signal, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuitry 2704 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. Radio frequency circuitry 2704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 2704 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 2705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 2705 is a touch display, the display 2705 also has the ability to capture touch signals on or over the surface of the display 2705. The touch signal may be input to the processor 2701 as a control signal for processing. At this point, the display 2705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 2705 may be one, providing a front panel of the terminal 2700; in other embodiments, the display 2705 can be at least two, respectively disposed on different surfaces of the terminal 2700 or in a folded design; in still other embodiments, the display 2705 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 2700. Even more, the display 2705 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display 2705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 2706 is used to capture images or video. Optionally, camera assembly 2706 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of a terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 2706 may also include a flash. The flash lamp can be a single-color temperature flash lamp or a double-color temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuitry 2707 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 2701 for processing or inputting the electric signals to the radio frequency circuit 2704 to achieve voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location on the terminal 2700. The microphone may also be an array microphone or an omni-directional acquisition microphone. The speaker is used to convert electrical signals from the processor 2701 or the radio frequency circuitry 2704 into sound waves. The loudspeaker can be a traditional film loudspeaker and can also be a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 2707 may also include a headphone jack.
The positioning component 2708 is configured to locate the current geographic location of the terminal 2700 to implement navigation or LBS (Location Based Service). The positioning component 2708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 2709 is used to supply power to the various components in the terminal 2700. The power source 2709 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power source 2709 includes a rechargeable battery, the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery can also be used to support fast charge technology.
In some embodiments, terminal 2700 also includes one or more sensors 2710. The one or more sensors 2710 include, but are not limited to: acceleration sensor 2711, gyro sensor 2712, pressure sensor 2713, fingerprint sensor 2714, optical sensor 2715, and proximity sensor 2716.
The acceleration sensor 2711 can detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 2700. For example, the acceleration sensor 2711 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 2701 can control the touch display screen 2705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 2711. The acceleration sensor 2711 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 2712 can detect the body orientation and rotation angle of the terminal 2700 and can cooperate with the acceleration sensor 2711 to capture the user's 3D motion of the terminal 2700. From the data collected by the gyro sensor 2712, the processor 2701 can implement functions such as motion sensing (for example, changing the UI in response to a tilting gesture), image stabilization while shooting, game control, and inertial navigation.
Pressure sensors 2713 may be disposed on the side frame of the terminal 2700 and/or beneath the touch display 2705. When disposed on the side frame, the pressure sensor 2713 can detect the user's grip on the terminal 2700, and the processor 2701 can perform left/right-hand recognition or shortcut operations based on the collected grip signal. When disposed beneath the touch display 2705, the processor 2701 controls the operability controls on the UI according to the pressure the user applies to the touch display 2705. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 2714 collects the user's fingerprint; either the processor 2701 identifies the user from the fingerprint collected by the fingerprint sensor 2714, or the fingerprint sensor 2714 itself identifies the user from the collected fingerprint. Once the user's identity is verified as trusted, the processor 2701 authorizes the user to perform sensitive operations such as unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 2714 may be provided on the front, back, or side of the terminal 2700. When a physical button or a vendor logo is provided on the terminal 2700, the fingerprint sensor 2714 may be integrated with it.
The optical sensor 2715 collects the ambient light intensity. In one embodiment, the processor 2701 controls the display brightness of the touch display 2705 based on the ambient light intensity collected by the optical sensor 2715: when the ambient light is strong, the display brightness is increased; when the ambient light is weak, the display brightness is decreased. In another embodiment, the processor 2701 may also dynamically adjust the shooting parameters of the camera assembly 2706 based on the ambient light intensity collected by the optical sensor 2715.
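A minimal sketch of the brightness adjustment just described, assuming a linear mapping from lux to backlight level (real devices typically use a perceptual, log-like curve; all names and ranges here are illustrative):

```python
def brightness_from_lux(lux, min_level=10, max_level=255, max_lux=1000.0):
    """Map ambient light intensity (lux) to a display backlight level.

    Linear interpolation between min_level and max_level, clamped at
    both ends so very dark or very bright environments stay in range.
    """
    frac = min(max(lux / max_lux, 0.0), 1.0)
    return round(min_level + frac * (max_level - min_level))
```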
The proximity sensor 2716, also known as a distance sensor, is typically disposed on the front panel of the terminal 2700 and measures the distance between the user and the front of the terminal 2700. In one embodiment, when the proximity sensor 2716 detects that this distance is gradually decreasing, the processor 2701 controls the touch display 2705 to switch from the screen-on state to the screen-off state; when the proximity sensor 2716 detects that the distance is gradually increasing, the processor 2701 controls the touch display 2705 to switch from the screen-off state back to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 27 does not constitute a limitation of terminal 2700, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
An embodiment of the present application further provides a computer device that includes a memory and a processor. The memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded by the processor to implement the video file production method described above.
An embodiment of the present application further provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the video file production method described above.
The present application further provides a computer program product which, when run on a computer, causes the computer to execute the video file production method provided by the above method embodiments.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by hardware executing program instructions. The program may be stored in a computer-readable storage medium, which may be the computer-readable storage medium contained in the memory of the above embodiments, or a separate computer-readable storage medium not incorporated into the terminal. The computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the video file production method described above.
Optionally, the computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a solid-state drive (SSD), an optical disc, or the like. The random access memory may include a resistive random access memory (ReRAM) and a dynamic random access memory (DRAM). The serial numbers of the above embodiments of the present application are for description only and do not indicate the relative merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (9)

1. A method of video file production, the method comprising:
acquiring a screen recording instruction for recording the display content of the terminal display screen;
acquiring display content of the display screen in the form of a video stream to perform screen recording, wherein the display content is captured frame by frame and each captured image frame is marked with a corresponding timestamp to generate a video stream of the display content, the granularity of the timestamp marking being determined according to the frequency at which the display content is captured;
in the screen recording process, when the image content of the image frame in the video stream meets the requirement of preset content, acquiring a screenshot image of the display content;
cropping the screenshot image according to a preset cropping rule to obtain a cropped region;
performing character recognition on the cropped region to obtain text content in the cropped region as first image content, and matching the text content against required text in an image content list;
performing graphic detection on the image content in the cropped region, and matching the graphic detection result against a required graphic in the image content list;
when the required text in the image content list matches the first image content and/or the required graphic matches the graphic detection result, determining that the screenshot image meets a video file production requirement;
when the screenshot image meets the video file production requirement, recording the corresponding timestamp of the screenshot image in the video stream, wherein the video stream comprises n screenshot images meeting the video file production requirement, the n screenshot images correspond to n target video clips, and n is a positive integer;
clipping the video stream according to each screenshot image meeting the video file production requirement and its corresponding timestamp to obtain a target video clip;
when the n target video clips have no intersecting sub-segments, splicing the n target video clips and adding linking content between every two target video clips for transition to obtain a clipped video, the linking content comprising a transition video;
when at least two of the n target video clips have intersecting sub-segments, determining a union video clip of the at least two target video clips, splicing the remaining target video clips with the union video clip, and adding the linking content between every two video clips for transition to obtain the clipped video.
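The splicing steps at the end of claim 1 amount to taking the union of overlapping clip intervals and interleaving linking content between the merged clips. A minimal sketch, with timestamps in seconds and a plain string standing in for the transition video (all names are illustrative, not from the embodiment):

```python
def splice_segments(segments, transition="transition"):
    """Merge overlapping [start, end] target clips into union clips,
    then interleave a transition between consecutive merged clips.

    `segments` is a list of (start, end) timestamp pairs.
    """
    if not segments:
        return []
    merged = []
    for start, end in sorted(segments):
        if merged and start <= merged[-1][1]:        # intersecting sub-segment
            merged[-1][1] = max(merged[-1][1], end)  # take the union clip
        else:
            merged.append([start, end])
    timeline = []
    for i, clip in enumerate(merged):
        if i:
            timeline.append(transition)  # linking content between two clips
        timeline.append(tuple(clip))
    return timeline
```

For example, clips (0, 5) and (3, 8) intersect and collapse into the union (0, 8), which is then joined to (10, 12) through one transition.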
2. The method of claim 1, wherein before recording the corresponding timestamp of the screenshot image in the video stream when the screenshot image meets the video file production requirement, the method further comprises:
comparing the first image content in the screenshot image with second image content in the most recent screenshot image preceding it;
and when the first image content has changed relative to the second image content, determining that the screenshot image meets the video file production requirement.
3. The method according to claim 1 or 2, wherein the obtaining of the screen recording instruction for recording the display content of the terminal display screen comprises:
receiving a screen recording enablement signal and starting a pre-recording function, wherein the pre-recording function monitors the running process of the terminal and starts the screen recording function when the running process of the terminal meets a screen recording condition;
and when the application program running in the terminal is a target application program, determining that the running process of the terminal meets the screen recording condition, and acquiring the screen recording instruction for recording the display content of the terminal display screen.
4. The method according to claim 1 or 2, wherein the clipping processing of the video file according to the screenshot image meeting the video file production requirement and the corresponding timestamp to obtain the target video segment comprises:
and taking the timestamp corresponding to the screenshot image as an ending timestamp of the target video clip, and acquiring from the video stream the video clip of a preset duration before the ending timestamp as the target video clip.
5. The method according to claim 1 or 2, wherein the clipping processing of the video file according to the screenshot image meeting the video file production requirement and the corresponding timestamp to obtain the target video segment comprises:
determining a timestamp a first preset duration before the timestamp corresponding to the screenshot image as a starting timestamp of the target video clip;
determining a timestamp a second preset duration after the timestamp corresponding to the screenshot image as an ending timestamp of the target video clip;
and clipping the video clip between the starting timestamp and the ending timestamp as the target video clip.
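Claims 4 and 5 both reduce to computing a clip's bounds around the screenshot timestamp. A sketch under assumed names, where `before` and `after` stand in for the first and second preset durations (setting `after=0` reproduces claim 4, with the screenshot timestamp as the ending timestamp):

```python
def clip_bounds(shot_ts, before=8.0, after=2.0, stream_start=0.0, stream_end=None):
    """Compute a target clip's (start, end) timestamps around a screenshot.

    Bounds are clamped to the recorded stream so a screenshot taken near
    the beginning or end of recording still yields a valid clip.
    """
    start = max(stream_start, shot_ts - before)
    end = shot_ts + after
    if stream_end is not None:
        end = min(end, stream_end)
    return start, end
```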
6. The method of claim 1 or 2, wherein obtaining a screenshot image of the display content from the captured video stream comprises:
in the screen recording process, capturing the currently displayed content of the display screen at preset intervals to obtain screenshot images of the display content;
or,
acquiring image frames from the video stream at intervals of i frames as screenshot images of the display content, wherein the interval between the timestamps of two adjacent acquired image frames is the preset duration;
or,
when the change pattern between image frames in the video stream conforms to a preset rule, acquiring a screenshot image of the display content.
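One of the alternatives in claim 6, selecting screenshot frames from the timestamped stream so that adjacent picks are a preset duration apart, can be sketched as follows (function and variable names are assumptions for the sketch):

```python
def sample_screenshots(frames, interval):
    """Pick screenshot frames so consecutive picks are >= `interval` apart.

    `frames` is a list of (timestamp, frame) pairs in timestamp order,
    as produced by the frame-by-frame capture described in claim 1.
    """
    picks, last_ts = [], None
    for ts, frame in frames:
        if last_ts is None or ts - last_ts >= interval:
            picks.append((ts, frame))
            last_ts = ts
    return picks
```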
7. An apparatus for producing a video file, the apparatus comprising:
the obtaining module is configured to obtain a screen recording instruction for recording the display content of the terminal display screen;
the capture module is configured to capture the display content of the display screen in the form of a video stream to perform screen recording, wherein the display content is captured frame by frame and each captured image frame is marked with a corresponding timestamp to generate a video stream of the display content, the granularity of the timestamp marking being determined according to the frequency at which the display content is captured;
the capture module is further configured to acquire a screenshot image of the display content when, during screen recording, the image content of an image frame in the video stream meets a preset content requirement;
the matching module is configured to crop the screenshot image according to a preset cropping rule to obtain a cropped region;
the matching module is further configured to perform character recognition on the cropped region to obtain text content in the cropped region as first image content, and match the text content against required text in an image content list; perform graphic detection on the image content in the cropped region, and match the graphic detection result against a required graphic in the image content list; and when the required text in the image content list matches the first image content and/or the required graphic matches the graphic detection result, determine that the screenshot image meets a video file production requirement;
the recording module is configured to record the corresponding timestamp of the screenshot image in the video stream when the screenshot image meets the video file production requirement, wherein the video stream comprises n screenshot images meeting the video file production requirement, the n screenshot images correspond to n target video clips, and n is a positive integer;
the clipping module is configured to clip the video stream according to each screenshot image meeting the video file production requirement and its corresponding timestamp to obtain a target video clip;
the splicing module is configured to: when the n target video clips have no intersecting sub-segments, splice the n target video clips and add linking content between every two target video clips for transition to obtain a clipped video, the linking content comprising a transition video; and when at least two of the n target video clips have intersecting sub-segments, determine a union video clip of the at least two target video clips, splice the remaining target video clips with the union video clip, and add the linking content between every two video clips for transition to obtain the clipped video.
8. A computer device, characterized in that it comprises a processor and a memory, in which at least one program is stored, which is loaded and executed by the processor to implement the video file production method according to any one of claims 1 to 6.
9. A computer-readable storage medium, wherein at least one program is stored in the computer-readable storage medium, and the at least one program is loaded and executed by a processor to implement the video file production method according to any one of claims 1 to 6.
CN201910402575.9A 2019-05-15 2019-05-15 Video file production method, device, equipment and readable storage medium Active CN110087123B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910402575.9A CN110087123B (en) 2019-05-15 2019-05-15 Video file production method, device, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910402575.9A CN110087123B (en) 2019-05-15 2019-05-15 Video file production method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN110087123A CN110087123A (en) 2019-08-02
CN110087123B true CN110087123B (en) 2022-07-22

Family

ID=67420184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910402575.9A Active CN110087123B (en) 2019-05-15 2019-05-15 Video file production method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN110087123B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110769305B (en) * 2019-09-12 2021-05-18 腾讯科技(深圳)有限公司 Video display method and device, block chain system and storage medium
CN110855904B (en) * 2019-11-26 2021-10-01 Oppo广东移动通信有限公司 Video processing method, electronic device and storage medium
WO2021134237A1 (en) * 2019-12-30 2021-07-08 深圳市欢太科技有限公司 Video recording method and apparatus, and computer-readable storage medium
CN111228821B (en) * 2020-01-15 2022-02-01 腾讯科技(深圳)有限公司 Method, device and equipment for intelligently detecting wall-penetrating plug-in and storage medium thereof
WO2021163882A1 (en) * 2020-02-18 2021-08-26 深圳市欢太科技有限公司 Game screen recording method and apparatus, and computer-readable storage medium
CN113742183A (en) * 2020-05-29 2021-12-03 青岛海信移动通信技术股份有限公司 Screen recording method, terminal and storage medium
CN114189646B (en) * 2020-09-15 2023-03-21 深圳市万普拉斯科技有限公司 Terminal control method and device, electronic equipment and storage medium
CN112367530A (en) * 2020-10-29 2021-02-12 北京达佳互联信息技术有限公司 Video processing method and device, electronic equipment and storage medium
CN112650551A (en) * 2020-12-31 2021-04-13 中国农业银行股份有限公司 System function display method and device
CN114286142B (en) * 2021-01-18 2023-03-28 海信视像科技股份有限公司 Virtual reality equipment and VR scene screen capturing method
CN113115106B (en) * 2021-03-31 2023-05-05 影石创新科技股份有限公司 Automatic editing method, device, terminal and storage medium for panoramic video
CN113709560B (en) * 2021-03-31 2024-01-02 腾讯科技(深圳)有限公司 Video editing method, device, equipment and storage medium
CN113542257B (en) * 2021-07-12 2023-09-26 维沃移动通信有限公司 Video processing method, video processing device, electronic apparatus, and storage medium
CN113784072A (en) * 2021-09-24 2021-12-10 上海铜爪智能科技有限公司 AI algorithm-based pet video recording and automatic editing method
CN114915848B (en) * 2022-05-07 2023-12-08 上海哔哩哔哩科技有限公司 Live interaction method, device and equipment
CN115086759A (en) * 2022-05-13 2022-09-20 北京达佳互联信息技术有限公司 Video processing method, video processing device, computer equipment and medium
CN115460352B (en) * 2022-11-07 2023-04-07 摩尔线程智能科技(北京)有限责任公司 Vehicle-mounted video processing method, device, equipment, storage medium and program product

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104601918A (en) * 2014-12-29 2015-05-06 小米科技有限责任公司 Video recording method and device
CN104796781A (en) * 2015-03-31 2015-07-22 小米科技有限责任公司 Video clip extraction method and device
CN104811787A (en) * 2014-10-27 2015-07-29 深圳市腾讯计算机系统有限公司 Game video recording method and game video recording device
CN108174132A (en) * 2016-12-07 2018-06-15 杭州海康威视数字技术股份有限公司 The back method and device of video file
CN108337532A (en) * 2018-02-13 2018-07-27 腾讯科技(深圳)有限公司 Perform mask method, video broadcasting method, the apparatus and system of segment
CN109672922A (en) * 2017-10-17 2019-04-23 腾讯科技(深圳)有限公司 A kind of game video clipping method and device
CN109718537A (en) * 2018-12-29 2019-05-07 努比亚技术有限公司 Game video method for recording, mobile terminal and computer readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10248864B2 (en) * 2015-09-14 2019-04-02 Disney Enterprises, Inc. Systems and methods for contextual video shot aggregation
CN106803987B (en) * 2015-11-26 2021-09-07 腾讯科技(深圳)有限公司 Video data acquisition method, device and system
US10456680B2 (en) * 2016-12-09 2019-10-29 Blizzard Entertainment, Inc. Determining play of the game based on gameplay events


Also Published As

Publication number Publication date
CN110087123A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
CN110087123B (en) Video file production method, device, equipment and readable storage medium
CN108769562B (en) Method and device for generating special effect video
CN111147878B (en) Stream pushing method and device in live broadcast and computer storage medium
CN111065001B (en) Video production method, device, equipment and storage medium
CN109640125B (en) Video content processing method, device, server and storage medium
CN108965922B (en) Video cover generation method and device and storage medium
CN108449651B (en) Subtitle adding method, device, equipment and storage medium
CN110533585B (en) Image face changing method, device, system, equipment and storage medium
CN111541907A (en) Article display method, apparatus, device and storage medium
CN111753784A (en) Video special effect processing method and device, terminal and storage medium
CN112565806B (en) Virtual gift giving method, device, computer equipment and medium
CN112084811A (en) Identity information determining method and device and storage medium
CN111083513B (en) Live broadcast picture processing method and device, terminal and computer readable storage medium
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN111723803A (en) Image processing method, device, equipment and storage medium
CN112770173A (en) Live broadcast picture processing method and device, computer equipment and storage medium
CN111586279B (en) Method, device and equipment for determining shooting state and storage medium
CN112419143A (en) Image processing method, special effect parameter setting method, device, equipment and medium
CN112822544A (en) Video material file generation method, video synthesis method, device and medium
CN112616082A (en) Video preview method, device, terminal and storage medium
CN109819308B (en) Virtual resource acquisition method, device, terminal, server and storage medium
CN114554112B (en) Video recording method, device, terminal and storage medium
CN113706807B (en) Method, device, equipment and storage medium for sending alarm information
CN110277105B (en) Method, device and system for eliminating background audio data
CN112399080A (en) Video processing method, device, terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant