WO2020042890A1 - Video processing method, terminal, and computer-readable storage medium - Google Patents


Info

Publication number
WO2020042890A1
Authority
WO
WIPO (PCT)
Prior art keywords
input
target
display area
video
terminal
Prior art date
Application number
PCT/CN2019/099921
Other languages
English (en)
French (fr)
Inventor
马明月
李兵
Original Assignee
维沃移动通信有限公司
Priority date
Filing date
Publication date
Application filed by 维沃移动通信有限公司
Publication of WO2020042890A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440245 Processing of video elementary streams involving reformatting operations performed only on part of the stream, e.g. a region of the image or a time segment
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213 Monitoring of end-user related data
    • H04N21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program

Definitions

  • The present disclosure relates to the field of communications, and in particular, to a video processing method, a terminal, and a computer-readable storage medium.
  • With the development of terminal technology, users record and play videos on their terminals more and more often, and place higher demands on derivative functions. For example, a user may want to capture a still of the current frame from a video being recorded, or to take a photo or record a short video at the same time during the video recording.
  • The first solution in the related art is to invoke the screen recording function provided by the mobile phone. This requires calling the corresponding function interface and then performing the screenshot or screen recording operation through the buttons of that interface to obtain a screenshot or short video. However, the user's operation of the buttons on the function interface blocks the video picture currently being recorded.
  • The second solution is to use a third-party application on the phone to take a screenshot or clip a video. This requires first opening the third-party application, importing the target video into it, and then taking the screenshot or clipping the video inside that application.
  • The inventor found that in the first solution, the user's operation of calling the screenshot or screen recording function blocks the current video picture, while the second solution requires the user to stop recording and save the current video before the recorded video can be clipped, which breaks the continuity of the recording. Thus, neither solution in the related art can take a screenshot, take a photo, or record a sub-video without affecting the video currently being recorded.
  • Embodiments of the present disclosure provide a video processing method, a terminal, and a computer-readable storage medium, to solve the problem in the related art that when a user takes a screenshot, takes a photo, or records a sub-video during video recording, the video currently being recorded is affected.
  • An embodiment of the present disclosure provides a video processing method applied to a terminal, where the display area of the terminal screen includes a first display area and a second display area. The method includes: receiving a first input of a user in the second display area in a state where a target interface of a target video is displayed in the first display area; and, in response to the first input, performing a target operation corresponding to the first input. The target interface includes a recording interface or a playback interface, and the target operation includes at least one of a short video recording operation, a photographing operation, and a screenshot operation.
  • An embodiment of the present disclosure further provides a terminal, where a display area of the terminal screen includes a first display area and a second display area. The terminal includes: a first input receiving module configured to receive a first input of a user in the second display area in a state where a target interface of a target video is displayed in the first display area; and a target operation execution module configured to perform, in response to the first input, a target operation corresponding to the first input. The target interface includes a recording interface or a playback interface, and the target operation includes at least one of a short video recording operation, a photographing operation, and a screenshot operation.
  • An embodiment of the present disclosure further provides a terminal, including a processor, a memory, and a program stored in the memory and executable on the processor, where the program, when executed by the processor, implements the steps of the video processing method according to the present disclosure.
  • An embodiment of the present disclosure further provides a computer-readable storage medium storing a program, where the program, when executed by a processor, implements the steps of the video processing method according to the present disclosure.
  • In the embodiments of the present disclosure, a first input of the user in the second display area is received in a state where the target interface of the target video is displayed in the first display area, and a target operation corresponding to the first input is performed in response to the first input. Because the short video recording, photographing, or screenshot is triggered by the input on the second display area, it neither blocks the video picture of the first display area nor affects the continuity of the recorded video, and the operation is simple and fast.
  • FIG. 1 shows a flowchart of a video processing method provided in an embodiment of the present disclosure
  • FIG. 2 shows a flowchart of a first video interception method provided in an embodiment of the present disclosure
  • FIG. 3 shows a flowchart of a second video interception method provided in an embodiment of the present disclosure
  • FIG. 4 shows a flowchart of a first photographing method provided in an embodiment of the present disclosure
  • FIG. 5 shows a flowchart of a second photographing method provided in an embodiment of the present disclosure
  • FIG. 6 shows a flowchart of a short video recording method provided in an embodiment of the present disclosure
  • FIG. 7 shows a flowchart of a parameter setting method provided in an embodiment of the present disclosure
  • FIG. 8 is a flowchart of a method for performing a target operation corresponding to a first input provided in an embodiment of the present disclosure
  • FIG. 9 shows a schematic diagram of a first video interception method provided in an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of a first photographing method provided in an embodiment of the present disclosure.
  • FIG. 11 shows a schematic diagram of a parameter setting method provided in an embodiment of the present disclosure
  • FIG. 12 shows a structural block diagram of a first terminal provided in an embodiment of the present disclosure
  • FIG. 13 is a structural block diagram of a second terminal provided in an embodiment of the present disclosure.
  • FIG. 14 is a structural block diagram of a third terminal provided in an embodiment of the present disclosure.
  • FIG. 15 is a structural block diagram of a fourth terminal provided in an embodiment of the present disclosure.
  • FIG. 16 is a structural block diagram of a fifth terminal provided in an embodiment of the present disclosure.
  • FIG. 17 shows a structural block diagram of a sixth terminal provided in an embodiment of the present disclosure.
  • FIG. 18 is a structural block diagram of a seventh terminal provided in an embodiment of the present disclosure.
  • FIG. 19 is a structural block diagram of an eighth terminal provided in an embodiment of the present disclosure.
  • FIG. 20 is a schematic diagram of a hardware structure of a terminal provided in an embodiment of the present disclosure.
  • Referring to FIG. 1, there is shown a flowchart of a video processing method provided in an embodiment of the present disclosure, which may include the following steps:
  • Step 101 Receive a first input from a user in a second display area in a state where a target interface of a target video is displayed in the first display area.
  • The video processing method may be applied to a terminal, including but not limited to a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a navigation device, a wearable device, a smart bracelet, a pedometer, and the like; the type of terminal is not specifically limited in the embodiments of the present disclosure.
  • the terminal includes a first display area and a second display area.
  • The terminal may have a single display screen, with the first display area and the second display area located in different regions of that screen; the user can invoke this display mode through the terminal's split-screen function.
  • the terminal includes a first screen and a second screen, the first display area is located in the display area of the first screen, and the second display area is located in the display area of the second screen.
  • This display mode mainly applies to a terminal having a folding double-sided screen, or a double-sided screen composed of a front screen and a second screen. Which of the two screens the first display area and the second display area are respectively located on is not specifically limited in the embodiments of the present disclosure, nor is whether the terminal has a single-sided or a double-sided screen.
  • a user records or plays a video in the first display area.
  • The video can be recorded or played using software provided by the terminal, or using third-party software the user has installed on the terminal.
  • the video content played by the user may be a video recorded by the user himself before, or may be a video that the user has downloaded and saved on the terminal, or may be a video program requested by the user in real-time on the video software. In the embodiments of the present disclosure, this is not specifically limited.
  • The second display area may have a touch screen, and the first input may be an input made in a preset area of the second display area. For example, the preset area may be the middle region of the second display area, excluding the top and bottom quarters of the screen; the preset area may also be another area of the second display area, which is not specifically limited in the embodiments of the present disclosure.
  • The first input may be a first operation, such as the user clicking, sliding, or touching in the preset area of the second display area. The second display area may also support smart voice control: after waking it up, the user can make the first input with different voice commands. The second display area may further have an intelligent interactive camera, in front of which the user performs the first input using a preset gesture or a preset facial expression. The method by which the user makes the first input in the second display area is not specifically limited in the embodiments of the present disclosure.
  • Because the first input is made in the second display area, no matter what input method is used, it has no effect on the video being recorded or played in the first display area. There is no need to pause the current video, which preserves the integrity of the video picture and the continuity of the recorded or played video.
  • Step 102 In response to the first input, perform a target operation corresponding to the first input.
  • the target interface includes a recording interface or a playback interface; and the target operation includes at least one of a short video recording operation, a photographing operation, and a screenshot operation.
  • After receiving the first input from the user in the second display area, the terminal immediately performs, according to the control instruction corresponding to the first input, at least one of a short video recording operation, a photographing operation, and a screenshot operation on the video interface being recorded or played in the first display area.
  • the short video recording operation refers to recording a short video at the same time while the first display area is recording a video.
  • The length of the recorded short video is not specifically limited in this embodiment of the present disclosure, as long as it is less than or equal to the length of the video being recorded in the first display area and within the allowable range of the terminal's storage space.
  • The photographing operation refers to controlling a camera to take a photo while the first display area is recording a video.
  • The screenshot operation captures a visible static image of what the terminal displays on its screen or other display device; the result is generally a static bitmap file, whose format may be bitmap (BMP), Portable Network Graphics (PNG), Joint Photographic Experts Group (JPEG), and so on. The format of the screenshot file is not specifically limited in the embodiments of the present disclosure.
  • The screenshot may capture the full screen, only the currently active window, or only the video picture. The content of the active window generally includes a title bar, a menu bar, a toolbar, and the main content, while the video picture includes only the main content of the video window. When a user takes a screenshot of a video being recorded or played, he generally wants to capture only the video picture, without other unrelated content; given the different needs of different users, the embodiments of the present disclosure do not specifically limit the content area of the screenshot.
  • the video being recorded or played in the first display area will not be affected, and the recording or playback will continue; the user can also pause the recording or playback of the video in the first display area as needed.
  • the disclosed embodiment does not specifically limit this.
  • the terminal can save the result file in a storage location preset by the user for the convenience of the user to view and use.
  • In the embodiments of the present disclosure, in a state where the target interface of the target video is displayed in the first display area, a first input from the user in the second display area is received, and in response to the first input, the target operation corresponding to the first input is performed. Because the short video recording, photographing, or screenshot is triggered by input received on the second display area, it neither obstructs the video picture of the first display area nor affects the continuity of the recorded video, and the operation is simple and fast.
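As a rough illustration, the mapping from a first input in the second display area to a target operation can be sketched as a small dispatch table. The gesture names below paraphrase the example gestures described later in this disclosure (swipe up for a screenshot during playback, swipe down for a photo during recording, and so on); the operation names are illustrative assumptions, not claim language.

```python
# Illustrative sketch: dispatch a first input received in the second
# display area to a target operation, depending on whether the first
# display area shows a playback interface or a recording interface.
# Gesture and operation names are assumptions for this sketch.

def target_operation(interface, gesture):
    """Return the target operation for a first input, or None if the
    gesture is not recognized for the current target interface."""
    playback_ops = {
        "swipe_up": "screenshot",               # capture current frame
        "swipe_up_rotate_cw": "screenshot_n",   # capture N frames
    }
    recording_ops = {
        "swipe_down": "photo",                  # take one photo
        "swipe_down_rotate_cw": "photo_m",      # take M photos
        "swipe_down_rotate_ccw": "short_video", # record a short video
    }
    table = playback_ops if interface == "playback" else recording_ops
    return table.get(gesture)
```

Because the dispatch is keyed only on input made in the second display area, the video in the first display area never needs to be touched or paused.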
  • Referring to FIG. 2, there is shown a flowchart of a first video interception method provided in an embodiment of the present disclosure, which may include the following steps:
  • Step 201 Receive a first input from a user in a second display area in a state where a target interface of a target video is displayed in the first display area.
  • The first input may be the user's finger sliding upward by at least 1 cm in the preset area of the second display area; the first input may also be another operation gesture, which is not specifically limited here.
  • Step 201 may refer to step 101, and details are not described herein again.
  • Step 202 In a case that the target interface is a playback interface, in response to the first input, intercept the first video image frame currently displayed in the first display area, and generate and store a target image.
  • That is, when the target interface displaying the target video in the first display area is a playback interface, i.e., the first display area is playing a video, the terminal, in response to the first input performed in step 201, takes a screenshot of the first video image frame currently displayed in the first display area and stores the screenshot in a preset location.
  • Step 203 Display the target image and a preset identifier in a second display area.
  • The captured target image is displayed in the second display area for the user to view and edit; the user can also quickly capture the next video image frame by repeating steps 201 and 202 of this embodiment of the present disclosure.
  • a preset identifier is also displayed in the second display area.
  • the preset identifier may instruct the user to perform certain operations on the target image.
  • the preset identifier may be an edit identifier or a share identifier.
  • Step 204 Receive a second input from the user to the preset identifier.
  • The user performs a second input based on the preset identifier. The second input may be a second operation, such as the user's click, swipe, or touch operation on the preset identifier; it may also be a voice or gesture input based on the preset identifier.
  • Step 205 In response to the second input, perform an editing operation or a sharing operation on the target image.
  • The terminal invokes image editing software or a social application based on the user's second input on the preset identifier, so that the user can add effects such as beauty and filters to the target image, or share the target image to a social network.
  • Throughout this process, the recording or playback of the video in the first display area is not affected. In this way, users can complete more operations while recording or playing videos, which improves efficiency and saves the user's time.
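Steps 204 and 205 can be sketched as a second, smaller dispatch on the preset identifier. The identifier names ("edit", "share") and the handler behavior are hypothetical placeholders for whatever editing or social software the terminal actually invokes.

```python
# Sketch of steps 204-205: a second input on a preset identifier
# triggers an editing or sharing operation on the target image.
# Identifier names and handlers are illustrative assumptions.

def handle_second_input(identifier, image_name):
    """Dispatch the second input on the preset identifier to the
    matching operation on the target image."""
    handlers = {
        "edit": lambda img: f"editing {img}",   # e.g. beauty, filters
        "share": lambda img: f"sharing {img}",  # e.g. to a social network
    }
    if identifier not in handlers:
        raise ValueError(f"unknown preset identifier: {identifier}")
    return handlers[identifier](image_name)
```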
  • Referring to FIG. 3, there is shown a flowchart of a second video interception method provided in an embodiment of the present disclosure.
  • Step 301 Receive a first input from a user in a second display area in a state where a target interface of a target video is displayed in the first display area.
  • The first input may be the user's finger sliding up at least 1 cm in the preset area of the second display area and then continuing to rotate clockwise; the first input may also be a long-press operation, such as pressing the finger in the preset area for several seconds.
  • Step 301 may refer to step 101, and details are not described herein again.
  • Step 302 In a case that the target interface is a playback interface, in response to the first input, intercept N video image frames of the target video, and generate and store N target images.
  • The first video image frame among the N video image frames is the video image frame of the target video displayed in the first display area at the start time of the first input, and the Nth video image frame is the video image frame of the target video displayed in the first display area at the end time of the first input; N is a positive integer.
  • the terminal obtains time parameters of the first input, such as a start time, an end time, and the like, and continuously captures N image frames of the target video based on the time parameters.
  • the terminal may continuously take the screenshot of the target video N times at a set time interval.
  • For example, if the start time of the first input is 14:30:01 and the end time is 14:30:10, and the time interval is set to 1 second, then one video image frame of the target video is intercepted every second: at 14:30:01, 14:30:02, and so on through 14:30:10, for a total of 10 video image frames.
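The interval-based interception above is simple arithmetic over the input's time parameters. A minimal sketch, with timestamps reduced to whole seconds and function names that are assumptions:

```python
def hms(h, m, s):
    """Convert a clock time to whole seconds for easy arithmetic."""
    return h * 3600 + m * 60 + s

def capture_times(start, end, interval=1):
    """Return the timestamps at which a video image frame is intercepted,
    one every `interval` seconds from the start time of the first input
    through its end time, inclusive."""
    return list(range(start, end + 1, interval))

# The text's example: first input from 14:30:01 to 14:30:10, 1 s interval,
# so N = 10 video image frames are intercepted.
frames = capture_times(hms(14, 30, 1), hms(14, 30, 10))
```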
  • Referring to FIG. 4, there is shown a flowchart of a first photographing method provided in an embodiment of the present disclosure.
  • Step 401 Receive a first input from a user in a second display area in a state where a target interface of a target video is displayed in the first display area.
  • the first input may be a user's finger sliding down at least 1 cm in a preset area of the second display area.
  • Step 401 may refer to step 101, and details are not described herein again.
  • Step 402 In a case that the target interface is a recording interface, in response to the first input, control the camera of the terminal to take a photo.
  • That is, when the first display area is recording a video, the terminal, in response to the first input performed in step 401, controls the camera of the terminal to take a photo.
  • Referring to FIG. 5, there is shown a flowchart of a second photographing method provided in an embodiment of the present disclosure.
  • Step 501 Receive a first input from a user in a second display area in a state where a target interface of a target video is displayed in the first display area.
  • The first input may be the user's finger sliding down at least 1 cm in the preset area of the second display area and then continuing to rotate clockwise; the first input may also be a long-press operation, which is not limited here.
  • Step 501 may refer to step 101, and details are not described herein again.
  • Step 502 In a case that the target interface is a recording interface, in response to the first input, control the camera of the terminal to take M photos, where M is a positive integer, the first of the M photos is taken by the camera at the start time of the first input, and the Mth photo is taken by the camera at the end time of the first input.
  • That is, the terminal, in response to the first input performed in step 501, controls the camera of the terminal to take M photos.
  • the terminal acquires time parameters of the first input, such as a start time and an end time, and controls the camera of the terminal to take M photos based on the time parameters.
  • the terminal can continuously take M photos at a set time interval.
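The interval-based burst can be sketched with a stand-in camera object; the class and method names are hypothetical, and real code would drive the terminal's camera API instead.

```python
class FakeCamera:
    """Stand-in for the terminal's camera; records when each shot fired."""
    def __init__(self):
        self.shots = []

    def take_photo(self, t):
        self.shots.append(t)

def burst(camera, start, end, interval):
    """Control the camera to take M photos between the start and end time
    of the first input, one every `interval` seconds; M follows from the
    input duration and the configured interval."""
    t = start
    while t <= end:
        camera.take_photo(t)
        t += interval
    return len(camera.shots)

cam = FakeCamera()
m = burst(cam, 0, 9, 3)  # a 9-second press with one photo every 3 s
```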
  • Referring to FIG. 6, there is shown a flowchart of a short video recording method provided in an embodiment of the present disclosure.
  • Step 601 Receive a first input from a user in a second display area in a state where a target interface of a target video is displayed in the first display area.
  • The first input may be the user's finger sliding down at least 1 cm in the preset area of the second display area and then continuing to rotate counterclockwise; the first input may also be a long-press operation, which is not limited here.
  • Step 601 may refer to step 101, and details are not described herein again.
  • Step 602 In a case that the target interface is a recording interface, in response to the first input, display a short video recording interface in the second display area and start short video recording.
  • That is, when the target interface displaying the target video in the first display area is a recording interface, i.e., the first display area is recording a video, the terminal, in response to the first input performed in step 601, displays a short video recording interface in the second display area and records a short video while the first display area continues recording.
  • In the embodiments of the present disclosure, when the playback interface of the target video is displayed in the first display area, in response to the first input, the first video image frame or N video image frames are intercepted from the target video, and the target image can then be edited or shared through the second input. Since the first input and the second input are performed in the second display area, the user's screenshot, edit, and share operations neither block the target video nor affect the continuity of video playback. When the recording interface of the target video is displayed in the first display area, in response to the first input, the terminal is controlled to take one photo or M photos, or to record a short video; the first input is likewise performed in the second display area, so the photographing or recording operation neither obstructs the target video nor affects the continuity of video recording, and the operation is fast and convenient.
  • Referring to FIG. 7, there is shown a flowchart of a parameter setting method provided in an embodiment of the present disclosure.
  • Step 701 Receive a first input of a user in a second display area in a state where a target interface of a target video is displayed in the first display area;
  • Step 702 In response to the first input, perform a target operation corresponding to the first input.
  • steps 701 and 702 may refer to steps 101 and 102 in the embodiment of the present disclosure, and details are not described herein again.
  • Step 703 Receive a third input from the user in the second display area.
  • Through steps 701 and 702, the user obtains the recorded short video, the captured photo, or the target screenshot, which has been displayed in the second display area; the third input is then performed in the second display area.
  • the third input may be a third operation, and the third operation may be a user's click, swipe, or touch operation on a preset mark; or a voice input, gesture input, etc. based on the preset mark. In the disclosed embodiments, this is not specifically limited.
  • Step 704 In response to the third input, a floating frame is displayed in the second display area, where the floating frame includes T candidate parameters.
  • The terminal invokes an effect parameter adjustment floating frame in the second display area, and the floating frame includes T candidate effect setting parameters, which the user can use to set effect parameters for the recorded short video, the captured photo, or the target screenshot; the user can also set effect parameters for the video being recorded in the first display area.
  • The T candidate effect parameters can be effects such as filters, beauty, stickers, text, and mosaics, which are not specifically limited in this embodiment of the present disclosure.
  • Step 705 Receive a fourth input of the user in the floating frame.
  • the fourth input may be a fourth operation.
  • the user performs a fourth input on the floating frame to select a target parameter to be set for the target object.
  • Step 706 In response to the fourth input, adjust the parameters of the target object currently displayed in the second display area according to the target parameter selected by the fourth input.
  • the effect of the target object is adjusted through input operations, such as adjusting the beauty part, the color of the filter, and the type of sticker.
  • steps 705 and 706 are performed again.
In summary, in the embodiments of the present disclosure, after obtaining the target image through the first input, the user continues with the third input that calls up the floating frame, then selects the target parameter in the floating frame through the fourth input and adjusts the parameters of the target object, finally setting the effect parameters for the target image. This effect-parameter setting process takes place in the second display area, so it neither blocks the target video being played or recorded in the first display area nor affects its continuity; the setting process is simple and fast, and it also meets users' diverse needs.
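The floating-frame flow of steps 703 through 706 (call up the frame, pick one of the T candidate parameters, apply it to the target object) can be sketched in code. This is only an illustrative sketch: the candidate names, the dictionary representation of the target object, and the way a parameter value is applied are all assumptions, not part of the disclosed embodiment.

```python
# Illustrative sketch of the floating-frame parameter flow in steps 703-706.
# The embodiment names filters, beautification, stickers, text, and mosaic
# as possible candidate effect parameters; everything else here is assumed.

class FloatingFrame:
    def __init__(self, candidates):
        self.candidates = list(candidates)   # the T candidate parameters
        self.visible = False

    def show(self):
        # Third input: call up the floating frame in the second display area.
        self.visible = True
        return self.candidates

    def select(self, name):
        # Fourth input: pick a target parameter from the frame.
        if not self.visible or name not in self.candidates:
            raise ValueError("parameter not available")
        return name

def adjust_target(target, parameter, value):
    """Apply the selected target parameter to the target object
    (a captured image, a photo, or a short-video recording interface)."""
    target = dict(target)                       # leave the caller's copy intact
    target.setdefault("effects", {})[parameter] = value
    return target

frame = FloatingFrame(["filter", "beauty", "sticker", "text", "mosaic"])
frame.show()
image = {"name": "target_image"}
image = adjust_target(image, frame.select("filter"), "warm")
```

Repeating steps 705 and 706 corresponds to calling `frame.select` and `adjust_target` again with another candidate.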
Referring to FIG. 8, a flowchart of a method for performing a target operation corresponding to a first input provided in an embodiment of the present disclosure is shown.
Step 801: Acquire input features of the first input.
The input features include at least one of the following:
the input direction of the first input is a preset direction;
the input trajectory of the first input is a preset shape;
the length of the input trajectory of the first input is a preset length value;
the input time of the first input is within a preset time range;
an input parameter value of the first input is within a preset value range.
In the embodiments of the present disclosure, the input direction refers to the direction in which the user's finger slides on the touch screen, the input trajectory refers to the trajectory of that slide, and the length of the input trajectory refers to the length of the slide on the touch screen. That the input time of the first input is within a preset time range means that the length of time the user's finger presses on the touch screen is greater than a preset duration; that the input parameter value is within a preset value range means that the pressure with which the user's finger presses the touch screen is within a preset pressure range.
Step 802: Perform a target operation matching the input features.
In the embodiments of the present disclosure, different input features of the first input correspond to different operation types. The user needs to perform the first input according to preset input features so that the terminal can identify the type of operation the user wants it to perform, such as a short-video recording operation, a photographing operation, or a screenshot operation. If the first input does not correspond to a preset input feature, the terminal will fail to recognize, or will misidentify, the user's operation type. Preset input rules therefore need to be configured in the terminal first, so that the terminal associates different input features of the first input with different operation types, and so that the user can provide targeted input according to the preset rules.
For example, when the target interface is a playback interface, the preset input rule for capturing the first video image frame may be: the finger slides upward by at least 1 cm; the preset input rule for capturing N video image frames may be: the finger slides upward by at least 1 cm and then continues to rotate clockwise. When the target interface is a recording interface, the preset input rule for controlling the camera of the terminal to take one photo may be: the finger slides downward by at least 1 cm; the preset rule for controlling the camera to take M photos may be: the finger slides downward by at least 1 cm and then continues to rotate clockwise; and the preset input rule for recording a short video may be: the finger slides downward by at least 1 cm and then continues to rotate counterclockwise.
Correspondingly, the input features of the second input, the third input, and the fourth input also need to be preset in the terminal. For example, the preset third-input rule for calling up the floating frame may be: the finger slides upward by at least 1 cm and then continues to rotate counterclockwise. The above preset input rules are only examples; different preset input rules may be configured as needed, which is not specifically limited in the embodiments of the present disclosure.
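The example preset rules above amount to a small decision table from input features (direction, slide length, rotation) and interface state to a target operation. A minimal sketch, using only the example rules and the 1 cm threshold stated in this embodiment (function and operation names are invented for illustration):

```python
# Sketch of matching a first input's features to a target operation,
# following the example preset rules above. A real terminal would derive
# these features from touch events; here they are passed in directly.

def match_operation(direction, length_cm, rotation, interface):
    """Return the target operation for a first input, or None.

    direction: "up" or "down" (finger slide direction)
    length_cm: slide length in centimetres
    rotation:  None, "cw" (clockwise) or "ccw" (counter-clockwise)
    interface: "playback" or "recording" (target interface in the
               first display area)
    """
    if length_cm < 1.0:                      # every example rule needs >= 1 cm
        return None
    if interface == "playback" and direction == "up":
        if rotation is None:
            return "capture_one_frame"       # capture first video image frame
        if rotation == "cw":
            return "capture_n_frames"        # capture N video image frames
    if interface == "recording" and direction == "down":
        if rotation is None:
            return "take_one_photo"
        if rotation == "cw":
            return "take_m_photos"
        if rotation == "ccw":
            return "record_short_video"
    return None                              # unrecognized first input
```

An unrecognized input returns `None`, matching the observation that a first input outside the preset features cannot be mapped to an operation type.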
Referring to FIG. 9, a schematic diagram of the first video capturing method provided in an embodiment of the present disclosure is shown. The area indicated by reference numeral 1 is the first display area, which is playing a video; the area indicated by reference numeral 2 is the second display area, on whose touch screen the user's finger is performing the first input, where the upward arrow represents sliding upward and the length of the arrow represents the sliding distance. After the user makes this first input, the terminal captures the first video image frame of the target video.
Referring to FIG. 10, a schematic diagram of the first photographing method provided in an embodiment of the present disclosure is shown. The area indicated by reference numeral 1 is the first display area, which is recording a video; the area indicated by reference numeral 2 is the second display area, on whose touch screen the user's finger is performing the first input, where the downward arrow represents sliding downward and the length of the arrow represents the sliding distance. After the user makes this sliding operation, the camera of the terminal is controlled to take one photo.
Referring to FIG. 11, a schematic diagram of the parameter setting method provided in an embodiment of the present disclosure is shown. The area indicated by reference numeral 1 is the first display area, which is recording or playing a video; the area indicated by reference numeral 2 is the second display area, on whose touch screen the user's finger is performing the third input, where the upward arrow represents sliding upward and the rotating arrow represents counterclockwise rotation. After the user makes this third input, the floating frame is called up; as the user's finger then continues to rotate counterclockwise, the T candidate parameters of the floating frame are displayed in sequence.
In summary, the embodiments of the present disclosure complete the target-operation execution process by acquiring the input features of the first input and performing the target operation matching those features, where the input features of the first input need to be preset. In the embodiments of the present disclosure, the input process of the first input takes place in the second display area, so it neither blocks the target video in the first display area nor affects its continuity.
Referring to FIG. 12, a structural block diagram of a first terminal provided in an embodiment of the present disclosure is shown. The display area of the screen of the terminal 900 includes a first display area and a second display area, and the terminal 900 may specifically include:
a first input receiving module 901, configured to receive a first input from a user in the second display area while a target interface of a target video is displayed in the first display area; and
a target operation execution module 902, configured to perform, in response to the first input, a target operation corresponding to the first input,
where the target interface includes a recording interface or a playback interface, and the target operation includes at least one of a short-video recording operation, a photographing operation, and a screenshot operation.
Optionally, the first display area and the second display area are different regions of the same display screen of the terminal; or, when the terminal includes a first screen and a second screen, the first display area is located in the first screen and the second display area is located in the second screen.
Optionally, referring to FIG. 13, on the basis of FIG. 12, the target operation execution module 902 of the terminal 900 may further include:
a first video capture submodule 9021, configured to capture the first video image frame currently displayed in the first display area, generate a target image, and store it.
The terminal 900 may further include:
a display module 903, configured to display the target image and a preset identifier in the second display area;
a second input receiving module 904, configured to receive a second input from the user on the preset identifier; and
an editing or sharing module 905, configured to perform, in response to the second input, an editing operation or a sharing operation on the target image.
Optionally, referring to FIG. 14, on the basis of FIG. 12, the target operation execution module 902 of the terminal 900 may further include:
a second video capture submodule 9022, configured to capture N video image frames of the target video, generate N target images, and store them.
Optionally, referring to FIG. 15, on the basis of FIG. 12, the target operation execution module 902 of the terminal 900 may further include:
a first photographing submodule 9023, configured to control the camera of the terminal to take one photo.
Optionally, referring to FIG. 16, on the basis of FIG. 12, the target operation execution module 902 of the terminal 900 may further include:
a second photographing submodule 9024, configured to control the camera of the terminal to take M photos.
Optionally, referring to FIG. 17, on the basis of FIG. 12, the target operation execution module 902 of the terminal 900 may further include:
a short-video recording submodule 9025, configured to display a short-video recording interface in the second display area and start recording a short video.
Optionally, referring to FIG. 18, on the basis of FIG. 12, the terminal 900 may further include:
a third input receiving module 906, configured to receive a third input from the user in the second display area;
a floating frame display module 907, configured to display, in response to the third input, a floating frame in the second display area, where the floating frame includes T candidate parameters;
a fourth input receiving module 908, configured to receive a fourth input from the user in the floating frame; and
a parameter adjustment module 909, configured to adjust, in response to the fourth input, the parameters of the target object currently displayed in the second display area according to the target parameter selected by the fourth input.
The target object includes at least one of a captured image, a captured photo, and a short-video recording interface.
Optionally, referring to FIG. 19, on the basis of FIG. 12, the target operation execution module 902 of the terminal 900 may further include:
an input feature acquisition submodule 9026, configured to acquire the input features of the first input; and
a target operation execution submodule 9027, configured to perform a target operation matching the input features.
The terminal 900 provided in FIGS. 12 to 19 of the embodiments of the present disclosure can implement each process in the method embodiments of FIGS. 1 to 8; to avoid repetition, details are not repeated here. In the embodiments of the present disclosure, a first input from the user in the second display area is received while the target interface of the target video is displayed in the first display area, and a target operation corresponding to the first input is performed in response to the first input.
FIG. 20 is a schematic diagram of a hardware structure of a terminal implementing various embodiments of the present disclosure. The terminal 1000 includes, but is not limited to, a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, a power supply 1011, and other components. A person skilled in the art can understand that the terminal structure shown in FIG. 20 does not constitute a limitation on the terminal, and the terminal may include more or fewer components than shown, or combine some components, or have a different component arrangement. In the embodiments of the present disclosure, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, and the like.
  • the user input unit 1007 is configured to receive a user's first input in the second display area in a state where the target interface of the target video is displayed in the first display area;
  • the processor 1010 is configured to execute a target operation corresponding to the first input in response to the first input.
The embodiments of the present disclosure provide a video processing method, a terminal, and a computer-readable storage medium. The video processing method is applied to a terminal whose screen display area includes a first display area and a second display area. The method includes: receiving a first input from a user in the second display area while a target interface of a target video is displayed in the first display area; and performing, in response to the first input, a target operation corresponding to the first input, where the target interface includes a recording interface or a playback interface, and the target operation includes at least one of a short-video recording operation, a photographing operation, and a screenshot operation. In this way, the user can perform short-video recording, photographing, and screenshot operations in the second display area without blocking the video currently being recorded or played and without affecting its continuity, and the operation is relatively simple and fast.
It should be understood that, in the embodiments of the present disclosure, the radio frequency unit 1001 may be used for receiving and sending signals during information transmission and reception or during a call; specifically, it receives downlink data from a base station and delivers the data to the processor 1010 for processing, and sends uplink data to the base station. Generally, the radio frequency unit 1001 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 1001 can also communicate with a network and other devices through a wireless communication system.
  • the terminal provides users with wireless broadband Internet access through the network module 1002, such as helping users to send and receive email, browse web pages, and access streaming media.
The audio output unit 1003 may convert audio data received by the radio frequency unit 1001 or the network module 1002, or stored in the memory 1009, into an audio signal and output it as sound. Moreover, the audio output unit 1003 may also provide audio output related to a specific function performed by the terminal 1000 (for example, a call signal reception sound or a message reception sound). The audio output unit 1003 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1004 is used to receive audio or video signals. The input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042. The graphics processor 10041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 1006. The image frames processed by the graphics processor 10041 may be stored in the memory 1009 (or another storage medium) or transmitted via the radio frequency unit 1001 or the network module 1002. The microphone 10042 can receive sound and process it into audio data; in a telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 1001 and output.
The terminal 1000 further includes at least one sensor 1005, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 10061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 10061 and/or the backlight when the terminal 1000 is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes). The sensor 1005 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not repeated here.
  • the display unit 1006 is configured to display information input by the user or information provided to the user.
  • the display unit 1006 may include a display panel 10061.
  • the display panel 10061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • the user input unit 1007 can be used to receive inputted numeric or character information, and generate key signal inputs related to user settings and function control of the terminal.
  • the user input unit 1007 includes a touch panel 10071 and other input devices 10072.
The touch panel 10071, also known as a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 10071 with a finger, a stylus, or any suitable object or accessory). The touch panel 10071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal caused by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends the coordinates to the processor 1010, and receives and executes commands sent by the processor 1010. In addition, the touch panel 10071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave.
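The detection-device-to-controller-to-processor flow described above can be pictured as a tiny pipeline. The sketch below is purely illustrative: the raw-signal shape, the coordinate conversion, and the event names are all assumptions, not part of the hardware described here.

```python
# Minimal sketch of the touch pipeline: detection device -> touch
# controller (raw signal to contact coordinates) -> processor (event
# classification and visual response). All names are illustrative.

def touch_controller(raw_signal):
    """Convert a raw detection signal into contact coordinates."""
    return (raw_signal["x"], raw_signal["y"])

def process_touch(coords, pressed_ms):
    """Classify the touch event so a visual output can be chosen."""
    if pressed_ms > 500:                      # assumed long-press threshold
        return {"event": "long_press", "coords": coords}
    return {"event": "tap", "coords": coords}

event = process_touch(touch_controller({"x": 120, "y": 340}), pressed_ms=80)
```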
  • the user input unit 1007 may also include other input devices 10072.
  • the other input device 10072 may include, but is not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, and an operation lever, and details are not described herein again.
Further, the touch panel 10071 may be overlaid on the display panel 10061. After detecting a touch operation on or near it, the touch panel 10071 transmits the operation to the processor 1010 to determine the type of the touch event, and the processor 1010 then provides a corresponding visual output on the display panel 10061 according to the type of the touch event. Although the touch panel 10071 and the display panel 10061 are shown as two independent components to implement the input and output functions of the terminal, in some embodiments the touch panel 10071 and the display panel 10061 may be integrated to implement the input and output functions of the terminal, which is not limited here.
  • the interface unit 1008 is an interface through which an external device is connected to the terminal 1000.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, and sound input / output (I / O) port, video I / O port, headphone port, and more.
The interface unit 1008 may be used to receive input (for example, data information or power) from an external device and transmit the received input to one or more elements in the terminal 1000, or may be used to transfer data between the terminal 1000 and an external device.
The memory 1009 can be used to store software programs and various data. The memory 1009 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book). In addition, the memory 1009 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 1010 is the control center of the terminal: it uses various interfaces and lines to connect all parts of the terminal, and performs various functions of the terminal and processes data by running or executing software programs and/or modules stored in the memory 1009 and calling data stored in the memory 1009, thereby monitoring the terminal as a whole. The processor 1010 may include one or more processing units; optionally, the processor 1010 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1010.
The terminal 1000 may further include a power supply 1011 (such as a battery) for supplying power to the components. Optionally, the power supply 1011 may be logically connected to the processor 1010 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
  • the terminal 1000 includes some functional modules that are not shown, and details are not described herein again.
Optionally, an embodiment of the present disclosure further provides a terminal, including a processor 1010, a memory 1009, and a computer program stored in the memory 1009 and executable on the processor 1010, where the computer program, when executed by the processor 1010, implements each process of the foregoing video processing method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here.
An embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements each process of the foregoing video processing method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The methods in the above embodiments can be implemented by means of software plus a necessary general hardware platform, or by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solutions of the present disclosure, in essence or the part contributing to the related art, can be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present disclosure.

Abstract

The present disclosure provides a video processing method, a terminal, and a computer-readable storage medium. The video processing method is applied to a terminal whose screen display area includes a first display area and a second display area. The method includes: receiving a first input from a user in the second display area while a target interface of a target video is displayed in the first display area; and performing, in response to the first input, a target operation corresponding to the first input, where the target interface includes a recording interface or a playback interface, and the target operation includes at least one of a short-video recording operation, a photographing operation, and a screenshot operation.

Description

Video processing method, terminal, and computer-readable storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 201810989058.1, filed in China on August 28, 2018, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of communications, and in particular to a video processing method, a terminal, and a computer-readable storage medium.
BACKGROUND
With the development of terminal technology, users record and play videos on terminals more and more often, and place higher demands on derived functions. For example, a user may wish to capture a picture of the current frame from a video being recorded, or to take a photo or record a short video while recording a video.
To meet this need, a first solution in the related art is to invoke the phone's built-in screen-recording function: invoking it generally requires calling up a corresponding function interface and then using the buttons of that interface to take a screenshot or record the screen, so as to obtain a screenshot or a short video; in this process, the user's operation of the interface buttons blocks the video picture currently being recorded. A second solution is to use a third-party application on the phone to take screenshots or intercept a video segment; this requires first opening the third-party application, importing the target video into it, and then taking the screenshot or intercepting the segment there.
In applying the above solutions, the inventors found that in the first solution the user's invocation of the screenshot or screen-recording function blocks the current video picture, while the second solution requires the user to stop recording and save the current video before any segment can be intercepted, which interrupts the continuity of video recording. In summary, neither related-art solution can achieve taking a screenshot or photo, or recording a sub-video, without affecting the video currently being recorded.
SUMMARY
Embodiments of the present disclosure provide a video processing method, a terminal, and a computer-readable storage medium, to solve the related-art problem that taking a screenshot or photo, or recording a sub-video, while recording a video affects the video currently being recorded.
In a first aspect, an embodiment of the present disclosure provides a video processing method applied to a terminal, where a display area of the terminal screen includes a first display area and a second display area, and the method includes:
receiving a first input from a user in the second display area while a target interface of a target video is displayed in the first display area; and
performing, in response to the first input, a target operation corresponding to the first input,
where the target interface includes a recording interface or a playback interface, and the target operation includes at least one of a short-video recording operation, a photographing operation, and a screenshot operation.
In a second aspect, an embodiment of the present disclosure further provides a terminal, where a display area of the terminal screen includes a first display area and a second display area, and the terminal includes:
a first input receiving module, configured to receive a first input from a user in the second display area while a target interface of a target video is displayed in the first display area; and
a target operation execution module, configured to perform, in response to the first input, a target operation corresponding to the first input,
where the target interface includes a recording interface or a playback interface, and the target operation includes at least one of a short-video recording operation, a photographing operation, and a screenshot operation.
In a third aspect, an embodiment of the present disclosure further provides a terminal, including a processor, a memory, and a program stored in the memory and executable on the processor, where the program, when executed by the processor, implements the steps of the video processing method described in the present disclosure.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium storing a program, where the program, when executed by a processor, implements the steps of the video processing method described in the present disclosure.
In the embodiments of the present disclosure, a first input from the user in the second display area is received while the target interface of the target video is displayed in the first display area, and a target operation corresponding to the first input is performed in response to the first input. By receiving the target input in the second display area, short-video recording, photographing, or screenshotting neither blocks the video picture in the first display area nor affects the continuity of the video being recorded, and the operation is simple and fast.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of a video processing method provided in an embodiment of the present disclosure;
FIG. 2 is a flowchart of a first video capturing method provided in an embodiment of the present disclosure;
FIG. 3 is a flowchart of a second video capturing method provided in an embodiment of the present disclosure;
FIG. 4 is a flowchart of a first photographing method provided in an embodiment of the present disclosure;
FIG. 5 is a flowchart of a second photographing method provided in an embodiment of the present disclosure;
FIG. 6 is a flowchart of a short-video recording method provided in an embodiment of the present disclosure;
FIG. 7 is a flowchart of a parameter setting method provided in an embodiment of the present disclosure;
FIG. 8 is a flowchart of a method for performing a target operation corresponding to a first input provided in an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of the first video capturing method provided in an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of the first photographing method provided in an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of the parameter setting method provided in an embodiment of the present disclosure;
FIG. 12 is a structural block diagram of a first terminal provided in an embodiment of the present disclosure;
FIG. 13 is a structural block diagram of a second terminal provided in an embodiment of the present disclosure;
FIG. 14 is a structural block diagram of a third terminal provided in an embodiment of the present disclosure;
FIG. 15 is a structural block diagram of a fourth terminal provided in an embodiment of the present disclosure;
FIG. 16 is a structural block diagram of a fifth terminal provided in an embodiment of the present disclosure;
FIG. 17 is a structural block diagram of a sixth terminal provided in an embodiment of the present disclosure;
FIG. 18 is a structural block diagram of a seventh terminal provided in an embodiment of the present disclosure;
FIG. 19 is a structural block diagram of an eighth terminal provided in an embodiment of the present disclosure;
FIG. 20 is a schematic diagram of a hardware structure of a terminal provided in an embodiment of the present disclosure.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings of the embodiments of the present disclosure. Obviously, the described embodiments are only some rather than all of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort shall fall within the protection scope of the present disclosure.
Referring to FIG. 1, a flowchart of a video processing method provided in an embodiment of the present disclosure is shown; the method may specifically include the following steps.
Step 101: Receive a first input from a user in the second display area while a target interface of a target video is displayed in the first display area.
In the embodiments of the present disclosure, the video processing method may be applied to a terminal, including but not limited to a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a navigation device, a wearable device, a smart bracelet, a pedometer, or the like; this is not specifically limited in the embodiments of the present disclosure.
In the embodiments of the present disclosure, the terminal includes a first display area and a second display area. The terminal may have a single display screen, with the first display area and the second display area located in different regions of that screen; the user can call up this display mode through the terminal's split-screen function. Alternatively, the terminal includes a first screen and a second screen, where the first display area is located in the display area of the first screen and the second display area is located in the display area of the second screen. This mode mainly applies to terminals with a foldable double-sided screen or a double-sided screen composed of a front screen and a second screen; which of the two screens each display area is located in is not specifically limited in the embodiments of the present disclosure.
Whether the terminal specifically has a single-sided or a double-sided screen is likewise not specifically limited in the embodiments of the present disclosure.
In the embodiments of the present disclosure, the user records or plays a video in the first display area, either with the terminal's built-in software or with third-party software installed on the terminal, which is not specifically limited in the embodiments of the present disclosure. The played video content may be a recording previously made by the user, a video the user has downloaded and saved on the terminal, or a video program the user streams on demand in video software; this is not specifically limited either.
In the embodiments of the present disclosure, the second display area may have a touch screen, and the first input may be an input performed in a preset region of the second display area; the preset region may be the middle region of the second display area excluding the top and bottom quarters of the screen. The preset region may also be another region of the second display area, which is not specifically limited in the embodiments of the present disclosure.
In the embodiments of the present disclosure, the first input may be a first operation, such as a tap, slide, or touch performed by the user in the preset region of the second display area. The second display area may also support intelligent voice control: after waking it by voice, the user can perform the first input on the second display area with different voice commands. The second display area may also have an intelligent interactive camera, so that the user can complete the first input on the second display area with preset gestures or preset facial expressions in front of the camera. Which method the user uses to perform the first input in the second display area is not specifically limited in the embodiments of the present disclosure.
In this step, whatever input mode is used, performing the first input in the second display area has no effect on the video being recorded or played in the first display area: no function interface is called up to block the current video picture, and the current video need not be paused, which preserves both the integrity of the video picture and the continuity of recording or playback.
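The "preset region" mentioned for the first input (the middle of the second display area, excluding the top and bottom quarters of the screen) amounts to a simple hit test. A minimal sketch, assuming a coordinate system in which y grows downward from the top edge of the second display area:

```python
# Hedged sketch of the preset-region hit test: a first input is accepted
# only when it lands in the middle half of the second display area,
# i.e. outside the top and bottom quarters. The coordinate convention
# is an assumption made for illustration.

def in_preset_region(y, area_height):
    """Return True when a touch at vertical offset y falls within the
    middle half of the second display area."""
    return area_height * 0.25 <= y <= area_height * 0.75

accepted = in_preset_region(500, area_height=1000)   # middle -> accepted
rejected = in_preset_region(100, area_height=1000)   # top quarter -> rejected
```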
Step 102: In response to the first input, perform a target operation corresponding to the first input, where the target interface includes a recording interface or a playback interface, and the target operation includes at least one of a short-video recording operation, a photographing operation, and a screenshot operation.
After receiving the user's first input, the second display area, according to the control instruction corresponding to the first input, immediately controls the terminal to perform at least one of a short-video recording operation, a photographing operation, and a screenshot operation on the video interface being recorded or played in the first display area.
In the embodiments of the present disclosure, the short-video recording operation refers to simultaneously recording a relatively short video while a video is being recorded in the first display area. The length of the recorded short video only needs to be no greater than the length of the video being recorded in the first display area and within the terminal's storage capacity, which is likewise not specifically limited in the embodiments of the present disclosure.
The photographing operation refers to controlling the camera to take photos while a video is being recorded in the first display area.
The screenshot operation refers to capturing from the terminal a visible static image that can be displayed on a screen or another display device; the result is generally a static bitmap file, in a format such as Bitmap (BMP), Portable Network Graphics (PNG), or Joint Photographic Experts Group (JPEG). The format of the screenshot file is not specifically limited in the embodiments of the present disclosure.
As for the content region of the screenshot, the full screen, only the current active window, or only the video picture may be captured, where the active window generally includes a title bar, a menu bar, a toolbar, and the main content, while the video picture generally includes only the main content of one video active window. In practice, when taking a screenshot of a video being recorded or played, the user optionally usually wishes to capture only the video picture without other irrelevant content; however, to meet the different needs of various users, the content region of the screenshot is not specifically limited in the embodiments of the present disclosure.
While the target operation is being performed, the video being recorded or played in the first display area is not affected and continues to be recorded or played; the user may also pause recording or playback of the video in the first display area as needed, which is not specifically limited in the embodiments of the present disclosure.
After the short-video recording, photographing, or screenshot operation is completed, the terminal may save the resulting file in a storage location preset by the user for convenient viewing and use.
In summary, in the embodiments of the present disclosure, a first input from the user in the second display area is received while the target interface of the target video is displayed in the first display area, and a target operation corresponding to the first input is performed in response to the first input. By receiving the target input in the second display area, short-video recording, photographing, or screenshotting neither blocks the video picture in the first display area nor affects the continuity of the video being recorded, and the operation is simple and fast.
Referring to FIG. 2, a flowchart of a first video capturing method provided in an embodiment of the present disclosure is shown; it may specifically include the following steps.
Step 201: Receive a first input from a user in the second display area while a target interface of a target video is displayed in the first display area.
In the embodiments of the present disclosure, the first input may be the user's finger sliding upward by at least 1 cm within the preset region of the second display area; the first input may also be another operation gesture, which is not specifically limited in the embodiments of the present disclosure.
For step 201, reference may be made to step 101, and details are not repeated here.
Step 202: The target interface is a playback interface; in response to the first input, capture the first video image frame currently displayed in the first display area, generate a target image, and store it.
In the embodiments of the present disclosure, when the target interface of the target video displayed in the first display area is a playback interface, that is, when a video is being played in the first display area, the terminal, in response to the first input performed in step 201, captures the first video image frame currently displayed in the first display area, that is, takes a screenshot, and stores the screenshot in a preset location.
Step 203: Display the target image and a preset identifier in the second display area.
In the embodiments of the present disclosure, after the screenshot is taken, the captured target image is displayed in the second display area for the user to view and edit. If the user finds the screenshot unsatisfactory, the user can quickly capture the next video image frame, for example by repeating steps 201 and 202 of this embodiment.
After the screenshot is completed, a preset identifier is also displayed in the second display area. The preset identifier may direct the user to perform some operation on the target image; for example, the preset identifier may be an editing identifier or a sharing identifier.
Step 204: Receive a second input from the user on the preset identifier.
The user performs the second input based on the preset identifier. The second input may be a second operation, such as a tap, slide, or touch by the user on the preset identifier; it may also be a voice input, gesture input, or the like based on the preset identifier, which is not specifically limited in the embodiments of the present disclosure.
Step 205: In response to the second input, perform an editing operation or a sharing operation on the target image.
In the embodiments of the present disclosure, based on the user's second input on the preset identifier, the terminal invokes image editing or social application software, so that the user can add effects such as beautification and filters to the target image, or share the target image to a social network. Throughout this process, because all operations are completed in the second display area, the recording and playback of the video in the first display area are not affected; the user can thus accomplish more while recording or playing a video, improving efficiency and saving the user's time.
Referring to FIG. 3, a flowchart of a second video capturing method provided in an embodiment of the present disclosure is shown.
Step 301: Receive a first input from a user in the second display area while a target interface of a target video is displayed in the first display area.
In the embodiments of the present disclosure, the first input may be the user's finger sliding upward by at least 1 cm within the preset region of the second display area and then continuing to rotate clockwise; the first input may also be a long-press operation, such as the finger pressing in the preset region for several seconds.
For step 301, reference may be made to step 101, and details are not repeated here.
Step 302: The target interface is a playback interface; in response to the first input, capture N video image frames of the target video, generate N target images, and store them.
The first of the N video image frames is the video image frame of the target video displayed in the first display area at the input start time of the first input, and the Nth of the N video image frames is the video image frame of the target video displayed in the first display area at the input end time of the first input; N is a positive integer.
During the touch duration of the first input, the terminal acquires time parameters of the first input, such as the start time and the end time, and continuously captures N image frames of the target video based on these time parameters. Optionally, the terminal may take N screenshots of the target video continuously at a set time interval; by controlling the input duration of the first input, the user can flexibly determine the number of captured video frames, which makes the operation convenient.
For example, if the first input starts at 14:30:01 and ends at 14:30:10 and the set time interval is 1 second, a video image frame is captured from the target video every second, that is, at 14:30:01, 14:30:02, and so on up to 14:30:10, so that 10 video image frames are captured in total.
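The interval-based capture timing in the example above (a first input lasting from 14:30:01 to 14:30:10 with a 1-second interval yields 10 frames) can be sketched as a short helper. The function name and the use of plain second offsets are illustrative assumptions:

```python
# Sketch of interval-based frame capture timing: one frame is captured
# every `interval_s` seconds from the input's start time through its
# end time, inclusive, matching the worked example above.

def capture_times(start_s, end_s, interval_s=1):
    """Return the capture timestamps (in seconds) for a first input
    that starts at start_s and ends at end_s."""
    times = []
    t = start_s
    while t <= end_s:
        times.append(t)
        t += interval_s
    return times

# 14:30:01 through 14:30:10 with a 1 s interval -> 10 capture times
frames = capture_times(1, 10)
```

The number of captured frames therefore grows directly with the input duration, which is how the user controls N through the length of the touch.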
Referring to FIG. 4, a flowchart of a first photographing method provided in an embodiment of the present disclosure is shown.
Step 401: Receive a first input from a user in the second display area while a target interface of a target video is displayed in the first display area.
In the embodiments of the present disclosure, the first input may be the user's finger sliding downward by at least 1 cm within the preset region of the second display area.
For step 401, reference may be made to step 101, and details are not repeated here.
Step 402: The target interface is a recording interface; in response to the first input, control the camera of the terminal to take one photo.
In the embodiments of the present disclosure, when the target interface of the target video displayed in the first display area is a recording interface, that is, when a video is being recorded in the first display area, the terminal, in response to the first input performed in step 401, controls the camera of the terminal to take one photo.
Referring to FIG. 5, a flowchart of a second photographing method provided in an embodiment of the present disclosure is shown.
Step 501: Receive a first input from a user in the second display area while a target interface of a target video is displayed in the first display area.
In the embodiments of the present disclosure, the first input may be the user's finger sliding downward by at least 1 cm within the preset region of the second display area and then continuing to rotate clockwise; the first input may also be a long-press operation, which is not specifically limited here.
For step 501, reference may be made to step 101, and details are not repeated here.
Step 502: The target interface is a recording interface; in response to the first input, control the camera of the terminal to take M photos.
M is a positive integer; the first of the M photos is taken by the camera at the input start time of the first input, and the Mth photo is taken by the camera at the input end time of the first input.
In the embodiments of the present disclosure, when the target interface of the target video displayed in the first display area is a recording interface, that is, when a video is being recorded in the first display area, the terminal, in response to the first input performed in step 501, controls the camera of the terminal to take M photos.
Likewise, during the touch duration of the first input, the terminal acquires time parameters of the first input, such as the start time and the end time, and controls the camera of the terminal to take M photos based on these time parameters. Optionally, the terminal may take M photos continuously at a set time interval. By controlling the input duration of the first input, the user can trigger continuous shooting and determine the number of photos taken, which makes the operation convenient.
Referring to FIG. 6, a flowchart of a short-video recording method provided in an embodiment of the present disclosure is shown.
Step 601: Receive a first input from a user in the second display area while a target interface of a target video is displayed in the first display area.
In the embodiments of the present disclosure, the first input may be the user's finger sliding downward by at least 1 cm within the preset region of the second display area and then continuing to rotate counterclockwise; the first input may also be a long-press operation, which is not specifically limited here.
For step 601, reference may be made to step 101, and details are not repeated here.
Step 602: The target interface is a recording interface; in response to the first input, display a short-video recording interface in the second display area and start recording a short video.
In the embodiments of the present disclosure, when the target interface of the target video displayed in the first display area is a recording interface, that is, when a video is being recorded in the first display area, the terminal, in response to the first input performed in step 601, displays the short-video recording interface and records a short video while the first display area continues recording its video.
In summary, in the embodiments of the present disclosure, when the playback interface of the target video is displayed in the first display area, the first video image frame or N video image frames of the target video are captured in response to the first input, and the target image is then edited or shared through the second input; because both the first input and the second input are performed in the second display area, the user's screenshot, editing, and sharing operations neither block the target video nor affect the continuity of its playback. When the recording interface of the target video is displayed, the terminal is controlled, in response to the first input, to take one photo or M photos or to record a short video; because the first input is likewise performed in the second display area, photographing or recording neither blocks the target video nor affects the continuity of video recording, and the operation is fast and convenient.
Referring to FIG. 7, a flowchart of a parameter setting method provided in an embodiment of the present disclosure is shown.
Step 701: Receive a first input from a user in the second display area while a target interface of a target video is displayed in the first display area.
Step 702: In response to the first input, perform a target operation corresponding to the first input.
In the embodiments of the present disclosure, for steps 701 and 702 reference may be made to steps 101 and 102, and details are not repeated here.
Step 703: Receive a third input from the user in the second display area.
In the embodiments of the present disclosure, through steps 701 and 702 the user has obtained the recorded short video, the captured photo, or the target screenshot, which is already displayed in the second display area; at this point the third input is performed in the second display area. The third input may be a third operation, such as a tap, slide, or touch by the user on a preset identifier; it may also be a voice input, gesture input, or the like based on the preset identifier, which is not specifically limited in the embodiments of the present disclosure.
Step 704: In response to the third input, display a floating frame in the second display area, where the floating frame includes T candidate parameters.
According to the control instruction corresponding to the third input, the terminal calls up an effect-parameter adjustment floating frame in the second display area. The floating frame includes T candidate effect setting parameters, which the user may use to set effect parameters for the recorded short video, the captured photo, or the target screenshot, and also for the video being recorded in the first display area. The T candidate effect parameters may be effects such as filters, beautification, stickers, text, or mosaics, which is not specifically limited in the embodiments of the present disclosure.
Step 705: Receive a fourth input from the user in the floating frame.
The fourth input may be a fourth operation; by performing the fourth input on the floating frame, the user selects a target parameter to be set for the target object.
Step 706: In response to the fourth input, adjust the parameters of the target object currently displayed in the second display area according to the target parameter selected by the fourth input.
After selecting the target parameter, the user adjusts the effect of the target object through input operations, such as adjusting the beautified region, the filter color, or the type of sticker.
Optionally, after one parameter is set, other parameters may be selected and set by performing steps 705 and 706 again.
In summary, in the embodiments of the present disclosure, after obtaining the target image through the first input, the user continues with the third input that calls up the floating frame, then selects the target parameter in the floating frame through the fourth input and adjusts the parameters of the target object, finally setting the effect parameters for the target image. This effect-parameter setting process takes place in the second display area, so it neither blocks the target video being played or recorded in the first display area nor affects its continuity; the setting process is simple and fast, and it also meets users' diverse needs.
Referring to FIG. 8, a flowchart of a method for performing the target operation corresponding to the first input, provided in an embodiment of the present disclosure, is shown.
Step 801: acquire an input feature of the first input.
The input feature includes at least one of the following:
the input direction of the first input is a preset direction;
the input trajectory of the first input is a preset shape;
the length of the input trajectory of the first input is a preset length value;
the input time of the first input is within a preset time range;
an input parameter value of the first input is within a preset value range.
In this embodiment, the input direction refers to the direction in which the user's finger slides on the touchscreen; the input trajectory refers to the path of that slide; and the length of the input trajectory refers to the distance the finger slides on the touchscreen. That the input time of the first input is within a preset time range means that the duration for which the finger presses the touchscreen exceeds a preset duration; that an input parameter value of the first input is within a preset value range means that the pressure with which the finger presses the touchscreen is within a preset pressure range.
Step 802: perform the target operation that matches the input feature.
In this embodiment, different input features of the first input correspond to different operation types. The user needs to perform the first input according to the preset input features so that the terminal can identify the type of operation the user intends, such as a short-video recording operation, a photographing operation, or a screenshot operation. If the first input does not correspond to a preset input feature, the terminal will fail to identify, or will misidentify, the intended operation type. Therefore, preset input rules must first be configured in the terminal, so that the terminal associates different input features of the first input with different operation types, and the user can provide targeted inputs according to those preset rules.
For example, when the target interface is the playback interface, the preset input rule for capturing the first video image frame may be: slide the finger upward by at least 1 cm; and the preset input rule for capturing N video image frames may be: slide the finger upward by at least 1 cm, then continue rotating the finger clockwise. When the target interface is the recording interface, the preset input rule for controlling the terminal's camera to take one photo may be: slide the finger downward by at least 1 cm; the preset rule for controlling the camera to take M photos may be: slide the finger downward by at least 1 cm, then continue rotating the finger clockwise; and the preset input rule for recording a short video may be: slide the finger downward by at least 1 cm, then continue rotating the finger counterclockwise.
Correspondingly, the input features of the second, third, and fourth inputs also need to be preset in the terminal. For example, when the target interface is the playback interface or the recording interface, the preset third-input rule for bringing up the floating frame may be: slide the finger upward by at least 1 cm, then continue rotating the finger counterclockwise.
The preset input rules above are merely examples; those skilled in the art may configure different preset input rules as needed, and the embodiments of the present disclosure do not specifically limit this.
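The example rules above amount to a lookup from (slide direction, follow-up rotation) to an operation type, gated by the minimum slide length. A minimal sketch, in which the rule table, the function name, and the pixel equivalent of 1 cm are all assumptions for illustration:

```python
# Sketch of matching a first input's features to a target operation,
# following the example preset rules above. The rule table and MIN_LEN
# (an assumed pixel equivalent of the 1 cm minimum) are illustrative.

MIN_LEN = 100  # assumed pixels corresponding to the 1 cm minimum slide length

# (direction, rotation) -> operation type, per the example preset rules.
RULES = {
    ("up", None): "capture_one_frame",
    ("up", "clockwise"): "capture_n_frames",
    ("down", None): "take_one_photo",
    ("down", "clockwise"): "take_m_photos",
    ("down", "counterclockwise"): "record_short_video",
    ("up", "counterclockwise"): "show_floating_frame",  # example third-input rule
}


def classify(direction, length, rotation=None):
    """Return the operation matching the input features, or None if unrecognized."""
    if length < MIN_LEN:
        return None  # slide too short: the terminal cannot identify the operation
    return RULES.get((direction, rotation))


print(classify("up", 120))                        # capture_one_frame
print(classify("down", 150, "counterclockwise"))  # record_short_video
print(classify("down", 40))                       # None: below the preset length
```

An unrecognized combination returns None, matching the description that an input not corresponding to a preset feature cannot be identified by the terminal.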
Referring further to FIG. 9, a schematic diagram of a first video capture method provided in an embodiment of the present disclosure is shown.
In FIG. 9, the area marked 1 is the first display area, in which a video is playing; the area marked 2 is the second display area, on whose touchscreen the user's finger is performing the first input. The upward arrow indicates an upward slide, and the arrow's length indicates the slide distance. After the user makes this first input, the terminal captures the first video image frame of the target video.
Referring to FIG. 10, a schematic diagram of a first photographing method provided in an embodiment of the present disclosure is shown.
In FIG. 10, the area marked 1 is the first display area, in which a video is being recorded; the area marked 2 is the second display area, on whose touchscreen the user's finger is performing the first input. The downward arrow indicates a downward slide, and the arrow's length indicates the slide distance. After the user makes this slide, the terminal's camera is controlled to take one photo.
Referring to FIG. 11, a schematic diagram of a parameter setting method provided in an embodiment of the present disclosure is shown.
In FIG. 11, the area marked 1 is the first display area, in which a video is being recorded or played; the area marked 2 is the second display area, on whose touchscreen the user's finger is performing the third input. The upward arrow indicates an upward slide, and the rotating arrow indicates counterclockwise rotation. After the user makes this third input, the floating frame is brought up; as the user's finger continues rotating counterclockwise, the T candidate parameters in the floating frame are displayed in sequence.
In summary, the embodiments of the present disclosure complete the target-operation process by acquiring the input feature of the first input and performing the target operation that matches it, where the input features of the first input must be set in advance. Because the first input takes place in the second display area, it neither occludes the target video in the first display area nor interrupts its continuity.
Referring to FIG. 12, a structural block diagram of a first terminal provided in an embodiment of the present disclosure is shown. The display area of the screen of the terminal 900 includes a first display area and a second display area, and the terminal 900 may specifically include:
a first input receiving module 901, configured to receive a user's first input in the second display area while a target interface of a target video is displayed in the first display area;
a target operation execution module 902, configured to perform, in response to the first input, a target operation corresponding to the first input;
where the target interface includes a recording interface or a playback interface, and the target operation includes at least one of a short-video recording operation, a photographing operation, and a screenshot operation.
Optionally, the first display area and the second display area are different areas of the same display screen of the terminal; or, where the terminal includes a first screen and a second screen, the first display area is located in the first screen and the second display area is located in the second screen.
Optionally, referring to FIG. 13, on the basis of FIG. 12, the target operation execution module 902 of the terminal 900 may further include:
a first video capture submodule 9021, configured to capture the first video image frame currently displayed in the first display area, and generate and store a target image;
and the terminal 900 may further include:
a display module 903, configured to display the target image and a preset identifier in the second display area;
a second input receiving module 904, configured to receive the user's second input on the preset identifier;
an editing or sharing module 905, configured to perform, in response to the second input, an editing operation or a sharing operation on the target image.
Optionally, referring to FIG. 14, on the basis of FIG. 12, the target operation execution module 902 of the terminal 900 may further include:
a second video capture submodule 9022, configured to capture N video image frames of the target video, and generate and store N target images.
Optionally, referring to FIG. 15, on the basis of FIG. 12, the target operation execution module 902 of the terminal 900 may further include:
a first photographing submodule 9023, configured to control the terminal's camera to take one photo.
Optionally, referring to FIG. 16, on the basis of FIG. 12, the target operation execution module 902 of the terminal 900 may further include:
a second photographing submodule 9024, configured to control the terminal's camera to take M photos.
Optionally, referring to FIG. 17, on the basis of FIG. 12, the target operation execution module 902 of the terminal 900 may further include:
a short-video recording submodule 9025, configured to display a short-video recording interface in the second display area and start recording a short video.
Optionally, referring to FIG. 18, on the basis of FIG. 12, the terminal 900 may further include:
a third input receiving module 906, configured to receive the user's third input in the second display area;
a floating-frame display module 907, configured to display, in response to the third input, a floating frame in the second display area, the floating frame including T candidate parameters;
a fourth input receiving module 908, configured to receive the user's fourth input in the floating frame;
a parameter adjustment module 909, configured to adjust, in response to the fourth input, a parameter of the target object currently displayed in the second display area according to the target parameter selected by the fourth input;
where the target object includes at least one of a captured image, a taken photo, and a short-video recording interface.
Optionally, referring to FIG. 19, on the basis of FIG. 12, the target operation execution module 902 of the terminal 900 may further include:
an input feature acquisition submodule 9026, configured to acquire the input feature of the first input;
a target operation execution submodule 9027, configured to perform the target operation that matches the input feature.
The terminal 900 provided in FIGS. 12 to 19 of the embodiments of the present disclosure can implement each process of the method embodiments of FIGS. 1 to 8; to avoid repetition, details are not described again here.
Thus, in the embodiments of the present disclosure, a user's first input in the second display area is received while a target interface of a target video is displayed in the first display area, and in response to the first input a corresponding target operation is performed. By receiving the target input in the second display area in this way, short-video recording, photographing, or screenshots are carried out without occluding the video picture in the first display area or interrupting the continuity of the recorded video, and the operation is simple and fast.
FIG. 20 is a schematic diagram of the hardware structure of a terminal implementing each embodiment of the present disclosure.
The terminal 1000 includes, but is not limited to, a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, and a power supply 1011. Those skilled in the art will appreciate that the terminal structure shown in FIG. 20 does not constitute a limitation on the terminal; the terminal may include more or fewer components than shown, combine certain components, or arrange components differently. In the embodiments of the present disclosure, terminals include, but are not limited to, mobile phones, tablet computers, laptop computers, palmtop computers, in-vehicle terminals, wearable devices, and pedometers.
The user input unit 1007 is configured to receive a user's first input in the second display area while a target interface of a target video is displayed in the first display area;
the processor 1010 is configured to perform, in response to the first input, a target operation corresponding to the first input.
In the video processing method, terminal, and computer-readable storage medium provided in the embodiments of the present disclosure, the video processing method is applied to a terminal whose screen display area includes a first display area and a second display area, and the method includes: while a target interface of a target video is displayed in the first display area, receiving a user's first input in the second display area; and in response to the first input, performing a target operation corresponding to the first input; where the target interface includes a recording interface or a playback interface, and the target operation includes at least one of a short-video recording operation, a photographing operation, and a screenshot operation. In this way, while the first display area is recording or playing a video, the user can perform short-video recording, photographing, or screenshot operations in the second display area based on the recording or playback interface of the first display area; these operations neither occlude the video currently being recorded or played nor interrupt its continuity, and they are simple and fast to perform.
It should be understood that, in the embodiments of the present disclosure, the radio frequency unit 1001 may be used for receiving and sending signals during information transmission and reception or during a call; specifically, after receiving downlink data from a base station, it passes the data to the processor 1010 for processing, and it sends uplink data to the base station. Generally, the radio frequency unit 1001 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, and a duplexer. In addition, the radio frequency unit 1001 may also communicate with networks and other devices through a wireless communication system.
The terminal provides the user with wireless broadband Internet access through the network module 1002, for example helping the user send and receive e-mail, browse web pages, and access streaming media.
The audio output unit 1003 may convert audio data received by the radio frequency unit 1001 or the network module 1002, or stored in the memory 1009, into audio signals and output them as sound. Moreover, the audio output unit 1003 may also provide audio output related to a specific function performed by the terminal 1000 (for example, a call-signal reception sound or a message reception sound). The audio output unit 1003 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1004 is used to receive audio or video signals. The input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042. The graphics processor 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 1006. The image frames processed by the graphics processor 10041 may be stored in the memory 1009 (or another storage medium) or sent via the radio frequency unit 1001 or the network module 1002. The microphone 10042 can receive sound and process it into audio data. In a phone-call mode, the processed audio data may be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 1001 for output.
The terminal 1000 further includes at least one sensor 1005, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 10061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 10061 or the backlight when the terminal 1000 is moved to the ear. As a type of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally on three axes) and can detect the magnitude and direction of gravity when stationary; it can be used for recognizing terminal posture (such as portrait/landscape switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The sensor 1005 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described here.
The display unit 1006 is used to display information entered by the user or provided to the user. The display unit 1006 may include a display panel 10061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 1007 may be used to receive entered numeric or character information and to generate key-signal input related to user settings and function control of the terminal. Specifically, the user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071, also called a touchscreen, can collect the user's touch operations on or near it (such as operations with a finger, stylus, or any other suitable object or accessory on or near the touch panel 10071). The touch panel 10071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch and the signal produced by the touch operation, and passes the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 1010, and receives and executes commands sent by the processor 1010. In addition, the touch panel 10071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 10071, the user input unit 1007 may also include other input devices 10072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, and a joystick; these are not described here.
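The touch pipeline just described (touch detection device, then touch controller, then processor) can be sketched roughly as follows. The coordinate-conversion model and all names are assumptions for illustration, not the actual hardware interface.

```python
# Rough sketch of the touch pipeline in the paragraph above:
# the detection device reports a raw touch reading, the touch controller
# converts it to contact coordinates, and the processor receives them.
# The linear scaling model and all names are illustrative assumptions.

class TouchController:
    def __init__(self, panel_w, panel_h, screen_w, screen_h):
        # Map raw panel readings onto screen coordinates.
        self.sx = screen_w / panel_w
        self.sy = screen_h / panel_h

    def to_coordinates(self, raw_x, raw_y):
        return (round(raw_x * self.sx), round(raw_y * self.sy))


class Processor:
    def __init__(self):
        self.events = []

    def on_touch(self, point):
        # The processor determines the touch-event type and drives the display.
        self.events.append(point)


ctrl = TouchController(panel_w=1024, panel_h=2048, screen_w=1080, screen_h=2160)
cpu = Processor()
cpu.on_touch(ctrl.to_coordinates(512, 1024))  # raw reading from the detection device
print(cpu.events)  # [(540, 1080)]
```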
Further, the touch panel 10071 may cover the display panel 10061. After detecting a touch operation on or near it, the touch panel 10071 passes it to the processor 1010 to determine the type of the touch event, and the processor 1010 then provides a corresponding visual output on the display panel 10061 according to the type of the touch event. Although the touch panel 10071 and the display panel 10061 are shown here as two separate components implementing the input and output functions of the terminal, in some embodiments the touch panel 10071 and the display panel 10061 may be integrated to implement the input and output functions of the terminal; this is not specifically limited here.
The interface unit 1008 is an interface through which external devices connect to the terminal 1000. For example, external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The interface unit 1008 may be used to receive input from external devices (for example, data information and power) and transmit the received input to one or more elements within the terminal 1000, or to transmit data between the terminal 1000 and external devices.
The memory 1009 may be used to store software programs and various data. The memory 1009 may mainly include a program storage area and a data storage area: the program storage area may store the operating system and at least one application required for a function (such as an audio playback function or an image display function), while the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 1009 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 1010 is the control center of the terminal. It connects all parts of the entire terminal using various interfaces and lines, and performs the terminal's functions and processes data by running or executing software programs or modules stored in the memory 1009 and invoking data stored in the memory 1009, thereby monitoring the terminal as a whole. The processor 1010 may include one or more processing units; optionally, the processor 1010 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and so on, while the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 1010.
The terminal 1000 may further include a power supply 1011 (such as a battery) supplying power to the components; optionally, the power supply 1011 may be logically connected to the processor 1010 through a power management system, so that functions such as charging management, discharging management, and power-consumption management are implemented through the power management system.
In addition, the terminal 1000 includes some functional modules that are not shown, which are not described here.
Optionally, an embodiment of the present disclosure further provides a terminal, including a processor 1010, a memory 1009, and a computer program stored in the memory 1009 and executable on the processor 1010. When executed by the processor 1010, the computer program implements each process of the above video processing method embodiments; to avoid repetition, details are not described again here.
An embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the above video processing method embodiments; to avoid repetition, details are not described again here. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, as used herein, the terms "comprise" and "include", and any variants thereof, are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that includes the element.
From the description of the above embodiments, those skilled in the art will clearly understand that the methods of the above embodiments may be implemented by software plus the necessary general-purpose hardware platform, or by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present disclosure, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present disclosure.
The embodiments of the present disclosure have been described above with reference to the accompanying drawings, but the present disclosure is not limited to the specific embodiments described, which are merely illustrative rather than restrictive. Enlightened by the present disclosure, a person of ordinary skill in the art may devise many other forms without departing from the spirit of the present disclosure and the scope protected by the claims, all of which fall within the protection of the present disclosure.

Claims (22)

  1. A video processing method, applied to a terminal, wherein a display area of a screen of the terminal comprises a first display area and a second display area, the method comprising:
    while a target interface of a target video is displayed in the first display area, receiving a user's first input in the second display area;
    in response to the first input, performing a target operation corresponding to the first input;
    wherein the target interface comprises a recording interface or a playback interface, and the target operation comprises at least one of a short-video recording operation, a photographing operation, and a screenshot operation.
  2. The method according to claim 1, wherein the target interface is the playback interface; and
    the performing a target operation corresponding to the first input comprises:
    capturing a first video image frame currently displayed in the first display area, and generating and storing a target image.
  3. The method according to claim 2, further comprising, after the capturing a first video image frame currently displayed in the first display area, and generating and storing a target image:
    displaying the target image and a preset identifier in the second display area;
    receiving the user's second input on the preset identifier;
    in response to the second input, performing an editing operation or a sharing operation on the target image.
  4. The method according to claim 1, wherein the target interface is the playback interface; and
    the performing a target operation corresponding to the first input comprises:
    capturing N video image frames of the target video, and generating and storing N target images;
    wherein a 1st video image frame of the N video image frames is a video image frame of the target video displayed in the first display area at an input start moment of the first input, and an Nth video image frame of the N video image frames is a video image frame of the target video displayed in the first display area at an input end moment of the first input; and N is a positive integer.
  5. The method according to claim 1, wherein the target interface is the recording interface; and
    the performing a target operation corresponding to the first input comprises:
    controlling a camera of the terminal to take one photo.
  6. The method according to claim 1, wherein the target interface is the recording interface; and
    the performing a target operation corresponding to the first input comprises:
    controlling a camera of the terminal to take M photos;
    wherein M is a positive integer, a 1st photo of the M photos is taken by the camera at an input start moment of the first input, and an Mth photo is taken by the camera at an input end moment of the first input.
  7. The method according to claim 1, wherein the target interface is the recording interface; and
    the performing a target operation corresponding to the first input comprises:
    displaying a short-video recording interface in the second display area, and starting to record a short video;
    wherein a first video frame of the short video is an image captured by the camera at an input start moment of the first input, and a last video frame of the short video is an image captured by the camera at an input end moment of the first input.
  8. The method according to claim 1, further comprising:
    receiving the user's third input in the second display area;
    in response to the third input, displaying a floating frame in the second display area, the floating frame comprising T candidate parameters;
    receiving the user's fourth input in the floating frame;
    in response to the fourth input, adjusting a parameter of a target object currently displayed in the second display area according to a target parameter selected by the fourth input;
    wherein the target object comprises at least one of a captured image, a taken photo, and a short-video recording interface.
  9. The method according to claim 1, wherein
    the first display area and the second display area are different areas of a same display screen of the terminal;
    or, in a case where the terminal comprises a first screen and a second screen, the first display area is located in the first screen and the second display area is located in the second screen.
  10. The method according to claim 1, wherein the performing, in response to the first input, a target operation corresponding to the first input comprises:
    acquiring an input feature of the first input;
    performing a target operation that matches the input feature;
    wherein the input feature comprises at least one of the following:
    an input direction of the first input is a preset direction;
    an input trajectory of the first input is a preset shape;
    a length of the input trajectory of the first input is a preset length value;
    an input time of the first input is within a preset time range;
    an input parameter value of the first input is within a preset value range.
  11. A terminal, wherein a display area of a screen of the terminal comprises a first display area and a second display area, the terminal comprising:
    a first input receiving module, configured to receive a user's first input in the second display area while a target interface of a target video is displayed in the first display area;
    a target operation execution module, configured to perform, in response to the first input, a target operation corresponding to the first input;
    wherein the target interface comprises a recording interface or a playback interface, and the target operation comprises at least one of a short-video recording operation, a photographing operation, and a screenshot operation.
  12. The terminal according to claim 11, wherein the target interface is the playback interface; and
    the target operation execution module comprises:
    a first video capture submodule, configured to capture a first video image frame currently displayed in the first display area, and generate and store a target image.
  13. The terminal according to claim 11, wherein the target operation execution module further comprises:
    a display module, configured to display the target image and a preset identifier in the second display area;
    a second input receiving module, configured to receive the user's second input on the preset identifier;
    an editing or sharing module, configured to perform, in response to the second input, an editing operation or a sharing operation on the target image.
  14. The terminal according to claim 11, wherein the target interface is the playback interface; and
    the target operation execution module comprises:
    a second video capture submodule, configured to capture N video image frames of the target video, and generate and store N target images;
    wherein a 1st video image frame of the N video image frames is a video image frame of the target video displayed in the first display area at an input start moment of the first input, and an Nth video image frame of the N video image frames is a video image frame of the target video displayed in the first display area at an input end moment of the first input; and N is a positive integer.
  15. The terminal according to claim 11, wherein the target interface is the recording interface; and
    the target operation execution module comprises:
    a first photographing submodule, configured to control a camera of the terminal to take one photo.
  16. The terminal according to claim 11, wherein the target interface is the recording interface; and
    the target operation execution module comprises:
    a second photographing submodule, configured to control a camera of the terminal to take M photos;
    wherein M is a positive integer, a 1st photo of the M photos is taken by the camera at an input start moment of the first input, and an Mth photo is taken by the camera at an input end moment of the first input.
  17. The terminal according to claim 11, wherein the target interface is the recording interface; and
    the target operation execution module comprises:
    a short-video recording submodule, configured to display a short-video recording interface in the second display area and start recording a short video;
    wherein a first video frame of the short video is an image captured by the camera at an input start moment of the first input, and a last video frame of the short video is an image captured by the camera at an input end moment of the first input.
  18. The terminal according to claim 11, further comprising:
    a third input receiving module, configured to receive the user's third input in the second display area;
    a floating-frame display module, configured to display, in response to the third input, a floating frame in the second display area, the floating frame comprising T candidate parameters;
    a fourth input receiving module, configured to receive the user's fourth input in the floating frame;
    a parameter adjustment module, configured to adjust, in response to the fourth input, a parameter of a target object currently displayed in the second display area according to a target parameter selected by the fourth input;
    wherein the target object comprises at least one of a captured image, a taken photo, and a short-video recording interface.
  19. The terminal according to claim 11, wherein
    the first display area and the second display area are different areas of a same display screen of the terminal;
    or, in a case where the terminal comprises a first screen and a second screen, the first display area is located in the first screen and the second display area is located in the second screen.
  20. The terminal according to claim 11, wherein the target operation execution module comprises:
    an input feature acquisition submodule, configured to acquire an input feature of the first input;
    a target operation execution submodule, configured to perform a target operation that matches the input feature;
    wherein the input feature comprises at least one of the following:
    an input direction of the first input is a preset direction;
    an input trajectory of the first input is a preset shape;
    a length of the input trajectory of the first input is a preset length value;
    an input time of the first input is within a preset time range;
    an input parameter value of the first input is within a preset value range.
  21. A terminal, comprising a processor, a memory, and a program stored in the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the video processing method according to any one of claims 1 to 10.
  22. A computer-readable storage medium storing a program, wherein the program, when executed by a processor, implements the steps of the video processing method according to any one of claims 1 to 10.
PCT/CN2019/099921 2018-08-28 2019-08-09 Video processing method, terminal, and computer-readable storage medium WO2020042890A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810989058.1 2018-08-28
CN201810989058.1A CN109151546A (zh) 2018-08-28 Video processing method, terminal, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2020042890A1 (zh)

Family

ID=64828650

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/099921 WO2020042890A1 (zh) 2018-08-28 2019-08-09 视频处理方法、终端及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN109151546A (zh)
WO (1) WO2020042890A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109151546A (zh) * 2018-08-28 2019-01-04 维沃移动通信有限公司 一种视频处理方法、终端及计算机可读存储介质
CN109743634B (zh) * 2019-01-24 2021-05-28 维沃移动通信有限公司 一种视频播放控制方法及终端
CN109996121A (zh) * 2019-04-12 2019-07-09 晶晨半导体(上海)股份有限公司 一种远程操控视频播放终端的方法
CN110221795B (zh) * 2019-05-27 2021-10-22 维沃移动通信有限公司 一种屏幕录制方法及终端
CN112423092A (zh) * 2019-08-23 2021-02-26 北京小米移动软件有限公司 视频录制方法和视频录制装置
CN111010610B (zh) * 2019-12-18 2022-01-28 维沃移动通信有限公司 一种视频截图方法及电子设备
CN111061407B (zh) * 2019-12-25 2021-08-10 维沃移动通信有限公司 视频程序的操作控制方法、电子设备及存储介质
CN111182362A (zh) 2020-01-03 2020-05-19 北京小米移动软件有限公司 视频的控制处理方法及装置
CN113448658A (zh) * 2020-03-24 2021-09-28 华为技术有限公司 截屏处理的方法、图形用户接口及终端
CN113835656A (zh) * 2021-09-08 2021-12-24 维沃移动通信有限公司 显示方法、装置及电子设备
CN116828297A (zh) * 2021-10-22 2023-09-29 荣耀终端有限公司 一种视频处理方法和电子设备
CN116304176B (zh) * 2023-05-19 2023-08-22 江苏苏宁银行股份有限公司 基于实时数据中台的处理方法及处理系统

Citations (8)

Publication number Priority date Publication date Assignee Title
US20070120762A1 (en) * 2005-11-30 2007-05-31 O'gorman Robert W Providing information in a multi-screen device
CN105468237A (zh) * 2015-11-25 2016-04-06 掌赢信息科技(上海)有限公司 Screenshot method in a video call and electronic device
CN105573654A (zh) * 2015-12-23 2016-05-11 深圳市金立通信设备有限公司 Display method for a playback control and terminal
CN106648422A (zh) * 2016-11-18 2017-05-10 宇龙计算机通信科技(深圳)有限公司 Application display control method and apparatus
CN107707953A (zh) * 2017-10-20 2018-02-16 维沃移动通信有限公司 Resource data display method and mobile terminal
CN107885436A (zh) * 2017-11-28 2018-04-06 深圳天珑无线科技有限公司 Method, apparatus, and mobile terminal for interactive picture operations
CN108055572A (zh) * 2017-11-29 2018-05-18 努比亚技术有限公司 Control method for a mobile terminal, mobile terminal, and computer-readable storage medium
CN109151546A (zh) * 2018-08-28 2019-01-04 维沃移动通信有限公司 Video processing method, terminal, and computer-readable storage medium

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
KR102238531B1 (ko) * 2014-06-25 2021-04-09 엘지전자 주식회사 Mobile terminal and control method therefor
CN107395797A (zh) 2017-07-14 2017-11-24 惠州Tcl移动通信有限公司 Mobile terminal, control method therefor, and readable storage medium
CN107454321A (zh) 2017-07-28 2017-12-08 维沃移动通信有限公司 Photographing method, mobile terminal, and computer-readable storage medium
CN107613196A (zh) 2017-09-05 2018-01-19 珠海格力电器股份有限公司 Selfie method, apparatus therefor, and electronic device
CN108153466A (zh) 2017-11-28 2018-06-12 北京珠穆朗玛移动通信有限公司 Dual-screen-based operation method, mobile terminal, and storage medium
CN108259761B (zh) * 2018-03-23 2020-09-15 维沃移动通信有限公司 Photographing method and terminal
CN108260013B (zh) * 2018-03-28 2021-02-09 维沃移动通信有限公司 Video playback control method and terminal

Also Published As

Publication number Publication date
CN109151546A (zh) 2019-01-04


Legal Events

Date Code Title Description
121: EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 19855906; country of ref document: EP; kind code of ref document: A1)
NENP: non-entry into the national phase (ref country code: DE)
122: EP: PCT application non-entry in European phase (ref document number: 19855906; country of ref document: EP; kind code of ref document: A1)