CN116668762A - Screen recording method and device - Google Patents

Screen recording method and device

Info

Publication number
CN116668762A
Authority
CN
China
Prior art keywords
application
video
video stream
screen
recording
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211405500.4A
Other languages
Chinese (zh)
Other versions
CN116668762B (en)
Inventor
王逸凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202211405500.4A priority Critical patent/CN116668762B/en
Publication of CN116668762A publication Critical patent/CN116668762A/en
Application granted granted Critical
Publication of CN116668762B publication Critical patent/CN116668762B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334 Recording operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display

Abstract

The present application provides a screen recording method and device. The method includes: recording, by a first application, a video played by a second application to obtain a first recorded video, where the video stream of the second application can be decoded, the first recorded video is obtained by using the video stream corresponding to the video played by the second application, and the first recorded video does not contain screen additional information. In the technical solution of the present application, the first application records the second application without capturing the screen picture; instead, it obtains the first recorded video by using the video stream of the second application, so the first recorded video is a pure video that does not contain screen additional information.

Description

Screen recording method and device
Technical Field
The present application relates to the field of electronic information technologies, and in particular, to a screen recording method and apparatus.
Background
Currently, applications such as online video and online lessons are used more and more widely, and users can share videos through these applications to share experiences, skills, knowledge and the like in life, work, study and other aspects. Accordingly, users' demand for recording video keeps growing. If a user wants to keep the video being watched, it can usually only be saved through the download function in the application, but many applications do not support video download. The user can therefore only capture it through a separate recording application.
However, current recording applications record video through a screen-capture function, so interference factors such as pop-up windows and bullet screens that appear on the screen during recording are also recorded, which degrades the quality of the recorded video. For example, suppose application A is an online-lesson application that is playing a lesson video and application B is a screen recording application. When application B is used to record the video being played by application A, the conventional scheme actually records the screen picture and the played sound while application A plays the video. If, during playback, a short-message pop-up window appears on the screen, or an interruption prompt such as a loading indicator appears, all of this is recorded into the final video, which affects the user experience.
Therefore, how to better record the video played by a video application is a technical problem to be solved.
Disclosure of Invention
The application provides a screen recording method and device, which can better record videos played by video applications.
In a first aspect, a screen recording method is provided, including: recording, by a first application, a video played by a second application to obtain a first recorded video, where the video stream of the second application can be decoded, the first recorded video is obtained by using the video stream corresponding to the video played by the second application, and the first recorded video does not contain screen additional information.
In the technical solution of the present application, the first application records the second application without capturing the screen picture; instead, it obtains the first recorded video by using the video stream of the second application, so the first recorded video is a pure video that does not contain screen additional information.
It should be noted that the video stream of the second application being decodable means that the video stream can be obtained after decoding; if the video stream cannot be obtained after decoding, the first recorded video cannot be obtained.
The screen additional information may be understood as information, other than the video picture, that appears on the screen during the playing of the video. In one implementation, the screen additional information includes at least one of pop-up window information or a bullet screen.
Although the present application mainly records by obtaining the video stream in order to obtain a pure video, the first application may also record a third application using the conventional scheme. That is, the first application can record both video applications whose video stream can be decoded and video applications whose video stream cannot be decoded.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: recording, by the first application, a video played by a third application to obtain a second recorded video, where the video stream of the third application cannot be decoded, the second recorded video is obtained by using the screen picture corresponding to the video played by the third application, and the second recorded video contains screen additional information.
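As an illustration of this two-path design, the following minimal Java sketch shows the branching between stream-based recording and conventional screen-capture recording; every name in it is a hypothetical illustration and is not defined in the patent or in any platform API.

    // Minimal sketch of the two recording paths; all names are hypothetical.
    public final class RecordingDispatcher {

        interface Recorder {
            String record(String targetApp); // returns the path of the recorded file
        }

        private final Recorder streamRecorder;  // builds the file from the decoded video stream
        private final Recorder screenRecorder;  // conventional screen-capture recording

        public RecordingDispatcher(Recorder streamRecorder, Recorder screenRecorder) {
            this.streamRecorder = streamRecorder;
            this.screenRecorder = screenRecorder;
        }

        // Second-application case (video stream decodable): pure video without
        // screen additional information. Third-application case: fall back to
        // screen capture, so screen additional information may be recorded.
        public String record(String targetApp, boolean videoStreamDecodable) {
            return videoStreamDecodable
                    ? streamRecorder.record(targetApp)
                    : screenRecorder.record(targetApp);
        }
    }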
With reference to the first aspect, in some implementations of the first aspect, recording, by the first application, the video played by the second application to obtain the first recorded video may include: the first application acquires video stream data corresponding to the video played by the second application; and the first application synthesizes the first recorded video by using the video stream data.
For current electronic devices, different applications are independent of each other, so the first application needs to first acquire the data of the second application before that data can be further processed by the first application.
With reference to the first aspect, in some implementations of the first aspect, the first application acquiring the video stream data of the second application may include: after the video stream of the second application is decoded, obtaining a readable buffer of the video stream data; compressing the video stream data stored in the readable buffer and storing the compressed video stream data in a shared memory; and the first application reading the compressed video stream data from the shared memory.
For current electronic devices, during online video playback the incoming, not-yet-decoded video stream of the video application cannot be obtained (intercepted). Therefore, in the embodiments of the present application, while the video application plays the video, the buffer of the video stream data is acquired after the video stream has been decoded, so that the video stream data corresponding to the video being played can be obtained.
In one example, the acquisition may be achieved by modifying the processing flow of the source video stream. Analysis shows that, in the original processing flow of the electronic device, a video stream that is sent for display is stored in a first buffer, while in other cases it is stored in a second buffer. Because the data in the first buffer cannot be read, there is no way to obtain from it the video stream corresponding to the video played by the video application. It has been found, however, that although the data in the first buffer is unreadable, the data in the second buffer is readable. Therefore, embodiments of the present application modify the processing flow so that the decoded data is stored in the readable buffer (i.e., the second buffer here) regardless of whether it is sent for display, so that the data stream of the video application is always available. When the step in which the first application obtains the video stream data corresponding to the video played by the second application is executed, the video stream can then be obtained by using the readable buffer.
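The modified routing can be pictured with the following Java sketch. The buffer interface, the class name and the display flag are assumptions introduced only to illustrate the idea above; they are not part of the patent.

    // Sketch of the original vs. modified routing of decoded frames.
    final class DecodedFrameRouter {

        interface FrameBuffer {
            void put(byte[] frame);
        }

        private final FrameBuffer firstBuffer;    // used when a frame is sent for display; not readable
        private final FrameBuffer readableBuffer; // the "second buffer": its data can be read back

        DecodedFrameRouter(FrameBuffer firstBuffer, FrameBuffer readableBuffer) {
            this.firstBuffer = firstBuffer;
            this.readableBuffer = readableBuffer;
        }

        // Original flow: a frame sent for display is stored only in the first
        // (unreadable) buffer, so it cannot be captured for recording.
        void routeOriginal(byte[] frame, boolean sentForDisplay) {
            (sentForDisplay ? firstBuffer : readableBuffer).put(frame);
        }

        // Modified flow: every decoded frame is also kept in the readable buffer,
        // regardless of whether it is sent for display, so the recording
        // application can always obtain the video stream data.
        void routeModified(byte[] frame, boolean sentForDisplay) {
            readableBuffer.put(frame);
            if (sentForDisplay) {
                firstBuffer.put(frame); // displaying the frame is unchanged
            }
        }
    }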
The video compression step can reduce the data volume of the video stream while still meeting the requirements, reducing the burden of subsequent processing. In addition, using shared memory can further improve processing efficiency.
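One possible way to realize the shared-memory handoff on an Android-like system is sketched below using android.os.MemoryFile. This is an assumption about the mechanism (the patent does not name a specific API), and real cross-process sharing would additionally require passing the region's file descriptor to the reading process, for example over Binder; the region name and single-frame layout are illustrative only.

    import android.os.MemoryFile;
    import java.io.IOException;

    // Sketch of handing compressed frame data to the recording application
    // through a shared memory region.
    final class SharedFrameChannel {

        private final MemoryFile sharedMemory;

        SharedFrameChannel(int capacityBytes) throws IOException {
            this.sharedMemory = new MemoryFile("recording_frames", capacityBytes);
        }

        // Producer side: store one compressed frame at the start of the region.
        void writeCompressedFrame(byte[] compressed) throws IOException {
            sharedMemory.writeBytes(compressed, 0, 0, compressed.length);
        }

        // Consumer side (the first application): read the frame back for muxing.
        byte[] readCompressedFrame(int length) throws IOException {
            byte[] out = new byte[length];
            sharedMemory.readBytes(out, 0, 0, length);
            return out;
        }

        void close() {
            sharedMemory.close();
        }
    }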
With reference to the first aspect, in certain implementations of the first aspect, compressing the video stream data stored in the readable buffer may include: performing equal-proportion scaling on the video stream data in the readable buffer and rounding the resolution to even values, to obtain compressed video stream data with an even resolution. During video compression, the resolution of the incoming video stream must be even; otherwise transcoding may fail and the image may become corrupted. When scaling proportionally, the scaled resolution may turn out to be odd. Therefore, rounding the scaled resolution to an even value ensures that the resolution of the compressed video stream in subsequent processing is even, avoiding image corruption caused by transcoding failure.
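The even-resolution rule can be illustrated with a small helper; the class name, method name and the example numbers are illustrative assumptions, not values from the patent.

    // Scale the frame proportionally, then round each dimension down to an even
    // value, since an odd dimension can make transcoding fail.
    final class ResolutionScaler {

        static int[] scaleToEven(int width, int height, double ratio) {
            int w = (int) Math.round(width * ratio);
            int h = (int) Math.round(height * ratio);
            // Clearing the lowest bit rounds an odd value down to the nearest even value.
            return new int[] { w & ~1, h & ~1 };
        }

        public static void main(String[] args) {
            // 1080 x 2385 scaled by 0.45 gives 486 x 1073 (odd height);
            // the helper returns 486 x 1072 instead.
            int[] scaled = scaleToEven(1080, 2385, 0.45);
            System.out.println(scaled[0] + " x " + scaled[1]);
        }
    }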
With reference to the first aspect, in some implementations of the first aspect, the first application synthesizing the first recorded video by using the video stream data may include: synthesizing, by the first application, the video stream data with the audio data corresponding to the video played by the second application to obtain the first recorded video; or synthesizing, by the first application, the compressed video stream data with the audio data corresponding to the video played by the second application to obtain the first recorded video.
That is, if the video stream data has not been compressed, the uncompressed video stream data is used during synthesis; if the video stream data has been compressed, the compressed video stream data is used. For example, assume that in the above steps the video stream data of the second application is compressed and stored in the shared memory, with equal-proportion scaling and even-resolution rounding used during compression; the first application then reads the compressed video stream data from the shared memory, the resolution of the compressed video stream data is even, and this even-resolution compressed video stream data is used when synthesizing the video.
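A hedged sketch of this synthesis step is given below, assuming Android's MediaMuxer as the container writer. The patent does not name a specific muxing API, and the track formats and sample sources here are placeholders standing in for the (possibly compressed) video stream data and the corresponding audio data.

    import android.media.MediaCodec;
    import android.media.MediaFormat;
    import android.media.MediaMuxer;
    import java.io.IOException;
    import java.nio.ByteBuffer;

    // Sketch of combining the video stream data with the audio data into one file.
    final class RecordingComposer {

        private final MediaMuxer muxer;
        private int videoTrack = -1;
        private int audioTrack = -1;

        RecordingComposer(String outputPath) throws IOException {
            muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        }

        // Registers the (even-resolution) video track and the audio track, then starts muxing.
        void start(MediaFormat videoFormat, MediaFormat audioFormat) {
            videoTrack = muxer.addTrack(videoFormat);
            audioTrack = muxer.addTrack(audioFormat);
            muxer.start();
        }

        // Writes one encoded video sample taken from the video stream data.
        void writeVideoSample(ByteBuffer encoded, MediaCodec.BufferInfo info) {
            muxer.writeSampleData(videoTrack, encoded, info);
        }

        // Writes one encoded audio sample corresponding to the played video.
        void writeAudioSample(ByteBuffer encoded, MediaCodec.BufferInfo info) {
            muxer.writeSampleData(audioTrack, encoded, info);
        }

        // Finalizes the first recorded video file.
        void finish() {
            muxer.stop();
            muxer.release();
        }
    }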
In a second aspect, a screen recording apparatus is provided, the apparatus comprising units for performing any one of the methods of the first aspect, implemented by software and/or hardware.
In a third aspect, there is provided an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor being capable of implementing any one of the methods of the first aspect when the computer program is executed.
In a fourth aspect, there is provided a chip comprising a processor for reading and executing a computer program stored in a memory, the computer program being capable of implementing any one of the methods of the first aspect when executed by the processor.
Optionally, the chip further comprises a memory, the memory being electrically connected to the processor.
Optionally, the chip may further comprise a communication interface.
In a fifth aspect, there is provided a computer readable storage medium storing a computer program capable of implementing any one of the methods of the first aspect when the computer program is executed by a processor.
In a sixth aspect, there is provided a computer program product comprising a computer program capable of implementing any one of the methods of the first aspect when the computer program is executed by a processor.
Drawings
Fig. 1 is a schematic diagram of an interactive scene for recording a video being played.
Fig. 2A-2E are schematic diagrams of an interactive scene for recording a playing video, applicable to embodiments of the present application.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 4 is a software architecture block diagram of an electronic device according to an embodiment of the present application.
Fig. 5 is a schematic flow chart of a screen recording method according to an embodiment of the present application.
Fig. 6 is a schematic diagram showing the difference of video stream sources in different recording methods.
Fig. 7 is a schematic diagram of a process interaction procedure in a video recording process according to an embodiment of the present application.
Fig. 8 is a schematic diagram of a recording device according to an embodiment of the application.
Detailed Description
Embodiments of the present application will be described below with reference to the accompanying drawings. The screen recording method provided by the present application can be applied to scenarios in which various electronic devices record video played by video applications.
Fig. 1 is a schematic diagram of an interactive scene for recording a video being played. As shown in fig. 1, interface (a1) is a desktop interface of electronic device A. The user may open the video application to watch video by tapping the icon of the video application on interface (a1). Suppose the user taps the video application on interface (a1) and opens it, as shown in interface (a2). Interface (a2) is the running interface of the video application, and in interface (a2) it can be seen that the user has opened, for viewing, a food-preparation video shared by another user on the platform. Now suppose the user wants to record the video being played; because downloading in this video application requires a membership, the user decides to record the video with a recording application of electronic device A. Suppose the user opens a shortcut window of some alternative applications provided by electronic device A through a slide-down gesture at the top of the screen, as shown in interface (a3). As can be seen from interface (a3), the shortcut window includes shortcut icons of a plurality of applications. Assuming the user taps the shortcut icon of the screen recording application in interface (a3), the screen recording application of electronic device A is opened, as shown in interface (a4). During recording, what is actually recorded is the screen within the recording period, so if pop-up windows or bullet screens appear on the screen during recording, they are recorded; other elements in interfaces (a3) and (a4), such as the video's caption text and its like, favorite and forward buttons, are recorded into the video file, and the interruption pop-up window appearing in interface (a4) is also recorded into the video file. Interface (a4) takes the additional information of a loading prompt pop-up window as an example, that is, the loading prompt pop-up window shown when video playback is interrupted by a network problem during recording is recorded as well.
Assuming the user continues recording until interface (a5) and then taps the stop icon in the recording option box in interface (a5), recording stops and is finished. The recorded video is stored in electronic device A, and the user can open it through file management, gallery, video and other applications. Interface (a6) takes as an example the user finding and opening the recorded video by tapping the gallery application. In the recorded video, the various elements that appeared on the screen during recording, including the pop-up windows that appeared midway, affect the user's viewing experience. Assuming the interruption pop-up window in interface (a4) was shown for 1 minute, 1 minute of the 5-minute video finally recorded is a still picture of an interruption, and the viewing experience is poor.
It should be appreciated that fig. 1 is only one example of a currently common recording process, and other recording scenarios are possible. For example, the video may be played by another video application, the recording application may be any other application capable of screen recording, or the video may be watched in landscape or full-screen mode with the bullet-screen mode turned on, and so on, without limitation. These recordings are, however, essentially processes in which a screen recording application records the screen while a video application plays a video. That is, in the recording scenario shown in fig. 1, it is not the video application recording the video it plays itself (which would essentially be downloading the video), but a separate recording application recording the video played by the video application. For ease of understanding, in the embodiments of the present application, the application that plays the video is referred to as the video application, and the application that records the video is referred to as the recording application.
In the recording process shown in fig. 1, because the recording application can essentially only record the video by capturing the screen, the recorded video includes additional information such as pop-up window information or bullet screens, which affects the recording effect.
To address this problem, an embodiment of the present application provides a screen recording method that performs the final recording by obtaining the video stream, so that the recorded video is purer and is not affected by interference factors during playback.
For easy understanding, the following describes possible interaction scenarios of the screen recording method according to the embodiment of the present application with reference to fig. 2A to fig. 2E.
Fig. 2A-2E are schematic diagrams of an interactive scene for recording a playing video, applicable to embodiments of the present application. It will be appreciated that the video recording portion of fig. 2A-2E may use the embodiments of the present application to record video, whereas the video recording portion in fig. 1 is recorded using the conventional scheme.
The electronic device B may be a mobile phone, a tablet computer, a notebook computer, an extended reality (XR) terminal, a vehicle-mounted terminal, an intelligent wearable device, or the like. XR terminals may include virtual reality (VR) terminals, augmented reality (AR) terminals, and mixed reality (MR) terminals.
As shown in fig. 2A, the video application is tapped in interface 210 of electronic device B and opened. Browsing and viewing of information such as videos is performed in the running interface of the opened video application, as shown by interface 220. Suppose the user has opened the video application and is watching the food-preparation sharing video shown in interface 220. The user may open the recording application at interface 220 by making a left-slide gesture on the right side of the screen. It should be understood that, in interface 220, the trigger for opening the screen recording application may be another manner, for example the slide-down operation at the top of the screen in fig. 1, or a sliding operation from the bottom of the screen, etc.; this is not limited, and the present application takes a left slide as an example.
As shown in fig. 2B, a floating window, i.e. the floating window shown in interface 230, is opened by the left-slide operation in interface 220. The floating window contains a plurality of application icons, and the user can tap the icon of an application capable of video recording, here taking a note-taking application as an example. It should be understood that a note application refers to an application capable of text recording and may also go by application names such as memo or notepad. In the conventional technology the note application can only record text, but in the embodiment of the present application the function of recording video is also integrated into the application, so that the user can record video through the note application rather than only text. The note application here is regarded as an example of the screen recording application of the embodiments of the present application. That is, tapping the icon of the screen recording application in the floating window of interface 230 opens the interface of the screen recording application.
Interface 240 is the running interface of the note application. As shown in interface 240, the video application and the note application are displayed simultaneously in the same interface in split-screen mode: here the screen is split into upper and lower halves, with the video application on the upper half and the note application on the lower half. It should be understood, however, that other split-screen modes may be used for display, for example left-and-right split screens, or the note application on the upper half and the video application on the lower half, without limitation. The split-screen display shown in interface 240 better matches the user's usage habits. It should also be appreciated that the display sizes of the two applications are not limited. For example, the picture of the note application may be relatively large so that the user can edit conveniently; or the picture of the video application may be relatively large so that the user can watch conveniently; or, by default, the picture of the note application is smaller but is enlarged when the user taps the editing area of the note application, taking both the viewing and editing needs of the user into account.
The user may tap the New option in interface 240 to create a note.
As shown in fig. 2C, after the user taps the New option in interface 240, interface 250 is opened. In interface 250, the video continues to play, and the display interface of the note application includes an editable title bar, a function bar, and recording and stop option icons. The user can edit the title information in interface 250, tap the recording option to record, or tap options such as menu, edit, style, handwriting and exit in the function bar to select and edit other functions. It should be understood that interface 250 is merely an example of the interface of a recording application and the elements it includes are merely examples, as long as an element that triggers the recording option is available.
The user may begin recording by tapping the recording option on interface 250. Interface 260 shows the recorded picture after a period of time has elapsed. In interface 260, the user may tap the pause option to pause the recording and can also see the duration recorded so far. Interface 260 also gives an example of screen additional information, here a message pop-up window. With the conventional scheme this message pop-up window would be recorded; however, because the recording process uses the screen recording method of the embodiments of the present application, the video stream used is the source video stream of the video application rather than the screen picture stream during playback, so the message pop-up window is not recorded.
As shown in fig. 2D, at interface 270 the duration of the video recording has reached 05:18, and the user has also edited the title to "cooking". Assuming the user wants to end the recording at this point, he can tap the stop option in interface 270 to stop recording.
Interface 280 is the interface after recording stops. In interface 280, a screenshot of the recorded video is placed under the title according to the layout rules of the note application. The screenshot of the recorded video may be a video frame from anywhere in the recording, for example the start frame or the end frame; interface 280 takes the first frame of the recording as the example screenshot. The screenshot also carries a play icon, and the user can watch the recorded video by tapping the play icon. The user may also continue other editing with other functions of the note application. It should be understood, however, that the manner of viewing the recorded video is not limited, and the user may find and open the recorded video through various suitable applications such as file management, gallery, video or notes.
As shown in fig. 2E, assume the user opens the recorded video to play it and watches the recording again. As shown by interface 290, no screen additional information is included in the video. For ease of describing the scheme of the embodiments of the present application, taking the picture corresponding to interface 260 in fig. 2C as an example, screen information such as the message pop-up window in interface 260 does not appear in the picture played in interface 290.
Throughout the interactive scenario of fig. 2A-2E, recording runs from when the recording option is tapped on interface 250 until the stop option is tapped on interface 270. Because the whole recording process uses the screen recording method of the embodiments of the present application, the recorded video does not include screen additional information.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 3, the electronic device 300 may include a processor 310, an external memory interface 320, an internal memory 321, a universal serial bus (universal serial bus, USB) interface 330, a charge management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, an earphone interface 370D, a sensor module 380, keys 390, a motor 391, an indicator 392, a camera 393, a display 394, a user identification module (subscriber identification module, SIM) card interface 395, and the like. The sensor module 380 may include, among other things, a pressure sensor 380A, a gyroscope sensor 380B, a barometric pressure sensor 380C, a magnetic sensor 380D, an acceleration sensor 380E, a distance sensor 380F, a proximity sensor 380G, a fingerprint sensor 380H, a temperature sensor 380J, a touch sensor 380K, an ambient light sensor 380L, a bone conduction sensor 380M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 300. In other embodiments of the application, electronic device 300 may include more or less components than illustrated, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Illustratively, the processor 310 shown in fig. 3 may include one or more processing units, such as: the processor 310 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 300, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 310 for storing instructions and data. In some embodiments, the memory in the processor 310 is a cache memory. The memory may hold instructions or data that the processor 310 has just used or recycled. If the processor 310 needs to reuse the instruction or data, it may be called directly from the memory. Repeated accesses are avoided and the latency of the processor 310 is reduced, thereby improving the efficiency of the system.
In some embodiments, processor 310 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
In some embodiments, the I2C interface is a bi-directional synchronous serial bus including a serial data line (SDA) and a serial clock line (derail clock line, SCL). The processor 310 may contain multiple sets of I2C buses. The processor 310 may be coupled to the touch sensor 380K, charger, flash, camera 393, etc., respectively, via different I2C bus interfaces. For example, the processor 310 may couple the touch sensor 380K through an I2C interface, causing the processor 310 to communicate with the touch sensor 380K through an I2C bus interface, implementing the touch functionality of the electronic device 300.
In some embodiments, the I2S interface may be used for audio communication. The processor 310 may contain multiple sets of I2S buses. The processor 310 may be coupled to the audio module 370 via an I2S bus to enable communication between the processor 310 and the audio module 370.
In some embodiments, the audio module 370 may communicate audio signals to the wireless communication module 360 via the I2S interface to enable answering calls via the bluetooth headset.
In some embodiments, the PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. The audio module 370 and the wireless communication module 360 may be coupled through a PCM bus interface.
In some embodiments, the audio module 370 may also transmit audio signals to the wireless communication module 360 via the PCM interface to enable phone answering via the bluetooth headset. It should be appreciated that both the I2S interface and the PCM interface may be used for audio communication.
In some embodiments, the UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. UART interfaces are typically used to connect the processor 310 with the wireless communication module 360. For example, the processor 310 communicates with a bluetooth module in the wireless communication module 360 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 370 may transmit audio signals to the wireless communication module 360 through a UART interface to implement a function of playing music through a bluetooth headset.
In some embodiments, a MIPI interface may be used to connect processor 310 with peripheral devices such as display screen 394, camera 393, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. The processor 310 and the camera 393 communicate through the CSI interface to implement the photographing function of the electronic device 300. The processor 310 and the display screen 394 communicate via a DSI interface to implement the display functions of the electronic device 300.
In some embodiments, the GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. GPIO interfaces may be used to connect processor 310 with camera 393, display screen 394, wireless communication module 360, audio module 370, sensor module 380, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
Illustratively, the USB interface 330 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 330 may be used to connect a charger to charge the electronic device 300, or may be used to transfer data between the electronic device 300 and a peripheral device. It may also be used to connect a headset and play audio through the headset. The interface may further be used to connect other electronic devices, such as AR devices.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not meant to limit the structure of the electronic device 300. In other embodiments of the present application, the electronic device 300 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The charge management module 340 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 340 may receive a charging input of a wired charger through the USB interface 330. In some wireless charging embodiments, the charge management module 340 may receive wireless charging input through a wireless charging coil of the electronic device 300. The battery 342 is charged by the charge management module 340, and the electronic device may be powered by the power management module 341.
The power management module 341 is configured to connect the battery 342, the charge management module 340 and the processor 310. The power management module 341 receives input from the battery 342 and/or the charge management module 340 to power the processor 310, the internal memory 321, the external memory, the display screen 394, the camera 393, the wireless communication module 360, and the like. The power management module 341 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance), and other parameters. In other embodiments, the power management module 341 may also be disposed in the processor 310. In other embodiments, the power management module 341 and the charging management module 340 may also be disposed in the same device.
The wireless communication function of the electronic device 300 may be implemented by the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 300 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example, the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 350 may provide a solution for wireless communication applied on the electronic device 300, such as at least one of the following: second generation (2nd generation, 2G) mobile communication solutions, third generation (3rd generation, 3G) mobile communication solutions, fourth generation (4th generation, 4G) mobile communication solutions, fifth generation (5th generation, 5G) mobile communication solutions. The mobile communication module 350 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc. The mobile communication module 350 may receive electromagnetic waves from the antenna 1, perform processing such as filtering and amplifying on the received electromagnetic waves, and then transmit them to the modem processor for demodulation. The mobile communication module 350 may further amplify the signal modulated by the modem processor, and the amplified signal is converted into electromagnetic waves by the antenna 1 and radiated. In some embodiments, at least some of the functional modules of the mobile communication module 350 may be disposed in the processor 310. In some embodiments, at least some of the functional modules of the mobile communication module 350 may be provided in the same device as at least some of the modules of the processor 310.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to speaker 370A, receiver 370B, etc.), or displays images or video through display screen 394. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 350 or other functional module, independent of the processor 310.
The wireless communication module 360 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., applied to the electronic device 300. The wireless communication module 360 may be one or more devices that integrate at least one communication processing module. The wireless communication module 360 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 310. The wireless communication module 360 may also receive a signal to be transmitted from the processor 310, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 of electronic device 300 is coupled to mobile communication module 350 and antenna 2 of electronic device 300 is coupled to wireless communication module 360 such that electronic device 300 may communicate with networks and other electronic devices via wireless communication techniques. The wireless communication technology may include at least one of the following communication technologies: global system for mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, IR technologies. The GNSS may include at least one of the following positioning techniques: global satellite positioning system (global positioning system, GPS), global navigation satellite system (global navigation satellite system, GLONASS), beidou satellite navigation system (beidou navigation satellite system, BDS), quasi zenith satellite system (quasi-zenith satellite system, QZSS), satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 300 implements display functions through a GPU, a display screen 394, an application processor, and the like. The GPU is a microprocessor for image processing, connected to the display screen 394 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 310 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 394 is used for displaying images, videos, and the like. The display screen 394 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 300 may include 1 or N display screens 394, N being a positive integer greater than 1.
Electronic device 300 may implement capture functionality through an ISP, camera 393, video codec, GPU, display 394, and application processor, among others.
The ISP is used to process the data fed back by camera 393. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 393.
Camera 393 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 300 may include 1 or N cameras 393, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 300 is selecting a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 300 may support one or more video codecs. Thus, the electronic device 300 may play or record video in a variety of encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of the electronic device 300 may be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 320 may be used to connect an external memory card, such as a Secure Digital (SD) card, to expand the storage capability of the electronic device 300. The external memory card communicates with the processor 310 through the external memory interface 320 to implement a data storage function. For example, files such as music and video are stored in the external memory card.
The internal memory 321 may be used to store computer executable program code comprising instructions. The processor 310 executes various functional applications of the electronic device 300 and data processing by executing instructions stored in the internal memory 321. The internal memory 321 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 300 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 321 may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 300 may implement audio functionality through an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, an ear-headphone interface 370D, and an application processor, among others. Such as music playing, recording, etc.
The audio module 370 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 370 may also be used to encode and decode audio signals. In some embodiments, the audio module 370 may be disposed in the processor 310, or some of the functional modules of the audio module 370 may be disposed in the processor 310.
Speaker 370A, also known as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 300 may listen to music, or to hands-free conversations, through the speaker 370A.
A receiver 370B, also referred to as a "earpiece", is used to convert the audio electrical signal into a sound signal. When electronic device 300 is answering a telephone call or voice message, voice may be received by placing receiver 370B close to the human ear.
Microphone 370C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can sound near the microphone 370C through the mouth, inputting a sound signal to the microphone 370C. The electronic device 300 may be provided with at least one microphone 370C. In other embodiments, the electronic device 300 may be provided with two microphones 370C, and may implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 300 may also be provided with three, four, or more microphones 370C to enable collection of sound signals, noise reduction, identification of sound sources, directional recording functions, etc.
The earphone interface 370D is used to connect a wired earphone. The earphone interface 370D may be the USB interface 330, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 380A is configured to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 380A may be disposed on the display screen 394. There are various types of pressure sensors 380A, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. A capacitive pressure sensor may comprise at least two parallel plates with conductive material. When a force is applied to the pressure sensor 380A, the capacitance between the electrodes changes, and the electronic device 300 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 394, the electronic device 300 detects the touch operation intensity according to the pressure sensor 380A. The electronic device 300 may also calculate the location of the touch based on the detection signal of the pressure sensor 380A. In some embodiments, touch operations that act on the same touch location but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is smaller than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
The gyroscope sensor 380B may be used to determine the motion posture of the electronic device 300. In some embodiments, the angular velocity of the electronic device 300 about three axes (i.e., the x, y, and z axes) may be determined by the gyroscope sensor 380B. The gyroscope sensor 380B may be used for photographing anti-shake. Illustratively, when the shutter is pressed, the gyroscope sensor 380B detects the shake angle of the electronic device 300, calculates the distance that the lens module needs to compensate according to the angle, and lets the lens counteract the shake of the electronic device 300 through reverse motion, thereby realizing anti-shake. The gyroscope sensor 380B may also be used in navigation and somatosensory gaming scenarios.
The air pressure sensor 380C is used to measure air pressure. In some embodiments, the electronic device 300 calculates altitude from barometric pressure values measured by the barometric pressure sensor 380C, aiding in positioning and navigation.
The magnetic sensor 380D includes a Hall sensor. The electronic device 300 may detect the opening and closing of a flip holster using the magnetic sensor 380D. In some embodiments, when the electronic device 300 is a flip phone, the electronic device 300 may detect the opening and closing of the flip cover according to the magnetic sensor 380D, and set features such as automatic unlocking on flip-open according to the detected opening or closing state of the holster or of the flip cover.
The acceleration sensor 380E may detect the magnitude of acceleration of the electronic device 300 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 300 is stationary. The acceleration sensor 380E can also be used to recognize the posture of the electronic device, and is applied in landscape/portrait switching, pedometers and other applications.
The distance sensor 380F is used to measure distance. The electronic device 300 may measure the distance by infrared or laser. In some embodiments, the electronic device 300 may range using the distance sensor 380F to achieve fast focus.
The proximity light sensor 380G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 300 emits infrared light outward through the light emitting diode. The electronic device 300 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that an object is in the vicinity of the electronic device 300. When insufficient reflected light is detected, the electronic device 300 may determine that there is no object in the vicinity of the electronic device 300. The electronic device 300 can detect that the user holds the electronic device 300 close to the ear by using the proximity light sensor 380G, so as to automatically extinguish the screen to achieve the purpose of saving power. The proximity light sensor 380G may also be used in holster mode, pocket mode to automatically unlock and lock the screen.
The ambient light sensor 380L is used to sense ambient light level. The electronic device 300 may adaptively adjust the brightness of the display screen 394 based on the perceived ambient light level. The ambient light sensor 380L may also be used to automatically adjust white balance during photographing. The ambient light sensor 380L may also cooperate with the proximity light sensor 380G to detect if the electronic device 300 is in a pocket to prevent false touches.
The fingerprint sensor 380H is used to collect a fingerprint. The electronic device 300 can utilize the collected fingerprint characteristics to realize fingerprint unlocking, access an application lock, fingerprint photographing, fingerprint incoming call answering and the like.
The temperature sensor 380J is used to detect temperature. In some embodiments, the electronic device 300 executes a temperature processing strategy using the temperature detected by the temperature sensor 380J. For example, when the temperature reported by the temperature sensor 380J exceeds a threshold, the electronic device 300 reduces the performance of a processor located near the temperature sensor 380J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 300 heats the battery 342 to prevent an abnormal shutdown caused by low temperature. In other embodiments, when the temperature is below a further threshold, the electronic device 300 boosts the output voltage of the battery 342 to avoid an abnormal shutdown caused by low temperature.
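Purely as an illustration (the threshold names and their ordering are hypothetical; the embodiment only states that different temperature ranges trigger different strategies), the temperature processing strategy can be sketched as:

// Hypothetical thermal policy driven by the reported temperature.
// Assumes kVeryLowTempThreshold < kLowTempThreshold < kHighTempThreshold.
void onTemperatureReport(float celsius) {
    if (celsius > kHighTempThreshold) {
        throttleNearbyProcessor();        // reduce performance to lower power consumption and heat
    } else if (celsius < kVeryLowTempThreshold) {
        boostBatteryOutputVoltage();      // keep the supply stable in extreme cold
    } else if (celsius < kLowTempThreshold) {
        heatBattery();                    // avoid abnormal shutdown caused by low temperature
    }
}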
The touch sensor 380K is also referred to as a "touch panel". The touch sensor 380K may be disposed on the display screen 394, and the touch sensor 380K and the display screen 394 together form a touchscreen. The touch sensor 380K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of the touch event. Visual output related to the touch operation may be provided through the display screen 394. In other embodiments, the touch sensor 380K may also be disposed on a surface of the electronic device 300 other than where the display screen 394 is located.
The bone conduction sensor 380M may acquire a vibration signal. In some embodiments, the bone conduction sensor 380M may acquire the vibration signal of the bone that vibrates when a person speaks. The bone conduction sensor 380M may also contact the human pulse to receive a blood-pressure pulsation signal. In some embodiments, the bone conduction sensor 380M may also be provided in a headset to form a bone conduction headset. The audio module 370 may parse a voice signal from the vibration signal of the vibrating bone acquired by the bone conduction sensor 380M, so as to implement a voice function. The application processor may parse heart rate information from the blood-pressure pulsation signal acquired by the bone conduction sensor 380M, so as to implement a heart rate detection function.
The keys 390 include a power key, volume keys, and the like. The keys 390 may be mechanical keys or touch keys. The electronic device 300 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 300.
The motor 391 may generate a vibration alert. The motor 391 may be used for incoming-call vibration alerts as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects. Touch operations applied to different areas of the display screen 394 may also correspond to different vibration feedback effects of the motor 391. Different application scenarios (such as time reminders, message reception, alarm clocks, and games) may likewise correspond to different vibration feedback effects. The touch vibration feedback effect may also be customized.
The indicator 392 may be an indicator light, which may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The SIM card interface 395 is used to connect a SIM card. A SIM card may be inserted into, or removed from, the SIM card interface 395 to make contact with, or be separated from, the electronic device 300. The electronic device 300 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 395 may support Nano SIM cards, Micro SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 395 at the same time, and the cards may be of the same or different types. The SIM card interface 395 may also be compatible with different types of SIM cards and with external memory cards. The electronic device 300 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 300 employs an eSIM, namely an embedded SIM card, which can be embedded in the electronic device 300 and cannot be separated from it.
The software system of the electronic device 300 may employ a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiment of the present application takes an Android system with a layered architecture as an example; the software structure of the electronic device is exemplarily described below with reference to fig. 4.
Fig. 4 is a software architecture block diagram of an electronic device according to an embodiment of the present application. The electronic device in fig. 4 may be the electronic device 300 shown in fig. 3, and as shown in fig. 4, the layered architecture divides the software into several layers, each layer having a distinct role and division of labor; the layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, namely an application layer, an application framework layer, a system layer and a kernel layer from top to bottom. The application layer may include a series of application packages.
As shown in fig. 4, the application package may include Applications (APP) such as short messages, notes, cameras, video, navigation, conversation, gallery, etc. For convenience, in the embodiment of the present application, the application program is simply referred to as an application.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 4, the application framework layer may include a window manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager may obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar and can be used to convey notification-type messages that disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to notify that a download is complete, to give message alerts, and so on. The notification manager may also present notifications in the top status bar of the system in the form of a chart or scroll-bar text (such as a notification of an application running in the background), or present notifications on the screen in the form of a dialog window. For example, text is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, or an indicator light blinks.
The system layer may include the Android Runtime and system libraries. The Android Runtime includes core libraries and a virtual machine, and is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. Such as surface manager (surface manager), media library (media library), three-dimensional graphics processing library (e.g., openGL ES), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of multiple audio formats, playback and recording of multiple video formats, and still image files. The media library may support a variety of audio and video coding formats, such as MPEG4, H.264, MP3 (moving picture experts group audio layer III), AAC (advanced audio coding), AMR (adaptive multi-rate), JPG (joint photographic experts group), and PNG (portable network graphics).
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The kernel layer is the layer between hardware and software; it includes at least a sensor driver, a camera driver, a display driver, and the like.
It should be noted that fig. 3 listed above is a block diagram of one possible electronic device, and fig. 4 is a software architecture diagram of one possible electronic device. Any of the electronic devices shown in fig. 1, 2A-2E may have the structure described in fig. 3 and the software structure shown in fig. 4.
Fig. 5 is a schematic flow chart of a screen recording method according to an embodiment of the present application. The method can be applied to various electronic devices shown in fig. 1 to 4. The steps of fig. 5 are described below.
S501, recording a video played by a second application by using a first application to obtain a first recorded video.
In step S501, the video stream of the second application is decodable.
The first recorded video is obtained by utilizing a video stream corresponding to the video played by the second application, and the first recorded video does not contain screen additional information.
The screen additional information may be understood as information outside of the video picture appearing on the screen during the playing of the video. In one implementation, the screen additional information includes at least one of pop-up information or a bullet screen.
The first application is an application capable of video recording, and may be, for example, a video recording application such as the note taking application shown in fig. 2A-2E.
The second application is an application capable of video playback and may be, for example, the video application shown in figs. 2A-2E. Step S501 is to record, by using the first application, the video played by the second application.
In practice, it is not known in advance whether the video stream of a video application can be decoded. If the video stream can be decoded, recording can be performed by acquiring the video stream; if it cannot be decoded, recording can only be performed by recording the screen picture (i.e., acquiring the screen picture stream).
That is, the video stream of the second application is decodable, so the video stream can be obtained after decoding; if the video stream could not be obtained after decoding, the first recorded video could not be obtained in this way.
With reference to figs. 2A-2E, after the icon of the note application (an example of the first application) is clicked in the interface 230, the electronic device B may, in response to the interaction event of the clicking operation, open the first application while keeping the second application (here, the video application in figs. 2A-2E) running, so that both applications run at the same time, as shown in the interface 240. After the record-start option is clicked in the interface 250, the electronic device B starts video recording in response to the clicking operation, that is, performs step S501 to record the video. After the stop option is clicked in the interface 270, the electronic device B, in response to the interaction event of the clicking operation, stops recording, that is, ends step S501 and obtains the first recorded video. No screen additional information is included in the first recorded video.
In one implementation, step S501 includes:
S501A, acquiring video stream data corresponding to video played by a second application by using a first application;
S501B, synthesizing the acquired video stream data into a first recorded video by using a first application.
Optionally, in step S501A, after the video stream of the second application is decoded, a buffer (buffer) of the video stream data may be obtained, so as to intercept the video stream.
For current electronic devices, during online video playback, the video application's incoming, not-yet-decoded video stream cannot be obtained (intercepted). Therefore, in the embodiment of the present application, while the video application plays a video, the buffer of the video stream data is obtained after the video stream is decoded, so that the video stream data corresponding to the video being played can be obtained.
In one example, the acquisition may be achieved by modifying the processing flow of the source video stream. Analysis shows that, in the electronic device's original processing flow, the video stream is stored in a first buffer when it is to be sent for display, and in a second buffer in other cases. Because the data in the first buffer cannot be read, there is no way to obtain the video stream corresponding to the video played by the video application. It has been found, however, that although the data in the first buffer is unreadable, the data in the second buffer is readable. Therefore, the embodiment of the present application modifies the processing flow so that the video stream is stored in the readable buffer (i.e., the second buffer here) regardless of whether it is sent for display, and the data stream of the video application is thus always available. When step S501A is performed, the video stream can be acquired from the second buffer.
Optionally, the buffer of the video stream can be obtained in MediaCodec::onReleaseOutputBuffer by modifying the relevant code in CCodecBufferChannel::start as described below.
In the code, GraphicOutputBuffers may be regarded as an example of the first buffer, and RawGraphicOutputBuffers as an example of the second buffer (i.e., the readable buffer). The original processing, in which GraphicOutputBuffers is used only when the frame is sent for display and RawGraphicOutputBuffers is used in other cases, is modified so that RawGraphicOutputBuffers is always used, without judging whether the frame is sent for display. On the one hand, the execution of the original display flow is not affected; on the other hand, the video stream is stored into a readable buffer, so that it can be acquired when the video is recorded.
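The modified code itself is not reproduced above; the following is a minimal sketch of the described change, assuming a simplified form of CCodecBufferChannel::start (the member names and surrounding logic are illustrative, not the actual AOSP source):

// Original (assumed, simplified) selection in CCodecBufferChannel::start:
// the readable RawGraphicOutputBuffers is used only when no output surface is attached.
//
//     if (outputSurface != nullptr) {
//         output->buffers.reset(new GraphicOutputBuffers(mName));              // send-for-display path, not readable
//     } else {
//         output->buffers.reset(new RawGraphicOutputBuffers(numOutputSlots, mName));
//     }
//
// Modified (sketch): always use the readable buffer type, without judging whether the
// frame is sent for display, so the recording path can read the decoded video stream.
output->buffers.reset(new RawGraphicOutputBuffers(numOutputSlots, mName));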
Optionally, step S501A may further include a video compression step, which can reduce the amount of video stream data while still meeting the requirement, thereby reducing the burden of subsequent processing. In addition, using shared memory can improve processing efficiency and save storage space.
Transferring data directly between two applications is limited on an electronic device; for example, it may not be supported between different applications for security reasons. In the embodiment of the present application, the video stream data occupies a large amount of storage space, and the processing of the video application and that of the recording application cannot be seamlessly connected: the video application only needs to play and performs no other processing, so it produces the video stream quickly, while the recording application needs a series of processing such as compression and synthesis, so it handles the data more slowly. If the video stream data were simply copied from the video application to the recording application, the data would pile up, a large amount of storage space would be needed, the copying itself would be relatively slow, and the delay would increase. To address these problems, the embodiment of the present application makes the video application and the screen recording application share memory, which further improves processing efficiency and reduces the storage space occupied.
In one implementation, step S501A includes: after decoding the video stream of the second application, obtaining a readable buffer area of the video stream data; the video stream data stored in the readable buffer area is compressed and then stored in a shared memory (such as a media server); the first application reads the compressed video stream data from the shared memory. It should also be appreciated that the video stream of the second application is compressed and stored in the shared memory, so that the compressed video stream data is then utilized by the first application when executing step S501B.
In one example, compressing the video stream data stored in the readable buffer may include: performing equal-proportion (proportional) compression on the video stream data in the readable buffer and rounding the resolution to an even value, to obtain compressed video stream data with an even resolution.
During video compression, the resolution of the incoming video stream must be even; otherwise transcoding may fail, resulting in a garbled image. When scaling proportionally, the scaled resolution may turn out to be odd. Therefore, rounding the scaled resolution to an even value ensures that the compressed video stream used in subsequent processing has an even resolution, avoiding the garbled image caused by a transcoding failure. For example, an odd width or height can be adjusted by +1 or -1. For instance, when a 1920x1080 video is compressed proportionally to 480p, the scaled width is 1920 x 480 / 1080, which is approximately 853; since 853 is odd, it is adjusted to 853 + 1 = 854, and the target resolution after compression is thus determined to be 854x480.
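As a minimal numerical sketch of this rule (the helper name and the choice of rounding up are illustrative; the embodiment only requires the result to be even):

// Hypothetical helper: scale one dimension proportionally and round the result to an even value.
static int32_t scaledEvenDimension(int64_t srcDim, int64_t dstRefDim, int64_t srcRefDim) {
    int64_t scaled = srcDim * dstRefDim / srcRefDim;  // e.g. 1920 * 480 / 1080 = 853
    if (scaled % 2 != 0) {
        scaled += 1;                                  // 853 is odd, so adjust to 854
    }
    return static_cast<int32_t>(scaled);
}

// Usage: compressing 1920x1080 to 480p gives a target resolution of 854x480.
// int32_t targetWidth = scaledEvenDimension(1920, 480, 1080);   // 854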
However, it should be understood that in other implementations, video compression may be omitted and the uncompressed video stream data may be stored directly in the shared memory; in that case, the video stream data read from the shared memory by the first application is uncompressed, and the uncompressed data is used in the subsequent synthesis. Compared with the conventional copying scheme, this can still improve processing efficiency and reduce the storage space occupied to some extent, but the improvement is more limited than compressing first and then storing into the shared memory.
It should also be appreciated that, although compression and storage are described as two steps, in practice they can be accomplished in a single operation.
In one example, compression and sharing of the video can be implemented by combining inter-process communication (binder) with memory management.
The creation of the shared memory and the compression and storage of the video stream data into it may be performed in the process of the video application (the second application in step S501), and the reading of the video stream data from the shared memory may be performed in the process of the recording application (the first application in step S501).
The following is an example.
Video applications (e.g., the second application described above) may implement video compression and storage into shared memory through the following code.
Creation of shared memory
// dstBufferSize: size of one compressed (scaled) frame to be shared with the recording application.
sp<IMemory> yuvSharedMemory = new MemoryBase(new MemoryHeapBase(dstBufferSize, 0, "yuvSharedHeap"), 0, dstBufferSize);
Data compression and copying to shared memory
ssize_t heapOffset;
size_t heapSize;
// Obtain the underlying heap of the shared memory and the offset of the region within it.
sp<IMemoryHeap> heap = yuvSharedMemory->getMemory(&heapOffset, &heapSize);
// Scale (compress) the decoded I420 frame from its original size directly into the shared memory.
ScaleI420(buffer->data(), mWidth, mHeight,
          (uint8_t*)heap->base() + heapOffset, mScaledWidth, mScaledHeight);
Here, ScaleI420 is a transcoding (scaling) function; mWidth and mHeight are the original width and height of the video stream data; mScaledWidth and mScaledHeight are the target width and height of the video stream data (the dimensions after compression into the shared memory).
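The reading side is not shown above; a minimal sketch of how the recording application's process might read the compressed frame back out of the shared memory received over binder (the variable names are illustrative) is:

// Hypothetical reading side in the recording (first) application's process.
// yuvSharedMemory is the sp<IMemory> received from the video application via binder.
ssize_t heapOffset;
size_t heapSize;
sp<IMemoryHeap> heap = yuvSharedMemory->getMemory(&heapOffset, &heapSize);

// The compressed I420 frame of mScaledWidth x mScaledHeight starts at base() + heapOffset.
const uint8_t* scaledFrame = (const uint8_t*)heap->base() + heapOffset;
size_t frameSize = mScaledWidth * mScaledHeight * 3 / 2;   // I420 uses 1.5 bytes per pixel
// scaledFrame/frameSize can now be handed to the video encoder for synthesis (step S501B).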
In one implementation, step S501B may include: synthesizing the video stream data and the audio data corresponding to the video played by the second application by using the first application to obtain a first recorded video; or synthesizing the compressed video stream data with the audio data corresponding to the video played by the second application by using the first application to obtain a first recorded video.
That is, if the video stream data is not compressed, the uncompressed video stream data is used for the synthesis; if it is compressed, the compressed video stream data is used. Assuming that in step S501A the video stream data of the second application is compressed and stored in the shared memory, with equal-proportion compression and rounding of the resolution to an even value applied during compression, the first application reads the compressed video stream data from the shared memory, its resolution is even, and step S501B then uses this compressed video stream data with an even resolution.
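The embodiment does not specify which muxing API the first application uses for this synthesis; one plausible sketch, assuming the already-encoded video and audio samples are muxed into an MP4 file with the NDK AMediaMuxer API (the file descriptor, track formats, and sample loop are placeholders), is:

#include <media/NdkMediaMuxer.h>

// fd: descriptor of the output .mp4 file; videoFormat/audioFormat: AMediaFormat of the encoded tracks.
AMediaMuxer* muxer = AMediaMuxer_new(fd, AMEDIAMUXER_OUTPUT_FORMAT_MPEG_4);
ssize_t videoTrack = AMediaMuxer_addTrack(muxer, videoFormat);
ssize_t audioTrack = AMediaMuxer_addTrack(muxer, audioFormat);
AMediaMuxer_start(muxer);

// For every encoded sample produced from the (compressed) video stream or the audio stream:
//     AMediaCodecBufferInfo info = { offset, size, presentationTimeUs, flags };
//     AMediaMuxer_writeSampleData(muxer, isVideo ? videoTrack : audioTrack, sampleData, &info);

AMediaMuxer_stop(muxer);
AMediaMuxer_delete(muxer);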
In the method shown in fig. 5, the first application records the second application, and the first application does not record the screen image any more, but uses the video stream of the second application to obtain the first recorded video, so that the first recorded video is a pure video and does not contain additional screen information.
Although the present application mainly records by obtaining the video stream so as to obtain a pure video, the first application can also record a third application using the conventional scheme. That is, the method of the embodiment of the present application can record both video applications whose video stream is decodable and video applications whose video stream is not decodable.
In some implementations, the method shown in fig. 5 may further include step S502.
S502, recording the video played by the third application by using the first application to obtain a second recorded video.
In step S502, the video stream of the third application is not decodable.
The second recorded video is obtained by utilizing a screen picture corresponding to the video played by the third application, and the second recorded video contains screen additional information.
That is, since the video stream of the third application is not decodable, the video stream of the third application cannot be obtained, and therefore, when the first application is used for recording the third application, only the conventional screen recording mode can be adopted.
In one implementation, step S502 may include: and acquiring a screen picture data stream when the third application plays the video by using the first application, and synthesizing the screen picture data stream into a second recorded video.
In one example, step S502 may further include: and obtaining the original audio data when the third application plays the video, and synthesizing the screen picture data stream and the original audio data into a second recorded video. In this example, the second recorded video would include screen additional information and would not include sound information other than the video itself.
In another example, step S502 may further include: and recording the audio data of the third application when playing the video, and synthesizing the screen picture data stream and the recorded audio data into a second recorded video. In this example, the second recorded video would include screen additional information, as well as audio information other than the video itself. The recording may be performed, for example, by starting a recording application of the electronic device, or a recording function may be set in the first application.
As can be seen from steps S501 and S502, when the video stream of the video application is decodable, the video application is the second application, and step S501 may be performed, where the first application is used to record the first recorded video. When the video stream of the video application is not decodable, the video application is the third application, and step S502 may be executed, where the first application is used to record the second recorded video.
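A minimal control-flow sketch of this dispatch (the helper names are hypothetical; the embodiment only requires that the decodability of the video stream determine which recording path is taken) is:

// Hypothetical dispatch between the two recording paths of the first application.
if (isVideoStreamDecodable(playbackApp)) {
    recordFromVideoStream(playbackApp);    // step S501: pure video, no screen additional information
} else {
    recordFromScreenCapture(playbackApp);  // step S502: conventional screen recording, overlays included
}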
To facilitate understanding of the difference between the scheme of the present application and the conventional scheme, assume that a fourth application is a recording application of the conventional scheme, for example the recording application in fig. 1. If the video played by a video application is recorded by using the fourth application, only a recorded video containing the screen additional information can be obtained, regardless of whether the video stream of the video application is decodable. That is, if the fourth application is used to record the videos played by the second application and the third application, only recorded videos containing the screen additional information can be obtained.
It should be understood that the video stream and the screen data stream are different. The conventional screen recording scheme intercepts the screen data stream, that is, it essentially records the screen while the video application is playing and then synthesizes the screen pictures into a video, and those pictures include the screen additional information. In the embodiment of the present application, the video stream is the source video stream of the video played by the video application, which contains no screen information; if the source audio stream and the source video stream of the played video are synthesized into a recorded video, a video free of screen interference factors is obtained. For ease of understanding, a description is given below with reference to fig. 6.
Fig. 6 is a schematic diagram showing the difference in video stream sources between different recording methods. As shown in fig. 6, when playing a video, the player needs to decode the video stream first and then display the video picture on the screen, that is, it first performs 601 (decoding the video) and then performs 602 (displaying the video picture). The conventional recording scheme performs 603 (capturing the screen data stream) after 602, thereby obtaining a video stream and finally a recorded video. In the present embodiment, 604 is performed instead: the pure video stream is intercepted directly from the player to obtain the video stream, and the first recorded video is then obtained. It can be seen that 604 is an example of step S501A.
A player may be understood as a background hardware device that performs the operations associated with playing a video when the video is played.
For ease of understanding, the process of recording video is described below in conjunction with fig. 7. Fig. 7 is a schematic diagram of a process interaction procedure in a video recording process according to an embodiment of the present application.
As shown in fig. 7, recording a video may involve four processes: a video application process, a recording service (MediaServer) process, a screen recording process, and a note process. The screen recording process may be a process started after recording is started in the note application; the video application process is the process in which the video application plays the video; and the recording service process provides background services during video playback and/or recording.
As can be seen from the figure, the video application process, the screen recording process, and the note process each include both an application layer part and an application framework layer (which may be referred to as the native layer) part, while the recording service process is a native layer process.
For ease of understanding, the various processes illustrated in FIG. 7 are described below in connection with FIGS. 2A-2E.
In fig. 7, the video application process may be regarded as the process in which the video application in figs. 2A-2E plays video, for example after the user clicks the icon of the video application on the interface 210 to open it, or while the user watches a video on the interface 220. In the video application process shown in fig. 7, the video stream of the video being played may be sent to the MediaCodec module; the MediaCodec module sends it to the video data compression module; the video data compression module then sends the compressed video stream to the MediaServer process, or returns it to the video application. The audio stream of the video being played is sent to the AudioTrack module, the AudioTrack module sends it to the AudioHal module, and the AudioHal module sends it to the AudioRecorder module of the screen recording process.
The recording service process mainly forwards the video stream data of the video application, through a video data forwarding module, to the video encoding module in the screen recording process. Fig. 7 takes the case in which the forwarded video stream data has been compressed as an example.
The screen recording process mainly synthesizes the received video stream data and the received audio stream data into video. In fig. 2A-2E, after a recording option is clicked on in interface 250 for recording, the recording process here begins to perform the relevant steps.
The note process includes a screen-recording start module and an HnMediaCodec module for starting the recording service process. In figs. 2A-2E, when the note application icon is clicked in the interface 230, the note application is launched and the note process begins to perform the relevant steps. After the new note is clicked in the interface 240, the note process begins to perform the steps related to note taking, such as starting to take text notes. If the recording option is clicked in the interface 250, the note process uses the screen-recording start module to send, to the screen recording process, information for starting the screen recording process, and uses the HnMediaCodec module to send, to the MediaServer process, information for starting the recording service process.
For example, when the recording option is clicked in the interface 250, the note process starts, and the screen recording process and the MediaServer process are started by the note process. The video application process is started; the video application sends the video stream to the MediaCodec module; the MediaCodec module sends the video stream to the video data compression module; the video data compression module compresses the video stream and sends the compressed video stream to the MediaServer process; and the video stream data forwarding module then sends the compressed video stream to the Video Encoder module. At the same time, the video application sends the audio stream to the AudioTrack module, the AudioTrack module sends it to the AudioHal module, and the AudioHal module sends it to the AudioRecorder module of the screen recording process. The screen recording process synthesizes the received audio stream and the compressed video stream into a video.
It should be understood that fig. 7 is only one example of each process, and in practice the number of modules and the specific module types included in each process may be adjusted as desired. For example, the screen recording process may further include other modules for video synthesis, which are not listed one by one.
The method of the embodiment of the present application is mainly described above with reference to the drawings. It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown in order, these steps are not necessarily performed in the order shown in the figures. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages. The following describes an apparatus according to an embodiment of the present application with reference to the accompanying drawings.
Fig. 8 is a schematic diagram of a recording device according to an embodiment of the application. As shown in fig. 8, the apparatus 2000 includes a processing unit 2001. The apparatus 2000 may be any of the electronic devices described above, such as electronic device a, electronic device B, or electronic device 300.
The apparatus 2000 can be used to perform any of the above screen recording methods. For example, steps S501 and/or S502 may be performed.
In one implementation, the processing unit 2001 may also deploy the first application, and deploy the second application and/or the third application.
In one implementation, the apparatus 2000 may further include a storage unit 2002 for storing data such as a video stream, an audio stream, a synthesized video, and the like. The memory unit 2002 may be integrated in the processing unit 2001 or may be a unit separate from the processing unit 2001.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The embodiment of the application also provides electronic equipment, which comprises: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, the processor implementing steps of any of the methods described above when the computer program is executed.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps for implementing the various method embodiments described above.
Embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform steps that enable the implementation of the method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiments, and may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the computer program may implement the steps of each of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing device/electronic apparatus, recording medium, computer memory, read-only memory (ROM), random access memory (random access memory, RAM), electrical carrier signals, telecommunications signals, and software distribution media. Such as a U-disk, removable hard disk, magnetic or optical disk, etc. In some jurisdictions, computer readable media may not be electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other manners. For example, the apparatus/electronic device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted as "when", "upon", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as meaning "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (16)

1. A method of recording a screen, comprising:
and recording the video played by the second application by using the first application to obtain a first recorded video, wherein the video stream of the second application can be decoded, the first recorded video is obtained by using the video stream corresponding to the video played by the second application, and the first recorded video does not contain screen additional information.
2. The method of claim 1, wherein the screen additional information comprises at least one of pop-up information or a bullet screen.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
and recording the video played by the third application by using the first application to obtain a second recorded video, wherein the video stream of the third application is not decodable, the second recorded video is obtained by using a screen picture corresponding to the video played by the third application, and the second recorded video comprises the screen additional information.
4. A method according to any one of claims 1 to 3, wherein the recording, with the first application, the video played by the second application to obtain a first recorded video includes:
Acquiring video stream data corresponding to the video played by the second application by utilizing the first application;
synthesizing the video stream data into the first recorded video using the first application.
5. The method of claim 4, wherein the obtaining video stream data of the second application with the first application comprises:
after the video stream of the second application is decoded, a readable buffer area of the video stream data is obtained;
compressing the video stream data stored in the readable buffer area and storing the compressed video stream data into a shared memory;
and reading the compressed video stream data from the shared memory by using the first application.
6. The method of claim 5, wherein compressing the video stream data stored in the readable buffer comprises:
and carrying out equal-proportion compression on the video stream data of the readable buffer area and rounding the resolution to an even value, to obtain compressed video stream data with an even resolution.
7. The method of any of claims 4 to 6, wherein the synthesizing the video stream data into the first recorded video with the first application comprises:
Synthesizing the video stream data and audio data corresponding to the video played by the second application by using the first application to obtain the first recorded video; or synthesizing the compressed video stream data and the audio data corresponding to the video played by the second application by using the first application to obtain the first recorded video.
8. A screen recording apparatus, comprising:
the processing unit is used for recording the video played by the second application by using the first application to obtain a first recorded video, the video stream of the second application can be decoded, the first recorded video is obtained by using the video stream corresponding to the video played by the second application, and the first recorded video does not contain screen additional information.
9. The apparatus of claim 8, wherein the screen additional information comprises at least one of pop-up information or a bullet screen.
10. The apparatus according to claim 8 or 9, wherein the processing unit is further configured to:
and recording the video played by the third application by using the first application to obtain a second recorded video, wherein the video stream of the third application is not decodable, the second recorded video is obtained by using a screen picture corresponding to the video played by the third application, and the second recorded video comprises the screen additional information.
11. The apparatus according to any one of claims 8 to 10, wherein the processing unit is specifically configured to:
acquiring video stream data corresponding to the video played by the second application by utilizing the first application;
synthesizing the video stream data into the first recorded video using the first application.
12. The apparatus according to claim 11, wherein the processing unit is specifically configured to:
after the video stream of the second application is decoded, a readable buffer area of the video stream data is obtained;
compressing the video stream data stored in the readable buffer area and storing the compressed video stream data into a shared memory;
and reading the compressed video stream data from the shared memory by using the first application.
13. The apparatus according to claim 12, wherein the processing unit is specifically configured to:
and carrying out equal-proportion compression on the video stream data of the readable buffer area and rounding the resolution to an even value, to obtain compressed video stream data with an even resolution.
14. The apparatus according to any one of claims 11 to 13, wherein the processing unit is specifically configured to:
synthesizing the video stream data and audio data corresponding to the video played by the second application by using the first application to obtain the first recorded video; or synthesizing the compressed video stream data and the audio data corresponding to the video played by the second application by using the first application to obtain the first recorded video.
15. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 7 when the computer program is executed.
16. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 7.
CN202211405500.4A 2022-11-10 2022-11-10 Screen recording method and device Active CN116668762B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211405500.4A CN116668762B (en) 2022-11-10 2022-11-10 Screen recording method and device

Publications (2)

Publication Number Publication Date
CN116668762A true CN116668762A (en) 2023-08-29
CN116668762B CN116668762B (en) 2024-04-05

Family

ID=87724838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211405500.4A Active CN116668762B (en) 2022-11-10 2022-11-10 Screen recording method and device

Country Status (1)

Country Link
CN (1) CN116668762B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107832205A (en) * 2017-10-16 2018-03-23 深圳天珑无线科技有限公司 Terminal and its record screen method of running, external equipment, storage device
CN109089131A (en) * 2018-09-21 2018-12-25 广州虎牙信息科技有限公司 A kind of record screen live broadcasting method, device, equipment and storage medium based on IOS system
CN111031177A (en) * 2019-12-09 2020-04-17 上海传英信息技术有限公司 Screen recording method, device and readable storage medium
WO2022105341A1 (en) * 2020-11-18 2022-05-27 北京达佳互联信息技术有限公司 Video data processing method and apparatus, computer storage medium, and electronic device

Also Published As

Publication number Publication date
CN116668762B (en) 2024-04-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant