CN110417991B - Screen recording method and electronic equipment - Google Patents

Screen recording method and electronic equipment

Info

Publication number
CN110417991B
Authority
CN
China
Prior art keywords
terminal
recording
application
screen
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910528140.9A
Other languages
Chinese (zh)
Other versions
CN110417991A (en)
Inventor
熊刘冬 (Xiong Liudong)
余平 (Yu Ping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201910528140.9A
Publication of CN110417991A
Priority to PCT/CN2020/096559
Application granted
Publication of CN110417991B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

A screen recording method and an electronic device relate to the field of communications technology. The method can record one or more application windows in the current screen, increasing the proportion of useful content in the screen recording file and meeting diverse user needs. The method includes the following steps: the terminal displays a first interface comprising N application windows; on detecting an operation for starting the screen recording function, it displays a second interface comprising N controls; on detecting one or more operations on the N controls, it determines M application windows as recording objects; on detecting an operation for starting recording, the terminal begins recording the content of the M application windows; on detecting an operation for adjusting one of the M application windows, the terminal changes the size or position of that window on the screen; and when the terminal detects an operation for stopping recording, it generates either a single video file containing the content of the M application windows, or M video files, each containing the content of one of the M application windows.

Description

Screen recording method and electronic equipment
Technical Field
The present application relates to the field of communications technologies, and in particular, to a screen recording method and an electronic device.
Background
The screen recording function is a common feature of mobile phones. Existing screen recording methods record the entire interface currently displayed on the phone screen and save it as a video file. However, as phone screens grow larger, and especially with the advent of folding-screen phones, a user can open multiple applications at once, so that the screen displays the interfaces of several applications in one interface. In this case, recording the entire screen with the existing technique is inconvenient. For example, during recording there may be content on the screen that the user does not want recorded, such as a window of an instant messaging application that involves the user's privacy. As another example, because the recorded video file contains the windows of every application on the screen, a viewer may be unsure where to focus, making the video confusing to watch.
Disclosure of Invention
The screen recording method and electronic device provided in this application can record one or more application windows in the current screen, increasing the proportion of useful content in the screen recording file and meeting diverse user needs.
A first aspect provides a screen recording method, including: the terminal displays a first interface, where the first interface includes N application windows and N is an integer greater than or equal to 2; the terminal detects a first operation for starting the screen recording function; in response to the first operation, the terminal displays a second interface, where the second interface includes N controls in one-to-one correspondence with the N application windows; the terminal detects one or more second operations on at least one of the N controls; in response to the one or more second operations, the terminal determines M of the N application windows as recording objects, where M is a positive integer less than or equal to N; the terminal detects a third operation for starting recording; in response to the third operation, the terminal starts recording the content of the M application windows; the terminal detects a fourth operation for adjusting the size or position of one of the M application windows; in response to the fourth operation, the terminal changes the size or position of that application window on the screen; the terminal detects a fifth operation for stopping recording; in response to the fifth operation, the terminal generates a video file that includes the content of the M application windows.
In this way, the terminal can record one or more application windows in the first interface according to the user's selection of which windows to record (or not record), implementing a user-defined recording range, increasing the proportion of useful content in the screen recording file (that is, the video file generated by screen recording), and meeting diverse user needs.
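As a rough sketch only (all class and variable names here are hypothetical, not the patented implementation), the selection-and-recording flow of the first aspect can be modeled like this:

```python
# Hypothetical sketch of the first-aspect flow: the user toggles per-window
# controls to pick M of the N on-screen application windows, then recording
# captures only those windows into a single video file.

class RecordingSession:
    def __init__(self, window_names):
        # One control per application window, all unselected initially.
        self.selected = {name: False for name in window_names}
        self.frames = []

    def toggle(self, name):
        # Second operation: tapping the control that corresponds to one window.
        self.selected[name] = not self.selected[name]

    def recording_objects(self):
        # The M windows chosen as recording objects (M <= N).
        return [n for n, on in self.selected.items() if on]

    def capture(self, screen_content):
        # After the third operation: keep only the selected windows' content.
        self.frames.append({n: screen_content[n] for n in self.recording_objects()})

    def stop(self):
        # Fifth operation: generate one video file with the M windows' content.
        return {"windows": self.recording_objects(), "frames": self.frames}

session = RecordingSession(["browser", "notes", "chat"])
session.toggle("browser")
session.toggle("notes")          # "chat" stays unrecorded (privacy)
session.capture({"browser": "b0", "notes": "n0", "chat": "c0"})
video = session.stop()
```

Because only the selected windows are captured per frame, the unselected "chat" window never enters the generated file.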
In one possible implementation, the generated video file does not include the content of the N-M application windows other than the M application windows among the N application windows.
In this way, the user can protect their privacy, for example by setting an application window that involves private information as a window that is not recorded. Windows that need no attention can likewise be excluded from recording, which increases the proportion of useful content in the screen recording file and improves the user experience.
In a possible implementation, after the terminal detects the third operation for starting recording and before it detects the fifth operation for stopping recording, the method further includes: the terminal detects a sixth operation for closing one of the M application windows; in response to the sixth operation, the terminal stops recording the content of the closed application window.
In this way, during a recording session, if the terminal detects that the user has closed one or more of the selected application windows, it can automatically stop recording those windows, making the recording process more intelligent.
In some embodiments, to guard against accidental operation, the terminal may display a prompt when it detects that the user closes an application window that is being recorded.
In other embodiments, the recording process may end automatically if the terminal detects that all the application windows selected for recording have been closed.
In a possible implementation, after the terminal detects the third operation for starting recording and before it detects the fifth operation for stopping recording, the method further includes: the terminal detects a seventh operation for opening a new application window; in response to the seventh operation, the terminal prompts the user to choose whether to record the content of the newly opened application window; the terminal detects an eighth operation for recording the newly opened application window; and in response to the eighth operation, the terminal starts recording the content of the newly opened application window while continuing to record the M application windows.
Of course, the terminal may also, by default, not record a newly opened application window. In that case, when it detects that the user has started a new application, the terminal may display no prompt and simply continue recording the originally selected application windows.
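The mid-recording behavior described above can be sketched as follows (a hypothetical model, with the user prompt represented as a callback; none of these names come from the patent):

```python
# Hypothetical sketch of mid-recording events: closing a recorded window
# stops recording it; opening a new window asks the user whether to record it
# (modeled here as a callback returning True or False).

class LiveRecorder:
    def __init__(self, recorded, ask_user):
        self.recorded = set(recorded)   # windows currently being recorded
        self.ask_user = ask_user        # prompt shown when a new window opens
        self.finished = False

    def on_window_closed(self, name):
        # Sixth operation: stop recording only the closed window.
        self.recorded.discard(name)
        if not self.recorded:           # all selected windows closed:
            self.finished = True        # end the whole recording automatically

    def on_window_opened(self, name):
        # Seventh/eighth operations: record the new window only if the user agrees.
        if self.ask_user(name):
            self.recorded.add(name)

rec = LiveRecorder(["browser", "notes"], ask_user=lambda name: name == "video")
rec.on_window_opened("video")   # user chooses to record the new window
rec.on_window_closed("notes")   # "notes" drops out of the recording
```

A default-off policy, as in the last paragraph above, would simply be `ask_user=lambda name: False` with no prompt shown.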
In one possible implementation, the video file includes the content of the M application windows and the content of the newly opened application window.
In a possible implementation, the terminal's generating a video file specifically includes: during recording, the terminal acquires L images, where the L images are images of the M application windows captured before the terminal composites the content of the N application windows into the display interface; the terminal then generates the video file from the L images.
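The key point of this implementation is that frames are taken from each window's own image before composition, not from a screenshot of the composited screen. A minimal sketch under that assumption (all function names hypothetical):

```python
# Hypothetical sketch: recording pulls each window's image *before* the
# compositor merges all N windows into the on-screen picture, so unselected
# windows never enter the recording at all.

def compose_screen(surfaces):
    # Stand-in for the display compositor combining all window surfaces.
    return "|".join(f"{name}:{img}" for name, img in surfaces.items())

def record_frame(surfaces, recording_objects):
    # Grab the pre-composition images of only the M selected windows.
    return {name: surfaces[name] for name in recording_objects}

surfaces = {"browser": "imgB", "notes": "imgN", "chat": "imgC"}
screen = compose_screen(surfaces)                     # user sees all N windows
frame = record_frame(surfaces, ["browser", "notes"])  # recording holds M windows

video_file = [frame]   # L such images, encoded into a single video file
```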
A second aspect provides another screen recording method, including: the terminal displays a first interface, where the first interface includes N application windows and N is an integer greater than or equal to 2; the terminal detects a first operation for starting the screen recording function; in response to the first operation, the terminal displays a second interface, where the second interface includes N controls in one-to-one correspondence with the N application windows; the terminal detects one or more second operations on at least one of the N controls; in response to the one or more second operations, the terminal determines M of the N application windows as recording objects, where M is a positive integer less than or equal to N; the terminal detects a third operation for starting recording; in response to the third operation, the terminal starts recording the content of the M application windows; the terminal detects a fourth operation for adjusting the size or position of one of the M application windows; in response to the fourth operation, the terminal changes the size or position of that application window on the screen; the terminal detects a fifth operation for stopping recording; in response to the fifth operation, the terminal generates M video files, each of which contains the content of one of the M application windows.
In a possible implementation, after the terminal detects the third operation for starting recording and before it detects the fifth operation for stopping recording, the method further includes: the terminal detects a sixth operation for closing one of the M application windows; in response to the sixth operation, the terminal stops recording the content of the closed application window.
In a possible implementation, after the terminal detects the third operation for starting recording and before it detects the fifth operation for stopping recording, the method further includes: the terminal detects a seventh operation for opening a new application window; in response to the seventh operation, the terminal prompts the user to choose whether to record the content of the newly opened application window; the terminal detects an eighth operation for recording the newly opened application window; and in response to the eighth operation, the terminal starts recording the content of the newly opened application window while continuing to record the M application windows.
In a possible implementation manner, after the terminal detects a fifth operation for stopping recording, the method further includes: in response to the fifth operation, the terminal also generates a video file containing the content of the newly opened application window.
In a possible implementation, the terminal's generating M video files specifically includes: during recording, the terminal acquires P images, where the P images are images of the M application windows captured before the terminal composites the content of the N application windows into the display interface; the terminal then generates the M video files from the P images.
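The second aspect differs from the first only in the output stage: the P captured per-window images are split into M per-window files. A hedged sketch (file naming here is illustrative; the description later mentions naming each file after its application):

```python
# Hypothetical sketch of the second aspect's output stage: demultiplex the
# captured frames into M separate video files, one per recorded window.

def split_into_files(frames, recording_objects):
    files = {}
    for name in recording_objects:
        # Each video file holds only one window's sequence of images.
        files[f"{name}.mp4"] = [frame[name] for frame in frames]
    return files

frames = [
    {"browser": "b0", "notes": "n0"},
    {"browser": "b1", "notes": "n1"},
]
files = split_into_files(frames, ["browser", "notes"])
```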
A third aspect provides an electronic device comprising a touch screen; one or more processors; one or more memories; one or more sensors; and one or more computer programs, wherein the one or more computer programs are stored in the one or more memories, the one or more computer programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the screen recording method as described in the above aspect and any one of its possible implementations.
A fourth aspect provides an electronic device comprising: the flexible screen comprises a main screen and an auxiliary screen when the flexible screen is in a non-unfolded state; one or more processors; one or more memories; one or more sensors; and one or more computer programs, wherein the one or more computer programs are stored in the one or more memories, the one or more computer programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the screen recording method as described in the above aspect and any one of its possible implementations.
A fifth aspect provides an apparatus, included in an electronic device, that has the functionality to implement the behavior of the electronic device in any of the methods of the above aspects and their possible implementations. The functionality may be implemented in hardware, or in hardware executing corresponding software. The hardware or software includes at least one module or unit corresponding to the above functionality, for example a detection module or unit, a display module or unit, a determination module or unit, a recording module or unit, and a generation module or unit.
A sixth aspect provides a computer storage medium comprising computer instructions which, when run on a terminal, cause the terminal to perform a screen recording method as described in the above aspect and any one of its possible implementations.
A seventh aspect provides a computer program product which, when run on a computer, causes the computer to perform the screen recording method described in the above aspect and any one of its possible implementations.
An eighth aspect provides a chip comprising a processor, wherein when the processor executes instructions, the processor executes the screen recording method as described in the above aspect and any one of the possible implementations thereof.
A ninth aspect provides a graphical user interface on an electronic device with a display screen, a camera, a memory, and one or more processors to execute one or more computer programs stored in the memory, the graphical user interface comprising graphical user interfaces displayed when the electronic device performs the methods as described in the above aspects and any one of the possible implementations thereof.
Drawings
Fig. 1 is a first schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 2 is a second schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 3 is a schematic view of a user interface of some electronic devices according to an embodiment of the present application;
Fig. 4 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
Fig. 5 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
Fig. 6 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
Fig. 7 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
Fig. 8 is a schematic diagram of a method for generating a screen recording file according to an embodiment of the present application;
Fig. 9 is a schematic diagram of another method for generating a screen recording file according to an embodiment of the present application;
Fig. 10 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
Fig. 11 is a schematic diagram of another method for generating a screen recording file according to an embodiment of the present application;
Fig. 12 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
Fig. 13 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
Fig. 14 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
Fig. 15 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
Fig. 16 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings. In the description of these embodiments, unless otherwise specified, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" is not to be construed as preferred or more advantageous than other embodiments or designs. Rather, the words "exemplary" and "for example" are intended to present related concepts in a concrete fashion.
For example, the electronic device in the present application may be a mobile phone, a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a smart watch, a netbook, a wearable electronic device, an augmented reality (AR) device, a virtual reality (VR) device, an in-vehicle device, a smart car, a smart speaker, a robot, or the like; the specific form of the electronic device is not particularly limited in the present application.
Fig. 1 shows a schematic structural diagram of an electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
In some embodiments of the present application, the processor 110 is configured to acquire the image data of each application window that the user has selected for recording; encode and compress the acquired image data of each application window; and package the encoded and compressed image data together with the corresponding audio data to generate multiple screen recording files. Each screen recording file contains the content of only one application window, and each is automatically named after the application it contains.
In other embodiments of the present application, the processor 110 is configured to obtain composite image data containing all application windows of the current screen, together with information about each application window; according to that information, modify or crop out of the composite image data the interface content of the application windows that the user has chosen not to record; encode and compress the modified or cropped composite image data; and package the encoded and compressed image data with the corresponding audio data into a screen recording file. This screen recording file contains the content of multiple application windows.
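This composite-image embodiment can be sketched as masking the rectangles of unrecorded windows using each window's position and size; the sketch below is a hypothetical illustration (a real implementation would operate on encoder input buffers, not Python lists):

```python
# Hypothetical sketch of the composite-image embodiment: keep the full-screen
# frame but blank out the rectangles of windows the user chose not to record,
# using each window's position and size from the per-window information.

def mask_unrecorded(frame, window_rects, recording_objects, blank=0):
    # frame: 2D list of pixels; window_rects: name -> (top, left, height, width)
    out = [row[:] for row in frame]            # leave the original frame intact
    for name, (top, left, h, w) in window_rects.items():
        if name in recording_objects:
            continue                           # recorded windows stay visible
        for y in range(top, top + h):
            for x in range(left, left + w):
                out[y][x] = blank              # erase the unselected window
    return out

frame = [[1] * 4 for _ in range(4)]            # 4x4 composite screen, all content
rects = {"chat": (0, 0, 2, 2), "notes": (2, 2, 2, 2)}
masked = mask_unrecorded(frame, rects, ["notes"])   # only "notes" is recorded
```

The masked frames are then what gets encoded, compressed, and packaged with the audio into the single screen recording file.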
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instructions or data again, it can fetch them directly from this memory, avoiding repeated accesses, reducing the waiting time of the processor 110, and thereby improving system efficiency.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and to transmit data between the electronic device 100 and a peripheral device. It can also be used to connect earphones and play audio through them. The interface may further be used to connect other electronic devices, such as AR devices.
It should be understood that the interface connection relationships between the modules illustrated in this embodiment are merely examples and do not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may use interface connection manners different from those in the above embodiment, or a combination of multiple interface connection manners.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of the electronic device 100 is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened and light is transmitted through the lens to the camera photosensitive element, where the optical signal is converted into an electrical signal; the photosensitive element then transmits the electrical signal to the ISP, which processes it into a visible image. The ISP can also perform algorithm-based optimization of the image's noise, brightness, and skin tone, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons of the human brain, it processes input information quickly and can also continuously self-learn. Applications such as intelligent cognition of the electronic device 100, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data created during use of the electronic device 100 (such as audio data and a phone book), and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 110 performs various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal by speaking with the mouth close to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and so on.
The headphone interface 170D is used to connect a wired headset. The headphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and convert it into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates made of conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation via the pressure sensor 180A. The electronic device 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations applied to the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the Messages application icon, an instruction to view the message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the Messages application icon, an instruction to create a new message is executed.
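The pressure-threshold dispatch described above can be sketched as follows. The threshold value, class name, and instruction names are illustrative assumptions, not values or identifiers from this application:

```java
// Hypothetical sketch: a touch on the same icon maps to different
// instructions depending on the detected touch intensity.
class PressureDispatch {
    // Illustrative normalized first pressure threshold, not a real value.
    static final double FIRST_PRESSURE_THRESHOLD = 0.5;

    // Returns the instruction triggered by a touch on the Messages icon.
    static String instructionFor(double touchIntensity) {
        if (touchIntensity < FIRST_PRESSURE_THRESHOLD) {
            return "VIEW_MESSAGE"; // light press: view the message
        }
        return "NEW_MESSAGE";      // firm press: create a new message
    }
}
```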
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 100 calculates altitude from the barometric pressure values measured by the air pressure sensor 180C, to assist in positioning and navigation.
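The altitude calculation mentioned above can be illustrated with the standard international barometric formula; the sea-level constant and exponent below are the conventional textbook values, not parameters specified in this application:

```java
// Illustrative altitude estimate from absolute air pressure using the
// standard barometric formula h = 44330 * (1 - (p / p0)^(1/5.255)).
class Barometer {
    static final double SEA_LEVEL_PA = 101325.0; // standard sea-level pressure

    // Altitude in meters from pressure in pascals.
    static double altitudeMeters(double pressurePa) {
        return 44330.0 * (1.0 - Math.pow(pressurePa / SEA_LEVEL_PA, 1.0 / 5.255));
    }
}
```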
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon opening the flip cover can then be set according to the detected open or closed state of the holster or flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The sensor can also be used to recognize the posture of the electronic device, and is applied in landscape/portrait switching, pedometers, and the like.
The distance sensor 180F is used to measure distance. The electronic device 100 may measure distance by infrared or laser. In some shooting scenarios, the electronic device 100 may use the distance sensor 180F to measure distance for fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 100 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear during a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 implements a temperature handling policy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
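The tiered temperature policy above can be sketched as follows; all threshold values and action names are hypothetical placeholders, not values from this application:

```java
// Hedged sketch of a tiered thermal policy: throttle when hot, heat the
// battery when cold, and additionally boost battery output when very cold.
class ThermalPolicy {
    static final double HOT_C = 45.0;        // illustrative upper threshold
    static final double COLD_C = 0.0;        // illustrative lower threshold
    static final double VERY_COLD_C = -10.0; // illustrative further threshold

    static String actionFor(double tempC) {
        if (tempC > HOT_C) return "THROTTLE_PROCESSOR";
        if (tempC < VERY_COLD_C) return "HEAT_BATTERY_AND_BOOST_VOLTAGE";
        if (tempC < COLD_C) return "HEAT_BATTERY";
        return "NORMAL";
    }
}
```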
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of a bone mass vibrated by the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive a blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vocal-part bone mass acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor may parse out heart rate information based on the blood pressure pulsation signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key input and generate key signal input related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be brought into and out of contact with the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a standard SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time; the cards may be of the same type or of different types. The SIM card interface 195 may also be compatible with different types of SIM cards, as well as with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from it.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present invention uses an Android system with a layered architecture as an example to exemplarily illustrate a software structure of the electronic device 100.
Fig. 2 is a block diagram of a software configuration of the electronic apparatus 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
In some embodiments of the present application, the application package further comprises a screen recording application. The screen recording application may provide an interface for the electronic device 100 to interact with the user, through which the user can, for example, select the application windows to be recorded, select the recording mode (merged recording or separate recording), control the recording process (pause recording, end recording), select the recording sound source, and so on.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and so on.
In some embodiments of the present application, the window manager may be embodied as a Window Management Service (WMS) that stores information of each application window displayed in the current screen, for example, information of coordinates, size, and the like of each application window.
When an application of the application layer starts running, it informs the WMS of information such as the size and position of the application window to be started. When the size or position of the application window changes, the changed data is refreshed in the WMS in time. Accordingly, the information of each application window displayed on the current screen can be obtained from the WMS.
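A simplified model of this bookkeeping might look like the following. This is an illustrative sketch, not the actual Android WMS interface; the class and method names are assumptions:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical model of a window manager service record: each started
// application window reports its position and size, a later report for the
// same window refreshes the stored data, and the current set can be queried
// (e.g., by a screen recording application).
class WindowRegistry {
    static final class WindowInfo {
        final int x, y, width, height;
        WindowInfo(int x, int y, int width, int height) {
            this.x = x; this.y = y; this.width = width; this.height = height;
        }
    }

    private final Map<String, WindowInfo> windows = new LinkedHashMap<>();

    // Called when an application window starts or its geometry changes.
    void report(String app, int x, int y, int w, int h) {
        windows.put(app, new WindowInfo(x, y, w, h));
    }

    WindowInfo query(String app) { return windows.get(app); }
    int count() { return windows.size(); }
}
```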
For example, when the electronic device 100 starts the screen recording function, the screen recording application may provide an option for the user to select an application window to be recorded according to information of the application window currently displayed on the screen, which is acquired from the WMS.
For another example, when the electronic device 100 performs merged recording, the screen recording application may modify or crop the interface displayed on the current screen according to the information, acquired from the WMS, of the application windows the user selected for recording, so that the generated screen recording file contains only the content of those selected application windows. For the specific implementation, refer to the description below.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction; for example, the notification manager is used to notify of download completion, message alerts, and so on. The notification manager may also present notifications that appear in the system's top status bar in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. Examples include prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, and flashing an indicator light.
The Android runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing the Android system.
The core library comprises two parts: one part contains the functional interfaces that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
In some embodiments of the present application, when the electronic device 100 starts multiple applications, each application draws its own interface content (i.e., image data, or an image layer), and the surface manager invokes SurfaceFlinger to perform composition, so as to generate the interface displayed on the final display screen.
In some examples, when the electronic device 100 separately records the application windows that the user selected for recording, the interface content of each application may be acquired before SurfaceFlinger performs composition, and then video encoding compression, encapsulation, and the like may be performed to generate a screen recording file for each application. For the specific implementation, refer to the description below.
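This per-window path can be modeled in simplified form as follows. Frames are string stand-ins for real image buffers, and the class is an illustrative sketch of the routing logic only, not the actual capture or encoding pipeline:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical separate-recording model: each application's interface
// content is intercepted before composition, and each selected application
// accumulates into its own screen recording "file"; unselected windows are
// skipped entirely.
class SeparateRecorder {
    private final Set<String> selected; // apps the user chose to record
    private final Map<String, List<String>> files = new LinkedHashMap<>();

    SeparateRecorder(Set<String> selectedApps) { this.selected = selectedApps; }

    // Called once per application per screen refresh, before composition.
    void onLayerProduced(String app, String frame) {
        if (!selected.contains(app)) return;
        files.computeIfAbsent(app, k -> new ArrayList<>()).add(frame);
    }

    List<String> fileFor(String app) { return files.get(app); }
}
```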
In other examples, when the electronic device 100 performs merged recording of the application windows that the user selected for recording, a composite image containing all application windows on the current screen may be acquired after SurfaceFlinger performs composition. The composite image is then modified or cropped according to information such as the size and position of the windows to be recorded, acquired from the WMS, to obtain content containing only the application windows the user selected; video encoding compression, encapsulation, and the like are then performed to generate a screen recording file. This screen recording file contains the content of the multiple application windows the user selected for recording. For the specific implementation, refer to the description below.
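The cropping step of merged recording can be sketched as follows, modeling a composed full-screen frame as a two-dimensional pixel array cut down to a selected window's rectangle obtained from the window manager. This is an assumption-laden illustration; the real implementation operates on graphics buffers and is followed by video encoding:

```java
// Illustrative crop of a composed frame to a window rectangle (x, y, w, h).
// The frame is modeled as frame[row][col] pixel values.
class FrameCropper {
    static int[][] crop(int[][] frame, int x, int y, int w, int h) {
        int[][] out = new int[h][w];
        for (int row = 0; row < h; row++) {
            for (int col = 0; col < w; col++) {
                out[row][col] = frame[y + row][x + col];
            }
        }
        return out;
    }
}
```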
The media library supports playback and recording in a variety of commonly used audio and video formats, as well as still image files. The media library may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following describes exemplary work flows of software and hardware of the electronic device 100 in conjunction with a screen recording scenario.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation) and stores it at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation as a tap and the corresponding control as the screen recording control in the notification bar as an example: the screen recording application or function calls an interface of the application framework layer to obtain the interface content of the application windows the user selected for recording, or modifies or crops the display interface of the current screen to obtain that content. A screen recording file is then generated from the obtained window content and stored in the gallery application.
The technical solutions in the following embodiments can be implemented in the electronic device 100 having the above hardware architecture and software architecture. Hereinafter, the electronic device 100 will be described as a mobile phone by way of example.
The screen recording (which may be simply referred to as screen recording) method provided by the embodiments of the present application is described in detail below with reference to the accompanying drawings.
As mobile phone screens grow larger, and especially with the advent of folding-screen phones, a user can start multiple application programs on the mobile phone and operate their interfaces within one display interface of the screen. In other words, the first interface may include windows of multiple application programs (hereinafter simply referred to as application windows). In an actual screen recording scenario, the user may only need to record one or a few of the multiple application windows, or may not want certain application windows to be recorded. Therefore, in the technical scheme provided by this application, the mobile phone can record one or more application windows in the first interface according to the user's operation of selecting application windows (selecting to record or not to record them), thereby realizing a user-defined recording range, increasing the proportion of effective content in the screen recording file (i.e., the video file generated after screen recording), and meeting the diversified requirements of the user.
Specifically, when the user wishes to record the screen, a screen recording operation may be performed. The screen recording operation may be any one of: the user pressing physical keys (for example, pressing the power key and the volume-up key at the same time), the user performing a predefined gesture (for example, sliding three fingers down on the screen, knocking the screen with a knuckle, or shaking the mobile phone), the user operating a corresponding button on the screen (for example, clicking the screen recording control in the notification bar), the user inputting a voice command, and the like. The embodiment of this application does not specifically limit the screen recording operation.
For example: interface 306 shown in fig. 3 is an example of the first interface. The interface 306 includes a window of a browser application (APP1), a chat window of a WeChat application (APP2), and a window of a document application (APP3). In response to the user pulling down from the status bar position on the first interface, the mobile phone opens the notification bar on the first interface and displays interface 300 as shown in fig. 3. A screen recording control (also referred to simply as a recording control) 301 is displayed in the notification bar of interface 300.
In some embodiments of the present application, after the user clicks the screen recording control 301, the mobile phone starts a screen recording function, and displays a second interface for prompting the user to select an application window that needs to be recorded or an application window that does not need to be recorded.
In a specific implementation, in response to detecting the screen recording operation, the screen recording application or function in the mobile phone may call the Window Management Service (WMS) of the application framework layer to obtain the information (e.g., identifier, size, coordinates, and the like) of each application window displayed on the current screen. Then, the mobile phone draws a prompt interface (i.e., the second interface) according to the acquired information of each application window displayed on the current screen, prompting the user to select among the application windows displayed on the current screen, so as to determine the application windows that need to be recorded or the application windows that do not need to be recorded.
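The step above can be sketched in Python as follows. This is an illustrative model of the described flow, in which the recording function queries the window manager for each visible application window's identifier, size, and coordinates and builds the selection prompt; the `WindowInfo` structure and `build_prompt_options` helper are hypothetical stand-ins, not actual Android framework APIs.

```python
from dataclasses import dataclass

@dataclass
class WindowInfo:
    # Hypothetical stand-in for the per-window information (identifier,
    # size, position) that the Window Management Service (WMS) returns.
    app_id: str
    width: int
    height: int
    x: int
    y: int

def build_prompt_options(windows):
    """Turn the window list into options for the second interface,
    where each visible application window becomes one selectable item."""
    return [{"label": w.app_id, "selected": False} for w in windows]

windows = [
    WindowInfo("APP1", 500, 1500, 0, 0),
    WindowInfo("APP2", 500, 750, 500, 0),
    WindowInfo("APP3", 500, 750, 500, 750),
]
options = build_prompt_options(windows)
print([o["label"] for o in options])  # one option per application window
```

The user's later taps on these options would then toggle the `selected` flag, which determines the set of windows passed to the recording pipeline.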
For example: the interface 302 shown in fig. 3 is an example of the second interface. The interface 302 includes a dialog box 303, and the dialog box 303 may include options corresponding to the multiple application windows displayed in the first interface. Through these options, the user can select among the application windows currently displayed on the screen: the application windows that need to be recorded, or the application windows that do not need to be recorded. In some examples, when the application windows do not fill the mobile phone screen, i.e., a portion of the desktop is also displayed, the mobile phone may record the desktop content by default or not record it by default. Of course, the mobile phone may also record or not record the desktop content according to the user's selection. For example: the second interface may further include an option of whether to record the desktop, so that the user may select whether to record it.
Another example: the interface 400 shown in fig. 4 is another example of the second interface. In the interface 400, the mobile phone displays controls corresponding one-to-one to each application window, for example add controls and delete controls. The add control can be used to select an application window that needs to be recorded, and the delete control to select an application window that does not need to be recorded. In some examples, when the application windows do not fill the mobile phone screen, i.e., a portion of the desktop is also displayed, the mobile phone may record the desktop content by default or not record it by default. Of course, the mobile phone may also record or not record the desktop content according to the user's selection. For example: the second interface may further include one or more controls corresponding to the desktop, so that the user can select through them whether to record the desktop.
For example, after the mobile phone starts the screen recording function, all application windows displayed on the screen may by default need to be recorded. At this time, each application window on the screen corresponds to one delete control, and the user can remove the application windows that do not need to be recorded by operating the delete controls. For example, in response to the user clicking the delete control 401 corresponding to the Word document application, the mobile phone displays interface 404 as shown in fig. 4. Meanwhile, the mobile phone can determine that the browser window and the WeChat chat window are the application windows that need to be recorded, or equivalently, determine the Word document application window as an application window that does not need to be recorded. Of course, at this time the Word document application corresponds to an add control 405, and the user can add the Word document application window back through the add control 405, that is, reselect it as an application window to be recorded.
Of course, after the mobile phone starts the screen recording function, it is also possible to default that all application windows displayed in the screen are not required to be recorded. At this time, each application window in the screen corresponds to one add control. Then, the user can select the application window to be recorded through the operation of the adding control.
After the user selects the application window needing to be recorded or selects the application window not needing to be recorded, a specific recording mode can be selected, and the recording mode comprises separated recording and combined recording.
Separate recording means that after the mobile phone records the multiple application windows the user selected for recording, multiple screen recording files are generated, each containing the content of only one application window. For example: the first interface includes the content of three application windows APP1, APP2, and APP3, and the user selects to record the windows of APP1 and APP2 on the first interface. Then, if the user selects the separate recording mode, two screen recording files are generated by the recording. One screen recording file contains the window content of APP1 but not the window content of APP2 and APP3; it may or may not contain the content of the desktop. The other screen recording file contains the window content of APP2 but not the window content of APP1 and APP3; it, too, may or may not contain the content of the desktop. The implementation of separate recording will be described in detail below.
Merged recording means that after the mobile phone records the multiple application windows the user selected for recording, one screen recording file is generated, and this file contains the content of the multiple application windows. For example: the first interface includes the content of three application windows APP1, APP2, and APP3, and the user selects to record the windows of APP1 and APP2 on the first interface. Then, if the user selects the merged recording mode, one screen recording file is generated by the recording. The screen recording file contains the window content of APP1 and APP2 but not the window content of APP3. In some examples, the screen recording file may or may not contain the content of the desktop. The implementation of merged recording will be described in detail below.
In some embodiments, the mobile phone can also automatically name the generated screen recording file. Of course, the mobile phone may also receive user input so that the user names the generated screen recording file. The embodiment of this application does not limit the naming method of the screen recording file.

For example: the mobile phone automatically names the screen recording file after the names of the application programs it contains. If separate recording is selected, multiple screen recording files are finally generated; each contains only one application window, and the name of each file contains the name of one application program. In other words, the names of different screen recording files can be distinguished by the names of the different applications they contain. If merged recording is selected, the finally generated screen recording file contains multiple application windows, and its name can contain the names of those multiple application programs.

Another example: the mobile phone automatically names the screen recording file after its generation time. If multiple screen recording files are produced at the same time, the file names may include a serial number in addition to the generation time. For example, with separate recording, the mobile phone generates two screen recording files at the same time; one is named "generation time-1" and the other "generation time-2".
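The two naming schemes just described can be sketched as follows; this is a minimal Python illustration, with the function names and the joining convention (hyphens) chosen for the sketch rather than specified by the text.

```python
def name_by_apps(app_names, merged):
    """Name screen recording files after the applications they contain:
    one combined name for merged recording, one name per file for
    separate recording."""
    if merged:
        return ["-".join(app_names)]   # one file containing all windows
    return list(app_names)             # one file per application window

def name_by_time(timestamp, count):
    """Name files after the generation time; append serial numbers when
    several files are generated at the same moment."""
    if count == 1:
        return [timestamp]
    return [f"{timestamp}-{i}" for i in range(1, count + 1)]

print(name_by_apps(["APP1", "APP2"], merged=False))  # separate recording
print(name_by_apps(["APP1", "APP2"], merged=True))   # merged recording
print(name_by_time("20190618-1030", 2))
```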
In some embodiments of the present application, the second interface may include a recording mode option for prompting selection of a separate recording mode or a combined recording mode. In some examples, the second interface may further include a start recording control for instructing the mobile phone to start recording the plurality of application windows selected by the user and requiring recording.
For example: as shown in fig. 5, interface 500 includes a recording mode option 501 in interface 500. The user may select the split recording mode or the merged recording mode. A start recording control 502 may also be included in interface 500. In response to the user clicking the start recording control 502, the mobile phone starts recording according to the selected recording mode.
In other embodiments of the present application, the second interface may include a separate recording control and a merge recording control. The separated recording control is used for indicating the mobile phone to record the application window selected by the user and generating a plurality of recording screen files after recording. And the combined recording control is used for indicating the mobile phone to record the application window selected by the user and generating a screen recording file after recording, wherein the screen recording file comprises the contents of the plurality of application windows selected by the user. In other words, the user can select a specific recording mode at one time and instruct the mobile phone to start recording the selected application window through a separated recording control or a combined recording control, so that the operation of the user is simplified, and the efficient interaction between the user and the mobile phone is realized.
For example: the interface 302 shown in fig. 3 includes a separate recording control 304 and a merge recording control 305. Another example: the interface 400 and the interface 404 shown in fig. 4 include a separate recording control 402 and a merge recording control 403.
It is understood that the above description is made by taking an example of displaying the option of the recording mode and the control for starting recording on the second interface. The mobile phone can also display an option for selecting a recording mode and a control for starting recording on one or more other interfaces. This is not particularly limited in the embodiments of the present application.
In addition, the mobile phone can also display an interface prompting the user to select the recorded sound source, an interface for setting the recorded picture quality, and the like. In some examples, the mobile phone may display a setting item for the sound source, a setting item for picture quality, and the like in the second interface. In other examples, these may be displayed in one or more other interfaces.
For example: before or after displaying the second interface, the mobile phone displays interface 600 as shown in fig. 6. The interface 600 includes a dialog box 601 for prompting the user to select the source of sound during recording. For example, mute may be selected, i.e., no sound is recorded during recording. As another example, the microphone may be selected, i.e., ambient sound around the mobile phone is collected by the microphone during recording. As yet another example, the system-played sound may be selected, i.e., the sound of the mobile phone's built-in sound source (the sound the phone plays outward) is recorded. In one example, the mobile phone may record the system-played sound by calling the audio capture (AudioRecord) API at the application framework layer. It can be understood that if the user chooses mute, the microphone and the system-played sound can no longer be selected. When the user selects to record sound, the user may select either the microphone or the system-played sound alone, or select to record both at the same time.
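The selection rule just described (mute excludes the other sources; microphone and system-played sound may be chosen individually or together) can be expressed as a small validity check. This is an illustrative sketch; the `valid_sound_sources` helper and the string labels are assumptions for the example.

```python
def valid_sound_sources(selected):
    """Check a set of sound-source selections against the rule described:
    'mute' excludes the other sources; 'microphone' and 'system' may be
    selected individually or together."""
    allowed = {"mute", "microphone", "system"}
    if not selected or not set(selected) <= allowed:
        return False
    if "mute" in selected:
        return len(selected) == 1  # mute cannot be combined with others
    return True

print(valid_sound_sources({"mute"}))                  # True
print(valid_sound_sources({"microphone", "system"}))  # True
print(valid_sound_sources({"mute", "microphone"}))    # False
```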
It should be noted that, in the embodiment of the present application, the sequence of operations, such as selecting an application window that needs to be recorded (or selecting an application window that does not need to be recorded), selecting a source of sound when the user selects to record, and setting the quality of a recorded picture by the user, is not limited. The selection and setting may be in one interface or in different interfaces, and the embodiments of the present application are not particularly limited.
In response to detecting that the user executes the operation of starting recording, the mobile phone starts recording the application windows selected by the user. Alternatively, after a preset time period, the mobile phone automatically starts recording the selected application windows.
The operation of starting recording may be, for example, the user clicking the separate recording control 402 or the merge recording control 403 on the interface 404 in fig. 4, or the user clicking the start recording control 502 on the interface 500 in fig. 5. After detecting that the user executes the operation of starting recording, the mobile phone returns to the first interface and starts to record the screen content.
During the recording process, recording controls can be displayed on the mobile phone screen, for example any one or any combination of an end control, a pause control, and a recording time control. The recording controls can be displayed at any position on the mobile phone screen in a floating window, in the notification bar, in the status bar, and so on. It can be understood that the recording controls are used by the user to control the recording process; after recording ends, the recording controls do not appear in the screen recording file.
For example: in the interface 700 shown in fig. 7, the recording controls include an end control 701, a pause control 702, and a recording time control 703, displayed at the upper left of the mobile phone screen in a floating window. The end control 701 is used to instruct the mobile phone to end the recording process; that is, in response to the user clicking the end control 701, the mobile phone ends the recording process and automatically generates a screen recording file. The pause control 702 is used to instruct the mobile phone to pause the recording process; that is, in response to the user clicking the pause control 702, the mobile phone pauses the recording process, at which point the pause control 702 may become a continue control 702. When it is detected that the user clicks the continue control 702, the mobile phone continues to record the screen. Alternatively, when it is detected that the user clicks the end control 701, the mobile phone ends the recording process and generates a screen recording file. The recording time control displays the duration recorded so far.
In some embodiments, during the recording process, the mobile phone may mark the recording range (or the range that does not need to be recorded) so as to prompt the user of the current recording range. In one example, the mobile phone may display the ranges that do not require recording as a particular pattern of black, gray, mosaic, white, etc. In another example, the mobile phone may also highlight the range that needs to be recorded, for example: highlighted, or outlined with a black border, etc. The embodiment of the present application does not limit the marking manner of the recording range (or the range that does not need to be recorded).
For example: as shown in the interface 700 of fig. 7, the application windows that are not within the recording range are marked and distinguished by shading.
Another example is: as shown in fig. 7, interface 704 marks the application windows and desktop that are not in the recording area, and the marks are distinguished by shading.
Another example: as shown in interface 705 of fig. 7, the application windows within the recording range are outlined with a black border or the like.
It should be noted that the screen recording scheme provided by the present application takes an application window selected by a user as a recording object. In other words, when the size or position of one or more recording objects (application windows) changes, the recording range of the mobile phone changes accordingly.
In order to illustrate that the recording range of the mobile phone changes with the size and position of the recorded application windows, specific implementations of the separate recording manner and the merged recording manner are described below.
First, a separate recording manner is introduced.
Each application corresponds to one or more graphical interfaces, and each graphical interface is a surface. When the mobile phone runs multiple application programs simultaneously, each application program draws its own surface (including its interface content and information such as its size and position on the screen), and the drawn surfaces of the applications are then synthesized and rendered by the SurfaceFlinger of the application framework layer to obtain the interface finally displayed on the screen.
When the mobile phone needs to record the screen separately for the application windows selected by the user, the surface of each application program selected for recording, that is, the image data of each application program, can be acquired at a certain period before SurfaceFlinger synthesizes the surfaces of the multiple application programs. Then, the image data of each application is encoded and compressed, for example into a video format such as H.264 or H.265, to obtain an image stream for each application. Meanwhile, the mobile phone also acquires audio data (including the system-played sound and/or the ambient sound collected by a microphone), and encodes and compresses the acquired audio data into an audio format such as AAC or MP3 to obtain an audio stream. Finally, the image stream and the audio stream of each application program are packaged separately, for example using a video container format such as MP4, AVI, or RMVB, to obtain a screen recording file for each application program.
If the user selects to record the system-played sound, the acquired audio data is the system-played sound. In some embodiments, when the mobile phone runs multiple applications, the mobile phone can only play the audio of the one currently active application, and the system-played sound acquired here is the audio data of that application. In other embodiments, the mobile phone may play the audio of multiple applications simultaneously, and the system-played sound acquired here is the audio of those multiple applications.

If the user selects the microphone, the acquired audio data is the ambient sound collected by the microphone. If the user selects to record both the system-played sound and the microphone, the acquired audio data includes the system-played sound and the ambient sound collected by the microphone.

It should be further noted that, if the user selects silent recording or the mobile phone records silently by default, the mobile phone may directly package the image stream of each application program without acquiring audio data, to obtain the screen recording file of each application program.
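The separate recording pipeline described above (per-window capture, per-window encoding, optional shared audio, per-window packaging) can be sketched as follows. This is an illustrative Python model of the data flow only, not actual Android code; the `encode`, `encode_audio`, and `separate_record` names and dictionary layout are assumptions for the example.

```python
def encode(frames):
    """Stand-in for video encoding (e.g., into H.264/H.265)."""
    return {"codec": "H.264", "frames": list(frames)}

def encode_audio(samples):
    """Stand-in for audio encoding (e.g., into AAC/MP3)."""
    return {"codec": "AAC", "samples": list(samples)}

def separate_record(captures, audio_samples=None):
    """Produce one screen recording file per selected application window:
    each file packages that window's own image stream, plus the shared
    audio stream when sound recording is enabled (None means silent
    recording, in which case the image stream is packaged alone)."""
    audio_stream = encode_audio(audio_samples) if audio_samples is not None else None
    files = {}
    for app, frames in captures.items():
        files[app] = {"container": "mp4", "video": encode(frames), "audio": audio_stream}
    return files

# surfaces captured per selected window (APP3 was not selected, so it is absent)
captures = {"APP1": ["f1", "f2"], "APP2": ["g1", "g2"]}
files = separate_record(captures, audio_samples=["s1", "s2"])
print(sorted(files))  # one screen recording file per recorded window
```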
In some examples, assume that three application windows of APP1, APP2, and APP3 are displayed on the desktop of the mobile phone, and the user selects to record the application windows of APP1 and APP2. As shown in fig. 8 (a), when the mobile phone detects an instruction to start recording, the mobile phone obtains the image data (e.g., surface) of APP1 and APP2, and compresses the image data of APP1 and APP2 respectively.
If mute is selected as the sound source, i.e., the user selects silent recording, the mobile phone directly packages the compressed APP1 image data to obtain the screen recording file of APP1, and likewise directly packages the compressed APP2 image data to obtain the screen recording file of APP2. Fig. 8 (a) shows only the formation of the screen recording file of APP1; the screen recording file of APP2 is formed in the same way.

If the user selects recording with sound, the mobile phone also needs to acquire and compress audio data. If the user selects to record the system-played sound, the acquired audio data is the system-played sound. If the user selects the microphone, the acquired audio data is the ambient sound collected by the microphone. If the user selects to record both the system-played sound and the microphone, the acquired audio data includes the system-played sound and the ambient sound collected by the microphone. Then, the compressed image data of APP1 and the compressed audio data are packaged to obtain the screen recording file of APP1; similarly, the compressed image data of APP2 and the compressed audio data are packaged to obtain the screen recording file of APP2. It can be seen that in this example, the mobile phone does not acquire the image data and audio data of APP3, and the final screen recording files contain no content of APP3. Fig. 8 (b) shows only the formation of the screen recording file of APP1; the screen recording file of APP2 is formed in the same way.
In other examples, as shown in fig. 9, when compressing the image data of APP1, the mobile phone may also simply combine the desktop image data and the image data of APP1, compress the combined image data, and encapsulate the combined image data and the compressed audio data together to obtain the screen recording file of APP 1. The screen recording file contains the interface content of the APP1 and the interface content of the desktop. Similarly, when compressing the image data of APP2, the mobile phone may also simply synthesize the desktop image data and the image data of APP2, compress the synthesized image data, and package the synthesized image data together with the compressed audio data to obtain the screen recording file of APP 2. The screen recording file contains the interface content of the APP2 and the interface content of the desktop.
It should be noted that, in the above recording method, the mobile phone uses the application window selected by the user as the recording object, and acquires the image data of each selected application window from the system as the source data of the screen recording file, including the interface content of each application window, the size of each application window, and the position in the screen. In other words, the screen recording file generated by the mobile phone finally presents the interface content of each application window according to the acquired size and position information of each application window. Therefore, when the size and the position of the selected application window (for example, APP1 and APP2) are changed, the interface content of the recorded APP1 and APP2 of the mobile phone is not affected.
Still taking the example in which the user selects to record the interface content of APP1 and APP2: suppose the size of the mobile phone screen is 146mm × 161mm. In the interface 1001 shown in fig. 10, the window 1002 of APP1 is 50mm × 150mm and the window 1003 of APP2 is 50mm × 75mm. Then, at this time, the recording range of the mobile phone is area 1 where APP1 is located and area 2 where APP2 is located.
If the user resizes the window 1002 of APP1 to 50mm × 75mm and moves the window of APP2 from the right side of APP1's window to below it, i.e., the layout of the application windows on the mobile phone screen is adjusted to the layout shown in interface 1004 in fig. 10, then the recording range of the mobile phone becomes area 3 where APP1 is located and area 4 where APP2 is located. Compared with interface 1001, areas 3 and 4 together correspond to the original area 1. That is to say, in the embodiment of this application, during a screen recording process, the recording range of the mobile phone may change with the size and position of the application windows selected for recording. Of course, the content recorded by the mobile phone is still the content of those selected application windows.
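The behavior above can be sketched by deriving the recording range from the current geometry of the selected windows each capture period, so the range follows any resize or move automatically. The data structures (dictionaries of `(x, y, w, h)` rectangles in mm, matching the figures' dimensions) are hypothetical illustrations.

```python
def recording_range(windows, selected):
    """Derive the recording range from the current geometry of the
    selected windows; evaluated each capture period, so the range
    follows any resize or move of a recorded window."""
    return {app: rect for app, rect in windows.items() if app in selected}

# geometry before the user rearranges the windows: (x, y, w, h) in mm
windows = {"APP1": (0, 0, 50, 150), "APP2": (50, 0, 50, 75), "APP3": (50, 75, 50, 75)}
selected = {"APP1", "APP2"}
print(recording_range(windows, selected))

# user shrinks APP1 and moves APP2 below it: the range follows automatically
windows["APP1"] = (0, 0, 50, 75)
windows["APP2"] = (0, 75, 50, 75)
print(recording_range(windows, selected))
```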
In the prior art, a mobile phone may define a fixed recording area and record only the content within that area. If the application window the user wishes to record is not in the recording area, or moves out of it, the content of that window cannot be recorded. In the recording method provided by this application, however, the mobile phone takes the application window itself as the recording object and can therefore record the selected application window at all times: the recording area automatically changes with the size and position of the application window selected by the user, making the screen recording function more intelligent and personalized.
Next, the merged recording manner is introduced.
In some embodiments, as shown in fig. 11, when the mobile phone starts merged recording of the selected application windows, it acquires, at a certain period or in real time, the image data obtained after SurfaceFlinger synthesizes the surfaces of the multiple application programs, and may obtain the information of each application window, for example its size, position, and layer order, from the WMS. The image data synthesized by SurfaceFlinger includes the image data of all application windows on the current screen, that is, it includes application windows the user does not need to record. Therefore, the content of the application windows that do not need to be recorded must be removed from the SurfaceFlinger-synthesized image data according to the acquired window information. Illustratively, according to the size and position information of the application windows that do not need to be recorded, those windows are cropped out of the SurfaceFlinger-synthesized image, or their content is modified into a preset pattern. The modified image data is then compressed, for example into a video format such as H.264 or H.265, to obtain an image stream. Meanwhile, the mobile phone also acquires audio data (including the system-played sound and/or the ambient sound collected by a microphone) and compresses it into an audio format such as AAC or MP3 to obtain an audio stream. Finally, the image stream and the audio stream are packaged, for example using a video container format such as MP4, AVI, or RMVB, to obtain a screen recording file. The screen recording file contains the interface content of the multiple application windows the user selected for recording.
If the user selects to record the system-played sound, the acquired audio data is the system-played sound. In some embodiments, when the mobile phone runs multiple applications, the mobile phone can only play the audio of the one currently active application, and the system-played sound acquired here is the audio data of that application. In other embodiments, the mobile phone may play the audio of multiple applications simultaneously, and the system-played sound acquired here is the audio of those multiple applications.

If the user selects the microphone, the acquired audio data is the ambient sound collected by the microphone. If the user selects to record both the system-played sound and the microphone, the acquired audio data includes the system-played sound and the ambient sound collected by the microphone.

If the user selects silent recording, or the mobile phone records silently by default, the mobile phone may directly package the obtained image stream without acquiring audio data, to obtain the final screen recording file.
In a specific implementation, the mobile phone may modify the pixel values of the display areas of the application windows that the user has chosen not to record in the composited image, so that those areas show a specific pattern such as black, gray, white, or a mosaic.
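A minimal sketch of this pixel-masking step, using a plain 2D list in place of the real composited frame buffer. The function name and the rectangle format are assumptions for illustration; the actual phone would operate on the SurfaceFlinger output buffer.

```python
def mask_outside_windows(frame, keep_windows, fill=0):
    """Overwrite every pixel that falls outside the windows to keep.

    frame: 2D list of pixel values (rows x cols), standing in for the
    SurfaceFlinger-composited image.
    keep_windows: list of (top, left, height, width) rectangles of the
    application windows selected for recording.
    fill: the 'specific pattern' value (e.g. black) for masked areas.
    """
    rows, cols = len(frame), len(frame[0])
    masked = [[fill] * cols for _ in range(rows)]
    for top, left, h, w in keep_windows:
        for r in range(top, min(top + h, rows)):
            for c in range(left, min(left + w, cols)):
                masked[r][c] = frame[r][c]   # retain recorded window content
    return masked
```

This mirrors image 1105 in fig. 12: the selected windows keep their content while everything else becomes the fill pattern. Masking only the unselected windows (image 1106) would invert the loop, filling the listed rectangles instead.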
Again take the example in which the user selects to record the interface contents of APP1 and APP2.
For example, the image 1101 shown in fig. 12 is the composited image. The mobile phone acquires the window information of APP1 and APP2 and may modify the image 1101 accordingly. Specifically, the pixel values of the regions outside the display areas of APP1 and APP2 are modified to a specific pattern. As shown by the image 1105 in fig. 12, the contents of the window 1102 of APP1 and the window 1103 of APP2 are retained, the other area is modified to a specific image, and the modified area is shown hatched in fig. 12.
As another example, the mobile phone may also obtain information about the application windows that the user does not need to record, such as the window information of APP3, and modify the image 1101 accordingly. Specifically, the pixel values of the display area of APP3 are modified to a specific image. As shown by the image 1106 in fig. 12, the contents of the window 1102 of APP1, the window 1103 of APP2, and the desktop are retained, the pixel values of the APP3 application window are modified to a specific image, and the modified region is shown hatched in fig. 12.
In another specific implementation, the mobile phone may crop out of the composited image the display areas of the application windows that the user has chosen not to record. Since the cropped image is smaller than the composited image, this helps reduce the size of the screen recording file and save storage space.
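The cropping alternative can be sketched the same way. The helper below is a hypothetical stand-in for the phone's actual image-processing path: it extracts one selected window's rectangle from the full-screen frame, so the output is smaller than the composited image.

```python
def crop_to_window(frame, top, left, height, width):
    """Cut one application window's region out of the composited frame.

    frame: 2D list of pixels (rows x cols) for the full screen;
    the returned image contains only the selected window's area.
    """
    return [row[left:left + width] for row in frame[top:top + height]]
```

Because only the kept rectangle is encoded, the resulting video frames carry fewer pixels than the full-screen masking approach, which is the storage saving the text mentions.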
It should be noted that in the above recording method the mobile phone still uses the application windows selected by the user as the recording objects, and modifies the composited image containing the entire screen interface according to the information (size and position) of each selected application window to generate the screen recording file. When the size or position of a selected application window changes, the mobile phone modifies the composited image according to the changed size and position. Therefore, a change in the size or position of the selected application windows (for example, APP1 and APP2) does not affect the recorded interface contents of APP1 and APP2.
For other contents, reference may be made to the description of related contents in the separate recording method, which is not described herein again.
In other embodiments, when the mobile phone starts the merged recording of the selected multiple application windows, it may instead acquire, periodically or in real time, the surface of each application that the user selected for recording, that is, the image data of each such application, before SurfaceFlinger composites the surfaces of the multiple applications. The acquired image data of the individual applications is then composited. As a result, the composited image data contains only the image data of the applications the user selected for screen recording. The composited image data is then encoded and compressed, for example into a video format such as H.264 or H.265, to obtain an image stream. Meanwhile, the mobile phone also acquires audio data (the system playback sound and/or the ambient sound collected by a microphone) and encodes and compresses it into an audio format such as AAC or MP3 to obtain an audio stream. Finally, the image stream and the audio stream are packaged together, for example in a container format such as MP4, AVI, or RMVB, to obtain a screen recording file.
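A toy model of compositing only the selected per-application surfaces, painted bottom-to-top in z-order. The data layout is an assumption for illustration; SurfaceFlinger's real compositing runs in native code on the GPU or hardware composer.

```python
def composite_selected_surfaces(surfaces, canvas_size, fill=0):
    """Merge only the user-selected application surfaces onto one canvas.

    surfaces: list of (pixels, top, left) tuples in ascending z-order,
    where pixels is a 2D list of that application's image data; later
    surfaces paint over earlier ones, matching window hierarchy.
    canvas_size: (rows, cols) of the output frame.
    """
    rows, cols = canvas_size
    canvas = [[fill] * cols for _ in range(rows)]
    for pixels, top, left in surfaces:
        for r, row in enumerate(pixels):
            for c, v in enumerate(row):
                if 0 <= top + r < rows and 0 <= left + c < cols:
                    canvas[top + r][left + c] = v
    return canvas
```

Since only selected surfaces are supplied, the output never contains the unselected windows, so no masking or cropping pass is needed afterwards; that is the difference from the first merged-recording variant.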
For the rest of the contents, reference may be made to the description of the relevant contents of the merging recording method in the foregoing embodiment, which is not described herein again.
It can be understood that when the user triggers the operation of starting screen recording, the mobile phone may by default use either the split recording mode or the merged recording mode to generate the screen recording file, or may use both modes simultaneously to generate screen recording files; this is not limited in the embodiments of the present application.
Optionally, the mobile phone may also select different methods by default to generate the screen recording file when it detects different user operations for starting the screen recording function. For example, when the user presses the lock key and the volume-up key, the mobile phone by default generates the screen recording file in the split recording mode; when the user presses the lock key and the volume-down key, the mobile phone by default generates the screen recording file in the merged recording mode. As another example, when the user presses the lock key and a volume key, the mobile phone by default generates the screen recording file in the split recording mode; when the user taps the screen recording control in the notification bar, the mobile phone by default generates the screen recording file in the merged recording mode.
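The default-mode selection described above amounts to a lookup from trigger gesture to recording mode. The key names below are invented for the sketch and are not defined by the patent.

```python
# Illustrative mapping from the gesture that starts screen recording to
# the default recording mode (one of the example pairings in the text).
TRIGGER_TO_MODE = {
    ("lock", "volume_up"): "split",
    ("lock", "volume_down"): "merged",
    ("notification_bar_control",): "merged",
}

def default_recording_mode(trigger, fallback="split"):
    """Return the recording mode the phone would default to for a trigger."""
    return TRIGGER_TO_MODE.get(tuple(trigger), fallback)
```

Swapping the table's values reproduces the alternative pairing in the text (any volume key with the lock key mapping to split recording, the notification-bar control mapping to merged recording).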
In addition, during a recording session, if the mobile phone detects that the user has closed one or more of the selected application windows, it may automatically stop recording those windows.
For example, while the mobile phone is recording the windows of APP1 and APP2, if it detects that the user has closed the window of APP2, it may by default no longer record the screen of APP2.
In a specific implementation, if the mobile phone uses the split recording mode, it can no longer acquire the image data of APP2 at this point. In some examples, the mobile phone may end the recording of APP2, that is, stop acquiring its image data; in this case, the duration of the APP2 screen recording file is shorter than that of the APP1 file. In other examples, the mobile phone may pause the recording of APP2 and, when it detects that the user opens the APP2 window again, continue the recording, that is, resume acquiring the image data of APP2 and performing the corresponding operations; in this case, the duration of the APP2 file is still shorter than that of the APP1 file. Of course, while the image data of APP2 is unavailable, the mobile phone may substitute data of a default specific pattern as the image data of APP2; in this case, the APP2 and APP1 screen recording files have the same duration.
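The three per-app options above (end the recording, pause it, or substitute a placeholder pattern) can be modelled on a per-tick frame sequence, with None standing for ticks when the APP2 window is closed. This is a simplification of the real recording pipeline; the policy names are invented for the sketch.

```python
def build_app_stream(frames, policy, placeholder=None):
    """Assemble one app's image stream for split recording across a close.

    frames: per-tick image data for the app, None while its window is
    closed (and again present if the user reopens it).
    policy: 'end' stops at the first missing frame; 'pause' skips the
    missing ticks (a shorter file); 'placeholder' substitutes a default
    pattern so the duration matches the other apps' files.
    """
    out = []
    for f in frames:
        if f is None:
            if policy == "end":
                break
            if policy == "pause":
                continue
            out.append(placeholder)   # 'placeholder' policy
        else:
            out.append(f)
    return out
```

Note how only the placeholder policy preserves the original tick count, matching the case where the APP2 file ends up the same length as the APP1 file.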
If the mobile phone uses the merged recording mode, it can no longer acquire the window information of APP2 at this point. In some examples, the mobile phone may end the recording of APP2, that is, stop acquiring its window information; the area where APP2 used to be may then be modified to a specific pattern, or to the desktop content. In other examples, the mobile phone may pause the recording of APP2 and, when it detects that the user opens the APP2 window again, continue the recording, that is, resume acquiring the window information of APP2 and modify or crop the composited image containing the entire screen content according to the window information of APP2 and APP1.
In some embodiments, to avoid user misoperation, the mobile phone may display a prompt when it detects that the user is closing an application window being recorded. For example, as shown in fig. 13, if the mobile phone detects that the user has tapped the close button of APP2 while APP2 is an application window being recorded, it may display a prompt 1301 asking the user to confirm closing APP2. If the user selects yes, the window of APP2 is closed; if the user selects no, it is not closed.
In other embodiments, the mobile phone may automatically end the recording process after detecting that all of the application windows selected by the user for recording are closed.
During a recording session, if the mobile phone detects that the user has opened a new application window, it may display a prompt asking whether the user wants to record the new window.
For example, while the mobile phone is recording the windows of APP1 and APP2, if it detects that the user has opened a new application, for example the Tencent Video application (APP4), it may display an interface 1400 as shown in fig. 14. The interface 1400 includes a prompt 1401 asking the user whether to record APP4. If the user selects yes, the mobile phone starts recording APP4; if the user selects no, the mobile phone does not record APP4 and continues recording the originally selected APP1 and APP2.
Of course, the mobile phone may also by default not record a newly opened application window. In that case, when it detects that the user has started a new application, it may continue recording the originally selected application windows without displaying the prompt.
After receiving an instruction to finish recording, the mobile phone saves the generated screen recording file or files in the album, where the user may view, edit, delete, or forward them. In some examples, the mobile phone may display a reminder when the recording ends, for example the interface 1500 shown in fig. 15, which includes the prompt 1501: the recording is finished and the screen recording file has been saved to the album.
Embodiments of the present application also provide a chip system. As shown in fig. 16, the chip system includes at least one processor 1601 and at least one interface circuit 1602, which may be interconnected by lines. For example, the interface circuit 1602 may be used to receive signals from other devices, such as the memory of the electronic device 100; as another example, the interface circuit 1602 may be used to send signals to other devices, such as the processor 1601. Illustratively, the interface circuit 1602 may read instructions stored in the memory and send them to the processor 1601. When executed by the processor 1601, the instructions may cause the electronic device to perform the steps performed by the electronic device 100 (for example, a mobile phone) in the above embodiments. Of course, the chip system may further include other discrete devices, which is not specifically limited in the embodiments of the present application.
It is to be understood that the above-mentioned terminal and the like include corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as going beyond the scope of the embodiments of the present application.
In the embodiments of the present application, the terminal and the like may be divided into functional modules according to the above method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present application is schematic and is merely a logical function division; other division manners are possible in actual implementation.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium that can store program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (18)

1. A screen recording method is characterized by comprising the following steps:
the method comprises the steps that a terminal displays a first interface, wherein the first interface comprises N application windows, and N is an integer greater than or equal to 2;
the terminal detects a first operation for starting a screen recording function;
responding to the first operation, the terminal displays a second interface, wherein the second interface comprises N controls in one-to-one correspondence with the N application windows;
the terminal detects one or more second operations aiming at least one control in the N controls;
responding to the one or more second operations, the terminal determines M application windows in the N application windows to be recording objects, wherein M is a positive integer less than or equal to N;
the terminal detects a third operation for starting recording;
responding to the third operation, and starting to record the contents of the M application windows by the terminal;
the terminal detects a fourth operation for adjusting the size or the position of one application window in the M application windows;
responding to the fourth operation, the terminal changes the size or the position of the application window in the screen;
the terminal detects a fifth operation for stopping recording;
and responding to the fifth operation, the terminal generates a video file, and the video file comprises the contents of the M application windows.
2. The method of claim 1, wherein the video file does not include contents of the N-M application windows of the N application windows other than the M application windows.
3. The method according to claim 1 or 2, wherein after the terminal detects the third operation for starting recording and before the terminal detects the fifth operation for stopping recording, the method further comprises:
the terminal detects a sixth operation for closing one of the M application windows;
and responding to the sixth operation, and stopping recording the content of the closed application window by the terminal.
4. The method according to claim 1 or 2, wherein after the terminal detects the third operation for starting recording and before the terminal detects the fifth operation for stopping recording, the method further comprises:
the terminal detects a seventh operation for opening an application window;
responding to the seventh operation, and prompting the user to select whether to record the content of the newly opened application window by the terminal;
the terminal detects an eighth operation for recording the newly opened application window;
and responding to the eighth operation, and starting to record the content of the newly opened application window while recording the content of the M application windows by the terminal.
5. The method of claim 4, wherein the contents of the M application windows and the contents of the newly opened application window are included in the video file.
6. The method according to claim 1, 2 or 5, wherein the terminal generates a video file, specifically comprising:
in the recording process, the terminal acquires L images, wherein the L images are images of the M application windows before the terminal synthesizes the contents of the N application windows into a display interface of the terminal, and L is a positive integer;
and the terminal generates the video file according to the L images.
7. The method according to claim 3, wherein the terminal generates a video file, specifically comprising:
in the recording process, the terminal acquires L images, wherein the L images are images of the M application windows before the terminal synthesizes the contents of the N application windows into a display interface of the terminal, and L is a positive integer;
and the terminal generates the video file according to the L images.
8. The method according to claim 4, wherein the terminal generates a video file, specifically comprising:
in the recording process, the terminal acquires L images, wherein the L images are images of the M application windows before the terminal synthesizes the contents of the N application windows into a display interface of the terminal, and L is a positive integer;
and the terminal generates the video file according to the L images.
9. A screen recording method is characterized by comprising the following steps:
the method comprises the steps that a terminal displays a first interface, wherein the first interface comprises N application windows, and N is an integer greater than or equal to 2;
the terminal detects a first operation for starting a screen recording function;
responding to the first operation, the terminal displays a second interface, wherein the second interface comprises N controls in one-to-one correspondence with the N application windows;
the terminal detects one or more second operations aiming at least one control in the N controls;
responding to the one or more second operations, the terminal determines M application windows in the N application windows to be recording objects, wherein M is a positive integer less than or equal to N;
the terminal detects a third operation for starting recording;
responding to the third operation, and starting to record the contents of the M application windows by the terminal;
the terminal detects a fourth operation for adjusting the size or the position of one application window in the M application windows;
responding to the fourth operation, the terminal changes the size or the position of the application window in the screen;
the terminal detects a fifth operation for stopping recording;
in response to the fifth operation, the terminal generates M video files, each of the M video files containing the content of one of the M application windows.
10. The method according to claim 9, wherein after the terminal detects the third operation for starting recording and before the terminal detects the fifth operation for stopping recording, the method further comprises:
the terminal detects a sixth operation for closing one of the M application windows;
and responding to the sixth operation, and stopping recording the content of the closed application window by the terminal.
11. The method according to claim 9 or 10, wherein after the terminal detects the third operation for starting recording and before the terminal detects the fifth operation for stopping recording, the method further comprises:
the terminal detects a seventh operation for opening an application window;
responding to the seventh operation, and prompting the user to select whether to record the content of the newly opened application window by the terminal;
the terminal detects an eighth operation for recording the newly opened application window;
and responding to the eighth operation, and starting to record the content of the newly opened application window while recording the content of the M application windows by the terminal.
12. The method according to claim 11, wherein after the terminal detects a fifth operation for stopping recording, the method further comprises:
and responding to the fifth operation, and the terminal also generates a video file containing the content of the newly opened application window.
13. The method according to claim 9 or 10, wherein the terminal generates M video files, specifically including:
in the recording process, the terminal acquires P images, wherein the P images are images of the M application windows before the terminal synthesizes the contents of the N application windows into a display interface of the terminal, and P is a positive integer;
and the terminal generates the M video files according to the P images.
14. The method according to claim 11, wherein the terminal generates M video files, specifically including:
in the recording process, the terminal acquires P images, wherein the P images are images of the M application windows before the terminal synthesizes the contents of the N application windows into a display interface of the terminal, and P is a positive integer;
and the terminal generates the M video files according to the P images.
15. An electronic device, comprising:
a touch screen;
one or more processors;
one or more memories;
one or more sensors;
and one or more computer programs, wherein the one or more computer programs are stored in the one or more memories, the one or more computer programs comprising instructions that, when executed by the electronic device, cause the electronic device to perform the screen recording method of any of claims 1-8.
16. An electronic device, comprising:
a touch screen;
one or more processors;
one or more memories;
one or more sensors;
and one or more computer programs, wherein the one or more computer programs are stored in the one or more memories, the one or more computer programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the screen recording method of any of claims 9-14.
17. A graphical user interface on an electronic device, the electronic device having a display screen, a camera, a memory, and one or more processors to execute one or more computer programs stored in the memory, the graphical user interface comprising a graphical user interface displayed when the electronic device performs the method of any of claims 1-8 or a graphical user interface displayed when the method of any of claims 9-14 is performed.
18. A computer storage medium comprising computer instructions which, when run on a terminal, cause the terminal to perform a screen recording method according to any one of claims 1-8, or a screen recording method according to any one of claims 9-14.
CN201910528140.9A 2019-06-18 2019-06-18 Screen recording method and electronic equipment Active CN110417991B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910528140.9A CN110417991B (en) 2019-06-18 2019-06-18 Screen recording method and electronic equipment
PCT/CN2020/096559 WO2020253719A1 (en) 2019-06-18 2020-06-17 Screen recording method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910528140.9A CN110417991B (en) 2019-06-18 2019-06-18 Screen recording method and electronic equipment

Publications (2)

Publication Number Publication Date
CN110417991A CN110417991A (en) 2019-11-05
CN110417991B true CN110417991B (en) 2021-01-29

Family

ID=68359343

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910528140.9A Active CN110417991B (en) 2019-06-18 2019-06-18 Screen recording method and electronic equipment

Country Status (2)

Country Link
CN (1) CN110417991B (en)
WO (1) WO2020253719A1 (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110417991B (en) * 2019-06-18 2021-01-29 华为技术有限公司 Screen recording method and electronic equipment
CN117873357A (en) * 2020-04-27 2024-04-12 华为技术有限公司 Application window management method, terminal device and computer readable storage medium
CN111614992B (en) * 2020-05-20 2022-05-31 西安闻泰电子科技有限公司 Screen recording method and device and electronic equipment
CN111866423B (en) * 2020-07-07 2023-02-21 广州三星通信技术研究有限公司 Screen recording method for electronic terminal and corresponding equipment
CN111858277B (en) * 2020-07-07 2024-02-27 广州三星通信技术研究有限公司 Screen recording method and screen recording device for electronic terminal
CN113992876A (en) * 2020-07-27 2022-01-28 北京金山办公软件股份有限公司 Method for recording document and playing video, storage medium and terminal
CN112230830A (en) * 2020-09-01 2021-01-15 盐城华旭光电技术有限公司 Screen capturing device and screen capturing method of touch screen equipment
CN112153436B (en) * 2020-09-03 2022-10-18 Oppo广东移动通信有限公司 Screen recording method, device, equipment and storage medium
CN114390341B (en) * 2020-10-22 2023-06-06 华为技术有限公司 Video recording method, electronic equipment, storage medium and chip
US20220161145A1 (en) * 2020-11-23 2022-05-26 International Business Machines Corporation Modifying user interface of application during recording session
EP4224279A4 (en) * 2020-12-03 2024-03-13 Samsung Electronics Co Ltd Electronic device comprising flexible display
CN113014987A (en) * 2021-02-22 2021-06-22 Oppo广东移动通信有限公司 Screen recording method and device, electronic equipment and storage medium
CN115531889A (en) * 2021-06-30 2022-12-30 华为技术有限公司 Multi-application screen recording method and device
CN113672326B (en) * 2021-08-13 2024-05-28 康佳集团股份有限公司 Application window screen recording method and device, terminal equipment and storage medium
CN113852774A (en) * 2021-09-01 2021-12-28 维沃移动通信(杭州)有限公司 Screen recording method and device
CN113840165B (en) * 2021-11-26 2022-02-22 北京易真学思教育科技有限公司 Screen recording method, device, equipment and medium
CN115567666B (en) * 2022-04-11 2023-08-11 荣耀终端有限公司 Screen recording method, electronic device and readable storage medium
CN115002538B (en) * 2022-05-13 2024-03-12 深圳康佳电子科技有限公司 Multi-window video recording control method, device, terminal equipment and storage medium
CN116132790B (en) * 2022-05-25 2023-12-05 荣耀终端有限公司 Video recording method and related device
CN118055287A (en) * 2022-11-17 2024-05-17 荣耀终端有限公司 Screen recording method and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104184904A (en) * 2014-09-10 2014-12-03 上海斐讯数据通信技术有限公司 Mobile phone screen recording method allowing user to define recording region
CN106055239A (en) * 2016-06-02 2016-10-26 北京金山安全软件有限公司 Screen recording method and device
CN108055491A (en) * 2017-11-30 2018-05-18 努比亚技术有限公司 A kind of record screen method and terminal
CN108520186A (en) * 2018-03-09 2018-09-11 广东欧珀移动通信有限公司 Record screen method, mobile terminal and computer readable storage medium
CN108920226A (en) * 2018-05-04 2018-11-30 维沃移动通信有限公司 screen recording method and device
CN108924452A (en) * 2018-06-12 2018-11-30 西安艾润物联网技术服务有限责任公司 Part record screen method, apparatus and computer readable storage medium
CN109040419A (en) * 2018-06-11 2018-12-18 Oppo(重庆)智能科技有限公司 Record screen method, apparatus, mobile terminal and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010110766A1 (en) * 2009-03-23 2010-09-30 Thomson Licensing Method and apparatus for recording screen displays
US8701001B2 (en) * 2011-01-28 2014-04-15 International Business Machines Corporation Screen capture
US9786328B2 (en) * 2013-01-14 2017-10-10 Discovery Communications, Llc Methods and systems for previewing a recording
CN110417991B (en) * 2019-06-18 2021-01-29 华为技术有限公司 Screen recording method and electronic equipment


Also Published As

Publication number Publication date
CN110417991A (en) 2019-11-05
WO2020253719A1 (en) 2020-12-24

Similar Documents

Publication Publication Date Title
CN110417991B (en) Screen recording method and electronic equipment
CN110231905B (en) Screen capturing method and electronic equipment
CN112130742B (en) Full screen display method and device of mobile terminal
CN110114747B (en) Notification processing method and electronic equipment
CN109274828B (en) Method for generating screenshot, control method and electronic equipment
CN109559270B (en) Image processing method and electronic equipment
CN113838490B (en) Video synthesis method and device, electronic equipment and storage medium
CN113722058B (en) Resource calling method and electronic equipment
CN113645351A (en) Application interface interaction method, electronic device and computer-readable storage medium
CN111602108B (en) Application icon display method and terminal
CN114650363A (en) Image display method and electronic equipment
CN113935898A (en) Image processing method, system, electronic device and computer readable storage medium
CN114363462A (en) Interface display method and related device
CN113452945A (en) Method and device for sharing application interface, electronic equipment and readable storage medium
CN110286975B (en) Display method of foreground elements and electronic equipment
CN112637477A (en) Image processing method and electronic equipment
CN112449101A (en) Shooting method and electronic equipment
CN114222020B (en) Position relation identification method and device and readable storage medium
CN114911400A (en) Method for sharing pictures and electronic equipment
CN112437341B (en) Video stream processing method and electronic equipment
CN113542574A (en) Shooting preview method under zooming, terminal, storage medium and electronic equipment
WO2023000746A1 (en) Augmented reality video processing method and electronic device
CN114691248B (en) Method, device, equipment and readable storage medium for displaying virtual reality interface
CN114827098A (en) Method and device for close shooting, electronic equipment and readable storage medium
CN113495733A (en) Theme pack installation method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant