CN117631919A - Operation method, electronic equipment and medium - Google Patents

Operation method, electronic equipment and medium Download PDF

Info

Publication number
CN117631919A
CN117631919A CN202210995317.8A
Authority
CN
China
Prior art keywords
information
electronic device
video
interface
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210995317.8A
Other languages
Chinese (zh)
Inventor
胡怡洁
严凯凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202210995317.8A priority Critical patent/CN117631919A/en
Priority to PCT/CN2023/112297 priority patent/WO2024037421A1/en
Publication of CN117631919A publication Critical patent/CN117631919A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10 Digital recording or reproducing

Abstract

The present application relates to the field of communications technology, and discloses an operation method, an electronic device, and a medium. In the operation method, a first electronic device obtains a first video and first information associated with the first video, where the first information includes interface information of at least one display interface recorded during screen recording and identification information of a target control operated by a user on the at least one display interface; the first electronic device then plays the first video and performs operations based on the first information while the first video plays. With this scheme, the first electronic device can, during video playback, execute the same operations that the user performed on the electronic device during video recording, so the user does not need to perform the operations manually, improving operational convenience and user experience.

Description

Operation method, electronic equipment and medium
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to an operation method, an electronic device, and a medium.
Background
At present, electronic devices such as smartphones are used by a wide range of customer groups, including middle-aged and elderly users. Many users are unfamiliar with certain operations, or have never performed them at all, while using their phones, such as setting up a Bluetooth or WIFI connection or adjusting the phone's display font size, and family members may not be nearby to provide in-person guidance. Beyond middle-aged and elderly users, other users may likewise be unfamiliar with certain operations and thus unable to perform them.
In the prior art, as shown in fig. 1a, user A typically clicks the screen recording control 101 of the mobile phone 100 to record the process of performing a target operation, for example, performing a Bluetooth or WIFI connection in the settings application 102, obtaining a video 103, and then sends the video 103 via communication software to the mobile phone 200 of user B, who needs to learn from the video content how to perform the target operation, as shown in fig. 1b. However, in this solution, user B is likely to forget how to operate after watching the video and therefore has to watch it repeatedly to learn. This makes the operation cumbersome for the user and degrades the user experience.
Disclosure of Invention
To solve the technical problem that performing operations learned from a recorded video is cumbersome for the user, embodiments of the present application provide an operation method, an electronic device, and a medium.
In a first aspect, an embodiment of the present application provides an operation method applied to a first electronic device, including: acquiring a first video and first information associated with the first video, where the first information includes interface information of at least one display interface recorded during screen recording and identification information of a target control operated by a user on the at least one display interface; and playing the first video and performing an operation based on the first information while the first video plays.
It is understood that in the embodiment of the present application, the first video may refer to a recorded video mentioned in the embodiment of the present application.
Based on this scheme, during video playback the first electronic device can execute the same operations that the user performed on the electronic device during video recording, so the user does not need to perform the operations manually, improving operational convenience and user experience.
In addition, the first information includes identification information of the target control operated by the user on the display interface, where the identification information of the target control may be layout ID information of the target control, and the identification information of each operation target is unique. Even if the interface of the first electronic device differs from that of the electronic device in the recorded video, the first electronic device can still accurately locate the corresponding control according to the identification of the target control and execute the corresponding operation, effectively improving the accuracy of the operation.
In one possible implementation, the operation is consistent with an operation of the user on the at least one display interface in the first video.
In one possible implementation, the first video and the first information are stored as the same file, or the first video and the first information are stored as different files.
In one possible implementation, the first information further includes the manner in which the user operates the target control on the at least one display interface.
In one possible implementation, performing an operation based on the first information includes: acquiring interface information of a first display interface from the first information, and determining a first interface to be operated of the electronic device based on that interface information; acquiring identification information of the target control operated by the user on the first display interface, and determining the control to be operated on the first interface based on that identification information; acquiring the manner in which the user operated the target control on the first display interface, and operating the control to be operated accordingly; and acquiring interface information of a second display interface from the first information, and determining a second interface to be operated of the electronic device based on that interface information.
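The stepwise flow above can be sketched in Python. This is an illustrative sketch only; the helper callables (`open_interface`, `find_control_by_id`, `apply_operation`) are hypothetical stand-ins for platform APIs the patent does not name:

```python
# Illustrative sketch of replaying "first information": for each recorded
# display interface, enter the interface, locate the control by its
# identification information, and apply the recorded operation mode.
# All helper names are hypothetical; the patent does not specify an API.

def replay(first_information, open_interface, find_control_by_id, apply_operation):
    for step in first_information:                     # one entry per display interface
        interface = open_interface(step["interface"])  # determine interface to be operated
        control = find_control_by_id(interface, step["target_control_id"])
        apply_operation(control, step["operation_mode"], step.get("auxiliary"))
```

Injecting the helpers as parameters keeps the sketch independent of any particular UI framework.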
In one possible implementation, when the operation mode of the target control is a multi-state operation mode, the first information further includes auxiliary information corresponding to the operation mode.
In one possible implementation, the multi-state operation mode includes a typing operation and an adjusting operation; the auxiliary information corresponding to the typing operation includes the typed content, and the auxiliary information corresponding to the adjusting operation includes the adjustment state.
It will be appreciated that the adjusting operation may include sliding operations, such as dragging a progress bar.
It can be understood that when the second electronic device in the embodiments of the present application performs operations with multiple possible states, such as sliding or typing, the operation that should be performed on the target control can be accurately obtained from the auxiliary information, improving the accuracy of the operation.
In one possible implementation, the interface information includes an interface name, an interface application name, and an interface activity name.
In one possible implementation, the first electronic device includes a first display area and a second display area;
in the process of playing the first video by the first electronic device, the first display area displays a playing picture of the first video, and the second display area displays an operation process of executing the operation by the first electronic device.
In one possible implementation, the first display region overlaps a partial region of the second display region, or the second display region overlaps a partial region of the first display region.
In one possible implementation, the second display area is displayed on a split screen with the first display area.
In one possible implementation, a third electronic device is connected to the first electronic device; the third electronic device displays the playing picture of the first video, and the first electronic device displays the process of the first electronic device performing the operation.
In one possible implementation, a third electronic device is connected to the first electronic device; the third electronic device displays the process of the first electronic device performing the operation, and the first electronic device displays the playing picture of the first video.
In one possible implementation, a third electronic device is connected to the first electronic device, and the first electronic device displays the process of the first electronic device performing the operation and/or the playing picture of the first video; the third electronic device includes a third display area and a fourth display area, where the third display area displays the display content of the first electronic device and the fourth display area displays the operation process or the playing picture of the first video.
In one possible implementation, the method further includes: displaying prompt information on the display interface of the first electronic device when a play exit instruction is received and the operation is detected to be incomplete.
In a possible implementation manner, a plurality of time points are marked in the first video, and each time point corresponds to a part of operations of a user in the screen recording process.
In one possible implementation, the method further includes: fixing together the associated time points among the plurality of time points, so that the video corresponding to the associated time points is played continuously.
It may be understood that, in the embodiments of the present application, associated time points may be the time points within the operation of executing a single function, for example, the plurality of time points within the operation of enabling the WIFI function in the embodiments of the present application. Fixing the associated time points prevents the progress bar from being dragged back and forth between them during playback. This avoids the problem that, when the recorded video covers the execution of several functions and the user drags the progress bar into the middle of one function to resume watching, the electronic device would otherwise perform extra operations or skip some operations.
In one possible implementation, when the first video plays to a target time point and the operation corresponding to that time point has not been completed, playback of the first video is paused; after the operation corresponding to the target time point is completed, playback of the first video continues.
In one possible implementation, the method further includes: and editing the first video to obtain a second video.
In one possible implementation, the editing process includes a cropping process and/or a splitting process and/or a recombination process.
In one possible implementation, the first electronic device stores the first video and the first information in a MOV format, an EXIF format, or a compressed package format.
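As a sketch of the compressed-package storage option, the recorded video and the first information could be bundled into one archive, with the first information serialized as JSON. The file names and archive layout here are assumptions for illustration, not a format defined by the patent:

```python
import io
import json
import zipfile

def pack_recording(video_bytes: bytes, first_information: list) -> bytes:
    """Bundle the recorded first video and its first information into one archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("video.mov", video_bytes)                      # recorded first video
        zf.writestr("first_information.json", json.dumps(first_information))
    return buf.getvalue()

def unpack_recording(blob: bytes):
    """Recover the video bytes and first information from the archive."""
    with zipfile.ZipFile(io.BytesIO(blob)) as zf:
        return zf.read("video.mov"), json.loads(zf.read("first_information.json"))
```

Storing both parts in one file keeps the video and its operation information paired during transfer, matching the "same file" variant mentioned earlier.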
In one possible implementation, performing the operation based on the first information while playing the first video includes: detecting a play instruction and playing the first video; when the first video plays to a first time point, adding the operation corresponding to the first time point to a task queue, and executing the operations in the task queue based on the first information; and when the first video plays to a second time point, adding the operation corresponding to the second time point to the task queue.
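The time-point/task-queue behavior just described can be sketched as follows; the queue structure and names are illustrative assumptions, not the patent's implementation:

```python
from collections import deque

# Illustrative sketch: as playback reaches each marked time point, the
# operation for that time point is enqueued; queued operations are then
# executed in order based on the first information.

def run_playback(time_points, operations, execute):
    """time_points: ordered playback positions; operations: time point -> operation."""
    task_queue = deque()
    for t in time_points:                  # video plays to time point t
        task_queue.append(operations[t])   # add the corresponding operation
        while task_queue:                  # execute the operations in the queue
            execute(task_queue.popleft())
```

A queue decouples when an operation becomes due (a time point is reached) from when it is actually executed, which is useful if an earlier operation is still in progress.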
In a second aspect, the present application provides an operation method applied to a second electronic device, including: detecting a screen recording instruction; and starting screen recording and generating a first video and first information, where the first information includes interface information of at least one display interface recorded during screen recording and identification information of a target control operated by a user on the at least one display interface, and the first information is usable by an electronic device to perform corresponding operations based on the first information while playing the first video.
In one possible implementation, the second electronic device sends the first video and the first information to the first electronic device.
In a third aspect, the present application provides an electronic device, including: a memory storing instructions to be executed by one or more processors of the electronic device; and a processor, being one of the one or more processors of the electronic device, for performing the operation methods mentioned herein.
In a fourth aspect, the present application provides a readable storage medium having stored thereon instructions that, when executed on an electronic device, cause the electronic device to perform the method of operation mentioned herein.
In a fifth aspect, the present application provides a computer program product, including: executable instructions stored in a readable storage medium; at least one processor of an electronic device can read the executable instructions from the readable storage medium, and execution of the instructions by the at least one processor causes the electronic device to perform the operation methods mentioned herein.
Drawings
Fig. 1a shows a schematic diagram of a screen recording process according to some embodiments of the prior art.
Fig. 1b shows a schematic diagram of playing a recorded video according to some embodiments of the prior art.
Fig. 2a illustrates a schematic diagram of a screen recording process, according to some embodiments of the present application.
Fig. 2b illustrates a schematic diagram of a screen recording process, according to some embodiments of the present application.
Fig. 3 illustrates a process diagram of screen recording, according to some embodiments of the present application.
Fig. 4a-4h illustrate a process diagram of screen recording, according to some embodiments of the present application.
Fig. 5 illustrates a schematic structural diagram of an electronic device, according to some embodiments of the present application.
Fig. 6 illustrates a flow diagram of a method of operation, according to some embodiments of the present application.
Figures 7a-7c illustrate schematic representations of a recorded video, respectively, according to some embodiments of the present application.
Fig. 7d illustrates a storage schematic of recorded video and operational information, respectively, according to some embodiments of the present application.
Fig. 8 illustrates a playback schematic of a recorded video, according to some embodiments of the present application.
Figures 9a-9b illustrate playback diagrams of a recorded video, respectively, according to some embodiments of the present application.
Fig. 10 illustrates a playback schematic of a recorded video, according to some embodiments of the present application.
FIG. 11 illustrates a playback schematic of a recorded video, according to some embodiments of the present application.
FIG. 12 illustrates a playback schematic of a recorded video, according to some embodiments of the present application.
Fig. 13 illustrates a schematic view of a scene exiting play during playback of a recorded video, according to some embodiments of the present application.
Fig. 14 illustrates a schematic view of a setup of time points in a recorded video, according to some embodiments of the present application.
Fig. 15 illustrates an associated fixed schematic of time points in a recorded video, according to some embodiments of the present application.
Fig. 16 illustrates a schematic diagram of the generation of a task queue during playback of a recorded video, according to some embodiments of the present application.
Figs. 17a-17b illustrate schematic diagrams of a video clip, according to some embodiments of the present application.
Fig. 18 illustrates a hardware architecture diagram of an electronic device, according to some embodiments of the present application.
Detailed Description
Illustrative embodiments of the present application include, but are not limited to, a method of operation, an electronic device, and a medium.
To solve the above problems, an embodiment of the present application provides an operation method including: after detecting a recording trigger operation, the first electronic device starts screen recording and records the position information of the target control triggered by the user on each interface during recording. For example, as shown in fig. 2a, when the user clicks the WIFI control 4031 on the settings interface 403 of the mobile phone 100 during recording, the electronic device may record the position information of the WIFI control 4031, for example, "the second control row of the settings interface".
The first electronic device may then send the recorded video and the corresponding position information of the target control on each interface to the second electronic device. While playing the video, the second electronic device can automatically trigger each target control according to its position information, thereby reproducing the operations in the recorded video. For example, during recording on the first electronic device, the user clicks the WIFI control 4031 on the settings interface 403 of the mobile phone 100; while playing the video, the second electronic device may trigger its own WIFI control according to the position information of the WIFI control 4031.
This scheme automatically triggers the corresponding operations during video playback, simplifying the operation process. In some cases, however, the position of a given control varies between devices. For example, as shown in fig. 2a, the position information of the WIFI control 4031 on the mobile phone 100 is "the second control row of the settings interface", while the WIFI control 2011 on the mobile phone 200 is in the third control row of the settings interface 201. When playing the recorded video, the mobile phone 200 would still trigger the second control row of its settings interface 201 according to the recorded position information, producing an erroneous operation.
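The position mismatch can be illustrated with a toy model of two settings interfaces; the row layouts below are invented purely for illustration:

```python
# Toy illustration of the position-mismatch problem: the same control sits
# in different rows on different devices, so replaying by the recorded row
# position selects the wrong control on the second device.
phone_100_rows = ["Bluetooth", "WIFI", "Display"]                  # WIFI in the second row
phone_200_rows = ["Bluetooth", "Airplane mode", "WIFI", "Display"]  # WIFI in the third row

recorded_row = phone_100_rows.index("WIFI")     # position recorded on phone 100
assert phone_200_rows[recorded_row] != "WIFI"   # position-based replay misses the WIFI control
```

This is exactly the failure that motivates identifying controls by a device-independent layout ID instead of by position.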
In addition, in the above scheme the first electronic device records only the position of the target control. For controls with multiple operation states, such as multiple adjustment states or multiple typed inputs, the second electronic device cannot determine the specific target state and therefore cannot perform the specific operation. For example, as shown in fig. 2b, during recording the user slides the brightness adjustment control 1001 on the shortcut interface 101 of the mobile phone 100. In the above scheme, the first electronic device records only the position of the brightness adjustment control 1001, for example, the last control row of the shortcut interface, but not the degree of brightness adjustment. Based on that scheme, the second electronic device therefore cannot determine the adjustment degree from the control's position and cannot adjust the brightness to the same level as in the recorded video.
To solve the above problems, an embodiment of the present application provides an operation method including: the first electronic device detects a recording trigger operation, for example, as shown in fig. 3, the user clicking the recording control 4011 on the mobile phone 100; it then starts screen recording, that is, recording the changes in the screen display caused by the user's operations, obtains the recorded video and first information, and stores them. The first information may be the operation information of the user on the first electronic device during recording. It includes interface information of each interface recorded during screen recording and the user's operation information on each interface, where the operation information may include the user's operation style, the operation target, and the layout ID of the operation target.
It may be understood that in the embodiments of the present application, the user's operation information on each interface may further include information such as the coordinates of the operation target, the control row in which the operation target is located, and the control size. The operation target refers to the target control operated by the user.
It can be appreciated that in the embodiment of the present application, the recorded video and the first information may be stored as two separate files, or may be stored as one file.
In the embodiments of the present application, the first electronic device may send the recorded video and the corresponding first information to the second electronic device. If the second electronic device detects the user's play operation on the recorded video, it plays the recorded video and, during playback, performs the corresponding operations based on the first information, so that the second electronic device executes the same operations that the user performed on the first electronic device in the recorded video. For example, when the recorded video played on the second electronic device shows the user operating the first electronic device to open the settings application, the second electronic device may open the settings application based on the received first information.
The interface information of each interface may include a package name, an application name, an activity name, and the like of the interface.
The user's operation information on each interface may include the user's operation style (e.g., click, key press, fill-in, slide, select, etc.), the operation target, the layout ID of the operation target, and the like. The layout ID of each operation target is unique. For example, if the layout ID of the WIFI control on the settings interface is "12345", then the layout ID of the WIFI control is "12345" on any device. Thus, even if the interfaces of the second electronic device and the first electronic device differ, the electronic device can still accurately locate the corresponding control according to the layout ID of the target control and execute the corresponding operation.
For example, during recording, when the user clicks the WIFI control on the settings interface of the first electronic device, the first electronic device records the interface information of the settings interface, for example, the application name "setting", the package name "com.setting", and the activity name "settings interface", and may record the user's operation information on the settings interface, for example, the user's operation style "click", the operation target "WIFI control", and the layout ID of the WIFI control, "12345".
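The example record above could be represented as a simple structured entry, for instance serialized as JSON. The field names below are illustrative assumptions; the patent names only the recorded contents, not a schema:

```python
import json

# One hypothetical "first information" entry for the WIFI-control click
# described above; field names are assumptions for illustration.
entry = {
    "interface": {
        "application_name": "setting",
        "package_name": "com.setting",
        "activity_name": "settings interface",
    },
    "operation": {
        "style": "click",
        "target": "WIFI control",
        "layout_id": "12345",  # unique across devices in this scheme
    },
}
serialized = json.dumps(entry)  # suitable for storing alongside the recorded video
```

A multi-state operation such as a slide would additionally carry an `"auxiliary"` field (e.g., the brightness-bar position) in the `"operation"` object.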
It is understood that when an operation style has multiple possible states, such as sliding or typing, the operation information may further include auxiliary information corresponding to that operation style. For example, when the operation style is typing, the typed content can vary, so the auxiliary information may include the typed content; when the operation style is sliding, the sliding position can vary, so the auxiliary information may include the sliding position, the progress bar adjustment position, or the like.
For example, as shown in fig. 2b, during recording the user slides the brightness adjustment control 1001 on the shortcut interface 101 of the mobile phone 100. In this embodiment of the present application, besides recording the user's operation style "slide", the operation target "brightness adjustment control", and the layout ID of the brightness adjustment control 1001, "12347", the degree of brightness adjustment, that is, the adjustment position of the brightness bar, for example, thirty percent of the full brightness bar, may also be recorded. In this way, when performing the brightness adjustment operation based on the operation information, the second electronic device can accurately adjust the brightness according to the recorded position of the brightness bar.
Based on this scheme, when the second electronic device performs operations with multiple possible states, such as sliding or typing, it can accurately obtain the operation to be performed on the target control from the auxiliary information, improving the accuracy of the operation.
It will be appreciated that in some embodiments, when the operation target is content in list form, such as a table, the index ID of the content's position in the list may be used instead of the layout ID.
It will be appreciated that in some embodiments, the first electronic device may send the recorded video and the corresponding first information to the second electronic device via any means such as communication software or bluetooth transmission.
It may be appreciated that, in some embodiments, the manner in which the second electronic device performs the corresponding operation on the second electronic device based on the first information during the video playing process may be:
The second electronic device acquires the interface information and the user operation information of the first interface in the received first information, enters the interface to be operated according to the interface information, traverses the layout IDs of all the controls in the interface to be operated according to the layout ID of the corresponding control in the user's operation information, finds a target control whose layout ID is consistent with that of the operation target in the received operation information for the first interface, and performs the corresponding operation on the target control according to the operation style in the operation information. After the operation on the first interface is executed, the interface information and the user operation information of the next interface are obtained, and the above steps are repeated to complete the corresponding operation on the next interface.
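The lookup-and-apply step above can be sketched as follows. This is a minimal illustrative model, not the actual framework code; the control list and names are assumptions.

```python
# Hypothetical sketch: find the target control by layout ID among all
# controls of the interface, then apply the recorded operation style.

def find_control(controls, layout_id):
    """Traverse all controls in the interface and return the one whose
    layout ID matches the recorded operation target, or None."""
    for control in controls:
        if control["layout_id"] == layout_id:
            return control
    return None

def replay_operation(controls, operation):
    target = find_control(controls, operation["layout_id"])
    if target is None:
        return None
    # Perform the recorded operation style on the target control.
    return (operation["style"], target["name"])

main_interface = [
    {"layout_id": "1234", "name": "setting application"},
    {"layout_id": "9999", "name": "camera application"},
]
op = {"style": "click", "layout_id": "1234"}
print(replay_operation(main_interface, op))  # ('click', 'setting application')
```

Because the layout ID is assumed unique per control, the traversal finds the same control even when the two devices lay out their interfaces differently.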
In summary, in the embodiment of the present application, a user may perform screen recording on the first electronic device to acquire the recorded video and the first information of the user's operations on the first electronic device during the recording process, and send them to the second electronic device. In this way, while playing the video, the second electronic device can execute the same operations as those performed by the user on the first electronic device during recording, so that the user receiving the recorded video and the first information does not need to operate the device himself; the electronic device performs the corresponding operations according to the received recorded video and first information, which improves convenience of operation and user experience.
Meanwhile, the first information includes detailed interface information and the operation information corresponding to each interface, such as the layout ID of the target control and the auxiliary information, so that the second electronic device can accurately identify the interface to be operated and the corresponding control to be operated according to the first information, which improves operation accuracy.
It will be appreciated that in some embodiments, during the video playing process of the second electronic device, the performing operation process of the second electronic device may be displayed in a floating window playing form, so as to be convenient for the user to watch.
The scheme in the embodiment of the application is described below by taking a scenario that a user performs screen recording in a WIFI function starting process through a first electronic device and sends recorded video and operation information to a second electronic device as an example. The first electronic device may be a mobile phone 100, and the second electronic device may be a mobile phone 200.
For example, as shown in fig. 4a, when user A clicks the operation recording control 4011 on the shortcut interface 401 of the mobile phone 100, the mobile phone 100 records the change process of the screen display state caused by user A's operations and records the first information corresponding to those operations; the mobile phone 100 may display the recording box 400 shown in fig. 4b throughout the recording process. For example, as shown in fig. 4b, when the user records the WIFI connection operation on the mobile phone 100, the user first clicks the setting application 4021 on the main interface 402. While recording the video, the mobile phone 100 may record the interface information of the main interface 402, for example, including the activity name "main interface", and may record the user's operation information on the main interface 402, for example, including the user's operation style "click", the operation target "setting application", and the layout ID "1234" of the setting application 4021.
After the user clicks the setting control 4021, as shown in fig. 4c, the mobile phone 100 may display the setting interface 403, the user clicks the WIFI control 4031 at the setting interface 403, and the mobile phone 100 may record interface information of the setting interface 403 while recording video, including, for example, an application name "setting", a package name "com.setting" and an activity name "setting interface", and operation information of the user at the setting interface 403, for example, may include an operation style "click" of the user, an operation target "WIFI control", and a layout ID "12345" of the WIFI control 4031.
After the user clicks the WIFI control 4031, as shown in fig. 4d, the mobile phone 100 may display the WIFI interface 404, where the user clicks the switch control 4041 on the WIFI interface 404. While recording the video, the mobile phone 100 may record the interface information of the WIFI interface 404, including, for example, the application name "setting", the package name "com.setting", and the activity name "WIFI interface", and the user's operation information on the WIFI interface 404, which may include, for example, the user's operation style "click", the operation target "switch control", and the layout ID "12346" of the switch control 4041. After the user clicks the switch control 4041, as shown in fig. 4e, the WIFI interface 404 of the mobile phone 100 may display a list of available WLANs. When the user finishes the operation, the user may click the recording stop control 4001 in the recording box 400 to stop recording; at this time, the mobile phone 100 acquires and stores the recorded video 500 and the corresponding first information of the whole operation process.
If user A wants to send the recorded video 500 to the mobile phone 200 of user B, the recorded video 500 can be sent to the mobile phone 200 of user B through communication software as shown in fig. 4f. It will be appreciated that the first information is also transmitted to the mobile phone 200 of user B at the same time as the recorded video 500. As shown in fig. 4g, user B may receive and present the recorded video 500 via communication software.
When user B clicks the recorded video 500 to play it, as shown in fig. 4h, while the recorded video 500 is playing, the mobile phone 200 may perform operations based on the recorded information and display the operation process in the operation floating window 405. The operation process may specifically include: first triggering the setting control on the main interface and displaying the setting interface, then triggering the WIFI control on the setting interface and displaying the WIFI interface, and then triggering the switch control on the WIFI interface to start the WIFI function.
Before describing the operation method in the embodiment of the present application in detail, the electronic device provided in the embodiment of the present application is first described in detail. The electronic device may be the first electronic device or the second electronic device.
The software system of the electronic device may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the invention, an Android system with a layered architecture is taken as an example, and the software structure of the electronic equipment is illustrated.
Fig. 5 is a software structure block diagram of an electronic device according to an embodiment of the present invention. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, Android Runtime (Android runtime) and system libraries, and a kernel layer.
The application layer may include a series of application packages. As shown in fig. 5, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 5, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
It will be appreciated that in some embodiments, the window manager may send a play instruction to the corresponding application program in response to the user's play operation on the recorded video; for example, if the user clicks play in the album application, the window manager may send the play instruction to the album application. In addition, the window manager can, in response to the user's play operation on the recorded video, acquire the first information and control the corresponding application program to perform the corresponding operation on the second electronic device based on the first information.
For example, the window manager may acquire the interface information and the user operation information of the first interface in the received first information, determine the interface to be operated according to the interface information, traverse the layout IDs of all the controls in the interface to be operated according to the layout IDs of the corresponding controls in the operation information of the user, so as to find a target control consistent with the layout ID of the operation target in the operation information of the first interface, and control the corresponding application to perform corresponding operation on the target control according to the operation style in the operation information. After the operation on the first interface is executed, the interface information and the user operation information of the next interface are obtained, and the steps are executed, so that the corresponding operation on the next interface is completed.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is for providing communication functions of the electronic device, for example, management of call status (including connected, hung up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages that automatically disappear after a short stay without user interaction, for example, to notify that a download is complete or to provide a message alert. The notification manager may also present notifications in the form of a chart or scroll-bar text in the system top status bar, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, or an indicator light blinks.
Android Runtime includes a core library and virtual machines. Android Runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the Android core libraries.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
It can be appreciated that in the embodiment of the present application, the media library may call a dedicated recording interface to perform screen recording in response to a user's recording operation, and store the acquired recorded video and the first information of the user's operations on the first electronic device during recording in a set location, for example, in the album application.
For example, in some embodiments, the media library may store files containing the recorded video and the first information in a compressed package in an album application of the electronic device.
In some embodiments, the media library may store the entire file corresponding to the recorded video and the first information in a multi-track format, such as an exchangeable image file format (Exchangeable image file format, EXIF) or MOV format.
When the recorded video and the first information are stored in an album or other folder of the first electronic device in the EXIF format, the recorded video may be stored in the form of a dynamic image via a video track, and the first information may be stored in the form of text via a data track. It will be appreciated that files in the EXIF format do not support operations such as cropping and editing.
When the recorded video and the first information are saved in an album or other folder of the electronic device in the MOV format, the media library may store the recorded video via a video track in a video format. The media library can output each piece of interface information in the first information as a corresponding text file through a command line, and the operation information of each interface can likewise be output as a corresponding text file through the command line; the text files corresponding to the interface information and operation information of each interface can be stored in text form via a data track.
It will be appreciated that in the embodiments of the present application, the MOV format of the video supports any cropping and editing operations, so storing the recorded video and the first information in the MOV format can enable the recorded video to be in an editable form, which facilitates post-processing.
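The per-interface text files described above could be organized as in the following sketch. The file naming scheme and JSON serialization are illustrative assumptions, not the actual data-track format.

```python
# Illustrative sketch: exporting each interface's information and operation
# information as separate text files, as the data-track content might be
# organized. File names and JSON layout are assumptions for illustration.
import json
import os
import tempfile

def export_first_information(first_info, out_dir):
    """Write one text file per interface for interface info and one for
    operation info; return the list of files written."""
    written = []
    for i, record in enumerate(first_info, start=1):
        for kind in ("interface", "operation"):
            path = os.path.join(out_dir, f"{kind}_{i}.txt")
            with open(path, "w", encoding="utf-8") as f:
                json.dump(record[kind], f, ensure_ascii=False)
            written.append(path)
    return written

first_info = [
    {"interface": {"activity": "main interface"},
     "operation": {"style": "click", "layout_id": "1234"}},
]
with tempfile.TemporaryDirectory() as d:
    files = export_first_information(first_info, d)
    print(len(files))  # 2
```

Keeping the text files separate per interface mirrors the description that each interface's information and operation information is output individually before being attached to the data track.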
In some embodiments, the media library may bundle the recorded video and the first information and save the bundled video in an album application of the electronic device in MP4 format.
And in some embodiments, the media library may receive recorded video and first information sent by other electronic devices.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following describes the operation method provided in the embodiment of the present application in detail based on the above electronic device. Fig. 6 shows a flow of interaction between a first electronic device and a second electronic device according to an embodiment of the present application, and as shown in fig. 6, an operation method includes:
601: and the first electronic equipment detects a recording triggering operation and starts screen recording.
It will be appreciated that, as shown in fig. 4a, the recording trigger operation may be an operation that the user clicks the operation recording control 4011 in the mobile phone 100.
It can be appreciated that, after the first electronic device starts the screen recording, during the recording process, a recording frame may be displayed on the display interface to prompt the user that the recording process is being performed, and prompt the user for the recording time. In some embodiments, a recording stop control may also be present in the recording box.
For example, as shown in fig. 4b, a recording box 400 may be displayed on the display interface of the mobile phone 100, and a recording time and recording stop control 4001 may be included in the recording box 400.
602: the first electronic device obtains the recorded video and the first information.
It will be appreciated that the recorded video may include the changes in the screen display caused by the user's operations during recording.
The first information may include interface information of each interface recorded in the screen recording process and operation information of the user on each interface.
The interface information of each interface may include a package name, an application name, an activity name, and the like of the interface.
The operation information of the user on each interface may include the user's operation style (e.g., click, type, fill in information, slide, select, etc.), the operation target, the layout ID of the operation target, and the like. The layout ID of each operation target is unique. For example, if the layout ID of the WIFI control on the setting interface is "12345", then on any device the layout ID of the WIFI control is "12345". Thus, even if the interfaces of the second electronic device and the first electronic device differ, the electronic device can still accurately find the corresponding control according to the layout ID of the target control and execute the corresponding operation.
For example, in the recording process, when the user clicks the WIFI control on the setting interface of the first electronic device, the first electronic device records the interface information of the setting interface, including, for example, the application name "setting", the package name "com.setting", and the activity name "setting interface", and may record the user's operation information on the setting interface, which may include, for example, the user's operation style "click", the operation target "WIFI control", and the layout ID "12345" of the WIFI control.
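The first information collected for one interface can be sketched as a pair of records. This is an illustrative model only; the key names are hypothetical, not the embodiment's actual storage format.

```python
# Hypothetical sketch of the first information collected during recording:
# per-interface interface information plus the user's operation information.
# Key names are illustrative assumptions.
interface_info = {
    "application_name": "setting",
    "package_name": "com.setting",
    "activity_name": "setting interface",
}
operation_info = {
    "style": "click",
    "target": "WIFI control",
    "layout_id": "12345",  # layout IDs are described as unique per control
}
first_information = [{"interface": interface_info,
                      "operation": operation_info}]
print(first_information[0]["operation"]["layout_id"])  # 12345
```

The list grows by one entry per interface the user passes through during recording, preserving the order of operations for later replay.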
It is understood that when the operation style is an operation that has multiple possible states, such as sliding or typing, the operation information may further include auxiliary information corresponding to that operation style. For example, when the operation style is typing, the typed content can vary, so the auxiliary information may include the typed content; when the operation style is sliding, the sliding position can vary, so the auxiliary information may include the sliding position, a progress bar adjustment position, or the like.
For example, as shown in fig. 2b, during the recording process, the user slides the brightness adjustment control 1001 on the shortcut interface 101 of the mobile phone 100. In this embodiment of the present application, in addition to recording the user's operation style "slide", the operation target "brightness adjustment control", and the layout ID "12347" of the brightness adjustment control 1001, the brightness adjustment degree, that is, the adjustment position of the brightness bar (for example, thirty percent of the overall brightness bar), may also be recorded. In this way, when performing the brightness adjustment operation based on the operation information, the second electronic device can accurately adjust the brightness according to the adjustment position of the brightness bar.
Based on this scheme, when the second electronic device performs operations that have multiple possible states, such as sliding or typing, it can accurately determine, from the auxiliary information, the operation to be performed on the target control, which improves the accuracy of the operation.
It will be appreciated that in some embodiments, when the operation target is content in a list form such as a table, an index ID of a corresponding position of the content in the list may be used instead of the layout ID.
It is understood that the first electronic device may store the recorded video and the first information. There are various storage forms of the recorded video and the first information in the first electronic device.
In some embodiments, the recorded video and the first information may be bundled and stored in a compressed package file format in an album application or other folder on the first electronic device.
For example, in some embodiments, the compressed package file containing the recorded video and the first information may be presented in the album application in the form of a video with an operation mark. For example, the compressed package file may be presented as a video style with a frame highlighting operation mark 501 added, as shown in fig. 7a; as a video style with a corner mark operation mark 502 added, as shown in fig. 7b; or as a video style with a mask prompting operation mark 503 added, as shown in fig. 7c.
In some embodiments, the recorded video and the first information may be stored in an album application or other folder of the first electronic device in MP4 format.
In some embodiments, the overall file corresponding to the recorded video and the first information may also be stored in a multi-track format, such as an exchangeable image file format (Exchangeable image file format, EXIF) or MOV format.
When the recorded video and the first information are stored in an album or other folder of the first electronic device in the EXIF format, the recorded video may be stored in the form of a dynamic image via a video track, and the first information may be stored in the form of text via a data track. It will be appreciated that files in the EXIF format do not support operations such as cropping and editing.
When the recorded video and the first information are saved in an album or other folder of the electronic device in the MOV format, as shown in fig. 7d, the recorded video may be stored via a video track in a video format, such as the MP4 format. Each piece of interface information in the first information can be output as a corresponding text file through a command line, and the operation information of each interface can likewise be output as a corresponding text file; the text files corresponding to the interface information and operation information of each interface can be stored in text form via a data track.
It will be appreciated that in the embodiments of the present application, the MOV format of the video supports any cropping and editing operations, so storing the recorded video and the first information in the MOV format can enable the recorded video to be in an editable form, which facilitates post-processing.
603: the first electronic device sends the recorded video and the corresponding first information to the second electronic device.
In some embodiments, the first electronic device may send the recorded video and the file corresponding to the corresponding first information, for example, the compressed package file, to the second electronic device through the communication software and through the network.
In some embodiments, the first electronic device may send the recorded video and the file corresponding to the corresponding first information to the second electronic device by means of bluetooth transmission.
In some embodiments, the first electronic device may further upload the file corresponding to the recorded video and the corresponding first information to the cloud space, and the second electronic device may download the file corresponding to the recorded video and the corresponding first information from the cloud space.
In some embodiments, the first electronic device shares the recorded video and the file corresponding to the corresponding first information to the second electronic device in a link, password sharing or other manner, and the second electronic device obtains the recorded video and the corresponding first information by opening the sharing link or inputting the password.
It can be appreciated that the above ways in which the first electronic device transmits the recorded video and the corresponding first information to the second electronic device are merely illustrative; in the embodiment of the present application, any implementation manner may be used.
For example, if user A wants to send the recorded video 500 to the mobile phone 200 of user B, the recorded video 500 may be sent to the mobile phone 200 of user B through the network and communication software as shown in fig. 4f. It will be appreciated that the first information may be transmitted to the mobile phone 200 of user B simultaneously with the recorded video.
604: the second electronic device receives the recorded video and the corresponding first information.
It will be appreciated that the second electronic device receives the first information simultaneously with the recorded video.
As shown in fig. 4g, user B may receive recorded video 500 via communication software and present the recorded video in a video format.
It will be appreciated that after the second electronic device receives the recorded video and the corresponding first information, the recorded video and the corresponding first information may be stored.
There are various storage formats in which the recorded video and the first information may be stored in the second electronic device. In some embodiments, the recorded video and the first information may be bundled and saved in a compressed package file format in an album application or other folder of the second electronic device.
For example, in some embodiments, the compressed package file containing the recorded video and the first information may be presented in the album application in the form of a video with an operation mark. For example, the compressed package file may be presented as a video style with a frame highlighting operation mark 501 added, as shown in fig. 7a; as a video style with a corner mark operation mark 502 added, as shown in fig. 7b; or as a video style with a mask prompting operation mark 503 added, as shown in fig. 7c.
In some embodiments, the recorded video and the first information will be bundled and saved in an album application or other folder of the second electronic device in MP4 format.
In some embodiments, the overall file corresponding to the recorded video and the first information may also be stored in a multi-track format, such as an exchangeable image file format (Exchangeable image file format, EXIF) or MOV format.
When the recorded video and the first information are stored in an album or other folder of the second electronic device in the EXIF format, the recorded video may be stored in the form of a dynamic image via a video track, and the first information may be stored in the form of text via a data track. It will be appreciated that files in the EXIF format do not support operations such as cropping and editing.
When the recorded video and the first information are saved in an album or other folder of the electronic device in the MOV format, as shown in fig. 7d, the recorded video may be stored via a video track in a video format, such as the MP4 format. Each piece of interface information in the first information can be output as a corresponding text file through a command line, and the operation information of each interface can likewise be output as a corresponding text file; the text files corresponding to the interface information and operation information of each interface can be stored in text form via a data track.
It will be appreciated that in the embodiments of the present application, the MOV format of the video supports any cropping and editing operations, so storing the recorded video and the first information in the MOV format can enable the recorded video to be in an editable form, which facilitates post-processing.
605: and the second electronic equipment responds to the playing operation of the user and executes corresponding operation based on the first information.
It can be understood that, in the embodiment of the present application, the operation performed by the second electronic device based on the first information is consistent with the operation performed by the user on the display interface in the recorded video.
It may be appreciated that in the embodiment of the present application, the manner in which the second electronic device performs the corresponding operation based on the first information may be:
The second electronic device acquires the interface information and the user operation information of the first interface in the received first information, enters the interface to be operated according to the interface information, traverses the layout IDs of all the controls in the interface to be operated according to the layout ID of the corresponding control in the user's operation information, finds a target control whose layout ID is consistent with that of the operation target in the received operation information for the first interface, and performs the corresponding operation on the target control according to the operation style in the operation information. After the operation on the first interface is executed, the interface information and the user operation information of the next interface are obtained, and the above steps are repeated to complete the corresponding operation on the next interface.
For example, taking the second electronic device as the mobile phone 200 and the content of the recorded video as the operation procedure for starting the WIFI function shown in fig. 4, the manner in which the second electronic device performs the corresponding operations based on the first information while playing the video is as follows:
The mobile phone 200 first obtains interface information of the main interface 402 in the first information, for example, including activity name "main interface", and obtains operation information of the user on the main interface 402, for example, may include an operation style "click" of the user, an operation target "set application", and a layout ID "1234" of the set application 4021.
Then, the main interface of the mobile phone 200 is found as the interface to be operated according to the interface information of the main interface 402. According to the layout ID "1234" of the setting application 4021 in the user's operation information, the layout IDs of all the controls on the main interface of the mobile phone 200 are traversed to find the target control whose layout ID matches "1234", and the target control is clicked according to the operation style "click" in the first information. After the click operation on the target control is completed, the interface information and the user operation information of the setting interface 403 are obtained, and the above steps are repeated to complete the corresponding operation on the setting interface of the mobile phone 200.
In some embodiments, as shown in fig. 8, in response to a playing operation of the user, the second electronic device may perform the operations in the background while playing the recorded video full-screen or in a large window, without displaying the operation process.
In some embodiments, in response to a playing operation of the user, the second electronic device may perform full-screen playing or large-window playing on the recorded video, and display an operation procedure in the operation floating window.
For example, when the user clicks the recorded video 500 received by the mobile phone 200 to play it, as shown in fig. 4h, the recorded video may be played full-screen on the main interface of the mobile phone 200 while the operation process is shown in the operation floating window 405.
In some embodiments, in response to a playing operation of the user, the second electronic device may present the recorded video in a video floating window and play the operation process in a large window.
For example, as shown in fig. 9a, when the user clicks the recorded video 500 received by the mobile phone 200 to play it, the recorded video may be played in the video floating window 406 while the main interface of the mobile phone 200 displays the operation process.
In some embodiments, as shown in fig. 9b, the second electronic device may include a first display area and a second display area, the first display area displaying the recorded video and the second display area displaying the operation procedure, i.e., the recorded video and the operation procedure split screen display.
In some embodiments, the second electronic device may be interconnected with any number of other devices. Video playing may be performed by any one of the interconnected devices, and any one of the other devices may control the second electronic device to perform the operation corresponding to the first information. The interconnection may be in any manner, such as a Bluetooth connection, pairing through the mobile phone NFC function, a code-scanning connection, or a wired connection.
For example, as shown in fig. 10, the third electronic device is a tablet pc 300, the second electronic device is a mobile phone 200, and if the tablet pc 300 and the mobile phone 200 are paired and connected through a mobile phone NFC function, the third electronic device may be used to play the recorded video 500, and the second electronic device may be used to perform a corresponding operation according to the first information.
As another example, as shown in fig. 11, the mobile phone 200 may be used to play the recorded video 500, while the tablet 300 may be used to control the second electronic device to perform the corresponding operation according to the first information and to display the operation process.
In some embodiments, as shown in fig. 12, the tablet 300 may display a virtual interface 302 corresponding to the mobile phone 200. The mobile phone 200 displays the operation process on its main interface, and the virtual interface 302 of the tablet 300, consistent with the mobile phone, displays the operation process as well, while the play window 301 of the tablet 300 simultaneously plays the recorded video.
In some embodiments, when the user exits the playing, if the second electronic device has an incomplete operation, the user may be reminded in a pop-up window or other form.
For example, as shown in fig. 13, when the user exits the play, a prompt box 600 may indicate that the operation is not yet complete and ask whether to continue. When the user clicks the "yes" control 601 in the prompt box 600, the video continues to play; when the user clicks the "no" control 602 in the prompt box 600, the video stops playing.
It will be appreciated that in some embodiments, the recorded video may retain the conventional video functions of fast forward, fast rewind, window size adjustment, and pause, and may be played repeatedly.
It will be appreciated that in some embodiments, the first electronic device may mark a plurality of time points in the recorded video. For example, in some embodiments, a time point may be marked during recording after all the operations on an interface are completed and before the next interface is displayed.
In the following, taking the first electronic device as the mobile phone 100, the manner of obtaining the recorded video and the first information in the embodiment of the present application is described using the operation process in which the mobile phone 100 records user A starting the WIFI function as an example. As shown in fig. 4b, the user records the WIFI-on operation on the mobile phone 100. The user first clicks the setting application 4021 on the main interface 402; while recording the video, the mobile phone 100 records the interface information of the main interface 402, including, for example, the activity name "main interface", and records the user's operation information on the main interface 402, including, for example, the user's operation style "click", the operation target "setting application", and the layout ID "1234" of the setting application 4021. After the user clicks the setting application 4021, the mobile phone 100 may mark the first time point in the recorded video.
After the user clicks the setting application 4021, as shown in fig. 4c, the mobile phone 100 may display the setting interface 403. When the user clicks the WIFI control 401 on the setting interface 403, the mobile phone 100 may record the interface information of the setting interface 403, including, for example, the application name "setting", the package name "com.setting", and the activity name "setting interface", together with the user's operation information on the setting interface 403, including, for example, the user's operation style "click", the operation target "WIFI control", and the layout ID "12345" of the WIFI control 401. After the user clicks the WIFI control 401, the mobile phone 100 may mark the second time point in the recorded video.
After the user clicks the WIFI control 401, as shown in fig. 4d, the mobile phone 100 may display the WIFI interface 404. When the user clicks the WIFI switch control 4041, the mobile phone 100 may record the interface information of the WIFI interface 404, including, for example, the application name "setting", the package name "com.setting", and the activity name "WIFI interface", together with the user's operation information on the WIFI interface 404, including, for example, the user's operation style "click", the operation target "switch control", and the layout ID "12346" of the switch control 4041. After the user clicks the switch control 4041, as shown in fig. 4e, the WIFI interface 404 of the mobile phone 100 may display a list of available WLANs. When the user finishes the operation, the recording stop control 4001 in the recording box 400 may be clicked to stop recording, and the mobile phone 100 marks the third time point, that is, the ending time point. The mobile phone 100 then obtains and stores the recorded video of the whole operation process and the corresponding first information.
For example, as shown in fig. 14, the obtained recorded video 500 may include three time points, where each time point corresponds to one sub-task of the WIFI-on task.
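The recording flow with time-point marking described above might be modeled as follows. This Python sketch is purely illustrative (the class name, timestamps, and field names are all assumptions); it records one first-information entry per operation and marks a time point after each operation completes, plus the ending time point on stop:

```python
class Recorder:
    """Minimal sketch of a recorder that stores first information and
    marks a time point in the video after the operations on each
    interface are completed."""
    def __init__(self):
        self.first_information = []
        self.time_points = []

    def record_step(self, interface, operation, timestamp):
        # Store interface info and operation info, then mark a time point.
        self.first_information.append({"interface": interface,
                                       "operation": operation})
        self.time_points.append(timestamp)

    def stop(self, timestamp):
        # Mark the ending time point when recording stops.
        self.time_points.append(timestamp)

rec = Recorder()
rec.record_step({"activity": "main interface"},
                {"style": "click", "layout_id": "1234"}, 2.0)
rec.record_step({"activity": "setting interface"},
                {"style": "click", "layout_id": "12345"}, 4.5)
rec.stop(7.0)   # three time points in total, as in the WIFI example
```

The three resulting time points play the role of the three marks in fig. 14, one per sub-task.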
It is appreciated that in some embodiments, several of the multiple time points may be associated and fixed with one another, and the associated time points may be the time points during the execution of any one function, such as the multiple time points during the WIFI-on operation described above. Therefore, when the first electronic device or the second electronic device plays the video, the progress bar between associated time points cannot be dragged back and forth. This avoids the situation in which, for a video that records the execution of multiple functions, the user drags the progress bar into the middle of the execution of one function for subsequent viewing, causing the electronic device to execute redundant operations or to skip part of the operations.
For example, in some scenarios, the user may have recorded a money transfer process in addition to the WIFI-on process. As shown in fig. 15, suppose that the first time point in the recorded video 500 is the time point marked after the user clicks the setting application 4021 in the scene shown in fig. 4, the second time point is the time point marked after the user clicks the WIFI control 401, the third time point is the time point marked after the user clicks the switch control 4041, the operation between the third time point and the fourth time point is the operation of returning to the main interface, and the fifth, sixth, and seventh time points are the time points marked while the user performed the transfer function during recording. The first electronic device may then associate and fix the first, second, and third time points, and likewise associate and fix the fifth, sixth, and seventh time points. When playing the video, the first electronic device or the second electronic device cannot drag the progress bar back and forth between the start time and the third time point.
It is understood that the content between a fixed time point and the previous time point is the operation content corresponding to that fixed time point; therefore, when any time point is fixed, the content between it and the previous time point is also fixed. For the recorded video shown in fig. 15, the second electronic device or the first electronic device cannot drag the progress bar between the start time and the third time point when playing the recorded video 500, nor can the progress bar between the fourth time point and the seventh time point be dragged back and forth. If the user wants to drag the progress bar, the user can only drag it from the initial time point directly to the third time point and play the video from there; the user cannot drag it from the initial time point to the first or second time point. This effectively prevents the electronic device from executing only part of the WIFI-on operations when the user intends to start the WIFI function, and from executing part of the WIFI-on operations when the user intends to perform the transfer function.
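The seek restriction between associated time points can be illustrated with a small Python sketch. The grouping model and the snap-forward behavior are assumptions consistent with the description above, not a definitive implementation:

```python
def allowed_seek(target, associated_groups):
    """Return the playback position actually permitted for a drag of the
    progress bar.  If the target falls strictly inside a group of
    associated (fixed) time points, the drag snaps forward to the end of
    the group, so the device never replays only part of one function's
    operations."""
    for start, end in associated_groups:
        if start < target < end:
            return end   # cannot stop mid-function; jump to its last time point
    return target

# Illustrative groups for a video like fig. 15: the WIFI-on operations run
# from the start to the third time point, and the transfer operations from
# the fourth to the seventh time point (all times here are made up).
groups = [(0.0, 12.0), (15.0, 30.0)]
```

With these groups, a drag to 5.0 seconds (inside the WIFI-on segment) lands at 12.0 seconds, while a drag to 13.0 seconds (between the two segments) is allowed as-is.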
In some embodiments, the second electronic device may start playing the video and, at the same time, perform the corresponding operations on the second electronic device based on the first information. During playback, when the second electronic device plays to any time point, if the operations before that time point have not finished, the second electronic device may automatically pause playback, wait for the operations before that time point to finish, and then resume playing.
For example, in the process of playing the recorded video 500, if the recorded video 500 has been played to the second time point but the second electronic device has not completed the operation of clicking the WIFI control on the setting interface, the second electronic device may stop playing the video, wait until the operation of clicking the WIFI control on the setting interface is completed, and then resume playing the video.
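The auto-pause condition above reduces to comparing the number of time points reached with the number of operations completed. The following Python sketch is illustrative only, with hypothetical time points:

```python
def playback_state(play_position, time_points, completed_tasks):
    """Return 'paused' when playback has reached a time point whose
    corresponding operation has not yet finished on the device, and
    'playing' otherwise."""
    reached = sum(1 for t in time_points if play_position >= t)
    return "playing" if completed_tasks >= reached else "paused"

time_points = [2.0, 4.5, 7.0]   # made-up time points for the three sub-tasks
# Playback is past the second time point but only one operation is done,
# so the video pauses until the second operation completes.
state = playback_state(4.6, time_points, completed_tasks=1)  # → "paused"
```

Once `completed_tasks` catches up with the number of reached time points, the state flips back to playing.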
In some embodiments, the second electronic device may not trigger the operations while playing the video; instead, when playback reaches each time point, the operation task before that time point is triggered and added to a task queue, and the second electronic device executes the operation tasks sequentially in the order in which they were added. The second electronic device first triggers execution when playback reaches the first time point.
For example, as shown in fig. 16, when the received video is the operation procedure for starting the WIFI function shown in fig. 4: when the second electronic device plays to the first time point, the first operation task is triggered, that is, the operation before the first time point, for example, clicking the setting application on the main interface, and the first operation task is added to the task queue; when the second electronic device plays to the second time point, the second operation task is triggered, that is, the operation before the second time point, for example, clicking the WIFI control on the setting interface, and the second operation task is added to the task queue.
It will be appreciated that in the embodiment shown in fig. 16, the recorded video is allowed to have been played to the second time point while the operation task corresponding to the first time point has not yet been completed; in this case, the video may continue playing while the background continues to execute the operation tasks sequentially according to the task queue.
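The task-queue variant described above can be sketched as follows. In this illustrative Python model (class name, timings, and task labels are all assumptions), reaching each time point enqueues the operation task for the segment before it, and a background worker drains the queue in FIFO order while playback continues:

```python
from collections import deque

class OperationScheduler:
    """Sketch of the task-queue variant: each time point, once reached,
    enqueues its operation task; tasks are then executed sequentially in
    the order in which they were added."""
    def __init__(self, time_points, tasks):
        self.pending = list(zip(time_points, tasks))  # (time point, task)
        self.queue = deque()
        self.executed = []

    def on_play_position(self, position):
        # Trigger every task whose time point playback has reached.
        while self.pending and position >= self.pending[0][0]:
            _, task = self.pending.pop(0)
            self.queue.append(task)

    def drain_one(self):
        # Background worker: execute the oldest queued task.
        if self.queue:
            self.executed.append(self.queue.popleft())

sched = OperationScheduler([2.0, 4.5],
                           ["click setting application", "click WIFI control"])
sched.on_play_position(4.6)  # both time points reached; both tasks queued
sched.drain_one()            # first task executed; second still queued
```

Because the queue preserves insertion order, playback can run ahead of execution (as in fig. 16) without the operations being performed out of order.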
In some embodiments, both the first electronic device and the second electronic device may also edit the recorded video, for example cutting out a wrong operation, or splitting the operations to obtain multiple video files. This effectively handles situations in which an operation error or a redundant operation occurred during recording. It will be appreciated that the first information is adjusted correspondingly while the electronic device edits the recorded video.
For example, as shown in fig. 17a, after the electronic device performs the operation of clicking the WIFI control on the setting interface and marks the second time point, the volume key is inadvertently pressed and the electronic device marks the third time point, so the operation corresponding to the third time point is a redundant operation. In this case, the user may cut out the video content between the second time point and the third time point as shown in fig. 17b, and the electronic device may automatically delete the first information corresponding to the video between the second time point and the third time point.
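The coupled editing of the video and the first information can be illustrated with a minimal sketch. The one-first-information-entry-per-time-point model here is an assumption for illustration only:

```python
def cut_segment(time_points, first_information, start_idx, end_idx):
    """Cut the video content between two marked time points and delete
    the first-information entries corresponding to the removed segment.
    Indices are positions in the time-point list (illustrative model:
    one first-information entry per time point)."""
    kept_points = time_points[:start_idx + 1] + time_points[end_idx + 1:]
    kept_info = first_information[:start_idx + 1] + first_information[end_idx + 1:]
    return kept_points, kept_info

# Made-up timeline: the third time point marks an accidental volume-key press.
points = [2.0, 4.5, 5.0]
info = ["click setting application", "click WIFI control", "press volume key"]

# Cutting between the second and third time points removes both the video
# segment and the redundant first-information entry together.
new_points, new_info = cut_segment(points, info, start_idx=1, end_idx=2)
```

Keeping the two structures in lockstep is what guarantees that a trimmed video never replays an operation whose first information was deleted.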
The following describes a hardware structure of the electronic device provided in the embodiment of the present application by taking the mobile phone 10 as an example.
As shown in fig. 18, the mobile phone 10 may include a processor 110, a power module 140, a memory 180, a mobile communication module 130, a wireless communication module 120, a sensor module 190, an audio module 150, a camera 170, an interface module 160, keys 101, a display 102, and the like.
It should be understood that the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the mobile phone 10. In other embodiments of the present application, the mobile phone 10 may include more or fewer components than shown, some components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, for example, processing modules or processing circuits that may include a central processing unit (Central Processing Unit, CPU), an image processor (Graphics Processing Unit, GPU), a digital signal processor (DSP), a microcontroller (Micro-programmed Control Unit, MCU), an artificial intelligence (Artificial Intelligence, AI) processor, a programmable logic device (Field Programmable Gate Array, FPGA), or the like. The different processing units may be separate devices or may be integrated in one or more processors. A memory unit may be provided in the processor 110 for storing instructions and data. In some embodiments, the memory unit in the processor 110 is a cache, which may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instructions or data, it can call them directly from the cache.
It will be appreciated that the methods of operation in embodiments of the present application may be performed by the processor 110 of the corresponding electronic device.
The power module 140 may include a power source, a power management component, and the like. The power source may be a battery. The power management component is used for managing the charging of the power supply and the power supply supplying of the power supply to other modules. In some embodiments, the power management component includes a charge management module and a power management module. The charging management module is used for receiving charging input from the charger; the power management module is used for connecting a power supply, and the charging management module is connected with the processor 110. The power management module receives input from the power and/or charge management module and provides power to the processor 110, the display 102, the camera 170, the wireless communication module 120, and the like.
The mobile communication module 130 may include, but is not limited to, an antenna, a power amplifier, a filter, a low noise amplifier (LNA), and the like. The mobile communication module 130 may provide a solution for wireless communication including 2G/3G/4G/5G, etc. applied to the mobile phone 10. The mobile communication module 130 may receive electromagnetic waves from the antenna, perform processing such as filtering and amplifying on the received electromagnetic waves, and transmit the processed signals to the modem processor for demodulation. The mobile communication module 130 may also amplify signals modulated by the modem processor and convert them into electromagnetic waves for radiation through the antenna. In some embodiments, at least some of the functional modules of the mobile communication module 130 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 130 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through audio devices (not limited to speakers 170A, receivers 170B, etc.) or displays images or video through the display screen 102. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 130 or other functional module, independent of the processor 110.
The wireless communication module 120 may include an antenna, and transmit and receive electromagnetic waves via the antenna. The wireless communication module 120 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc. applied to the handset 10. The handset 10 may communicate with a network and other devices via wireless communication technology.
In some embodiments, the mobile communication module 130 and the wireless communication module 120 of the handset 10 may also be located in the same module.
The display screen 102 is used for displaying human-computer interaction interfaces, images, videos, and the like. The display screen 102 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like.
The sensor module 190 may include a proximity light sensor, a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
The audio module 150 is used to convert digital audio information into an analog audio signal output, or to convert an analog audio input into a digital audio signal. The audio module 150 may also be used to encode and decode audio signals. In some embodiments, the audio module 150 may be disposed in the processor 110, or some functional modules of the audio module 150 may be disposed in the processor 110. In some embodiments, the audio module 150 may include a speaker, an earpiece, a microphone, and an earphone interface.
Speakers, also known as "horns," are used to convert audio electrical signals into sound signals.
The receiver, also known as the "earpiece", is used to convert audio electrical signals into sound signals.
Microphones, also known as "mics", are used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak with the mouth close to the microphone, inputting a sound signal into the microphone. The mobile phone 10 may be provided with at least one microphone 170C. In other embodiments, the mobile phone 10 may be provided with two microphones 170C, which, in addition to collecting sound signals, may implement a noise reduction function. In other embodiments, the mobile phone 10 may be provided with three, four, or more microphones to collect sound signals, reduce noise, identify sound sources, implement directional recording, and the like.
The earphone interface is used for connecting wired earphones. The earphone interface may be a USB interface, a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface.
The camera 170 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to an image signal processor (Image Signal Processor, ISP) to be converted into a digital image signal. The mobile phone 10 may implement shooting functions through the ISP, the camera 170, a video codec, a GPU (Graphics Processing Unit), the display screen 102, an application processor, and the like.
Specifically, the ISP is used to process the data fed back by the camera 170. For example, when photographing, the shutter opens, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also optimize the noise, brightness, and skin color of the image, as well as parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 170.
Video codecs are used to compress or decompress digital video. The mobile phone 10 may support one or more video codecs, so that the mobile phone 10 can play or record video in multiple encoding formats, such as moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, and MPEG4.
The interface module 160 includes an external memory interface, a universal serial bus (universal serial bus, USB) interface, a subscriber identity module (subscriber identification module, SIM) card interface, and the like. The external memory interface may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the mobile phone 10. The external memory card communicates with the processor 110 through the external memory interface to implement data storage functions. The universal serial bus interface is used for communication between the mobile phone 10 and other electronic devices. The subscriber identity module card interface is used to communicate with a SIM card mounted in the mobile phone 10, for example, by reading the telephone number stored in the SIM card or writing a telephone number into the SIM card.
In some embodiments, the handset 10 further includes keys 101, motors, indicators, and the like. The key 101 may include a volume key, an on/off key, and the like. The motor may generate a vibration alert. The motor can be used for incoming call vibration prompting and also can be used for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor may also correspond to different vibration feedback effects for touch operations applied to different areas of the display screen 102. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization. The indicators may include laser indicators, radio frequency indicators, LED indicators, and the like.
An embodiment of the present application provides an electronic device, including: the memory is used for storing instructions executed by one or more processors of the electronic device, and the processor is one of the one or more processors of the electronic device and is used for executing the operation method.
The embodiment of the application provides a readable storage medium, wherein instructions are stored on the readable storage medium, and when the instructions are executed on electronic equipment, the instructions cause the electronic equipment to execute the operation method.
Embodiments of the present application provide a computer program product including execution instructions, where the execution instructions are stored in a readable storage medium. At least one processor of the electronic device may read the execution instructions from the readable storage medium, and execution of the instructions by the at least one processor causes the electronic device to implement the above operation method.
Embodiments disclosed herein may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the present application may be implemented as a computer program or program code that is executed on a programmable system including at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For purposes of this application, a processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), microcontroller, application Specific Integrated Circuit (ASIC), or microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. Program code may also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in the present application are not limited in scope to any particular programming language. In either case, the language may be a compiled or interpreted language.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed over a network or through other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including but not limited to floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or a tangible machine-readable memory used to transmit information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some structural or methodological features may be shown in a particular arrangement and/or order. However, it should be understood that such a particular arrangement and/or order may not be required. Rather, in some embodiments, these features may be arranged in a manner and/or order different from that shown in the illustrative figures. Additionally, the inclusion of structural or methodological features in a particular figure does not imply that such features are required in all embodiments; in some embodiments, these features may not be included or may be combined with other features.
It should be noted that, in the embodiments of the present application, each unit/module is a logic unit/module. Physically, one logic unit/module may be one physical unit/module, may be a part of one physical unit/module, or may be implemented by a combination of multiple physical units/modules; the physical implementation of the logic unit/module itself is not the most important aspect, and the combination of functions implemented by the logic unit/module is the key to solving the technical problem posed by the present application. Furthermore, to highlight the innovative part of the present application, the above-described device embodiments do not introduce units/modules that are less closely related to solving the technical problem presented by the present application, which does not indicate that the above-described device embodiments do not have other units/modules.
It should be noted that, in the examples and descriptions of this patent, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
While the present application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the present application.

Claims (27)

1. An operation method, applied to a first electronic device, comprising:
acquiring a first video and first information associated with the first video, wherein the first information comprises interface information of at least one display interface recorded in a screen recording process and identification information of a target control operated by a user on the at least one display interface;
playing the first video, and executing an operation based on the first information in the process of playing the first video.
2. The method of claim 1, wherein the operation is consistent with a user operation on the at least one display interface in the first video.
3. The method of claim 1, wherein the first video and the first information are stored as the same file or wherein the first video and the first information are stored as different files.
4. The method of any one of claims 1-3, wherein the first information further comprises a manner in which a user operates the target control on the at least one display interface.
5. The method of claim 4, wherein the executing an operation based on the first information comprises:
acquiring interface information of a first display interface in the first information, and determining a first interface to be operated of the first electronic device based on the interface information of the first display interface;
acquiring identification information of a target control operated by a user on the first display interface, and determining a control to be operated of the first interface to be operated based on the identification information of the target control operated by the user on the first display interface;
acquiring an operation mode of the user on the target control on the first display interface, and correspondingly operating the control to be operated based on the operation mode; and
acquiring interface information of a second display interface in the first information, and determining a second interface to be operated of the first electronic device based on the interface information of the second display interface.
6. The method according to claim 5, wherein in the case that the operation mode of the target control is a multi-state operation mode, the first information further includes auxiliary information corresponding to the operation mode.
7. The method of claim 6, wherein the multi-state operational mode comprises a typing operation and an adjustment operation, wherein the auxiliary information corresponding to the typing operation comprises typing content, and wherein the auxiliary information corresponding to the adjustment operation comprises an adjustment state.
8. The method of claim 1, wherein the interface information includes an interface name, an interface application name, and an interface activity name.
9. The method of any one of claims 1-3 or 6-8, wherein the first electronic device comprises a first display area and a second display area;
in the process of playing the first video by the first electronic device, the first display area displays a playing picture of the first video, and the second display area displays an operation process of executing the operation by the first electronic device.
10. The method of claim 9, wherein the first display region overlaps a partial region of the second display region or the second display region overlaps a partial region of the first display region.
11. The method of claim 9, wherein the second display area is displayed in a split screen with the first display area.
12. The method of any one of claims 1-3, 6-8 or 10-11, wherein a third electronic device is connected to the first electronic device, the third electronic device displays a playing picture of the first video, and the first electronic device displays an operation process of the first electronic device performing the operation.
13. The method of any one of claims 1-3, 6-8 or 10-11, wherein a third electronic device is connected to the first electronic device, the third electronic device displays an operation process of the first electronic device performing the operation, and the first electronic device displays a playing picture of the first video.
14. The method of any one of claims 1-3, 6-8 or 10-11, wherein a third electronic device is connected to the first electronic device, and the first electronic device displays an operation process of the first electronic device performing the operation and/or a playing picture of the first video;
the third electronic device comprises a third display area and a fourth display area, the third display area displays the display content of the first electronic device, and the fourth display area displays the operation process or the playing picture of the first video.
15. The method of any one of claims 1-3, 6-8 or 10-11, further comprising: sending prompt information to a display interface of the first electronic device when a play exit instruction is received and the operation is detected to be incomplete.
16. The method of any one of claims 1-3, 6-8 or 10-11, wherein a plurality of time points are marked in the first video, each time point corresponding to a part of the user's operations during the screen recording.
17. The method as recited in claim 16, further comprising: fixing together the associated time points among the plurality of time points, so that the video corresponding to the associated time points is played continuously.
18. The method according to claim 17, wherein when the first video is played to a target time point, the playing of the first video is stopped if the operation corresponding to the target time point is not completed;
and after the operation corresponding to the target time point is completed, the playing of the first video continues.
19. The method of any one of claims 1-3, 6-8, 10-11 or 17-18, further comprising: editing the first video to obtain a second video.
20. The method of claim 19, wherein the editing process comprises a clipping process and/or a splitting process and/or a recombination process.
21. The method of any one of claims 1-3, 6-8, 10-11 or 17-18, wherein the first electronic device stores the first video and the first information in a MOV format, an EXIF format, or a compressed package format.
22. The method of claim 1, wherein the executing an operation based on the first information in the process of playing the first video comprises:
detecting a play instruction, playing the first video, adding an operation corresponding to a first time point into a task queue when the first video is played to the first time point, and executing the operation in the task queue based on the first information;
and when the first video is played to a second time point, adding the operation corresponding to the second time point into the task queue.
23. An operation method, applied to a second electronic device, comprising:
detecting a screen recording instruction;
starting screen recording, and generating a first video and first information, wherein the first information comprises interface information of at least one display interface recorded in the screen recording process and identification information of a target control operated by a user on the at least one display interface, and the first information is usable by an electronic device to execute a corresponding operation based on the first information in the process of playing the first video.
24. The method of claim 23, wherein the second electronic device transmits the first video and the first information to the first electronic device.
25. An electronic device, comprising: a memory configured to store instructions for execution by one or more processors of the electronic device; and a processor, which is one of the one or more processors of the electronic device, configured to perform the operation method of any one of claims 1-24.
26. A readable storage medium having stored thereon instructions that, when executed on an electronic device, cause the electronic device to perform the operation method of any one of claims 1-24.
27. A computer program product, comprising executable instructions stored in a readable storage medium, wherein at least one processor of an electronic device reads the executable instructions from the readable storage medium, and execution of the instructions by the at least one processor causes the electronic device to implement the operation method of any one of claims 1-24.
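The record-and-replay mechanism of claims 1 and 5 can be illustrated with a minimal Python sketch. All names here (`OperationRecord`, `StubDevice`, the field names) are hypothetical: the claims only require that the first information carry interface information, control identification information, and an operation manner, and that playback drives corresponding operations on the device.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical schema for one entry of the "first information".
@dataclass
class OperationRecord:
    interface_name: str              # interface information of a display interface
    control_id: str                  # identification information of the target control
    mode: str                        # manner of operating the control, e.g. "tap", "type"
    auxiliary: Optional[str] = None  # auxiliary info for multi-state modes (claims 6-7)

class StubDevice:
    """Illustrative stand-in for the first electronic device; logs each action."""
    def __init__(self):
        self.log = []

    def navigate_to(self, interface_name):
        # determine the interface to be operated from the interface information
        self.log.append(("open", interface_name))

    def operate(self, control_id, mode, auxiliary):
        # operate the control to be operated in the recorded manner
        self.log.append((mode, control_id, auxiliary))

def replay(records, device):
    """Execute each recorded operation in order while the first video plays."""
    for rec in records:
        device.navigate_to(rec.interface_name)
        device.operate(rec.control_id, rec.mode, rec.auxiliary)

records = [
    OperationRecord("settings", "wifi_switch", "tap"),
    OperationRecord("search", "search_box", "type", auxiliary="hello"),
]
device = StubDevice()
replay(records, device)
print(device.log)
# [('open', 'settings'), ('tap', 'wifi_switch', None),
#  ('open', 'search'), ('type', 'search_box', 'hello')]
```

The typed content "hello" stands in for the auxiliary information of claim 7; a real implementation would resolve `control_id` against the platform's accessibility or UI-automation layer rather than a log.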
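The task-queue playback of claim 22 can likewise be sketched under stated assumptions: a hypothetical mapping from marked time points to operation labels, where each operation is enqueued as the playback position reaches its time point and queued operations are executed in order. The tick granularity and names are illustrative only.

```python
from collections import deque

# Hypothetical mapping of marked time points (seconds) to operation labels.
marked_ops = {3.0: "tap wifi_switch", 7.5: "type search_box"}

def play(duration, tick=0.5):
    """Simulated playback loop: when the playback position reaches a marked
    time point, the corresponding operation joins the task queue; queued
    operations are then executed (here, just collected) in order."""
    queue = deque()
    executed = []
    pending = sorted(marked_ops)  # time points not yet reached
    t = 0.0
    while t <= duration:
        while pending and pending[0] <= t:
            queue.append(marked_ops[pending.pop(0)])  # enqueue at its time point
        while queue:
            executed.append(queue.popleft())          # execute based on first information
        t += tick
    return executed

print(play(10))  # ['tap wifi_switch', 'type search_box']
print(play(2))   # [] - no marked time point reached yet
```

Decoupling "reached a time point" from "executed the operation" via a queue matches claim 18's behavior as well: playback can pause while the queue drains and resume once the operation at the target time point completes.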
CN202210995317.8A 2022-08-18 2022-08-18 Operation method, electronic equipment and medium Pending CN117631919A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210995317.8A CN117631919A (en) 2022-08-18 2022-08-18 Operation method, electronic equipment and medium
PCT/CN2023/112297 WO2024037421A1 (en) 2022-08-18 2023-08-10 Operation method, electronic device, and medium

Publications (1)

Publication Number Publication Date
CN117631919A true CN117631919A (en) 2024-03-01

Family

ID=89940716

Country Status (2)

Country Link
CN (1) CN117631919A (en)
WO (1) WO2024037421A1 (en)

Also Published As

Publication number Publication date
WO2024037421A1 (en) 2024-02-22

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination