CN116069429A - Application processing method, device, electronic equipment and medium

Info

Publication number: CN116069429A
Application number: CN202310115429.4A
Authority: CN
Original language: Chinese (zh)
Inventor: 耿胜恩
Applicant/Assignee: Vivo Mobile Communication Co Ltd
Legal status: Pending
Prior art keywords: application, dynamic effect, scene, input, dynamic


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 Selection of displayed objects or displayed text elements

Abstract

The application discloses an application processing method and apparatus, an electronic device, and a readable storage medium, belonging to the field of electronic technology. The method includes: acquiring a first dynamic effect in a first application, where the first dynamic effect is applied in a first scene of the first application; outputting an identifier of the first dynamic effect, where the identifier of the first dynamic effect is used to indicate the first dynamic effect; receiving a first input on the identifier of the first dynamic effect and on a second application; in response to the first input, acquiring a second scene of the second application that matches the first scene; and applying the first dynamic effect in the second scene.

Description

Application processing method, device, electronic equipment and medium
Technical Field
The application belongs to the technical field of electronics, and particularly relates to an application processing method, an application processing device, electronic equipment and a readable storage medium.
Background
At present, an electronic device contains a large number of applications, and different applications can realize different dynamic effects based on the designs of their manufacturers. For example, in one application, when the user slides right on the screen, the current page returns to the previous page; this page-switching manner is a dynamic effect based on the user's slide. For another example, in another application, when the user slides up or down on the screen, the list content switches by scrolling smoothly and rebounds when it bottoms out; the smooth scrolling of the list content is likewise a dynamic effect based on the user's slide.
In the related art, a user encounters the following problem when using an electronic device: the user frequently uses a certain application and is familiar with its dynamic effects, but when using other applications, the user feels unfamiliar because different applications provide different dynamic effects in the same scene. This affects the convenience of use and, in turn, the user experience.
It can be seen that, in the related art, different applications providing different dynamic effects in the same scene makes them inconvenient for users to use.
Disclosure of Invention
An object of the embodiments of the present application is to provide an application processing method that can solve the problem in the related art that different applications providing different dynamic effects in the same scene makes them inconvenient for users to use.
In a first aspect, an embodiment of the present application provides an application processing method, the method including: acquiring a first dynamic effect in a first application, where the first dynamic effect is applied in a first scene of the first application; outputting an identifier of the first dynamic effect, where the identifier of the first dynamic effect is used to indicate the first dynamic effect; receiving a first input on the identifier of the first dynamic effect and on a second application; in response to the first input, acquiring a second scene of the second application that matches the first scene; and applying the first dynamic effect in the second scene.
In a second aspect, an embodiment of the present application provides an application processing apparatus, including: a first acquisition module, configured to acquire a first dynamic effect in a first application, where the first dynamic effect is applied in a first scene of the first application; an output module, configured to output an identifier of the first dynamic effect, where the identifier of the first dynamic effect is used to indicate the first dynamic effect; a first receiving module, configured to receive a first input on the identifier of the first dynamic effect and on a second application; a second acquisition module, configured to acquire, in response to the first input, a second scene of the second application that matches the first scene; and an application module, configured to apply the first dynamic effect in the second scene.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
Thus, in the embodiments of the present application, the first dynamic effect in the first application, whose specific content includes the user's operation manner and the manner in which the page content is updated, can be acquired separately, and an identifier of the first dynamic effect can be output, where the specific content of the first dynamic effect can be displayed based on that identifier. Further, the user can perform a first input on the identifier of the first dynamic effect and on a second application, so that when the first scene to which the first dynamic effect applies matches a second scene of the second application, for example when both are scenes of switching to an upper-level page, the first dynamic effect is applied in the second scene. The user can therefore operate in the same scene of different applications in the same manner, and the page content is updated in the same way. Based on the embodiments of the present application, dynamic effects can thus be copied and pasted between different applications, so that a user of multiple applications only needs to be familiar with one uniform set of dynamic effects to operate smoothly in all of them, without having to adapt to different dynamic effects in the same scene of different applications, thereby simplifying user operation and improving user experience.
Drawings
FIG. 1 is a flowchart of an application processing method according to an embodiment of the present application;
FIG. 2 is a first display schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 3 is a second display schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 4 is a third display schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 5 is a fourth display schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 6 is a block diagram of an application processing apparatus according to an embodiment of the present application;
FIG. 7 is a first schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
FIG. 8 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be described clearly below with reference to the accompanying drawings of the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and the claims are used to distinguish between similar objects and not necessarily to describe a particular order or sequence. It should be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. Furthermore, in the description and the claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The execution body of the application processing method provided by the embodiments of the present application may be the application processing apparatus provided by the embodiments of the present application, or an electronic device integrated with the application processing apparatus, where the application processing apparatus may be implemented in hardware or software.
The application processing method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 shows a flowchart of an application processing method according to an embodiment of the present application. Taking the method being applied to an electronic device as an example, the method includes:
step 110: acquiring a first dynamic effect in a first application; wherein the first dynamic effect is applied in a first scene of the first application.
Optionally, the first application is any application in the electronic device.
Optionally, the first dynamic effect is any dynamic effect in the first application.
A dynamic effect is the combination of a specified user operation manner and the specified manner in which the first application updates the page content in response.
For example, when the user slides rightward, the first-level sub-page switches back to the main page in a sliding manner; the above user operation combined with the switching effect forms a dynamic effect, which can serve as the first dynamic effect in this embodiment.
Correspondingly, the first dynamic effect is applied in the first scene.
In the first scene, the first application realizes an update of its content based on the first dynamic effect.
For example, the scene in which the first-level sub-page switches back to the main page in a sliding manner, completing a content update of the first application, is the first scene.
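To make this structure concrete, the following Kotlin sketch (illustrative only; the names and types are assumptions, not the patent's implementation) models a dynamic effect as the pairing of a user gesture with a page-update transition, together with the scene it applies to:

```kotlin
// Illustrative sketch only; the names and types are assumptions, not the patent's API.
// A "dynamic effect" pairs a user operation manner with the manner in which
// the page content is updated; a scene describes which content update it accomplishes.

enum class Gesture { SWIPE_RIGHT, SWIPE_LEFT, SWIPE_UP, SWIPE_DOWN, TAP, LONG_PRESS }

enum class Transition { SLIDE_HORIZONTAL, SLIDE_VERTICAL, FADE, SMOOTH_SCROLL_WITH_REBOUND }

// e.g. fromPageLevel = 1, toPageLevel = 0: a first-level sub-page returns to the main page.
data class Scene(val fromPageLevel: Int, val toPageLevel: Int, val description: String)

data class DynamicEffect(
    val id: String,            // the identifier output in step 120
    val gesture: Gesture,      // the user's operation manner
    val transition: Transition // the page-content update manner
)

// The example from the text: swiping right slides the first-level sub-page back to the main page.
val firstEffect = DynamicEffect("effect-1", Gesture.SWIPE_RIGHT, Transition.SLIDE_HORIZONTAL)
val firstScene = Scene(fromPageLevel = 1, toPageLevel = 0, description = "return to main page")
```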
Step 120: outputting an identifier of the first dynamic effect; the identifier of the first dynamic effect is used to indicate the first dynamic effect.
Optionally, the identifier of the first dynamic effect may be a carrier that visually plays the first dynamic effect.
For example, the identifier of the first dynamic effect is a pop-up window, in which the first dynamic effect is played as a video or an animated picture.
For another example, the identifier of the first dynamic effect is a split-screen region, in which the first dynamic effect is played as a video or an animated picture.
For another example, the identifier of the first dynamic effect is an interface, in which the first dynamic effect is played as a video or an animated picture.
Optionally, while the first dynamic effect is played, not only the update process of the page content but also the user's operation trace may be played, such as the sliding trace and sliding direction of the user's slide on the screen.
Step 130: receiving a first input on the identifier of the first dynamic effect and on a second application.
The first input includes a touch input performed by the user on the screen and is not limited to a click, slide, or drag. The first input may also be an air input by the user, such as a gesture action or a facial action, and further includes the user's input on a physical key of the device, not limited to a press. Moreover, the first input includes one or more inputs, where multiple inputs may be continuous or separated in time.
In this step, the first input is used to select the identifier of the first dynamic effect and the second application, so as to apply the first dynamic effect of the first application in a scene of the second application.
Optionally, when the identifier of the first dynamic effect is output, an icon of the second application is displayed in association with the identifier of the first dynamic effect, so that the first input can be performed.
For example, the user clicks the icon of the second application.
Optionally, the number of second applications is at least one.
Step 140: in response to the first input, a second scene of a second application that matches the first scene is acquired.
In this step, a second scene of the second application is acquired, where the second scene matches the first scene.
Optionally, in this step, matching is interpreted as follows: the two scenes both implement the same kind of update of application content.
For example, the first scene is switching from a sub-page of some level to the page one level above it, that is, the updated application content is the upper-level page content. Correspondingly, the matched second scene is also switching from a lower-level page to the page one level above it, so as to update the upper-level page content.
In this embodiment, the first application and the second application are different applications, so as to implement cross-application copy and paste of dynamic effects.
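As a minimal illustration of this matching rule (an assumption for exposition; the patent does not define a concrete algorithm), two scenes can be treated as matching when they describe the same content update, for example the same pair of adjacent page levels:

```kotlin
// Illustrative sketch; the matching rule is inferred from the examples in the text.
// Reuses the Scene type from the sketch after step 110.

fun matches(first: Scene, second: Scene): Boolean =
    // Both scenes implement the same content update: returning from a page
    // at some level to the page one level above it.
    first.fromPageLevel == second.fromPageLevel &&
        first.toPageLevel == second.toPageLevel

fun findMatchingScenes(firstScene: Scene, candidates: List<Scene>): List<Scene> =
    candidates.filter { matches(firstScene, it) }
```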
Step 150: the first dynamic effect is applied in the second scene.
In this step, the first dynamic effect can be applied in the second scene based on the matching relationship between the first scene and the second scene.
For example, in the first application, the user returns from the next-level page to the previous-level page by sliding right, and the switch between pages is displayed with a left-right sliding effect. Before this dynamic effect is applied to the same scene of the second application, the user has to click a return key in the second application to jump to the upper-level page; after the dynamic effect is applied to that scene of the second application, the user can directly slide right on the screen, and the current page switches to the upper-level page in a left-right sliding manner.
Thus, in the embodiments of the present application, the first dynamic effect in the first application, whose specific content includes the user's operation manner and the manner in which the page content is updated, can be acquired separately, and an identifier of the first dynamic effect can be output, where the specific content of the first dynamic effect can be displayed based on that identifier. Further, the user can perform a first input on the identifier of the first dynamic effect and on a second application, so that when the first scene to which the first dynamic effect applies matches a second scene of the second application, for example when both are scenes of switching to an upper-level page, the first dynamic effect is applied in the second scene. The user can therefore operate in the same scene of different applications in the same manner, and the page content is updated in the same way. Based on the embodiments of the present application, dynamic effects can thus be copied and pasted between different applications, so that a user of multiple applications only needs to be familiar with one uniform set of dynamic effects to operate smoothly in all of them, without having to adapt to different dynamic effects in the same scene of different applications, thereby simplifying user operation and improving user experience.
In the flow of the application processing method according to another embodiment of the present application, step 110 includes:
Substep A1: receiving a second input to the first application.
The second input includes a touch input performed by the user on the screen and is not limited to a click, slide, or drag. The second input may also be an air input by the user, such as a gesture action or a facial action, and further includes the user's input on a physical key of the device, not limited to a press. Moreover, the second input includes one or more inputs, where multiple inputs may be continuous or separated in time.
In this step, the second input is used to trigger entry into the extraction recording mode, so as to record, in that mode, a first video including the first dynamic effect.
For example, the user opens the first application, pulls down a menu, and selects the "extraction recording mode" option in the menu, thereby entering the extraction recording mode. Upon entering the extraction recording mode, a designated mark is displayed on the screen to indicate that the device is currently in this mode.
Substep A2: in response to the second input, updating content in the first application according to the first dynamic effect associated with the second input, and recording a first video during the content update.
In this step, in the extraction recording mode, the user operates in the first application in the operation manner associated with the first dynamic effect, and the first application updates its content in the update manner associated with the first dynamic effect.
Optionally, after the first application completes the content update, the user turns off the extraction recording mode.
For example, the user pulls down a menu and selects the "extraction recording mode" option in it, thereby exiting the extraction recording mode. After the mode is exited, the designated mark displayed on the screen disappears, indicating that the device is no longer in the extraction recording mode.
The input for exiting the extraction recording mode described above may be included in the second input.
Alternatively, after the recording of one dynamic effect is completed, the extraction recording mode is exited automatically, and the designated mark displayed on the screen disappears at the same time.
For example, in the extraction recording mode, the user slides right on the screen, and the first application switches from the current page to the previous page in a left-right sliding manner; after the user operation and the page-update process are recorded in the background, the extraction recording mode is exited automatically once the recording of one dynamic effect is recognized as complete.
Optionally, the first video is a screen-recorded video.
Substep A3: acquiring the first dynamic effect in the first application according to the first video, and acquiring the first scene to which the first dynamic effect applies.
In this step, the first dynamic effect in the first video is acquired, and the first scene in the video is acquired.
Optionally, the first scene is acquired at a finer granularity.
For example, in the first dynamic effect, after the user slides right, the page returns to the upper level; this can be further divided into whether a first-level sub-page returns to the main page or a second-level sub-page returns to a first-level sub-page. The acquired first scene can thus be refined to the scene of which level of page returns to which level of page, so that after the first dynamic effect is applied to the second application, it is used only for the return between the specified two adjacent page levels, and not for returns between other adjacent levels.
Optionally, when the first scene is acquired, options may be displayed for the user to select according to a specific scene division rule, so as to acquire the first scene at a fine granularity.
In this embodiment, a method for copying the first dynamic effect is provided: the first dynamic effect in the first application is recorded as a video, and the first dynamic effect and the first scene are then acquired from the recorded video, thereby copying the first dynamic effect. The copying method provided by this embodiment is simple to operate, acquires the effect intelligently, and copies it accurately.
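The following Kotlin sketch (hypothetical; the patent does not specify an implementation) outlines the extraction-recording flow of substeps A1 to A3 as a simple state machine: the mode is entered, frames are captured while the content updates, and the dynamic effect and its scene are produced on exit:

```kotlin
// Hypothetical sketch of the extraction recording mode (substeps A1-A3).
// Reuses DynamicEffect, Gesture, Transition, and Scene from the sketch after
// step 110; the frame capture is a stand-in, since the text analyzes a
// recorded screen video.

class ExtractionRecorder {
    private val frames = mutableListOf<String>() // stand-in for recorded video frames
    var recording = false
        private set

    fun enter() {                 // substep A1: the second input enters the mode;
        recording = true          // a designated mark would be shown on screen
        frames.clear()
    }

    fun capture(frameDescription: String) { // substep A2: record while content updates
        if (recording) frames += frameDescription
    }

    // Substep A3: derive the dynamic effect and its scene from the recording.
    fun exit(): Pair<DynamicEffect, Scene> {
        recording = false         // the designated mark disappears
        // A real implementation would analyze the recorded video; here we return
        // the swipe-right "return to main page" example from the text.
        return DynamicEffect("effect-1", Gesture.SWIPE_RIGHT, Transition.SLIDE_HORIZONTAL) to
            Scene(fromPageLevel = 1, toPageLevel = 0, description = "return to main page")
    }
}
```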
In the flow of the application processing method according to another embodiment of the present application, step 110 includes:
Substep B1: outputting identifiers of N dynamic effects in the first application; where N is a positive integer.
Optionally, N regions are displayed separately, and each region is used to output the identifier of one dynamic effect.
Optionally, the identifiers of the N dynamic effects are output in turn within one region.
Optionally, all the dynamic effects involved in the first application are acquired, and the identifier of each dynamic effect is output.
Optionally, for the form of a dynamic effect's identifier, refer to the identifier of the first dynamic effect described above.
Substep B2: receiving a third input on the identifier of the first dynamic effect; where the identifiers of the N dynamic effects include the identifier of the first dynamic effect.
The third input includes a touch input performed by the user on the screen and is not limited to a click, slide, or drag. The third input may also be an air input by the user, such as a gesture action or a facial action, and further includes the user's input on a physical key of the device, not limited to a press. Moreover, the third input includes one or more inputs, where multiple inputs may be continuous or separated in time.
In this step, the third input is used to select the identifier of the first dynamic effect from the identifiers of the N dynamic effects.
For example, an input such as a click, a long press, or a drag is performed on the identifier of the first dynamic effect.
Substep B3: in response to the third input, acquiring the first dynamic effect, and acquiring the first scene to which the first dynamic effect applies.
In this step, the first dynamic effect is acquired based on the content played in its identifier, and the first scene is acquired.
In this embodiment, a method is provided for automatically acquiring the dynamic effects in the first application and automatically outputting their identifiers, so that the user selects the identifier of the first dynamic effect from the output identifiers; the user has a wide choice and the operation is simple.
In the flow of the application processing method according to another embodiment of the present application, the first input includes a first sub-input, and step 140 includes:
Substep C1: in response to the first sub-input to the second application, updating content in the second application according to the second dynamic effect associated with the first sub-input, and recording a second video during the content update.
In this step, the first sub-input is used to trigger entry into the effect application mode, so as to record, in that mode, a second video including the second scene.
For example, the first dynamic effect is played in an interface to output its identifier, and a "local application" option is displayed in the interface. The user clicks the "local application" option, a plurality of application icons are displayed, and the user selects the icon of the second application and jumps to the interface of the second application. The user then pulls down a menu and selects the "effect application mode" option to enter the effect application mode. In this mode, the user operates in the second application based on an operation manner supported by the second application, and in response to the operation, content is updated in the second application based on an update manner supported by the second application, namely the second dynamic effect. Further, after the second application completes the content update, the user pulls down the menu, selects the "effect application mode" option again, and closes the effect application mode.
In practice, referring to fig. 2, after the effect application mode is entered, the screen displays a "start" mark 201 to indicate that the device is currently in the effect application mode. The user clicks the "settings" page, clicks the "mobile network" option to enter the "mobile network" page, clicks the "return" option in the "mobile network" page, and returns to the "settings" page. Referring to fig. 3, when the effect application mode is exited, the screen displays an "end" mark 301 to indicate that the mode has been exited. The whole page-switching process above is thus recorded.
The inputs referred to above are all included in the first sub-input.
Optionally, the effect application mode is exited automatically after the recording of one scene is recognized as complete.
Optionally, the second video is a screen-recorded video.
Substep C2: acquiring, according to the second video, a second scene of the second application that matches the first scene.
In this step, a scene in the second video is acquired and then matched against the first scene; if the matching succeeds, the scene in the second video is acquired as the second scene.
For example, the first scene is a scene of returning to the previous-level page, and the second video includes a scene of returning to the previous-level page, so the matching succeeds.
More precisely, the first scene and the second scene are both the scene of returning from a sub-page of some level to the adjacent upper-level page.
In this embodiment, a method for acquiring the second scene is provided: a video is recorded to capture a scene in the second application, the captured scene is matched against the first scene, and it is acquired if the matching succeeds. This embodiment is suitable for one-to-one copy and paste of a dynamic effect and a scene, and meets users' personalized requirements.
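A possible wiring of substep C2 to the matching rule sketched earlier (hypothetical; the video-analysis step is left unspecified in the text) could look like this:

```kotlin
// Hypothetical sketch of substep C2, reusing Scene and matches() from the
// earlier sketches: scenes captured from the second video are matched against
// the first scene, and a successful match yields the second scene.

fun acquireSecondScene(firstScene: Scene, scenesInSecondVideo: List<Scene>): Scene? =
    scenesInSecondVideo.firstOrNull { matches(firstScene, it) }

fun demoAcquire() {
    val firstScene = Scene(fromPageLevel = 1, toPageLevel = 0, description = "return to main page")
    val capturedScenes = listOf(
        Scene(2, 1, "second-level sub-page back to first-level sub-page"),
        Scene(1, 0, "settings sub-page back to settings main page")
    )
    // Matches the level-1 to level-0 scene, which becomes the second scene.
    println(acquireSecondScene(firstScene, capturedScenes))
}
```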
In the application processing method of another embodiment of the present application, a dynamic effect is not limited to being pasted into a specified scene; correspondingly, the dynamic effect can be pasted directly into an application.
Optionally, the number of second applications is at least one, so that scenes matching the first scene are acquired in all the second applications and can each serve as a second scene.
For example, the first dynamic effect is played in an interface to output its identifier, and at the same time a "batch application" option is displayed in the interface. The user clicks the "batch application" option, and the interface jumps to one that displays a plurality of application icons; see fig. 4. Each application icon is in a selectable state, for example a "circle" mark 402 is displayed on the icon 401. The user selects an application icon, for example by clicking the "circle" mark so that a check mark is generated inside it, making the corresponding application a second application. After all desired application icons are selected, the user clicks the "finish" option 403 provided in the interface, so that a second scene is acquired in each second application.
For another example, the first dynamic effect is played in an interface to output its identifier, and at the same time a "batch application" option is displayed in the interface. The user clicks the "batch application" option; referring to fig. 5, the first dynamic effect is played in the first split-screen area 501, and a plurality of application icons are displayed in the second split-screen area 502. The user long-presses an application icon and drags it to the third split-screen area 503; on this basis, a plurality of application icons can be dragged to the third split-screen area 503, and the applications corresponding to all the application icons in the third split-screen area 503 serve as second applications, so that a second scene is acquired in each second application.
Optionally, in this embodiment, in the process of applying the first dynamic effect to the second scenes, that is, during pasting, each application being pasted has its icon displayed in association with the identifier of the first dynamic effect, and a progress bar is displayed on the application icon to indicate the pasting progress; after the first dynamic effect has been pasted to all the matching scenes of the application, a completion mark is displayed on the application icon.
For example, the first dynamic effect applies to the scene of returning to the previous-level page, and a certain second application includes ten matched second scenes, such as the first-level sub-page returning to the main page, the second-level sub-page returning to the first-level sub-page, and so on. Each time the dynamic-effect pasting of one scene is completed, the progress bar updates once: it displays "one tenth", then "two tenths", and so on. Further, after the dynamic-effect pasting of all the scenes is completed, a completion mark is displayed; if the pasting of all the scenes cannot be completed, an incomplete mark is displayed.
Optionally, the marks are, for example, text marks, symbol marks, and the like.
For example, referring to fig. 5, the icon 504 of a second application whose dynamic effect is being pasted is displayed in the first split-screen area 501, a progress bar is displayed during pasting, and a check mark is displayed after pasting finishes; if the pasting cannot be completed, a cross mark is displayed.
Further, referring to fig. 5, the user performs an input on the icon 504 of the second application within the first split-screen area 501, and the icon is no longer displayed.
In this embodiment, an implementation of pasting the first dynamic effect to multiple scenes of one application or multiple scenes of multiple applications is provided, enriching the embodiments of the present application.
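A minimal sketch of the per-application progress bookkeeping described above (an illustration; it assumes the displayed fraction is simply completed scenes over matched scenes):

```kotlin
// Illustrative sketch of the pasting-progress display; the fraction format
// follows the "one tenth", "two tenths" example above.

class PasteProgress(private val totalMatchedScenes: Int) {
    private var completed = 0

    fun onScenePasted(): String {               // called after each scene is pasted
        completed += 1
        return "$completed/$totalMatchedScenes" // label shown on the progress bar
    }

    fun finalMark(): String =
        if (completed == totalMatchedScenes) "check mark" // all scenes pasted
        else "cross mark"                                 // pasting incomplete
}

fun main() {
    val progress = PasteProgress(totalMatchedScenes = 10)
    println(progress.onScenePasted()) // 1/10, i.e. "one tenth"
    println(progress.onScenePasted()) // 2/10, i.e. "two tenths"
    println(progress.finalMark())     // "cross mark": eight scenes remain
}
```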
In the application processing method of another embodiment of the present application, applications of a certain type, such as social applications, can be automatically screened according to application type, so that only social applications serve as second applications for completing the pasting of the dynamic effect; the user does not need to select a plurality of application icons one by one, which further simplifies the user operation.
In the flow of the application processing method according to another embodiment of the present application, step 150 includes:
Substep D1: in the second scene, updating content according to the first dynamic effect.
In the second scene, the parameters of the first dynamic effect include: the dynamic-effect curve, the dynamic-effect duration, and the dynamic-effect transparency.
Optionally, different dynamic effects contain different content-update manners and correspond to different user operation manners, so the parameters contained in different dynamic effects also differ.
Several parameters are given in this embodiment, but the parameters are not limited to the above.
Correspondingly, when the identifier of the first dynamic effect is output, an edit box is provided in which the user can modify the respective parameters.
For example, the first dynamic effect is played in an interface to output its identifier, and at the same time an edit box is displayed in the interface, with each parameter of the first dynamic effect shown in the edit box. After the user clicks the edit box, each parameter becomes editable and can be modified by the user.
In this embodiment, based on the parameters of the first dynamic effect, content is updated in the second scene of the second application, so as to implement the pasting of the dynamic effect and ensure the realizability of cross-application copy and paste of dynamic effects.
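As a concrete illustration of these parameters (a sketch under assumed names; the text names only the curve, the duration, and the transparency), a simple animation driver might look like this:

```kotlin
// Illustrative sketch of substep D1; EffectParams mirrors the three parameters
// named in the text, with the easing curve modeled as a function from
// normalized time to progress. All names are assumptions.

data class EffectParams(
    val curve: (Float) -> Float,   // dynamic-effect curve (easing function)
    val durationMs: Long,          // dynamic-effect duration
    val transparency: Float        // dynamic-effect transparency, in 0..1
)

fun runEffect(params: EffectParams, steps: Int = 5) {
    println("animating for ${params.durationMs} ms")
    for (i in 0..steps) {
        val t = i.toFloat() / steps                // normalized time, 0..1
        val progress = params.curve(t)             // position along the curve
        val alpha = params.transparency * progress // fade according to transparency
        println("t=%.2f progress=%.2f alpha=%.2f".format(t, progress, alpha))
    }
}

fun main() {
    // Ease-out curve, 300 ms, 80% peak transparency; the values are illustrative.
    val easeOut: (Float) -> Float = { t -> 1f - (1f - t) * (1f - t) }
    runEffect(EffectParams(curve = easeOut, durationMs = 300, transparency = 0.8f))
}
```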
In the flow of the application processing method according to another embodiment of the present application, before step 150, the method further includes:
Step E1: receiving a fourth input on the identifier of the first dynamic effect and the identifier of a third dynamic effect in a third application; where the identifier of the third dynamic effect is used to indicate the third dynamic effect.
The fourth input includes a touch input performed by the user on the screen and is not limited to a click, slide, or drag. The fourth input may also be an air input by the user, such as a gesture action or a facial action, and further includes the user's input on a physical key of the device, not limited to a press. Moreover, the fourth input includes one or more inputs, where multiple inputs may be continuous or separated in time.
In this step, the fourth input is used to select the identifier of the first dynamic effect and the identifier of the third dynamic effect.
For example, two pop-up windows play the first dynamic effect and the third dynamic effect respectively, and the user long-presses one pop-up window and drags it onto the other.
Optionally, the first application and the third application may be the same application, that is, the first dynamic effect and the third dynamic effect are two dynamic effects in one application.
Alternatively, the first application and the third application may be different applications.
Step E2: in response to the fourth input, respectively acquiring a first parameter included in the first dynamic effect and a second parameter included in the third dynamic effect.
Step E3: when the first parameter and the second parameter are both used to describe target attribute information, combining the first parameter and the second parameter to obtain a third parameter; where both the first dynamic effect and the third dynamic effect have the target attribute information.
In this step, the parameters of the first dynamic effect and the parameters of the third dynamic effect each include a parameter describing the target attribute information.
For example, the parameters of the first and third dynamic effects each include a dynamic-effect transparency, and the attribute information described by the two transparency parameters is the same.
Further, the two parameters are superimposed.
For example, two dynamic-effect transparencies are superimposed; for another example, two dynamic-effect accelerations are superimposed.
Step E4: updating the first parameter included in the first dynamic effect to the third parameter.
In this step, the superimposed parameter replaces the corresponding dynamic-effect parameter, and the resulting dynamic effect is applied in the scene.
Optionally, on the one hand, in combination with the embodiments of the present application, the superimposed parameter replaces the parameter of the first dynamic effect, and the first dynamic effect is then pasted into the scene. On the other hand, based on the user's input, the superimposed parameter may instead replace the parameter of the third dynamic effect, and the third dynamic effect is then pasted into the scene.
In this embodiment, a method for combining multiple dynamic effects is provided: the parameters describing the same attribute information in multiple dynamic effects are superimposed, and after the superimposed parameter updates the dynamic-effect parameter, the dynamic effect is applied in the scene, achieving the purpose of enhancing the dynamic effect while keeping the user operation simple.
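A minimal sketch of this superposition (illustrative; the patent does not define the combination operator, so simple addition with clamping is assumed here):

```kotlin
// Illustrative sketch of steps E2-E4: merging two parameters that describe
// the same target attribute (here, transparency). The "add then clamp" rule
// is an assumption; the text only says the two parameters are superimposed.

data class Param(val attribute: String, val value: Float)

fun superimpose(first: Param, second: Param): Param? {
    // Step E3 applies only when both parameters describe the same attribute.
    if (first.attribute != second.attribute) return null
    return Param(first.attribute, (first.value + second.value).coerceIn(0f, 1f))
}

fun main() {
    val firstParam = Param("transparency", 0.5f)  // first parameter (step E2)
    val secondParam = Param("transparency", 0.3f) // second parameter (step E2)
    val thirdParam = superimpose(firstParam, secondParam)
    // Step E4: the first effect's parameter is updated to the merged value.
    println(thirdParam) // Param(attribute=transparency, value=0.8)
}
```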
In summary, in the present application, the user first selects the dynamic effect of an application to copy. The specific operation is that the user opens the application, clicks to open a first-level sub-page, enters the extraction recording mode, slides right to switch back to the main page, and exits the extraction recording mode; screen recording is thereby completed in the extraction recording mode, and the dynamic effect is copied from the recorded video. The user then selects the application into which the dynamic effect is to be pasted. The specific operation is that the user opens another application, clicks to open a sub-page, enters the effect application mode, switches back to the main page according to the dynamic effect supported by that application, and exits the effect application mode; screen recording is thereby completed in the effect application mode, and the scene to be pasted is acquired from the recording. Finally, the copied dynamic effect is pasted into the acquired scene. Based on the above process, when the user performs the operation of returning from the first-level sub-page to the main page in the two applications, the operation manner is the same and the page-switching manner is the same.
The above describes one-to-one copy and paste based on one dynamic effect of one application and the same scene of another application; furthermore, one-to-many copy and paste can also be performed based on one dynamic effect of one application and the same or similar scenes of a plurality of applications. Thus, to realize quick copying and pasting of dynamic effects, a simple operation path is provided; the user does not need to be familiar with different dynamic effects of different applications, and the user operation is simplified.
In the application processing method provided by the embodiments of the present application, the execution subject may be an application processing apparatus. In the embodiments of the present application, the application processing apparatus provided by the embodiments of the present application is described by taking the case where the application processing apparatus executes the application processing method as an example.
Fig. 6 shows a block diagram of an application processing apparatus according to another embodiment of the present application, the apparatus comprising:
a first obtaining module 10, configured to obtain a first dynamic effect in a first application; wherein the first dynamic effect is applied in a first scene of the first application;
the output module 20 is configured to output an identifier of the first dynamic effect; the first dynamic effect identifier is used for indicating the first dynamic effect;
a first receiving module 30 for receiving a first input on the identifier of the first dynamic effect and on a second application;
a second obtaining module 40, configured to obtain, in response to the first input, a second scene of a second application that matches the first scene;
an application module 50 for applying the first dynamic effect in the second scenario.
Thus, in the embodiments of the present application, the first dynamic effect in the first application, whose specific content includes the user's operation manner and the manner in which the page content is updated, can be acquired separately, and an identifier of the first dynamic effect can be output, where the specific content of the first dynamic effect can be displayed based on that identifier. Further, the user can perform a first input on the identifier of the first dynamic effect and on a second application, so that when the first scene to which the first dynamic effect applies matches a second scene of the second application, for example when both are scenes of switching to an upper-level page, the first dynamic effect is applied in the second scene. The user can therefore operate in the same scene of different applications in the same manner, and the page content is updated in the same way. Based on the embodiments of the present application, dynamic effects can thus be copied and pasted between different applications, so that a user of multiple applications only needs to be familiar with one uniform set of dynamic effects to operate smoothly in all of them, without having to adapt to different dynamic effects in the same scene of different applications, thereby simplifying user operation and improving user experience.
Optionally, the first acquisition module 10 includes:
a first receiving unit for receiving a second input to the first application;
a first updating unit, configured to respond to the second input, update content in the first application according to a first dynamic effect associated with the second input, and record a first video in the process of updating the content;
the first obtaining unit is used for obtaining a first dynamic effect in the first application according to the first video and obtaining a first scene to which the first dynamic effect is applied.
Optionally, the first acquisition module 10 includes:
the output unit is used for outputting N dynamic effect identifiers in the first application; wherein N is a positive integer;
a second receiving unit for receiving a third input on the identifier of the first dynamic effect; where the identifiers of the N dynamic effects include the identifier of the first dynamic effect;
the second obtaining unit is used for responding to the third input, obtaining the first dynamic effect and obtaining a first scene to which the first dynamic effect is applied.
Optionally, the first input comprises a first sub-input; the second acquisition module 40 includes:
a second updating unit, configured to respond to a first sub-input to a second application, update content in the second application according to a second dynamic effect associated with the first sub-input, and record a second video in the process of updating the content;
and a third acquisition unit, configured to acquire, according to the second video, a second scene of the second application that matches the first scene.
Optionally, the application module 50 includes:
a third updating unit, configured to update content according to the first dynamic effect in the second scene;
wherein, in the second scene, the parameters of the first dynamic effect include: dynamic effect curve, dynamic effect duration and dynamic effect transparency.
Optionally, the apparatus further comprises:
the second receiving module is used for receiving a fourth input on the identifier of the first dynamic effect and the identifier of a third dynamic effect in a third application; where the identifier of the third dynamic effect is used to indicate the third dynamic effect;
the third acquisition module is used for responding to the fourth input and respectively acquiring a first parameter included in the first dynamic effect and a second parameter included in the third dynamic effect;
the merging module is used for merging the first parameter and the second parameter and obtaining a third parameter under the condition that the first parameter and the second parameter are used for describing the target attribute information; wherein, the first dynamic effect and the third dynamic effect both have target attribute information;
and the updating module is used for updating the first parameter included in the first dynamic effect into the third parameter according to the third parameter.
The application processing apparatus in the embodiments of the present application may be an electronic device, or may be a component in an electronic device, for example an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (Mobile Internet Device, MID), an augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, a robot, a wearable device, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, or a personal digital assistant (personal digital assistant, PDA), and may also be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a television (television, TV), a teller machine, a self-service machine, or the like; the embodiments of the present application are not specifically limited.
The application processing apparatus of the embodiments of the present application may be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The application processing device provided in the embodiment of the present application can implement each process implemented by the embodiment of the method, and in order to avoid repetition, details are not repeated here.
Optionally, as shown in fig. 7, the embodiment of the present application further provides an electronic device 100, including a processor 101, a memory 102, and a program or an instruction stored in the memory 102 and capable of running on the processor 101, where the program or the instruction implements each step of any one of the application processing method embodiments described above when executed by the processor 101, and the steps can achieve the same technical effect, and for avoiding repetition, a description is omitted herein.
The electronic devices in the embodiments of the present application include the mobile electronic devices and non-mobile electronic devices described above.
Fig. 8 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, and processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1010 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The processor 1010 is configured to acquire a first dynamic effect in a first application, where the first dynamic effect is applied in a first scene of the first application, and to output an identifier of the first dynamic effect, where the identifier of the first dynamic effect is used to indicate the first dynamic effect. The user input unit 1007 is configured to receive a first input on the identifier of the first dynamic effect and on a second application. The processor 1010 is further configured to acquire, in response to the first input, a second scene of the second application that matches the first scene, and to apply the first dynamic effect in the second scene.
Thus, in the embodiments of the present application, the first dynamic effect in the first application, whose specific content includes the user's operation manner and the manner in which the page content is updated, can be acquired separately, and an identifier of the first dynamic effect can be output, where the specific content of the first dynamic effect can be displayed based on that identifier. Further, the user can perform a first input on the identifier of the first dynamic effect and on a second application, so that when the first scene to which the first dynamic effect applies matches a second scene of the second application, for example when both are scenes of switching to an upper-level page, the first dynamic effect is applied in the second scene. The user can therefore operate in the same scene of different applications in the same manner, and the page content is updated in the same way. Based on the embodiments of the present application, dynamic effects can thus be copied and pasted between different applications, so that a user of multiple applications only needs to be familiar with one uniform set of dynamic effects to operate smoothly in all of them, without having to adapt to different dynamic effects in the same scene of different applications, thereby simplifying user operation and improving user experience.
Optionally, the user input unit 1007 is further configured to receive a second input to the first application. The processor 1010 is further configured to: in response to the second input, update content in the first application according to the first dynamic effect associated with the second input, and record a first video during the content update; and acquire, according to the first video, the first dynamic effect in the first application and the first scene to which the first dynamic effect applies.
Optionally, the processor 1010 is further configured to output identifiers of N dynamic effects in the first application, where N is a positive integer. The user input unit 1007 is further configured to receive a third input on the identifier of the first dynamic effect, where the identifiers of the N dynamic effects include the identifier of the first dynamic effect. The processor 1010 is further configured to acquire, in response to the third input, the first dynamic effect, and acquire the first scene to which the first dynamic effect applies.
Optionally, the first input includes a first sub-input. The processor 1010 is further configured to: in response to the first sub-input to the second application, update content in the second application according to the second dynamic effect associated with the first sub-input, and record a second video during the content update; and acquire, according to the second video, a second scene of the second application that matches the first scene.
Optionally, the processor 1010 is further configured to update, in the second scene, content according to the first dynamic effect, where, in the second scene, the parameters of the first dynamic effect include: the dynamic-effect curve, the dynamic-effect duration, and the dynamic-effect transparency.
Optionally, the user input unit 1007 is further configured to receive a fourth input on the identifier of the first dynamic effect and the identifier of a third dynamic effect in a third application, where the identifier of the third dynamic effect is used to indicate the third dynamic effect. The processor 1010 is further configured to: in response to the fourth input, respectively acquire a first parameter included in the first dynamic effect and a second parameter included in the third dynamic effect; when the first parameter and the second parameter are both used to describe target attribute information, combine the first parameter and the second parameter to obtain a third parameter, where both the first dynamic effect and the third dynamic effect have the target attribute information; and update the first parameter included in the first dynamic effect to the third parameter.
In summary, in the present application, the user first selects the dynamic effect of an application to copy. The specific operation is that the user opens the application, clicks to open a first-level sub-page, enters the extraction recording mode, slides right to switch back to the main page, and exits the extraction recording mode; screen recording is thereby completed in the extraction recording mode, and the dynamic effect is copied from the recorded video. The user then selects the application into which the dynamic effect is to be pasted. The specific operation is that the user opens another application, clicks to open a sub-page, enters the effect application mode, switches back to the main page according to the dynamic effect supported by that application, and exits the effect application mode; screen recording is thereby completed in the effect application mode, and the scene to be pasted is acquired from the recording. Finally, the copied dynamic effect is pasted into the acquired scene. Based on the above process, when the user performs the operation of returning from the first-level sub-page to the main page in the two applications, the operation manner is the same and the page-switching manner is the same.
The above describes one-to-one copy and paste based on one dynamic effect of one application and the same scene of another application; furthermore, one-to-many copy and paste can also be performed based on one dynamic effect of one application and the same or similar scenes of a plurality of applications. Thus, to realize quick copying and pasting of dynamic effects, a simple operation path is provided; the user does not need to be familiar with different dynamic effects of different applications, and the user operation is simplified.
It should be understood that, in the embodiments of the present application, the input unit 1004 may include a graphics processing unit (Graphics Processing Unit, GPU) 10041 and a microphone 10042, where the graphics processor 10041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 1006 may include a display panel 10061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen and may include two parts: a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 1009 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 1010 may integrate an application processor, which primarily handles the operating system, user interfaces, applications, and the like, and a modem processor, which primarily handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 1010.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, where the first storage area may store an operating system, an application program or instructions required for at least one function (such as a sound playing function or an image playing function), and the like. Furthermore, the memory 1009 may include a volatile memory or a nonvolatile memory, or the memory 1009 may include both volatile and nonvolatile memories. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
An embodiment of the present application further provides a readable storage medium storing a program or instructions which, when executed by a processor, implement each process of the above application processing method embodiment and achieve the same technical effect. To avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface coupled to the processor, and the processor is configured to run a program or instructions to implement each process of the above application processing method embodiment and achieve the same technical effect. To avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
An embodiment of the present application provides a computer program product stored in a storage medium. The program product is executed by at least one processor to implement each process of the above application processing method embodiment and achieve the same technical effect. To avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by means of hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative rather than restrictive. In light of the present application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (10)

1. An application processing method, the method comprising:
acquiring a first dynamic effect in a first application; wherein the first dynamic effect is applied in a first scene of the first application;
outputting the identification of the first dynamic effect; wherein the identification of the first dynamic effect is used for indicating the first dynamic effect;
receiving an identification of the first dynamic effect and a first input of a second application;
in response to the first input, obtaining a second scene of the second application that matches the first scene; and
applying the first dynamic effect in the second scene.
2. The method of claim 1, wherein the acquiring a first dynamic effect in the first application comprises:
receiving a second input to the first application;
in response to the second input, updating content in the first application according to the first dynamic effect associated with the second input, and recording a first video during the updating of the content;
and acquiring the first dynamic effect in the first application according to the first video, and acquiring the first scene to which the first dynamic effect is applied.
3. The method of claim 1, wherein the acquiring a first dynamic effect in the first application comprises:
outputting N dynamic effect identifiers in the first application; wherein N is a positive integer;
receiving a third input of an identification of the first dynamic effect; wherein the N dynamic effect identifiers comprise the first dynamic effect identifier;
in response to the third input, acquiring the first dynamic effect, and acquiring the first scene to which the first dynamic effect is applied.
4. The method of claim 1, wherein the first input comprises a first sub-input, and the obtaining, in response to the first input, a second scene of the second application that matches the first scene comprises:
in response to the first sub-input to the second application, updating content in the second application according to a second dynamic effect associated with the first sub-input, and recording a second video during the updating of the content;
and acquiring a second scene of the second application matched with the first scene according to the second video.
5. The method of claim 1, wherein the applying the first dynamic effect in the second scene comprises:
in the second scene, updating content according to the first dynamic effect;
wherein, in the second scene, the parameters of the first dynamic effect include: a dynamic effect curve, a dynamic effect duration, and a dynamic effect transparency.
6. The method of claim 1, wherein prior to the applying the first dynamic effect in the second scene, the method further comprises:
receiving a fourth input of the identification of the first dynamic effect and the identification of a third dynamic effect in a third application; wherein the identification of the third dynamic effect is used for indicating the third dynamic effect;
responding to the fourth input, and respectively acquiring a first parameter included in the first dynamic effect and a second parameter included in the third dynamic effect;
combining the first parameter and the second parameter to obtain a third parameter in the case that the first parameter and the second parameter are both used for describing target attribute information; wherein the first dynamic effect and the third dynamic effect both have the target attribute information;
and updating the first parameter included in the first dynamic effect to the third parameter.
7. An application processing apparatus, the apparatus comprising:
the first acquisition module is used for acquiring a first dynamic effect in a first application; wherein the first dynamic effect is applied in a first scene of the first application;
the output module is used for outputting the identification of the first dynamic effect; wherein the identification of the first dynamic effect is used for indicating the first dynamic effect;
a first receiving module for receiving an identification of the first dynamic effect and a first input of a second application;
a second obtaining module, configured to obtain, in response to the first input, a second scene of the second application that matches the first scene;
and the application module is used for applying the first dynamic effect to the second scene.
8. The apparatus of claim 7, wherein the first acquisition module comprises:
a first receiving unit for receiving a second input to the first application;
a first updating unit, configured to respond to the second input, update content in the first application according to the first dynamic effect associated with the second input, and record a first video in the process of updating content;
and the first obtaining unit is used for obtaining the first dynamic effect in the first application according to the first video, and obtaining the first scene to which the first dynamic effect is applied.
9. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the application processing method of any one of claims 1 to 6.
10. A readable storage medium, wherein a program or instructions is stored on the readable storage medium, which when executed by a processor, implements the steps of the application processing method according to any one of claims 1 to 6.
CN202310115429.4A 2023-02-13 2023-02-13 Application processing method, device, electronic equipment and medium Pending CN116069429A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310115429.4A CN116069429A (en) 2023-02-13 2023-02-13 Application processing method, device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310115429.4A CN116069429A (en) 2023-02-13 2023-02-13 Application processing method, device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN116069429A true CN116069429A (en) 2023-05-05

Family

ID=86171343

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310115429.4A Pending CN116069429A (en) 2023-02-13 2023-02-13 Application processing method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN116069429A (en)

Similar Documents

Publication Publication Date Title
CN111694490B (en) Setting method and device and electronic equipment
CN112148163A (en) Screen recording method and device and electronic equipment
CN113596555B (en) Video playing method and device and electronic equipment
CN113268182B (en) Application icon management method and electronic device
CN114518822A (en) Application icon management method and device and electronic equipment
CN112181252B (en) Screen capturing method and device and electronic equipment
CN113253883A (en) Application interface display method and device and electronic equipment
CN112306320A (en) Page display method, device, equipment and medium
CN113190365B (en) Information processing method and device and electronic equipment
CN113325986B (en) Program control method, program control device, electronic device and readable storage medium
CN114397989A (en) Parameter value setting method and device, electronic equipment and storage medium
CN115167721A (en) Display method and device of functional interface
CN114845171A (en) Video editing method and device and electronic equipment
CN114679546A (en) Display method and device, electronic equipment and readable storage medium
CN116069429A (en) Application processing method, device, electronic equipment and medium
CN113436297A (en) Picture processing method and electronic equipment
CN117354594A (en) Method and device for generating media content, electronic equipment and readable storage medium
CN113360073A (en) Information input method and information input device
CN117082056A (en) File sharing method and electronic equipment
CN115686285A (en) Page display method and device, electronic equipment and readable storage medium
CN117193919A (en) Display method, display device, electronic equipment and readable storage medium
CN115718553A (en) Application program starting method, device, equipment and readable storage medium
CN113835592A (en) Application function combination method and device and electronic equipment
CN116257154A (en) Display control method and device and electronic equipment
CN117707732A (en) Application switching method, device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination