CN115705241B - Application scheduling method and electronic equipment - Google Patents


Info

Publication number
CN115705241B
Authority
CN
China
Prior art keywords
application
service
camera
user
services
Prior art date
Legal status
Active
Application number
CN202210898089.2A
Other languages
Chinese (zh)
Other versions
CN115705241A (en)
Inventor
夏兵
张威
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202210898089.2A
Publication of CN115705241A
Application granted
Publication of CN115705241B
Status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/445: Program loading or initiating
    • G06F 9/44505: Configuring for program initiating, e.g. using registry, configuration files
    • G06F 9/44521: Dynamic linking or loading; link editing at or after load time, e.g. Java class loading
    • G06F 9/448: Execution paradigms, e.g. implementations of programming paradigms
    • G06F 9/4488: Object-oriented
    • G06F 9/449: Object-oriented method invocation or resolution

Abstract

The present application provides an application scheduling method and an electronic device. The method comprises the following steps: based on the preload services corresponding to a camera application, the electronic device may start those preload services when starting the camera application. The preload services of the camera application are determined based on the services invoked each time the user uses the camera application. The present application thus provides a service preloading method that can determine the preload services corresponding to the camera application based on the user's usage habits, so that when the camera application is started, the electronic device can start the AI service in advance, before the user triggers it. This shortens the service cold-start time, reduces the time the user waits for the service to start, and improves the user experience.

Description

Application scheduling method and electronic equipment
This application is a divisional application. The title of the original application is "Application scheduling method and electronic equipment", the application number of the original application is 202110917469.1, and the original filing date is August 10, 2021. The entire content of the original application is incorporated herein by reference.
Technical Field
The present application relates to the field of terminal devices, and in particular, to an application scheduling method and an electronic device.
Background
Currently, after an electronic device starts an application program in response to a received user operation, certain optional services are loaded only when the user explicitly triggers them with a further operation. As a result, some services respond slowly, degrading the user experience.
Disclosure of Invention
To solve the above problems, the present application provides an application scheduling method and an electronic device. According to the method, the electronic device can determine the preload services corresponding to an application based on the user's usage habits for that application, and automatically start those preload services after the application is started, thereby shortening the service cold-start time, reducing the user's waiting time, and improving the user experience.
In a first aspect, the present application provides an electronic device. The electronic device includes: one or more processors, a memory, and a fingerprint sensor; and one or more computer programs, wherein the one or more computer programs are stored on the memory and, when executed by the one or more processors, cause the electronic device to perform the following steps: in response to a received first user operation, starting a camera application; while the camera application is running, in response to a received operation of the user clicking a first AI option of the camera application, starting an artificial intelligence (AI) service; performing AI processing on a first image acquired by a camera based on the AI service; after the camera application is closed, in response to a received second user operation, restarting the camera application; while the camera application is running, in response to a received operation of the user clicking the first AI option, starting the AI service; performing AI processing on a second image acquired by the camera based on the AI service; determining that the preload services of the camera application include the AI service; after the camera application is closed again, in response to a received third user operation, starting the camera application and starting the preload services of the camera application; and in response to a received operation of the user clicking the first AI option, performing AI processing on a third image acquired by the camera based on the started AI service. In this way, the electronic device may learn the user's habits of using the camera application, based on the services invoked each time the camera application is run, so as to determine the preload services corresponding to the camera application.
Correspondingly, when the camera application is started, the electronic equipment can automatically load the pre-load service corresponding to the camera application, so that the service cold start time is shortened, the waiting time of a user is shortened, and the use experience of the user is improved.
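The preload determination described above can be sketched as follows. This is an illustrative reading of the scheme, not the patent's implementation; the service names and the session threshold are assumptions made for the example.

```python
# Illustrative sketch (not the patent's implementation): a service becomes a
# preload service of an app once the user has invoked it in every observed
# session, as with the AI service in the two camera sessions described above.
# All names and the session threshold are assumptions made for this example.

MIN_SESSIONS = 2  # require at least two sessions before any preload decision

def determine_preload_services(session_logs):
    """session_logs: one set of invoked service names per app session."""
    if len(session_logs) < MIN_SESSIONS:
        return set()  # not enough history to infer a habit
    # Keep only the services the user invoked in every session.
    return set.intersection(*session_logs)

# The user tapped the AI option in both camera sessions:
logs = [{"ai_service"}, {"ai_service"}]
print(determine_preload_services(logs))  # {'ai_service'}
```

On the next launch of the camera application, every service in the returned set would be started immediately, before the user taps the corresponding option.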
According to a first aspect, the computer program, when executed by one or more processors, causes the electronic device to perform the steps of: after the camera application is closed, first user habit information is sent to the server, wherein the first user habit information is used for indicating that the electronic equipment invokes the AI service when the camera application is operated. In this way, the server side can count the service called when the user uses the camera application based on the acquired user habit information sent by the electronic device, so as to acquire the use habit when the user uses the camera application, namely, determine the preloaded service corresponding to the camera application.
According to a first aspect, or any implementation of the first aspect above, the computer program, when executed by one or more processors, causes the electronic device to perform the steps of: and after the camera application is closed again, sending second user habit information to the server, wherein the second user habit information is used for indicating that the AI service is invoked when the electronic equipment runs the camera application again. In this way, the server side can count the service called when the user uses the camera application based on the acquired user habit information sent by the electronic device, so as to acquire the use habit when the user uses the camera application, namely, determine the preloaded service corresponding to the camera application.
According to a first aspect, or any implementation of the first aspect above, the computer program, when executed by one or more processors, causes the electronic device to perform the steps of: receiving first indication information sent by a server, wherein the first indication information is used for indicating that a preloading service of a camera application comprises an AI service; in response to receiving the first indication information, it is determined that the preloaded services of the camera application include AI services. In this way, the server may instruct the electronic device to automatically load the preload service when the camera application is started after determining the preload service corresponding to the camera application.
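The report-and-indicate exchange described in these implementations can be sketched as follows; the message shapes, field names, and counting threshold are illustrative assumptions, not the patent's protocol.

```python
# Illustrative sketch: the device sends "user habit information" after each
# session, and the server tallies service invocations and answers with
# "indication information" naming the preload services. All field names and
# the threshold are assumptions made for this example.

def build_habit_report(app, services_invoked):
    """Habit information the device would send after closing the app."""
    return {"app": app, "services": sorted(services_invoked)}

def server_indication(reports, app, min_count=2):
    """Server side: count invocations per service and indicate frequent ones."""
    counts = {}
    for report in reports:
        if report["app"] != app:
            continue
        for service in report["services"]:
            counts[service] = counts.get(service, 0) + 1
    preload = sorted(s for s, n in counts.items() if n >= min_count)
    return {"app": app, "preload_services": preload}

# Two camera sessions, each invoking the AI service:
reports = [
    build_habit_report("camera", {"ai_service"}),
    build_habit_report("camera", {"ai_service"}),
]
print(server_indication(reports, "camera"))
# {'app': 'camera', 'preload_services': ['ai_service']}
```

On receiving the indication, the device would store the listed services as the preload services of the camera application.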
According to the first aspect, or any implementation of the first aspect above, the computer program, when executed by the one or more processors, causes the electronic device to perform the following steps: after the camera application is closed, in response to a received fourth user operation, starting the camera application and starting the AI service; while the camera application is running, in response to a received operation of the user clicking a filter option of the camera application, starting a filter service; rendering a fourth image acquired by the camera based on the filter service; after the camera application is closed, in response to a received fifth user operation, starting the camera application and starting the AI service; while the camera application is running, in response to a received operation of the user clicking the filter option, starting the filter service; rendering a fifth image acquired by the camera based on the filter service; determining that the preload services of the camera application include the filter service and not the AI service; after the camera application is closed again, in response to a received sixth user operation, starting the camera application and starting the preload services of the camera application; and in response to a received operation of the user clicking the filter option, rendering a sixth image acquired by the camera based on the started filter service. In this way, the preload services corresponding to the camera application can be updated periodically according to the user's usage habits, and the electronic device can start the updated preload services when the camera application is started.
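The periodic update described above can be sketched as a sliding window over recent sessions, so that an old habit (the AI service) ages out and a new one (the filter service) takes its place. The window size and names are assumptions made for the example, not taken from the patent.

```python
# Illustrative sketch of the periodic update above: only the most recent
# sessions count, so a newly dominant service (the filter) replaces an older
# one (the AI service) in the preload set. Window size and names are
# assumptions made for this example.

WINDOW = 2  # consider only the two most recent sessions

def update_preload_services(session_logs):
    """Recompute the preload set from the most recent sessions only."""
    recent = session_logs[-WINDOW:]
    if len(recent) < WINDOW:
        return set()
    return set.intersection(*recent)

# Older sessions used the AI service; the latest two used only the filter:
logs = [{"ai_service"}, {"ai_service"}, {"filter_service"}, {"filter_service"}]
print(update_preload_services(logs))  # {'filter_service'}
```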
According to a first aspect, or any implementation of the first aspect above, the computer program, when executed by one or more processors, causes the electronic device to perform the steps of: after the electronic equipment renders the fourth image acquired by the camera based on the filter service, third user habit information is sent to the server, and the third user habit information is used for indicating that the filter service is invoked when the electronic equipment runs the camera application. In this way, the server can periodically count and update the habit of the user, so that the preloaded service can meet the requirements of the user in different periods when the electronic equipment uses different applications.
According to a first aspect, or any implementation of the first aspect above, the computer program, when executed by one or more processors, causes the electronic device to perform the steps of: after the electronic equipment renders the fifth image acquired by the camera based on the filter service, fourth user habit information is sent to the server, and the fourth user habit information is used for indicating that the filter service is invoked when the electronic equipment runs the camera application. In this way, the server can periodically count and update the habit of the user, so that the preloaded service can meet the requirements of the user in different periods when the electronic equipment uses different applications.
According to a first aspect, or any implementation of the first aspect above, the computer program, when executed by one or more processors, causes the electronic device to perform the steps of: receiving second indication information sent by a server, wherein the second indication information is used for indicating that the preloading service of the camera application comprises a filter service; in response to receiving the second indication information, it is determined that the preloaded services of the camera application include the filter service and not the AI service. In this way, the electronic device can update the pre-load service corresponding to the stored camera application according to the instruction of the server.
According to the first aspect, or any implementation of the first aspect above, the computer program, when executed by the one or more processors, causes the electronic device to perform the following steps: after performing AI processing on the first image acquired by the camera based on the AI service, in response to a received seventh user operation, closing the camera application and closing the AI service; in response to a received eighth user operation, starting a gallery application; while the gallery application is running, in response to a received operation of the user clicking a second AI option of the gallery application, starting the AI service; performing AI processing on images in the gallery based on the AI service; after performing AI processing on the second image acquired by the camera based on the AI service, in response to a received ninth user operation, closing the camera application and closing the AI service; in response to a received tenth user operation, starting the gallery application; while the gallery application is running, in response to a received operation of the user clicking the second AI option, starting the AI service; performing AI processing on images in the gallery based on the AI service; determining that the keep-alive services of the camera application include the AI service; and after performing AI processing on the third image acquired by the camera based on the started AI service, closing the camera application in response to a received eleventh user operation, wherein the AI service remains in the started state. The present application also provides a service keep-alive scheme: based on the user's usage habits, the electronic device or the server can obtain the association between the services used by different applications, so as to determine the keep-alive services corresponding to the camera application. The electronic device can then still retain some services of the camera application, such as the AI service, after the camera application is closed.
According to the first aspect, or any implementation of the first aspect above, the computer program, when executed by the one or more processors, causes the electronic device to perform the following steps: in response to a received twelfth user operation, starting the gallery application; and in response to a received operation of the user clicking the second AI option, performing AI processing on a fifth image in the gallery based on the started AI service. With the service keep-alive scheme provided by the present application, after the gallery application is started, it can directly invoke the AI service that the camera application left running, effectively shortening the service cold-start time and reducing the overhead of repeatedly starting and stopping the service.
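The keep-alive decision described above can be sketched as follows: if the user repeatedly opens another application right after closing one, and both invoke the same service, that service is worth keeping alive. The tuple shape, names, and occurrence threshold are assumptions made for this example, not the patent's implementation.

```python
# Illustrative sketch of the keep-alive scheme above: if the user repeatedly
# opens another app (the gallery) after closing one (the camera) and invokes
# the same service (AI) in both, that service is kept alive when the first
# app closes instead of being stopped and cold-started again. Names and the
# occurrence threshold are assumptions made for this example.

MIN_OCCURRENCES = 2  # the pattern is observed twice in the example above

def determine_keep_alive(transitions):
    """transitions: (closed_app, its_services, next_app, next_services) tuples."""
    counts = {}
    for closed_app, closed_services, _next_app, next_services in transitions:
        # A service used by both the closed app and the next app is a candidate.
        for service in set(closed_services) & set(next_services):
            key = (closed_app, service)
            counts[key] = counts.get(key, 0) + 1
    return {k: n for k, n in counts.items() if n >= MIN_OCCURRENCES}

# Camera closed, gallery opened, AI service used in both, observed twice:
transitions = [
    ("camera", {"ai_service"}, "gallery", {"ai_service"}),
    ("camera", {"ai_service"}, "gallery", {"ai_service"}),
]
print(determine_keep_alive(transitions))  # {('camera', 'ai_service'): 2}
```

When the camera application is later closed, any service appearing in the returned mapping would be left running so the gallery application can attach to it directly.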
In a second aspect, the present application provides an application scheduling method. The method comprises the following steps: in response to a received first user operation, starting a camera application; while the camera application is running, in response to a received operation of the user clicking a first AI option of the camera application, starting an artificial intelligence (AI) service; performing AI processing on a first image acquired by a camera based on the AI service; after the camera application is closed, in response to a received second user operation, restarting the camera application; while the camera application is running, in response to a received operation of the user clicking the first AI option, starting the AI service; performing AI processing on a second image acquired by the camera based on the AI service; determining that the preload services of the camera application include the AI service; after the camera application is closed again, in response to a received third user operation, starting the camera application and starting the preload services of the camera application; and in response to a received operation of the user clicking the first AI option, performing AI processing on a third image acquired by the camera based on the started AI service.
According to a second aspect, after the camera application is closed, the method further comprises: and sending first user habit information to a server, wherein the first user habit information is used for indicating that the electronic equipment invokes the AI service when the camera application is operated.
According to a second aspect, or any implementation manner of the second aspect, after the camera application is turned off again, the method further includes: and sending second user habit information to the server, wherein the second user habit information is used for indicating that the AI service is invoked when the electronic equipment runs the camera application again.
According to a second aspect, or any implementation manner of the second aspect, it is determined that the preload service of the camera application includes an AI service, including: receiving first indication information sent by a server, wherein the first indication information is used for indicating that a preloading service of a camera application comprises an AI service; in response to receiving the first indication information, it is determined that the preloaded services of the camera application include AI services.
According to a second aspect, or any implementation manner of the second aspect above, the method further comprises: after the camera application is closed, responding to the received fourth user operation, starting the camera application, and starting an AI service; when the camera application is operated, responding to the received operation of clicking the filter option of the camera application by a user, and starting a filter service; rendering a fourth image acquired by the camera based on the filter service; after the camera application is closed, responding to the received fifth user operation, starting the camera application, and starting an AI service; when the camera application runs, responding to the received operation of clicking the filter option by a user, and starting a filter service; rendering a fifth image acquired by the camera based on the filter service; determining that the preloaded services of the camera application include the filter service and not the AI service; after the camera application is closed again, responding to the received sixth operation, starting the camera application, and starting a preloading service of the camera application; and responding to the received operation of clicking the filter option by the user, and rendering a sixth image acquired by the camera based on the started filter service.
According to a second aspect, or any implementation manner of the second aspect, after the electronic device renders the fourth image acquired by the camera based on the filter service, the method further includes: and sending third user habit information to the server, wherein the third user habit information is used for indicating that the electronic equipment invokes the filter service when the camera application is operated.
According to a second aspect, or any implementation manner of the second aspect, after the electronic device renders the fifth image collected by the camera based on the filter service, the method further includes: and sending fourth user habit information to the server, wherein the fourth user habit information is used for indicating that the electronic equipment invokes the filter service when the camera application is operated.
According to a second aspect, or any implementation of the second aspect above, determining that the preload service of the camera application includes the filter service and not the AI service includes: receiving second indication information sent by a server, wherein the second indication information is used for indicating that the preloading service of the camera application comprises a filter service; in response to receiving the second indication information, it is determined that the preloaded services of the camera application include the filter service and not the AI service.
According to a second aspect, or any implementation manner of the second aspect, after performing AI processing on the first image acquired by the camera based on the AI service, the method further includes: in response to the received seventh user operation, closing the camera application and closing the AI service; responding to the received eighth user operation, and starting a gallery application; when the gallery application is operated, responding to the received operation of clicking a second AI option of the gallery application by a user, and starting an AI service; performing AI processing on the images in the gallery based on the AI service; after AI processing is performed on the second image acquired by the camera based on the AI service, the method further comprises: in response to the received ninth user operation, closing the camera application and closing the AI service; responding to the received tenth user operation, and starting a gallery application; when the gallery application is operated, responding to the received operation of clicking the second AI option by the user, and starting AI service; performing AI processing on the images in the gallery based on the AI service; determining that the keep-alive services of the camera application include AI services; after AI processing is performed on the third image acquired by the camera based on the started AI service, the method further includes: and closing the camera application in response to the received eleventh user operation, wherein the AI service is in a starting state.
According to a second aspect, or any implementation manner of the second aspect above, the method further comprises: responding to the received twelfth user operation, and starting a gallery application; and responding to the received operation of clicking the second AI option by the user, and carrying out AI processing on the fifth image in the gallery based on the started AI service.
The second aspect and its implementations correspond to the first aspect and its implementations, respectively. For the technical effects of the second aspect and any implementation thereof, reference may be made to the technical effects of the first aspect and the corresponding implementations, which are not repeated here.
In a third aspect, the application provides a computer readable medium for storing a computer program comprising instructions for performing the method of the second aspect or any possible implementation of the second aspect.
In a fourth aspect, the present application provides a computer program comprising instructions for performing the method of the second aspect or any possible implementation of the second aspect.
In a fifth aspect, the present application provides a chip comprising a processing circuit and transceiver pins. The transceiver pins and the processing circuit communicate with each other via an internal connection path; the processing circuit performs the method of the second aspect, or any possible implementation of the second aspect, to control the receive pin to receive signals and the transmit pin to transmit signals.
Drawings
Fig. 1 is a schematic diagram of the hardware structure of an electronic device, shown by way of example;
Fig. 2 is a schematic diagram of the software architecture of an electronic device, shown by way of example;
Fig. 3 is a schematic diagram of an exemplary user interface;
Fig. 4 is a schematic diagram of an exemplary user interface;
Fig. 5 is a schematic diagram of exemplary module interactions;
Fig. 6 is a schematic diagram of exemplary module interactions;
Fig. 7 is a schematic diagram of an exemplary user interface;
Fig. 8 is a schematic diagram of exemplary module interactions;
Figs. 9a-9b are schematic diagrams of exemplary user interfaces;
Fig. 10 is a schematic diagram of the interaction between a mobile phone and the cloud;
Fig. 11 is a schematic diagram of an exemplary user interface;
Fig. 12 is a schematic diagram of an exemplary user interface;
Fig. 13 is a schematic diagram of an exemplary user interface;
Fig. 14 is a schematic diagram of exemplary module interactions;
Fig. 15 is a schematic diagram of the interaction between a mobile phone and the cloud;
Figs. 16a-16b are schematic diagrams of the user habit results analyzed by the cloud based on the received user habit information;
Fig. 17 is a schematic diagram of the interaction between a mobile phone and the cloud;
Fig. 18 is a schematic diagram of exemplary module interactions;
Fig. 19 is a schematic diagram of an exemplary user interface;
Fig. 20 is a schematic diagram of the structure of an apparatus, shown by way of example.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, of the embodiments of the application. All other embodiments obtained by a person skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist. For example, A and/or B may represent: A exists alone, both A and B exist, or B exists alone.
The terms "first", "second", and the like in the description and claims of the embodiments of the application are used to distinguish between different objects, not necessarily to describe a particular order of those objects. For example, a first target object and a second target object are used to distinguish between different target objects, not to describe a particular order of target objects.
In the embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described as "exemplary" or "such as" should not be construed as being preferred over, or more advantageous than, other embodiments or designs. Rather, such words are intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" means two or more. For example, the plurality of processing units refers to two or more processing units; the plurality of systems means two or more systems.
Fig. 1 shows a schematic configuration of an electronic device 100. It should be understood that the electronic device 100 shown in fig. 1 is only one example of an electronic device, and that the electronic device 100 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in fig. 1 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor module 180, keys 190, motor 191, indicator 192, camera 193, display 194, and subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural center and command center of the electronic device 100. The controller can generate operation control signals according to instruction opcodes and timing signals, to control instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory. Repeated accesses are thereby avoided, and the waiting time of the processor 110 is reduced, improving the efficiency of the system.
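The cache behavior described above can be illustrated with a minimal sketch (Python used purely for illustration; the class and names are assumptions, not part of the device):

```python
class SimpleCache:
    """Minimal illustration of the cache behavior described above: data the
    processor has just used is kept close by, so reusing it avoids a slow
    fetch from the backing store (which stands in for main memory)."""

    def __init__(self, backing_store):
        self.backing = backing_store
        self.cache = {}
        self.slow_fetches = 0

    def read(self, addr):
        if addr in self.cache:
            # Hit: served directly from the cache, no slow access.
            return self.cache[addr]
        # Miss: fetch from the slower backing store and remember it.
        self.slow_fetches += 1
        self.cache[addr] = self.backing[addr]
        return self.cache[addr]
```

Reading the same address twice incurs only one slow fetch, which is the efficiency gain the paragraph describes.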
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, a charger, a flash, the camera 193, etc., respectively, through different I2C bus interfaces. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, such that the processor 110 communicates with the touch sensor 180K through the I2C bus interface to implement a touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, to implement the function of answering a call through a Bluetooth headset.
The PCM interface may also be used for audio communication, to sample, quantize, and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, to implement the function of answering a call through a Bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bi-directional communication bus. It converts data to be transmitted between serial and parallel forms. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement a Bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the UART interface, to implement the function of playing music through a Bluetooth headset.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the photographing functions of electronic device 100. The processor 110 and the display 194 communicate via a DSI interface to implement the display functionality of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured to carry control signals or data signals. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. It can also be used to connect a headset and play audio through it. The interface may also be used to connect other electronic devices, such as AR devices.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt an interfacing manner different from that in the above embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used to modulate the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be independent of the processor 110 and provided in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: dynamic picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons in the human brain, it can rapidly process input information and can also continuously self-learn. Applications such as intelligent cognition of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 performs the various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121. Illustratively, in an embodiment of the present application, by executing instructions stored in the internal memory 121, the processor 110 causes the electronic device 100 to preload a portion of an application program when the application is invoked, and to keep a portion of the application program alive after the application is closed.
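The preload and keep-alive behavior mentioned above can be sketched as follows. This is a simplified illustrative model, not the patent's actual implementation; the class, method, and component names are all assumptions:

```python
class AppScheduler:
    """Simplified model of the behavior described above: when an application
    is invoked, only part of it is loaded up front (plus a preloaded subset);
    when it is closed, part of it is kept alive rather than fully released."""

    def __init__(self):
        self.resident = {}  # app name -> set of components currently resident

    def launch(self, app, needed, preload):
        # Load the components needed now, plus a preloaded subset for later use.
        self.resident[app] = set(needed) | set(preload)

    def close(self, app, keep_alive):
        # After closing, release everything except the keep-alive subset.
        self.resident[app] = self.resident.get(app, set()) & set(keep_alive)
```

For example, launching a camera application could keep a capture service resident after the application is closed, so a subsequent launch starts faster.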
Alternatively, the internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music, or to hands-free conversations, through the speaker 170A.
The receiver 170B, also referred to as an "earpiece", is used to convert the audio electrical signal into a sound signal. When the electronic device 100 is answering a telephone call or a voice message, voice can be received by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "mic" or "mike", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak with the mouth close to the microphone 170C, inputting the sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones 170C to implement sound signal collection, noise reduction, sound source identification, directional recording functions, and the like.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, or a 3.5mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are various types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed. When a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
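The threshold-based dispatch described for the short message icon can be sketched as follows. The threshold value and the instruction names are illustrative assumptions; the source does not specify them:

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # normalized touch force; an illustrative value

def sms_icon_instruction(touch_force):
    """Map the intensity of a touch on the short message application icon
    to an operation instruction, per the threshold behavior described above."""
    if touch_force < FIRST_PRESSURE_THRESHOLD:
        return "view_message"   # light press: view the short message
    return "new_message"        # firm press: create a new short message
```

The same touch location thus yields different instructions depending only on the measured pressure.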
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates, according to the angle, the distance that the lens module needs to compensate, and lets the lens counteract the shake of the electronic device 100 through reverse motion, thereby realizing anti-shake. The gyro sensor 180B may also be used in navigation and somatosensory gaming scenarios.
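The compensation distance mentioned above can be approximated geometrically: for a detected shake angle, the image shift on the sensor is roughly the focal length times the tangent of the angle, and the lens moves by that amount in the opposite direction. This is a common small-angle approximation, not the device's actual algorithm; the parameter values are illustrative:

```python
import math

def lens_compensation_mm(focal_length_mm, shake_angle_deg):
    """Estimate the lens displacement (mm) needed to cancel a detected shake
    angle: image shift is about focal_length * tan(angle) for small angles.
    A geometric approximation, not the patent's anti-shake algorithm."""
    return focal_length_mm * math.tan(math.radians(shake_angle_deg))
```

With a 26 mm equivalent focal length and a 0.5° shake, the required compensation comes out to roughly 0.23 mm, which the lens module applies through reverse motion.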
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
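The altitude calculation described above is commonly done with the international barometric formula. The constants below (standard sea-level pressure and the 44330/5.255 coefficients) are the usual textbook values, an assumption rather than something the source specifies:

```python
SEA_LEVEL_HPA = 1013.25  # standard sea-level pressure, hPa

def altitude_from_pressure(pressure_hpa, p0=SEA_LEVEL_HPA):
    """Estimate altitude (m) from barometric pressure using the
    international barometric formula, a common approximation:
    h = 44330 * (1 - (p / p0) ** (1 / 5.255))."""
    return 44330.0 * (1.0 - (pressure_hpa / p0) ** (1.0 / 5.255))
```

A reading of 900 hPa against a standard sea-level reference yields an altitude of roughly 1 km, the kind of coarse estimate that assists positioning and navigation.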
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip cover using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D. Features such as automatic unlocking upon flip-open can then be set according to the detected open or closed state of the leather case or of the flip.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. It can also be used to recognize the attitude of the electronic device, and is applied in landscape/portrait switching, pedometers, and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, the electronic device 100 may range using the distance sensor 180F to achieve quick focus.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 100 emits infrared light outward through the light-emitting diode. The electronic device 100 detects infrared light reflected from nearby objects using the photodiode. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may utilize the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint-based photographing, fingerprint-based call answering, and the like.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to prevent the low temperature from causing the electronic device 100 to shut down abnormally. In other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
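The temperature processing strategy above amounts to a threshold-based policy. A minimal sketch follows; the specific threshold values and action names are assumptions, since the source only describes the thresholds abstractly:

```python
def thermal_action(temp_c, high=45.0, low=0.0, very_low=-10.0):
    """Illustrative mapping from a reported temperature to the thermal
    actions described above; all threshold values are assumed examples."""
    if temp_c > high:
        # Over-temperature: throttle the nearby processor for thermal protection.
        return "reduce_processor_performance"
    if temp_c < very_low:
        # Severe cold: boost battery output voltage to avoid abnormal shutdown.
        return "boost_battery_voltage"
    if temp_c < low:
        # Moderate cold: heat the battery.
        return "heat_battery"
    return "normal"
```

Note the ordering: the most extreme cold case is checked before the milder one, so each temperature maps to exactly one action.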
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 together form a touch screen. The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100, at a different location from the display 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive a blood pressure beat signal. In some embodiments, the bone conduction sensor 180M may also be provided in a headset, combined into a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vibrating bone mass of the vocal part obtained by the bone conduction sensor 180M, so as to implement a voice function. The application processor may parse heart rate information based on the blood pressure beat signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card may be inserted into the SIM card interface 195, or removed from the SIM card interface 195, to make contact with or be separated from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 195 simultaneously. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 2 is a software configuration block diagram of the electronic device 100 according to the embodiment of the present application.
The layered architecture of the electronic device 100 divides the software into several layers, each with a distinct role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime (Android runtime) and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 2, the application framework layer may include a window manager, a view system, an AI (artificial intelligence) service, a code scanning service, a media service (Media Server), an audio service (Audio Server), a camera service (Camera Server), and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The AI service is used for AI identification. For example, the AI service may perform AI-recognition on an image captured by a camera to identify a person in the image. The AI service may further perform AI identification on the image captured by the camera to identify objects, scenes, etc. in the image, and obtain image processing parameters corresponding to the objects, scenes, etc. in the image to instruct the view system to process the image (e.g., render the image) based on the image processing parameters.
The code scanning service is used for identifying graphic codes in images acquired by the camera.
The media service is used to manage the processing of audio data and image data, for example controlling the data streams of the audio data and image data, and writing the audio stream and image stream into an MP4 file. It should be noted that, in the descriptions of the embodiments of the present application, the audio data and image data may also be referred to as an audio stream and an image stream, or as audio information and image information, respectively; the present application is not limited in this regard.
The audio service is used for processing the audio stream accordingly. The camera service is used for carrying out corresponding processing on the image stream.
The system library and runtime layer includes the system libraries and the Android runtime (Android Runtime). The system libraries may include multiple functional modules, for example: a browser kernel, a 3D graphics library (e.g., OpenGL ES), and a font library. The browser kernel is responsible for interpreting web page languages (e.g., HTML, an application of the Standard Generalized Markup Language, and JavaScript) and rendering (displaying) web pages. The 3D graphics library is used to implement three-dimensional graphics drawing, image rendering, composition, layer processing, and the like. The font library is used to implement the input of different fonts. The Android runtime includes a core library and a virtual machine, and is responsible for scheduling and managing the Android system. The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the Android core library. The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
It should be understood that the components contained in the system framework layer and the system library and runtime layer shown in fig. 2 do not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange the components differently.
The HAL layer is an interface layer between the operating system kernel and the hardware circuitry. The HAL layer includes, but is not limited to: an audio hardware abstraction layer (Audio HAL) and a camera hardware abstraction layer (Camera HAL). The audio hardware abstraction layer is used to process the audio stream (e.g., noise reduction, directional enhancement), and the camera hardware abstraction layer is used to process the image stream.
The kernel layer is the layer between the hardware and the software layers described above. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver. The hardware may include a camera, a display screen, a microphone, a processor, a memory, and the like.
Fig. 3 is a schematic diagram of an exemplary user interface. Referring to fig. 3, the exemplary display interface 301 includes one or more controls. Controls include, but are not limited to: network controls, power controls, application icon controls, and the like. Exemplary application icon controls include, but are not limited to: a video application icon control, a weather application icon control, a settings application icon control, a camera application icon control 302, and the like. In the embodiment of the present application, the user may click the camera application icon control 302.
Referring to fig. 4, the exemplary mobile phone displays a camera preview interface 401 in response to the received user click on the camera application icon control 302. Illustratively, the camera preview interface 401 includes, but is not limited to: a camera preview window 402 and one or more shooting options. Shooting options include, but are not limited to: aperture, night scene, portrait, photo, video 403, professional, and the like. Illustratively, the camera preview window is used to display the image captured by the camera. The embodiment of the present application takes as an example the user selecting the video option 403, i.e., the camera application being in video recording mode.
For example, the recording process of the electronic device may be divided into two parts. The first part is a creation process (which may also be understood as a preparation process), in which the camera application calls the media service, and the media service calls at least one service or module to create the corresponding instances, as shown in fig. 5. The second part is the recording process proper, in which each instance processes the acquired data (audio or images), as shown in fig. 6. The creation process mainly consists of each module creating its corresponding instance; the recording process is the processing of the data (including the audio stream and the image stream) by each instance.
First part, creation process
1. Referring to fig. 5, an exemplary camera application is launched and invokes a media service to cause the media service to create a corresponding instance. Specifically, as shown in fig. 3, after detecting that the user clicks the camera application icon control, the mobile phone starts the camera application. As shown in fig. 4, the handset displays a camera application preview interface 401.
Illustratively, after the camera application is started, a Media Recorder instance is created in the application framework layer through an interface with the application framework layer, to start the recording process. The Media Recorder instance instructs the media service to create corresponding instances. It should be noted that an "instance" described in the embodiments of the present application may also be understood as program code or process code running in a process, used to perform corresponding processing on received data (such as an audio stream or an image stream). It should also be noted that the descriptions of the embodiments of the present application take the camera application as an example; in other embodiments, the application may be another application with a shooting function, for example the camera function in a chat application, and the present application is not limited in this regard.
Illustratively, the media service creates the instances corresponding to audio and images in response to the indication of the Media Recorder instance. Specifically, the media service creates a Stagefright Recorder (recording process) instance. The Stagefright Recorder instance is used to manage the initialization of the audio and image data and the data flow.
The Stagefright Recorder instance creates a Camera Source instance, an Audio Record instance, a video encoder (Video Encoder) instance, an audio encoder (Audio Encoder) instance, and an Mpeg4 Writer instance. The embodiment of the present application describes only the creation of an MP4-format file as an example; in other embodiments, other video formats may be generated and the corresponding instances created.
2. The media service instructs the camera service and the audio service to create corresponding instances.
Illustratively, the Camera Source instance indicates that the Camera service creates a Camera instance, and the Audio Record instance indicates that the Audio service creates a Record Thread instance. Accordingly, the Camera service creates a Camera instance, and the audio service creates a Record Thread instance.
3. The camera service instructs the camera hardware abstraction layer to create a corresponding instance, and the audio service instructs the audio hardware abstraction layer to create a corresponding instance.
Illustratively, the Camera instance instructs the camera hardware abstraction layer to create a Camera3Device (camera device; the numeral 3 represents the version number of the camera service, which may change as the version is updated) instance, and the Record Thread instance instructs the audio hardware abstraction layer to create an Input Stream instance.
4. The camera hardware abstraction layer invokes the camera driver, and the audio hardware abstraction layer invokes the microphone driver. Illustratively, the Camera3Device instance triggers the camera driver to start, and the Input Stream instance triggers the microphone driver to start.
5. The camera driver invokes the camera to collect the image stream and the microphone driver invokes the microphone to collect the audio stream.
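The five creation steps above form a top-down chain in which each layer creates or starts the component below it. The following is a minimal, illustrative Python sketch of that chain; the names mirror the instances in fig. 5, but the code is only a model of the described flow, not the Android implementation.

```python
# Illustrative model of the creation chains in fig. 5 (not Android source).
# Each layer, once created, creates/starts the next component down.

IMAGE_CHAIN = [
    "Media Recorder",        # application framework layer
    "Stagefright Recorder",  # media service
    "Camera Source",         # media service
    "Camera",                # camera service
    "Camera3Device",         # camera hardware abstraction layer
    "camera driver",         # kernel layer
    "camera",                # hardware
]

AUDIO_CHAIN = [
    "Media Recorder",
    "Stagefright Recorder",
    "Audio Record",          # media service
    "Record Thread",         # audio service
    "Input Stream",          # audio hardware abstraction layer
    "microphone driver",     # kernel layer
    "microphone",            # hardware
]

def create(chain):
    """Walk the chain top-down, 'creating' each component in order."""
    created = []
    for component in chain:
        created.append(component)  # in the real flow, each step is a call/IPC
    return created

print(create(IMAGE_CHAIN)[-1])  # the camera hardware is started last
```

Note that the two chains share their first two stages: the Media Recorder and Stagefright Recorder instances manage both the image path and the audio path.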
Second part, recording process
1. Referring to fig. 6, an exemplary camera outputs a captured image stream to a camera driver and a microphone outputs a picked-up audio stream to a microphone driver.
2. The camera driver outputs the image stream and the corresponding system time to the camera hardware abstraction layer, and the microphone driver outputs the audio stream to the audio hardware abstraction layer. Illustratively, the Camera3Device instance obtains the image stream input by the camera driver, and the Input Stream instance obtains the audio stream input by the microphone driver.
3. The camera hardware abstraction layer outputs the acquired image stream to the camera service, and the audio hardware abstraction layer outputs the acquired audio stream to the audio service.
Illustratively, the Camera instance obtains the image stream input by the Camera3Device instance, and the Record Thread instance obtains the audio stream input by the Input Stream instance.
4. The camera service outputs each image in the image stream to the media service. In addition, the audio service outputs each audio stream to the media service.
Illustratively, the Camera Source instance obtains images in an image stream input by the Camera instance, and the Audio Record instance obtains an Audio stream input by the Record Thread instance.
5. The media service generates an MP4 file based on the acquired plurality of images and the plurality of audio streams.
Illustratively, the Camera Source instance outputs the acquired plurality of images to the video encoding instance, and the Audio Record instance outputs the acquired plurality of Audio streams to the Audio encoding instance.
The video encoder instance encodes the multiple images to generate corresponding image frames and outputs the image frames to the Mpeg4 Writer instance. Likewise, the audio encoder (Audio Encoder) instance encodes the multiple audio streams to generate corresponding audio frames and outputs the audio frames to the Mpeg4 Writer instance.
Illustratively, the Mpeg4 Writer instance generates an MP4 file based on the acquired image frames and audio frames. The MP4 file includes image data (i.e., multiple image frames) and audio data (i.e., multiple audio frames). When the MP4 file is played on any platform or player, the player decodes the image frames and audio frames according to the MPEG-4 standard to obtain the original images corresponding to the image frames and the original audio corresponding to the audio frames, and then plays the decoded images and audio.
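To make the writer's role concrete, here is a small, illustrative Python sketch of an Mpeg4 Writer-like step that interleaves encoded video and audio frames by timestamp into one container stream. The timestamps and payloads are invented for illustration; a real MP4 muxer writes box structures per the MPEG-4 standard rather than a sorted list.

```python
# Illustrative sketch (not a real MP4 muxer): interleave encoded frames
# from the video and audio tracks by timestamp, as a container writer does.

def mux(video_frames, audio_frames):
    """video_frames/audio_frames: lists of (timestamp_ms, payload)."""
    tagged = ([(t, "video", p) for t, p in video_frames] +
              [(t, "audio", p) for t, p in audio_frames])
    return sorted(tagged)  # ordered by timestamp (audio first on ties)

video = [(0, "I-frame"), (33, "P-frame")]             # roughly 30 fps
audio = [(0, "AAC-0"), (21, "AAC-1"), (42, "AAC-2")]  # ~21 ms AAC frames

stream = mux(video, audio)
print([kind for _, kind, _ in stream])
```

A player then performs the inverse: it demultiplexes the interleaved stream back into a video track and an audio track before decoding each one.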
Illustratively, the media service outputs a plurality of images to the camera application. The camera application displays an image of the media service input in a camera preview window 402.
Fig. 7 is a schematic diagram of an exemplary user interface. Referring to (1) of fig. 7, illustratively, during the preview process, the user clicks the AI option 404 to start the AI service. Referring to (2) of fig. 7, the exemplary mobile phone starts the AI service in response to the received user operation. After the AI service is started, the mobile phone can perform AI identification on the image captured by the camera and process the image based on the AI identification result. For example, still referring to (2) of fig. 7, after the AI service is started, the AI service identifies the image captured by the camera and recognizes that it includes a person. The AI service may display the AI identification result, such as a "portrait" option, in the camera preview window 402 to indicate that the current shooting scene is a portrait scene. In addition, the AI service obtains the pre-stored image processing parameters associated with the portrait scene and may process the image accordingly based on those parameters. For example, in the embodiment of the present application, after the AI service identifies that the image includes a portrait, it may blur the background other than the portrait. Optionally, a cancel button may be included in the "portrait" option; if the user clicks the cancel button, the AI service cancels the current processing of the image, i.e., cancels the background blurring, and restores the preview image to the original image.
Fig. 8 is a schematic diagram of exemplary module interactions. Referring to fig. 8, illustratively, services and modules such as the media service, the camera service, and the audio service execute according to the flow described in fig. 6. In response to receiving the user's click on the AI option 404, the camera application sends indication information to the media service, instructing the media service to call the AI service. Illustratively, the media service calls (which may also be referred to as loads) the AI service in response to the indication of the camera application.
After the AI service is started, the camera service may output an image stream to the AI service. The AI service may perform AI identification on the image stream and, after AI processing, output the processed image to the media service. The media service outputs the image to the camera application. The camera application may display the AI-processed image in the camera preview window 402.
It should be noted that, as shown in fig. 7 and fig. 8, the AI service is loaded after the camera application has started, and there is a response time period between the user's click on the AI option and the completion of AI service startup, which may be, for example, 1 s (second). That is, from the user's perspective, only after an interval of 1 s from clicking the AI option 404 does the background of the image seen in the camera preview window become blurred (the time taken by the AI service to process the image and by the interaction between the media service and the camera application is negligible here).
The embodiments of the present application provide a preloading method that can preload at least one application, or at least one service within an application, based on statistics of the user's habit information, thereby effectively improving the startup efficiency of the application or service and improving the user experience.
Fig. 9a is a schematic diagram of an exemplary user interface. Referring to fig. 9a, the exemplary display interface 301 includes one or more controls. Controls include, but are not limited to: network controls, power controls, application icon controls, and the like. Exemplary application icon controls include, but are not limited to: a video application icon control, a weather application icon control, a settings application icon control 302, and the like. In the embodiment of the present application, the user may click the settings application icon control 302. Referring to fig. 9b, the exemplary mobile phone displays a settings interface 303 in response to the received user click. One or more options are included in the settings interface 303. Optionally, an account option 304 is included in the settings interface 303. The account option 304 is used to indicate the account the user is logged in to, and the user may view account information by clicking the account option 304. For example, after the user logs in to an Honor account, the cloud can record relevant information about the mobile phone through the Honor account. For example, in the embodiment of the present application, after the mobile phone sends the user habit information to the cloud, the cloud may associate the received user habit information with the Honor account of the mobile phone.
The scenario in fig. 7 is still taken as an example below. Referring to fig. 10, the exemplary mobile phone may send user habit information to the cloud. Illustratively, the user habit information describes the user's habits when using a certain application. For example, as described above, the user starts the AI service while using the camera application. The user habit information sent by the mobile phone to the cloud thus indicates that, when the user uses the camera application, the loaded services include, but are not limited to: the media service, the camera service, the AI service, the camera hardware abstraction layer, the audio hardware abstraction layer, the camera driver, the microphone driver, and so on.
Optionally, the mobile phone may send the user habit information to the cloud after the camera application has been used, i.e., after the camera application is closed or the user switches to another application.
Optionally, the mobile phone may also send user habit information to the cloud end during the use process of the camera application. For example, after the AI service is started, the mobile phone may send the user habit information to the cloud. For another example, if the user starts the flash service, the mobile phone may send user habit information to the cloud, where the user habit information is used to indicate that the user uses the camera application, and the loaded service includes but is not limited to: media services, camera services, AI services, flash services, camera hardware abstraction layers, audio hardware abstraction layers, camera drivers, microphone drivers, etc.
Optionally, the mobile phone may send the user account information to the cloud at the same time as the user habit information. The cloud receives the user habit information and account information sent by the mobile phone, and can determine the correspondence between the user habit information and the user account based on the account information. For example, in the embodiment of the present application, the mobile phone may send the user account and the user habit information to the cloud each time after the user uses the camera application. The embodiment of the present application assumes that the user starts the AI service every time the camera is used, i.e., the AI service is included among the services loaded in the camera application scene in the user habit information sent each time by the mobile phone.
The scenario shown in fig. 7 is the service call flow of a camera application scene. The following describes the service call flow in a payment scene, taking as an example a scene in which the camera is called during payment. Figs. 11 to 13 are schematic diagrams of calling the camera in an exemplary payment scene. Referring to fig. 11, the exemplary display interface 1101 includes a wallet application icon control 1102. For descriptions of other controls in the display interface 1101, refer to the relevant description of fig. 3; they are not repeated here.
Illustratively, the user clicks the wallet application icon control 1102. Referring to fig. 12, the mobile phone starts the wallet application in response to the received user operation and displays a wallet application interface 1201. The wallet application interface 1201 includes, but is not limited to: a code scanning option 1202 and a service option box 1203. One or more services are included in the service option box 1203. Services include, but are not limited to: a payment service, a ride service, a key service, a card package service, and the like.
The user may click the code scanning option 1202. Referring to fig. 13, the mobile phone displays a code scanning interface 1301 in response to the received user operation. The code scanning interface 1301 displays the graphic code captured by the camera, and the code scanning service can identify the graphic code captured by the camera.
With reference to fig. 13, fig. 14 is a schematic diagram of exemplary module interactions. Referring to fig. 14, the exemplary wallet application is started and calls the media service in response to the user clicking the code scanning option 1202. The media service calls the camera service, and the camera service calls the camera hardware abstraction layer. The camera hardware abstraction layer calls the camera driver, and the camera driver calls the camera. In addition, the wallet application calls the code scanning service. For the specific calling procedure, refer to the description of fig. 5; it is not repeated here.
Similar to the description of fig. 7, after the user clicks the code scanning option, the wallet application calls the code scanning service in response to the received user operation. Accordingly, there may be a response delay from the user's click to the display of the code scanning interface, which may be, for example, 1 s.
Referring to fig. 15, the exemplary mobile phone sends user habit information and user account information to the cloud. The user habit information indicates that the user uses the code scanning application, and the loaded services include: the code scanning service, the media service, the camera service, the camera hardware abstraction layer, the camera driver, and so on.
The cloud receives user habit information and user account information sent by the mobile phone, and correlates the received user habit information and the user account information. That is, the cloud has received two pieces of user habit information, both of which are associated with the same user account information.
It should be noted that fig. 10 and fig. 15 each illustrate only one interaction between the mobile phone and the cloud. In the embodiments of the present application, each time the user finishes using the camera application or the code scanning application, the mobile phone can send the services loaded in the camera application scene or the code scanning application scene to the cloud. For example, the user may use the camera application 10 times in a day, and 8 of those 10 uses may trigger the AI function. Correspondingly, each time the camera application is closed, the mobile phone sends user habit information to the cloud indicating the related services started by the camera application (which can also be understood as the set of started services). That is, in one day the mobile phone sends 10 pieces of user habit information corresponding to the camera application to the cloud. Among those 10 pieces, 8 indicate that the camera application called multiple services, including the AI service, during that launch. It can be understood that the service sets called by the application, as indicated in the multiple pieces of user habit information, have an intersection, and that intersection is the set of services including the AI service.
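The intersection described above can be sketched as follows. This is an illustrative Python example: the 8-of-10 split follows the text, while the service names and data layout are invented for illustration.

```python
# Illustrative sketch: each use of the camera application is reported as the
# set of services it loaded; intersecting the reports reveals the services
# that are loaded together.

BASE = {"media service", "camera service", "camera HAL", "camera driver"}
reports = [BASE | {"AI service"}] * 8 + [BASE] * 2  # 8 of 10 uses trigger AI

# Services common to every launch in the period:
assert set.intersection(*reports) == BASE

# Among the launches that used AI, the intersection includes the AI service:
ai_reports = [r for r in reports if "AI service" in r]
assert set.intersection(*ai_reports) == BASE | {"AI service"}
```

In the method described here, the cloud performs this kind of aggregation over the reports associated with one user account, rather than the phone itself.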
For example, the cloud may periodically compile statistics on the user habit information. As described above, the cloud associates the received pieces of user habit information with the user account. The cloud can analyze the received user habit information on a periodic basis (for example, every 3 days; the period can be set according to actual requirements, and the present application is not limited in this regard) to obtain the habits of the user corresponding to the user account for different applications, and further obtain the services to be preloaded in each application scene.
For example, figs. 16a to 16b show the user habit results analyzed by the cloud based on the received user habit information. Referring to fig. 16a, as described above, the cloud optionally receives multiple pieces of user habit information for the user's use of the camera application within a period. The cloud can analyze the pieces of user habit information corresponding to the camera application in the period to obtain the usage probability of each service in the camera application scene. As shown in fig. 16a, for this user, in the scene where the camera application is used within a period, the probability that the media service is called is 98%, the camera service 87%, the camera hardware abstraction layer 87%, the camera driver 87%, the audio service 50%, the audio hardware abstraction layer 59%, the audio driver 50%, the AI service 60%, service a 20%, service b 10%, service c 5%, service d 4%, and service e 2%. It should be noted that, in the embodiments of the present application, the probability corresponding to a service (which may also be referred to as the probability that the service is called, or its usage probability) is optionally the ratio of the number of times the service is called to the number of times the application is called. For example, if the camera application is called 100 times in a period (for example, three days), the media service is called 98 times, and the AI service is called 60 times, then the probability that the media service is called is 98% and the probability that the AI service is called is 60%.
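The ratio just described can be written out directly. The following illustrative Python snippet uses the counts from the example in the text; the dictionary structure is an assumption made for illustration.

```python
# Illustrative sketch: usage probability = (times the service was called)
# / (times the application was called) within the statistics period.

app_calls = 100  # camera application called 100 times in the period
service_calls = {"media service": 98, "AI service": 60}

usage_probability = {svc: n / app_calls for svc, n in service_calls.items()}
print(usage_probability["media service"])  # 0.98, i.e., 98%
```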
The above values are only illustrative examples and the present application is not limited thereto.
Referring to fig. 16b, the exemplary cloud analyzes the multiple pieces of user habit information corresponding to the wallet application in a period to obtain the usage probability of each service in the wallet application scene. As shown in fig. 16b, for this user, in the scene where the wallet application is used within a period, the probability that the media service is called is 80%, the camera hardware abstraction layer 80%, the camera driver 80%, the code scanning service 80%, the ride service 20%, the payment service 90%, the graphic code service 90%, and service m 10%. It should be noted that the names and numbers of the services and the corresponding probabilities shown in the embodiments of the present application are merely illustrative examples, and the present application is not limited thereto.
After the cloud has obtained the services to be loaded in each application scene of the user and their corresponding usage probabilities (also referred to as the probability of being called), the cloud can check whether the usage probability of each service corresponding to each application is greater than or equal to a set threshold. For example, the set threshold may be 60%; this value is merely illustrative, may be set according to actual requirements, and the present application is not limited thereto. Optionally, in the embodiments of the present application, a service whose usage probability reaches the set threshold may be referred to as a preload service. For example, continuing with fig. 16a, the cloud counts the probability of each service in the camera application scene and determines the media service, the camera hardware abstraction layer, the camera driver, and the AI service as the preload services corresponding to the camera application. The cloud counts the probability of each service in the wallet application scene and determines the media service, the camera hardware abstraction layer, the camera driver, the code scanning service, the payment service, and the graphic code service as the preload services corresponding to the wallet application.
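The threshold check can be sketched as follows. This is an illustrative Python example: the probabilities are the camera-application values from fig. 16a and the 60% threshold is the example value above; the data structures are assumptions.

```python
# Illustrative sketch: a service becomes a preload service when its usage
# probability is greater than or equal to the set threshold (60% here).

THRESHOLD = 0.60

camera_usage = {
    "media service": 0.98, "camera service": 0.87,
    "camera hardware abstraction layer": 0.87, "camera driver": 0.87,
    "audio service": 0.50, "audio hardware abstraction layer": 0.59,
    "audio driver": 0.50, "AI service": 0.60,
    "service a": 0.20, "service b": 0.10, "service c": 0.05,
    "service d": 0.04, "service e": 0.02,
}

preload_services = {s for s, p in camera_usage.items() if p >= THRESHOLD}
print(sorted(preload_services))
```

Under this rule the AI service qualifies at exactly 60% while the audio hardware abstraction layer (59%) does not; the camera service (87%) is also selected alongside the four services named in the example.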
The cloud sends the counted preload services corresponding to each application to the mobile phone. For example, referring to fig. 17, the cloud sends first preload service information and second preload service information to the mobile phone, where the first preload service information indicates the preload services (which may also be referred to as the preload service set) corresponding to the camera application scene, and the second preload service information indicates the preload services corresponding to the wallet application scene. It should be noted that the embodiments of the present application take only the camera application and the wallet application as examples. In other embodiments, other application scenes on the mobile phone may refer to the solutions in the above embodiments; the details are not repeated in the present application.
In an exemplary embodiment of the present application, the mobile phone receives and records the preload services corresponding to the camera application scene and the preload services corresponding to the wallet application. Optionally, in the embodiments of the present application, what the mobile phone sends to the cloud for each application scene may be the identification information of each service, for example the service name; the present application is not limited in this regard. Correspondingly, the cloud optionally returns to the mobile phone the identification information, such as the service names, of the preload services. The mobile phone may record the identification information of the preload services corresponding to each application.
For example, the preloaded services recorded by the mobile phone for the camera application scenario include, but are not limited to: the media service, the camera hardware abstraction layer, the camera driver, and the AI service. The preloaded services recorded for the wallet application scenario include, but are not limited to: the media service, the camera hardware abstraction layer, the camera driver, the code scanning service, the payment service, and the graphic code service.
In an exemplary embodiment of the present application, after the mobile phone starts a corresponding application, the mobile phone may load the preloaded services recorded for that application scenario in advance, so as to improve the response speed of the application. For example, still referring to fig. 3, the user clicks the camera application icon control 302. As shown in fig. 4, the mobile phone displays a camera preview interface 401 in response to the received user operation; for a detailed description, reference may be made to the related content above, which is not repeated here.
With reference to fig. 3 and fig. 4, fig. 18 is a schematic diagram illustrating exemplary module interactions. Referring to fig. 18, after the camera application is started, it may, for example, look up the preloaded services stored in the mobile phone for the camera application scenario. For example, the preloaded services include, but are not limited to: the media service, the camera hardware abstraction layer, the camera driver, and the AI service. Accordingly, the camera application invokes the above preloaded services. That is, the AI service has completed loading before the user clicks AI option 404. Referring to (1) of fig. 19, the user clicks the AI option 404, for example. Referring to (2) of fig. 19, the mobile phone performs AI identification and AI processing on the image through the AI service in response to the received user operation. For a specific description, reference may be made to the above, which is not repeated here. It should be noted that, in the embodiment of the present application, because the mobile phone has preloaded the AI service, that is, the AI service has been loaded before the user clicks the AI option, the AI service may directly perform AI identification on the image when the user clicks the AI option. That is, from the user's perspective, the interval from the user clicking the AI option until the image is AI-processed (i.e., the response duration) may be only 200 ms. It should be noted that this numerical value is only illustrative, and the present application is not limited thereto.
Similarly, for the wallet application, the mobile phone starts the wallet application in response to the received user operation of clicking the wallet application icon. After the wallet application is started, it preloads a plurality of services, including the code scanning service, based on the preloaded services stored in the mobile phone for the wallet application scenario. Correspondingly, after the user clicks the code scanning option, the code scanning service can immediately respond to the user operation and display the scan interface, thereby improving the response speed of the application service.
The preloading schemes in the above embodiments are all described by taking a single application as an example. That is, after an application is started, the application may preload at least one service corresponding to the application to increase its response speed. The embodiment of the present application further provides a linked preloading scheme to improve the response speed in application switching scenarios. For example, the user optionally switches to a chat application after taking a picture using the camera application. The user may share photos taken by the camera application in the chat application. Illustratively, the chat application optionally needs to invoke at least one service of the camera application (e.g., service set A) during the sharing of the photos. In this scenario, the mobile phone may send user habit information to the cloud, where the user habit information indicates the user habit corresponding to the camera application scenario (i.e., the service set invoked when the camera application runs) and the user habit corresponding to the chat application scenario (i.e., the service set invoked when the chat application runs). Optionally, the user habit information may also indicate the switching relationship between the camera application and the chat application. That is, the cloud may determine, based on the user habit information, that the user often switches to the chat application after using the camera application. In addition, it may be further determined, based on the user habit information, that after the mobile phone switches from the camera application to the chat application, the chat application invokes a part of the services of the camera application (i.e., service set A).
For example, the cloud may receive multiple pieces of user habit information sent by the mobile phone within a period. The cloud may determine the switching relationship between different applications based on the multiple pieces of user habit information. For example, according to the application scenario statistics, within three days, the user switched to the chat application 80% of the time after using the camera application. In addition, during the running of the chat application, some of the services invoked by the chat application whose usage frequency exceeds a set threshold are also invoked by the camera application. That is, in the process of determining the associated services that need to be preloaded between applications, the cloud in the embodiment of the present application may determine whether a service is an associated service that needs to be preloaded based on the following conditions:
1) Within the period, the frequency of switching from application A to application B is greater than a set threshold (which may be set according to actual requirements; the present application is not limited thereto).
2) The services invoked by application B partially overlap the services invoked by application A. Moreover, for application B, the usage probability of the overlapping services is greater than a set threshold (e.g., the threshold of the preloaded service set above, such as 60%).
When a part of the services of application A (for example, service set A) meets the above conditions, that part of the services is also included in the preloaded service set corresponding to the application A scenario. Accordingly, the cloud sends the preloaded services corresponding to the camera application scenario to the mobile phone, where the preloaded services include service set A and other services, such as the AI service.
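Conditions 1) and 2) above can be sketched as a single check. This is a hypothetical illustration; the function name, threshold values, and service names are assumptions introduced here, not part of the patent.

```python
# Hypothetical check of the two conditions for cross-application preloading:
# 1) the switch frequency from application A to application B exceeds a threshold;
# 2) the services shared with A have a usage probability in B above the preload threshold.
SWITCH_THRESHOLD = 0.60
USAGE_THRESHOLD = 0.60

def associated_preload_services(switch_freq_a_to_b, services_of_a, usage_prob_in_b):
    if switch_freq_a_to_b <= SWITCH_THRESHOLD:  # condition 1) not met
        return set()
    return {svc for svc in services_of_a        # condition 2): overlapping services...
            if usage_prob_in_b.get(svc, 0.0) > USAGE_THRESHOLD}  # ...above threshold

# The camera -> chat example: 80% switch rate, chat reuses two camera services.
service_set_a = associated_preload_services(
    0.80,
    {"media_service", "camera_hal", "camera_driver", "ai_service"},
    {"media_service": 0.90, "camera_hal": 0.75, "share_service": 0.95},
)
```

In this example, only the media service and the camera hardware abstraction layer satisfy both conditions and would be added to the camera application's preloaded service set.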
For example, after the camera application is started, service set A, the AI service, and other services may be preloaded. Thus, when the user takes a picture using the camera application and wants to share it through the chat application, the user opens the chat application and triggers the sharing function (e.g., clicks a share button). The chat application can respond to the received user operation, invoke service set A, and execute the picture sharing flow through each service in service set A. Because service set A is preloaded, the response time from the user triggering the sharing function of the chat application to the pop-up of the sharing interface is short, which can effectively shorten the time the user waits for the services to start and improve the user experience.
It should be noted that the embodiment of the present application is described by taking only one user as an example. In other embodiments, the technical solution in the embodiments of the present application may be applied to any user. For example, assuming the user is user A, the cloud may determine the preloaded services corresponding to each application scenario used by user A based on the user habit information sent by user A's mobile phone. For user B, who optionally uses a camera application and a wallet application, user B's mobile phone may send user B's habit information to the cloud, that is, the user habit information corresponding to the camera application scenario (i.e., the services loaded in the camera application scenario) and the user habit information corresponding to the wallet application scenario (i.e., the services loaded in the wallet application scenario). The cloud can determine the preloaded services corresponding to the camera application scenario and the wallet application scenario based on the received user habit information of user B. Optionally, the preloaded services corresponding to user B's camera application scenario may be the same as or different from those corresponding to user A's camera application scenario. Likewise, the preloaded services corresponding to user B's wallet application scenario may be the same as or different from those corresponding to user A's wallet application scenario. For example, when using the camera application, user B rarely uses the AI service and often uses the beauty service. Accordingly, the cloud determines that the usage probability of the AI service in user B's camera application scenario is smaller than the set threshold, while the usage probability of the beauty service is greater than the set threshold.
Thus, the preloaded services corresponding to user B's camera application include, but are not limited to, the media service, the beauty service, and the like, and do not include the AI service. The cloud sends the preloaded services corresponding to user B's camera application scenario to user B's mobile phone. When user B triggers the camera application, the camera application is started and loads the corresponding services, such as the media service, the camera service, and the beauty service, based on the preloaded services sent by the cloud. Accordingly, when the user clicks the beauty option, because the beauty service is preloaded, the time the user waits for the service to start can be effectively shortened, improving the user experience.
In one possible implementation, the cloud may also count the habits of multiple users. For example, within a period (e.g., three days), the cloud obtains 10 pieces of user habit information corresponding to the camera application from each of 100 users (i.e., the cloud receives 1,000 pieces of user habit information corresponding to the camera application in total). The cloud can analyze each user's habits in the camera application scenario; the analysis process is described above and is not repeated here. The cloud can then analyze the usage probability of each service of the camera application by combining the user habit information of the 100 users. For the specific analysis, reference may be made to the above. For example, the cloud may obtain the usage probability of each service of the camera application and thereby determine the corresponding preloaded service set of the camera application (to distinguish it from a single user's preloaded service set, it may be referred to in this example as the global preloaded service set). For example, the analysis results are still as shown in fig. 16a, i.e., most of the 100 users invoke the media service, the camera hardware abstraction layer, the camera driver, the AI service, and the like when using the camera application. The cloud may record the analysis results (the global preloaded service set corresponding to the camera application). For example, when a new user registers with the cloud, because the cloud has not yet acquired that user's habit information, the cloud may send the global preloaded service set to the new user. After the new user's mobile phone starts the camera application in response to the received user operation, the corresponding services can be loaded based on the received global preloaded service set.
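The multi-user aggregation can be sketched as follows. The record format (one set of invoked services per recorded run) and the threshold are illustrative assumptions, not the patent's actual data model.

```python
# Illustrative sketch of building the global preloaded service set: count, over
# all users' habit records, the fraction of camera-application runs in which
# each service was invoked, and keep the services at or above the threshold.
from collections import Counter

GLOBAL_THRESHOLD = 0.60

def global_preload_set(habit_records):
    """habit_records: one set of invoked services per recorded application run."""
    counts = Counter(svc for record in habit_records for svc in record)
    total = len(habit_records)
    return {svc for svc, n in counts.items() if n / total >= GLOBAL_THRESHOLD}

# Hypothetical records from several runs across different users.
records = [
    {"media_service", "camera_hal", "camera_driver", "ai_service"},
    {"media_service", "camera_hal", "camera_driver"},
    {"media_service", "camera_hal", "camera_driver", "ai_service", "audio_service"},
    {"media_service", "camera_hal", "camera_driver", "ai_service"},
]
```

Here the AI service appears in 3 of 4 runs (75%) and is kept, while the audio service appears in only 1 of 4 (25%) and is excluded from the global set.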
It should be noted that because this set is not derived from the new user's own habits, the new user's mobile phone can count the services invoked by the camera application during actual use and send the user habit information to the cloud. The cloud can then obtain the preloaded service set for the camera application scenario corresponding to this user according to that user's habit information, and send the updated preloaded service set to the new user's mobile phone. When the camera application is started, the new user's mobile phone can invoke the corresponding services based on the newly acquired preloaded service set.
In another possible implementation, as described above, the cloud may record the correspondence between the user account and the preloaded services in each application scenario. Optionally, if the user logs in to the account on another mobile phone, that mobile phone may obtain from the cloud the preloaded services in each application scenario corresponding to the user account. After starting an application in response to a received user operation, the mobile phone can invoke the preloaded services corresponding to that application based on the preloaded services of each application scenario. Details not described here can be found above and are not repeated. Optionally, the mobile phone may send a request message to the cloud to obtain the preloaded service information. Optionally, after the mobile phone logs in to the user account, the cloud may actively send the preloaded service information to the mobile phone.
In yet another possible implementation, the cloud may periodically update the recorded correspondence between the user account and the preloaded services of different applications. For example, in the first period, the cloud determines, based on multiple pieces of user habit information sent by user A, that the preloaded services corresponding to the camera application include: the media service, the camera hardware abstraction layer, the camera driver, the AI service, and the like. Correspondingly, the cloud may send indication information to user A's mobile phone, instructing it to load the media service, the camera service, the camera hardware abstraction layer, the camera driver, the AI service, and other services when the camera application is started. In response to the received indication information, the mobile phone starts the camera application after receiving the user operation and loads the media service, the camera hardware abstraction layer, the camera driver, and the AI service in the process of starting the camera application. For example, in the second period, the cloud determines, based on multiple pieces of user habit information sent by user A, that the preloaded services corresponding to the camera application include: the media service, the camera hardware abstraction layer, the camera driver, the flash service, the filter service, and the like. That is, during the second period, user A may not use the AI service each time (or in most cases) the camera application is used, but instead starts the flash service and the filter service by clicking the flash option and the filter option. Correspondingly, the cloud may send indication information to user A's mobile phone, instructing it to load the media service, the camera hardware abstraction layer, the camera driver, the flash service, the filter service, and other services when the camera application is started.
In response to the received indication information, the mobile phone starts the camera application after receiving the user operation and loads the media service, the camera hardware abstraction layer, the camera driver, the flash service, the filter service, and the like in the process of starting the camera application. For example, when the user clicks a filter option, the already started filter service can perform the corresponding processing on the image, such as a rendering operation to add a filter. Illustratively, in other embodiments, when the user launches the camera application, the camera application loads multiple preloaded services, including the filter service, and automatically adds filters to the image. Optionally, in the embodiment of the present application, the mobile phone may store the preloaded services (including the keep-alive services in the following embodiments) corresponding to different applications in the memory. For example, after receiving the preloaded services of the camera application sent by the cloud, the mobile phone may update the stored preloaded services corresponding to the camera application, for example, by deleting the stored preloaded services and storing the new ones. Within the period, the mobile phone can load the corresponding services based on the updated preloaded services corresponding to the camera application.
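The delete-and-replace update on the mobile phone can be sketched as a simple mapping replacement. The store layout and application key are hypothetical, introduced only for illustration.

```python
# Illustrative sketch of the periodic update on the mobile phone: the stored
# preloaded-service set for an application is replaced wholesale by the newly
# received set (old entries are deleted, not merged).
preload_store = {
    "camera": {"media_service", "camera_hal", "camera_driver", "ai_service"},
}

def update_preload(store, app, new_services):
    store[app] = set(new_services)  # delete the old set, store the new one
    return store

# Second-period update: the AI service drops out; flash and filter come in.
update_preload(preload_store, "camera",
               {"media_service", "camera_hal", "camera_driver",
                "flash_service", "filter_service"})
```

After the update, starting the camera application loads the flash and filter services instead of the AI service, reflecting the second-period habits described above.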
It should be noted that, in the embodiment of the present application, the camera application starts the AI service, but the AI service is only preloaded and does not yet perform AI processing on the image. Only after the user clicks the AI option does the already loaded AI service directly perform AI processing on the image. In other embodiments, after the camera application determines that the AI service is included in the preloaded services, the image may also be processed by the AI service as soon as the AI service is loaded. That is, the AI service is changed from the default-off state to the default-on state: after the user opens the camera application, the camera application can start the AI service and perform AI processing on the image through the AI service.
The embodiment of the present application further provides a keep-alive scheme for application services, which can selectively keep part of the services of a closed application running according to the user's usage habits. In the embodiment of the present application, the cloud can determine the keep-alive services corresponding to each application scenario based on the user habit information sent by the mobile phone.
In one example, the cloud may treat services whose usage probability is greater than a set threshold (e.g., 60%, which may be the same as or different from the set threshold of the preloaded services; the present application is not limited thereto) as keep-alive services. For example, referring to fig. 16a and taking the camera application scenario as an example, the cloud may determine that the keep-alive services corresponding to the camera application scenario are the media service, the camera hardware abstraction layer, the camera driver, the AI service, and the like. The cloud sends the keep-alive services corresponding to the camera application scenario to the mobile phone. It should be noted that the cloud may send a single piece of indication information to the mobile phone that includes both the preloaded services and the keep-alive services corresponding to the camera application scenario. Alternatively, the cloud may send first indication information and second indication information to the mobile phone, where the first indication information includes the preloaded services corresponding to the camera application scenario and the second indication information includes the keep-alive services (one service or a plurality of services) corresponding to the camera application; the present application is not limited thereto. In response to the received keep-alive services corresponding to the camera application scenario, the mobile phone saves the correspondence between the camera application and the keep-alive services. If the user triggers the camera application, the mobile phone can load the corresponding services based on the preloaded services corresponding to the camera application described above. After the user finishes using the camera application, the user closes it, and the camera application determines that it needs to shut down in response to the received user operation.
The camera application acquires the keep-alive services corresponding to the camera application stored in the mobile phone, including, for example, the media service, the camera hardware abstraction layer, the camera driver, and the AI service. The camera application closes the processes corresponding to the services outside the keep-alive set. For example, referring to fig. 16a, the services currently opened by the camera application include, but are not limited to: the media service, the camera hardware abstraction layer, the camera driver, the AI service, the audio service, the audio hardware abstraction layer, the audio driver, and the like. Based on the acquired keep-alive services corresponding to the camera application, the camera application closes the processes corresponding to the audio service, the audio hardware abstraction layer, the audio driver, and the like, and exits. Therefore, when the user starts the camera application again, because part of the services of the camera application have been kept open, the startup duration of the camera application can be shortened, achieving a soft-start effect and effectively improving the user experience.
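The shutdown path just described reduces to a set difference. A minimal sketch, with illustrative service names:

```python
# Minimal sketch of the keep-alive shutdown path: on exit, the application
# closes every running service that is not in its recorded keep-alive set.
def services_to_close(running_services, keep_alive_services):
    return running_services - keep_alive_services

# Services open during the camera-application run (fig. 16a example).
running = {"media_service", "camera_hal", "camera_driver", "ai_service",
           "audio_service", "audio_hal", "audio_driver"}
# Keep-alive services recorded for the camera application scenario.
keep_alive = {"media_service", "camera_hal", "camera_driver", "ai_service"}

to_close = services_to_close(running, keep_alive)
```

With these inputs, only the audio service, audio hardware abstraction layer, and audio driver are closed; the rest survive the exit and enable the soft start described above.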
Optionally, the cloud may further count the usage frequency of applications. For example, assuming that within a period (e.g., three days) the usage frequency of the camera application is greater than a set threshold (e.g., 50%) and the usage frequency of the video application is less than the set threshold, the cloud may not count keep-alive services for the video application. That is, for applications with low usage frequency, the keep-alive mechanism in the embodiment of the present application may be skipped to reduce memory occupation.
In another example, as described above, there may be associated services between different applications; for example, application A may invoke part of the services of application B, which may also be understood as application A using part of the same services as application B. Unlike the scenario described in the above embodiment, in which application A preloads the services associated with application B (e.g., service set A described above), in this embodiment the cloud may determine, based on the user habit information, the keep-alive services corresponding to an application, that is, the services that remain running after the application is closed. For example, during the use of application A, the services invoked by application A form service set C. The mobile phone starts application B in response to a received user operation. During each run of application B within the period (the usage probability of the services and the application is not limited by the present application), a part of the services in service set C (e.g., service subset H) is invoked. Illustratively, application B is started within 2 hours after application A is closed. For example, the user starts application B within 2 hours after application A is closed, or while application A is running. The mobile phone also starts application C in response to a received user operation. During each run of application C within the period, a part of the services in service set C (e.g., service subset I) is invoked. Illustratively, the user starts application C more than 2 hours after application A is closed (e.g., 5 hours later, or one day later). Optionally, service subset H and service subset I may be the same or different.
The mobile phone sends user habit information to the cloud, where the user habit information optionally indicates the services invoked in the application A scenario (i.e., service set C), as well as the services invoked in the application B scenario and the application C scenario. The user habit information also indicates the start-up intervals between application A and application B and between application A and application C. Based on the user habit information, the cloud may determine that within the period there are overlapping services between the services invoked by application A and application B, and that the start-up interval between application A and application B is less than or equal to a set time threshold (e.g., 3 hours). Accordingly, the cloud may determine that the services in service subset H are keep-alive services of application A. The cloud sends keep-alive service information to the mobile phone, indicating the keep-alive services corresponding to application A. Accordingly, when application A determines, in response to a received user operation, that it needs to be closed, application A may close the processes corresponding to the services other than service subset H based on the keep-alive services (e.g., including service subset H) corresponding to application A. After the mobile phone starts application B in response to a received user operation, because service subset H is in the alive state, application B can directly invoke service subset H when needed, thereby shortening the service startup response time.
Based on the user habit information, the cloud also determines that within the period there are overlapping services between the services invoked by application A and application C. However, the average start-up interval between application C and application A (e.g., 5 hours) is greater than the set time threshold. Thus, service subset I does not belong to the keep-alive services of application A. That is, if service subset I were kept alive after application A is closed, application C would typically not invoke it until 5 hours later; keeping service subset I alive throughout that interval would occupy memory and affect the performance of the mobile phone.
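The contrasting decisions for application B and application C can be sketched as follows. The 3-hour window and the function shape are assumptions made for illustration; only the outcome (subset H kept, subset I not) comes from the example above.

```python
# Hypothetical sketch of the interval-based keep-alive decision: a shared
# service subset is kept alive only when the follow-on application typically
# starts within the set time window after the first application closes.
KEEP_ALIVE_WINDOW_HOURS = 3.0

def keep_alive_subset(shared_services, avg_start_interval_hours):
    if avg_start_interval_hours <= KEEP_ALIVE_WINDOW_HOURS:
        return set(shared_services)  # e.g. service subset H (application B case)
    return set()                     # e.g. service subset I (application C case)

subset_h = keep_alive_subset({"svc_1", "svc_2"}, 2.0)  # application B: ~2 h interval
subset_i = keep_alive_subset({"svc_3"}, 5.0)           # application C: ~5 h interval
```

Services with a long average start-up interval are deliberately excluded so they do not sit in memory unused, matching the performance reasoning above.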
The following illustrates a specific application. As described above, before the preloaded services of the camera application are updated, each start of the camera application loads its corresponding preloaded services, which include, but are not limited to: the media service, the camera hardware abstraction layer, the camera driver, the AI service, and the like. For example, within a period (e.g., three days), the user launches the camera application 10 times, and each launch loads multiple services, including the AI service. After using the camera application, the user closes it, and each time the camera application closes all services, including the AI service. Illustratively, after each use of the camera application, i.e., within 2 hours after the camera application is closed, the mobile phone starts the gallery application (which may also be another application, such as a drawing application or a document scanning application) in response to a received user operation, and during each run of the gallery application within this period, the mobile phone receives an operation of the user clicking the AI option in the gallery application. In response to the received user operation, the mobile phone starts the AI service and performs AI processing on the images in the gallery through the AI service. For a specific description, reference may be made to the above, which is not repeated here.
For example, within the period, the mobile phone sends multiple pieces of user habit information to the cloud, indicating that the gallery application was used 10 times, that the AI service was invoked each time, that the AI service is a preloaded service of the camera application, and that the usage interval between the camera application and the gallery application is 2 hours. For example, the user habit information sent to the cloud by the mobile phone may record the start and close times of the camera application and the start and close times of the gallery application. Accordingly, based on the received pieces of user habit information, the cloud may count that the camera application starts the AI service each time, that the gallery application invokes the AI service each time, and that the usage interval between the gallery application and the camera application (i.e., the difference between the time the camera application is closed and the time the gallery application is started) is less than a set threshold (e.g., 2 hours). Accordingly, the cloud may determine, based on the received pieces of user habit information, that within the period the keep-alive services of the camera application include, but are not limited to, the AI service. The cloud may send the keep-alive service of the camera application, i.e., the AI service, to the user's mobile phone. Illustratively, the mobile phone receives the keep-alive service of the camera application. In response to a received user operation, the mobile phone closes the camera application; when the camera application is closed, all services started during its operation except the keep-alive service (the AI service), such as the media service, the camera hardware abstraction layer, and the camera driver, are closed.
Thus, when the mobile phone starts the gallery application in response to a received user operation, the AI service is already in the alive state. Correspondingly, when the user clicks the AI option in the gallery application, the already loaded AI service can directly perform AI processing on the images in the gallery, reducing the time spent waiting for the AI service to start and shortening the response time.
The embodiment of the present application takes the cloud as an example of the execution body that analyzes the user habit information and determines the preloaded services and keep-alive services corresponding to an application. In other embodiments, the mobile phone may also perform the steps performed by the cloud. For example, the mobile phone may periodically obtain user habit information and analyze it to obtain the preloaded services and keep-alive services corresponding to each application. The specific implementation details are the same as the relevant steps executed by the cloud and are not repeated here.
It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware and/or software modules that perform the respective functions. The present application can be implemented in hardware or a combination of hardware and computer software, in conjunction with the example algorithm steps described in connection with the embodiments disclosed herein. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application in conjunction with the embodiments, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In one example, fig. 20 shows a schematic block diagram of an apparatus 2000 according to an embodiment of the present application. The apparatus 2000 may include a processor 2001 and a transceiver/transceiving pin 2002, and optionally a memory 2003.
The various components of the apparatus 2000 are coupled together by a bus 2004, where the bus 2004 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are all referred to as the bus 2004 in the figures.
Optionally, the memory 2003 may be used to store the instructions of the foregoing method embodiments. The processor 2001 may be used to execute the instructions in the memory 2003, and to control the receive pin to receive signals and the transmit pin to transmit signals.
The apparatus 2000 may be the electronic device in the above method embodiments, or a chip of that electronic device.
For all relevant details of the steps in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules; they are not repeated here.
The present embodiment also provides a computer storage medium storing computer instructions which, when run on an electronic device, cause the electronic device to execute the above related method steps to implement the application scheduling method in the above embodiments.
The present embodiment also provides a computer program product which, when run on a computer, causes the computer to perform the above related steps to implement the application scheduling method in the above embodiments.
In addition, embodiments of the present application also provide an apparatus, which may specifically be a chip, a component, or a module, and which may include a processor and a memory coupled to each other. The memory is used to store computer-executable instructions, and when the apparatus runs, the processor may execute the computer-executable instructions stored in the memory, so that the chip performs the application scheduling method in the above method embodiments.
The electronic device, computer storage medium, computer program product, or chip provided in this embodiment is used to execute the corresponding method provided above; for its beneficial effects, reference may be made to the beneficial effects of the corresponding method, which are not repeated here.
Those skilled in the art will appreciate that, for convenience and brevity of description, the division into the above functional modules is merely illustrative. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and a part shown as a unit may be one physical unit or multiple physical units; it may be located in one place or distributed across multiple places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in hardware or in the form of a software functional unit.
Any content of the various embodiments of the present application, and any content within the same embodiment, may be freely combined. Any such combination falls within the scope of the present application.
If the integrated units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a readable storage medium. Based on this understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative and not restrictive. Inspired by the present application, those of ordinary skill in the art may devise many further forms without departing from the spirit of the present application and the scope protected by the claims, all of which fall within the protection of the present application.
The steps of a method or algorithm described in connection with the present disclosure may be implemented in hardware, or in software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Alternatively, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC.
Those skilled in the art will appreciate that, in one or more of the above examples, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates the transfer of a computer program from one place to another. A storage medium may be any available medium accessible by a general-purpose or special-purpose computer.

Claims (13)

1. An application scheduling method, comprising:
acquiring a plurality of pieces of user habit information corresponding to a first application in a terminal device; wherein the user habit information is used for indicating service information called by the terminal device when the first application is run; and the user habit information is further used for indicating a second application associated with the first application in the terminal device, wherein a frequency of switching to the second application when the terminal device runs the first application is greater than a second threshold;
analyzing a service calling relationship corresponding to the first application according to the plurality of pieces of user habit information;
determining a first associated service corresponding to the first application according to the service calling relationship corresponding to the first application, so that the terminal device preloads the first associated service when the first application is started;
wherein determining a first associated service corresponding to the first application according to the service calling relationship corresponding to the first application comprises: calculating, according to the service calling relationship corresponding to the first application, a called probability of each service; and taking a service whose called probability meets a first threshold as a first associated service corresponding to the first application;
the method further comprises:
if a probability of a first target service being called by the second application in the terminal device is greater than a third threshold and the first target service is a service called by the first application when running, taking the first target service as a first associated service corresponding to the first application.
2. The method as recited in claim 1, further comprising: determining a second associated service corresponding to the first application according to the service calling relationship corresponding to the first application, so that the terminal device continues to run the second associated service when exiting the first application.
3. The method of claim 2, wherein determining a second associated service corresponding to the first application according to the service calling relationship corresponding to the first application comprises:
calculating, according to the service calling relationship corresponding to the first application, a called probability of each service;
and taking a service whose called probability meets a fourth threshold as a second associated service corresponding to the first application.
4. The method according to claim 2, wherein the user habit information is further used for indicating a third application associated with the first application in the terminal device, wherein the terminal device starts the third application in the process of running the first application, or the terminal device starts the third application within a preset period of time after exiting the first application;
the method further comprises:
if a second target service is a service called by the third application when running and the second target service is a service called by the first application when running, taking the second target service as a second associated service corresponding to the first application.
5. The method of claim 2, wherein determining a second associated service corresponding to the first application according to the service calling relationship corresponding to the first application comprises:
if a use frequency of the first application in the terminal device meets a preset condition, determining the second associated service corresponding to the first application according to the service calling relationship corresponding to the first application.
6. The method of claim 1, further comprising, after determining a first associated service corresponding to the first application:
updating the first associated service corresponding to the first application based on a plurality of pieces of user habit information corresponding to the first application that are re-acquired in the terminal device, so that the terminal device preloads the updated first associated service when the first application is started.
7. The method of claim 2, further comprising, after determining a second associated service corresponding to the first application:
updating the second associated service corresponding to the first application based on a plurality of pieces of user habit information corresponding to the first application that are re-acquired in the terminal device, so that the terminal device continues to run the updated second associated service when exiting the first application.
8. The method as recited in claim 1, further comprising:
acquiring user habit information corresponding to the first application in a plurality of terminal devices, and analyzing an overall service calling relationship corresponding to the first application according to the user habit information;
determining a third associated service corresponding to the first application according to the overall service calling relationship corresponding to the first application, so that a target terminal device preloads the third associated service when the first application is started;
wherein the target terminal device has no user habit information corresponding to the first application.
9. The method as recited in claim 8, further comprising:
determining a fourth associated service corresponding to the first application according to the overall service calling relationship corresponding to the first application, so that the target terminal device continues to run the fourth associated service when exiting the first application.
10. The method of claim 1, wherein the first application is a camera application or a wallet application;
a first associated service corresponding to the camera application, comprising at least one of: media services, camera hardware abstraction layer, camera drivers, AI services;
a first associated service corresponding to the wallet application, comprising at least one of: media services, camera hardware abstraction layer, camera drivers, code scanning services, payment services, graphics code services.
11. The method of claim 2, wherein the first application comprises a camera application;
a second associated service corresponding to the camera application, comprising at least one of: media services, camera hardware abstraction layer, camera drivers, AI services.
12. An electronic device, comprising:
one or more processors;
a memory;
and one or more computer programs, wherein the one or more computer programs are stored in the memory and, when executed by the one or more processors, cause the electronic device to perform the method of any one of claims 1-11.
13. A computer readable storage medium comprising a computer program, characterized in that the computer program, when run on an electronic device, causes the electronic device to perform the method of any of claims 1-11.
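As a rough sketch of the cross-application rule recited in claim 1 (a second application the user frequently switches to can contribute first associated services), with the thresholds, names, and data shapes all illustrative assumptions rather than the patent's implementation:

```python
# Assumed stand-ins for the claim's "second threshold" (switch frequency
# to the second application) and "third threshold" (probability that the
# second application calls the candidate service).
SWITCH_THRESHOLD = 0.2
CALL_THRESHOLD = 0.5


def cross_app_associations(first_app_services, second_app_call_probs,
                           switch_frequency):
    """A service of a frequently-switched-to second application becomes a
    first associated service if the first application also calls it."""
    associated = set()
    if switch_frequency > SWITCH_THRESHOLD:
        for service, prob in second_app_call_probs.items():
            if prob > CALL_THRESHOLD and service in first_app_services:
                associated.add(service)
    return associated


extra = cross_app_associations(
    first_app_services={"media_service", "camera_hal"},
    second_app_call_probs={"media_service": 0.8, "payment_service": 0.9},
    switch_frequency=0.35,
)
# payment_service is excluded: the first application never calls it.
```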
CN202210898089.2A 2021-08-10 2021-08-10 Application scheduling method and electronic equipment Active CN115705241B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210898089.2A CN115705241B (en) 2021-08-10 2021-08-10 Application scheduling method and electronic equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110917469.1A CN113934519B (en) 2021-08-10 2021-08-10 Application scheduling method and electronic equipment
CN202210898089.2A CN115705241B (en) 2021-08-10 2021-08-10 Application scheduling method and electronic equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110917469.1A Division CN113934519B (en) 2021-08-10 2021-08-10 Application scheduling method and electronic equipment

Publications (2)

Publication Number Publication Date
CN115705241A CN115705241A (en) 2023-02-17
CN115705241B true CN115705241B (en) 2023-12-15

Family

ID=79274370

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110917469.1A Active CN113934519B (en) 2021-08-10 2021-08-10 Application scheduling method and electronic equipment
CN202210898089.2A Active CN115705241B (en) 2021-08-10 2021-08-10 Application scheduling method and electronic equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110917469.1A Active CN113934519B (en) 2021-08-10 2021-08-10 Application scheduling method and electronic equipment

Country Status (1)

Country Link
CN (2) CN113934519B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116679900B (en) * 2022-12-23 2024-04-09 荣耀终端有限公司 Audio service processing method, firmware loading method and related devices
CN116244008B (en) * 2023-05-10 2023-09-15 荣耀终端有限公司 Application starting method, electronic device and storage medium

Citations (8)

Publication number Priority date Publication date Assignee Title
CN101833465A (en) * 2010-04-23 2010-09-15 中国科学院声学研究所 Embedded system supporting dynamic loading operation of application programs
CN105893129A (en) * 2016-03-30 2016-08-24 北京小米移动软件有限公司 Processing method and device for application programs in terminal
CN106708617A (en) * 2016-12-23 2017-05-24 武汉斗鱼网络科技有限公司 Service-based application process keep-alive system and keep-alive method
CN108647055A (en) * 2018-05-10 2018-10-12 Oppo广东移动通信有限公司 Application program preloads method, apparatus, storage medium and terminal
CN109151216A (en) * 2018-10-30 2019-01-04 努比亚技术有限公司 Using starting method, mobile terminal, server and computer readable storage medium
WO2019223510A1 (en) * 2018-05-21 2019-11-28 Oppo广东移动通信有限公司 Application program preloading method and apparatus, storage medium, and mobile terminal
EP3575962A1 (en) * 2018-05-29 2019-12-04 Guangdong Oppo Mobile Telecommunications Corp., Ltd Method and device for preloading application, storage medium and intelligent terminal
CN112162796A (en) * 2020-10-10 2021-01-01 Oppo广东移动通信有限公司 Application starting method and device, terminal equipment and storage medium

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
WO2017000138A1 (en) * 2015-06-29 2017-01-05 Orange Method for controlling the execution of a program configurable into a disabled state and enabled state
CN109144676B (en) * 2017-06-15 2022-05-17 阿里巴巴集团控股有限公司 Self-starting detection method and device of application program and server
CN111279312B (en) * 2017-09-13 2022-05-27 优步技术公司 Alternative service paths for service applications
CN107748698A (en) * 2017-11-21 2018-03-02 广东欧珀移动通信有限公司 Start control method, device, storage medium and the terminal of application with broadcast mode
CN112527403B (en) * 2019-09-19 2022-07-05 荣耀终端有限公司 Application starting method and electronic equipment
US11544502B2 (en) * 2019-12-19 2023-01-03 Microsoft Technology Licensing, Llc Management of indexed data to improve content retrieval processing
CN111464690B (en) * 2020-02-27 2021-08-31 华为技术有限公司 Application preloading method, electronic equipment, chip system and readable storage medium
CN112527407B (en) * 2020-12-07 2023-09-22 深圳创维-Rgb电子有限公司 Application starting method, terminal and computer readable storage medium
CN112631679A (en) * 2020-12-28 2021-04-09 北京三快在线科技有限公司 Preloading method and device for micro-application

Non-Patent Citations (1)

Title
Research and application of the 掌上兴电 (Zhangshang Xingdian) mobile phone integrated service platform; Zhang Tao et al.; Green Science and Technology (绿色科技), No. 20, pp. 156-158 *

Also Published As

Publication number Publication date
CN113934519A (en) 2022-01-14
CN115705241A (en) 2023-02-17
CN113934519B (en) 2022-08-02

Similar Documents

Publication Publication Date Title
WO2020259452A1 (en) Full-screen display method for mobile terminal, and apparatus
CN110114747B (en) Notification processing method and electronic equipment
CN111095723B (en) Wireless charging method and electronic equipment
WO2021169337A1 (en) In-screen fingerprint display method and electronic device
CN114650363B (en) Image display method and electronic equipment
CN111913750B (en) Application program management method, device and equipment
CN113452945A (en) Method and device for sharing application interface, electronic equipment and readable storage medium
CN115705241B (en) Application scheduling method and electronic equipment
WO2021218429A1 (en) Method for managing application window, and terminal device and computer-readable storage medium
CN112532508B (en) Video communication method and video communication device
CN113542574A (en) Shooting preview method under zooming, terminal, storage medium and electronic equipment
WO2023000746A1 (en) Augmented reality video processing method and electronic device
CN114489469B (en) Data reading method, electronic equipment and storage medium
CN115022807B (en) Express information reminding method and electronic equipment
US20240114110A1 (en) Video call method and related device
CN116828100A (en) Bluetooth audio playing method, electronic equipment and storage medium
CN113407300A (en) Application false killing evaluation method and related equipment
CN114911400A (en) Method for sharing pictures and electronic equipment
CN116048831B (en) Target signal processing method and electronic equipment
CN115482143B (en) Image data calling method and system for application, electronic equipment and storage medium
CN116389884B (en) Thumbnail display method and terminal equipment
CN115529379B (en) Method for preventing Bluetooth audio Track jitter, electronic equipment and storage medium
CN116233599B (en) Video mode recommendation method and electronic equipment
CN114020186B (en) Health data display method and device
CN114490006A (en) Task determination method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant